One of the tools we will be using at Mid-Major Madness is the MRI, or Miraski Ratings Index.
Think of this as similar to the Sagarin ratings, or the Pomeroy ratings; the goal is a somewhat predictive rating of all of the Division 1 basketball teams.
How did it come about and what does it cover? That is what we are going to try to explain.
I went to school at Drexel, which meant I was forever frustrated about the lack of coverage that the Dragons received, and the lack of acknowledgment for any accomplishments on the court. This was even during the Malik Rose years, when Drexel was heading to the NCAA Tournament more often than not.
But it was before the Internet really took charge of how we watch, and discuss and learn about basketball. I didn't even know that the Sagarin ratings existed back then.
One night, I had too much time on my hands and started working on a ratings system for college basketball. The goal was to show that some of these mid-major teams (a term that didn't actually exist back then either) were just as good as, if not better than, some of the teams getting the majority of the discussion on television and in the newspaper. (You see, there were these things called newspapers...)
The result was what was to become the MRI, a ratings system for college basketball, focused on actual game results.
It took two years before the rankings were cemented in their current form, and outside of two seasons, I have collected data on every game played between two Division 1 teams over the last decade to create them.
The rankings were able to identify George Mason as a strong contender the year it made the Final Four. It knew about Kent State, Virginia Commonwealth and a host of other mid-majors that were able to make deep NCAA Tournament runs. I had my own little cheat sheet. I published the results, and despite the naysayers, it kept on predicting winners.
The gist is simple. Take a team's performance on the court and translate it into a number. That number is based on the team's winning percentage, their opponents, and how well they perform in each game.
Here are all the components laid out for you:
Winning percentage: Wins matter here. You have to win the games you play. I don't care if you faced the most difficult schedule in the world; if you don't win, or win only sparingly, how can I count you among the best teams? This piece does tend to tamp down some of the middling major-conference teams that seem to be in the bubble discussion every season, even though they shouldn't be.
Opponents: This is a combination of a team's opponents' winning percentage and those opponents' opponents' winning percentage. The goal was to come up with something that showed strength of schedule.
Rebound differential: You need size to succeed in the college ranks. You can sometimes get by with good shooting and little size (as West Virginia did for a few seasons), but more often than not, you need someone inside to either dominate on defense or get the easy shots. This stat is weighted across the entire landscape of Division 1 basketball, so you have to excel beyond the average team here to earn points. You can also lose points if you are worse at rebounding than the average team (see Northwestern).
Turnover differential: You need to hold onto the ball and take it away from your opponents. This stat is weighted similarly to rebounds.
Weighted margin of victory/loss: Early in the BCS years, the college football computer rankings used in that formula were required to eliminate margin of victory from their calculations. This, to me, seems absurd. It makes a difference whether you beat Syracuse by five or by 25. It makes a difference whether you beat Syracuse by five or Towson by the same margin. Heck, it means more that you beat Syracuse, period. But if you went by the BCS, both wins counted the same.
Not with the MRI. We use the margin of victory in a game and give teams a bonus based on the strength of the opponent, as measured by that opponent's winning percentage and their opponents' winning percentage.
You also lose points here for losses, based on the opponent's losing percentage (so you lose more for losing to a bad team than for being steamrolled by an undefeated squad).
Take all of the components and add them up for the final score.
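To make the structure of the components concrete, here is a rough sketch in Python. The actual MRI weights and scaling are not published, so every coefficient and field name below is a placeholder of my own invention; the shape of the calculation, not the numbers, is the point.

```python
def mri_score(team):
    """Combine the five MRI-style components into a single rating.

    `team` is assumed to be a dict of per-team values already computed
    from game results. All weights here are illustrative placeholders.
    """
    # Strength of schedule: opponents' winning pct blended with the
    # opponents' opponents' winning pct (blend ratio is a guess)
    sos = 0.67 * team["opp_win_pct"] + 0.33 * team["opp_opp_win_pct"]

    # Rebound and turnover differentials are measured against the
    # Division 1 average, so a below-average team loses points
    reb = team["reb_margin"] - team["d1_avg_reb_margin"]
    tov = team["tov_margin"] - team["d1_avg_tov_margin"]

    # Weighted margin of victory/loss: a win's margin earns a bonus
    # scaled by the opponent's strength; a loss is penalized by the
    # opponent's losing percentage, so losing to a bad team costs more
    margin = 0.0
    for game in team["games"]:
        if game["won"]:
            margin += game["margin"] * game["opp_strength"]
        else:
            margin -= game["margin"] * game["opp_losing_pct"]

    # Placeholder mix of the components; the real MRI's mix is not public
    return (team["win_pct"] + sos + 0.1 * reb + 0.1 * tov + 0.01 * margin)
```

A team that wins often, against good opponents, by comfortable margins, scores well on every term; a team that pads its record against weak schedules gets pulled back down by the schedule and margin terms.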
Over the last several years, I have also been tracking the performance of the ratings in predicting games straight up. The computer has been able to predict about 71 percent of all games correctly over that time, which is more than 30,000 games.
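Checking that accuracy is straightforward: for each game, the "prediction" is simply that the higher-rated team wins. A minimal sketch, with made-up ratings and results:

```python
def prediction_accuracy(ratings, games):
    """Fraction of games in which the higher-rated team won.

    `games` is a list of (winner, loser) name pairs; `ratings` maps
    a team name to its rating. Data below is purely illustrative.
    """
    correct = sum(1 for winner, loser in games
                  if ratings[winner] > ratings[loser])
    return correct / len(games)

# Hypothetical ratings and results, not actual MRI output
ratings = {"Drexel": 85.0, "Towson": 70.0, "George Mason": 90.0}
games = [("Drexel", "Towson"),
         ("George Mason", "Drexel"),
         ("Towson", "George Mason")]
print(prediction_accuracy(ratings, games))  # 2 of 3 picks correct
```

Run over a decade of results, this is the calculation behind the 71 percent figure above.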
In my mind, that is a pretty good percentage, and therefore marks the MRI as a pretty good indicator of overall strength of teams.
At Mid-Major Madness, we will use the MRI to help identify mid-major teams on the rise, those that are falling fast, and also those that might be in line for an at-large bid, or who should be considered for what seems to be an ever-expanding Tournament bubble each season.
If you have any questions on the formula or the methodology, please feel free to shoot them my way or leave them in the comments and we will see what we can do to answer them.