Friday, November 27, 2020

RPI RANKINGS BASED ON FALL GAMES: A LESSON ON HOW THE RPI WORKS

Although the NCAA has not published RPI rankings for the Fall, I have been generating them.  They are meaningless as ranks of teams, but they do help illustrate how the RPI works.

During the Fall, four conferences played conference schedules:  ACC, Big 12, SEC, and Sun Belt.  In addition, some teams from four other conferences played at least a few games: Conference USA, Southern, Southland, and Patriot.  All told, 58 teams played at least some games.

Here are the RPI rankings for the teams that played at least some games in the Fall:

Looking at Florida State and North Carolina in the #1 and #2 positions, you might think the rankings look pretty good.  As soon as you get to Arkansas State and Central Arkansas at #5 and #6, you can see the rankings are not good.

If you look only at the conferences that played conference schedules, you can see the distribution of teams, in order, is as follows:

ACC, ACC, SEC, Big 12, Sun Belt, SEC, ACC, Sun Belt, Big 12, ACC, SEC, Sun Belt, Sun Belt, ACC, Big 12, SEC, Big 12, ....

What this shows is that the teams are distributed through the rankings almost as if the four conferences were equal in strength.

Here is how the conferences stack up based on average RPI ratings and ranks:


The four conferences to look at in the table are the top four, the ones that played conference schedules.  The key one is the Big 12.  It played a full round robin, had no conference tournament, and ended with an average RPI rating of 0.5000.  This is important because it shows what the average RPI rating will be for any conference that plays a full round robin with no conference tournament and no non-conference games.
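The Big 12's 0.5000 average is not a coincidence; the arithmetic forces it.  As a minimal sketch, assuming the unadjusted RPI formula (0.25 x winning percentage + 0.50 x opponents' winning percentage + 0.25 x opponents' opponents' winning percentage, with each opponent's winning percentage computed excluding its games against the rated team) and ignoring the NCAA's bonus and penalty adjustments, a simulated full round robin among invented teams with random results always averages exactly 0.5000:

```python
import itertools
import random
from statistics import mean

random.seed(1)  # any seed gives the same averages
TEAMS = [f"T{i}" for i in range(10)]  # hypothetical 10-team conference

# Full single round robin: every pair plays once; 1 = win, 0.5 = tie, 0 = loss.
schedule = {t: [] for t in TEAMS}  # team -> list of (opponent, points)
for a, b in itertools.combinations(TEAMS, 2):
    pts = random.choice([1.0, 0.5, 0.0])
    schedule[a].append((b, pts))
    schedule[b].append((a, 1.0 - pts))

def wp(team, exclude=None):
    """Winning percentage, optionally excluding games against one opponent."""
    pts = [p for opp, p in schedule[team] if opp != exclude]
    return sum(pts) / len(pts)

def owp(team):
    """Average of each opponent's WP, excluding their games against `team`."""
    return mean(wp(opp, exclude=team) for opp, _ in schedule[team])

def oowp(team):
    """Average of each opponent's OWP."""
    return mean(owp(opp) for opp, _ in schedule[team])

def rpi(team):
    return 0.25 * wp(team) + 0.50 * owp(team) + 0.25 * oowp(team)

ratings = {t: rpi(t) for t in TEAMS}
print(f"average RPI: {mean(ratings.values()):.4f}")  # 0.5000, regardless of results
print(sorted({round(owp(t), 4) for t in TEAMS}))     # [0.5]: every team's OWP is 0.5
```

In a closed round robin every team's OWP and OOWP come out to exactly 0.5, so 75% of each rating is a constant and the ratings compress into a narrow band around 0.5000 no matter how strong the conference actually is.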

So, why do the ACC, SEC, and Sun Belt have different average ratings?  The ACC did not play a full round robin, some of its teams played extra games against each other, it had a conference tournament, and Pittsburgh played a number of games against teams from other conferences, winning all of them.  Because of the RPI structure, Pittsburgh's non-conference wins benefited all of the conference's teams' ratings, which accounts for the conference's average rating being slightly above 0.5000.  The SEC played less than a full round robin plus an extended conference tournament, and two teams missed a game, resulting in its average rating being slightly below 0.5000.  The Sun Belt played less than a full round robin, a conference tournament, and some non-conference games, accounting for its average rating being below 0.5000.
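The Pittsburgh effect can be shown with a deliberately stylized example.  Assuming the same unadjusted RPI formula (0.25 WP + 0.50 OWP + 0.25 OOWP) and two invented five-team conferences whose internal games are all ties, a single cross-conference win lifts every team in the winner's conference above 0.5000 and pushes every team in the loser's conference below it:

```python
from statistics import mean

# Two hypothetical five-team conferences; every internal game is a tie,
# plus one cross-conference game: A0 beats B0.
conf_a = [f"A{i}" for i in range(5)]
conf_b = [f"B{i}" for i in range(5)]

schedule = {t: [] for t in conf_a + conf_b}  # team -> list of (opponent, points)
for conf in (conf_a, conf_b):
    for i, x in enumerate(conf):
        for y in conf[i + 1:]:
            schedule[x].append((y, 0.5))  # tie
            schedule[y].append((x, 0.5))
schedule["A0"].append(("B0", 1.0))  # the lone non-conference game
schedule["B0"].append(("A0", 0.0))

def wp(team, exclude=None):
    pts = [p for opp, p in schedule[team] if opp != exclude]
    return sum(pts) / len(pts)

def owp(team):
    return mean(wp(opp, exclude=team) for opp, _ in schedule[team])

def oowp(team):
    return mean(owp(opp) for opp, _ in schedule[team])

def rpi(team):
    return 0.25 * wp(team) + 0.50 * owp(team) + 0.25 * oowp(team)

print("conference A average:", round(mean(rpi(t) for t in conf_a), 4))  # above 0.5000
print("conference B average:", round(mean(rpi(t) for t in conf_b), 4))  # below 0.5000
```

The real ACC numbers are messier, since its teams also had uneven schedules and a tournament, but the mechanism is the same: one team's non-conference wins raise its conference-mates' OWP and OOWP, moving the whole conference's ratings.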

This provides a great illustration of the fact that if conferences play only conference games, the RPI cannot rate or rank the conferences' teams properly in relation to teams from other conferences.  Rather, the RPI will distribute the teams from the different conferences relatively equally across all the ratings and rankings.  To correct for this, the RPI depends entirely on non-conference games.

This is a point to bear in mind as teams begin to fill out their schedules for play in the Spring:  In order for the RPI to be usable, teams will need to play a significant number of non-conference games.  In fact, based on studies I have done, they will need about half of their games to be non-conference, or the RPI will be impaired.

Some of the conferences, however, have already indicated they will be playing no non-conference games for the entire season -- so far, Mountain West and Ohio Valley.  Others are playing expanded conference schedules that will not allow them to come close to half of their games being non-conference -- so far, WAC will have 14 conference games and Summit will have 16.  And the Big South will play a full round robin of 9 conference games each, with a limit of 2 non-conference games.  The RPI will not work for ranking these conferences' teams in relation to teams from other conferences.

Even if teams from the other conferences end up coming close to half of their games being non-conference, another concern will be travel limitations.  Just as the RPI cannot rank teams from different conferences properly in relation to each other without enough non-conference games, it cannot rank teams from different geographical regions properly in relation to each other without enough out-of-region games.  Historically, 20% of games have been out-of-region, and even that really is not enough.  For the Fall, the number is 1.5%.  So this will be something else to watch for in the Spring.

The bottom line is that a big question for the Spring and the NCAA Tournament will be whether the RPI will be useful for Tournament at-large selections and seeds.  And, if it is not, an equally big question will be how the Women’s Soccer Committee will make its decisions.