Tuesday, September 13, 2022

SIMULATED END-OF-SEASON ARPI RANKS USING ACTUAL RESULTS THROUGH SEPTEMBER 11 AND SIMULATED RESULTS FOR FUTURE GAMES

In this post, I will show projected end-of-season Adjusted RPI ranks using actual results of games played through Sunday, September 11, and simulated results for future games.

But first, in response to a comment on my last set of rankings, some context about what these rankings represent.  In my August 19 post, I described how I produce the projected end-of-season ranks and provided a link to a very large Excel workbook that shows the process in detail.  As a brief summary:

1.  In advance of each season, I assign Adjusted RPI ranks and ratings to teams, based on historic trends.

2.  I then use the assigned ratings, adjusted for home field advantage, to predict the outcome of each game as a win, loss, or tie for each team.  Home field advantage is worth 0.0148 in rating terms, based on a study of all games since 2007.  All games with a rating difference of less than 0.0133 are predicted as ties, also based on that study.  (With the elimination of overtimes, this threshold likely will change, but I have not yet completed the study that will allow me to set a new one.)  A minimal sketch of this prediction rule appears just after this list.

3.  With all of the game results, I then apply the RPI formula to show what the end-of-season ratings and ranks would be with those results.  (This includes computing conference standings, using those standings to seed conference tournaments, and determining conference tournament results.)
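
As promised above, here is a minimal sketch of the prediction rule in step 2, written in Python.  The two constants are the figures quoted above; the function name, its structure, and the neutral-site flag are my own illustration, not the actual code behind the simulation, and I have assumed the tie margin applies after the home field adjustment.

    HOME_FIELD = 0.0148  # rating value of home field advantage (figure from step 2)
    TIE_MARGIN = 0.0133  # adjusted rating differences below this predict a tie

    def predict_outcome(home_rating, away_rating, neutral_site=False):
        """Predict 'home win', 'away win', or 'tie' from assigned ratings."""
        diff = home_rating - away_rating
        if not neutral_site:
            diff += HOME_FIELD  # give the home team its home field bump
        if abs(diff) < TIE_MARGIN:
            return "tie"
        return "home win" if diff > 0 else "away win"

Every game in the simulated season is scored this way, which is what makes it possible to run the RPI formula over a full season of results before the season is played.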

In my August 19 post, I showed both the pre-season ranks I assigned to teams and the final ranks that the underlying ratings produced when I applied them to all of the games for the entire season.  A main reason I did that was to show that there can be big differences between the pre-season ranks and the end-of-season ranks: even though the pre-season ranks determined every game outcome, the RPI formula, combined with team schedules, could end up giving teams end-of-season ranks very different from their actual strength (represented by the pre-season ranks).

This shows something important:  The RPI does not show how strong teams are (their pre-season ranks).  Rather, it shows how they performed as measured by their results against the teams they played (their end-of-season ranks).
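
For reference, the underlying RPI is a pure results formula: a team's rating is 25 percent its own winning percentage, 50 percent its opponents' winning percentage, and 25 percent its opponents' opponents' winning percentage, with a tie counting as half a win.  Here is a bare-bones sketch of those elements; it omits the bonus and penalty adjustments that produce the Adjusted RPI, as well as the detail that a team's own games are excluded when computing its opponents' winning percentages:

    def winning_pct(wins, losses, ties):
        """Winning percentage, with a tie counted as half a win."""
        games = wins + losses + ties
        return (wins + 0.5 * ties) / games if games else 0.0

    def basic_rpi(wp, owp, oowp):
        """Unadjusted RPI: 25% own results, 75% schedule-related elements."""
        return 0.25 * wp + 0.50 * owp + 0.25 * oowp

The weights make the point concrete: three quarters of a team's rating comes from whom it played and how those opponents did, which is exactly why schedules can pull a team's end-of-season rank away from its true strength.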

So if you have your own ranking of teams based on how strong you think they are, you have done something different from what the RPI does.  In terms of the NCAA Tournament, for which the Committee is mandated to use the RPI and not any other ranking system, this is a critical point.  The Committee, at least for at-large selections, does not make decisions based on how strong it thinks teams are.  Rather, it makes decisions based on how teams performed in the games they played.

The poster who prompted these comments used the West Coast Conference as an example, suggesting the WCC should have had more teams in the Top 35 than showed up there in my rankings last week.  Since the poster referred to the WCC, here is a table that illustrates what I have described above about team strength (as measured by assigned pre-season ranks) as compared to team performance (as measured by applying the RPI formula to team schedules):

[Table: WCC teams' Assigned Ranks, Pre-Season Simulated Ranks, and Current Simulated Ranks]
In this table, the Assigned Rank column shows the pre-season rank I assigned to each team, based on historic trends.  The Pre-Season Simulated Rank column shows the ranks of the teams after applying their ratings, and the ratings of their opponents, to all of their games to determine game outcomes (and doing the same thing for all other games over the course of the season).  Remember: within my system, the Assigned Ranks operate, in every game, as the teams' true strength.  Thus you can see very clearly that what the RPI shows, as represented by the Pre-Season Simulated Ranks, is not team true strength but rather team performance as measured by results in the games on their schedules (and also subject to unfortunate biases built into the RPI formula, which happen to have a negative effect on WCC rankings).

In the table, the Current Simulated Rank column shows team Adjusted RPI ranks, using the actual results of games played through September 11 and simulated results of games not yet played.  This interests me because it shows which teams, to date, have performed better or worse than their Assigned Ranks would have indicated, and which have performed just about in accord with their Assigned Ranks.  (Remember, for all future games the system still projects results based on the Assigned Ranks, so differences between the Current Simulated Ranks and the Pre-Season Simulated Ranks mostly reflect teams having performed differently than their Assigned Ranks indicated they would.  Another contributing factor can be other teams likewise having performed differently than expected and thus themselves having moved up or down in the rankings.)

It also is worth noting that if a team so far has performed significantly better or worse than expected, it seems at least somewhat likely that it will continue to outperform or underperform its Assigned Rank in future games, thus moving correspondingly farther up or down in the rankings than its Current Simulated Rank indicates.
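
If you want to run this comparison yourself, the check is simple.  Here is a sketch, assuming you hold the two rank columns as Python dictionaries keyed by team name; the function name and the ten-place threshold are mine, chosen only for illustration:

    def rank_movement(pre_season_sim, current_sim, threshold=10):
        """Flag teams whose current simulated rank differs from their
        pre-season simulated rank by at least `threshold` places.
        Assumes both dictionaries cover the same set of teams."""
        movers = {}
        for team, pre_rank in pre_season_sim.items():
            delta = pre_rank - current_sim[team]  # positive = moved up
            if abs(delta) >= threshold:
                movers[team] = delta
        return movers

A positive delta means a team has so far outperformed its assigned strength; a negative delta means it has underperformed it.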

With the above as context, here are the current projected end-of-season Adjusted RPI ranks using actual results of games played through September 11 and simulated results of future games:

[Table: projected end-of-season Adjusted RPI ranks]