Monday, July 28, 2025

2025 ARTICLE 7: 2025 PRE-SEASON PREDICTIONS AND INFORMATION, PART 1, ASSIGNED PRE-SEASON RANKS AND RATINGS

INTRODUCTION

All the teams have published their schedules -- almost: Akron has yet to publish its non-conference schedule and Delaware State has yet to publish at all, but unless those two teams play each other, we can determine their schedules from what the other teams have published.  So for practical purposes we have all the schedules.  This makes it possible to do pre-season predictions of where teams will end up at the end of the regular season, including conference tournaments.

Pre-season predictions involve a lot of assumptions that may or may not prove correct.  Also, my prediction method depends entirely on teams' rank histories.  So don't take the pre-season predictions too seriously.  On the other hand, this series of articles will have educational value, particularly about how the NCAA RPI works and about the importance of scheduling.  So I recommend not getting preoccupied with the details of the predictions but instead watching for what you can learn about the NCAA RPI and its interaction with teams' schedules.

ASSIGNED PRE-SEASON RANKS AND RATINGS

The first step in my process, which is the subject of this article, is to assign pre-season ratings and ranks to teams.  In essence, this is predicting teams' strength.

There are a lot of ways to predict team strength, some complex and some simple.  I predict strength using only teams' rank histories, without reference to changing player and coaching personnel.  There are others who do predictions using those kinds of detailed information about this year's teams -- conference coaches for their own conferences and, in the past but not currently, stats superstar Chris Henderson.  In the past, their predictions have been better than mine, but only slightly better.  So you can consider my predictions as somewhat crude but close to as good as you can get when making data-based predictions for the entire cast of teams.

When using only teams' rank histories, my analyses show that the best predictor of where teams will end up next year is the average of their last 7 years' ranks using my Balanced RPI, which is a modified version of the NCAA RPI that fixes its major defects.  There is a problem, however, with that predictor.  If a team has a major outlier year -- a much higher or lower rank than is typical -- using a 7-year average can significantly mis-rate that team.  On the other hand, if I use the median rank over the last 7 years, that problem goes away.  The median is a little less accurate as a predictor of where teams will end up next year, but not by much.  So for my predictor, I use teams' median Balanced RPI ranks over the last 7 years.
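The difference between the two predictors can be sketched in a few lines of Python. The team names and rank histories below are hypothetical, invented only to show why an outlier season distorts the 7-year average but not the 7-year median:

```python
from statistics import median

# Hypothetical Balanced RPI rank histories over the last 7 seasons
# (illustrative numbers only, not real teams or ranks).
rank_history = {
    "Team A": [3, 5, 2, 40, 4, 6, 5],      # one outlier season (rank 40)
    "Team B": [12, 10, 15, 11, 14, 13, 12],  # no outliers
}

for team, ranks in rank_history.items():
    mean_rank = sum(ranks) / len(ranks)
    median_rank = median(ranks)
    print(f"{team}: 7-year average {mean_rank:.1f}, 7-year median {median_rank}")
```

For Team A, the single rank-40 season drags the 7-year average to about 9.3, even though the team's typical rank is around 5; the median stays at 5. For Team B, with no outliers, the two predictors nearly agree.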

Once I have assigned teams' ranks, I then assign NCAA RPI ratings to the teams.  To do this, I have determined the historic average rating of teams at each rank level.  When I do this, however, I have to take into account the recent NCAA "no overtime" rule change and the 2024 NCAA RPI formula changes.  So when I determine historic average ratings, I use what past ratings would have been if the "no overtime" rule and 2024 NCAA RPI formula had been in effect, using the years 2010 to the present (but excluding Covid-affected 2020).
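The rank-to-rating step amounts to building a lookup table: for each rank level, average the (recomputed) historic ratings of teams that finished at that rank, then give each team the average rating for its assigned rank. A minimal sketch, using made-up rating values purely for illustration:

```python
from statistics import mean

# Hypothetical historic NCAA RPI ratings observed at each rank level,
# recomputed as if the "no overtime" rule and 2024 formula changes had
# applied (2010 to present, excluding Covid-affected 2020).  In practice
# there is one list entry per season and one dictionary entry per rank.
ratings_by_rank = {
    1: [0.7012, 0.6987, 0.7040],
    2: [0.6890, 0.6901, 0.6875],
    3: [0.6820, 0.6835, 0.6808],
}

# Average the historic ratings at each rank level to build the lookup.
avg_rating_at_rank = {rank: mean(vals) for rank, vals in ratings_by_rank.items()}

# A team with an assigned pre-season rank of 2 then gets that rank's
# historic average rating.
assigned_rating = avg_rating_at_rank[2]
print(f"Assigned rating for rank 2: {assigned_rating:.4f}")
```

Because two teams can share the same 7-year median rank, both would receive the same assigned rating under this scheme.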

This produces the following "strength" ranks and ratings for the teams.  You will note that no team has a #1 assigned rank and that some teams have the same assigned rank; this is because I am using teams' 7-year median ranks.

[Table: assigned pre-season "strength" ranks and ratings for all teams]

2 comments:

  1. I don’t understand using any historical data other than the previous year. If you use, say, the previous 4 years, you had many teams overachieving because of the extra Covid year. Fifth-year players were abundant then; not so much this year. I’d offer a better prediction model: last year's results, number of returning players, and ratings of incoming classes and transfers.

  2. Glad to get your comment. There are a number of ways to rank team strength, and yours would be one of them. It is somewhat similar to the very detailed method Chris Henderson used when he was doing pre-season rankings of teams within conferences. The conferences' coaches, on the other hand, do pre-season rankings for the teams within their conferences that probably are based more on their personal experiences playing the other conference teams, coupled with what they know about who has left those teams and who their newcomers are.

    What has surprised me is that when I have used my system in the past (slightly different but fairly close to what I've used this year), and when I then have compared my, Henderson's, and the coaches' pre-season rankings to teams' actual end-of-season rankings, the accuracies of our rankings have been very similar. Mine have lagged slightly, missing the actual rankings by about 0.2 more positions per year than the other two systems. In other words, using teams' past ranking histories is a much more accurate method than one would expect, with much more detailed systems performing only slightly better.

    Your mention of the 5-year eligibility due to Covid is interesting. Using the median over the last 7 years (not counting 2020) may help avoid impacts from any 5-year-related outliers, but the multi-year effects of the 5 years might have resulted in a different pattern. We'll see.
