Tuesday, September 30, 2025

2025 ARTICLE 20: RPI REPORTS AFTER WEEK 7 GAMES

Before showing the regular tables in these weekly reports, I will provide some details that should be sounding alarm bells for the Women's Soccer Committee.

What Is Needed for the NCAA RPI to Work Properly as a Rating System for Teams Across the USA?

In order for the NCAA RPI to properly rate teams across the country in a single rating system, there are a number of conditions that the "facts on the ground" must meet:

1.A.  The different regions of the country, within which teams play most of their games, must be of equal strength; or

1.B.  There must be a lot of out-of-region competition.  If teams played all their games only in their own geographic region, then each region's teams' ratings would be the same as each other region's teams' ratings, for practical purposes.  This would be true regardless of whether the regions were equal in strength.  Thus if the regions are of unequal strength, the NCAA RPI depends on there being enough out-of-region competition to get each region's teams properly rated in relation to teams from other regions.

2.  Nationally, teams' ratings are distributed in a bell curve fashion.  At one end of the bell curve are relatively few teams with very high ratings and at the other end are relatively few with very low ratings. As ratings approach the middle of the rating spectrum, there are more and more teams, with the most teams at the center of the bell curve. 

Because of the structure of the NCAA RPI formula, for it to properly rate teams across the country in a single system, each region must have a similar bell curve distribution, in other words similar proportions of teams at the ends and in the middle areas of the curve.  Put more simply, the levels of parity must be the same for the different regions.  If the regions have different levels of parity, then the structure of the NCAA RPI formula will cause it to underrate the more highly rated teams from higher-parity regions and overrate the more highly rated teams from lower-parity regions.

The Women's Soccer Committee made this even more of an issue in 2024 when it changed the NCAA RPI formula so that the formula's Winning Percentage element values ties as only 1/3 of a win rather than the previous 1/2 of a win.  The proportion of in-region ties in a region is a function of the level of parity in the region, in other words of the shape of the region's rating distribution bell curve.  If each region has the same level of parity as each other region -- which would be expected to result in the same proportion of ties -- then the change to 1/3 will affect the regions equally.  If the regions have different levels of parity, however, then the change will increase the NCAA RPI's discrimination against high-parity regions, since their teams will have more ties and thus will take a larger hit to their Winning Percentages.
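The effect of the tie-value change on the Winning Percentage element can be sketched as follows.  The two team records below are hypothetical, chosen to contrast a tie-heavy team (as in a high-parity region) with a team having few ties:

```python
# Winning Percentage element of the RPI, with the tie value as a parameter.
def winning_pct(wins, losses, ties, tie_value):
    games = wins + losses + ties
    return (wins + tie_value * ties) / games

# Tie-heavy team: 8-4-6
old = winning_pct(8, 4, 6, 1/2)   # ties as half a win
new = winning_pct(8, 4, 6, 1/3)   # ties as a third of a win
print(round(old - new, 4))        # 0.0556

# Few-tie team with a comparable record: 10-7-1
old = winning_pct(10, 7, 1, 1/2)
new = winning_pct(10, 7, 1, 1/3)
print(round(old - new, 4))        # 0.0093
```

With these records, the tie-heavy team loses six times as much Winning Percentage from the change as the few-tie team, which is how the change bears more heavily on teams from high-parity regions.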

Out-of-Region Competition

With few exceptions, teams play most of their games within the geographic regions in which they are located.  I have identified four regional groups of states.  The schools in the states in each region, as a group, play the majority or plurality of their games in their region.  You can find the states in each region as well as a map showing the regions at the RPI: Regional Issues page of the RPI for Division I Women's Soccer website.

The following table shows the percentages of games teams from each region played against teams from their own regions as well as against teams from the other regions for the period 2013 to 2024:




This year, there is a big change.  The following table shows the actual regional distribution of games played so far:


As you can see from comparing the two tables, each region's teams are playing a significantly higher percentage of games in region as compared to what they have played historically.

And, when I add in the games on teams' schedules but not yet played, I get the following table:


When you compare this table to the first one above, you can see that this year there is going to be a big decline in the percentage of out-of-region games: an 18.7% decline for teams from the Middle, a 28.2% decline for teams from the North, a 31.1% decline for teams from the South, and a 34.9% decline for teams from the West.
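These declines are relative changes in each region's out-of-region share of games.  A minimal sketch of the calculation, using a hypothetical region's shares (the real figures come from the tables above):

```python
# Relative decline in the out-of-region share of games.
def relative_decline(historical_share, current_share):
    return (historical_share - current_share) / historical_share

# e.g. a hypothetical region that historically played 30% of its games
# out of region but is scheduling only 21.5% this year:
print(round(100 * relative_decline(0.300, 0.215), 1))  # 28.3 (% decline)
```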

As I wrote above, in order for the NCAA RPI to function properly as a tool for rating teams across the nation, it must have enough cross-region games to get each region's teams properly rated in relation to teams from the other regions.  I will express an opinion here:  There will not be enough cross-region games this year to do that.

Is the Women's Soccer Committee aware of this problem?  Possibly.  If so, has it figured out how to deal with it?  I am doubtful.

Differences in In-Region Parity

As I also wrote above, the proportion of a region's in-region ties is a function of the level of parity within the region.

The following table shows the actual proportions of in-region ties so far this season, for the four regions:



And, the next table shows my predicted proportions of in-region ties when the entire season is completed.  It is likely the table slightly understates what the final proportions will be:


What these tables indicate is that when the final numbers are in, the South region likely will have a significantly lower proportion of in-region ties than the other regions.  The North seems likely to have the highest proportion, followed by the West and then the Middle.  In other words, the regions do not have the same levels of parity.  As a result, the NCAA RPI's ability to properly rate teams across the nation, already impaired by the decrease in out-of-region competition, will be further impaired by differences in in-region parity -- an impairment exacerbated by the Committee's 2024 tie-value change.

Is the Women's Soccer Committee aware of this problem?  Probably not.  If it is, has it figured out how to deal with it?  Probably not.

Consequences of the NCAA RPI Formula's Method of Computing Strength of Schedule

As I've written previously, because of the way the NCAA RPI formula is constructed, the ranks the formula assigns to teams are different from teams' ranks within the part of the formula that assigns them values as strength of schedule contributors to their opponents.
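To see how the two sets of ranks can diverge, here is a minimal sketch using the standard RPI element weights (0.25 own Winning Percentage, 0.50 opponents' WP, 0.25 opponents' opponents' WP).  Both teams and all numbers below are hypothetical:

```python
# Sketch of why a team's NCAA RPI rank can differ from its rank as a
# strength of schedule (SoS) contributor.  A team reaches an opponent's
# rating through its own WP (the opponent's OWP element, weight 0.50) and
# its opponents' WP (the opponent's OOWP element, weight 0.25), so its
# contributor value is in proportion (2*WP + OWP) / 3.

def rpi(wp, owp, oowp):
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

def sos_contribution(wp, owp):
    # value a team passes to its opponents' strengths of schedule
    return (2 * wp + owp) / 3

# Team A: big won-loss record against a weak schedule
a_wp, a_owp, a_oowp = 0.80, 0.45, 0.50
# Team B: modest record against a strong schedule
b_wp, b_owp, b_oowp = 0.60, 0.65, 0.55

print(rpi(a_wp, a_owp, a_oowp), sos_contribution(a_wp, a_owp))  # 0.5500, ~0.6833
print(rpi(b_wp, b_owp, b_oowp), sos_contribution(b_wp, b_owp))  # 0.6125, ~0.6167
```

Team B outranks Team A under the full formula, but A outranks B as an SoS contributor: the contributor calculation weights a team's own record at 2/3 while the full formula weights it at only 1/4, and that structural mismatch is the source of the rank gaps the example tables show.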

So that you can see how this causes a problem, I will use a current example that predicts how two teams will end up based on the actual results of all teams' games played so far and the predicted results of all teams' games not yet played:


Based on past history, teams in the Top 57 of the NCAA RPI are possible at large selections for the NCAA Tournament.  Teams ranked #58 and poorer never have gotten an at large selection.  Thus in this example, Lipscomb is a potential at large team and California is not.

As you can see, Lipscomb's opponents' NCAA RPI ranks and their ranks under the NCAA RPI formula as strength of schedule contributors are only four positions apart.  In other words, if the NCAA RPI ranks of Lipscomb's opponents are correct, then the NCAA RPI also is correctly measuring those opponents' contributions to Lipscomb's NCAA RPI strength of schedule and thus to its NCAA RPI rating.

On the other hand, that is not the case for Cal.  If the NCAA RPI ranks of Cal's opponents are correct, then the NCAA RPI is grossly understating (by 38 rank positions) those opponents' contributions to Cal's NCAA RPI Strength of Schedule and thus to its NCAA RPI rating.  In other words, the NCAA RPI is grossly underrating and underranking Cal.  It should be a candidate for an at large selection, but its NCAA RPI rank says it won't be.

How will the Women's Soccer Committee overcome this flaw in the NCAA RPI formula?  The most likely answer is, It won't.  The data reports the NCAA staff gives the Committee for use in the at large selection process contain a lot of data.  They do not, however, tell the Committee teams' ranks as strength of schedule contributors to their opponents.  Thus the Committee never is able to make the kind of comparison the above table allows.

Should the Committee ask the NCAA staff for SoS contributor rank information?  Yes.  Would the NCAA staff be likely to provide it, if asked?  I doubt it.  Is the Committee likely to find a satisfactory way to deal with this problem?  Probably not.

THIS WEEK'S TABLES

Below are the following reports, after completion of Week 7 of the season:

1.  Actual Current Ranks.  These are RPI reports based only on games already played.  Teams' actual ranks in these reports (and the ratings on which the ranks are based) exactly match those published by the NCAA at the NCAA's RPI Archive, and also those published at Chris Henderson's 2025 Division I College Women's Soccer Schedule website.  These reports also include teams' current KPI, Massey, and Balanced RPI ranks so you can see how the different rating systems compare.

2.  "Predicted" End-of-Season Ranks.  These are RPI reports based on the actual results of games already played PLUS predicted results of games not yet played.  The purpose of these reports is to give an idea of where teams might end up at the end of the regular season. The reports show both NCAA RPI and Balanced RPI ranks.

The result predictions for future games use teams' actual current NCAA RPI ratings as the basis for the predictions.  So these reports show where teams will end up if they all perform exactly in accord with their current NCAA RPI ratings.  As each week passes, the predictions come closer and closer to where teams will end up.

ACTUAL CURRENT RANKS

Here are the actual current NCAA RPI and Balanced RPI ranks for teams.  For an Excel workbook containing these data, use the following link: 2025 RPI Report Actual Results Only After Week 7.

NOTE:  If you use the link, you will see the workbook in a Google Sheets format, which will be difficult or impossible to read.  Rather than trying to use that workbook, take the following steps to download the workbook as an Excel workbook:

Click on File in the upper left.

In the drop down menu, click on Download.

In the drop down menu, click on Microsoft Excel (.xlsx).

This will download the workbook as an Excel workbook.

In the tables, be sure to note the differences between teams', conferences', and regions' NCAA RPI ranks and their ranks, within the NCAA RPI formula, as strength of schedule contributors to their opponents' ratings.  You also can see the same information for the Balanced RPI.

Also, for each of teams, conferences, and regions, these reports show current KPI and Massey ranks so you can compare them to the NCAA RPI and Balanced RPI ranks.

In the Teams table, the color coded columns on the left show, based on past history, the teams that are potential seeds and at large selections for the NCAA Tournament, given their NCAA RPI ranks at this point in the season.


Here are the actual current ranks for conferences:


And here are the current actual ranks for the regions.


"PREDICTED" END-OF-SEASON RANKS

Here are the predicted end-of-season NCAA RPI and Balanced RPI ranks for teams.  For an Excel workbook containing these data, use the following link: 2025 RPI Report After Week 7.

The color coded columns on the left show, based on past history, the teams that would be candidates for NCAA Tournament seed pods and at large positions if these were the final NCAA RPI ranks.


Here are the predicted end-of-season ranks for conferences:


And here are the predicted end-of-season ranks for the four geographic regions:



Monday, September 22, 2025

2025 ARTICLE 19: RPI REPORTS AFTER WEEK 6 GAMES

Below are the following reports, following completion of Week 6 of the season:

1.  "Predicted" End-of-Season Ranks.  These are RPI reports based on the actual results of games already played PLUS predicted results of games not yet played.  The purpose of these reports is to give an idea of where teams might end up at the end of the regular season. The reports show both NCAA RPI and Balanced RPI ranks.

The result predictions for future games use teams' actual current NCAA RPI ratings as the basis for the predictions.  So these reports show where teams will end up if they all perform exactly in accord with their current NCAA RPI ratings.  As each week passes, the predictions will come closer and closer to where teams will end up.

2.  Actual Current Ranks.  These are RPI reports based only on games already played.  Teams' actual ranks in these reports (and the ratings on which the ranks are based) exactly match those published by the NCAA at the NCAA's RPI Archive, with the exceptions mentioned below, and also those published at Chris Henderson's 2025 Division I College Women's Soccer Schedule website.  These reports also include teams' current KPI, Massey, and Balanced RPI ranks so you can see how the different rating systems compare.

There are two items of interest regarding the NCAA's current RPI ratings:

1.  For purposes of New Haven's rating, the NCAA is excluding from consideration its first two games of the season, which were losses to Brown and Stony Brook.  These games, however, are included for purposes of Brown's and Stony Brook's ratings.  This is something I have not seen before and I do not know why the NCAA has done it.

2.  The NCAA has imposed maximum penalty adjustments for ties and losses to non-Division 1 opponents.  Several years ago, there was a decision NOT to impose penalties for these games, and I am not aware of a decision to go back to imposing them.  I suspect the current imposition of the penalties is an error.  I have advised the NCAA of this and expect it will make a correction for future RPI publications.  This is not a big issue, as the teams receiving the penalties are unlikely to be contenders for NCAA Tournament participation.

"PREDICTED" END-OF-SEASON RANKS

Here are the predicted end-of-season NCAA RPI and Balanced RPI ranks for teams.  For an Excel workbook containing these data, use the following link: 2025 RPI Report After Week 6.

The color coded columns on the left show, based on past history, the teams that would be candidates for NCAA Tournament seed pods and at large positions if these were the final NCAA RPI ranks.

Of particular importance are the differences between teams' NCAA RPI ranks and their ranks, within the NCAA RPI formula, as strength of schedule contributors to their opponents' ratings.  While the NCAA itself publishes various sets of RPI-related data, it does not publish teams' ranks as strength of schedule contributors.  It seems likely they don't publish these ranks because they would expose a serious flaw within the RPI formula.


Here are the predicted end-of-season ranks for conferences:


And here are the predicted end-of-season ranks for the four geographic regions.  Note that at the right end of the table, the table shows the proportions of games for each region's teams that are in-region and out-of-region and the percentages of each region's in-region games that are ties (as an indicator of the level of in-region parity).


ACTUAL CURRENT RANKS

Here are the actual current NCAA RPI and Balanced RPI ranks for teams.  For an Excel workbook containing these data, use the following link: 2025 RPI Report Actual Results Only After Week 6.

As with the end-of-season reports, note the differences between teams' NCAA RPI ranks and their ranks, within the NCAA RPI formula, as strength of schedule contributors to their opponents' ratings.

Also, for each of teams, conferences, and regions, these reports show current KPI and Massey ranks so you can compare them to the NCAA RPI and Balanced RPI ranks.

In the Teams table, the color coded columns on the left show, based on past history, the teams that are potential seeds and at large selections for the NCAA Tournament.


Here are the actual current ranks for conferences:


And here are the current actual ranks for the regions.  Again, on the far right, you can see each region's distribution of games between in-region and out-of-region games as well as the percentage of each region's in-region games that have been ties.



Tuesday, September 16, 2025

2025 ARTICLE 18: RPI REPORTS AFTER WEEK 5 GAMES

Now that we have completed the fifth weekend of the season, each week I will publish two sets of reports:

1.  "Predicted" End-of-Season Ranks.  These are RPI reports based on the actual results of games already played plus predicted results of games not yet played, which are the kinds of reports I have published so far after each week of the season.  The purpose of these reports is to give an idea of where teams might end up at the end of the regular season. The reports show both NCAA RPI and Balanced RPI ranks.

New this week, however, the result predictions for future games use teams' actual current NCAA RPI ratings as the basis for predicting future results, rather than the assigned pre-season ratings used in my previous reports.  So these reports show where teams will end up if they all perform exactly in accord with their current NCAA RPI ratings. 

As an interesting note, the actual results of games played so far are slightly more consistent with the assigned pre-season ratings than with the actual current NCAA RPI ratings.

2.  Actual Current Ranks.  These are RPI reports based only on games already played.  Teams' actual ranks in these reports (and the ratings on which the ranks are based) exactly match those published by the NCAA at the NCAA's RPI Archive and also those published at Chris Henderson's 2025 Division I College Women's Soccer Schedule website.  These reports also include teams' current KPI, Massey, and Balanced RPI ranks so you can see how the different rating systems compare.

"PREDICTED" END-OF-SEASON RANKS

Here are the predicted end-of-season NCAA RPI and Balanced RPI ranks for teams.  For an Excel workbook containing these data, use the following link: 2025 RPI Report After Week 5.

The color coded columns on the left show, based on past history, the teams that would be candidates for NCAA Tournament seed pods and at large positions if these were the final NCAA RPI ranks.

Of particular importance are the differences between teams' NCAA RPI ranks and their ranks, within the NCAA RPI formula, as strength of schedule contributors to their opponents' ratings.  While the NCAA itself publishes various sets of RPI-related data, it does not publish teams' ranks as strength of schedule contributors.  It seems likely they don't publish these ranks because they would expose a serious flaw within the RPI formula.


Here are the predicted end-of-season ranks for conferences:



And here are the predicted end-of-season ranks for the four geographic regions.  Note that at the right end of the table, the table shows the proportions of games for each region's teams that are in-region and out-of-region and the percentages of each region's in-region games that are ties (as an indicator of the level of in-region parity).



ACTUAL CURRENT RANKS

Here are the actual current NCAA RPI and Balanced RPI ranks for teams.  For an Excel workbook containing these data, use the following link: 2025 RPI Report Actual Results Only After Week 5.

As with the end-of-season reports, note the differences between teams' NCAA RPI ranks and their ranks, within the NCAA RPI formula, as strength of schedule contributors to their opponents' ratings.

Also, for each of teams, conferences, and regions, these reports show current KPI and Massey ranks so you can compare them to the NCAA RPI and Balanced RPI ranks.


Here are the actual current ranks for conferences:


And here are the current actual ranks for the regions.  Again, on the far right, you can see each region's distribution of games between in-region and out-of-region games as well as the percentage of each region's in-region games that have been ties.



Tuesday, September 9, 2025

2025 ARTICLE 17: RPI REPORT AFTER WEEK 4 GAMES

Below are my weekly tables showing predicted end-of-season rankings for teams, conferences, and regions based on the actual results of games played through Week 4 of the season and predicted results of games not yet played.  We are about a third of the way into the season, so the numbers remain pretty speculative.  Please note that the tables include NCAA RPI, my Balanced RPI, and Massey rankings.  As soon as KPI rankings are available, most likely next week, they also will be in the tables.

For those with a serious interest in the numbers, I suggest you download the 2025 RPI Report After Week 4 Excel workbook, which should be easier to use than the tables below.

As I have written previously, in the tables, it is worthwhile to look at the differences between teams' NCAA RPI ranks and their ranks as Strength of Schedule contributors, both for the teams themselves and for their opponents.  If you spend time looking through these differences on the Teams, Conferences, and Regions tables, you should be able to see the NCAA RPI's discriminatory patterns.

TEAMS


CONFERENCES


REGIONS



Tuesday, September 2, 2025

2025 ARTICLE 16: RPI REPORT AFTER WEEK 3 GAMES

Below are my weekly tables showing predicted end-of-season rankings for teams, conferences, and regions based on the actual results of games played through Week 3 of the season (including Labor Day, September 1) and predicted results of games not yet played.  Since we still are not very far into the season, the numbers remain pretty speculative.  Please note that the tables include NCAA RPI, my Balanced RPI, and Massey rankings.  Later in the season when KPI rankings are available, they also will be in the tables.

For those with a serious interest in the numbers, I suggest you download the 2025 RPI Report After Week 3 Excel workbook, which should be easier to use than the tables below.

Something I am watching very closely is the percentages, by region, of in-region games that are ties.  I watch this because, as discussed previously, the NCAA RPI formula punishes regions with high levels of parity, one measure of which is the percentage of in-region games that are ties.  This is a problem the Committee made a little worse last year when it changed the value of ties in the RPI formula's calculation of a team's winning percentage from half a win to a third of a win.  You can find the percentage of in-region ties for each region on the Regions page at the far right of the table.  So far this year, it looks like the West and Middle regions will have extraordinarily high levels of in-region ties, significantly higher than for the North and South regions.

The other things to look at on the tables, at this point, are the differences between teams' NCAA RPI ranks and their ranks as Strength of Schedule contributors, both for the teams themselves and for their opponents.  If you spend time looking through these differences on the Teams and Conference pages, you should be able to see for yourself the NCAA RPI's discriminatory patterns.

TEAMS



CONFERENCES


REGIONS



Tuesday, August 26, 2025

2025 ARTICLE 15: RPI REPORT AFTER WEEK 2 GAMES

In this week's RPI Report, I want to point out two sets of data that the Women's Soccer Committee and coaches should be concerned about.  Following that discussion, I'll post the regular weekly team, conference, and region tables.  For those who want to go directly to the weekly tables, here is a link to this week's Excel workbook 2025 RPI Report After Week 2.  (For those seriously interested in the current RPI Report information, I recommend downloading the workbook in an Excel format rather than using the weekly tables as reproduced below.)

Issue of Concern #1: Distribution of In-Region Ties, by Region



This table shows, for each of the four regional playing pools, the percentage of in-region games that are ties.  The percentage is based on actual ties to date and tie likelihoods for games not yet played.  The percentages in the table may be a little higher than they will be at the end of the season, but the placement of the regions in the table is consistent with what the placement has been historically.

Here is the problem the Committee and coaches should be concerned about:

As I have demonstrated elsewhere, the NCAA RPI historically has discriminated against teams from the West region, meaning that in games against teams from other regions, the West region teams on average have performed better than their ratings say they should have performed, i.e., are underrated.  At the other end of the spectrum, the South region teams on average have performed more poorly than their ratings say they should have performed, i.e., are overrated.  This doesn't mean all teams from the regions are under- or overrated, rather that on average teams from the regions are under- or overrated.

This year, the Committee made an NCAA RPI formula change that relates to this:  In the computation of a team's Winning Percentage (half the effective weight of a team's NCAA RPI rating), the Committee reduced the value of a tie from half of a win to one-third of a win.  In other words, it devalued ties.  Since it is reasonable to assume that regions with higher parity will have a higher proportion of ties, the effect of the change is to devalue the ratings of teams from regions with a high level of parity.

Also as I have demonstrated elsewhere, the West region historically has had the highest level of parity, followed by the Middle, then the North, and then the South.  The historic proportion of in-region games that are ties is consistent with this, as are the proportions for this year in the above table.

Thus the effect of the change likely will be to devalue the ratings of teams from the West, followed by the Middle, to the benefit of teams from the North and especially from the South.  This means that with the NCAA RPI already discriminating against teams from the West, the Committee's change has made the discrimination even worse.

The extent to which the Committee's change has worsened the discrimination is not large, as indicated by the following table:


This table summarizes how the rating systems perform at rating teams from a region in relation to teams from the other regions.  It draws from the following data:

1.  For each region, its teams' actual winning percentages in games against teams from the other regions.

2.  For each region, its teams' expected winning percentages against teams from other regions based on teams' ratings as adjusted for home field advantage. [NOTE: Expected winning percentages are based on the rating differences between opponents as adjusted for home field advantage.  The expected winning percentages are calculated using result probability tables derived from the results of all games played since 2010 and are highly reliable when applied across large numbers of games.]

3.  The difference, for each region, between its actual winning percentage and its expected winning percentage.  Teams from regions with higher actual than expected winning percentages are outperforming their ratings, in other words are underrated (discriminated against), whereas teams from regions with lower actual than expected winning percentages are overrated.

In the table, the High column shows the actual v expected winning percentage difference for the region whose teams most outperform their ratings when playing teams from other regions -- the West region.  The Low column shows the difference for the region whose teams most underperform their ratings -- the South region.  The Spread column shows the difference between the High and Low, which is a measure of the extent of the rating system's discrimination among regions.  The Over and Under Total column shows the amount, for all four regions, by which the rating system misses a perfect match (no difference) between actual and expected winning percentage, which is another measure of the system's performance.

As you can see from the table's comparison of the two systems, although the difference between the systems' performance is small, the Committee's 2024 change increased the discrimination among regions.
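The table's summary measures can be sketched as follows.  The regional numbers below are hypothetical; the real ones come from comparing actual winning percentages in cross-region games with expected winning percentages derived from the result probability tables described above:

```python
# High, Low, Spread, and Over and Under Total, computed from hypothetical
# actual and expected cross-region winning percentages for the four regions.
actual   = {"West": 0.560, "Middle": 0.505, "North": 0.495, "South": 0.440}
expected = {"West": 0.520, "Middle": 0.500, "North": 0.500, "South": 0.480}

diffs = {region: actual[region] - expected[region] for region in actual}

high = max(diffs.values())                        # most underrated region
low = min(diffs.values())                         # most overrated region
spread = high - low                               # discrimination among regions
over_under_total = sum(abs(d) for d in diffs.values())

print(round(high, 3), round(low, 3), round(spread, 3), round(over_under_total, 3))
# with these hypothetical numbers: 0.04 -0.04 0.08 0.09
```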

So, what was the Committee's rationale for the change?  It gave two reasons: (1) Valuing ties as 1/3 of a win matches how conferences compute in-conference standings and how leagues compute standings in the larger soccer world; and (2) The Division 1 men have made the change.

Neither rationale holds water.  (1) For conferences and leagues, for in-conference and in-league standings, there is no issue of whether different playing pools have different levels of parity.  Thus how they count ties, when determining standings, is irrelevant to how to count ties in a national rating system for teams playing games in different geographic regions.  (2) That the men have made the change should be a reason ... Why?  That does not demonstrate in any way that it is a good change for the women.  The Committee should make its own decision based on how the change will affect the NCAA RPI as a rating system for the women's teams, not let the Men's Soccer Committee make the decision for them.

The question the Committee should have considered is whether the change makes the NCAA RPI a better system for rating teams properly in relation to each other when the teams are distributed around the nation and tend to play in different regional playing pools.  They should have asked the NCAA statistics staff about this, and the staff should have advised the Committee that if the geographic playing pools have different levels of parity, then the change will punish teams from the pools with high parity and benefit stronger teams from the pools with low parity.  Did the Committee ask the staff about this and did the staff give them this answer?  I don't know, but I doubt it.

Issue of Concern #2: Proportions of Out-of-Region Games

As I showed above, the NCAA RPI discriminates against some regions and in favor of others.  At one time, I thought this was due exclusively to differences in region strength and there not being a high enough proportion of out-of-region games for the system to work properly on a national basis.  As it turns out, however, both Massey's and my Balanced RPI rating systems show only a small amount of discrimination in relation to region strength, and much less than the NCAA RPI.  In other words, there are enough out-of-region games for those systems to avoid all but a little discrimination in relation to region strength.  So the NCAA RPI's problem has not been driven mainly by there not having been enough out-of-region games (rather, as it has turned out, it is driven by how the NCAA RPI computes Strength of Schedule).

On the other hand, it is indisputable that there have to be "enough" out-of-region games for any rating system to work on a national basis.  As the following tables show, no doubt due to the changing college sports economic landscape, this year there is a substantial reduction in the proportion of out-of-region games from what the proportion has been in the past (based on teams' published 2025 schedules, adjusted to take canceled games to date into account):


This represents a 28.0% across-the-board reduction in the proportion of out-of-region games.  For the Middle region, the reduction is 18.0%, for the North 28.5%, for the South 30.2%, and for the West 31.7%.  Given these reductions, a major question is whether they will significantly impair the ability of the NCAA RPI -- or any other rating system -- to properly rate teams on a national basis.  This is a question the Committee and coaches should be thinking about.

Weekly Team, Conference, and Region Tables

The following tables are based on the actual results of games played through Sunday, August 24, and predicted result likelihoods for games not yet played.

TEAMS



CONFERENCES



REGIONS




Tuesday, August 19, 2025

2025 ARTICLE 14: RPI REPORT AFTER WEEK 1 GAMES

In 2025 Article 7 and 2025 Article 8, I described how I assign pre-season NCAA RPI ratings and ranks to teams and then, assuming those ratings and ranks represent true team strength, apply them to teams' schedules to generate predicted end-of-season NCAA RPI ratings and ranks.  Once I have done that, at the end of each week of the season I replace that week's predicted results with games' actual results.  Then, using those actual results combined with predicted results for the balance of the season, I generate new predicted end-of-season NCAA RPI ratings and ranks.  After completing week 5 of the season, I will switch from using assigned pre-season NCAA RPI ratings and ranks as the basis for predicting future results to using the then actual NCAA RPI ratings and ranks as the basis.
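The weekly update process described above can be sketched as follows.  The function and data structure names are hypothetical, and the prediction rule is a placeholder (the real predictions use result likelihoods derived from rating differences):

```python
# Each week, that week's predicted results are replaced with the games'
# actual results; the rest of the season is filled in from ratings.
def predict(home_rating, away_rating):
    # placeholder rule: the higher-rated team is predicted to win
    return "home" if home_rating >= away_rating else "away"

def season_outcomes(schedule, actual, ratings, weeks_completed):
    """schedule: list of dicts with 'id', 'week', 'home', 'away';
    actual: game id -> 'home'/'away'/'tie' for games already played."""
    outcomes = {}
    for game in schedule:
        if game["week"] <= weeks_completed:
            outcomes[game["id"]] = actual[game["id"]]              # real result
        else:
            outcomes[game["id"]] = predict(ratings[game["home"]],
                                           ratings[game["away"]])  # predicted
    return outcomes
```

Rerunning this after each week, with `weeks_completed` incremented and the ratings recomputed, is what makes the predicted end-of-season ranks converge toward the actual ones.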

Using this process, the predicted end-of-season NCAA RPI ratings and ranks are very speculative at the beginning of the season.  However, as each week passes, they become progressively closer to what the actual end-of-season ratings and ranks will be.  By the last few weeks of the season, they become helpful when trying to figure out what results teams need in their remaining games in order to get particular NCAA Tournament seeds or at large selections.

Today's report shows where things are with Week 1's actual results incorporated into the end-of-season predictions.  The report has a page for teams, for conferences, and for geographic playing pool regions.  You can download the report as an Excel workbook with this link: 2025 Week 1 RPI Report.  The same information also is set out in tables below, but I recommend downloading the workbook as it likely will be easier to use.  (If using the tables below, scroll to the right to see additional columns.)

This year, an emphasis in these reports is on showing why the NCAA RPI, because of how it measures the opponents' strengths of schedule that it incorporates into its formula, discriminates against or in favor of particular teams, conferences, and regions.
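To see why strength of schedule dominates the formula, recall that the basic NCAA RPI weights a team's own winning percentage at 25% and the two strength-of-schedule elements (opponents' winning percentage and opponents' opponents' winning percentage) at a combined 75%.  A simplified sketch, omitting soccer-specific adjustments such as home/away weighting and bonus/penalty factors:

```python
# Simplified basic NCAA RPI: RPI = 0.25*WP + 0.50*OWP + 0.25*OOWP.
# Ties count as half a win.  This omits the adjustments actually used
# for Division I women's soccer (home/away weighting, bonus/penalty).

def win_pct(wins, losses, ties):
    games = wins + losses + ties
    return (wins + 0.5 * ties) / games if games else 0.0

def rpi(wp, owp, oowp):
    # The strength-of-schedule elements (owp + oowp) carry 75% of the
    # weight, which is why how the RPI measures opponents' strength
    # drives the ratings it produces.
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp
```

Because three quarters of the rating comes from the schedule-strength elements, any systematic bias in how those elements measure opponents' strength flows directly into teams', conferences', and regions' ranks.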

TEAMS

This page shows, for each team:

Team name

Geographic playing pool region

Conference

Whether the team is predicted to be its conference's NCAA Tournament automatic qualifier (AQ)

Whether the team is predicted to be disqualified from an NCAA Tournament at large selection due to having more losses than wins (1)

Team's 

NCAA RPI rank (based on past history, a key factor in selecting teams that will be in the NCAA Tournament #1 through #4 seed pods)

rank as a strength of schedule contributor to opponents under the NCAA RPI formula

Opponents'

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula

Conference opponents' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula

[NOTE: Teams have relatively little control over this part of their schedules.] 

Non-Conference opponents' 

average NCAA RPI rank

average rank as strength of schedule contributors under the NCAA RPI formula

[NOTE: Teams control this part of their schedules, to some extent.  Geographic factors such as travel expenses, available opponents, and other factors can be limiting considerations.]

NCAA RPI Top 50 Results Score

NCAA RPI Top 50 Results Rank (based on past history, a key factor in NCAA Tournament at large selections and in selecting teams that will be in the #5 through #8 seed pods)

Similar rank and strength of schedule contributor rank numbers under the Balanced RPI

KPI rank if available

Massey rank


 

CONFERENCES

This page shows, for each conference:

Conference name

Conference's NCAA RPI rank

Teams' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula 

Opponents' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula

Conference opponents' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula

Non-Conference opponents' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula

Conference's Non-Conference RPI rank 

Similar rank and strength of schedule contributor rank numbers under the Balanced RPI

KPI rank if available

Massey rank


 

REGIONS

This page shows, for each region:

Region name

Number of teams in region 

Region's NCAA RPI rank

Teams' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula 

Opponents' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula

Region opponents' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula

[NOTE: Due to budget limitations, teams may be compelled to play all or most of their non-conference games against opponents from their own geographic regions.] 

Non-Region opponents' 

average NCAA RPI rank 

average rank as strength of schedule contributors under the NCAA RPI formula

Similar rank and strength of schedule contributor rank numbers under the Balanced RPI

KPI rank if available

Massey rank

Regions' proportions of games played against teams from each region (NOTE: This year, the number of out-of-region games is down about 30% from past patterns.  This may result in a significant degradation of the NCAA RPI's already impaired ability to properly rate teams from a region in relation to teams from other regions.)

Proportion of in-region games that are ties (as a measure of in-region parity) (NOTE: The NCAA RPI, because of how it measures strength of schedule, on average discriminates against teams from higher-parity regions.)
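The tie-based parity measure in the last item can be computed directly from game results.  A minimal sketch with invented data:

```python
# Proportion of in-region games ending in ties, as a rough parity measure.
# Hypothetical results: each entry is (in_region: bool, tie: bool).
results = [(True, True), (True, False), (True, False), (False, True)]

in_region_ties = [tie for in_reg, tie in results if in_reg]
tie_share = sum(in_region_ties) / len(in_region_ties)
print(f"{tie_share:.0%} of in-region games were ties")
```

A higher tie share suggests the region's teams are closer together in strength, which is the sense of "parity" used throughout these reports.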