Thursday, January 29, 2026

2026 ARTICLE 5: POWER CONFERENCE v NOT POWER CONFERENCE GAME LOCATIONS, NUMBERS OF GAMES, AND OPPORTUNITIES TO GET GOOD RESULTS AGAINST HIGHLY RANKED OPPONENTS

This article will provide some data on the relationship between the Power 4 conferences and the 27 Not Power 4 conferences and point out a significant NCAA Division I Women's Soccer Committee issue related to Power 4 as compared to Not Power 4 candidates for NCAA Tournament at large positions.

First, some background:

1.  Power 4 conference teams, as a group, historically have played a high proportion of their games against Not Power 4 conference teams at Power 4 home fields.

2.  Whereas in the past it was fairly common for Power 4 conference teams playing Not Power 4 conference opponents to enter into home-away contracts, I am hearing from coaches that many, if not all, Power 4 conference teams no longer are allowed by their administrations (1) to enter into home-away contracts and (2) to travel to Not Power 4 sites for away games.  I also am hearing that when there are not home-away contracts, Power 4 conference teams have less money available to help cover Not Power 4 conference opponents' travel costs when those opponents travel to Power 4 sites.  This fits with information I have heard that, beginning in 2027, the first weekend of the season for Power 4 conference teams may consist of inter-Power 4 conference games, which would mean significant travel expenses for half the Power 4 conference teams every year.

3.  Power 4 conference teams, because their conferences are strong, of necessity play significant numbers of highly ranked opponents every year.  This will be even more the case if there is a future increase in inter-Power 4 conference competition.  Not Power 4 conference teams have far fewer opportunities to play highly ranked opponents and it looks like they may have even fewer in the future.  An effect of this is that Power 4 conference teams have more opportunities to achieve "good results" than do Not Power 4 conference teams and may have even more in the future.  This raises the question whether the NCAA Women's Soccer Committee has the necessary statistics sophistication to properly compare good results of Power 4 conference teams with good results of Not Power 4 conference teams.  A 2025 at large decision of the Committee, giving Power 4  Kentucky an at large position rather than Not Power 4 St. Mary's, will provide a good case study for this question.

POWER 4 HOME FIELD ADVANTAGE

I break down consideration of Power 4 conference home field advantage into three time periods:

2007 (the first year in my data base) to 2012

 2013 to 2023, with 2013 being the year of completion of significant changes in conference memberships, including but not limited to the split of the Big East into the Big East and American Athletic Conference, with some teams migrating to the ACC and others to the Big Ten.

2024 to the present, following the distribution of most Pac 12 teams among the ACC, Big Ten, and Big 12 and the shift from the Power 5 to the Power 4.

The following table shows, for each period, the proportion of Power versus Not Power conference games played at Power conference home fields (with Pac 12 teams counting as Power conference teams through 2023):


As you can see, the Power conferences over time have increased their home field advantage proportions in games against Not Power conference opponents.  If it is correct that starting in 2027 the season's opening weekend will have Power conference teams playing games against teams from other Power conferences, it seems likely Power conference teams will be even less willing to travel to Not Power opponents' sites for games.  Thus it seems likely the Power conference home field advantage will increase even more in the future, to the extent Not Power conference teams continue to play Power conference teams at all.

LIKELY DECREASE IN PROPORTION OF POWER VERSUS NOT POWER CONFERENCE GAMES

The following tables show the proportions of games that were Power versus Not Power for the three time periods:


As you can see, there has been a gradual decline in the proportions of Power versus Not Power games.  If the Power versus Power season-opening weekend happens in 2027 and continues into the future, it seems likely the proportion of Power versus Not Power games will decline even more.

It is important to note in the above table that on average Not Power conference teams currently play 93.3% of their games against other Not Power teams, meaning only 6.7% against Power teams.  This equates, on average, to Not Power conference teams playing about 1 game per year against Power teams (1.25 games per year, to be exact).

ABILITY OF POWER CONFERENCE TEAMS, AS COMPARED TO NOT POWER CONFERENCE TEAMS, TO PLAY HIGHLY RANKED OPPONENTS

Power conference teams are able to play significant numbers of highly ranked opponents simply by playing their conference regular season games.  If the Power versus Power season-opening weekend happens, the Power conference teams' opportunities to play highly ranked opponents will increase.  Conversely, the opportunities to play highly ranked opponents for strong teams from Not Power conferences mostly are in their non-conference games, and if the Power versus Power season-opening weekend happens, these opportunities are likely to decrease.  Further, with Power conference teams having decreased willingness to enter into home-away contracts and lesser ability to share travel costs for one-off home games against Not Power conference opponents, the opportunities of strong Not Power conference teams to play highly ranked opponents are likely to decrease even more.

This suggests that given the inequality between Power and Not Power teams in opportunities to play games against highly ranked opponents, how the Women's Soccer Committee evaluates results against highly ranked opponents will become increasingly important in the Committee's NCAA Tournament at large selection process.  The Committee's decision to give Kentucky an at large position in the 2025 NCAA Tournament, rather than St. Mary's, provides a good case study for this issue.

#42 St. Mary's, in the Not Power West Coast Conference, had an overall record of 12W/2L/4T. They played no Top 50 opponents in conference play.  They played 2 in non-conference play:  Lost away to #3 Stanford; and Won away against #12 Georgetown.

#50 Kentucky, in the Power SEC, had an overall record of 12W/4L/4T.  They played 6 Top 50 opponents in conference play and 2 in non-conference play.  They: Lost home to #41 Illinois; Lost home to #38 Ohio State; Lost away to #43 Georgia; Won home against #29 Alabama; Lost home to #4 Vanderbilt; Won home against #37 South Carolina; Tied away against #14 LSU; and tied neutral against #43 Georgia.

One way to look at the Top 50 results is to look at the numbers of good results.  St. Mary's had 1 good result.  Kentucky had 4, counting ties against Top 50 opponents as good results, as I would do.  If the number of good results against Top 50 opponents is the way to consider these games, Kentucky is the choice for the at large position.

On the other hand, St. Mary's played Top 50 opponents with an average rank of #7.5 and had a 0.500 winning percentage against them.  Kentucky played Top 50 opponents with an average rank of #31 and had a 0.375 winning percentage against them (counting a tie as half a win).  If the average rank of Top 50 opponents and the winning percentage against them is the way to consider these games, St. Mary's is the choice for the at large position.
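To make the two methods concrete, here is a minimal sketch, in Python, that computes both measures from the Top 50 games listed above (with a tie counted as half a win).  The game lists come straight from the results described in the two preceding paragraphs.

# Each game is (opponent_rank, result), result in {"W", "T", "L"}.
st_marys = [(3, "L"), (12, "W")]
kentucky = [(41, "L"), (38, "L"), (43, "L"), (29, "W"),
            (4, "L"), (37, "W"), (14, "T"), (43, "T")]

def evaluate(games):
    good_results = sum(1 for _, r in games if r in ("W", "T"))   # method 1
    avg_rank = sum(rank for rank, _ in games) / len(games)       # method 2a
    win_pct = sum(1.0 if r == "W" else 0.5 if r == "T" else 0.0
                  for _, r in games) / len(games)                # method 2b, tie = half win
    return good_results, avg_rank, win_pct

for name, games in [("St. Mary's", st_marys), ("Kentucky", kentucky)]:
    good, rank, pct = evaluate(games)
    print(f"{name}: good results = {good}, average Top 50 opponent rank = {rank:.1f}, "
          f"winning pct vs Top 50 = {pct:.3f}")

Running this reproduces the numbers above: 1 good result, a #7.5 average opponent rank, and a 0.500 winning percentage for St. Mary's; 4 good results, a #31 average opponent rank, and a 0.375 winning percentage for Kentucky.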

As you can see, which way the Committee considers Top 50 results is critical.  It is especially critical given the unequal opportunities of Not Power conference teams to play Top 50 opponents as compared to Power conference teams.  And it looks like it will become even more critical in the future.

Given the unequal opportunities to play Top 50 opponents, I believe the stronger argument is that the Committee should compare Power and Not Power conference candidates' Top 50 results using the latter method: rather than looking simply at numbers of good results, they should look at the average rank of a team's Top 50 opponents and the winning percentage against those opponents.  That method will take out of the equation the unequal opportunities to play Top 50 opponents.

Wednesday, January 28, 2026

2026 ARTICLE 4: ELITE PLAYERS LOOKING AT COLLEGES: POWER 4 ONLY OR TOP 67? THEY'RE NOT THE SAME!

I'm hearing from coaches of top teams that are not in Power 4 conferences that a lot of elite players coming out of high school are saying they are "Power 4 Only."  That is consistent with what I am seeing in social media, which treats going to a Power 4 school as having higher status than going to any other school.  From a quality of soccer perspective, this is dumb.

The best indicator of teams' likely future quality, as measured by their past performance, is their median rank over the last 7 years.  I've settled on this as an indicator after doing a comprehensive study of the relationship between teams' past performance and their future performance.

So, for players in the college selection process: there are 67 Power 4 conference teams, but they are not likely to be the top 67 teams when you are in college.  Your best bets, if you want to play for a team likely to be in the top 67, are listed below.  If you decide on a Power 4 team that is not on the list, don't fool yourself: Your decision is motivated by something other than the likely quality of the team you will be playing for.

The teams are in order, with the likely strongest at the top.  Thirteen of the 67 (almost 20%) are not Power 4 teams.



2026 ARTICLE 3: ANNUAL UPDATE ON HOW THE RATING SYSTEMS PERFORM - NCAA RPI, KPI, BALANCED RPI, AND MASSEY

There are four statistics-based rating systems for Division I women's soccer:

NCAA RPI

KPI (sometimes referred to as the Kevin Pauga Index although I think I've also seen the term Key Parameter Index)

Balanced RPI (my system, a modification of the NCAA RPI)

Massey Ratings

I evaluate how each rating system performs, using a series of performance measures.

For the NCAA RPI and Balanced RPI, I evaluate their performance since 2010, considering game results and calculating ratings as if there had been no overtimes throughout the period.  For the NCAA RPI, I calculate ratings based on the formula the NCAA currently uses.

For the KPI, I evaluate its performance since 2017, which is the first year it produced ratings for Division I women's soccer.

For Massey, I evaluate its performance since 2022.  Although Massey has done ratings for many years, his rating scale changed in 2022 (and also another time a few years earlier).  His rating system also may have changed but I don't know since it is proprietary, so to be sure the evaluation is of his current system I use only performance since 2022.  Since the Massey evaluation includes such a limited number of years, I consider it only a rough evaluation of the system.

Below, I will show a comparison table for each performance measure and then will explain what the table represents.

OVERALL

OVERALL PERFORMANCE


In each table, the systems are in order of NCAA RPI, KPI, Balanced RPI, Massey.  This is intentional.  For NCAA Tournament at large selection and seeding purposes, the Women's Soccer Committee uses the NCAA RPI as its rating system.  For the past few years, the NCAA has allowed the Committee to supplement the NCAA RPI with the KPI.  That is why those systems are first in order: they are what the Committee sees during its decision process.  The Committee is not allowed to use any other rating system.  As I go through the measures, you will see that the NCAA RPI's and the KPI's performances are relatively similar.  Although I do not know it for a fact, I suspect this is why the NCAA permits the Committee to use the KPI as a backup rating system.

In looking at the tables, you also will see that the Balanced RPI's and Massey's performances are quite similar to each other, but are dissimilar from the NCAA RPI's and the KPI's.  This is why the Balanced RPI and Massey are last in order.  Here too, although I do not know it for a fact, I suspect this dissimilarity from the NCAA RPI is why the NCAA did not choose either of those systems as the Committee's backup to the NCAA RPI.

The above table is based on looking at the opposing teams' ratings in each game, adjusting those ratings to take home field advantage into account, and determining the location-adjusted rating difference between the opponents.  It then looks to see whether the better rated team won, tied, or lost the game.  It does this for all games, then tallies the total wins, ties, and losses of the better rated teams, and then converts the totals to percentages of all games played.

Thus, using the NCAA RPI as an example, the better rated teams have won 65.1%, tied 21.1%, and lost 13.8% of their games.  Since the KPI and Massey use different time periods of data, they show different percentages of ties than each other and than the NCAA RPI and the Balanced RPI.  To accommodate this, I have added the green highlighted column, which shows the higher rated teams' win and loss percentages without reference to tie games.  I consider this the best basis for comparing how the rating systems perform.
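For readers who want to see the mechanics, here is a minimal Python sketch of the tally just described.  The size of the home field adjustment is a placeholder I made up for illustration; the actual adjustment depends on the rating system being evaluated.

HOME_BONUS = 0.0150   # hypothetical home field advantage, in rating points

def tally(games):
    """games: list of (home_rating, away_rating, result), where result is the
    home team's result: "W", "T", or "L".  Neutral-site games would skip the bonus."""
    wins = ties = losses = 0
    for home_rating, away_rating, result in games:
        home_is_better = home_rating + HOME_BONUS > away_rating   # location-adjusted comparison
        if result == "T":
            ties += 1
        elif (result == "W") == home_is_better:    # the better rated team won
            wins += 1
        else:                                      # the better rated team lost
            losses += 1
    total = wins + ties + losses
    return {"win %": wins / total, "tie %": ties / total, "loss %": losses / total,
            "win % excluding ties": wins / (wins + losses)}   # the green highlighted column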

As the table shows in the green highlighted column, the Balanced RPI's ratings are the most consistent with game results -- the better rated team wins 83.5% of the time, followed by the KPI (82.8%), Massey (82.6%), and the NCAA RPI (82.5%).

PERFORMANCE IN GAMES INVOLVING AT LEAST ONE TOP 60 TEAM


This table is similar to the first one above, but includes only games involving at least one Top 60 team.  This is to show how the rating systems perform when considering games involving teams that might be candidates for NCAA Tournament at large positions.

You can see that the Balanced RPI performs the best, followed by the KPI, the NCAA RPI, and Massey.

CONFERENCES

CONFERENCE TEAMS' ACTUAL RESULTS AS COMPARED TO THEIR EXPECTED RESULTS


This and the next two tables show how the rating systems perform at rating teams from a conference in relation to teams from other conferences.

The first step in this process is to determine, for each game, the exact location-adjusted rating difference between the two opponents.  Based on that difference, the next step is to calculate each team's win likelihood, tie likelihood, and loss likelihood -- its expected results -- using a Result Probability Table for the rating system I'm evaluating.  The third step is to compare a team's expected results to its actual results.  The last step is to combine the expected and actual results for each conference's teams to see how the conference's actual win-tie-loss results compare to its expected win-loss-tie results.

The Result Probability Tables for the rating systems are extremely accurate when applied to all games played.  In other words, the sums of the expected win-loss-tie results in all games match almost exactly the sums of the actual win-loss-tie results.  This makes it possible, by breaking games into identifiable groups, such as a conference's teams' games against non-conference opponents, to see whether the expected results match the actual results for those groups too.  If they don't match for a group of teams, it means the rating system has trouble rating the group's teams properly in relation to teams from other groups, resulting in the system overrating some groups of teams and underrating others.
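As an illustration of these steps, here is a minimal sketch in Python.  The rating-difference bins and the win/tie/loss probabilities are placeholders, not the actual Result Probability Table values, and the function is shown for a single group of games (for example, one conference's teams' non-conference games).

import bisect

# Placeholder Result Probability Table: bin edges of location-adjusted rating
# differences (team minus opponent) and win/tie/loss likelihoods for each bin.
BIN_EDGES = [-0.04, -0.02, 0.00, 0.02, 0.04]    # hypothetical bins
PROBABILITIES = [                               # (win, tie, loss), hypothetical values
    (0.15, 0.20, 0.65), (0.28, 0.24, 0.48), (0.40, 0.25, 0.35),
    (0.55, 0.24, 0.21), (0.70, 0.20, 0.10), (0.85, 0.10, 0.05),
]

def actual_less_expected(games):
    """games: list of (rating_difference, result) for one group of teams'
    games against outside opponents, with result "W", "T", or "L"."""
    expected = actual = 0.0
    for difference, result in games:
        p_win, p_tie, _ = PROBABILITIES[bisect.bisect(BIN_EDGES, difference)]
        expected += p_win + 0.5 * p_tie    # expected winning pct contribution, tie = half win
        actual += {"W": 1.0, "T": 0.5, "L": 0.0}[result]
    return (actual - expected) / len(games)    # positive = the group is underrated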

Applying this evaluation method to how each conference's teams do in non-conference games produces the following table for the NCAA RPI:


This table has the conferences arranged in order of their teams' average NCAA RPI ratings.  It shows, in the right-hand column, the amount by which each conference's teams' actual winning percentages are greater or less than their expected winning percentages.

The  NCAA RPI row in the table at the top of this section draws from the above table.  In the top table, the Conferences Non-Conference Actual Less Likely Winning Percentage, High column shows the difference amount for the Conference whose actual winning percentage most exceeds its expected winning percentage (i.e., the most underrated conference).  The Low column shows the difference amount for the conference whose expected winning percentage most exceeds its actual winning percentage (i.e., the most overrated conference).  The Spread column shows the difference between the High and Low numbers.  This Spread is a measure of how well the rating system does at rating teams from a conference in relation to teams from other conferences:  It represents the amount of the rating system's discrimination against the best performing -- most underrated -- conference as compared to the poorest performing -- most overrated -- conference.  The Over and Under column shows the total amount by which all conferences perform better and worse than the Result Probability Table says they should -- in other words the rating system's overall discrimination among conferences.
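Here, as a rough sketch in Python, is how the summary columns can be derived from a per-conference table like the one above; I am assuming the Over and Under column is the sum of the absolute values of all conferences' differences.

def summarize(conference_differences):
    """conference_differences: dict of conference -> actual less expected winning pct."""
    high = max(conference_differences.values())    # most underrated conference
    low = min(conference_differences.values())     # most overrated conference
    spread = high - low                            # discrimination between the two extremes
    over_and_under = sum(abs(d) for d in conference_differences.values())   # total deviation
    return high, low, spread, over_and_under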

There are similar conference-by-conference tables for the KPI, the Balanced RPI, and Massey.  They all feed into the table at the top of this section.

As you can see from the top table, the Balanced RPI does the best job of rating teams from a conference properly in relation to teams from other conferences, followed by Massey.  The NCAA RPI does the poorest job, with the KPI next but quite similar to the NCAA RPI's performance.

ACTUAL LESS EXPECTED PERFORMANCE IN RELATION TO CONFERENCE STRENGTH

The preceding section covered whether and, if so, by how much a rating system discriminates among conferences.  It does not, however, cover whether there are patterns to the discrimination.

This section looks to see whether there are discrimination patterns.  Specifically, it compares conferences' actual less expected winning percentages to their strength as represented by their average ratings, to see whether there are discrimination patterns related to conference strength.

In the above table, the data for a rating system come from a chart for the system.  Each system's chart draws from a table like the long table in the preceding section that shows conferences' NCAA RPI ratings and their actual/expected results differences.

The following are the charts for the four systems.  Below the first chart, for the NCAA RPI, there is an explanation of what the chart shows and of how it relates to the above table.  Scroll to the right to see the entire chart.


In the chart, the vertical axis is for the difference between a conference's teams' actual and expected winning percentages.  At the top of the chart, actual winning percentages are greater than expected winning percentages, meaning conferences' results against teams from other conferences are better than the ratings say they should be -- in other words, the ratings underrate the conferences.  At the bottom, actual winning percentages are less than expected winning percentages, meaning conferences' results against teams from other conferences are poorer than the ratings say they should be -- in other words, the ratings overrate the conferences.

The horizontal axis is for the average of the conference teams' ratings.  The conferences are arranged in order from the highest rated (strongest) conference on the left to the lowest rated (weakest) on the right.

The solid black line is a computer generated straight trend line that shows the relationship between conference strength and conference teams' actual performance against teams from other conferences as compared to their rating-based expected performance.  The downward slope of the trend line indicates that stronger conferences, on the left, tend to perform better than their ratings say they should -- in other words tend to be underrated -- and weaker conferences, on the right, tend to perform more poorly than their ratings say they should -- in other words tend to be overrated.

On the chart, you can see the trend line's formula, which tells you what you can expect the actual/expected results difference to be at any point in the conference average NCAA RPI spectrum.  You also can see an R squared value, in this case 0.6949.  The R squared value is a measure of the strength of the relationship (consistency) between conferences' actual/expected results differences and conferences' strength.  An R squared value of 1 means perfect consistency and of 0 means no consistency.  For the NCAA RPI, the R squared value suggests that there is a relatively strong relationship between the NCAA RPI's underrating and overrating of conferences, on the one hand, and conference strength, on the other hand.

In the table at the beginning of this section, the NCAA RPI's row summarizes the NCAA RPI's overrating and underrating pattern in relationship to conference strength.  The first gold highlighted column (High) shows the amount by which actual performance is better than expected performance at the top left of the trend line: by 4.3%.  The second gold highlighted column (Low) shows the amount by which actual performance is poorer than expected performance at the bottom right of the trend line: by -5.6%.  The fully gold highlighted column shows the difference (Spread) between these two numbers: 9.9%.  This Spread is a measure of how much the NCAA RPI discriminates in relation to conference strength.
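For those curious about the computations behind the charts, here is a minimal Python sketch of the trend line and R squared calculation, using numpy.  It assumes the conference average ratings and the actual less expected winning percentages already have been computed.

import numpy as np

def trend_line_and_r_squared(average_ratings, actual_less_expected):
    x = np.asarray(average_ratings, dtype=float)
    y = np.asarray(actual_less_expected, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)          # straight trend line
    fitted = slope * x + intercept
    residual_ss = np.sum((y - fitted) ** 2)
    total_ss = np.sum((y - y.mean()) ** 2)
    r_squared = 1.0 - residual_ss / total_ss
    high = fitted[np.argmax(x)]                     # trend value at the strongest conference
    low = fitted[np.argmin(x)]                      # trend value at the weakest conference
    return slope, intercept, r_squared, high, low, high - low   # last value is the Spread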

Below are comparable charts for the KPI, the Balanced RPI, and Massey.  As you can see from the charts, the KPI shows the same discriminatory pattern as the NCAA RPI and has a similar R squared value.  The Balanced RPI and Massey, on the other hand, show minimal discrimination among conferences.  Further, the Balanced RPI's R squared value is much lower than for the NCAA RPI and the KPI, suggesting there is at most a weak relationship between the Balanced RPI's minimal discrimination among conferences and conference strength.  And, Massey's R squared value is even lower, suggesting virtually no relationship between Massey's minimal discrimination and conference strength.

The above table summarizes the four charts and allows us to compare how the rating systems perform.  The table shows that the NCAA RPI and the KPI have significant discrimination against stronger and in favor of weaker conferences, whereas the Balanced RPI and Massey have little discrimination.






ACTUAL LESS EXPECTED PERFORMANCE IN RELATION TO THE DIFFERENCE BETWEEN CONFERENCE TEAMS' NON-CONFERENCE OPPONENTS' RATINGS AND THOSE OPPONENTS' RATINGS AS STRENGTH OF SCHEDULE CONTRIBUTORS


The preceding section shows that the NCAA RPI discriminates against stronger conferences and in favor of weaker ones, whereas the Balanced RPI doesn't.  But it doesn't show why.  This section shows why.

Both the NCAA RPI and the Balanced RPI, before computing teams' ratings, compute teams' strengths of schedule, which they then combine with the teams' winning percentages to produce their RPI ratings.  In order to compute teams' strengths of schedule, each system assigns to each team a rating as a strength of schedule contributor to its opponents.  This makes it possible to compute, for each team, an opponents' average RPI rating and rank and also an opponents' average rating and rank as strength of schedule contributors.  Although the KPI may use strength of schedule calculations and Massey does, I am not able to determine what a team's strength of schedule contribution to its opponents is for either system, so I am not able to compute teams' opponents' average ratings and ranks as strength of schedule contributors for them.

For the NCAA RPI, as I will show more explicitly later in this article, there are big differences between teams' NCAA RPI ratings and ranks and their NCAA RPI strength of schedule contributor ratings and ranks.  The main difference between the Balanced RPI and the NCAA RPI is that the Balanced RPI involves a series of additional calculations designed to eliminate those differences.

The data in the above table come from the following two charts (which in turn come from other tables), the first for the NCAA RPI and the second for the Balanced RPI.  There is an explanation below the NCAA RPI chart.


In this NCAA RPI chart, the vertical axis is for the difference between a conference's teams' actual and expected winning percentages.  At the top of the chart, actual winning percentages are greater than expected winning percentages, meaning conferences' results against teams from other conferences are better than the ratings say they should be -- in other words, the ratings underrate the conferences.  At the bottom, actual winning percentages are less than expected winning percentages, meaning conferences' results against teams from other conferences are poorer than the ratings say they should be -- in other words, the ratings overrate the conferences.

The horizontal axis shows the amount by which conference teams' opponents' NCAA RPI ratings exceed their ratings as strength of schedule contributors under the NCAA RPI formula.  On the left is the conference whose teams' opponents' NCAA RPI ratings have the greatest excess over their NCAA RPI strength of schedule contributor ratings.  On the right, the conference teams' opponents' NCAA RPI ratings have the greatest deficit below the NCAA RPI strength of schedule contributor ratings.

As the trend line shows, the greater the excess of conference teams' opponents' NCAA RPI ratings over their NCAA RPI ratings as strength of schedule contributors, the greater the excess of conference teams' actual performance over their rating-based expected performance.  You also can see that the trend line's R squared value is 0.8565.  This relatively high R squared value suggests that the NCAA RPI's difference between conference teams' opponents' ratings and their ratings as strength of schedule contributors is the cause, or at least the primary cause, of the actual/expected results differences -- in other words, of the NCAA RPI's discrimination among conferences.

Looking at this table together with the preceding one leads to the following conclusion: The NCAA RPI, because of its disconnect between teams' NCAA RPI ratings and their NCAA RPI ratings as strength of schedule contributors, discriminates against teams from stronger conferences and in favor of teams from weaker conferences.

A case study from the 2025 season illustrates how the mechanics of the NCAA RPI formula cause this:

Liberty, from Conference USA, had an NCAA RPI rank of 45.  Cal, from the Big Ten, had an NCAA RPI rank of 46.  In other words, the NCAA RPI rated them essentially the same.

But, Liberty had an NCAA RPI strength of schedule contributor rank of 26.  Cal's rank was 92.

Thus for all the Conference USA teams Liberty played during the season, they got credit within their RPI ratings for having played the #26 team.  On the other hand, Cal's Big Ten opponents got credit for playing only the #92 team.  This caused Liberty's Conference USA opponents to be overrated and Cal's Big Ten opponents to be underrated.

Why the difference between Liberty's and Cal's NCAA RPI ranks as strength of schedule contributors when their NCAA RPI ranks were essentially the same?  The difference is due to the way the NCAA RPI formula computes a team's strength of schedule contribution to its opponents.  Under the NCAA RPI formula, a team's strength of schedule contribution to its opponents consists of 80% the team's winning percentage and only 20% the team's opponents' strength.  Liberty, being a top team in a mid-major conference, had a higher winning percentage than Cal but played weaker opponents due to being in a weaker conference.  Cal, being a mid-level team in a strong conference, had a lower winning percentage but played stronger opponents due to being in a stronger conference.  Liberty's situation is representative of teams at or near the top of mid-major and weaker conferences' standings.  Cal's is representative of strong conferences' teams that are not at or near the top of their conferences.  The result of this is that the RPI formula treats teams from stronger conferences as having played weaker opponents (like Cal) than they actually played and teams from weaker conferences as having played stronger opponents (like Liberty) than they actually played.  The end result, as the above table shows, is that the NCAA RPI underrates teams from stronger conferences and overrates teams from weaker ones.
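To illustrate the mechanics with numbers, here is a small Python sketch of the 80/20 strength of schedule contribution calculation.  The winning percentages below are hypothetical, chosen only to show the pattern; they are not Liberty's and Cal's actual numbers.

def sos_contribution(wp, owp):
    """A team's strength of schedule contribution to its opponents under the
    NCAA RPI: 80% its own winning percentage, 20% its opponents' winning percentage."""
    return 0.80 * wp + 0.20 * owp

liberty_like = sos_contribution(wp=0.85, owp=0.45)   # strong team, weaker schedule (hypothetical)
cal_like     = sos_contribution(wp=0.55, owp=0.65)   # mid-table team, stronger schedule (hypothetical)
print(liberty_like, cal_like)                        # 0.77 versus 0.57

Even if the two hypothetical teams had similar overall RPI ratings, the high winning percentage team contributes far more to its opponents' strengths of schedule under this formula.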

Here is the chart for the Balanced RPI: 


Look at the Balanced RPI's differences between conference teams' opponents' ratings and their ratings as strength of schedule contributors to their opponents, as shown across the bottom of this chart.  Then look at the same differences at the bottom of the preceding NCAA RPI chart.  You will see that for the Balanced RPI, any differences are minimal as compared to the NCAA RPI.

The Balanced RPI chart shows that the minimal differences under the Balanced RPI bear no relationship to the differences between conferences' actual performance and their expected performance.  Simply eyeballing the chart, you can see this, as the data points appear to be distributed randomly around the trend line, suggesting there is no relationship between conference teams' opponents' ratings/strength of schedule contributor ratings differences and conferences' actual/expected results differences.  The trend line's R squared value of virtually 0 confirms this.

The table at the top of this section summarizes what you see in the two charts.  For the NCAA RPI, the actual/expected performance spread is 9.6%.  This is the level of discrimination among conferences due to the NCAA RPI rating/strength of schedule rating difference.  By comparison, the Balanced RPI has no discrimination among conferences due to Balanced RPI rating/strength of schedule rating differences.

REGIONS

REGION TEAMS' ACTUAL RESULTS AS COMPARED TO THEIR EXPECTED RESULTS


This is like the first conference table above, but is for the four geographic regions within which teams tend to play their games: Middle, North, South, and West.

As the table shows, the NCAA RPI and the KPI have trouble rating teams from a geographic region properly in relation to teams from other geographic regions.  Some regions' teams' actual results against out-of-region opponents are better than their NCAA RPI and KPI ratings say they should be and other regions' results are poorer.  For the Balanced RPI and Massey, however, results are about as they should be.

ACTUAL LESS EXPECTED PERFORMANCE IN RELATION TO REGION STRENGTH

This table -- based on trend charts like those for conferences -- shows that the NCAA RPI and KPI both discriminate against stronger regions and in favor of weaker ones whereas the Balanced RPI and Massey have almost entirely eliminated the discrimination.

The above table and its underlying trend charts are based on the following data:

For the NCAA RPI:


NOTE:  For the NCAA RPI's regions trend chart, the R squared value is 0.4142.  This suggests that although there may be a relationship between the NCAA RPI's over- and under-rating of regions' teams and region strength, the relationship is not as strong as for conferences.

For the KPI:


NOTE: For the KPI's regions trend chart, the R squared value is 0.4170, very similar to the value for the NCAA RPI.

For the Balanced RPI:


NOTE: For the Balanced RPI's regions trend chart, the R squared value is 0.5940.  This might suggest the Balanced RPI retains a very small amount of the NCAA RPI's discrimination among regions in relation to region strength.


For Massey:


NOTE: For Massey's regions trend chart, the R squared value is 0.4132, similar to the values for the NCAA RPI and the KPI.

ACTUAL LESS EXPECTED PERFORMANCE IN RELATION TO THE DIFFERENCE BETWEEN TEAMS' NON-REGION OPPONENTS' RATINGS AND THOSE OPPONENTS' RATINGS AS STRENGTH OF SCHEDULE CONTRIBUTORS


This table -- again based on trend charts and underlying tables like those for conferences -- shows that for the NCAA RPI, the disconnect between region teams' opponents' overall ratings and their ratings as strength of schedule contributors results in some regions having actual winning percentages in non-region games that are better than their expected winning percentages and other regions having actual winning percentages that are poorer than their expected winning percentages.  The Balanced RPI, on the other hand, minimizes this problem.

NOTE:  For the NCAA RPI, the trend chart's R squared value is 0.8458.  For the Balanced RPI's trend chart, the value is 0.7637.

RANK/STRENGTH OF SCHEDULE CONTRIBUTOR RANK DIFFERENCES FOR TEAMS

As the above information shows, the NCAA RPI formula's differences between how it rates and ranks teams overall and how it rates and ranks them as strength of schedule contributors cause it to discriminate against stronger conferences and in favor of weaker ones.  They also cause the NCAA RPI formula to discriminate against stronger regions and in favor of weaker ones.

In that context, it is worth comparing the NCAA RPI and the Balanced RPI in terms of the size of the differences between teams' overall ranks and their ranks as strength of schedule contributors.  The following table shows this comparison.


As you can see, for the NCAA RPI the average difference between a team's overall rank and its rank as a strength of schedule contributor is 31.3 positions, the median is 24 positions, and the greatest difference is 177 positions.  The Balanced RPI, on the other hand, essentially has eliminated these differences with an average difference of 0.3, a median of 0, and a maximum difference of 7 positions.

In the portion of the table to the right, I have highlighted the column that shows the percentage of teams for which the difference between the RPI rank and the strength of schedule contributor rank is 15 or fewer positions: 37% for the NCAA RPI and 100% for the Balanced RPI.

I have highlighted the 15 or fewer positions column because it relates to non-conference scheduling.  I consult with teams on scheduling.  In general, if a team is choosing between potential opponents likely to have similar overall ranks, it should play the opponent likely to have the better strength of schedule contributor rank.  Teams' ranks from year to year, however, including their ranks as strength of schedule contributors, are variable.  I selected the 15 or fewer positions column because a gap of more than 15 positions is large enough to plan on: a team whose expected strength of schedule contributor rank is more than 15 positions better than its overall rank is likely, in the future, to have a strength of schedule contributor rank better than its overall rank; and a team whose expected strength of schedule contributor rank is more than 15 positions poorer than its overall rank is likely to have a strength of schedule contributor rank poorer than its overall rank.  Thus with the NCAA RPI having a 15 or fewer rank position difference for only 37% of teams, it has 63% of teams with differences greater than 15 positions.  That is enough teams to make it possible, in non-conference scheduling, to trick the NCAA RPI by playing non-conference opponents whose contributions to your strength of schedule will be significantly better than their overall NCAA RPI ranks say they should be.  This is not possible, however, for the Balanced RPI.
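Here is a minimal Python sketch of the rank difference statistics in the above table, including the 15 or fewer positions percentage.

import statistics

def rank_difference_statistics(overall_ranks, sos_contributor_ranks):
    """Both arguments: dict of team -> rank under the same rating system."""
    differences = [abs(overall_ranks[team] - sos_contributor_ranks[team])
                   for team in overall_ranks]
    return {
        "average difference": statistics.mean(differences),
        "median difference": statistics.median(differences),
        "greatest difference": max(differences),
        "% of teams within 15 positions":
            100 * sum(1 for d in differences if d <= 15) / len(differences),
    }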

Returning to the Liberty and Cal example from above: although the two have essentially the same NCAA RPI ranks, if the Committee is using the NCAA RPI and a team can choose between Liberty and Cal as a non-conference opponent, the team always should choose Liberty.  The NCAA RPI formula will treat the team as having played the #26 ranked team if it plays Liberty, rather than the #92 ranked team if it plays Cal.

Thus it is possible, under the NCAA RPI, for teams to "game" the ratings through "smart" scheduling.  This is not possible under the Balanced RPI.

TEAM ACTUAL/EXPECTED WINNING PERCENTAGE DIFFERENCES




This table is like ones above for conferences and regions, but shows, for individual teams, how the systems' ratings perform at matching teams' actual results.

As you can see, in terms of performance as measured by the Spread between the team that historically most out-performs its rating and the team that most under-performs it, the KPI does the poorest job, followed by Massey and the NCAA RPI, with those three being relatively close.  The Balanced RPI does much better.  And when one looks at the Over and Under Total -- the amount by which all teams deviate from having their actual and expected performances match -- the Balanced RPI is far superior to the other systems.

Finally, for those who want to see the individual team details, the following tables show, for the different rating systems, all the teams and their actual/expected result differences.  The teams are arranged by conference and, within each conference, in order of the extent by which their actual performance exceeded their expected performance.  Teams with positive percentages had actual winning percentages that exceed their ratings-based expected winning percentages.  Teams with negative percentages had actual winning percentages that were poorer than their expected winning percentages.  If you spend time examining the NCAA RPI table, you can get an excellent picture of the NCAA RPI's discriminatory patterns in relation to both conferences and regions.

NCAA RPI TABLE



BALANCED RPI TABLE



KPI TABLE



MASSEY TABLE



Monday, January 12, 2026

2026 ARTICLE 2: CORRECTION - THERE HAS NOT BEEN A CHANGE IN OUT-OF-REGION TRAVEL

In a number of 2025 articles, I indicated we were seeing a significant decline in out-of-region travel, with potential negative impacts on the NCAA RPI's ability to properly rank teams from a region in relation to teams from other regions.  I WAS WRONG about there being a decline.  There has not been a significant change in out-of-region travel.  My error was due to a programming issue that resulted in overstated amounts of past out-of-region travel.

The following table shows the percentages of games that were in-region, for each of four time periods:

2007 to 2012: During this period, conference membership was fairly stable, except towards the end.

2013 to 2023:  2013 was the last year of a significant conference re-shuffle, with conference membership remaining relatively stable through 2023.  There were changes, but not major ones.

2024 to the Present:  2024 was the year of the demise of the Pac 12 as we previously knew it as well as of other significant conference membership changes, with even more changes yet to come.  In particular, these changes have involved conferences trying to expand their geographic (and television market) footprints.

The Most Recent Year:  2025, which also is included in the 2024 to the Present period.

The following table shows the percentage of games that were in-region, for each of the four regions, for each of the time periods:


With conferences expanding their geographic footprints, one might have expected a significant increase in out-of-region travel in 2024 and 2025, continuing into the future.  This does not appear to have happened.  It suggests that any increase in in-conference but out-of-region travel, for teams in conferences with expanded geographic footprints, has been offset by decreases in non-conference out-of-region travel either for teams in the expanded conferences or for other teams.

Whether this will continue in the coming years remains to be seen.

Monday, December 29, 2025

2026 ARTICLE 1: RANKING TEAMS, 2026 CONFERENCES, AND REGIONS BASED ON 10 YEARS OF NCAA TOURNAMENT RESULTS

For some New Year's fun, here are how teams, conferences based on their memberships for 2026, and the four geographic regions rank when considering NCAA Tournament performances over the last 10 years -- 2016 through 2025 (including Covid-affected 2020).  To keep the system simple, I simply award a team 1 point for each NCAA Tournament win, so that the maximum points a team can earn in any year is the champion's 6 points.

TEAMS

In general, without paying too much attention to the exact order of the teams and recognizing there are exceptions where teams have improved or declined significantly over the 10 years, a lot of teams are in the ranges I expect to see for the upcoming year.

To be on the list, a team must have won at least 1 NCAA Tournament game.


CONFERENCES

NOTE: Since conference memberships are based on teams' 2026 conferences, the PacTwelve numbers are for the teams that will be members of that conference in 2026.


REGIONS



NOTE:  As you can see, there is a big gap between the South and the Middle regions.  If you were to look at the average ranks of teams in the regions, however, the South and Middle would be very close.  This reveals a particular characteristic of the South region:  (1) It is very strong at the top, thus  producing excellent NCAA Tournament results, whereas (2) It is very weak at the bottom, thus producing only a middling average rank for the entire region.


Tuesday, December 23, 2025

2025 ARTICLE 32: FINAL RPI REPORTS

Here are the final RPI reports for 2025, covering the regular season and conference tournaments.

TEAMS

In the teams table, I have the teams through #57 in order of their NCAA RPI ranks.  Based on past history, teams in this group that were not Automatic Qualifiers were the candidates for at large positions.  If you refer to the color-coded columns on the left, you can see which teams were candidates for seeds as well as at large positions.  This year, all the seeds and at large selections fell within these historic candidate ranges.

In addition, the table has columns showing the Committee's NCAA Tournament seeding and at large selection decisions, which teams were Automatic Qualifiers, and which teams were disqualified from NCAA Tournament at large positions due to having more losses than wins.  In the Seed or At Large Selection column, 1 = #1 seed, 2 = #2 seed, 3 = #3 seed, 4 = #4 seed, 4.5 = #5 seed, 4.6 = #6 seed, 4.7 = #7 seed, 4.8 = #8 seed, 5 = unseeded Automatic Qualifier, 6 = at large selection, 7 = NCAA RPI top 57 team not getting an at large position, and 8 = NCAA RPI top 57 team disqualified (this year, there weren't any).
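For anyone working with the table programmatically, here is the same coding scheme expressed as a simple Python lookup (the dictionary name is just for illustration):

SEED_OR_AT_LARGE_CODES = {
    1: "#1 seed", 2: "#2 seed", 3: "#3 seed", 4: "#4 seed",
    4.5: "#5 seed", 4.6: "#6 seed", 4.7: "#7 seed", 4.8: "#8 seed",
    5: "unseeded Automatic Qualifier",
    6: "at large selection",
    7: "NCAA RPI top 57 team not getting an at large position",
    8: "NCAA RPI top 57 team disqualified",
}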

For teams with NCAA RPI ranks #58 and poorer, I have them in order of their Balanced RPI ranks.  This makes it easy to identify which teams are in the top 57 using the Balanced RPI but not using the NCAA RPI.

In the Team column, I have highlighted in salmon the teams that are in the NCAA RPI's top 57 but not in the Balanced RPI's top 57.  I have highlighted in green the teams that are in the Balanced RPI's top 57 but not in the NCAA RPI's top 57.

As I have pointed out previously, for the NCAA RPI, there are significant differences between teams' NCAA RPI ranks and their ranks as Strength of Schedule contributors to their opponents' NCAA RPI ratings.  The table shows these ranks.

As a result of the differences between teams' NCAA RPI ranks and their ranks as Strength of Schedule contributors, a team's opponents' average NCAA RPI ranks and the opponents' average ranks as Strength of Schedule contributors also are different.  The table shows these average ranks and also shows for each team the difference between these two numbers.  If the difference between a team's opponents' average NCAA RPI rank and the opponents' average Strength of Schedule contributor rank is positive, then the team's Strength of Schedule is overrated, which means in turn that the team is overrated.  If the difference is negative, then the team's Strength of Schedule is underrated, which means the team is underrated.  I designed the Balanced RPI for the specific purpose of producing team BRPI ranks and ranks as Strength of Schedule contributors under the BRPI formula that are equal.  You can see this equality in the columns to the right of the NCAA RPI columns.

You'll have to scroll to the right to see the right-hand columns.


CONFERENCES


REGIONS

Tuesday, November 25, 2025

2025 ARTICLE 31: WHAT IF THE COMMITTEE USED THE BALANCED RPI RATHER THAN THE NCAA RPI? DIGGING DEEP

In this article, I'll show what the Women's Soccer Committee's seeding and at large selection decisions likely would have looked like if the Committee had used the Balanced RPI and will compare them to the Committee's actual decisions.  After doing that, I will discuss in detail why there are differences.

Decisions with the Balanced RPI as Compared to the Committee's Actual Decisions

The table below shows the Committee's actual seeding and at large selections as compared to what they likely would have been using the Balanced RPI.  The Committee does not always do what is "likely," but they come close, so this should give a good picture of the differences between the two rating systems.  At the top of the table is a key to the table's two right-hand columns.  In the table's left-hand column, the green highlighting is for teams that would get at large selections using the Balanced RPI but that did not actually get them with the Committee using the NCAA RPI.  The orange highlighting is for teams that would not get at large selections using the Balanced RPI but that actually got them.  The lime highlighting is for teams that would have been candidates (Top 57) for at large selections using the Balanced RPI but would not have been selected, and that were not even candidates using the NCAA RPI.  The salmon highlighting is for teams that were not candidates for at large selections using the Balanced RPI but that were candidates, though not selected, using the NCAA RPI.




Why Are There Differences?  Digging Down a Level

Think of a rating system as a tree.  Its ratings and rankings of teams are what you see above ground.  Where the ratings and ranks come from is the tree's underground root structure.  As you dig down and expose the root structure, you get a better and better understanding of where what you see above ground comes from.  I'll use this tree metaphor to show what the differences are between the NCAA RPI and the Balanced RPI and why they are different.

The first underground level is best described by looking at actual results as compared to  "expected" results.  Specifically, which teams, conferences, and regions have done better (actual results) than their ratings say they should have done (expected results) and which have done more poorly?

Expected results for a particular rating system come from a history-based result probability table for that system.  The table shows teams' win, loss, and tie probabilities for different rating differences between teams and their opponents (adjusted for home field advantage).  When applied to large numbers of games, the result probability tables are very accurate.  For example, applying the NCAA RPI result probability table to all games from 2010 through 2024, here is how games' higher rated teams' expected results compare to their actual results:


When applying the result probability table to a single team for a single season, thus dealing with relatively few games, one would not expect the level of equivalence between actual results and expected results that is shown in the above table.  For the 2025 season, a look at individual teams' expected results based on their NCAA RPI ratings compared to their actual results yields the following table:


This table shows that the team whose actual winning percentage was most above its expected winning percentage was 9.0% above.  The team whose actual winning percentage was most below its expected winning percentage was -11.2% below.  The sum of these two numbers, 20.2%, is an indicator of how well the NCAA RPI measured teams' performance.
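In Python terms, the indicator is simply the distance between the team that most over-performs its rating and the team that most under-performs it; a minimal sketch:

def performance_spread(team_differences):
    """team_differences: dict of team -> actual less expected winning percentage."""
    most_above = max(team_differences.values())   # e.g., +9.0% for the 2025 NCAA RPI
    most_below = min(team_differences.values())   # e.g., -11.2%
    return most_above - most_below                # e.g., 20.2%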

Here is a similar table, but looking at conferences, in non-conference games:


Teams from the different states play a majority, or at least a plurality, of their games within one of four geographic regions.  The next table is similar to the teams and conferences tables, but looks at the four regions, in non-region games:


Here are similar tables for the Balanced RPI:







As you can see, in going through the levels -- from teams, to conferences, to regions -- the Balanced RPI's expected results are closer to actual results than for the NCAA RPI.  Using the tree metaphor, this underground difference between the two systems accounts for part of the difference you see between the Committee's NCAA RPI-based seeding and at large selections and what the seeds and selections likely would have been using the Balanced RPI.

Why Are There Differences?  Digging Down a Second Level

In sports, and perhaps especially for soccer, no system can produce ratings that are 100% consistent with results.  Given that there will be inconsistencies between results and ratings, a critical question is whether the inconsistencies are random (desirable) or whether they follow patterns that discriminate against and/or in favor of identifiable groups of teams (not desirable).

A good way to answer the "random or discriminatory" question is to look at the teams that are candidates for at large selections.  Historically, all at large teams have come from the Top 57 in the RPI rankings, so that is the candidate group.  In the 2025 shift from the NCAA RPI to the Balanced RPI, there is a change of 10 teams in the Top 57:

The teams dropping out of the Top 57 as a result of a shift to the Balanced RPI are, in order of NCAA RPI rank: Fairfield (AQ), Samford (AQ), Rhode Island, Charlotte, James Madison, Old Dominion, Army (AQ), Lipscomb (AQ), UNC Wilmington, and Texas State (AQ).

The teams moving into the Top 57 as a result of the shift are, again in order of NCAA RPI rank: Cal State Fullerton, Pepperdine (AQ), Kansas State, Seattle, Arizona State, Southern California, Portland, Houston, Santa Clara, and Nebraska.

The following table provides data related to why these changes occur:


For each team that is "in" or "out" of the Top 57 in a shift to the Balanced RPI, the table shows, in the five columns on the right, how the teams' actual winning percentages compare with their expected winning percentages.  It shows this for the NCAA RPI ratings and for the Balanced RPI ratings.

In the table, the 7th and 8th columns show the teams' actual winning percentages as compared to their NCAA RPI ratings' expected winning percentages.  The 9th column shows the actual versus expected differences for the teams.  A positive difference means a team's actual results have been better than its expected results.  A negative difference means actual results have been poorer than expected results. At the bottom of the 9th column, you can see the average differences for the "in" teams and for the "out" teams.  The "in" teams' actual results averaged 2.3% better than their expected results; and the "out" teams' actual results averaged 3.0% poorer than their expected results.  In other words, on average the NCAA RPI underrated the "in" teams and overrated the "out" teams, with a cumulative 5.3% (2.3% + 3.0%) discriminatory effect against the "in" teams relative to the "out" teams.

On the other hand, moving to the 11th column, for the Balanced RPI, the "in" teams' actual results averaged 0.2% better than their expected results; and the "out" teams' actual results averaged 1.0% better.  This amounts to a slight discriminatory effect against the "out" teams of 0.9% (1.0% - 0.2%, rounded off) relative to the "in" teams.
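A minimal Python sketch of this group comparison, with the sign convention that a positive result means discrimination against the "in" teams relative to the "out" teams:

def relative_discrimination(in_team_diffs, out_team_diffs):
    """Each argument: list of actual less expected winning percentages for a group.
    Positive return value = discrimination against the "in" teams relative to the
    "out" teams; negative = the reverse."""
    in_average = sum(in_team_diffs) / len(in_team_diffs)
    out_average = sum(out_team_diffs) / len(out_team_diffs)
    return in_average - out_average

With the averages above, the NCAA RPI works out to 2.3% - (-3.0%) = 5.3% against the "in" teams, and the Balanced RPI to 0.2% - 1.0%, roughly -0.8% from the rounded figures shown (the underlying unrounded numbers give the 0.9% effect against the "out" teams mentioned above).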

Continuing with the tree metaphor, this underground difference between the NCAA RPI and the Balanced RPI -- the NCAA RPI's discrimination against the "in" teams and in favor of the "out" teams and the Balanced RPI's near elimination of discrimination between the "in" and "out" teams -- accounts for another part of the difference you see between the Committee's NCAA RPI-based at large selections and what the selections likely would have been using the Balanced RPI.

This tells us that the Balanced RPI mostly eliminates the NCAA RPI's discriminatory effects.  But, it does not tell us why.

 Why Are There Differences?  Digging Down Another Level

The NCAA RPI and the Balanced RPI have two key components:  a team's Winning Percentage (WP) and the team's Strength of Schedule (SoS).  Within each system's formula, each component has a 50% effective weight.

A team's SoS is intended to measure its opponents' strengths.  The two systems measure opponents' strengths differently.

For each rating system, it is possible to calculate a team's rating as an SoS contributor to its opponents -- as distinguished from its actual RPI rating.  It then is possible to determine a team's rank as an SoS contributor to its opponents.

A team's rank as an SoS contributor should be the same as its RPI rank.  For the NCAA RPI, however, it isn't.  In fact, for the NCAA RPI, the average difference between a team's RPI rank and its rank as an SoS contributor is 31.3 rank positions, with the median difference 24 positions.  I designed the Balanced RPI, using the RPI as a starting point, to eliminate this disparity between RPI ranks and SoS contributor ranks.  As a result, for the Balanced RPI, the average difference between a team's RPI rank and its rank as an SoS contributor is 0.8 rank positions, with the median difference 0 positions.  In simple terms, for the NCAA RPI there are significant differences between a team's NCAA RPI rank and its rank as an SoS contributor to its opponents; but for the Balanced RPI the two essentially are the same.

For the "in" and "out" teams, the following table shows what I described in the preceding paragraph:


In the table, start out by looking at the three columns Opponents NCAA RPI Average Rank, Opponents NCAA RPI Average Rank as SoS Contributor,  and NCAA RPI Difference.  In the Difference column, a negative number means that the team's opponents' average rank as SoS contributors is poorer than their opponents' average actual NCAA RPI rank.  In other words, the NCAA RPI formula understates the team's SoS -- it discriminates against the team.  A positive number means that the team's opponents' average rank as SoS contributors is better than its opponents' average actual NCAA RPI rank.  In other words, the NCAA RPI formula overstates the team's SoS -- it discriminates in favor of the team.

At the bottom of the table, in the NCAA RPI Difference column, you can see the average differences for the "in" and "out" teams.  The average difference for the "in" teams is -27 and for the "out" teams is -7.  For the "out" teams, this means the NCAA RPI discriminates against them some.  But for the "in" teams, the NCAA RPI discriminates against them almost four times as much as for the "out" teams.

The last three columns on the right are similar, but for the Balanced RPI.  There, for the "in" teams, the average difference is -1 and for the "out" teams it is 1.  In other words, using the Balanced RPI there is virtually no discrimination between the "in" and "out" teams.

For the NCAA RPI, this high level of discrimination against the "in" teams explains why the "in" teams' average performance is better than their ratings say it should be, including in relation to the "out" teams even though the "out" teams experience some discrimination.  And for the Balanced RPI, the lack of discrimination explains why the "in" and "out" teams' performance is close to what their ratings say it should be.

Why Are There Differences?  Digging Down One More Level

There is more to see, however, in the preceding table.  If you focus on the conferences and regions of the "in" and "out" teams, you will see patterns.  Most of the "in" teams are from the West region and those not from the West are from the Power 4 conferences.  All of the "out" teams are from mid-major conferences and from the North and South regions.  Why are we seeing these patterns?

The following table shows conferences' actual winning percentages in non-conference games as compared to their NCAA RPI expected winning percentages, with the conferences in order from those most discriminated against at the top to those most discriminated in favor of at the bottom:


As you can see, stronger conferences and conferences from the West tend to be in the upper part of the table - the most discriminated against conferences.  Compare this with the following table for the Balanced RPI:


In the Balanced RPI table, there still are differences between conferences' actual performance and their expected performance.  But the differences are less tied to conference strength and geographic regions than in the NCAA RPI table (as well as overall being smaller).

What underlies the above tables for the NCAA RPI and the Balanced RPI?  The following table shows, for each conference, its teams' opponents' average NCAA RPI ranks, its teams' opponents' average NCAA RPI ranks as strength of schedule contributors, and the difference between the two.  As above, a negative difference means the NCAA RPI on average discriminates against the conference's teams; and a positive difference means it discriminates in favor of the conference's teams.  In the table, the most discriminated against teams are at the top and the most discriminated in favor of teams are at the bottom.


In this table, the key columns are the Conferences NCAA RPI Rank and the Conference Teams Opponents NCAA RPI Ranks Less NCAA RPI SoS Contributor Ranks Difference columns.  As you can see, the NCAA's way of calculating SoS discriminates heavily against teams from stronger conferences and in favor of teams from weaker conferences.

Compare this to the similar table for the Balanced RPI:


Here, the conferences are in the same order as in the preceding table.  You can see that for the Balanced RPI, conference teams' opponents' ranks and their ranks as SoS contributors are essentially the same for all conferences.  This is one of the underlying causes for the "in" and "out" changes when shifting from the NCAA RPI to the Balanced RPI.

What about for regions?

Here is the NCAA RPI's actual versus expected performance table for regions, in non-region games:


As you can see, the NCAA RPI discriminates significantly against the West region (and in favor of the North).  Compare this to the table for the Balanced RPI:


As you can see, the Balanced RPI minimizes discrimination in relation to regions.

Here is the underlying table for the NCAA RPI, showing regions' teams' average RPI ranks as compared to their average ranks as SoS contributors:



As you can see, the numbers in the Difference column are in order of region strength.  The NCAA RPI's discrimination in how it values region teams' strengths of schedule exactly matches region strength.  The stronger the region, the more the discrimination.

Here is the table for the Balanced RPI:


Here, the regions are in the same order as in the preceding table.  You can see that for the Balanced RPI, region teams' opponents' ranks and their ranks as SoS contributors are essentially the same.  This is another of the underlying causes for the "in" and "out" changes when shifting from the NCAA RPI to the Balanced RPI.

Summarizing all of the above information, the reasons for the changes in "in" and "out" teams when shifting from the NCAA RPI to the Balanced RPI are (1) the Balanced RPI's ratings of teams correspond better with teams' actual performance and (2) the Balanced RPI eliminates the NCAA RPI's discrimination among conferences and regions.

Why Are There Differences?  Digging Down to the Third Level

Why does the NCAA RPI have large differences between RPI ranks and ranks as SoS contributors?  Continuing with the tree metaphor, it is due to the NCAA RPI's DNA, the RPI formula itself.

A team's RPI rating is a combination of the team's Winning Percentage (WP), its Opponents' Winning Percentages (OWP), and its Opponents' Opponents' Winning Percentages (OOWP).  The way the formula combines the three, WP has an effective weight of 50%, OWP has an effective weight of 40%, and OOWP has an effective weight of 10%.

A team's opponents' contributions to its RPI rating are their winning percentages (OWP) and their opponents' winning percentages (OOWP), which as just stated account for 40% and 10% respectively of the team's RPI rating.  Thus an opponent's contribution, if isolated, is 80% the opponent's WP and 20% the opponent's OWP.

Since a team's NCAA RPI rating is 50% its WP, 40% its OWP, and 10% its OOWP, but the team's SoS contribution to an opponent is 80% the team's WP and 20% the team's OWP, it is no wonder there are significant differences between teams'  NCAA RPI ranks and their ranks as SoS contributors.
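A quick arithmetic check, in Python, of where the 80/20 split comes from: within a team's RPI, an opponent enters through OWP (40% effective weight, via the opponent's own winning percentage) and OOWP (10% effective weight, via the opponent's opponents' winning percentages); rescaling those two weights so they sum to 1 gives the opponent's isolated contribution.

owp_weight, oowp_weight = 0.40, 0.10    # effective weights within a team's RPI
isolated_wp_share = owp_weight / (owp_weight + oowp_weight)     # 0.8
isolated_owp_share = oowp_weight / (owp_weight + oowp_weight)   # 0.2
print(isolated_wp_share, isolated_owp_share)    # 80% the opponent's WP, 20% the opponent's OWP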

These differences between a team's NCAA RPI rating and its SoS contribution to an opponent are the DNA that is the source of the NCAA RPI patterns described above.

The Balanced RPI, on the other hand, starts with a structure similar to the NCAA RPI, although with effective weights of 50% WP, 25% OWP, and 25% OOWP and, within WP, with a tie counting as half of a win rather than a third of a win as in the NCAA RPI.  The Balanced RPI formula then goes through a series of additional calculations whose effect is to have each team's RPI rank and rank as an SoS contributor be the same.  This more complex formula is the source of the Balanced RPI patterns described above.

CONCLUSION 

The differences between the Committee's actual NCAA Tournament seeding and at large decisions and what those decisions likely would have been using the Balanced RPI are not simply a matter of differences between two equal rating systems.

(1) The Balanced RPI's ratings are more consistent with actual game results than the NCAA RPI's; and (2) The Balanced RPI has minimal to no discrimination among conferences and regions whereas the NCAA RPI has significant discrimination.  These differences between the NCAA RPI and the Balanced RPI account for the bracket differences at the top of this article.