Tuesday, November 25, 2025

2025 ARTICLE 31: WHAT IF THE COMMITTEE USED THE BALANCED RPI RATHER THAN THE NCAA RPI? DIGGING DEEP

In this article, I'll show what the Women's Soccer Committee's seeding and at large selection decisions likely would have looked like if the Committee had used the Balanced RPI, compare those decisions to the Committee's actual ones, and then discuss in detail why there are differences.

Decisions with the Balanced RPI as Compared to the Committee's Actual Decisions

The table below shows the Committee's actual seeding and at large selections as compared to what they likely would have been using the Balanced RPI.  The Committee does not always do what is "likely," but it comes close, so this should give a good picture of the differences between the two rating systems.  At the top of the table is a key to the table's two right-hand columns.  In the table's left-hand column, the highlighting means the following:

    Green: teams that would get at large selections using the Balanced RPI but that did not actually get them with the Committee using the NCAA RPI.

    Orange: teams that would not get at large selections using the Balanced RPI but that actually got them.

    Lime: teams that would have been candidates (Top 57) for at large selections using the Balanced RPI, though not selected, but that were not even candidates using the NCAA RPI.

    Salmon: teams that were not candidates for at large selections using the Balanced RPI but that were candidates, though unselected, using the NCAA RPI.




Why Are There Differences?  Digging Down a Level

Think of a rating system as a tree.  Its ratings and rankings of teams are what you see above ground.  Where the ratings and ranks come from is the tree's underground root structure.  As you dig down and expose the root structure, you get a better and better understanding of where what you see above ground comes from.  I'll use this tree metaphor to show what the differences are between the NCAA RPI and the Balanced RPI and why they are different.

The first underground level is best described by looking at actual results as compared to  "expected" results.  Specifically, which teams, conferences, and regions have done better (actual results) than their ratings say they should have done (expected results) and which have done more poorly?

Expected results for a particular rating system come from a history-based result probability table for that system.  The table shows teams' win, loss, and tie probabilities for different rating differences between teams and their opponents (adjusted for home field advantage).  When applied to large numbers of games, the result probability tables are very accurate.  For example, applying the NCAA RPI result probability table to all games from 2010 through 2024, here is how games' higher rated teams' expected results compare to their actual results:


When applying the result probability table to a single team for a single season, thus dealing with relatively few games, one would not expect the level of equivalence between actual results and expected results that is shown in the above table.  For the 2025 season, a look at individual teams' expected results based on their NCAA RPI ratings compared to their actual results yields the following table:


This table shows that the team whose actual winning percentage was most above its expected winning percentage was 9.0% above.  The team whose actual winning percentage was most below its expected winning percentage was 11.2% below.  The sum of these two numbers, 20.2%, is an indicator of how well the NCAA RPI measured teams' performance: the smaller the spread, the better the ratings match actual performance.
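
To make the mechanics concrete, here is a minimal Python sketch of the actual-versus-expected comparison for a single team.  The probability numbers in it are illustrative placeholders, not the real history-based table, and crediting a tie as half a win is just one way to score a winning percentage:

    # Minimal sketch: a team's actual vs. expected winning percentage.
    # The probability table here is an illustrative placeholder; the
    # real one is built from 2010-2024 game histories, binned by
    # home-field-adjusted rating difference.

    def result_probabilities(rating_diff):
        # Illustrative (win, tie, loss) probabilities for the team,
        # given its rating minus its opponent's rating.
        if rating_diff > 0.05:
            return (0.75, 0.15, 0.10)
        if rating_diff > 0.00:
            return (0.50, 0.25, 0.25)
        return (0.30, 0.25, 0.45)

    def expected_wp(rating_diffs):
        # Expected winning percentage over a set of games, crediting
        # a tie as half a win.
        total = 0.0
        for diff in rating_diffs:
            p_win, p_tie, _p_loss = result_probabilities(diff)
            total += p_win + 0.5 * p_tie
        return total / len(rating_diffs)

    def actual_wp(results):
        # results: list of 'W', 'T', 'L'
        credit = {'W': 1.0, 'T': 0.5, 'L': 0.0}
        return sum(credit[r] for r in results) / len(results)

    diffs = [0.08, 0.02, -0.03, 0.10]    # hypothetical games
    results = ['W', 'T', 'W', 'W']       # hypothetical results
    print(actual_wp(results) - expected_wp(diffs))  # + means overperformance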

Here is a similar table, but looking at conferences, in non-conference games:


Teams from each state play a majority, or at least a plurality, of their games within one of four geographic regions.  The next table is similar to the teams and conferences tables, but looking at the four regions, in non-region games:


Here are similar tables for the Balanced RPI:







As you can see, in going through the levels -- from teams, to conferences, to regions -- the Balanced RPI's expected results are closer to actual results than for the NCAA RPI.  Using the tree metaphor, this underground difference between the two systems accounts for part of the difference you see between the Committee's NCAA RPI-based seeding and at large selections and what the seeds and selections likely would have been using the Balanced RPI.

Why Are There Differences?  Digging Down a Second Level

In sports, and perhaps especially for soccer, no system can produce ratings that are 100% consistent with results.  Given that there will be inconsistencies between results and ratings, a critical question is whether the inconsistencies are random (desirable) or whether they follow patterns that discriminate against and/or in favor of identifiable groups of teams (not desirable).

A good way to answer the "random or discriminatory" question is to look at the teams that are candidates for at large selections.  Historically, all at large teams have come from the Top 57 in the RPI rankings, so that is the candidate group.  In the 2025 shift from the NCAA RPI to the Balanced RPI, there is a change of 10 teams in the Top 57:

The teams dropping out of the Top 57 as a result of a shift to the Balanced RPI are, in order of NCAA RPI rank: Fairfield (AQ), Samford (AQ), Rhode Island, Charlotte, James Madison, Old Dominion, Army (AQ), Lipscomb (AQ), UNC Wilmington, and Texas State (AQ).

The teams moving into the Top 57 as a result of the shift are, again in order of NCAA RPI rank: Cal State Fullerton, Pepperdine (AQ), Kansas State, Seattle, Arizona State, Southern California, Portland, Houston, Santa Clara, and Nebraska.

The following table provides data related to why these changes occur:


For each team that is "in" or "out" of the Top 57 in a shift to the Balanced RPI, the table shows, in the five columns on the right, how teams' actual winning percentages compare with their expected winning percentages.  It shows this for the NCAA RPI ratings and for the Balanced RPI ratings.

In the table, the 7th and 8th columns show the teams' actual winning percentages as compared to their NCAA RPI ratings' expected winning percentages.  The 9th column shows the actual versus expected differences for the teams.  A positive difference means a team's actual results have been better than its expected results.  A negative difference means actual results have been poorer than expected results. At the bottom of the 9th column, you can see the average differences for the "in" teams and for the "out" teams.  The "in" teams' actual results averaged 2.3% better than their expected results; and the "out" teams' actual results averaged 3.0% poorer than their expected results.  In other words, on average the NCAA RPI underrated the "in" teams and overrated the "out" teams, with a cumulative 5.3% (2.3% + 3.0%) discriminatory effect against the "in" teams relative to the "out" teams.

On the other hand, moving to the 11th column, for the Balanced RPI, the "in" teams' actual results averaged 0.2% better than their expected results; and the "out" teams' actual results averaged 1.0% better.  This amounts to a slight discriminatory effect against the "out" teams of 0.9% (1.0% - 0.2%, rounded off) relative to the "in" teams.
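
Expressed as arithmetic, the discriminatory effect is just the gap between the two groups' average actual-minus-expected differences.  A tiny sketch using the averages reported above:

    # The "in" vs. "out" comparison, using the averages quoted above.
    ncaa_in, ncaa_out = 0.023, -0.030          # NCAA RPI averages
    balanced_in, balanced_out = 0.002, 0.010   # Balanced RPI averages

    # Positive gap: the system underrates the "in" teams relative to
    # the "out" teams; negative gap: the reverse.
    print(ncaa_in - ncaa_out)          # 0.053 -> the 5.3% effect
    print(balanced_in - balanced_out)  # -0.008 -> the roughly 0.9% effect
                                       # (unrounded data differ slightly)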

Continuing with the tree metaphor, this underground difference between the NCAA RPI and the Balanced RPI -- the NCAA RPI's discrimination against the "in" teams and in favor of the "out" teams and the Balanced RPI's near elimination of discrimination between the "in" and "out" teams -- accounts for another part of the difference you see between the Committee's NCAA RPI-based at large selections and what the selections likely would have been using the Balanced RPI.

This tells us that the Balanced RPI mostly eliminates the NCAA RPI's discriminatory effects.  But, it does not tell us why.

Why Are There Differences?  Digging Down Another Level

The NCAA RPI and the Balanced RPI have two key components:  a team's Winning Percentage (WP) and the team's Strength of Schedule (SoS).  Within each system's formula, each component has a 50% effective weight.

A team's SoS is intended to measure its opponents' strengths.  The two systems measure opponents' strengths differently.

For each rating system, it is possible to calculate a team's rating as an SoS contributor to its opponents -- as distinguished from its actual RPI rating.  It then is possible to determine a team's rank as an SoS contributor to its opponents.

A team's rank as an SoS contributor should be the same as its RPI rank.  For the NCAA RPI, however, it isn't.  In fact, for the NCAA RPI, the average difference between a team's RPI rank and its rank as an SoS contributor is 31.3 rank positions, with the median difference 24 positions.  I designed the Balanced RPI, using the RPI as a starting point, to eliminate this disparity between RPI ranks and SoS contributor ranks.  As a result, for the Balanced RPI, the average difference between a team's RPI rank and its rank as an SoS contributor is 0.8 rank positions, with the median difference 0 positions.  In simple terms, for the NCAA RPI there are significant differences between a team's NCAA RPI rank and its rank as an SoS contributor to its opponents; but for the Balanced RPI the two essentially are the same.
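
Once each team's two ranks are in hand, the disparity statistics just quoted are simple to compute.  A minimal sketch, with hypothetical ranks standing in for the real ones:

    # Sketch: average and median gap between teams' RPI ranks and
    # their ranks as SoS contributors.  The ranks are hypothetical;
    # the article reports 31.3 / 24 for the NCAA RPI and 0.8 / 0 for
    # the Balanced RPI.
    from statistics import mean, median

    rpi_rank = {'Team A': 12, 'Team B': 40, 'Team C': 150, 'Team D': 75}
    sos_rank = {'Team A': 45, 'Team B': 38, 'Team C': 120, 'Team D': 77}

    gaps = [abs(rpi_rank[t] - sos_rank[t]) for t in rpi_rank]
    print(mean(gaps), median(gaps))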

For the "in" and "out" teams, the following table shows what I described in the preceding paragraph:


In the table, start out by looking at the three columns Opponents NCAA RPI Average Rank, Opponents NCAA RPI Average Rank as SoS Contributor, and NCAA RPI Difference.  In the Difference column, a negative number means that the team's opponents' average rank as SoS contributors is poorer than its opponents' average actual NCAA RPI rank.  In other words, the NCAA RPI formula understates the team's SoS -- it discriminates against the team.  A positive number means that the team's opponents' average rank as SoS contributors is better than its opponents' average actual NCAA RPI rank.  In other words, the NCAA RPI formula overstates the team's SoS -- it discriminates in favor of the team.

At the bottom of the table, in the NCAA RPI Difference column, you can see the average differences for the "in" and "out" teams.  The average difference for the "in" teams is -27 and for the "out" teams is -7.  For the "out" teams, this means the NCAA RPI discriminates against them somewhat.  But for the "in" teams, the NCAA RPI discriminates against them almost four times as much as for the "out" teams.

The last three columns on the right are similar, but for the Balanced RPI.  There, for the "in" teams, the average difference is -1 and for the "out" teams it is 1.  In other words, using the Balanced RPI there is virtually no discrimination between the "in" and "out" teams.

For the NCAA RPI, this high level of discrimination against the "in" teams explains why the "in" teams' average performance is better than their ratings say it should be, including in relation to the "out" teams even though the "out" teams experience some discrimination.  And for the Balanced RPI, the lack of discrimination explains why the "in" and "out" teams' performance is close to what their ratings say it should be.

Why Are There Differences?  Digging Down One More Level

There is more to see, however, in the preceding table.  If you focus on the conferences and regions of the "in" and "out" teams, you will see patterns.  Most of the "in" teams are from the West region and those not from the West are from the Power 4 conferences.  All of the "out" teams are from mid-major conferences and from the North and South regions.  Why are we seeing these patterns?

The following table shows conferences' actual winning percentages in non-conference games as compared to their NCAA RPI expected winning percentages, with the conferences ordered from those most discriminated against at the top to those most discriminated in favor of at the bottom:


As you can see, stronger conferences and conferences from the West tend to be in the upper part of the table -- the most discriminated against conferences.  Compare this with the following table for the Balanced RPI:


In the Balanced RPI table, there still are differences between conferences' actual performance and their expected performance.  But the differences are less tied to conference strength and geographic regions than in the NCAA RPI table (as well as overall being smaller).

What underlies the above tables for the NCAA RPI and the Balanced RPI?  The following table shows, for each conference, its teams' opponents' average NCAA RPI ranks, its teams' opponents' average NCAA RPI ranks as strength of schedule contributors, and the difference between the two.  As above, a negative difference means the NCAA RPI on average discriminates against the conference's teams; and a positive difference means it discriminates in favor of the conference's teams.  In the table, the most discriminated against teams are at the top and the most discriminated in favor of teams are at the bottom.


In this table, the key columns are the Conferences NCAA RPI Rank and the Conference Teams Opponents NCAA RPI Ranks Less NCAA RPI SoS Contributor Ranks Difference columns.  As you can see, the NCAA's way of calculating SoS discriminates heavily against teams from stronger conferences and in favor of teams from weaker conferences.

Compare this to the similar table for the Balanced RPI:


Here, the conferences are in the same order as in the preceding table.  You can see that for the Balanced RPI, conference teams' opponents' ranks and their ranks as SoS contributors are essentially the same for all conferences.  This is one of the underlying causes of the "in" and "out" changes when shifting from the NCAA RPI to the Balanced RPI.

What about for regions?

Here is the NCAA RPI's actual versus expected performance table for regions, in non-region games:


As you can see, the NCAA RPI discriminates significantly against the West region (and in favor of the North).  Compare this to the table for the Balanced RPI:


As you can see, the Balanced RPI minimizes discrimination in relation to regions.

Here is the underlying table for the NCAA RPI, showing regions' teams' average RPI ranks as compared to their average ranks as SoS contributors:



As you can see, the numbers in the Difference column are in order of region strength.  The NCAA RPI's discrimination in how it values teams' strengths of schedule exactly tracks region strength.  The stronger the region, the greater the discrimination.

Here is the table for the Balanced RPI:


Here, the regions are in the same order as in the preceding table.  You can see that for the Balanced RPI, region teams' opponents' ranks and their ranks as SoS contributors are essentially the same.  This is another of the underlying causes of the "in" and "out" changes when shifting from the NCAA RPI to the Balanced RPI.

Summarizing all of the above information, the reasons for the changes in "in" and "out" teams when shifting from the NCAA RPI to the Balanced RPI are that (1) the Balanced RPI's ratings of teams correspond better with teams' actual performance and (2) the Balanced RPI eliminates the NCAA RPI's discrimination among conferences and regions.

Why Are There Differences?  Digging Down to the Final Level

Why does the NCAA RPI have large differences between RPI ranks and ranks as SoS contributors?  Continuing with the tree metaphor, it is due to the NCAA RPI's DNA, the RPI formula itself.

A team's RPI rating is a combination of the team's Winning Percentage (WP), its Opponents' Winning Percentages (OWP), and its Opponents' Opponents' Winning Percentages (OOWP).  The way the formula combines the three, WP has an effective weight of 50%, OWP has an effective weight of 40%, and OOWP has an effective weight of 10%.

A team's opponents' contributions to its RPI rating are their winning percentages (OWP) and their opponents' winning percentages (OOWP), which as just stated account for 40% and 10% respectively of the team's RPI rating.  Thus an opponent's contribution, if isolated, is 80% the opponent's WP and 20% the opponent's OWP (the 40% and 10% effective weights, rescaled so they total 100%).

Since a team's NCAA RPI rating is 50% its WP, 40% its OWP, and 10% its OOWP, but the team's SoS contribution to an opponent is 80% the team's WP and 20% the team's OWP, it is no wonder there are significant differences between teams' NCAA RPI ranks and their ranks as SoS contributors.
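
A short Python sketch of this mismatch, using the effective weights just described (the team's numbers are hypothetical):

    # A team's NCAA RPI rating (effective-weight view) vs. what it
    # contributes to an opponent's SoS.
    def rpi_rating(wp, owp, oowp):
        return 0.50 * wp + 0.40 * owp + 0.10 * oowp

    def sos_contribution(wp, owp):
        # The 40% OWP and 10% OOWP weights, isolated and rescaled:
        # within the opponents' 50% share, the split is 80% / 20%.
        return 0.80 * wp + 0.20 * owp

    # Hypothetical team: modest record against a strong schedule.
    wp, owp, oowp = 0.55, 0.70, 0.65
    print(rpi_rating(wp, owp, oowp))  # 0.62 -> its own rating
    print(sos_contribution(wp, owp))  # 0.58 -> the SoS credit it gives

    # Opponents of this team get less SoS credit than the team's own
    # rating says they should, which is how the rank disparities
    # described above arise.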

These differences between a team's NCAA RPI rating and its SoS contribution to an opponent are the DNA that is the source of the NCAA RPI patterns described above.

The Balanced RPI, on the other hand, starts with a structure similar to the NCAA RPI, although with effective weights of 50% WP, 25% OWP, and 25% OOWP and, within WP, with a tie counting as half of a win rather than a third of a win as in the NCAA RPI.  The Balanced RPI formula then goes through a series of additional calculations whose effect is to have each team's RPI rank and its rank as an SoS contributor be the same.  This more complex formula is the source of the Balanced RPI patterns described above.
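
The details of those additional calculations are beyond this article, but the property they produce -- a team's rating rank equals its SoS contributor rank -- can be illustrated generically.  The following is a hypothetical sketch of one way to obtain that property, by iterating until each team's SoS credit is its own rating; it is not the actual Balanced RPI formula:

    # Hypothetical illustration only -- NOT the actual Balanced RPI.
    # Repeatedly recompute ratings as 50% own WP plus 50% average
    # opponent rating.  At the fixed point, what a team contributes
    # to an opponent's SoS is exactly its own rating, so rating rank
    # and SoS contributor rank coincide by construction.
    def iterate_ratings(wp, schedule, rounds=100):
        # wp: {team: winning percentage}
        # schedule: {team: [opponents played]}
        rating = dict(wp)
        for _ in range(rounds):
            rating = {
                t: 0.5 * wp[t]
                   + 0.5 * sum(rating[o] for o in schedule[t]) / len(schedule[t])
                for t in wp
            }
        return rating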

CONCLUSION 

The differences between the Committee's actual NCAA Tournament seeding and at large decisions and what those decisions likely would have been using the Balanced RPI are not simply a matter of differences between two equal rating systems.

(1) The Balanced RPI's ratings are more consistent with actual game results than the NCAA RPI's; and (2) The Balanced RPI has minimal to no discrimination among conferences and regions whereas the NCAA RPI has significant discrimination.  These differences between the NCAA RPI and the Balanced RPI account for the bracket differences at the top of this article.


Tuesday, November 18, 2025

2025 ARTICLE 30: A LOOK AT THE COMMITTEE'S NCAA TOURNAMENT SEEDING AND AT LARGE SELECTION DECISIONS

This will be the first of two articles on the Women's Soccer Committee's NCAA Tournament bracket decisions.  This article addresses the Committee's decisions when using the NCAA RPI, which is the rating system the NCAA requires the Committee to use.  The second article will address what the Committee's decisions likely would have been if it had used the Balanced RPI.

In this article, I'll review 6 of the Women's Soccer Committee's seeding and at large selection decisions, with regard to whether the decisions are consistent with the Committee's historic decision patterns.  I selected the decisions for review from the following list, which shows the Committee's decisions as compared to what the Committee's historic patterns indicated the decisions probably would be.  The list includes all of this year's differences, most of which I consider minor.  The ones I selected for review, highlighted yellow, are the ones I consider significant.  (Any Committee decisions that are not on the list matched the expected decisions.)

In this review, I am going to assume you are familiar with how I use factor standards to determine the decisions the Committee will make if it is following its historic patterns.  I have discussed that method in previous articles.
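
For readers who haven't seen those articles, the mechanics reduce to threshold checks: a "yes" standard is a factor score that, historically, every team achieving it got the decision in question, and a "no" standard is one that, historically, no team meeting it got the decision.  Here is a minimal sketch; the two thresholds shown are taken from examples later in this article, and a real standard set is much larger:

    # Sketch of factor-standard checking.  A team's factor scores are
    # compared against "yes" and "no" thresholds derived from past
    # Committee decisions; the counts (e.g., 2 "yes" / 8 "no") drive
    # the analysis below.
    def meets(score, op, threshold):
        return score < threshold if op == '<' else score > threshold

    # Example standards quoted later in this article (a #3 seed "yes"
    # standard and a #2 seed "no" standard, respectively):
    yes_standards = [('Poor Results Rank', '<', 4)]
    no_standards = [('Conference RPI and Top 60 Head to Head Results Rank',
                     '<', 0.6721)]

    def evaluate(profile):
        yes = [n for (n, op, t) in yes_standards if meets(profile[n], op, t)]
        no = [n for (n, op, t) in no_standards if meets(profile[n], op, t)]
        return len(yes), len(no)

    # Example scores from this article (Louisville's Poor Results Rank
    # and Michigan State's Conference RPI / Top 60 factor score):
    print(evaluate({'Poor Results Rank': 1,
                    'Conference RPI and Top 60 Head to Head Results Rank':
                    0.6584}))  # -> (1, 1)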


SEEDS

There is an important consideration to keep in mind when evaluating the Committee's seeds.  Although the NCAA expressly sets out certain factors the Committee must consider (and also other information the Committee is not allowed to consider) when selecting at large teams to participate in the tournament, it does not require the Committee to consider those factors when seeding.  This is intentional.  Thus although the Committee almost certainly considers the factors when seeding, it is free to make whatever seeding decisions it thinks are appropriate.

Another consideration relates to the #5 through #8 seeds.  The NCAA did not begin seeding those positions until the 2022 season.  Thus there are only three years' data -- which is very little data -- from which to identify Committee patterns.  Because of that, for the time being I use Committee at large selection patterns, supplemented by the combined NCAA RPI Rank and Top 50 Results Score factor, as the basis for indicating what we should expect the #5 through #8 seeds to be.  That is a crude process that I won't be able to refine until we have more years' data.  This means it is not particularly significant if the Committee placements of teams within the #5 through #8 seed group are different than the "expected" historic pattern placements.

A final consideration relates to the #2 through #4 seeds.  The Committee's #1 seeds follow historic patterns pretty closely, with an occasional missed team.  In addition, which teams will fall within the #1 through #4 group as a whole follows historic patterns pretty closely.  Within the #2 through #4 group, however, historic patterns are relatively poor at predicting which teams will get which seeds.

Michigan State's #2 Seed Rather than a #4

The Committee gave Michigan State, with a #6 NCAA RPI rank, a #2 seed, as compared to an expected #4 seed.  It finished #2 in the Big Ten regular season standings and #2 in the conference tournament.  The Big Ten was #4 in the RPI conference ranks.  Michigan State met 0 "yes" standards for a #2 seed and 2 "no" standards.  The "no" standards were:

Conference RPI and Top 60 Head to Head Results Rank

"No" standard:  <0.6721

Michigan State score:  0.6584 

This is a pretty large distance below the standard.

Conference Rank and Top 50 Head to Head Results Rank

"No" standard:  >14

Michigan State score:  14.09

This is just outside the standard.  Based on past experience, this "shortcoming" is not significant.

In its Top 60 Head to Head games, here are Michigan State's results:

1 loss H to a #1 seed

1 tie A to a #3 seed

2 ties H and N to #4 seeds

2 wins A and N to #4 seeds

1 loss A to a #5 seed

1 loss A to a #6 seed

2 ties H to an unseeded AQ and an at large

2 wins H and N to unseeded at larges


Also, Michigan State's Top 60 Head to Head Results Rank was #29 (based on my scoring system for Head to Head Results).  And it met 0 "yes" and 0 "no" standards for a #3 seed, and 4 "yes" and 0 "no" standards for a #4 seed.

The above information suggests that a #2 seed for Michigan State is a significant deviation from the Committee's historic patterns.  It doesn't look like Michigan State is in the #2 seed range.  A #4 seed seems more consistent with the Committee's historic patterns and also with results against Top 60 opponents.

But, there is an additional situation that may have affected the Committee's decision.  The Committee gave Washington a #4 seed, which is the best seed historic patterns would have supported.  This was the case even though Washington was the Big Ten regular season and conference tournament champion, with a win against Michigan State during the conference regular season competition and a tie with Michigan State in the conference tournament championship game, with Washington prevailing on penalty kicks.  Further, the Committee gave these other seeds to Big Ten teams: Wisconsin #4 (Washington beat them twice), UCLA #4 (Washington beat them), Iowa #5 (Washington did not play them), and Penn State #8 (Washington beat them).  Given this conference performance, one might think the Committee would seed Washington ahead of Michigan State and higher than a #4 seed.

Washington, however, had a very unusual profile.  It had poor results in its non-conference games, indeed poor enough to give it a #137 NCAA Non-Conference RPI rank.  On the other hand it obviously had very strong results during its conference season, strong enough to bring its overall NCAA RPI rank up to #27.  This created the very odd situation in which Washington met no "yes" standards for a #1, #2, or #3 seed, and "yes" standards for a #4 seed, but also a high number of "no" standards at every seed level.  In fact, it met a high number of both "yes" and "no" standards for an at large position.  In other words, Washington had a Jekyll and Hyde profile such as the Committee has not seen before -- for example, it met 2 "yes" standards for a #4 seed and 47!!! "no" standards.

So what was the Committee to do?  It had the Big Ten as the #4 conference.  Yet its historic patterns only had Big Ten potential seeds at the #4 or poorer seed level.  It had Michigan State with an NCAA RPI rank of #6, Washington at #27, Iowa at #21, and Wisconsin at #23.  Would having the Big Ten's best seed position as #4 be reasonable?  My guess is that the Committee felt it had to give a Big Ten team a #2 seed and that, because of Washington's Jekyll and Hyde profile and Michigan State's high RPI rank, it decided to give Michigan State the #2 seed.

From my perspective, given that the Committee is not bound by the data as to seeds, I would have given Washington the #2 seed notwithstanding its #27 NCAA RPI rank, based on its Big Ten regular season and conference tournament double.  But what the Committee decided to do also seems reasonable given the peculiar circumstances.  In particular it seems reasonable because historically, #2 seeds have been limited to teams with NCAA RPI ranks of #13 or better.

Louisville as #6 Seed Rather than a #3

The Committee gave Louisville, with a #15 NCAA RPI rank, a #6 seed, as compared to a somewhat expected #3 seed.  For a #3 seed, Louisville met 2 "yes" standards and 8 "no" standards.  For a #4 seed, it met 2 "yes" and 0 "no" standards.

For a #3 seed, after the Committee's selection of the #1 and #2 seeds, the candidates were:

    #8 Colorado  3 "yes" and 0 "no"

    #10 Florida State  5 "yes" and 1 "no"

    #14 LSU  4 "yes" and 1 "no"

    #15 Louisville  2 "yes" and 8 "no"

    #7 Kansas  1 "yes" and 4 "no"

    #13 West Virginia  0 "yes" and 3 "no"

    #17 Baylor  0 "yes" and 1 "no"

    #16 Texas Tech  0 "yes" and 4 "no"

    #22 UCLA  0 "yes" and 5 "no"

    #23 Wisconsin  0 "yes" and 5 "no"

    #19 Memphis  0 "yes" and 6 "no"

    #9 Tennessee  0 "yes" and 7 "no"

    #18 Xavier  0 "yes" and 12 "no"

    #20 BYU  0 "yes" and 15 "no"

    #21 Iowa  0 "yes" and 15 "no"

From the above list, Colorado (3/0) is a clear #3 seed and that is what the Committee gave it.  After that, the Committee gave #3 seeds to Florida State (5/1), Kansas (1/4), and Tennessee (0/7).  It did not give a #3 seed to Louisville (2/8) or LSU (4/1), but it did give LSU a #4 seed.  As a point of reference, for a #4 seed Florida State scored 7/0, LSU 6/0, Kansas 3/0, Louisville 2/0, and Tennessee 1/0.

The teams meeting 1 or more of both "yes" and "no" standards presented profiles the Committee has not seen before: Kansas, Florida State, LSU, and Louisville.

Regarding Louisville, the question is why it did not get a #3 seed when Tennessee did.

Here are the #3 seed "yes" standards Louisville met and its scores for the standards:

    Poor Results Rank

        Standard  <4

        Score  1

    Conference Rank and Poor Results Rank

        Standard  <2

        Score  1.35

Here are the "no" standards:

    RPI Rating

        Standard  <.6157

        Score  .6155

    Head to Head v Top 60 Rank

        Standard  >44

        Score  50

    RPI Rating and Head to Head v Top 60 Rank

        Standard  <.7003

        Score  .6765

    RPI Rank and Head to Head v Top 60 Rank

        Standard  >55

        Score  55.58

    NonConference RPI Rating and Head to Head v Top 60 Rank

        Standard  <.7790

        Score  .746

    NonConference RPI Rank and Head to Head v Top 60 Rank

        Standard  >111

        Score  118.1

    Top 50 Results Score and Head to Head v Top 60 Rank

        Standard  <38548

        Score  33338

    Top 50 Results Rank and Head to Head v Top 60 Rank

        Standard  >60

        Score  67.65

Looking at the "yes" standards, the critical information is that Louisville had no poor results, supplemented by its being in the #1 ranked conference.  Looking specifically at its losses, they were to #4 Vanderbilt, #10 Florida State, #1 Notre Dame, and #11 Duke.  Its ties were against #28 Wake Forest and #2 Virginia.  Without regard to its opponents' ranks, its record against Top 60 (and 50) opponents was 2 wins, 4 losses, and 2 ties.  Its wins against Top 50 (and 60) opponents were against Dayton (26) and Wake Forest (28).

Looking at Tennessee, which historic patterns indicate would not get a #3 seed but did get one, it met no "yes" standards and 7 "no" standards.  Here are the "no" standards:

    Top 50 Results Rank

        Standard  >40

        Score  43

    Top 50 Results Rank and RPI Rating

        Standard  <.7086

        Score  .7025

    Top 50 Results Rank and Top 50 Results Score

        Standard  <28795

        Score  25922

    Top 50 Results Rank and Conference Standing

        Standard  >61

        Score  69.46

    Top 50 Results Rank and Conference RPI

        Standard  <232.1

        Score  119.9

    Top 50 Results Rank and Top 60 Head to Head Score

        Standard  <31.2

        Score  27.9

    Top 50 Results Rank and Top 60 Common Opponents Score

        Standard  <30.9

        Score  18.6

Comparing Louisville to Tennessee, the critical information appears to be that Louisville's "yes" standards related to its lack of poor results.  Absent the Committee seeing value in that, when looking at the Committee's historic patterns there does not seem to be a lot of difference between Louisville's and Tennessee's "no" values.  This suggests that the Committee did not assign value to Louisville's lack of poor results.  Just as this would explain why the Committee would not give Louisville a #3 seed, it also explains why it would not give it a #4 seed, since Louisville's 2 "yes" standards for a #4 seed were the same no-poor-results-reliant standards as for a #3 seed.  If this reasoning is correct, then Louisville dropping to a #6 seed is understandable.

I suspect that in the past, when teams with Poor Result Ranks of <4 always have gotten at least #3 seeds, they also always have had other profile characteristics that were determinative factors in favor of #3 seeds.  This gave the appearance that their Poor Result Ranks were important, but in fact that was not what the Committee was looking at.  In other words, this was a case where "correlation" of Poor Results Ranks and Committee decisions did not mean that the Poor Results Ranks were the "causation" for the decisions.

The bottom line of this is that although poor results may hurt a team in the Committee's decision process, it appears the lack of poor results won't help it.

AT LARGE SELECTIONS

Georgia and Kentucky Given At Large Positions, St. Mary's and California Denied At Large Positions

Georgia as At Large Team

Georgia's NCAA RPI rank was #43.  The SEC's NCAA RPI rank was #3.  Georgia finished #3 in the SEC regular season standings and lost in the semifinals of the SEC tournament.  Based on the Committee's historic patterns, however, it met 0 "yes" standards for an at large position and 1 "no" standard:

    Non-Conference RPI Rank (#90) and Top 50 Results Rank (#48)

        Standard  >191

        Score  192.1

As you can see, Georgia barely met the "no" standard for an at large position.  My experience says that given the closeness of Georgia's score to this factor standard, it would not be surprising if Georgia were not penalized based on the factor.  If we disregard this factor, then Georgia was an historically appropriate candidate for an at large position -- not assured of getting one but also not assured of being denied one.

Kentucky as At Large Team

Kentucky's NCAA RPI rank was #50.  Its Non-Conference RPI rank was #125.  Again, the SEC's NCAA RPI rank was #3.  Kentucky finished tied for #5/#6 in the SEC regular season standings and lost in the quarterfinals of the SEC tournament.  Based on the Committee's historic patterns, it met 0 "yes" standards for an at large position and 13 "no" standards:

    RPI Rating

        Standard  <0.5654

        Score  0.5596

    Non-Conference RPI Rating

        Standard  <0.5168

        Score  0.5141

    RPI Rating and Non-Conference RPI Rating

        Standard  <0.8584

        Score  0.8305

    RPI Rating and Non-Conference RPI Rank

        Standard  <0.5855

        Score  0.5740

    RPI Rating and Top 50 Results Score

        Standard  <0.5657

        Score  0.5593

    RPI Rank and Non-Conference RPI Rating

        Standard  <85.3

        Score  84.3

    Non-Conference RPI Rating and Top 50 Results Score

        Standard  <.5306

        Score  0.5174

    Non-Conference RPI Rating and Top 50 Results Rank

        Standard  <0.6361

        Score  0.622

    Non-Conference RPI Rank and Top 50 Results Rank

        Standard  >191

        Score  224.97

    Non-Conference RPI Rank and Conference Standing

        Standard  >167

        Score  189.32

    RPI Rating and Top 50 Head to Head Rank

        Standard  <0.5815

        Score  0.5692

    Non-Conference RPI Rating and Top 60 Head to Head Rank

        Standard  <0.5981

        Score  0.536

    Non-Conference RPI Rank and Top 60 Head to Head Rank

        Standard  >207

        Score  253.12

As a general comment, on a good number of these standards, Kentucky's scores are not very close to the standards for "no" at large selection; in other words, they are pretty well into the range of teams that never have gotten at large selections.

For the factors that involve RPI and Non-Conference RPI ratings, however, there is a problem that has shown up this year and that has resulted in a lot of teams meeting both "yes" and "no" standards for different Committee decisions.  The problem is the Committee's decision last year (in 2024) to devalue ties from half of a win to a third of a win when calculating the Winning Percentage portion of the RPI, which accounts for 50% of the RPI's effective weight.  This change means that when looking at ratings, as distinguished from ranks, the Committee is looking at different numbers than it has been used to seeing and, as a generalization, is looking at lower ratings than it has seen historically.  Because of this, the historic rating standards probably are too high.  Each year, I revise the standards to incorporate the previous year's Committee decisions, so I made some adjustments last year after the first year using the one-third tie value.  This year, however, there have been significantly more ties than in the past, thus further depressing teams' ratings and therefore generating more "no" scores than we have been used to seeing.  The bottom line of this is that this year, using standards that involve RPI and Non-Conference RPI ratings is suspect as a basis for assessing the historic consistency of the Committee's decisions.
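
To see the arithmetic behind this, here is the Winning Percentage under the two tie valuations, using California's 8-3-8 record from later in this article.  Treat the formula as a sketch of the valuation difference rather than as the NCAA's exact computation:

    # Winning Percentage with a tie credited as half a win (the old
    # valuation) vs. a third of a win (the 2024 change).  More ties
    # mean a lower WP under the new rule, depressing ratings overall.
    def winning_pct(wins, losses, ties, tie_credit):
        return (wins + tie_credit * ties) / (wins + losses + ties)

    w, l, t = 8, 3, 8                  # California's 2025 record
    print(winning_pct(w, l, t, 1/2))   # 0.6316 under the old valuation
    print(winning_pct(w, l, t, 1/3))   # 0.5614 under the new valuation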

On the other hand, standards that do not involve RPI and Non-Conference RPI ratings do not suffer from the previous paragraph's problem.  That is why three standards in the above list stand out: Non-Conference RPI Rank and Top 50 Results Rank, Non-Conference RPI Rank and Conference Standing, and Non-Conference RPI Rank and Top 60 Head to Head Rank.  They involve only ranks.  On all of these, Kentucky is well beyond the standards for "no" at large position.  In other words, they show that Kentucky getting an at large position represents a significant deviation from the Committee's historic patterns.

As you can see, each of these three standards includes Kentucky's Non-Conference RPI rank.  Kentucky went 6 wins and 2 losses in its non-conference games, with the two losses respectable ones, to #41 Illinois and #38 Ohio State.  The problem isn't with Kentucky's record in its non-conference games, it's with the overall weakness of Kentucky's non-conference opponents.  The six wins were against #278 Jackson State, #321 West Georgia, #97 East Tennessee State, #329 Detroit, #344 IPFW, and #324 Mercyhurst.  The standards essentially say that historically, this has been too weak a non-conference schedule to support an at large position.  The Committee, however, decided otherwise this year, thus saying a team can get away with this weak a non-conference schedule and still get into the NCAA Tournament.

St. Mary's Denied an At Large Position

St. Mary's NCAA RPI rank was #42.  The West Coast Conference's NCAA RPI rank was #7.  St. Mary's finished tied for #2/#3 in the conference standings.  (The WCC did not have a conference tournament.)  Based on the Committee's historic patterns, St. Mary's met 6 "yes" standards for an at large position and 0 "no" standards:

    Non-Conference RPI Rank and Top 60 Head to Head Score

        Standard  >120.9

        Score  136.7

    Non-Conference RPI Rank and Top 60 Head to Head Rank

        Standard  <54

        Score  42.71

    Non-Conference RPI Rank and Top 60 Common Opponents Rank

        Standard  <84

        Score  75.41

    Top 50 Results Score and Top 60 Head to Head Results Rank

        Standard  >71471

        Score  71933

    Conference Rank and Top 60 Head to Head Results Score

        Standard  >24.71

        Score  25.40

    Conference Rank and Top 60 Head to Head Results Rank

        Standard  <9

        Score  6.91

For most of these standards, St. Mary's was well over the "yes" standard for teams that always have gotten at large positions in the Tournament.  In other words, the Committee not giving St. Mary's an at large position was a significant deviation from the Committee's historic patterns.

California Denied an At Large Position

California's NCAA RPI rank was #46.  The ACC's NCAA RPI rank was #1.  California finished #8 in the conference standings, so it did not play in the conference tournament.  Its record was 8 wins, 3 losses, and 8 ties.  Based on the Committee's historic patterns, California met 0 "yes" standards for an at large position and 6 "no" standards:

    RPI Rating

        Standard  <0.5654

        Score  0.5639

    Poor Results Rank

        Standard  >65

        Score  67

    RPI Rating and Poor Results Rank

        Standard  <0.5964

        Score  0.5727

    Non-Conference RPI Rating and Poor Results Rank

        Standard  <0.6266

        Score  0.583

    Conference Standing and Poor Results Rank

        Standard  >18

        Score  20.32

    Top 60 Head to Head Results Rank and Poor Results Rank

        Standard  >124

        Score  131

As discussed above, using RPI and Non-Conference RPI ratings has problems due to last year's reduced valuation of ties in the RPI formula.  This leaves the three rank-based factors -- Poor Results Rank, Conference Standing and Poor Results Rank, and Top 60 Head to Head Results Rank and Poor Results Rank -- as the best to use in evaluating where California stood in relation to the Committee's historic patterns of denying teams at large positions.  For the first two of those standards, California was just a little beyond the "no" standards.  For the third, it was a little more beyond.  A way to look at this is that California's ACC standing and its Top 60 head to head results were not good enough to overcome its poor results, though they were pretty close.

Of interest for California is that it had 8 ties.  Under the previous RPI formula's valuation of a tie as half of a win, California's RPI rank would have been #43.  With the third of a win valuation its rank is #46.  Georgia would have been #39 with a half a win valuation, rather than its #43 rank at a third of a win, so the change did not hurt California in relation to Georgia.  On the other hand, with a half a win valuation Kentucky would have dropped to #51 rather than its #50 with the third of a win valuation.  Would this have affected the Committee's at large choice as between Kentucky and California?  Possibly.

Conclusion as to At Large Selections

On close examination, the Committee's giving Georgia an at large position is not a significant deviation from the Committee's historic patterns.  And, the Committee's not giving California an at large position is not a significant deviation.

On the other hand, the Committee's not giving St. Mary's an at large position is a significant deviation from the Committee's historic patterns.

Further, the Committee's giving Kentucky an at large position also is a significant deviation from the Committee's historic patterns.

Thus the Committee appears to have gone far out of its way to give Kentucky an at large position and to deny one to St. Mary's.

OVERALL CONCLUSION

Taking into consideration the effect of the change in the valuation of ties within the NCAA RPI formula and Washington's Jekyll and Hyde season, the Committee's seeding decisions do not appear to have significant deviations from the Committee's historic patterns.  On the other hand, the Committee's denial of an at large position to St. Mary's and its giving a position to Kentucky each represents a significant deviation; and taken together the two amount to a big deviation.