Tuesday, November 18, 2025

2025 ARTICLE 30: A LOOK AT THE COMMITTEE'S NCAA TOURNAMENT SEEDING AND AT LARGE SELECTION DECISIONS

This will be the first of two articles on the Women's Soccer Committee's NCAA Tournament bracket decisions.  This article addresses the Committee's decisions when using the NCAA RPI, which is the rating system the NCAA requires the Committee to use.  The second article will address what the Committee's decisions likely would have been if it had used the Balanced RPI.

In this article, I'll review 6 of the Women's Soccer Committee's seeding and at large selection decisions, with regard to whether the decisions are consistent with the Committee's historic decision patterns.  I selected the decisions for review from the following list, which shows the Committee's decisions as compared to what the Committee's historic patterns indicated the decisions probably would be.  The list includes all of this year's differences, most of which I consider minor.  The ones I selected for review, highlighted yellow, are the ones I consider significant.  (Any Committee decisions that are not on the list matched the expected decisions.)

In this review, I am going to assume you are familiar with how I use factor standards to determine the decisions the Committee will make if it is following its historic patterns.  I have discussed that method in previous articles.


SEEDS

There is an important consideration to keep in mind when evaluating the Committee's seeds.  Although the NCAA expressly sets out certain factors the Committee must consider (and also other information the Committee is not allowed to consider) when selecting at large teams to participate in the tournament, it does not require the Committee to consider those factors when seeding.  This is intentional.  Thus although the Committee almost certainly considers the factors when seeding, it is free to make whatever seeding decisions it thinks are appropriate.

Another consideration relates to the #5 through #8 seeds.  The NCAA did not begin seeding those positions until the 2022 season.  Thus there are only three years' data -- which is very little data -- from which to identify Committee patterns.  Because of that, for the time being I use Committee at large selection patterns, supplemented by the combined NCAA RPI Rank and Top 50 Results Score factor, as the basis for indicating what we should expect the #5 through #8 seeds to be.  That is a crude process that I won't be able to refine until we have more years' data.  This means it is not particularly significant if the Committee placements of teams within the #5 through #8 seed group are different than the "expected" historic pattern placements.

A final consideration relates to the #2 through #4 seeds.  The Committee's #1 seeds follow historic patterns pretty closely, with an occasional missed team.  In addition, which teams will fall within the #1 through #4 group as a whole follow historic patterns pretty closely.  Within the #2 through #4 group, however, historic patterns are relatively poor at predicting which teams will get which seeds.

Michigan State's #2 Seed Rather than a #4

The Committee gave Michigan State, with a #6 NCAA RPI rank, a #2 seed, as compared to an expected #4 seed.  It finished #2 in the Big Ten regular season standings and #2 in the conference tournament.  The Big Ten was #4 in the RPI conference ranks.  Michigan State met 0 "yes" standards for a #2 seed and 2 "no" standards.  The "no" standards were:

Conference RPI and Top 60 Head to Head Results Rank

"No" standard:  <0.6721

Michigan State score:  0.6584 

This is a pretty large distance below the standard.

Conference Rank and Top 50 Head to Head Results Rank

"No" standard:  >14

Michigan State score:  14.09

This is just outside the standard.  Based on past experience, this "shortcoming" is not significant.

In its Top 60 Head to Head games, here are Michigan State's results:

1 loss H to a #1 seed

1 tie A to a #3 seed

2 ties H and N to #4 seeds

2 wins A and N to #4 seeds

1 loss A to a #5 seed

1 loss A to a #6 seed

2 ties H to an unseeded AQ and an at large

2 wins H and N to unseeded at larges


Also, Michigan State's Top 60 Head to Head Results Rank was #29 (based on my scoring system for Head to Head Results).  And it met 0 "yes" and 0 "no" standards for a #3 seed, and 4 "yes" and 0 "no" standards for a #4 seed.

The above information suggests that a #2 seed for Michigan State is a significant deviation from the Committee's historic patterns.  It doesn't look like Michigan State is in the #2 seed range.  A #4 seed seems more consistent with the Committee's historic patterns and also with results against Top 60 opponents.

But, there is an additional situation that may have affected the Committee's decision.  The Committee gave Washington a #4 seed, which is the best seed historic patterns would have supported.  This was the case even though Washington was the Big Ten regular season and conference tournament champion: it beat Michigan State during the conference regular season and tied Michigan State in the conference tournament championship game, prevailing on penalty kicks.  Further, the Committee gave these other seeds to Big Ten teams: Wisconsin #4 (Washington beat them twice), UCLA #4 (Washington beat them), Iowa #5 (Washington did not play them), and Penn State #8 (Washington beat them).  Given this conference performance, one might think the Committee would seed Washington ahead of Michigan State and higher than a #4 seed.

Washington, however, had a very unusual profile.  It had poor results in its non-conference games, indeed poor enough to give it a #137 NCAA Non-Conference RPI rank.  On the other hand it obviously had very strong results during its conference season, strong enough to bring its overall NCAA RPI rank up to #27.  This created the very odd situation in which Washington met no "yes" standards for a #1, #2, or #3 seed, met "yes" standards for a #4 seed, but also met a high number of "no" standards at every seed level.  In fact, it met a high number of both "yes" and "no" standards for an at large position.  In other words, Washington had a Jekyll and Hyde profile such as the Committee has not seen before -- for example, it met 2 "yes" standards for a #4 seed and 47!!! "no" standards.

So what was the Committee to do?  It had the Big Ten as the #4 conference.  Yet its historic patterns only had Big Ten potential seeds at the #4 or poorer seed level.  It had Michigan State with an NCAA RPI rank of #6, Washington at #27, Iowa at #21, and Wisconsin at #23.  Would having the Big Ten's best seed position as #4 be reasonable?  My guess is that the Committee felt it had to give a Big Ten team a #2 seed and that, because of Washington's Jekyll and Hyde profile and Michigan State's high RPI rank, it decided to give Michigan State the #2 seed.

From my perspective, given that the Committee is not bound by the data as to seeds, I would have given Washington the #2 seed notwithstanding its #27 NCAA RPI rank, due to its Big Ten regular season and conference tournament double.  But what the Committee decided to do also seems reasonable given the peculiar circumstances.  In particular it seems reasonable as historically, #2 seeds have been limited to teams with NCAA RPI ranks of #13 or better.

Louisville as #6 Seed Rather than a #3

The Committee gave Louisville, with a #15 NCAA RPI rank, a #6 seed, as compared to a somewhat expected #3 seed.  For a #3 seed, Louisville met 2 "yes" standards and 8 "no" standards.  For a #4 seed, it met 2 "yes" and 0 "no" standards.

For a #3 seed, after the Committee's selection of the #1 and #2 seeds, the candidates were:

    #8 Colorado  3 "yes" and 0 "no"

     #10 Florida State  5 "yes" and 1 "no"

    #14 LSU  4 "yes" and 1 "no"

    #15 Louisville  2 "yes" and 8 "no"

    #7 Kansas  1 "yes" and 4 "no"

    #13 West Virginia  0 "yes" and 3 "no"

    #17 Baylor  0 "yes" and 1 "no"

    #16 Texas Tech  0 "yes" and 4 "no"

    #22 UCLA  0 "yes" and 5 "no"

    #23 Wisconsin  0 "yes" and 5 "no"

    #19 Memphis  0 "yes" and 6 "no"

    #9 Tennessee  0 "yes" and 7 "no"

    #18 Xavier  0 "yes" and 12 "no"

    #20 BYU  0 "yes" and 15 "no"

    #21 Iowa  0 "yes" and 15 "no"

From the above list, Colorado (3/0) is a clear #3 seed and that is what the Committee gave it.  After that, the Committee gave #3 seeds to Florida State (5/1), Kansas (1/4), and Tennessee (0/7).  It did not give a #3 seed to Louisville (2/8) or LSU (4/1), but it did give LSU a #4 seed.  As a point of reference, for a #4 seed Florida State scored 7/0, LSU 6/0, Kansas 3/0, Louisville 2/0, and Tennessee 1/0.

Four teams met at least 1 "yes" standard and at least 1 "no" standard, presenting profiles the Committee has not seen before: Kansas, Florida State, LSU, and Louisville.

Regarding Louisville, the question is why it did not get a #3 seed when Tennessee did.

Here are the #3 seed "yes" standards Louisville met and its scores for the standards:

    Poor Results Rank

        Standard  <4

        Score  1

    Conference Rank and Poor Results Rank

        Standard  <2

        Score  1.35

Here are the "no" standards:

    RPI Rating

        Standard  <.6157

        Score  .6155

    Head to Head v Top 60 Rank

        Standard  >44

        Score  50

    RPI Rating and Head to Head v Top 60 Rank

        Standard  <.7003

        Score  .6765

    RPI Rank and Head to Head v Top 60 Rank

        Standard  >55

        Score  55.58

    NonConference RPI Rating and Head to Head v Top 60 Rank

        Standard  <.7790

        Score  .746

    NonConference RPI Rank and Head to Head v Top 60 Rank

        Standard  >111

        Score  118.1

    Top 50 Results Score and Head to Head v Top 60 Rank

        Standard  <38548

        Score  33338

    Top 50 Results Rank and Head to Head v Top 60 Rank

        Standard  >60

        Score  67.65

Looking at the "yes" standards, the critical information is that Louisville had no poor results, supplemented by its being in the #1 ranked conference.  Looking specifically at its losses, they were to #4 Vanderbilt, #10 Florida State, #1 Notre Dame, and #11 Duke.  Its ties were against #28 Wake Forest and #2 Virginia.  Without regard to its opponents' ranks, its record against Top 60 (and 50) opponents was 2 wins, 4 losses, and 2 ties.  Its wins against Top 50 (and 60) opponents were against Dayton (26) and Wake Forest (28).

Looking at Tennessee, which historic patterns indicate would not get a #3 seed but did get one, it met no "yes" standards and 7 "no" standards.  Here are the "no" standards:

    Top 50 Results Rank

        Standard  >40

        Score  43

    Top 50 Results Rank and RPI Rating

        Standard  <.7086

        Score  .7025

    Top 50 Results Rank and Top 50 Results Score

        Standard  <28795

        Score  25922

    Top 50 Results Rank and Conference Standing

        Standard  >61

        Score  69.46

    Top 50 Results Rank and Conference RPI

        Standard  <232.1

        Score  119.9

    Top 50 Results Rank and Top 60 Head to Head Score

        Standard  <31.2

        Score  27.9

    Top 50 Results Rank and Top 60 Common Opponents Score

        Standard  <30.9

        Score  18.6

Comparing Louisville to Tennessee, the critical information appears to be that Louisville's "yes" standards related to its lack of poor results.  Absent the Committee seeing value in that, when looking at the Committee's historic patterns there does not seem to be a lot of difference between Louisville's and Tennessee's "no" values.  This suggests that the Committee did not assign value to Louisville's lack of poor results.  Just as this would explain why the Committee would not give Louisville a #3 seed, it also explains why it would not give it a #4 seed, since Louisville's 2 "yes" standards for a #4 seed were the same no-poor-results-reliant standards as for a #3 seed.  If this reasoning is correct, then Louisville dropping to a #6 seed is understandable.

I suspect that in the past, the teams with Poor Results Ranks of <4 that always have gotten at least #3 seeds also always have had other profile characteristics that were the determinative factors in favor of #3 seeds.  This gave the appearance that their Poor Results Ranks were important, but in fact that was not what the Committee was looking at.  In other words, this was a case where "correlation" of Poor Results Ranks with Committee decisions did not mean that the Poor Results Ranks were the "causation" for the decisions.

The bottom line of this is that although poor results may hurt a team in the Committee's decision process, it appears the lack of poor results won't help it.

AT LARGE SELECTIONS

Georgia and Kentucky Given At Large Positions, St. Mary's and California Denied At Large Positions

Georgia as At Large Team

Georgia's NCAA RPI rank was #43.  The SEC's NCAA RPI rank was #3.  Georgia finished #3 in the SEC regular season standings and lost in the semifinals of the SEC tournament.  Based on the Committee's historic patterns, however, it met 0 "yes" standards for an at large position and 1 "no" standard:

    Non-Conference RPI Rank (#90) and Top 50 Results Rank (#48)

        Standard  >191

        Score  192.1

As you can see, Georgia barely met the "no" standard for an at large position.  My experience says that given the closeness of Georgia's score to this factor standard, it would not be surprising if Georgia were not penalized based on the factor.  If we disregard this factor, then Georgia was an historically appropriate candidate for an at large position -- not assured of getting one but also not assured of being denied one.

Kentucky as At Large Team

Kentucky's NCAA RPI rank was #50.  Its Non-Conference RPI rank was #125.  Again, the SEC's NCAA RPI rank was #3.  Kentucky finished tied for #5/#6 in the SEC regular season standings and lost in the quarterfinals of the SEC tournament.  Based on the Committee's historic patterns, it met 0 "yes" standards for an at large position and 13 "no" standards:

    RPI Rating

        Standard  <0.5654

        Score  0.5596

    Non-Conference RPI Rating

        Standard  <0.5168

        Score  0.5141

    RPI Rating and Non-Conference RPI Rating

        Standard  <0.8584

        Score  0.8305

    RPI Rating and Non-Conference RPI Rank

        Standard  <0.5855

        Score  0.5740

    RPI Rating and Top 50 Results Score

        Standard  <0.5657

        Score  0.5593

    RPI Rank and Non-Conference RPI Rating

        Standard  <85.3

        Score  84.3

    Non-Conference RPI Rating and Top 50 Results Score

        Standard  <.5306

        Score  0.5174

    Non-Conference RPI Rating and Top 50 Results Rank

        Standard  <0.6361

        Score  0.622

    Non-Conference RPI Rank and Top 50 Results Rank

        Standard  >191

        Score  224.97

    Non-Conference RPI Rank and Conference Standing

        Standard  >167

        Score  189.32

    RPI Rating and Top 50 Head to Head Rank

        Standard  <0.5815

        Score  0.5692

    Non-Conference RPI Rating and Top 60 Head to Head Rank

        Standard  <0.5981

        Score  0.536

    Non-Conference RPI Rank and Top 60 Head to Head Rank

        Standard  >207

        Score  253.12

As a general comment, on a good number of these standards, Kentucky's scores are not very close to the standards for "no" at large selection; in other words, they are pretty well into the range of teams that never have gotten at large selections.

For the factors that involve RPI and Non-Conference RPI ratings, however, there is a problem that has shown up this year and that has resulted in a lot of teams meeting both "yes" and "no" standards for different Committee decisions.  The problem is the Committee's decision last year (in 2024) to devalue ties from half of a win to a third of a win when calculating the Winning Percentage portion of the RPI, which accounts for 50% of the RPI's effective weight.  This change means that when looking at ratings, as distinguished from ranks, the Committee is looking at different numbers than it has been used to seeing and, as a generalization, is looking at lower ratings than it has seen historically.  Because of this, the historic rating standards probably are too high.  Each year, I revise the standards to incorporate the previous year's Committee decisions, so I made some adjustments last year after the first year using the one-third tie value.  This year, however, there have been significantly more ties than in the past, thus further depressing teams' ratings and therefore generating more "no" scores than we have been used to seeing.  The bottom line of this is that this year, using standards that involve RPI and Non-Conference RPI ratings is suspect as a basis for assessing the historic consistency of the Committee's decisions.

On the other hand, using standards that do not involve using RPI and Non-Conference RPI ratings does not suffer from the previous paragraph's problem.  That is why, in the above list, I have highlighted three standards: They involve only ranks.  On all of these, Kentucky is well beyond the standards for "no" at large position.  In other words, they show that Kentucky getting an at large position represents a significant deviation from the Committee's historic patterns.

As you can see from the above list, each highlighted standard includes Kentucky's Non-Conference RPI rank.  Kentucky went 6 wins and 2 losses in its non-conference games, with the two losses respectable ones, to #41 Illinois and #38 Ohio State.  The problem isn't with Kentucky's record in its non-conference games, it's with the overall weakness of Kentucky's non-conference opponents.  The six wins were against #278 Jackson State, #321 West Georgia, #97 East Tennessee State, #329 Detroit, #344 IPFW, and #324 Mercyhurst.  The standards essentially say that historically, this has been too weak a non-conference schedule to support an at large position.  The Committee, however, decided otherwise this year, thus saying a team can get away with this weak a non-conference schedule and still get into the NCAA Tournament.

St. Mary's Denied an At Large Position

St. Mary's NCAA RPI rank was #42.  The West Coast Conference's NCAA RPI rank was #7.  St. Mary's finished tied for #2/#3 in the conference standings.  (The WCC did not have a conference tournament.)  Based on the Committee's historic patterns, St. Mary's met 6 "yes" standards for an at large position and 0 "no" standards:

    Non-Conference RPI Rank and Top 60 Head to Head Score

        Standard  >120.9

        Score  136.7

    Non-Conference RPI Rank and Top 60 Head to Head Rank

        Standard  <54

        Score  42.71

    Non-Conference RPI Rank and Top 60 Common Opponents Rank

        Standard  <84

        Score  75.41

    Top 50 Results Score and Top 60 Head to Head Results Rank

        Standard  >71471

        Score  71933

    Conference Rank and Top 60 Head to Head Results Score

        Standard  >24.71

        Score  25.40

    Conference Rank and Top 60 Head to Head Results Rank

        Standard  <9

        Score  6.91

For most of these standards, St. Mary's was well over the "yes" standard for teams that always have gotten at large positions in the Tournament.  In other words, the Committee not giving St. Mary's an at large position was a significant deviation from the Committee's historic patterns.

California Denied an At Large Position

California's NCAA RPI rank was #46.  The ACC's NCAA RPI rank was #1.  California finished #8 in the conference standings, so it did not play in the conference tournament.  Its record was 8 wins, 3 losses, and 8 ties.  Based on the Committee's historic patterns, California met 0 "yes" standards for an at large position and 6 "no" standards:

    RPI Rating

        Standard  <0.5654

        Score  0.5639

    Poor Results Rank

        Standard  >65

        Score  67

    RPI Rating and Poor Results Rank

        Standard  <0.5964

        Score  0.5727

    Non-Conference RPI Rating and Poor Results Rank

        Standard  <0.6266

        Score  0.583

    Conference Standing and Poor Results Rank

        Standard  >18

        Score  20.32

    Top 60 Head to Head Results Rank and Poor Results Rank

        Standard  >124

        Score  131

As discussed above, using RPI and Non-Conference RPI ratings has problems due to last year's reduced valuation of ties in the RPI formula.  This leaves the three highlighted factors as the best to use in evaluating where California stood in relation to the Committee's historic patterns of denying teams at large positions.  For the first two highlighted standards, California was just a little beyond the "no" standards.  For the third, it was a little more beyond.  A way to look at this is that California's ACC standing and its Top 60 head to head results were not good enough to overcome its poor results, though they were pretty close.

Of interest for California is that it had 8 ties.  Under the previous RPI formula's valuation of a tie as half of a win, California's RPI rank would have been #43.  With the third of a win valuation its rank is #46.  Georgia would have been #39 with a half a win valuation, rather than its #43 rank at a third of a win, so the change did not hurt California in relation to Georgia.  On the other hand, with a half a win valuation Kentucky would have dropped to #51 rather than its #50 with the third of a win valuation.  Would this have affected the Committee's at large choice as between Kentucky and California?  Possibly.

Conclusion as to At Large Selections

On close examination, the Committee's giving Georgia an at large position is not a significant deviation from the Committee's historic patterns.  And, the Committee's not giving California an at large position is not a significant deviation.

On the other hand, the Committee's not giving St. Mary's an at large position is a significant deviation from the Committee's historic patterns.

Further, the Committee's giving Kentucky an at large position also is a significant deviation from the Committee's historic patterns.

Thus the Committee appears to have gone far out of its way to give Kentucky an at large position and to deny one to St. Mary's.

OVERALL CONCLUSION

Taking into consideration the effect of the change in the valuation of ties within the NCAA RPI formula and Washington's Jekyll and Hyde season, the Committee's seeding decisions do not appear to have significant deviations from the Committee's historic patterns.  On the other hand, the Committee's denial of an at large position to St. Mary's and giving a position to Kentucky each represents a significant deviation; and put together the two amount to a big deviation.


Monday, November 10, 2025

2025 ARTICLE 29: THE NCAA TOURNAMENT BRACKET - IT'S A NEW WORLD FOR THE COMMITTEE. WILL THEY BE UP TO IT?

In working on my "regular" end-of-season analysis of what the Women's Soccer Committee's NCAA Tournament seeds and at large selection might be, the numbers I am seeing have made something clear:  This year, there is nothing "regular" about what the Committee will be seeing.  Because of that, this article will include more and different details than what I have provided in earlier years, so you can see what the Committee will be facing and, once you have seen the Committee's actual decisions, you can decide whether the Committee has been "up to" the moment.

First, I'll start with some information on why this season is not "regular."

Proportions of Out-of-Region Games

The following table shows the historic proportions of games that the four regions' teams have played against out-of-region opponents.  I place each State's teams in the region in which the State's teams as a group play either the majority or plurality of their games.  To see the regions -- Middle, North, South, and West -- and the States within them, go to the RPI: Regional Issues page at the RPI for Division I Women's Soccer website.  The data in the below table are from the years 2013 through 2024 (excluding Covid-affected 2020).


The next table breaks down the historical out-of-region percentages by region, including showing the distribution of out-of-region games among the other regions:


This year, likely driven by the changed financial landscape for Division I sports, the out-of-region numbers have declined dramatically, notwithstanding the increased out-of-region travel for teams from conferences with recent major expansions of their geographic footprints:

 

Comparing this to the first table above, there has been a 28.5% reduction in out-of-region games.

Here are this year's breakdowns for the regions: 


Comparing this to the second table above, you can see the reduction in out-of-region travel for each region.  There is an 18.3% reduction for teams from the Middle, 29.3% for the North, 31.0% for the South, and 33.5% for the West.

In looking at these reductions, consider that for the NCAA RPI to function as a fully national rating system, there must be a large number of out-of-region games.  If there are not enough out-of-region games, then what you are seeing in the NCAA RPI ratings and rankings is how teams within a region compare to each other, but not how teams from a region compare to teams from other regions.

Levels of Parity Within Regions

An indicator of parity within a region is the proportion of in-region games that are ties.  The following table shows the historic proportions of ties, by region.  The data for the table are from the years 2010 through 2024 (excluding Covid-affected 2020), with all games that were ties at the end of regular time treated as ties (for those years when the rules provided for overtime games).


As you can see from this table, historically the West has had the highest proportion of ties, followed by the Middle and North, with the South having the lowest proportion of ties.  In terms of parity, the order is from the West with the greatest parity to the South with the least.

Here is the table for this year:


As you can see, the proportions of in-region ties are higher for all regions.  In other words, it appears there has been an increase in parity within the regions.  Once again, the West has the greatest parity and the South the least, with the Middle and North switching places from the historic norm.

Diminished Value of Ties Within the RPI Formula

In 2024, the Women's Soccer Committee changed the RPI Formula.  This included a change in how the NCAA computes Element 1 of the RPI, which is a team's Winning Percentage (WP).  A way to express the formula for Winning Percentage is:

WP = (Wins + X*Ties) /(Wins + Ties + Losses)

Until 2024, in the formula X was 1/2.  In 2024, the Committee changed X to 1/3.  Thus the value of a tie went from 1/2 of a win to 1/3 of a win.
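As a quick sketch of the effect (the function is mine; the 8-3-8 record used in the illustration is California's from this season, discussed above):

```python
def winning_percentage(wins, ties, losses, tie_value=1/3):
    """Element 1 of the NCAA RPI: WP = (Wins + X*Ties) / (Wins + Ties + Losses).

    tie_value (X) was 1/2 through 2023; the Committee changed it to 1/3 in 2024.
    """
    return (wins + tie_value * ties) / (wins + ties + losses)

# California's 8-3-8 record (8 wins, 8 ties, 3 losses):
winning_percentage(8, 8, 3, tie_value=1/2)  # 12/19, about 0.632
winning_percentage(8, 8, 3, tie_value=1/3)  # 32/57, about 0.561
```

For a team with many ties, the one-sixth-of-a-win difference per tie adds up, which is why tie-heavy profiles look poorer under the new formula.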

As a result of this change, many teams' ratings are depressed, since many teams have one or more ties.  As a presumably unintended side effect, the change also punished regions with higher levels of parity and thus more ties.

It appears that an effect of these changes has been to change many upper level teams' profiles enough that they now look poorer than they have in the past.  This relates to my annual process of considering what seed and at large decisions we can expect the Committee to make, if the Committee follows its historic decision patterns.  I'll go through the expected seeds and at large selections below and hopefully you will be able to see what I mean.

#1 SEEDS

Historically the #1 seeds always have come from the Top 7 teams in the NCAA RPI rankings.  Thus teams #1 through #7 are the #1 seed candidates.

Here is a table that relates to this year's #1 seed selection process:



Here is a detailed description of this table, as an introduction to the process I use and to the other tables I'll show below for the other seed levels and the at large selections.

I have identified 13 individual factors that the NCAA directs the Committee to consider in its at large selection process.  For each of those factors, either the NCAA provides a scoring system (for example, for a team's rating, the NCAA specifies the RPI as the scoring system) or I provide my own scoring system.  In addition to those individual factors, I also pair each factor with each other factor, with the scoring system weighting each factor in a pair at 50% of that "paired" factor's value.  Altogether this results in 105 paired factors plus the 13 individual factors or a total of 118 factors.

By comparing the factor scores for teams to the Committee's seed and at large selection decisions for teams over the years, for each decision the Committee must make -- #1 seed, #2 seed, etc., and at large selection -- I have identified two score standards for each factor.  A "yes" standard for a factor means that teams whose scores for that factor have been better than the "yes" standard always have gotten a positive decision from the Committee.  A "no" standard means that teams whose scores have been poorer than the "no" standard never have gotten a positive decision.  Using #1 seeds and the NCAA RPI Rating factor as an example:

The "yes" factor score is 0.6986.  This means that teams with NCAA RPI ratings better than 0.6986 always have gotten #1 seeds.

The "no" factor score is 0.6479,  this means that teams with NCAA RPI ratings poorer than 0.6479 never have gotten #1 seeds.

It is important to note that some teams will have NCAA RPI ratings between the "yes" score of 0.6986 and the "no" score of 0.6479.  These are possible, but not assured, #1 seeds based on the Committee's historic patterns.
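The classification step can be sketched as follows (a minimal sketch; the function name and the illustrative ratings other than the two standards are mine):

```python
def seed_status(score, yes_std, no_std, higher_is_better=True):
    """Classify a team's factor score against the "yes"/"no" standards.

    Returns "yes" (teams with such scores always have gotten the decision),
    "no" (never have), or "possible" (between the two standards).
    For rank-style factors, lower is better, so set higher_is_better=False.
    """
    if higher_is_better:
        if score > yes_std:
            return "yes"
        if score < no_std:
            return "no"
    else:
        if score < yes_std:
            return "yes"
        if score > no_std:
            return "no"
    return "possible"

# The #1 seed / NCAA RPI Rating example from the text (ratings are illustrative):
seed_status(0.7100, yes_std=0.6986, no_std=0.6479)  # "yes"
seed_status(0.6600, yes_std=0.6986, no_std=0.6479)  # "possible"
```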

For each required Committee decision, my system evaluates each team in relation to each factor.  In the above table, the 1 Seed Status Based on Standards column shows how this year's candidate group fared in the evaluation process.

As you can see if you look at the 1 Seed Total and No 1 Seed Total columns, Stanford and Notre Dame each meet a number of "yes" standards and no "no" standards.  This means that based on the Committee's historic patterns, Stanford and Notre Dame are clear #1 seeds.

On the other hand, if you look at Virginia and TCU, each meets at least 1 "yes" standard and at least 1 "no" standard.  This means each has a profile the Committee has not seen before (meaning not since 2007).  Based on my years of experience looking at numbers like this, I see something in the Virginia and TCU "no" numbers.  They are significantly higher than what I would expect to see for a profile the Committee has not seen before.

Further, if you look at Vanderbilt, Michigan State, and Kansas, they meet 0 "yes" standards and a high number of "no" standards.  Again based on years of experience, the "no" numbers are far higher than what I would expect to see for teams in the Top 7 of the NCAA RPI rankings.

Rather than seeing 5 of the 7 candidates for #1 seeds having significant numbers of "no" scores, what I would expect to see is at least several of them having no "yes" and no "no" scores.  These then would be the candidates for the remaining 2 #1 seed positions. 

The bottom line of this is that most of the RPI Top 7 teams' profiles are far poorer than what one should expect based on past history.

This same phenomenon appears throughout the decisions the Committee must make.  Because of this, I have concluded that this year, the changes I described above make it unwise to use the "no" factor scores as a basis for seeing if the Committee's decisions are consistent with the Committee's past decision patterns.  The "yes" factor scores should be fine, but not the "no" scores.  I have shown this in the 1 Seed Status Based on Standards column by identifying all of the teams other than clear #1 Stanford and Notre Dame as #1 seed Candidates.  Looking at the table, when I disregard the "no" scores, each of Virginia and TCU is left with at least 1 "yes" score.  Because of this, it appears to me that those teams receiving #1 seeds would be most consistent with the Committee's historic decision patterns.

Unlike this year, in the past, when there have not been enough teams meeting only "yes" standards to fill a decision group, there have been teams that meet no "yes" and no "no" standards.  I then apply a tiebreaker to fill out the group.  The tiebreaker is the factor, from among all 118 of them, whose scores historically have been most consistent with the Committee's decisions as to that group.  As it turns out, for the #1 through #4 seeds the factor most consistent with the Committee's decisions is teams' NCAA RPI ratings or ranks.  So if, for example, Virginia and TCU had met 0 "yes" standards this year, the Committee's historic decision patterns would have suggested picking, from the 5 candidates, the 2 teams with the best NCAA RPI ranks.  In that case, I would have looked at the 1 Seed Status Based on NCAA RPI Rank column, in which I entered "1 Seed" to indicate the teams that would get #1 seeds if the tiebreaker were needed.  The Committee's historic patterns then would have assigned the remaining 2 #1 seed positions to Virginia and Vanderbilt.
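To make the mechanics concrete, here is a minimal Python sketch of the standards-plus-tiebreaker process just described: take the teams meeting only "yes" standards, then fill any remaining positions from the teams meeting no "yes" and no "no" standards, ordered by NCAA RPI rank.  All team names, ranks, and standard counts below are invented for illustration; they are not this year's actual data.

```python
# Sketch of the standards-plus-tiebreaker selection process.
# All names and numbers are hypothetical, not actual factor data.

def pick_seeds(candidates, positions):
    """Fill a seed group from a candidate list.

    candidates: dicts with 'team', 'yes' (count of "yes" standards met),
    'no' (count of "no" standards met), and 'rpi_rank'.
    """
    # Teams meeting at least 1 "yes" standard and no "no" standards
    # are clear selections.
    clear = [t for t in candidates if t["yes"] > 0 and t["no"] == 0]
    # Teams meeting no "yes" and no "no" standards are the tiebreaker pool.
    neutral = [t for t in candidates if t["yes"] == 0 and t["no"] == 0]
    # Fill the remaining positions with the best NCAA RPI ranks.
    remaining = positions - len(clear)
    filled = sorted(neutral, key=lambda t: t["rpi_rank"])[:remaining]
    return [t["team"] for t in clear + filled]

candidates = [
    {"team": "A", "yes": 5, "no": 0, "rpi_rank": 1},
    {"team": "B", "yes": 3, "no": 0, "rpi_rank": 2},
    {"team": "C", "yes": 0, "no": 0, "rpi_rank": 3},
    {"team": "D", "yes": 0, "no": 0, "rpi_rank": 5},
    {"team": "E", "yes": 0, "no": 0, "rpi_rank": 4},
]
# A and B are clear; C and E win the tiebreaker on RPI rank.
print(pick_seeds(candidates, 4))  # ['A', 'B', 'C', 'E']
```

The same routine applies to each seed group in turn, with the already seeded teams removed from the next group's candidate list.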

In the table, the #1 Seed Status Based on Standards and Tiebreaker Combined column shows the #1 seeds, based on this process, that would be most consistent with the Committee's historic patterns.

#2 Seeds

With the #1 seeds most consistent with the Committee's historic patterns identified, next come the #2 seeds, using the same process applied to the #2 seed candidate group but excluding the already identified #1 seeds.  The candidate group is teams with NCAA RPI ranks of #13 or better.


Of the candidates, after disregarding the "no" scores, there are 4 teams that have at least 1 "yes" score: Vanderbilt, Florida State, Duke, and Georgetown.  So, given the constraints this year, those teams receiving #2 seeds would be most consistent with the Committee's historic patterns.

#3 Seeds

I will go through the remaining seeds in the same fashion as for the #2 seeds.


Kansas, Colorado, LSU, and Louisville as #3 seeds would be most consistent with the Committee's historic patterns.

#4 Seeds


There are three teams with at least 1 "yes" score: Michigan State, Tennessee, and Washington.  There still is one #4 seed position to fill.  Disregarding the "no" scores leaves all the other teams in the #4 seed candidate group (teams ranked #28 or better) as possibilities.  Since the tiebreaker is teams' NCAA RPI ranks, the best ranked remaining team is West Virginia, so it fills the remaining #4 seed position.

#5 Through #8 Seeds

The process for these seeds is the same as above, except that the Committee has done these seeds only for a few years and the data establishing Committee patterns are relatively limited.  Based on what the Committee has done for these seeds so far, the following changes appear to best identify the Committee's patterns:

1.  The tiebreaker for these seeds is a paired factor rather than simply their NCAA RPI ranks.  The paired factor is NCAA RPI Rank and Top 50 Results Score combined (the higher the score, the better); and

2.  The Committee's selection of the #5 through #8 seeds is more like the Committee's at large selections than the Committee's #1 through #4 seeds.  So the best standards references are to the standards for at large selections.

With that in mind, here are the tables for the #5 through #8 seeds (identified in the column headings as 4.5 through 4.8):

 



 


NOTE: In the table, the lack of entries for Fairfield indicates that it played no opponents with NCAA RPI ranks of 60 or better.  That puts it out of consideration for an NCAA Tournament seed or at large position.

At Large Positions

The tiebreaker for at large positions is the factor pair of NCAA RPI Rank and Top 50 Results Rank combined (the lower the score, the better).  The at large candidates are teams ranked #57 or better (that have not been seeded).


After the seeding, there are 8 additional at large positions to fill.  In the table, the At Large Status Based on Standards column shows the selection of 6 teams, each of which has at least 1 "yes" score: Penn State, Mississippi State, Ohio State, Clemson, South Carolina, and Illinois.  This leaves 2 positions to fill based on the tiebreaker.  The teams scoring best on the tiebreaker (lowest score) are California and Utah Valley, so their selection best matches the Committee's historic patterns.
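The at large tiebreaker works the same way, except that the remaining positions go to the candidates with the lowest combined NCAA RPI Rank plus Top 50 Results Rank.  A short sketch, again with invented teams and ranks purely for illustration:

```python
# Sketch of the at large tiebreaker: lowest combined rank score wins.
# All names and numbers are hypothetical.

def fill_at_large(selected, candidates, total_positions):
    """Fill remaining at large positions by the paired-rank tiebreaker."""
    remaining = total_positions - len(selected)
    pool = [c for c in candidates if c["team"] not in selected]
    # Lower combined score (RPI rank + Top 50 Results rank) is better.
    pool.sort(key=lambda c: c["rpi_rank"] + c["top50_rank"])
    return selected + [c["team"] for c in pool[:remaining]]

candidates = [
    {"team": "X", "rpi_rank": 40, "top50_rank": 35},  # combined 75
    {"team": "Y", "rpi_rank": 45, "top50_rank": 50},  # combined 95
    {"team": "Z", "rpi_rank": 38, "top50_rank": 60},  # combined 98
]
# P and Q already selected on "yes" scores; X and Y win the tiebreaker.
print(fill_at_large(["P", "Q"], candidates, 4))  # ['P', 'Q', 'X', 'Y']
```

For the #5 through #8 seeds, the process is the same except that the paired factor there is scored so that higher is better, so the sort direction flips.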

Reminder and Summary

This year, for the reasons described above, I am disregarding the "no" scores.  How the Committee will treat the negative aspects of teams' profiles this year remains to be seen.  Maybe it will recognize that recent changes are causing teams' profiles to seem poorer than they have been in the past; but maybe it won't.

In the meantime, here is a summary of the above seeds and at large decisions.  In the Seed or At Large Selection column, #1 seeds are 1.0, #2s are 2.0, #3s are 3.0, #4s are 4.0, #5s are 4.5, #6s are 4.6, #7s are 4.7, and #8s are 4.8.  Unseeded Automatic Qualifiers are 5.0.  Unseeded at large selections are 6.0.  At large candidates from the Top 57 that do not get at large selections are 7.0.  (The order of teams within the different groups does not have any significance.)