Monday, January 8, 2018

NCAA TOURNAMENT BRACKETOLOGY: WHAT MATTERS MOST TO THE COMMITTEE? POST 2017 UPDATE

What factors have the most influence on the Women's Soccer Committee's decisions on NCAA Tournament seeds and at large selections?  That's the subject of this post.

Each year, I review the Committee's decisions in comparison to 13 basic factors I've drawn from the at large decision-making rules the Committee must follow.  And, I review the decisions in comparison to 78 additional factors, each of which pairs two of the 13 basic factors weighted at 50% each.  I review the Committee's decisions for each of the #1 seeds, #2 seeds, #3 seeds, and #4 seeds, for all 16 seeded teams as a single group, and for the at large selections.

I then combine the data from all the years in my database and, looking at the Top 60 teams, identify "yes" and "no" standards for each factor and each decision group.  Using the Adjusted RPI Rank factor and the #1 seed group as an example:  "yes," a team with an ARPI Rank of #1 always has received a #1 seed; and "no," a team with an ARPI Rank of #8 or poorer never has received a #1 seed.
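
The derivation of these standards can be sketched in code (a minimal illustration, not the author's actual program; the function name and the toy data are hypothetical, and it assumes a rank-based factor where a lower number is better):

```python
def rank_standards(history):
    """Derive "yes" and "no" standards for one rank-based factor and one
    decision group from pooled historical data.

    history: list of (rank, received) pairs across all years, where
    `received` is True if the team got the decision (e.g., a #1 seed).
    Returns (yes_rank, no_rank): every team ranked yes_rank or better
    always received the decision; no team ranked no_rank or poorer ever
    did.  Either can be None if no such cutoff exists.
    """
    ranks = sorted({r for r, _ in history})
    yes_rank = None
    for r in ranks:  # walk down from the best rank
        if all(got for rr, got in history if rr <= r):
            yes_rank = r
        else:
            break
    no_rank = None
    for r in reversed(ranks):  # walk up from the poorest rank
        if any(got for rr, got in history if rr >= r):
            break
        no_rank = r
    return yes_rank, no_rank
```

So if, in the pooled data, every #1-ranked team got the decision but some #2-ranked team did not, and no team ranked #7 or poorer ever got it, the function returns (1, 7) as the "yes" and "no" cutoffs.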

I've just updated the standards after adding the 2017 data and Committee decisions to the database.  And, I've done a review to see which factors appear to have been the most powerful in the Committee's decision process.

But first, for context, here's a table that summarizes how well the factor standards match with the Committee's decisions over the last 11 years:

[Table: how well the factor standards match the Committee's decisions over the last 11 years, by decision group]

For each year, the "... Final +/0 and 0/0" column shows the number of positions in each group that the standards "fill" with no decision required.  The "Percent Filled by Standards" row shows the percent of available positions the standards fill for each group.  The "Undecided Positions to Fill per Year" column shows, for an average year, the number of positions in each group that the standards by themselves cannot fill -- in other words, the remaining openings.  And, the "Candidates for Undecided Positions" column shows the number of candidates per year the standards identify for the remaining openings.  All of these candidates meet no "yes" standards and no "no" standards for the group.  (All other teams are excluded because they meet no "yes" standards and 1 or more "no" standards.)

As the table shows, the standards do a great job with the #1 seeds and a good job with the #2 seeds.  They do lesser jobs with the #3 and #4 seeds, but when looking at all the seeds as a single group, the standards again do a good job.  This suggests that the Committee is pretty consistent in its selection of seeds as a whole but is less consistent in how it distributes teams ranked #9 through #16 between the #3 and #4 seed groups.  In addition, for each position the standards by themselves cannot fill, the standards identify roughly 2 candidates for the position.

For at large selections, combining both the at large seeded and at large unseeded teams, the standards by themselves fill 87.8% of the available positions.  Each year, there are about 4 positions the standards by themselves cannot fill.  And there are 6 to 7 candidates to fill those positions.

The bottom line is that the standards do a good job of reflecting how the Committee makes its decisions.

With that in mind, which standards are the most powerful?  To evaluate that, for each factor and decision group, looking at the Top 60 teams, I count the number of teams that meet "yes" and "no" standards -- the teams for which those particular standards make a decision.  And, I count the number of teams that meet neither a "yes" nor a "no" standard -- the teams for which those particular standards don't make a decision.  Once I've done that, I look to see which standards account for the most "yes" decisions, the most "no" decisions, and the fewest "don't make a decision" -- the fewer teams in the "don't make a decision" group, the more powerful the factor.
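
This counting can be sketched as follows (a rough illustration with a hypothetical function; it assumes a rank-based factor where a lower number is better and where the "yes" cutoff sits above the "no" cutoff, so the two standards never overlap):

```python
def factor_power(top60_ranks, yes_std, no_std):
    """For one rank-based factor and one decision group, count how many
    Top 60 teams the "yes"/"no" standards decide outright and how many
    they leave undecided."""
    yes = sum(1 for r in top60_ranks if r <= yes_std)   # standards say "yes"
    no = sum(1 for r in top60_ranks if r >= no_std)     # standards say "no"
    undecided = len(top60_ranks) - yes - no             # standards are silent
    return yes, no, undecided
```

For example, with a "yes" standard of rank #3 and a "no" standard of rank #8 applied to the Top 60, the standards would decide 3 teams in, 53 teams out, and leave 4 teams (ranks #4 through #7) undecided.  The fewer teams in that last bucket, the more powerful the factor.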

Here are the factors that produce the most "yes" decisions, the most "no" decisions, and the fewest "don't make a decision," for each decision group.  Where I list two factors together, I'm referring to paired individual factors with each weighted 50%:

#1 Yes:  ARPI Rank and Top 60 Common Opponents Rank
#1 No:  ARPI Rank
#1 Don't make a decision:  ARPI Rank and Top 60 Common Opponents Rank

#2 Yes:  ARPI and Top 60 Common Opponents Score
#2 No:  ARPI Rank and Top 60 Common Opponents Rank
#2 Don't make a decision:  ARPI Rank and Top 60 Common Opponents Rank

#3 Yes:  ARPI Rank and Conference Rank
#3 No:  ARPI and Top 50 Results Rank
#3 Don't make a decision:  ARPI and Top 60 Common Opponents Rank

#4 Yes: ARPI and Top 60 Common Opponents Score
#4 No:  ARPI and Top 60 Common Opponents Rank
#4 Don't make a decision:  ARPI and Top 60 Common Opponents Rank

At Large Yes:  ARPI and Top 50 Results Rank
At Large No:  ARPI Rank and Top 50 Results Score
At Large Don't make a decision:  ARPI and Top 50 Results Rank

Here are some observations:

First, looking at at large selections and seeds together, the Adjusted RPI has the most powerful influence.  This is consistent with past observations.  Essentially, the Committee starts with the ARPI and then looks to see how other factors suggest a change from the ARPI rankings.

Second, for at large selections, Top 50 Results (Score or Rank) combined with the ARPI has the most powerful influence.  This is consistent with past observations.  Top 50 Results Score looks at good results (wins or ties) against Top 50 teams, and assigns values to those results based on the ranks of the opponents and the game locations.  The higher ranked the Top 50 opponent, the higher the score assigned for a good result, with the assigned scores very much higher for very highly ranked opponents than for lower ranked opponents.

Third, for seeds, Top 60 Common Opponents results (Score or Rank) combined with the ARPI have the most powerful influence.  This confirms, quite clearly, past observations.  The Top 60 Common Opponents factor is based on the requirement that the Committee consider results against common opponents in making at large selections.  Essentially, it is a mini-rating system for only the Top 60 teams based on common opponent results.  For each Top 60 team, it compares that team's results to the results of each other Top 60 team against the opponents the two teams had in common.  Then, it assigns a score and ranking to each Top 60 team based on its cumulative common opponent comparisons with the other Top 60 teams.
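
A mini-rating of this kind can be sketched as below.  This is a hedged illustration only: the pairwise-comparison structure follows the description above, but the specific scoring (average per-opponent margin in match results) and all names are assumptions, not the author's actual formula.

```python
from itertools import combinations

def common_opponent_scores(results):
    """Pairwise common-opponent mini-rating for a set of teams.

    results: {team: {opponent: result_value}}, e.g. average match points
    (3 win / 1 tie / 0 loss) earned against that opponent.  For each pair
    of teams, compare their results against the opponents both played;
    each team's score accumulates its margins over the other teams.
    """
    scores = {t: 0.0 for t in results}
    for a, b in combinations(results, 2):
        common = set(results[a]) & set(results[b])
        if not common:
            continue
        # average per-opponent margin between the two teams' results
        margin = sum(results[a][o] - results[b][o] for o in common) / len(common)
        scores[a] += margin
        scores[b] -= margin
    # rank 1 = best cumulative comparison score
    ranking = {t: i + 1 for i, (t, s) in enumerate(
        sorted(scores.items(), key=lambda kv: -kv[1]))}
    return scores, ranking
```

Note that a team is only compared with another team over the opponents the two actually share, which is what makes this a "common opponents" rating rather than a general strength rating.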

Fourth, for #3 seeds, teams' Conference Ranks come into play.  This confirms something I've suggested previously, which is that in distinguishing among teams ranked #9 through #16, where making distinctions is difficult, the Committee tends to defer to teams from stronger conferences.

Fifth, teams' Non-Conference RPIs (Ratings and Ranks), Head to Head results against Top 60 opponents, and poor results have some influence, but it is limited as compared to the other factors.

All of these observations are consistent with past observations, so the Committee did not do anything radically different in 2017.


HOW MANY TOP 60 OPPONENTS SHOULD WE SCHEDULE, FOR NCAA TOURNAMENT PURPOSES? POST-2017 UPDATE

Last year, I published information on the number of Top 60 opponents that the NCAA Tournament #1, #2, #3, and #4 seeds played and that the unseeded teams that did and did not get at large selections played.  That information was based on 10 years' data, from 2007 through 2016.  I've added the data from the 2017 season, so here is a table with updated numbers:

[Table: number of Top 60 opponents played by the #1 through #4 seeds and by the unseeded at large "yes" and "no" teams, 2007 through 2017]

And below are charts showing a little more detail (but not the Average and Median numbers) for the #1, #2, #3, and #4 seeds and for the unseeded teams that did (yes) and did not (no) get at large selections:

[Charts: detail on the number of Top 60 opponents played, for each seed group and for the unseeded at large "yes" and "no" teams]

Monday, November 13, 2017

Comments on the Women's Soccer Committee's 2017 NCAA Tournament Bracket Decisions

INTRODUCTION

Overall, it looks like the Women's Soccer Committee's NCAA Tournament decisions were fairly consistent with the decision factors and with the way it's made decisions over the last 10 years.  There are, however, exceptions where I believe the Committee erred:
  1. Not giving South Florida a #3 seed (should have bumped Virginia out of a #3 seed).
  2. Not giving South Florida a #4 seed (should have bumped Florida State out of a #4 seed).
  3. Not giving Minnesota an At Large selection and instead giving one to Rice.
Below is my analysis of all the Committee's seeding and at large selection decisions, starting with seeds and then moving to at large selections.  The analysis is based on a system of "factor" scores I've developed from all of the game results over the last 10 years (2007 through 2016), matched up with the NCAA Women's Soccer Committee's seeds and at large selections over those 10 years.  The items covered by the factor scores are the factors the Committee is required to use in making at large selections.  The Committee uses them for seeds too, but unlike for at large selections, it also can consider other factors when selecting seeds.

The factor scores aren't intended to represent the process the Committee goes through in making its decisions.  They do represent, however, what the Committee has done.  There are both "yes" factor scores and "no" factor scores.  Using #1 seeds as an example, there is a "yes" factor score for the ARPI of 0.7002 and a "no" factor score of 0.6430.  What this means is that every team with an ARPI >=0.7002 has received a #1 seed -- has had a "yes" decision for a #1 seed; and every team with an ARPI <=0.6430 has not received a #1 seed -- has had a "no" decision for a #1 seed.  My system uses 14 individual factors and 78 "paired" factors, each of which consists of two of the individual factors combined by a formula that weights each individual factor at 50%.

For each "yes" decision the Committee has made as to a particular group -- #1 seeds, #2 seeds, #3 seeds, #4 seeds, and at large selections -- all of the teams that received a "yes" decision from the Committee will either (a) have "yes" scores for one or more factors and no "no" scores or (b) have no "yes" scores but also no "no" scores.  Conversely, all teams that received a "no" decision will either (a) have no "yes" scores and one or more "no" scores or (b) have no "yes" and no "no" scores.  Below, I'll refer to teams with only "yes" or only "no" scores as ones where there should be clear decisions, which means that by past precedent (decisions over the last 10 years), the teams with only "yes" scores should get a "yes" decision and the teams with only "no" scores should get a "no" decision.  The teams with no "yes" and no "no" scores, on the other hand, by past precedent are the candidates to fill any remaining positions.

Using this system, for the Committee's decisions over the last 10 years, the process matches factor scores and the decisions as follows:

#1 seeds:  the factors identify clear #1 seeds for 95% of the #1 seeds;

#2 seeds:  the factors identify clear #2 seeds for 82.5% of the #2 seeds;

#3 seeds:  the factors identify clear #3 seeds for 45% of the #3 seeds;

#4 seeds:  the factors identify clear #4 seeds for 52.5% of the #4 seeds;

All seeds without regard to position:  the factors identify clear seeds for 82.5% of all seeds

At large selections:  the factors identify clear at large selections for 86.8% of all at large selections.

As is apparent, the factors are good at identifying #1 and #2 seeds, as well as seeds without regard to seeding position, and at identifying at large selections, but they have trouble particularly in identifying how to distribute seeded teams between the #3 and #4 positions.

This year, and in each following year, there will be some teams that meet one or more "yes" scores and one or more "no" scores for a particular decision.  What this means is that these teams have profiles the Committee has not seen over the last 10 years, so the Committee is going to have to make a decision it hasn't had to make in the past on how it values the competing factors.  At the end of each season, I integrate that year's decisions into my database and review and revise the factor scores as needed so that the scores continue to be consistent with all of the Committee's decisions during the period covered by the database.

For certain of the individual factors, the numbers I use for scoring are obvious -- the ARPI and ARPI Rank, the ANCRPI and ANCRPI Rank, Conference Average ARPI and Conference Rank, and Total Head to Head Games v Top 60 Teams.  For each of the other individual factors, I've developed a system for assigning factor scores to teams -- Top 50 Results Score and Rank, Head to Head Results v Top 60 Teams, Common Opponents Score and Rank in relation to common opponents with other Top 60 teams, Poor Results, Conference Regular Season Standing, Conference Tournament Standing, and Average Conference Standing.  If you're interested in the details of how I compute these scores, you can find them at the RPI for Division I Women's Soccer website on the NCAA Tournament: Predicting the Bracket, At Large Selections page.

ANALYSIS

#1 Seeds:  Based on past precedent, the candidate group for #1 seeds is the top 7 teams in the ARPI rankings.  What this means is that the poorest ranked team to receive a #1 seed, over the last 10 years, is a #7 ARPI team.  Here is a table that shows this year's top 7 teams.  In the left-hand column it shows the Committee's decision.  To the right of the team's name, the table shows how many of my system's "yes" factor scores the team met and how many "no" scores it met.

NCAA Seed or Selection | ARPI Rank | Team | #1 Seed "Yes" Total | #1 Seed "No" Total
1 | 1 | Stanford | 36 | 0
1 | 2 | NorthCarolinaU | 15 | 0
2 | 6 | TexasA&M | 10 | 1
1 | 3 | SouthCarolinaU | 4 | 3
1 | 4 | Duke | 2 | 0
2 | 5 | UCF | 0 | 5
2 | 7 | WestVirginiaU | 0 | 19

Based on these numbers, Stanford, North Carolina, and Duke, with only "yes" factor scores, were clear #1 seeds.  UCF and West Virginia, with only "no" factor scores, clearly were not #1 seeds.  This left the Committee making a choice between Texas A&M and South Carolina for the fourth #1 seed.  The Committee chose South Carolina.  Both teams met "yes" and "no" factor scores for a #1 seed:  Texas A&M met 10 "yes" scores and 1 "no" score and South Carolina met 4 "yes" and 3 "no" scores.  They thus had profiles the Committee has not seen over the last 10 years.  The absolute number of "yes" or "no" scores met is not decisive; rather, one has to look more closely at what the factors are to evaluate the Committee's decision.
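
The bucketing described here can be sketched directly from the table (the function and variable names are hypothetical; the counts are the "yes"/"no" totals from the #1 seed table above):

```python
def classify(candidates):
    """Bucket candidate teams by their "yes"/"no" factor-score counts:
    only "yes" -> clear yes; only "no" -> clear no; neither -> candidate
    for a remaining slot; both -> a profile not seen in past decisions."""
    buckets = {"clear_yes": [], "clear_no": [], "open": [], "new_profile": []}
    for team, (yes, no) in candidates.items():
        if yes and not no:
            buckets["clear_yes"].append(team)
        elif no and not yes:
            buckets["clear_no"].append(team)
        elif not yes and not no:
            buckets["open"].append(team)
        else:
            buckets["new_profile"].append(team)
    return buckets

# The 2017 #1 seed candidate group, with ("yes" total, "no" total):
top7 = {"Stanford": (36, 0), "NorthCarolinaU": (15, 0), "TexasA&M": (10, 1),
        "SouthCarolinaU": (4, 3), "Duke": (2, 0), "UCF": (0, 5),
        "WestVirginiaU": (0, 19)}
buckets = classify(top7)
```

Run on this year's numbers, the sketch puts Stanford, North Carolina, and Duke in the clear-yes bucket, UCF and West Virginia in the clear-no bucket, and Texas A&M and South Carolina in the never-seen-before bucket, which is exactly the choice the Committee faced.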

On looking at Texas A&M's "yes" scores, two of them were its Common Opponents Score and its Common Opponents Rank, and the other "yes" scores all involved one of those individual factors paired with another individual factor.  Texas A&M had a Common Opponent Score of 8.30 and a Common Opponent Rank of 2.  The "yes" #1 seed score for the Common Opponent Score factor is >=8.14; and for the Common Opponent Rank factor is <=2.  Thus Texas A&M just met these two "yes" factors and, through the extension of these factors to other factor pairs, met 8 other "yes" factors.  By comparison, South Carolina had a Common Opponent Score of 7.06 and a Common Opponent Rank of 5.

Texas A&M's single "no" score was for the paired factor of ARPI Rank and Top 50 Results Score.  Its ARPI Rank was #6 and its Top 50 Results Score was 1136, giving it a Top 50 Results Rank of #31 out of the Top 60 teams.  In comparison, South Carolina's ARPI Rank was #3 and its Top 50 Results Score was 15,720, which put it in the #11 position among the Top 60 teams.

South Carolina's "yes" scores all were from paired factors, with one of the factors being either the SEC's average ARPI or the SEC's average ARPI Rank of #1.  The other factors paired with these had to do with either South Carolina's ARPI, its Non-Conference ARPI, or its Head to Head Results against top 60 opponents.

Since South Carolina and Texas A&M were from the same conference, a comparison of their scores for the other halves of the Conference Rating and Conference Rank factor pairs will indicate why South Carolina received "yes" scores for them and Texas A&M didn't:  South Carolina's ARPI Rank was #3 and Texas A&M's was #6; South Carolina's Adjusted Non-Conference ARPI Rank was #2 and Texas A&M's was #17; and South Carolina's Head to Head Results score was 1.38 and Texas A&M's was 1.15 (ranked #5).

South Carolina's "no" scores were for the individual Conference Standing factor and that factor paired with other factors.  South Carolina's regular season SEC position was #1, but it lost in the quarter-finals of the SEC Tournament.  Under my system this gave South Carolina a Conference Standing of #3.75.  The "no" score for the Conference Standing factor is >=3.75, so South Carolina was right there at the factor limit.  This compares to Texas A&M's #1.5 (#2 in the regular season and #1 in the conference tournament).  One of the "no" factors for South Carolina paired its Conference Standing with its Non-Conference ARPI Rank.  As indicated above, South Carolina's ANCRPI Rank was significantly better than Texas A&M's.  The other factor paired with Conference Standing was South Carolina's score for poor results.  South Carolina's score for poor results was 0, matching Texas A&M's 0, meaning they both had no poor results.

For an overall perspective, here is a table that shows how the two teams compared on individual factors (but not including the number of head to head games with Top 60 opponents):

Factor | SouthCarolinaU | TexasA&M
ARPI | 0.6834 | 0.6658
ARPI Rank | 3 | 6
ANCRPI | 0.7168 | 0.6487
ANCRPI Rank | 2 | 17
Top 50 Results Score | 15720 | 1136
Top 50 Results Rank | 11 | 31
Head to Head Score | 1.38 | 1.15
Common Opponents Score | 7.06 | 8.30
Common Opponents Rank | 5 | 2
Poor Results Score | 0 | 0
Conf Reg Season Standing | 1 | 2
Conf Tournament Standing | 6.5 | 1
Conf Average Standing | 3.75 | 1.50
Conf Average ARPI | 0.5924 | 0.5924
Conference Rank | 1 | 1

Looking at all these details together, it looks like the Committee did not consider South Carolina's quarter-final departure from the SEC Tournament as a sufficient reason to give Texas A&M the #1 seed in lieu of South Carolina.  And, with that negative for South Carolina disposed of, on the other factor comparisons South Carolina looks like it easily could, and perhaps should, be seeded ahead of Texas A&M.  Thus the Committee gave South Carolina the fourth #1 seed and, based on the analysis above, that looks like a reasonable decision even though a decision in favor of Texas A&M also would have seemed reasonable.

What's important, perhaps, from a long term perspective is that the Committee was willing to give South Carolina a #1 seed notwithstanding its having lost in the quarter-finals of the SEC Tournament.

#2 Seeds:  Based on past precedent, the candidate group for a #2 seed was the teams ranked in the top 14 of the ARPI.  Here is a table for those teams:

NCAA Seed or Selection | ARPI Rank | Team | #2 Seed "Yes" Total | #2 Seed "No" Total
1 | 1 | Stanford | 51 | 0
1 | 2 | NorthCarolinaU | 33 | 0
1 | 3 | SouthCarolinaU | 28 | 0
2 | 6 | TexasA&M | 21 | 0
1 | 4 | Duke | 18 | 0
2 | 5 | UCF | 8 | 0
3 | 10 | PennState | 0 | 0
2 | 7 | WestVirginiaU | 0 | 1
2 | 8 | UCLA | 0 | 1
6 | 11 | Rutgers | 0 | 14
4 | 9 | Princeton | 0 | 25
6 | 14 | NotreDame | 0 | 25
4 | 12 | TexasU | 0 | 37
5 | 13 | Pepperdine | 0 | 40

From this list, after disregarding the #1 seeds, Texas A&M and UCF are clear #2 seeds, and the Committee gave them #2 seeds.  On the other side, the teams starting with Rutgers and moving down on the list clearly are not #2 seeds, and the Committee did not give them #2 seeds.  This left the Committee with three teams to choose from for the remaining two #2 seed slots:  Penn State, UCLA, and West Virginia.  With UCLA and West Virginia each having one "no" #2 seed score, and Penn State having no "no" scores, past precedent says that Penn State would have received one of the remaining #2 seeds and the Committee then would have had to choose between UCLA and West Virginia for the other #2 seed.  The Committee, however, gave #2 seeds to UCLA and West Virginia and gave Penn State a #3 seed.  The question is whether this was reasonable.

The "no" score that UCLA met was a combined factor score for (a) Conference Standing and (b) Top 50 Results Score.  UCLA's Conference Standing was #2.5 (tied for the #2-3 positions with Southern California) and its Top 50 Results Score Rank was #32.  Put together using my 50-50 weighting formula, these give UCLA a score of 46.75.  The "no" factor limit is >=45.2, so UCLA was only slightly over the limit.  In comparison, Penn State finished in the #4.5 position in the Big 10 regular season and the #1 position in the conference tournament, for a Conference Standing of #2.75.  Its Top 50 Results Score Rank was #4.

Here's a table showing how UCLA and Penn State compared on all of the individual factors:

Factor | UCLA | PennState
ARPI | 0.6596 | 0.6486
ARPI Rank | 8 | 10
ANCRPI | 0.6776 | 0.6747
ANCRPI Rank | 7 | 9
Top 50 Results Score | 1035 | 21342
Top 50 Results Rank | 32 | 4
Head to Head Score | 0.80 | 0.50
Common Opponents Score | 5.11 | 4.50
Common Opponents Rank | 7 | 8
Poor Results Score | 0 | 0
Conf Reg Season Standing | 2.5 | 4.5
Conf Tournament Standing | 2.5 | 1
Conf Combined Standing | 2.50 | 2.75
Conf Average ARPI | 0.5854 | 0.5679
Conference Rank | 3 | 4

After looking at all of the numbers, it likely was a very close call for the Committee that could have gone either way as between UCLA and Penn State.

The "no" score that West Virginia met was a combined Conference Rank and Common Opponent Score factor.  West Virginia was in the #5 conference (with a Conference Standing of #2.75) and had a Common Opponent Score Rank of #14.  In comparison, Penn State was in the #4 conference (Conference Standing #2.75) with a Common Opponent Score Rank of #8.

Here's a table that shows how West Virginia and Penn State compared on all the individual factors:

Factor | WestVirginiaU | PennState
ARPI | 0.6633 | 0.6486
ARPI Rank | 7 | 10
ANCRPI | 0.6925 | 0.6747
ANCRPI Rank | 5 | 9
Top 50 Results Score | 6231 | 21342
Top 50 Results Rank | 15 | 4
Head to Head Score | 0.73 | 0.50
Common Opponents Score | 2.40 | 4.50
Common Opponents Rank | 14 | 8
Poor Results Score | 0 | 0
Conf Reg Season Standing | 2 | 4.5
Conf Tournament Standing | 3.5 | 1
Conf Combined Standing | 2.75 | 2.75
Conf Average ARPI | 0.5610 | 0.5679
Conference Rank | 5 | 4

In addition, West Virginia beat Penn State in Morgantown.

Here too, as with UCLA, based on the numbers it likely was a very close call.

Overall, I can't argue with the Committee's having given #2 seeds to UCLA and West Virginia rather than to Penn State.  Any two of the three would have been a reasonable decision.

There are three other things that I think are worth considering.  First, if UCLA had not gotten a #2 seed, 6 of the 8 quarterfinals likely would be on the East Coast (Penn State, West Virginia, UCF, South Carolina, Duke, and North Carolina), 1 of the 8 likely would be in the southwestern US (Texas A&M), and 1 of the 8 likely would be on the West Coast (Stanford).  With UCLA getting a #2 seed, this "improved" the balance to 5-1-2.  For seeding, unlike the at large selections, the Committee is not limited in what it can consider.  Thus geographic distribution is something the Committee could have considered.

Second, it looks like the Committee, in giving #2 seeds to UCLA and West Virginia rather than to Penn State, may have paid more attention to conference regular season standings than to conference tournament results.

Third, the Committee appears not to have assigned a high value to Penn State's higher quality results against Top 50 teams.

#3 Seeds:  Based on past precedent, the candidate group for a #3 seed was the teams ranked in the top 19 of the ARPI.  Here is a table for those teams:

NCAA Seed or Selection | ARPI Rank | Team | #3 Seed "Yes" Total | #3 Seed "No" Total
1 | 1 | Stanford | 53 | 0
1 | 2 | NorthCarolinaU | 38 | 0
1 | 3 | SouthCarolinaU | 32 | 0
2 | 6 | TexasA&M | 25 | 0
1 | 4 | Duke | 23 | 0
2 | 5 | UCF | 16 | 0
5 | 15 | SouthFlorida | 3 | 0
3 | 10 | PennState | 1 | 0
2 | 7 | WestVirginiaU | 1 | 0
2 | 8 | UCLA | 1 | 0
3 | 18 | SouthernCalifornia | 1 | 8
6 | 14 | NotreDame | 0 | 0
4 | 17 | OhioState | 0 | 0
3 | 19 | FloridaU | 0 | 0
6 | 11 | Rutgers | 0 | 1
4 | 12 | TexasU | 0 | 4
4 | 16 | FloridaState | 0 | 6
4 | 9 | Princeton | 0 | 9
5 | 13 | Pepperdine | 0 | 16

From this list, after disregarding the #1 and #2 seeds, Penn State and South Florida were clear #3 seeds.  On the other side, Rutgers and the teams below it clearly were not #3 seeds.  This would appear to have left Southern California, Notre Dame, Ohio State, and Florida as candidates for two #3 seed slots.  The Committee, however, did not give a #3 seed to South Florida (indeed, it didn't seed South Florida at all), and it gave a #3 seed to Virginia, which was ranked #23 in the ARPI, well outside the past precedent range for #3 seeds (with no "yes" scores for a #3 seed and 4 "no" scores).

Giving ARPI #23 Virginia a #3 seed is a clear break with past precedent.  And not giving a #3 seed to South Florida likewise is a break with past precedent.  The question is whether the Committee's decisions, on a closer look, are reasonable.

First, I'll look at South Florida.  South Florida met 3 "yes" scores for a #3 seed and no "no" scores.  The 3 "yes" scores all were paired factor scores, with the common element of South Florida's Top 50 Results Rank.  Its Top 50 Results Rank was #2, which was almost entirely the result of its having tied #5 ranked (and #2 seed) UCF twice, both times at UCF.  The other sides of the paired factor scores were South Florida's ARPI (rank #15), and its Common Opponents Score and Rank (rank #9).

Second, I'll look at Virginia, with no "yes" #3 seed scores and 4 "no" scores.  The "no" scores related to Virginia's ARPI of 0.6157 and ARPI Rank of 23.  The ARPI "no" score is <= 0.6165, so Virginia actually was close to having a better rating than that score.  The ARPI Rank "no" score is >= #20, so Virginia was poorer in the rankings than that position.  The other side of a paired factor score was Virginia's ANCRPI, which produced an ANCRPI rank of #33.

Third, I'll look at Southern California, with 1 "yes" #3 seed score and 8 "no" scores.  Its 1 "yes" score was based on a pairing of the Pac 12's Conference Average ARPI, which ranked #3, and Southern California's Head to Head Results Score, which was 0.00.  Its "no" scores all were for paired factors and related to combinations of its ARPI (rank #18), its Top 50 Results Score (rank #39), its ANCRPI (rank #49), and its poor results score (0).

Finally, here's a table that shows how all the likely candidates for the remaining three #3 seeds compared on the individual factors:

Factor | SouthFlorida | SouthernCalifornia | NotreDame | OhioState | FloridaU | VirginiaU
ARPI | 0.6253 | 0.6227 | 0.6262 | 0.6230 | 0.6200 | 0.6157
ARPI Rank | 15 | 18 | 14 | 17 | 19 | 23
ANCRPI | 0.5986 | 0.5887 | 0.6435 | 0.6182 | 0.5980 | 0.6094
ANCRPI Rank | 41 | 49 | 18 | 28 | 42 | 33
Top 50 Results Score | 28856 | 175 | 17974 | 2883 | 17597 | 6465
Top 50 Results Rank | 2 | 39 | 7 | 22 | 8 | 14
Head to Head Score | 0.00 | 1.00 | -0.21 | 0.29 | 0.00 | -0.25
Common Opponents Score | 3.59 | 2.78 | 0.32 | 3.45 | 2.00 | 0.85
Common Opponents Rank | 9 | 12 | 21 | 10 | 15 | 20
Poor Results Score | 0 | 0 | 0 | 0 | 0 | 0
Conf Reg Season Standing | 2 | 2.5 | 5.5 | 1 | 3 | 4
Conf Tournament Standing | 1 | 2.5 | 6.5 | 3.5 | 3.5 | 3.5
Conf Combined Standing | 1.50 | 2.50 | 6.00 | 2.25 | 3.25 | 3.75
Conf Average ARPI | 0.5491 | 0.5854 | 0.5874 | 0.5679 | 0.5924 | 0.5874
Conference Rank | 6 | 3 | 2 | 4 | 1 | 2

Looking at this table, apart from the conferences' average ARPIs and ranks, South Florida seems like a clear #3 seed.  Among the others, Virginia had better conference results in the ACC than Notre Dame, although other than that, Notre Dame looks like it had more merit than Virginia.  And looking at all but South Florida, there are pluses and minuses for all of them.  So the question is: what would have justified the Committee's not giving South Florida a #3 seed?  I see only one possibility:  South Florida was in the #6 conference, whereas Florida was in the #1 conference, Virginia was in the #2 (put ahead of Notre Dame by the Committee perhaps due to its better ACC standing), and Southern California was in the #3 conference.  Essentially, South Florida was downgraded because of its conference.  This also fits with Ohio State, in the #4 conference, losing out to Florida, Southern California, and Virginia.  This leads to a question:  Was the Committee more focused on conferences' merits than on teams' merits?

In my opinion, the Committee's not giving South Florida a #3 seed (indeed, not giving it any seed) was an error, to the extent that seeding should be based on teams' performance over the course of the season.  As among the other teams, there were pros and cons for all of them and I don't see obvious decisions for the Committee, although dipping down in the ARPI rankings for #23 Virginia seems like a stretch and definitely is a departure from past precedent.

#4 Seeds:  Based on past precedent, the candidate group for #4 seeds was teams ranked in the top 26 of the ARPI.  Here is a table for those teams:

NCAA Seed or Selection | ARPI Rank | Team | #4 Seed "Yes" Total | #4 Seed "No" Total
1 | 1 | Stanford | 56 | 0
1 | 2 | NorthCarolinaU | 52 | 0
1 | 3 | SouthCarolinaU | 42 | 0
1 | 4 | Duke | 39 | 0
2 | 6 | TexasA&M | 35 | 0
2 | 5 | UCF | 25 | 0
2 | 8 | UCLA | 18 | 0
2 | 7 | WestVirginiaU | 11 | 0
3 | 10 | PennState | 9 | 0
4 | 9 | Princeton | 8 | 1
5 | 15 | SouthFlorida | 5 | 0
4 | 12 | TexasU | 2 | 0
3 | 18 | SouthernCalifornia | 1 | 0
3 | 19 | FloridaU | 1 | 0
6 | 11 | Rutgers | 0 | 0
6 | 14 | NotreDame | 0 | 0
4 | 17 | OhioState | 0 | 0
3 | 23 | VirginiaU | 0 | 0
6 | 26 | ArizonaU | 0 | 2
4 | 16 | FloridaState | 0 | 4
6 | 20 | TennesseeU | 0 | 4
5 | 22 | Georgetown | 0 | 4
5 | 13 | Pepperdine | 0 | 9
6 | 25 | CaliforniaU | 0 | 10
5 | 24 | Hofstra | 0 | 14
5 | 21 | MurrayState | 0 | 20

After disregarding the #1, #2, and #3 seeds, Texas and South Florida were clear #4 seeds.  And, the teams from Arizona on down the list clearly were not #4 seeds.  This left Princeton (8 "yes" scores and 1 "no" score) and three other teams with no "yes" scores and no "no" scores as candidates for the remaining two positions.  Those three candidates were Rutgers, Notre Dame, and Ohio State.  The Committee gave Texas a #4 seed.  But it did not seed South Florida, with 5 "yes" and 0 "no" scores, and it did seed Florida State, with 0 "yes" scores and 4 "no" scores.  It gave the remaining two #4 seeds to Princeton and Ohio State.

I've already said the Committee's not giving South Florida a #3 seed was an error.  That's even more true for a #4 seed.

So, what about Florida State?  Each of its 4 "no" scores was for a paired factor, and each of the pairs included either its Top 50 Results Score or Rank.  Florida State's Top 50 Results Rank was #37.  This compares to South Florida's #2. The other sides of Florida State's four "no" paired factors involved its Conference Standing, or its Common Opponents Score or Rank.  Florida State's Conference Standing was #6.75 (finished #7 in the ACC and #6.5 in the ACC Tournament), as compared to South Florida's #1.5 (#2 in the American and #1 in the American Tournament).  Florida State's Common Opponent Rank was #40, compared to South Florida's #9.

The other team to look at is Princeton, since it had 1 "no" score to go with its 8 "yes" scores.  Princeton's "no" score was for its Top 50 Results Score.  Past precedent is that no team with a Top 50 Results Score <=60 has been seeded.  Princeton's Top 50 Results Score was 54 (rank #45.5).  On the other hand, Princeton's "yes" scores had to do primarily with its ARPI and ANCRPI, and secondarily with its Conference Standing of #1.  Teams with an ARPI >=0.6513, according to past precedent, get at least a #4 seed, and Princeton's ARPI was 0.6519.  Its ANCRPI Rank was #8.

Altogether, looking at South Florida, Princeton, Rutgers, Notre Dame, Ohio State, and Florida State, among whom the Committee had to distribute three #4 seeds, here is how they compare on the individual factors:

Factor | Princeton | Rutgers | NotreDame | SouthFlorida | FloridaState | OhioState
ARPI | 0.6519 | 0.6340 | 0.6262 | 0.6253 | 0.6251 | 0.6230
ARPI Rank | 9 | 11 | 14 | 15 | 16 | 17
ANCRPI | 0.6748 | 0.6698 | 0.6435 | 0.5986 | 0.6626 | 0.6182
ANCRPI Rank | 8 | 12 | 18 | 41 | 14 | 28
Top 50 Results Score | 54 | 2826 | 17974 | 28856 | 248 | 2883
Top 50 Results Rank | 45.5 | 24 | 7 | 2 | 37 | 22
Head to Head Score | 1.00 | 0.67 | -0.21 | 0.00 | -0.27 | 0.29
Common Opponents Score | 2.00 | 2.00 | 0.32 | 3.59 | -1.44 | 3.45
Common Opponents Rank | 17 | 16 | 21 | 9 | 40 | 10
Poor Results Score | 0 | 0 | 0 | 0 | 0 | 0
Conf Reg Season Standing | 1 | 4.5 | 5.5 | 2 | 7 | 1
Conf Tournament Standing | 1 | 6.5 | 6.5 | 1 | 6.5 | 3.5
Conf Average Standing | 1.00 | 5.50 | 6.00 | 1.50 | 6.75 | 2.25
Conf Average ARPI | 0.5360 | 0.5679 | 0.5874 | 0.5491 | 0.5874 | 0.5679
Conference Rank | 7 | 4 | 2 | 6 | 2 | 4

I compared each team to each of the others, factor by factor, in a "round robin" format.  Based on that review, I rank the six teams as:

1.  Princeton
2.  South Florida
3.  Ohio State
4.  Rutgers
5.  Notre Dame
6.  Florida State
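The round-robin comparison can be sketched in code.  How I weight the factors and break ties is a judgment call not reducible to a formula, so this sketch simply counts, for each pairing, which team is better on more factors, using a subset of the factors from the table above:

```python
# Minimal sketch of a "round robin" factor comparison: each pair of teams
# plays a "match," won by the team that takes more individual factors.
# Factor values are from the #4-seed candidate table (a subset, for brevity).

TEAMS = ["Princeton", "Rutgers", "NotreDame", "SouthFlorida",
         "FloridaState", "OhioState"]

# factor -> (value per team in TEAMS order, True if lower is better)
FACTORS = {
    "ARPI":                   ([0.6519, 0.6340, 0.6262, 0.6253, 0.6251, 0.6230], False),
    "ARPI Rank":              ([9, 11, 14, 15, 16, 17], True),
    "Top 50 Results Rank":    ([45.5, 24, 7, 2, 37, 22], True),
    "Common Opponents Score": ([2.00, 2.00, 0.32, 3.59, -1.44, 3.45], False),
    "Conf Average Standing":  ([1.00, 5.50, 6.00, 1.50, 6.75, 2.25], True),
}

def round_robin(teams, factors):
    """Rank teams by number of head-to-head 'matches' won."""
    wins = {t: 0 for t in teams}
    for i, a in enumerate(teams):
        for j in range(i + 1, len(teams)):
            b = teams[j]
            a_pts = b_pts = 0
            for values, lower_is_better in factors.values():
                va, vb = values[i], values[j]
                if va == vb:
                    continue                      # tied factor counts for no one
                if (va < vb) == lower_is_better:
                    a_pts += 1
                else:
                    b_pts += 1
            if a_pts > b_pts:
                wins[a] += 1
            elif b_pts > a_pts:
                wins[b] += 1
    return sorted(teams, key=lambda t: -wins[t])  # stable sort: ties keep list order

print(round_robin(TEAMS, FACTORS))
```

With this particular subset the sketch also puts Princeton and South Florida on top and Florida State last; the ordering in between depends on which factors are included and how ties are broken.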

In my opinion, the Committee erred in not seeding South Florida and in giving Florida State a #4 seed.

At Large Selections:  Based on past precedent, the candidate group for at large selections was teams ranked in the top 57 of the ARPI.  Below is a table for those teams that did not receive seeds and that were not automatic qualifiers.  In the NCAA Seed or Selection column, 6 means the Committee gave the team an at large selection and 7 means it didn't.

NCAA Seed or Selection  ARPI Rank  Team              At Large In Total  At Large Out Total
6                       14         NotreDame         39                 0
6                       11         Rutgers           32                 0
6                       20         TennesseeU        32                 0
6                       32         OklahomaState     30                 0
6                       45         Butler            24                 2
6                       27         Auburn            18                 0
6                       26         ArizonaU          17                 0
6                       41         ArkansasU         17                 1
6                       30         NCState           16                 0
6                       42         WakeForest        16                 0
6                       28         AlabamaU          15                 0
6                       29         WisconsinU        14                 0
6                       25         CaliforniaU       10                 0
7                       44         MississippiState  9                  9
7                       56         LSU               8                  18
6                       31         SantaClara        7                  0
6                       39         Vanderbilt        6                  0
6                       33         NorthwesternU     3                  0
6                       40         MississippiU      3                  1
6                       46         TCU               2                  0
6                       38         Clemson           1                  0
7                       52         Cincinnati        1                  1
6                       43         ColoradoU         0                  0
7                       48         MinnesotaU        0                  0
6                       49         WashingtonState   0                  0
7                       50         Memphis           0                  0
7                       53         Northeastern      0                  7
7                       54         BostonCollege     0                  7
6                       36         Rice              0                  10
7                       51         Marquette         0                  14
7                       57         VirginiaTech      0                  14
7                       55         SanJoseState      0                  30

From this list of candidate teams, according to past precedent these are clear at large selections:  Notre Dame, Rutgers, Tennessee, Oklahoma State, Auburn, Arizona, NC State, Wake Forest, Alabama, Wisconsin, California, Santa Clara, Vanderbilt, Northwestern, TCU, and Clemson.  To these I add Butler and Arkansas, whose "yes" scores greatly outnumber their few "no" scores.  Also according to past precedent, these are clearly not at large selections:  Northeastern, Boston College, Rice, Marquette, Virginia Tech, and San Jose State.

This leaves the following teams competing for the 4 remaining at large positions:  Mississippi State (9 "yes" and 9 "no"), LSU (8 "yes" and 18 "no"), Mississippi (3 "yes" and 1 "no"), Cincinnati (1 "yes" and 1 "no"), and Colorado, Minnesota, Washington State, and Memphis (all 0 "yes" and 0 "no").  The Committee gave 3 of the positions to Mississippi, Colorado, and Washington State.  It gave the 4th position, however, to Rice (0 "yes" and 10 "no").  The question is whether the Committee's decisions were reasonable.

First, I'll address Mississippi State and LSU:

For Mississippi State, all of its "yes" scores related to its Adjusted Non-Conference RPI, either the rating itself or the rank.  According to past precedent, teams with an ANCRPI rating at Mississippi State's level always had received at large selections, and the same was true for teams with its ANCRPI Rank (#11).  On the other hand, Mississippi State did not make the SEC Tournament, leaving it with a Conference Standing of #11; seven of its 9 "no" scores related to that standing.  According to past precedent, no team with a Conference Standing of #9.5 or poorer had received an at large selection.  The Committee, not having seen a profile like Mississippi State's over the last 10 years, had to decide which mattered more:  Mississippi State's excellent non-conference results or its poor Conference Standing.  And it had to decide while taking into consideration that the SEC was the #1 ranked conference, by a fair margin.  It's quite clear the Committee decided the poor Conference Standing trumped the good non-conference results.  Tellingly, one of the paired factors combines ANCRPI Rank and Conference Standing, and Mississippi State received a "no" score for it -- indicating that, from that factor's perspective, Mississippi State's non-conference results simply weren't sufficient to overcome its Conference Standing.

The Mississippi State decision must have ended the discussion on LSU too, which had a Conference Standing of #13 notwithstanding an ANCRPI Rank of #10.  In addition, LSU had problems with its ARPI, which by the end of the season had dropped to a level at which, according to past precedent, it would not receive an at large selection.  And again, like Mississippi State, it received a "no" score for the paired ANCRPI Rank and Conference Standing factor.

Next in line is Ole Miss, with its 3 "yes" scores and 1 "no" score.  It did receive an at large selection.  Its 3 "yes" scores all were for paired factors with Conference Rank as one part of each pair; the other parts were ARPI, ARPI Rank, and Head to Head Results against Top 60 opponents, respectively.  This provides an interesting illustration.  What these "yes" scores show is that if a team is in a very tough conference -- in this case the #1 conference -- and still can achieve a decent ARPI rating and rank (Ole Miss was #40 in the ARPI) or decent results against Top 60 teams, it will have a better chance than a team with a similar ARPI or Head to Head Results from a lesser conference.  Ole Miss's 1 "no" score was for the paired factor of Conference Standing (#9.25) and Top 50 Results Rank (#47).  With its #9.25 Conference Standing, Ole Miss was just barely on the good side of the #9.5 Conference Standing at which, according to past precedent, teams do not receive at large selections.  This all indicates to me that the Committee could have gone either way with Ole Miss and made a reasonable decision either way.

Next comes Cincinnati with 1 "yes" and 1 "no" score.  Its "yes" score was for its Top 50 Results Rank of #13.  This is the exact threshold for always getting an at large selection.  Its "no" score was for its ARPI (Rank #52) and its ANCRPI Rank (#51) combined.  Here, Cincinnati was just over the threshold for never getting an at large selection.  Cincinnati thus was very close to having no "yes" and no "no" scores.  That being the case, I consider it as being in the same group as Colorado, Minnesota, Washington State, and Memphis.

Finally, for a detailed look at "yes" and "no" scores, there is Rice with its 10 "no" scores.  All but one of them involve its Head to Head Results Against Top 60 Teams score.  The "no" threshold for Head to Head Results is -1.64; Rice's score is -1.67.  Four other teams in the Top 60 had Head to Head Results at least as poor as Rice's:  automatic qualifier Murray State (-2.00) and three teams that did not get at large selections, ARPI #47 LaSalle (-2.00), #51 Marquette (-2.00), and #55 San Jose State (-2.00).  Rice's three results against Top 60 teams were a loss @ Texas (#12 ARPI), a home tie v Baylor (#34), and a loss @ Memphis (#50).  (Memphis was a 0 "yes" and 0 "no" team that did not get an at large selection.)  Rice's other "no" score was for the paired factor of its ANCRPI (Rank #69) and the Conference USA Rank (#12) -- meaning that Rice didn't have good non-conference results and was in a weak conference relative to its competitors for an at large position.

Altogether, looking at Ole Miss, Cincinnati, Colorado, Minnesota, Washington State, and Memphis, and adding in Rice, among whom the Committee had to distribute four at large positions, here is how they compare on the individual factors:

Factor                    Rice     MississippiU  ColoradoU  MinnesotaU  WashingtonState  Memphis  Cincinnati
ARPI                      0.5934   0.5874        0.5836     0.5767      0.5747           0.5740   0.5707
ARPI Rank                 36       40            43         48          49               50       52
ANCRPI                    0.5573   0.6340        0.5811     0.5610      0.6060           0.5746   0.5817
ANCRPI Rank               69       21            52         68          35               59       51
Top 50 Results Score      8        43            29         3074        2881             12       10213
Top 50 Results Rank       51.5     47            48.5       19          23               50       13
Head to Head Score        -1.67    -0.55         -0.67      -0.71       -1.11            -0.83    0.00
Common Opponents Score    -2.00    -4.79         -1.60      -1.00       -4.32            -2.05    -3.11
Common Opponents Rank     44       56            42         37.5        53               45       51
Poor Results Score        0        0             0          0           0                0        0
Conf Reg Season Standing  1        9             6          2.5         7                4.5      3
Conf Tournament Standing  6.5      9.5           6          6.5         7                3.5      5.5
Conf Average Standing     3.75     9.25          6.00       4.50        7.00             4.00     4.25
Conf Average ARPI         0.5017   0.5924        0.5854     0.5679      0.5854           0.5491   0.5491
Conference Rank           12       1             3          4           3                6        6

I compared each of these teams to each of the others, factor by factor, in the same "round robin" format.  Based on that review, I rank the seven teams as:

1.  Ole Miss
2-3.  Minnesota
2-3.  Cincinnati
4.  Colorado
5.  Washington State
6.  Memphis
7.  Rice

I also looked at this, but discounting Rice's early exit from the Conference USA Tournament, and see the teams ranked as:

1.  Minnesota
2-3.  Colorado
2-3.  Ole Miss
4.  Cincinnati
5.  Washington State
6.  Rice
7.  Memphis

I also looked to see the teams' best win and tie results:

Washington State:  W v 8, T v 48
Minnesota:  W v 11, T v 14
Cincinnati:  W v 15, T v 5
Memphis:  W v 36, T v 52
Ole Miss:  W v 42, T v 27
Colorado:  W v 46, T v 26
Rice:  W v 70, T v 34

Looking at all of these numbers, two things stand out.  First, Rice had the least merit of any of these teams.  The Committee made an error in giving it an at large position.  Second, Minnesota should have received an at large selection.  The Committee made an error in not giving it one.

After that, it looks like Memphis had not earned an at large selection.  This should have left the Committee with three at large selections to split among Cincinnati, Colorado, Ole Miss, and Washington State.  Any three of those four would have been reasonable.

So, in conclusion, in my opinion the Committee's giving at large selections to Colorado, Ole Miss, and Washington State, but not to Cincinnati or Memphis, was reasonable.  On the other hand, the Committee's not giving a selection to Minnesota, and instead giving one to Rice, was a clear error.