Monday, October 28, 2024

2024 ARTICLE 13: POST-WEEK-11 ACTUAL RATINGS AND UPDATED PREDICTIONS

Current Actual RPI Ratings, Ranks, and Related Information

The following tables show actual RPI ratings and ranks and other information based on games played through Sunday, October 27.  The first table is for teams, the second for conferences, and the third for geographic regions.  Scroll to the right to see all the columns.






Predicted Team RPI and Balanced RPI Ranks, Plus RPI and Balanced RPI Strength of Schedule Contributor Ranks

The following table for teams and the next ones for conferences and regions show predicted end-of-season ranks based on the actual results of games played through October 27 and predicted results of games not yet played, including conference tournament games.  The predicted results of future games are based on teams' actual RPI ratings from games played through October 27.

In the table, ARPI 2015 BPs shows ranks using the NCAA's 2024 RPI Formula, and URPI 50 50 SoS Iteration 15 shows ranks using the Balanced RPI formula.






Predicted NCAA Tournament Automatic Qualifiers, Disqualified Teams, and At Large Selection Status, All for the Top 57 Teams

Below, I show predicted #1 through #8 seeds and at large selections based on the Women's Soccer Committee's historic decision patterns.  With two weeks of regular season play left to go (including conference tournaments), there will be changes, possibly significant, from the predictions.  Nevertheless, we now are getting closer to where things will end up.

The first table below is for potential #1 seeds.  The #1 seeds always have come from the teams ranked #1 through 7 in the end-of-season RPI rankings, so the table is limited to the teams predicted to fall in that rank range.  The table is based on applying history-based standards to team scores in relation to a series of factors, all of which are related to the NCAA-specified criteria the Committee is required to use in making at large decisions.  For each factor, there is a standard that says, if a team met this standard historically, the team always has gotten a #1 seed.  I refer to this as a "yes" standard.  For most of the factors, there likewise is a standard that says, if a team met this standard historically, it never has gotten a #1 seed.  This is a "no" standard.  In the table, I have sorted the Top 7 RPI #1 seed teams in order of the number of yes standards they meet and then in order of the number of no standards.
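As a rough illustration, the ordering described above (more "yes" standards first, then fewer "no" standards) can be sketched as follows.  The team names and counts are hypothetical, not values from the table.

```python
# Sketch of the candidate ordering described above: sort by the number of
# "yes" standards met (descending), then by the number of "no" standards
# met (ascending).  Teams and counts below are made up for illustration.

def sort_seed_candidates(candidates):
    """candidates: list of (team, yes_count, no_count) tuples."""
    return sorted(candidates, key=lambda c: (-c[1], c[2]))

example = [
    ("Team B", 0, 2),
    ("Team C", 3, 1),   # meets both "yes" and "no" standards: a profile
                        # the Committee has not seen before
    ("Team A", 3, 0),
    ("Team D", 0, 0),
]

print(sort_seed_candidates(example))
# Team A sorts first (3 yes, 0 no); Team B sorts last (0 yes, 2 no)
```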


This shows North Carolina and Mississippi State as clear #1 seeds.  Duke and Wake Forest have profiles the Committee has not seen before (meeting both "yes" and "no" standards), but are possible #1 seeds.  Arkansas and Southern California likewise are possible #1 seeds.  The following table applies the "tiebreaker" for #1 seeds to the "possible" group.  (A "tiebreaker" is a factor that historically has been the best predictor for a particular Committee decision.)


As you can see, Duke and Southern California score best on the tiebreaker and so join North Carolina and Mississippi State as the predicted #1 seeds.

The candidates for #2 seeds are teams ranked through #14.  With the #1 seeds already assigned, this produces the following table:



This shows Wake Forest, Arkansas, and Florida State as clear #2 seeds.  Penn State and Stanford have profiles the Committee has not seen before, but are possible #2 seeds.  Iowa and Michigan State likewise are possible #2 seeds.  The following table applies the "tiebreaker" for #2 seeds to the "possible" group.


As you can see, Iowa scores best on the tiebreaker and so joins Wake Forest, Arkansas, and Florida State as the predicted #2 seeds.

The candidates for #3 seeds are teams ranked through #23.  With the #1 and 2 seeds already assigned, this produces the following table:


This shows Penn State and Stanford as clear #3 seeds.  Notre Dame has a profile the Committee has not seen before, but is a possible #3 seed.  Michigan State, UCLA, and Georgetown likewise are possible #3 seeds.  The following table applies the "tiebreaker" for #3 seeds to the "possible" group.


As you can see, Michigan State and Notre Dame score best on the tiebreaker and so join Penn State and Stanford as the predicted #3 seeds.

The candidates for #4 seeds are teams ranked through #26.  With the #1, 2, and 3 seeds already assigned, this produces the following table:



This shows no clear #4 seeds.  Virginia and Utah State have profiles the Committee has not seen before, but are possible #4 seeds.  UCLA, South Carolina, Vanderbilt, Georgetown, TCU, and Auburn likewise are possible #4 seeds.  The following table applies the "tiebreaker" for #4 seeds to the "possible" group.


As you can see, Virginia, UCLA, South Carolina, and Vanderbilt score best on the tiebreaker and so are the predicted #4 seeds.

For #5 through #8 seeds, the candidates are the not-yet-seeded teams ranked #1 through #49.  Although the data are limited since we have had those seeds for only a few years, the best indicator of which teams will get those seeds is a combination of teams' RPI ranks and their Top 60 Head to Head results ranks:


Using this table, the #5 seeds are TCU, Utah State, Auburn, and Georgetown.  The #6s are Virginia Tech, Minnesota, Ohio State, and Xavier.  The #7s are St. Louis, Kentucky, Western Michigan, and West Virginia.  The #8s are Texas, Liberty, Wisconsin, and Santa Clara.
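A minimal sketch of the rank combination used for the #5 through #8 seeds; the article does not state the exact weights, so an equal 50/50 weighting is assumed here purely for illustration.

```python
def seed_5_to_8_score(rpi_rank, top60_h2h_rank, rpi_weight=0.5):
    """Combine a team's RPI rank with its Top 60 Head to Head results rank.
    Lower scores are better.  The 50/50 weighting is an assumption, not
    the author's actual standard."""
    return rpi_weight * rpi_rank + (1 - rpi_weight) * top60_h2h_rank

# e.g. RPI rank 10 and Top 60 head-to-head rank 20 combine to 15.0
print(seed_5_to_8_score(10, 20))
```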

For the remaining At Large selections, the candidates run up to RPI #57, producing the following initial table:


Here, Oklahoma State, Georgia, Texas Tech, and BYU are clear At Large teams, with 7 additional spots to fill.  Pittsburgh, Buffalo, and Memphis have profiles the Committee has not seen before and are candidates for those spots.  Pepperdine, Rutgers, Washington, and Arizona also are candidates for those spots.  Texas A&M would be a candidate, but it has a winning percentage below 0.500 due to a predicted conference tournament first round loss and thus is disqualified.  The remaining teams are not at large selections.  Since there are only 7 eligible candidates to fill the 7 open spots, all of Pittsburgh, Buffalo, Memphis, Pepperdine, Rutgers, Washington, and Arizona fill those spots.

Based on the above, this produces the final compilation of seeds, Automatic Qualifiers, at large selections, and Top 57 teams not getting at large selections.  In the NCAA Seed or Selection column, the seeds are self-explanatory, the 5s are unseeded Automatic Qualifiers, the 6s are unseeded at large selections, the 6.5 is disqualified, and the 7s are Top 57 teams not getting at large positions.


What If the Committee Were Using the Balanced RPI?

If the Committee were using the Balanced RPI, which does not have the NCAA RPI's problem of discrimination in relation to conferences and regions, the following teams would drop out of the RPI Top 57:

Fairfield (AQ) drops from RPI rank #37 to Balanced RPI rank #94; Liberty (AQ) from #39 to #68; South Florida (AQ) from #40 to #71; Dayton (no at large) from #41 to #62; James Madison (AQ) from #44 to #64; Massachusetts (no at large) from #46 to #70; Columbia (AQ) from #47 to #63; Army (no at large) from #49 to #65; Buffalo (at large) from #54 to #79; Texas A&M (disqualified) from #55 to #66.

The following teams would move into the Balanced RPI Top 57:

California climbs from #65 to #35; Colorado from #58 to #40; Loyola Marymount from #79 to #43; Tennessee from #60 to #33; Illinois from #98 to #51; UC Davis from #80 to #52; Baylor from #74 to #53; Connecticut from #64 to #54; Kansas from #64 to #56; Utah from #86 to #57.

It's worth noting that these shifts primarily are teams from weaker conferences moving out of the Top 57 and teams from stronger conferences moving in.  In addition, no teams from the West geographic region move out of the Top 57 and five from the West region move in.

In terms of actual at large changes, it is likely that Oklahoma State, Memphis, and Buffalo would lose their predicted at large positions and would be replaced by California, Tennessee, and Colorado.

Monday, October 21, 2024

2024 ARTICLE 12: POST-WEEK-10 ACTUAL RATINGS AND UPDATED PREDICTIONS

Current Actual RPI Ratings, Ranks, and Related Information

The following tables show actual RPI ratings and ranks and other information based on games played through Sunday, October 20.  The first table is for teams and the second for conferences.

This year, the NCAA expanded the pre-NCAA Tournament season by a week, from 12 to 13 weeks.  In the table for teams, there are color coded columns that show which teams are potential #1 through #4 seeds, which teams almost certainly will get at large positions if not seeded, and which teams potentially may get at large positions, as of the completion of week 10 of the season.  The ranges in those columns are based on a 12-week season.  Since the season now is 13 weeks, I will deal with that as follows:

This week, which is as of the completion of week 10, I am showing the historic week 10 ranges;

Next week, which will be as of the completion of week 11, I again will show the historic week 10 ranges;

The following week, which will be as of the completion of week 12, I will show the historic week 11 ranges; and

The final week, which will be as of the end of the season, I will show the end-of-season (historic week 12) ranges.

Because of the change in season length, it is possible that the ranges in this week's report should be a little larger than what the teams table shows.

Here is the teams table.  Scroll to the right to see all the columns.


Below is the conferences table.  In this table, I suggest you take a close look at the two right-hand yellow columns.  The first of these shows, for each conference, the average NCAA RPI rank of its opponents (Conference Opponents Average NCAA RPI Rank).  The second shows, for each conference, the average NCAA RPI formula Strength of Schedule Contributor rank of its opponents (Conference Opponents Average NCAA RPI Strength of Schedule Contributor Rank).  As you can see, for the stronger conferences, at the top, their opponents' RPI formula ranks as strength of schedule contributors are much poorer than their actual RPI ranks.  As you go down the table from the stronger to the weaker conferences, this reverses so that at the bottom, the weaker conferences' opponents' RPI formula ranks as strength of schedule contributors are much better than their actual RPI ranks.  This pattern is a function of the NCAA RPI's faulty method for calculating strength of schedule and results in the RPI's discrimination against stronger and in favor of weaker conferences that I often have written about.



And new this week, here is a similar table for the regions into which I divide teams based on where teams from the different states play the majority or plurality of their games.




Predicted Team RPI and Balanced RPI Ranks, Plus RPI and Balanced RPI Strength of Schedule Contributor Ranks

The following table for teams and the next ones for conferences and regions show predicted end-of-season ranks based on the actual results of games played through October 20 and predicted results of games not yet played.  The predicted results of future games are based on teams' actual RPI ratings from games played through October 20.

In the table, ARPI 2015 BPs shows ranks using the NCAA's 2024 RPI Formula, and URPI 50 50 SoS Iteration 15 shows ranks using the Balanced RPI formula.








Predicted NCAA Tournament Automatic Qualifiers, Disqualified Teams, and At Large Selection Status, All for the Top 57 Teams

Below, I show predicted #1 through #8 seeds and at large selections based on the Women's Soccer Committee's historic decision patterns.  Since we still have three weeks of regular season play (including conference tournaments), there will be changes, possibly significant, from the predictions.  Nevertheless, we now are moving closer to where things will end up.

The first table below is for potential #1 seeds.  The #1 seeds always have come from the teams ranked #1 through 7 in the end-of-season RPI rankings, so the table is limited to the teams predicted to fall in that rank range.  The table is based on applying history-based standards to team scores in relation to a series of factors, all of which are related to the NCAA-specified criteria the Committee is required to use in making at large decisions.  For each factor, there is a standard that says, if a team met this standard historically, the team always has gotten a #1 seed.  I refer to this as a "yes" standard.  For most of the factors, there likewise is a standard that says, if a team met this standard historically, it never has gotten a #1 seed.  This is a "no" standard.  In the table, I have sorted the Top 7 RPI #1 seed teams in order of the number of yes standards they meet and then in order of the number of no standards.




This shows North Carolina, Arkansas, and Mississippi State as clear #1 seeds.  Wake Forest, Duke, and Penn State have profiles the Committee has not seen before (meeting both "yes" and "no" standards), but are possible #1 seeds.  Southern California likewise is a possible #1 seed.  The following table applies the "tiebreaker" for #1 seeds to the "possible" group.  (A "tiebreaker" is a factor that historically has been the best predictor for a particular Committee decision.)


 
As you can see, Duke scores best on the tiebreaker and so joins North Carolina, Arkansas, and Mississippi State as the predicted #1 seeds.

The candidates for #2 seeds are teams ranked through #14.  With the #1 seeds already assigned, this produces the following table:



Here, the decision is clear: Wake Forest, Penn State, Michigan State, and Southern California are #2 seeds.

The candidates for #3 seeds are teams ranked through #23.  With the #1 and 2 seeds already assigned, this produces the following table:



Here, Stanford and Florida State are clear #3 seeds.  Notre Dame, Iowa, UCLA, and Georgetown are potential #3 seeds.  Here is the tiebreaker table for #3 seeds for those teams:




Based on the tiebreaker, Iowa and Notre Dame join Stanford and Florida State for #3 seeds.

The candidates for #4 seeds are teams ranked through #26.  With the #1, 2, and 3 seeds already assigned, this produces the following table:



Here, Auburn is a clear #4 seed.  Utah State, UCLA, Georgetown, Vanderbilt, Oklahoma State, and South Carolina are potential #4 seeds.  Here is the tiebreaker table for #4 seeds for those teams:



Based on the tiebreaker, Vanderbilt, UCLA, and South Carolina join Auburn for #4 seeds.

For #5 through #8 seeds, the candidates are teams ranked #1 through #49.  Although the data are limited since we have had those seeds for only a few years, the best indicator of which teams will get those seeds is a combination of teams' RPI ranks and their Top 60 Head to Head results ranks:



Using this table, the #5 seeds are Utah State, TCU, Georgetown, and Xavier.  The #6s are Virginia, Texas Tech, St. Louis, and Oklahoma State.  The #7s are Kentucky, Virginia Tech, West Virginia, and Liberty.  The #8s are Ohio State, Western Michigan, Fairfield, and Santa Clara.

For the remaining At Large selections, the candidates run up to RPI #57, producing the following initial table:


Here, Rutgers, Minnesota, Pittsburgh, Wisconsin, Georgia, and Texas are clear At Large teams, with 5 additional spots to fill.  Colorado, Tennessee, Creighton, Pepperdine, Dayton, BYU, Massachusetts, and Washington are candidates for those spots.  The remaining teams are not at large selections.  For At Large, the tiebreaker is a combination of teams' RPI ranks and their ranks in terms of their Common Opponent results in relation to other Top 60 teams.


Based on the tiebreaker, Pepperdine, Dayton, BYU, Washington, and Massachusetts get the last at large positions.

Based on the above, this produces the final compilation of seeds, Automatic Qualifiers, at large selections, and Top 57 teams not getting at large selections.  In the NCAA Seed or Selection column, the seeds are self-explanatory, the 5s are unseeded Automatic Qualifiers, the 6s are unseeded at large selections, and the 7s are Top 57 teams not getting at large positions.


What If the Committee Were Using the Balanced RPI?

If the Committee were using the Balanced RPI, which does not have the NCAA RPI's problem of discrimination in relation to conferences and regions, the following teams would drop out of the RPI Top 57:

Army (AQ) drops from #44 to #63; South Florida (no at large) from #43 to #65; Liberty (AQ) from #40 to #68; James Madison (AQ) from #47 to #70; Columbia (AQ) from #53 to #72; Grand Canyon (AQ) from #50 to #73; Ohio (AQ) from #57 to #94; Fairfield (AQ) from #41 to #102.

The following teams would move into the Balanced RPI Top 57:

California climbs from #76 to #38; Loyola Marymount from #81 to #41; Baylor from #59 to #45; Utah Valley from #61 to #49; Utah from #84 to #50; Memphis from #63 to #51; Clemson from #64 to #55; Kansas from #73 to #57.

It's worth noting that these shifts primarily are teams from weaker conferences moving out of the Top 57 and teams from stronger conferences moving in.

In terms of actual at large changes, it is likely that Dayton, BYU, and Massachusetts would lose their predicted at large positions and would be replaced by Tennessee, Colorado, and California.

Monday, October 14, 2024

2024 ARTICLE 11: POST-WEEK-9 ACTUAL RATINGS AND UPDATED PREDICTIONS

Current Actual RPI Ratings, Ranks, and Related Information

The following tables show actual RPI ratings and ranks and other information based on games played through Sunday, October 13.  The first table is for teams and the second for conferences.

Note: Scroll to the right to see all the columns.





Predicted Team RPI and Balanced RPI Ranks, Plus RPI and Balanced RPI Strength of Schedule Contributor Ranks

The following table for teams and the next one for conferences show predicted end-of-season ranks based on the actual results of games played through October 13 and predicted results of games not yet played.  The predicted results of future games are based on teams' actual RPI ratings from games played through October 13.

In the table, ARPI 2015 BPs shows ranks using the NCAA's 2024 RPI Formula, and URPI 50 50 SoS Iteration 15 shows ranks using the Balanced RPI formula.





Predicted NCAA Tournament Automatic Qualifiers, Disqualified Teams, and At Large Selection Status, All for the Top 57 Teams

Starting this week, I will show predicted #1 through #8 seeds and at large selections based on the Women's Soccer Committee's historic decision patterns.  With a good part of the season remaining to be played, there will be changes, including substantial changes, from the predictions.  My main purpose now is to show how the prediction system works.

The first table below is for potential #1 seeds.  The #1 seeds always have come from the teams ranked #1 through 7 in the end-of-season RPI rankings, so the table is limited to the teams predicted to fall in that rank range.  The table is based on applying history-based standards to team scores in relation to a series of factors, all of which are related to the NCAA-specified criteria the Committee is required to use in making at large decisions.  For each factor, there is a standard that says, if a team met this standard historically, the team always has gotten a #1 seed.  I refer to this as a "yes" standard.  For most of the factors, there likewise is a standard that says, if a team met this standard historically, it never has gotten a #1 seed.  This is a "no" standard.  In the table, I have sorted the Top 7 RPI #1 seed teams in order of the number of yes standards they meet and then in order of the number of no standards.


This table has the #1 seed candidates sorted in order of the number of "yes" #1 seed standards they meet, from largest number to smallest (1 Seed Total), and the number of "no" standards they meet, from smallest to largest (No 1 Seed Total).  In the table, North Carolina and Mississippi State each meets at least one "yes" standard and no "no" standards, so each historically would get a #1 seed.  Duke and Penn State meet both "yes" and "no" standards, which means they have profiles the Committee has not seen historically.  Arkansas and Southern California meet no "yes" and no "no" standards, which means their getting or not getting #1 seeds would be consistent with Committee historic patterns.  Wake Forest meets no "yes" and 5 "no" standards, which means it historically would not get a #1 seed.  Altogether this means we have North Carolina and Mississippi State as #1 seeds, with Duke, Penn State, Arkansas, and Southern California as candidates for the remaining two #1 seed positions.

When there are more #1 seed candidates remaining than there are positions to fill, historically the best indicator of which teams the Committee will pick is a standard that combines a team's RPI rank with its conference final standing position, with each weighted at 50%.  For conference standing, the standard uses the average of the team's regular season conference standing position and its conference tournament finishing position.  Using this standard produces the following table, where the lower the standard value the better:


The system thus predicts that in addition to North Carolina and Mississippi State, the Committee will give #1 seeds to Duke and Arkansas.
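The tiebreaker standard described above (RPI rank weighted 50/50 against the average of the regular-season conference standing and the conference tournament finishing position) can be sketched as follows; the example inputs are illustrative, not actual team values.

```python
def no1_seed_tiebreaker(rpi_rank, reg_season_standing, tourney_finish):
    """Lower values are better.  Conference standing is the average of
    the regular-season standing and the conference tournament finishing
    position; it is then weighted 50/50 against the team's RPI rank."""
    conference_standing = (reg_season_standing + tourney_finish) / 2
    return 0.5 * rpi_rank + 0.5 * conference_standing

# e.g. RPI rank 3, won the regular season (1st), lost the tournament
# final (2nd): 0.5 * 3 + 0.5 * 1.5 = 2.25
print(no1_seed_tiebreaker(3, 1, 2))
```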

Going through a similar process for #2 seeds, where the candidates are teams ranked #1 through #14, produces the following initial table:


In this table, Iowa, Wake Forest, and Stanford are clear #2 seeds.  Penn State, Southern California, and Auburn are potential #2s.  The last four teams are not #2s.  The tiebreaker for #2 seeds is a standard combining teams' conference standing with their Top 60 Head to Head Results Ranks:


Thus Southern California joins Iowa, Wake Forest, and Stanford as #2 seeds.

For #3 seeds, the candidates are teams ranked #1 through #23, producing the following initial table:


Here, Penn State and Michigan State are clear #3 seeds.  Xavier, Florida State, TCU, Auburn, UCLA, and Georgetown are candidates.  The remaining teams are not #3 seeds.  For #3s, the tiebreaker is teams' ranks in terms of their common opponents results in relation to results of other Top 60 teams.


Thus TCU and Auburn join Penn State and Michigan State as #3 seeds.

For #4 seeds, the candidates are teams ranked #1 through #26, producing the following initial table:


Here, Florida State and Ohio State are clear #4 seeds.  Utah State, Xavier, UCLA, Georgetown, and Pepperdine are candidates.  The remaining teams are not #4 seeds.  For #4s, the tiebreaker is a combination of teams' ranks in terms of their results against Top 50 opponents and their conferences' ranks.


Thus UCLA and Georgetown join Florida State and Ohio State as #4 seeds.

For #5 through #8 seeds, the candidates are teams ranked #1 through #49.  Although the data are limited since we have had those seeds for only a few years, the best indicator of which teams will get those seeds is a combination of teams' RPI ranks and their Top 60 Head to Head results ranks:


Using this table, the #5 seeds are Utah State, Xavier, West Virginia, and Vanderbilt.  The #6s are St. Louis, Virginia, Liberty, and Pittsburgh.  The #7s are Wisconsin, Notre Dame, Pepperdine, and South Carolina.  The #8s are Virginia Tech, Western Michigan, Kentucky, and Oklahoma State.

For the remaining At Large selections, the candidates run up to RPI #57, producing the following initial table:


Here, Rutgers, Texas Tech, Texas, and Minnesota are clear At Large teams, with 6 additional spots to fill.  Buffalo, Oklahoma, Connecticut, Arizona, Georgia, Colorado, BYU, Washington, and Tennessee are candidates.  The remaining teams are not at large selections.  For At Large, the tiebreaker is a combination of teams' RPI ranks and their ranks in terms of their Common Opponent results in relation to other Top 60 teams.


Thus the unseeded at large selections are Rutgers, Texas Tech, Texas, and Minnesota, joined by Arizona, Georgia, Buffalo, Oklahoma, BYU, and Colorado.  Connecticut, Washington, and Tennessee just miss getting at large positions.

What If the Committee Were Using the Balanced RPI?

If the Committee were using the Balanced RPI, which does not have the NCAA RPI's problem of discrimination in relation to conferences and regions, Washington, California (which would move inside the Top 57 and thus be a candidate), and Tennessee would get at large selections and BYU, Oklahoma, and Buffalo (which would move outside the Top 57 and thus not be a candidate) would not.  

Tuesday, October 8, 2024

2024 ARTICLE 10: POST-WEEK-8 ACTUAL RATINGS AND UPDATED PREDICTIONS

Current Actual RPI Ratings, Ranks, and Related Information

The following tables show actual RPI ratings and ranks and other information based on games played through Sunday, October 6.  The first table is for teams and the second for conferences.

As an item of interest, during the non-conference phase of the season, the percentage of tie games was relatively low as compared to the historic percentage of entire season tie games.  Now that the conference phase of the season has begun, however, the percentage of tie games is increasing and appears likely to end up in the vicinity of the historic entire season percentage.  This suggests that the percentage of tie games tends to be higher for in-conference competition than for non-conference competition.  If one thinks of the conferences as groupings of relatively comparable teams and of the non-conference phase as allowing competition among less comparable teams, it makes sense there would be more ties during the conference season.

Note: Scroll to the right to see all the columns.





Predicted Team RPI and Balanced RPI Ranks, Plus RPI and Balanced RPI Strength of Schedule Contributor Ranks

The following table for teams and the next one for conferences show predicted end-of-season ranks based on the actual results of games played through October 6 and predicted results of games not yet played.  The predicted results of future games are based on teams' actual RPI ratings from games played through October 6.  I consider the current RPI ratings still to be speculative, but they should become better game result predictors each week as the season progresses.

In the table, ARPI 2015 BPs shows ranks using the NCAA's 2024 RPI Formula, and URPI 50 50 SoS Iteration 15 shows ranks using the Balanced RPI formula.





Predicted NCAA Tournament Automatic Qualifiers, Disqualified Teams, and At Large Selection Status, All for the Top 57 Teams

Starting this week, I will show the candidate groups for #1 through #4 seeds and for at large selections, placed in order based on the Women's Soccer Committee's historic decision patterns.

The first table below is for potential #1 seeds.  The #1 seeds always have come from the teams ranked #1 through 7 in the end-of-season RPI rankings, so the table is limited to the teams predicted to fall in that rank range.  The table is based on applying history-based standards to team scores in relation to 118 factors, all of which are related to the NCAA-specified criteria the Committee is required to use in making at large decisions.  For each factor, there is a standard that says, if a team met this standard historically, the team always has gotten a #1 seed.  I refer to this as a "yes" standard.  For most of the factors, there likewise is a standard that says, if a team met this standard historically, it never has gotten a #1 seed.  This is a "no" standard.  In the table, I have sorted the Top 7 RPI #1 seed teams in order of the number of yes standards they meet and then in order of the number of no standards.  For each of the other seed tables and for the at large table, I have followed a similar pattern.

As I said in the preceding paragraph, I use 118 factors for the yes decisions but fewer for the no decisions.  In the past, I have used all 118 for both yes and no decisions.  I am using fewer for the no decisions due to the NCAA this year having reduced the value of ties from 1/2 a win to 1/3 a win when computing the Winning Percentage portion of the RPI formula.  This change will result in almost all teams having lower RPI ratings than they have had in the past.  Each of the "no" factors I am not using this year is a factor that incorporates teams' RPI ratings.  For example, historically, no team with an RPI rating less than 0.6433 has gotten a #1 seed, so <0.6433 is the standard for a "no" #1 seed decision.  This year, however, with ratings as a whole being lower, that standard most likely is too high.  I can review and re-set all of the no standards that incorporate RPI ratings, but that is too big and time-consuming a task to do during the season, so it will have to wait until after the season is over.  In the meantime, the best approach is simply to not use the no standards that incorporate RPI ratings.  I still use, however, all the other no standards including those that incorporate RPI ranks (as distinguished from RPI ratings).
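The Winning Percentage change described above can be sketched as follows.  This computes only the Winning Percentage element of the RPI, with the tie value as a parameter so the old and new rules can be compared.

```python
def winning_percentage(wins, losses, ties, tie_value=1/3):
    """Winning Percentage element of the RPI.  Through 2023 a tie counted
    as half a win (tie_value=0.5); starting in 2024 the NCAA counts a tie
    as a third of a win (tie_value=1/3)."""
    games = wins + losses + ties
    return (wins + tie_value * ties) / games if games else 0.0

# A 10-5-3 team: roughly 0.639 under the old rule, 0.611 under the new
# one, illustrating why ratings as a whole are lower this year.
old = winning_percentage(10, 5, 3, tie_value=0.5)
new = winning_percentage(10, 5, 3)
print(round(old, 4), round(new, 4))
```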

Here is the #1 seed table:


In the table, the 1 Seed Total column shows the number of yes standards a team met.  The No 1 Seed Total shows the number of no standards it met.  The table suggests that currently, North Carolina and Wake Forest look like sure #1 seeds and Mississippi State looks like a strong possibility.  After that, it could be any of Duke, Penn State, Arkansas, or Iowa, whose order in the table is not necessarily the order of likely selection.


This is the table for #2 seeds.  The historic candidates are teams ranked #1 through 14.  The table includes the teams that are candidates for #1 seeds.


This is the table for #3 seeds, with the historic candidate group being teams ranked #23 or better.


This is the table for #4 seeds, with the historic candidate group being teams ranked #26 or better.

The final table is for at large selections, with the historic candidate group being teams ranked #57 or better:


There will be 34 at large teams.  One way to look at this table is to count down the list until you get to the 34th team that is not an Automatic Qualifier.  That takes you to Wisconsin, with 0 yes and 0 no standards.  Since Minnesota, below Wisconsin on the list, also is 0-0, the list suggests that currently the most likely at large teams are all of the teams from Georgia and above that are not Automatic Qualifiers, plus all but one of the 0-0 teams from Texas A&M to Minnesota.  The one question mark of the teams from Georgia and above is Santa Clara with 2 yes and 3 no standards.

Note:  If you expected to see a team on the list and it is not there, it is because the current prediction has the team ending with a rank poorer than #57.  Also, as you can see, the current prediction has SMU ending as the #53 RPI team but with a Winning Percentage below 0.500 and thus being disqualified from getting an at large position.