[NOTE: The tables below are different from the ones I originally published on November 10, 2024. This is because I discovered a programming error in the Excel workbook I use to predict what the Committee will decide on NCAA Tournament seeds and at large selections. The error involved the calculations of the RPI bonuses and penalties for good and poor results. The error now is fixed. The tables now are as of November 11, 2024.]
Here is what my computer comes up with as NCAA Tournament seeds and at large selections, and candidate teams not getting at large selections, if the Committee follows its historic patterns. Below the table, I have a list of the final at large teams "in" and "out." The system has Liberty and Boise State getting at large positions rather than California and Washington. I would not be surprised to see Liberty and Boise State out and California and Washington in.
In the NCAA Seed or Selection column, the seeds 4.5, 4.6, 4.7, and 4.8 are the 5, 6, 7, and 8 seeds. The 5s are unseeded Automatic Qualifiers. The 6s are unseeded at large teams. The 7s are RPI Top 57 teams not getting at large positions.
Why don’t you trust your spreadsheets that say UMASS has a higher ARPI than several 6s and all of the 7s? UMASS is not even mentioned. AND why is Boise State at 54 a 6?! I thought your ARPI took all of your opinions out of the game. Why don’t you trust your algorithm? Or why don’t you just tell us who you think should go?
Thanks for the questions. The ARPI is the NCAA's Adjusted RPI, which is the rating system the NCAA requires the Women's Soccer Committee to use. Like any mathematical rating system, it is not precise. This is especially true for Division I women's soccer, since the numbers of games teams play are not sufficient for even a really good rating system to do a precise job.
My expertise is not analyzing who should or should not be in the Tournament, in terms of how good the teams really are. Rather, my expertise is in what the Committee's historic decision-making patterns are and, based on that, what their likely decisions are this year given the data the NCAA requires them to use in making those decisions. I limit my work to that because I want interested readers to understand how the Committee works and makes its decisions. This can be helpful particularly to coaches in their non-conference scheduling work and in understanding their prospects as they work their way through the season.
But your last in and last out are kinda wrong
They were pretty close. The last in are the ones that my system says historically would have been in but are borderline. The last out are the ones that are borderline out. So if one were expecting a deviation from my prediction, the last in group might be out and the last out group might be in.
I could have done a slightly expanded last in and last out list, and all of the teams my prediction put in but that ended up out would have been on the last in side. And all of the teams my prediction put out but that ended up in would have been on the last out side. That means the "fuzzy" area of the bubble, where it wasn't sure who would be in and who out, included all of the teams predicted "in" but ending up "out" and also all of the teams predicted "out" but ending up "in."
So, the Committee did not do anything with at large selections that represents a big deviation from its historic patterns. Or, put differently, those decisions were within the range of what one reasonably might expect.