In this week's RPI Report, I want to point out two sets of data that the Women's Soccer Committee and coaches should be concerned about. Following that discussion, I'll post the regular weekly team, conference, and region tables. For those who want to go directly to the weekly tables, here is a link to this week's Excel workbook 2025 RPI Report After Week 2. (For those seriously interested in the current RPI Report information, I recommend downloading the workbook in an Excel format rather than using the weekly tables as reproduced below.)
Issue of Concern #1: Distribution of In-Region Ties, by Region
This table shows, for each of the four regional playing pools, the percentage of in-region games that are ties. The percentage is based on actual ties to date and tie likelihoods for games not yet played. The percentages in the table may be a little higher than they will be at the end of the season, but the placement of the regions in the table is consistent with what the placement has been historically.
Here is the problem the Committee and coaches should be concerned about:
As I have demonstrated elsewhere, the NCAA RPI historically has discriminated against teams from the West region, meaning that in games against teams from other regions, the West region teams on average have performed better than their ratings say they should have performed, i.e., are underrated. At the other end of the spectrum, the South region teams on average have performed more poorly than their ratings say they should have performed, i.e., are overrated. This doesn't mean all teams from the regions are under- or overrated, rather that on average teams from the regions are under- or overrated.
This year, the Committee made an NCAA RPI formula change that relates to this: In the computation of a team's Winning Percentage (half the effective weight of a team's NCAA RPI rating), the Committee reduced the value of a tie from half of a win to one-third of a win. In other words, it devalued ties. Since it is reasonable to assume that regions with higher parity will have a higher proportion of ties, the effect of the change is to devalue the ratings of teams from regions with a high level of parity.
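To make the mechanics concrete, here is a minimal sketch of how the tie devaluation plays out. This is an illustration only: it uses the simple form "winning percentage = (wins + tie value × ties) / games played" and hypothetical records, and it omits the home/away weighting that the actual NCAA RPI winning percentage also applies.

```python
def winning_pct(wins, losses, ties, tie_value):
    """Winning percentage with a configurable tie value.

    A tie counts as `tie_value` of a win; the denominator is total
    games played. Simplified illustration -- the actual NCAA RPI
    also weights results by home/away, which is omitted here.
    """
    games = wins + losses + ties
    return (wins + tie_value * ties) / games

# Two hypothetical teams over 18 games.
# Team A, from a high-parity region: 8 wins, 4 losses, 6 ties.
# Team B, tie-light: 10 wins, 6 losses, 2 ties.
a_old = winning_pct(8, 4, 6, tie_value=1/2)   # ties worth half a win
a_new = winning_pct(8, 4, 6, tie_value=1/3)   # ties devalued to a third
b_old = winning_pct(10, 6, 2, tie_value=1/2)
b_new = winning_pct(10, 6, 2, tie_value=1/3)
```

Under the old half-win value the two teams come out identical (11/18 each); under the one-third value the tie-heavy team drops further than the tie-light one, which is exactly how the change penalizes teams from high-parity regions.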
Also as I have demonstrated elsewhere, the West region historically has had the highest level of parity, followed by the Middle, then the North, and then the South. The historic proportion of in-region games that are ties is consistent with this, as are the proportions for this year in the above table.
Thus the effect of the change likely will be to devalue the ratings of teams from the West, followed by the Middle, to the benefit of teams from the North and especially from the South. This means that with the NCAA RPI already discriminating against teams from the West, the Committee's change has made the discrimination even worse.
The extent to which the change has made the discrimination worse is not large, as indicated by the following table:

This table summarizes how the rating systems perform at rating teams from a region in relation to teams from the other regions. It draws from the following data:
1. For each region, its teams' actual winning percentages in games against teams from the other regions.
2. For each region, its teams' expected winning percentages against teams from other regions based on teams' ratings as adjusted for home field advantage. [NOTE: Expected winning percentages are based on the rating differences between opponents as adjusted for home field advantage. The expected winning percentages are calculated using result probability tables derived from the results of all games played since 2010 and are highly reliable when applied across large numbers of games.]
3. The difference, for each region, between its actual winning percentage and its expected winning percentage. Teams from regions with higher actual than expected winning percentages are outperforming their ratings, in other words are underrated (discriminated against), whereas teams from regions with lower actual than expected winning percentages are overrated.
In the table, the High column shows the actual v expected winning percentage difference for the region whose teams most outperform their ratings when playing teams from other regions -- the West region. The Low column shows the difference for the region whose teams most underperform their ratings -- the South region. The Spread column shows the difference between the High and Low, which is a measure of the extent of the rating system's discrimination among regions. The Over and Under Total column shows the amount, for all four regions, by which the rating system misses a perfect match (no difference) between actual and expected winning percentage, which is another measure of the system's performance.
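The table's metrics can be sketched in a few lines. The per-region numbers below are hypothetical, chosen only to show how High, Low, Spread, and the Over and Under Total are derived from the actual-versus-expected differences; they are not the table's values.

```python
# Hypothetical actual and expected winning percentages for each
# region's teams in out-of-region games (illustrative numbers only).
regions = {
    "West":   {"actual": 0.540, "expected": 0.510},
    "Middle": {"actual": 0.505, "expected": 0.500},
    "North":  {"actual": 0.495, "expected": 0.505},
    "South":  {"actual": 0.460, "expected": 0.485},
}

# Positive difference = outperforming ratings (underrated);
# negative = underperforming (overrated).
diffs = {r: v["actual"] - v["expected"] for r, v in regions.items()}

high = max(diffs.values())                # most underrated region
low = min(diffs.values())                 # most overrated region
spread = high - low                       # extent of discrimination
over_under_total = sum(abs(d) for d in diffs.values())  # total miss
```

With these illustrative inputs, High is the West's +0.030, Low is the South's -0.025, the Spread is 0.055, and the Over and Under Total is 0.070. A perfectly non-discriminating system would drive both the Spread and the Total toward zero.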
As you can see from the table's comparison of the two systems, although the difference between the systems' performance is small, the Committee's 2024 change increased the discrimination among regions.
So, what was the Committee's rationale for the change? It gave two reasons: (1) Valuing ties as 1/3 of a win matches how conferences compute in-conference standings and how leagues compute standings in the larger soccer world; and (2) The Division 1 men have made the change.
Neither rationale holds water. (1) For conferences and leagues, in-conference and in-league standings raise no issue of different playing pools having different levels of parity. Thus how they count ties when determining standings is irrelevant to how to count ties in a national rating system for teams playing games in different geographic regions. (2) That the men have made the change is a reason ... why? It does not demonstrate in any way that the change is a good one for the women. The Committee should make its own decision based on how the change will affect the NCAA RPI as a rating system for the women's teams, not let the Men's Soccer Committee make the decision for it.
The question the Committee should have considered is whether the change makes the NCAA RPI a better system for rating teams properly in relation to each other when the teams are distributed around the nation and tend to play in different regional playing pools. They should have asked the NCAA statistics staff about this, and the staff should have advised the Committee that if the geographic playing pools have different levels of parity, then the change will punish teams from the pools with high parity and benefit stronger teams from the pools with low parity. Did the Committee ask the staff about this and did the staff give them this answer? I don't know, but I doubt it.
Issue of Concern #2: Proportions of Out-of-Region Games
As I showed above, the NCAA RPI discriminates against some regions and in favor of others. At one time, I thought this was due exclusively to differences in region strength and there not being a high enough proportion of out-of-region games for the system to work properly on a national basis. As it turns out, however, both Massey's and my Balanced RPI rating systems show only a small amount of discrimination in relation to region strength, and much less than the NCAA RPI. In other words, there are enough out-of-region games for those systems to avoid all but a little discrimination in relation to region strength. So the NCAA RPI's problem has not been driven mainly by there not having been enough out-of-region games (rather, as it has turned out, it is driven by how the NCAA RPI computes Strength of Schedule).
On the other hand, it is indisputable that there have to be "enough" out-of-region games for any rating system to work on a national basis. As the following tables show, no doubt due to the changing college sports economic landscape, this year there is a substantial reduction in the proportion of out-of-region games from what the proportion has been in the past (based on teams' published 2025 schedules, adjusted to take canceled games to date into account):
This represents a 28.0% across-the-board reduction in the proportion of out-of-region games. For the Middle region, the reduction is 18.0%, for the North 28.5%, for the South 30.2%, and for the West 31.7%. Given these reductions, a major question is whether they will significantly impair the ability of the NCAA RPI -- or any other rating system -- to properly rate teams on a national basis. This is a question the Committee and coaches should be thinking about.
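For clarity, the reduction figures above are relative reductions in the proportion itself, not percentage-point drops. A minimal sketch, using hypothetical proportions:

```python
def pct_reduction(old_prop, new_prop):
    """Relative reduction, in percent, from old_prop to new_prop.

    E.g., a proportion falling from 0.25 to 0.18 is a 28% reduction,
    even though the percentage-point drop is only 7 points.
    """
    return (old_prop - new_prop) / old_prop * 100

# Hypothetical: out-of-region share of a region's games falls
# from 25% of games to 18% of games.
reduction = pct_reduction(0.25, 0.18)   # about 28.0
```

The same function applies region by region: each region's historic out-of-region proportion goes in as `old_prop` and its 2025 proportion as `new_prop`.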
Weekly Team, Conference, and Region Tables
The following tables are based on the actual results of games played through Sunday, August 24, and predicted result likelihoods for games not yet played.
TEAMS
CONFERENCES
REGIONS