In this article, I'll show what the Women's Soccer Committee's seeding and at large selection decisions likely would have looked like if the Committee had used the Balanced RPI and will compare them to the Committee's actual decisions. After doing that, I will discuss in detail why there are differences.
Decisions with the Balanced RPI as Compared to the Committee's Actual Decisions
The table below shows the Committee's actual seeding and at large selections as compared to what they likely would have been using the Balanced RPI. The Committee does not always do what is "likely," but it comes close, so this should give a good picture of the differences between the two rating systems. At the top of the table is a key to the table's two right-hand columns. In the table's left-hand column, the green highlighting is for teams that would get at large selections using the Balanced RPI but that did not actually get them with the Committee using the NCAA RPI. The orange highlighting is for teams that would not get at large selections using the Balanced RPI but that actually got them. The lime highlighting is for teams that would have been candidates (Top 57) for at large selections using the Balanced RPI, though not selected, but that were not even candidates using the NCAA RPI. The salmon highlighting is for teams that were not candidates for at large selections using the Balanced RPI but that were candidates, though not selected, using the NCAA RPI.
This table shows that the team whose actual winning percentage was most above its expected winning percentage was 9.0% above, and the team whose actual winning percentage was most below its expected winning percentage was 11.2% below. The spread between these two extremes, 20.2%, is an indicator of how well the NCAA RPI measured teams' performance.
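To illustrate the arithmetic, here is a rough Python sketch of how that spread can be computed from per-team performance figures. The data structure is hypothetical, not the actual calculation files:

```python
# Hypothetical sketch: summarize how far teams' actual winning percentages
# stray from their rating-based expected winning percentages.
def performance_spread(actual_minus_expected: dict[str, float]) -> float:
    """actual_minus_expected maps team -> (actual WP - expected WP) as a decimal."""
    most_above = max(actual_minus_expected.values())   # e.g. +0.090 ( 9.0% above)
    most_below = min(actual_minus_expected.values())   # e.g. -0.112 (11.2% below)
    # The spread between the two extremes is the indicator discussed above.
    return most_above - most_below                      # e.g. 0.202 (20.2%)
```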
The different states' teams play a majority or plurality of their games within one of four geographic regions. The next table is similar to the teams and conferences tables, but looks at the four regions, in non-region games:
The teams dropping out of the Top 57 as a result of a shift to the Balanced RPI are, in order of NCAA RPI rank: Fairfield (AQ), Samford (AQ), Rhode Island, Charlotte, James Madison, Old Dominion, Army (AQ), Lipscomb (AQ), UNC Wilmington, and Texas State (AQ).
The teams moving into the Top 57 as a result of the shift are, again in order of NCAA RPI rank: Cal State Fullerton, Pepperdine (AQ), Kansas State, Seattle, Arizona State, Southern California, Portland, Houston, Santa Clara, and Nebraska.
The following table provides data related to why these changes occur:
For each team that is "in" or "out" of the Top 57 in a shift to the Balanced RPI, the table shows, in the five columns on the right, how teams' actual winning percentages compare with their expected winning percentages. It shows this both for the NCAA RPI ratings and for the Balanced RPI ratings.
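The expected winning percentages come from the ratings themselves. As a hedged illustration of the general idea, here is a minimal sketch; the win_probability function below is an assumed placeholder standing in for an analysis of historical results at each rating difference, and is not part of either rating formula:

```python
# Hypothetical sketch: compare a team's actual winning percentage with the
# winning percentage its rating "expects" against the opponents it played.
# win_probability is an assumed helper mapping a rating difference to a
# likelihood of winning.

def expected_wp(team_rating, opponent_ratings, win_probability):
    """Average per-game win likelihood implied by the ratings."""
    probs = [win_probability(team_rating - opp) for opp in opponent_ratings]
    return sum(probs) / len(probs)

def performance_vs_rating(actual_wp, team_rating, opponent_ratings, win_probability):
    """Positive: the team did better than its rating says it should have."""
    return actual_wp - expected_wp(team_rating, opponent_ratings, win_probability)
```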
Why Are There Differences? Digging Down Another Level
The NCAA RPI and the Balanced RPI have two key components: a team's Winning Percentage (WP) and the team's Strength of Schedule (SoS). Within each system's formula, each component has a 50% effective weight.
A team's SoS is intended to measure its opponents' strengths. The two systems measure opponents' strengths differently.
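For reference, here is a minimal sketch of the unadjusted NCAA RPI structure; the women's soccer RPI also applies bonus and penalty adjustments and excludes a team's own games against an opponent when computing that opponent's record, refinements omitted here. WP is a team's winning percentage, OWP its opponents' winning percentage, and OOWP its opponents' opponents' winning percentage.

```python
# Minimal sketch of the unadjusted NCAA RPI; bonus/penalty adjustments and
# other refinements are omitted.
def ncaa_sos(owp: float, oowp: float) -> float:
    """The RPI's strength-of-schedule element: OWP weighted twice as heavily as OOWP."""
    return (2.0 * owp + oowp) / 3.0

def ncaa_rpi(wp: float, owp: float, oowp: float) -> float:
    """Nominal weights are 25/50/25; the effective weights of WP and SoS are roughly 50/50."""
    return 0.25 * wp + 0.75 * ncaa_sos(owp, oowp)   # = 0.25*WP + 0.50*OWP + 0.25*OOWP
```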
For each rating system, it is possible to calculate a team's rating as an SoS contributor to its opponents -- as distinguished from its actual RPI rating. It then is possible to determine a team's rank as an SoS contributor to its opponents.
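Under the simplified formula above, a team contributes to an opponent's SoS through its own winning percentage (which feeds the opponent's OWP) and its opponents' winning percentage (which feeds the opponent's OOWP), so its value as an SoS contributor is roughly (2 × WP + OWP) / 3 rather than its own RPI. Here is a rough sketch, with hypothetical data shapes, of how one can rank teams both ways and measure the gap between the two rankings:

```python
from statistics import mean, median

def sos_contribution(wp: float, owp: float) -> float:
    """Approximate value a team adds to an opponent's SoS under the unadjusted RPI."""
    return (2.0 * wp + owp) / 3.0

def ranks(values: dict[str, float]) -> dict[str, int]:
    """Rank 1 = highest value."""
    ordered = sorted(values, key=values.get, reverse=True)
    return {team: i + 1 for i, team in enumerate(ordered)}

def rank_gap_summary(ratings: dict[str, float], contributions: dict[str, float]):
    """Average and median absolute gap between rating rank and SoS-contributor rank."""
    r, c = ranks(ratings), ranks(contributions)
    gaps = [abs(r[t] - c[t]) for t in ratings]
    return mean(gaps), median(gaps)
```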
A team's rank as an SoS contributor should be the same as its RPI rank. For the NCAA RPI, however, it isn't. In fact, for the NCAA RPI, the average difference between a team's RPI rank and its rank as an SoS contributor is 31.3 rank positions, with a median difference of 24 positions. I designed the Balanced RPI, using the RPI as a starting point, to eliminate this disparity between RPI ranks and SoS contributor ranks. As a result, for the Balanced RPI, the average difference between a team's RPI rank and its rank as an SoS contributor is 0.8 rank positions, with a median difference of 0 positions. In simple terms, for the NCAA RPI there are significant differences between a team's NCAA RPI rank and its rank as an SoS contributor to its opponents; for the Balanced RPI, the two essentially are the same.
For the "in" and "out" teams, the following table shows what I described in the preceding paragraph:
In the table, start by looking at the three columns Opponents NCAA RPI Average Rank, Opponents NCAA RPI Average Rank as SoS Contributor, and NCAA RPI Difference. In the Difference column, a negative number means that the team's opponents' average rank as SoS contributors is poorer than their average actual NCAA RPI rank. In other words, the NCAA RPI formula understates the team's SoS -- it discriminates against the team. A positive number means that the team's opponents' average rank as SoS contributors is better than their average actual NCAA RPI rank. In other words, the NCAA RPI formula overstates the team's SoS -- it discriminates in favor of the team.
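Put another way, the Difference column can be thought of as the opponents' average actual rank minus their average rank as SoS contributors. A hedged sketch of that calculation, with hypothetical inputs (in practice the ranks come from the full ratings):

```python
from statistics import mean

def sos_rank_difference(opponent_actual_ranks: list[int],
                        opponent_contributor_ranks: list[int]) -> float:
    """Opponents' average actual rank minus their average rank as SoS contributors.

    Negative: the opponents count for less in the team's SoS than their actual
    ranks warrant, so the formula understates the team's strength of schedule.
    Positive: the opposite, so the formula overstates it.
    """
    return mean(opponent_actual_ranks) - mean(opponent_contributor_ranks)
```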
At the bottom of the table, in the NCAA RPI Difference column, you can see the average differences for the "in" and "out" teams. The average difference for the "in" teams is -27 and for the "out" teams is -7. For the "out" teams, this means the NCAA RPI discriminates against them some. But for the "in" teams, the NCAA RPI discriminates against them almost four times as much as for the "out" teams.
The last three columns on the right are similar, but for the Balanced RPI. There, for the "in" teams, the average difference is -1 and for the "out" teams it is 1. In other words, using the Balanced RPI there is virtually no discrimination between the "in" and "out" teams.
For the NCAA RPI, this high level of discrimination against the "in" teams explains why the "in" teams' average performance is better than their ratings say it should be, including in relation to the "out" teams even though the "out" teams experience some discrimination. And for the Balanced RPI, the lack of discrimination explains why the "in" and "out" teams' performance is close to what their ratings say it should be.
Why Are There Differences? Digging Down One More Level
There is more to see, however, in the preceding table. If you focus on the conferences and regions of the "in" and "out" teams, you will see patterns. Most of the "in" teams are from the West region, and those not from the West are from the Power 4 conferences. All of the "out" teams are from mid-major conferences and from the North and South regions. Why are we seeing these patterns?

The following table shows conferences' actual winning percentages in non-conference games as compared to their NCAA RPI expected winning percentages, with the conferences in order from those most discriminated against at the top to those most discriminated in favor of at the bottom:
As you can see, stronger conferences and conferences from the West tend to be in the upper part of the table - the most discriminated against conferences. Compare this with the following table for the Balanced RPI:
In the Balanced RPI table, there still are differences between conferences' actual performance and their expected performance. But the differences are smaller overall and are less tied to conference strength and geographic region than in the NCAA RPI table.
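The conference figures in these two tables reflect the same actual-versus-expected comparison as the team-level figures, aggregated over each conference's non-conference games. A rough sketch of that aggregation, with a hypothetical record layout:

```python
from collections import defaultdict

def conference_performance(games):
    """games: iterable of (conference, actual_result, expected_result) for each
    non-conference game, where results are 1 for a win, 0.5 for a tie, 0 for a
    loss, and expected_result is the rating-implied likelihood-weighted result.
    Returns conference -> actual winning % minus expected winning %."""
    actual = defaultdict(float)
    expected = defaultdict(float)
    count = defaultdict(int)
    for conference, act, exp in games:
        actual[conference] += act
        expected[conference] += exp
        count[conference] += 1
    return {c: (actual[c] - expected[c]) / count[c] for c in count}
```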
In this table, the key columns are the Conferences NCAA RPI Rank and the Conference Teams Opponents NCAA RPI Ranks Less NCAA RPI SoS Contributor Ranks Difference columns. As you can see, the NCAA's way of calculating SoS discriminates heavily against teams from stronger conferences and in favor of teams from weaker conferences.
Compare this to the similar table for the Balanced RPI:
Here, the conferences are in the same order as in the preceding table. You can see that for the Balanced RPI, conference teams' opponents' ranks and their ranks as SoS contributors are essentially the same for all conferences. This is one of the underlying causes of the "in" and "out" changes when shifting from the NCAA RPI to the Balanced RPI.
As you can see, the numbers in the Difference column are in order of region strength. The NCAA RPI's discrimination in how it values teams' strengths of schedule exactly matches region strength: the stronger the region, the greater the discrimination. In particular, the NCAA RPI discriminates significantly against the West region (and in favor of the North). Compare this to the table for the Balanced RPI:
Here, the regions are in the same order as in the preceding table. You can see that for the Balanced RPI, region teams' opponents' ranks and their ranks as SoS contributors are essentially the same. This is another of the underlying causes of the "in" and "out" changes when shifting from the NCAA RPI to the Balanced RPI.