Each conference’s coaches do pre-season rankings of the teams within their conference. Women’s soccer expert Chris Henderson (https://twitter.com/chris_awk) likewise does pre-season conference rankings. And so do I.
Chris Henderson incorporates a number of factors into his rankings. There is a point system for each factor:
Returning starters -- Starters are defined as last year’s ten field players with the most minutes, plus the goalkeeper with the most minutes.
Returning award winners from last year -- This rewards a team for top returning talent. There is a sliding scale, with higher-value awards earning more points.
Returning award winners from the two years before last year -- Again, this rewards a team for top returning talent on a sliding scale, allowing the system to better account for superstar-level players.
CoachRank -- This is based on Henderson’s system for ranking coaches. A team whose coach has a high CoachRank score receives bonus points and a team with a low CoachRank score receives penalty points.
Recruiting -- This is based on the TopDrawer Soccer player ratings, since no better system is available right now. For transfers and international players, Henderson assigns his own player ratings.
Experience -- If a team has fewer than six returning starters, it gets penalized, with the penalty increasing as the number of returning starters decreases.
"Bust" potential -- If the value of last year’s talent is much higher or lower than the two preceding years, the team gets penalized. This is to factor in teams that had a "fluke" recruiting year in 2019, for better or for worse.
Talent gap -- This penalizes teams that were far worse than much of the rest of the conference last year. It assigns penalty points to teams with a league goal differential worse than -1.0 per conference game, with the penalty escalating at each additional whole number of negative goal differential.
The scores for all of these factors are combined using a weighted formula and teams are ranked based on their overall scores.
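To make the mechanics concrete, here is a minimal sketch of how such a weighted combination could work. Henderson's actual point values and weights are not public here, so every number below is an illustrative placeholder, not his formula:

```python
# Illustrative sketch of combining per-factor scores into a single
# pre-season score. The factor names mirror Henderson's list above, but
# every weight and point value is a made-up placeholder.

WEIGHTS = {
    "returning_starters": 1.0,        # assumed weight
    "award_winners_last_year": 0.8,   # assumed weight
    "award_winners_prior_years": 0.5,
    "coach_rank": 0.6,
    "recruiting": 0.9,
    "experience": 1.0,                # penalty factor (scores <= 0)
    "bust": 0.7,                      # penalty factor
    "talent_gap": 0.7,                # penalty factor
}

def overall_score(factor_scores):
    """Weighted sum of per-factor scores; penalties enter as negatives."""
    return sum(WEIGHTS[f] * s for f, s in factor_scores.items())

def rank_conference(teams):
    """teams: {team name: {factor: score}} -> team names, best first."""
    return sorted(teams, key=lambda t: overall_score(teams[t]), reverse=True)

# Toy example with two made-up teams:
teams = {
    "Team A": {"returning_starters": 8, "award_winners_last_year": 4,
               "award_winners_prior_years": 2, "coach_rank": 1.5,
               "recruiting": 5, "experience": 0, "bust": 0, "talent_gap": -2},
    "Team B": {"returning_starters": 6, "award_winners_last_year": 1,
               "award_winners_prior_years": 0, "coach_rank": -1,
               "recruiting": 3, "experience": -2, "bust": -1, "talent_gap": 0},
}
print(rank_conference(teams))  # ['Team A', 'Team B']
```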
My rankings likewise use a mathematical system, but a completely different one. It looks at teams’ historic rankings and determines their ranking trends. Based on trend formulas, it projects rankings for the coming year and converts those rankings to RPI ratings. Using the coming year’s conference schedule, including game locations, it then compares each game’s opponents’ ratings, adjusted for home field advantage, and based on the comparison assigns a game result (either a win for one team or a tie). With the results of all the conference games in hand, it awards 3 points for a win and 1 for a tie, computes the conference points scored by each team, and ranks the teams accordingly.
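Here is a minimal sketch of that simulation step, assuming projected RPI-scale ratings are already in hand. The home-field bonus and the rating gap that counts as a tie are assumed parameter values for illustration, not my system's actual numbers:

```python
# Sketch of the simulation step described above: given projected
# RPI-scale ratings and the conference schedule with game locations,
# assign each game a result and compute the conference standings.

HOME_BONUS = 0.0148   # assumed rating boost for the home team
TIE_BAND = 0.005      # assumed: gaps this small are scored as ties

def simulate_standings(ratings, schedule):
    """ratings: {team: projected RPI}; schedule: [(home, away), ...]."""
    points = {team: 0 for team in ratings}
    for home, away in schedule:
        gap = (ratings[home] + HOME_BONUS) - ratings[away]
        if abs(gap) <= TIE_BAND:
            points[home] += 1   # tie: 1 point each
            points[away] += 1
        elif gap > 0:
            points[home] += 3   # win: 3 points; loss: 0
        else:
            points[away] += 3
    # Rank by conference points, highest first.
    return sorted(points.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with made-up ratings and a tiny round-robin:
ratings = {"A": 0.6200, "B": 0.6050, "C": 0.5800}
schedule = [("A", "B"), ("B", "C"), ("C", "A")]
print(simulate_standings(ratings, schedule))  # A 6, B 3, C 0
```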
I do not know how each coach ranks his or her conference’s teams, but I assume the coaches do it based on all the knowledge they have about each team in the conference, including direct in-game observations.
The following table shows how our three sets of rankings compare for the four conferences having their conference regular season competition this Fall. For each conference, the teams are in the order the coaches ranked them:
The three right-hand columns show the ranking difference, for each team, between the Henderson ranks and the coach ranks, between my ranks and the coach ranks, and between the Henderson ranks and my ranks.
How accurate are such pre-season rankings? For the two most recent seasons, here is how close each set of pre-season rankings came, on average, to teams’ actual end-of-season conference ranks:

2018: Coaches, on average, were within 2.03 positions of teams’ actual ranks; Henderson was within 2.20; I was within 2.24.

2019: Coaches were within 2.16 positions; Henderson was within 2.22; I was within 2.30.
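For clarity, here is a sketch of the accuracy calculation, assuming the metric is the average absolute difference between each team’s pre-season rank and its actual end-of-season rank (the ranks in the example are made up):

```python
# Mean absolute rank difference between pre-season and final ranks.

def avg_rank_error(predicted, actual):
    """Average of |pre-season rank - actual rank| across all teams."""
    return sum(abs(predicted[t] - actual[t]) for t in predicted) / len(predicted)

predicted = {"A": 1, "B": 2, "C": 3, "D": 4}
actual    = {"A": 2, "B": 1, "C": 4, "D": 3}
print(avg_rank_error(predicted, actual))  # 1.0 for this toy example
```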
As you can see, it is very hard to do pre-season rankings of teams within a conference with a high degree of accuracy. Although the coaches -- with their direct experience and detailed knowledge of the other conference teams, sometimes accumulated over many years -- do the best with the pre-season rankings, they really do little better than the two mathematical systems.
This illustrates that there is more than one way to do reasonable advance rankings of teams. Each of the above three systems brings its own perspective and contributes something to the picture of how teams are likely to do, yet none is able to give a highly accurate picture of where teams will end up. At least when it comes to Division I women’s soccer, even very good predictive models leave a good deal of uncertainty. Given that the three models here -- each of which is quite sophisticated in its own way -- are quite close in their degree of accuracy, there is a good possibility that they are approaching the limits of how close predictive models can come for the rankings of teams within their conferences.