Point ranking

This rating uses a linear model to calculate team score effects. It is basically a model in which each game contributes two data points, and hence two equations. For example, a game with the score Nixon 56, Madoff 43 would yield two equations:

56 = overall mean + Nixon effect – Madoff effect + error

43 = overall mean + Madoff effect – Nixon effect + error

The linear model finds the set of team effects that minimizes the sum of the squared error terms.

Subtracting the second equation from the first cancels the overall mean and, ignoring the error terms, shows that the difference between Nixon and Madoff is

56 – 43 = 13 = 2 * Nixon effect – 2 * Madoff effect.

In actual practice, I collapse the two observations for each game into a single equation modelling the margin:

Margin (A over B) = Team A effect – Team B effect + error

The resulting ratings then represent actual point differences: the difference between two teams' fitted effects is the model's expected margin between them. The linear regression calculation is done using R. I re-center the team effects so that the mean team effect is 0. The team effects given are the exact fitted estimates from the linear regression, and the standard error of each estimate is typically about 9 or 10 points.
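For the curious, a minimal sketch of such a fit in R follows. The data frame `games` and its columns `team_a`, `team_b` (character team names), and `margin` (team A's score minus team B's score) are hypothetical names chosen for illustration; this shows the technique, not my actual script.

```r
# Sketch of the margin model. Assumes a data frame `games` with
# columns team_a, team_b (character) and margin (team_a score
# minus team_b score).
teams <- sort(unique(c(games$team_a, games$team_b)))

# Design matrix: +1 for team A, -1 for team B in each game.
X <- matrix(0, nrow = nrow(games), ncol = length(teams),
            dimnames = list(NULL, teams))
for (i in seq_len(nrow(games))) {
  X[i, games$team_a[i]] <-  1
  X[i, games$team_b[i]] <- -1
}

# No intercept: Margin (A over B) = Team A effect - Team B effect + error.
fit <- lm(games$margin ~ X + 0)

# Effects are only identified up to an additive constant (lm aliases
# one team's coefficient to NA), so re-center them to a mean of 0.
effects <- coef(fit)
names(effects) <- colnames(X)   # strip lm's "X" prefix from the names
effects[is.na(effects)] <- 0    # the aliased reference team
effects <- effects - mean(effects)
```

With the fitted effects in hand, a difference such as `effects["Nixon"] - effects["Madoff"]` would then be the model's expected margin in a Nixon-Madoff game.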

The games used for the calculation are those in my dataset, provided both teams in a game have at least two games in the dataset. Teams with just one game are eliminated, and the elimination is repeated until no such teams remain, since dropping one team can leave its opponents with a single game. This is a problem for teams with several games outside the dataset: although the two-game rule allows inclusion of some out-of-state teams (I don't publish their results in the rankings), the quality of their data is lower. The rating is not predictive. It is a summarization of teams' performance to date. Every game is weighted the same: recent games have the same effect as games from the first week of December.
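The repeated elimination can be done with a simple loop; a sketch in R, against the same hypothetical `games` frame as above:

```r
# Repeatedly drop teams with fewer than two games: removing a team
# can leave an opponent with only one game, so iterate to a fixed point.
repeat {
  counts  <- table(c(games$team_a, games$team_b))
  singles <- names(counts)[counts < 2]
  if (length(singles) == 0) break
  games <- games[!(games$team_a %in% singles |
                   games$team_b %in% singles), ]
}
```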

The team effects are determined from the data by a standard statistical technique; I supply no weights of my own. I don't say teams from Class 4A are X points better than teams from Class 3A, etc. I don't even supply a term for class. The model just deals with teams.

For those not comfortable with mathematical techniques, the algorithm is like looking at comparative scores between teams. We all know that by using PARTICULAR comparative scores, it is possible to "prove" that any team could beat any other team. The difference is that the linear regression looks at EVERY comparative-score possibility simultaneously and calculates the best average comparative-score rating.

The technique allows teams and clusters of teams (usually leagues) to separate from each other, and the preponderance of league games should not dilute that separation. In several other ranking algorithms, league games introduce a mass of data that drives everybody toward the mean. Although I would like to have a lot of games between diverse sets of teams, the linear model does allow separation with what is available. The more games the better, but even a single game between clusters permits separation. One must be mindful when making inferences based on just a few games, however.
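To make the separation claim concrete, here is a made-up toy schedule: two leagues that play only internally, plus a single crossover game. The team names and margins are invented for illustration.

```r
# Toy schedule: league A (A1, A2) and league B (B1, B2) split their
# internal games, and B1 beats A1 by 20 in the lone crossover game.
games <- data.frame(
  team_a = c("A1", "A2", "B1", "B2", "B1"),
  team_b = c("A2", "A1", "B2", "B1", "A1"),
  margin = c(3, 3, 5, 5, 20)
)
# Fitting the margin model above to these games yields effects of
# about -10 for A1 and A2 and +10 for B1 and B2: the balanced league
# games cancel out, and the single crossover game separates the two
# clusters by 20 points.
```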

Winning a game by 40 points rather than by 2 is rewarded. Winning by 1 rather than losing by 1 is just a 2-point swing to the model, the same as winning by 40 rather than by 38 (which doesn't make much difference). In some algorithms, such as RPI, winning is the key differentiator. Maybe that is a good thing about RPI; winning the game is, after all, one of the objectives.