# My AFL-Elo model

Over the last few years I have followed a lot of the work done by FiveThirtyEight, particularly their attempts to model and predict sport. More recently I have discovered there is a community of people trying to do similar things for the AFL, including The Arc, Squiggle, Matter of Stats and Hurling People Now.

Many of these modelling projects are based around the Elo system. If you haven't heard of it before, Elo is a rating system originally designed for chess by a Hungarian-American physicist. In its simplest form each player (or team) is assigned a rating. When a match is played, a win probability can be estimated from the difference between the two ratings. The ratings are then adjusted based on the result, in such a way that unexpected results cause bigger changes than results close to what was predicted. The model is relatively naive and simple to implement (no knowledge of the players or teams themselves is required, just match results) but it can still produce good predictions.
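The core mechanics fit in a few lines. Here is a minimal sketch of the classic chess formulation in Python (this is my own illustration, not code from any of the projects above; the 400-point scale and `k = 32` are the traditional chess defaults, not the values used in my model):

```python
def expected_score(rating_a, rating_b):
    """Probability that A beats B under the classic Elo curve."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating, expected, actual, k=32):
    """Move a rating towards the observed result (1 = win, 0 = loss).
    The bigger the surprise, the bigger the adjustment."""
    return rating + k * (actual - expected)

# A stronger team is favoured, so an upset loss costs it more points
# than an expected loss would.
p_win = expected_score(1560, 1480)
new_rating = update(1560, p_win, 0)
```

Note that the update is zero-sum: whatever one side gains, the other loses, so the average rating across all teams stays constant.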

Given this I thought it would be a good place to start. My version of the model is closely based on the one described by The Arc here. There were a few different things I wanted to try but (as always) everything took longer than I planned, so what I have done in the end is very similar. The one area where I have done things differently is the process used to select the parameters of the model. This part wasn't really described in the post on The Arc, so I was left to my own devices. Here are brief descriptions of the parameters, but if you are interested I suggest you check out the outline of the model on The Arc, which has a lot more detail:

• New team rating - The starting rating for new teams entering the competition (Gold Coast and GWS). Original teams start with a rating of 1500.
• New season adjustment - The amount ratings are regressed towards the mean at the beginning of a new season.
• HGA alpha - The weighting given to travel distance when calculating home ground advantage (HGA).
• HGA beta - The weighting given to ground experience when calculating HGA.
• p - Controls how win probabilities are converted to margins.
• k - Controls how differences between predicted and actual results affect ratings. Greater values cause greater changes, meaning the model reacts more quickly to recent results but is also less stable. In many ways this is the critical parameter of an Elo model. This version of the model uses three different values:
  • Early - Used for the first five rounds of the regular season
  • Normal - Used for the remainder of the regular season
  • Finals - Used for finals matches
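The `p` and `k` parameters are the ones that act every match. As a rough sketch of how they might fit together (my own illustration in Python, not code from the aflelo package; the logistic margin mapping and the default values shown are assumptions based on the descriptions above):

```python
import math

def margin_to_result(margin, p=0.0464):
    """Squash an actual points margin into a 0-1 'result' using a
    logistic curve whose slope is controlled by p."""
    return 1.0 / (1.0 + math.exp(-p * margin))

def prob_to_margin(prob, p=0.0464):
    """The inverse mapping: turn a predicted win probability into a
    predicted points margin."""
    return math.log(prob / (1.0 - prob)) / p

def update_rating(rating, predicted_prob, actual_margin, k=62, p=0.0464):
    """Scale the surprise (actual result minus predicted probability)
    by k, the responsiveness parameter."""
    return rating + k * (margin_to_result(actual_margin, p) - predicted_prob)
```

Under this formulation a smaller `p` means each extra point of margin matters less, and the round-dependent `k` values simply swap in different responsiveness at different stages of the season.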

To select these parameters I chose to use a genetic optimisation algorithm, partly because it can potentially explore a wider parameter space, but also because I think they are cool. For this we need a measure of fitness to optimise. For sport predictions there are generally two things we want to know: who is going to win, and by how much. These are often best estimated using different sets of parameters, so I ran the optimisation procedure three times: once optimising for win prediction accuracy, once for the mean absolute error in predicting the margin, and once for a 50/50 balance between the two. Each optimisation procedure was run for 100 generations with 100 individuals in each generation, training the model on all AFL games from 1997 to 2016 and assessing performance on the games from 2000 to 2016. This leaves the 2017 season as a validation set for checking the selected parameters. Here are the best-performing parameter sets from each of the optimisations, compared to the default parameters based on The Arc:

| Parameter | Default | Margin | Balanced | Prediction |
| --- | ---: | ---: | ---: | ---: |
| NewTeamRating | 1090 | 1292 | 1284 | 1106 |
| HGA_Alpha | 6 | 1.33 | 2.89 | 2.05 |
| HGA_Beta | 15 | 12.89 | 2.1 | 5.68 |
| p | 0.0464 | 0.027 | 0.0204 | 0.078 |
| k_Early | 82 | 92 | 92 | 55 |
| k_Normal | 62 | 62 | 42 | 38 |
| k_Finals | 72 | 33 | 80 | 43 |
| Margin 2016 | 29.9 | 29.82 | 29.76 | 32.51 |
| Predict 2016 | 0.68 | 0.68 | 0.68 | 0.69 |
| Margin 2017 | 29.09 | 28.94 | 29.23 | 30.18 |
| Predict 2017 | 0.61 | 0.63 | 0.61 | 0.62 |
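To give a flavour of the optimisation approach, here is a toy genetic algorithm in Python (my own illustration, not the actual procedure I ran; in the real version the fitness function evaluated the Elo model's margin error or prediction accuracy over the training seasons, and all names and settings here are hypothetical):

```python
import random

def genetic_optimise(fitness, bounds, pop_size=100, generations=100,
                     elite=10, mut_sd=0.05, seed=1):
    """Minimise `fitness` over a box of parameter bounds using elitism,
    uniform crossover and Gaussian mutation."""
    rng = random.Random(seed)
    # Start with a random population of parameter vectors.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:elite]  # keep the fittest individuals unchanged
        children = []
        while len(children) < pop_size - elite:
            mum, dad = rng.sample(parents, 2)
            # Uniform crossover: each gene comes from either parent.
            child = [m if rng.random() < 0.5 else d for m, d in zip(mum, dad)]
            # Gaussian mutation, clipped back inside the bounds.
            child = [min(max(g + rng.gauss(0, mut_sd * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

Because the elite individuals survive unchanged, the best fitness found can never get worse from one generation to the next.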

Based on the 2017 results I decided to go with the Margin model. Despite being optimised for margin accuracy, it also performed best at predicting results in 2017. This might suggest that the optimisation procedure is not ideal, but that is a problem for another day… Encouragingly, all three of my models outperform the defaults, which suggests that my results will be somewhere in the range of The Arc's, and I am more than happy with that.

If you are interested in how I have done things, I have made an aflelo R package which you can install from GitHub, and my analysis and predictions for each round will be available here.

Now that I have a model I can use it to make predictions about the 2018 season!

# Round 5

## Summary

| Team | Rating | Change | Points | Percentage | Proj Rating | Proj Points | Top 2 | Top 4 | Top 8 |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Richmond | 1570 | 19 | 12 | 130 | 1555 | 56.5 | 28.2 | 49.8 | 78.9 |
| Sydney | 1561 | -6 | 12 | 108 | 1546 | 56.9 | 27.9 | 49.4 | 79.0 |
| GW Sydney | 1558 | 0 | 12 | 140 | 1544 | 56.3 | 27.5 | 48.7 | 79.4 |
| West Coast | 1556 | 21 | 12 | 136 | 1545 | 54.7 | 23.0 | 42.7 | 74.9 |
| Hawthorn | 1553 | 34 | 12 | 127 | 1539 | 56.3 | 26.9 | 48.2 | 78.7 |
| Geelong | 1542 | 10 | 8 | 109 | 1532 | 50.0 | 12.0 | 27.6 | 61.3 |
| Collingwood | 1530 | 44 | 8 | 107 | 1521 | 49.6 | 11.1 | 26.8 | 59.1 |
| North Melbourne | 1497 | 28 | 8 | 134 | 1499 | 43.2 | 4.7 | 12.6 | 38.4 |
| Essendon | 1494 | 18 | 8 | 99 | 1499 | 40.7 | 3.0 | 8.5 | 28.2 |
| Melbourne | 1493 | -34 | 8 | 98 | 1495 | 43.1 | 3.8 | 11.3 | 35.5 |
| Western Bulldogs | 1463 | 6 | 4 | 72 | 1470 | 36.0 | 0.7 | 3.2 | 16.3 |
| Fremantle | 1458 | 0 | 8 | 89 | 1466 | 39.3 | 1.8 | 5.9 | 23.3 |
| St Kilda | 1438 | -10 | 4 | 68 | 1456 | 29.4 | 0.2 | 1.1 | 6.4 |
| Gold Coast | 1415 | -21 | 8 | 83 | 1433 | 32.7 | 0.4 | 1.6 | 9.1 |
| Brisbane | 1407 | -19 | 0 | 64 | 1426 | 24.3 | 0.1 | 0.3 | 2.7 |
| Carlton | 1390 | -28 | 0 | 61 | 1415 | 20.5 | 0.0 | 0.1 | 1.2 |