
Follow-up: How the prediction model fared after the NA & EU Superweek.

After using a statistical prediction model to analyze Superweek, the results are in: the model was 68.75% accurate for both NA and EU.


This article was originally published on GameSpot's sister site onGamers.com, which was dedicated to esports coverage.

Last week I set up a model to predict the results of Superweek for both North America and Europe. The model took into account each team's season win rate on blue or red side, its head-to-head record against its opponent, and its results over its last five games. I wanted to do a small retrospective to see how the model performed and how I should change it for the upcoming playoffs.
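As a minimal sketch of how those three factors might combine into a single score: the exact formula and weights aren't spelled out here, so everything below is illustrative except the double weight on side win rate, which is noted later in this article.

```python
# Illustrative sketch only -- the formula and field names are assumptions;
# the double weight on side win rate is the one detail taken from the article.

def score(team, stats):
    """Weighted score for one team in one matchup.

    `stats[team]` is assumed to hold, for this matchup:
      side_win_rate : season win rate on the side (blue/red) the team plays
      h2h_rate      : share of head-to-head games won vs this opponent
      last5_rate    : share of the team's last five games won
    """
    s = stats[team]
    return 2.0 * s["side_win_rate"] + s["h2h_rate"] + s["last5_rate"]

def predict(team_a, team_b, stats):
    """Pick whichever team scores higher."""
    return team_a if score(team_a, stats) >= score(team_b, stats) else team_b

# Usage with made-up numbers (not real team stats):
stats = {
    "Team A": {"side_win_rate": 0.70, "h2h_rate": 0.67, "last5_rate": 0.40},
    "Team B": {"side_win_rate": 0.55, "h2h_rate": 0.33, "last5_rate": 0.60},
}
print(predict("Team A", "Team B", stats))
```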

How the model did


The first region to look at is Europe. Overall, the model had a 68.75% success rate. The big upsets were Copenhagen Wolves over Gambit on day 1 and Millenium over Alliance on day 2. The one upset the model successfully predicted was Millenium over Supa Hot Crew on day 1. Overall, I think the model weighed head-to-head a bit too heavily, which caused the wrong predictions in Gambit vs Roccat and Copenhagen Wolves vs Supa Hot Crew. Gambit has a very good win rate on blue side and should have been favored over Roccat, while Supa Hot Crew has the worst single-side win rate of any team (18% on red side), so they should not have been favored despite holding the head-to-head advantage.


The next Superweek was NA, which, strangely, also had an overall prediction success rate of 68.75%. The big upsets were of course XDG over TSM on day 3 and Curse over CLG on day 2. However, the model did correctly predict one upset: Coast over Dignitas on day 2. Here again, I think head-to-head played a negative role. For example, in the EG vs CRS game, CRS had higher win rates and better form over its last five games, but the model predicted EG because they were 2-1 in the head-to-head matchup.

How successful was the model?

While we know the prediction model was right 68.75% of the time, to put that number in context I compared it to a few simpler strategies:

Region | Prediction Model | Picking all Blue-side | Picking higher-seed
EU     | 68.75%           | 68.75%                | 76.9%
NA     | 68.75%           | 50%                   | 81.25%

While 68.75% beats the coin-flip 50-50 method, the model was ultimately a bit disappointing because it was less accurate than simply picking the higher seed (the "favorite") in every matchup. And while picking all blue-side teams can be a successful strategy in NA, where over 60% of games are won on blue side, I think this model is more consistent and reliable than that.
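For reference, scoring the two baselines against the same schedule is straightforward. Here is a minimal sketch with placeholder game records (the rows below are not the real Superweek data):

```python
# Minimal sketch of the baseline comparison from the table above.
# The game records here are placeholders, not the actual results.

games = [
    # (blue_team, red_team, winner, higher_seed)
    ("Team A", "Team B", "Team B", "Team A"),
    ("Team C", "Team D", "Team C", "Team D"),
]

def accuracy(picks, winners):
    """Fraction of games where the pick matched the actual winner."""
    return sum(p == w for p, w in zip(picks, winners)) / len(winners)

winners    = [g[2] for g in games]
blue_picks = [g[0] for g in games]  # baseline 1: always pick the blue-side team
seed_picks = [g[3] for g in games]  # baseline 2: always pick the higher seed

print(f"All blue-side: {accuracy(blue_picks, winners):.2%}")
print(f"Higher seed:   {accuracy(seed_picks, winners):.2%}")
```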

How to change the model for the future

  1. Overall win rate needs to be factored in more. Currently the model looks at a team's season win rate on blue or red side (with double the weight of the other factors), but I think it should also add a full-season win rate across both sides to put more weight behind the higher seed.
  2. Head-to-head should be weighted less. The issue with head-to-head is that it doesn't just increase one team's win rate; it also reduces the opposing team's. This doubling effect gives the head-to-head factor too much power, and its weight should be reduced (one way to apply both changes is sketched after this list).
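Put together, the two changes might look something like this. The revised weights are illustrative, not final, and the field names are my own:

```python
# Illustrative sketch of both proposed changes; the actual revised
# weights haven't been decided yet.

def revised_score(team, stats):
    s = stats[team]
    return (
        2.0 * s["side_win_rate"]      # blue/red-side win rate keeps its double weight
        + 1.0 * s["season_win_rate"]  # change 1: add full-season win rate (both sides)
        + 0.5 * s["h2h_rate"]         # change 2: halve the head-to-head weight
        + 1.0 * s["last5_rate"]       # recent form unchanged
    )
```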

I will give predictions for the playoffs using a modified version of the model that predicts who will win each best-of-three and by what score. Stay tuned for that within the next week!
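As a taste of what a Bo3 prediction involves: if a model outputs a single-game win probability p, then, assuming the games are independent with the same p, the series win probability is p² + 2p²(1−p), and the most likely score comes from comparing the individual outcome terms:

```python
def bo3_probabilities(p):
    """Outcome probabilities for a best-of-three, assuming each game is an
    independent trial won with probability p."""
    return {
        "2-0": p * p,                  # win the first two games
        "2-1": 2 * p * p * (1 - p),    # drop game 1 or game 2, then win
        "1-2": 2 * p * (1 - p) ** 2,   # win game 1 or game 2, then lose out
        "0-2": (1 - p) ** 2,           # lose the first two games
    }

probs = bo3_probabilities(0.6)
print(probs)                # {'2-0': 0.36, '2-1': 0.288, '1-2': 0.192, '0-2': 0.16}
print(sum(probs.values()))  # 1.0 -- sanity check
```

Note how the format amplifies the favorite's edge: a 60% per-game favorite wins the series 64.8% of the time.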

Data collected with help from the onGamers Stats team: Derek 'Kathix' Adams, Steven 'whedgehead' Falgout, Kent 'Traepoint' Frasure, Jake Morales, and James 'PelkaSupaFresh' Pelkey. Design by Ben 'Sarcasmappreciated' Li.

