Predictive Signals
Some attempt to see if there are any predictive qualities in the stats we have available. Shared with everyone, because I'm not certain it's useful. We look at the "ICT Index" that everyone ignores in the main game, plus a rolling average of form, and the difficulty of the next fixture. And it seems there is some sort of predictive element there. Or it might all be dumb and circular: of course these things predict the points, because they're the things used to calculate the points... Whatever, I just asked the Gimp to create it for me, so 🤷‍♀️
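For the curious, here's roughly the shape of the inputs. A minimal sketch, assuming a per-player, per-gameweek frame; the file and column names are placeholders, not the real FPL API fields.

```python
import pandas as pd

# Hypothetical history file: one row per player per gameweek. Column names
# (ict_index, total_points, next_fixture_difficulty) are placeholders.
df = pd.read_csv("player_gameweeks.csv").sort_values(["player", "gw"])

# Rolling form: mean points over each player's last 3 and 5 gameweeks.
grp = df.groupby("player")["total_points"]
df["form_3g"] = grp.transform(lambda s: s.rolling(3).mean())
df["form_5g"] = grp.transform(lambda s: s.rolling(5).mean())

# The three signal families this page leans on.
signals = df[["ict_index", "form_3g", "form_5g", "next_fixture_difficulty"]]
```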
Team Form Pulse Check
This is the basic stuff - teams ranked by their players' average form. And the three most form-y players for each of them. And then a chart plotting that form vs. how hard their next five fixtures are.
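A sketch of how the table below might be assembled, assuming a frame of currently active players with team, player, position, and form columns (all placeholder names). One hedge: the "Team Form" magnitudes read more like a sum of player form than a mean of it (a mean over ~20 players wouldn't reach 68), so the sketch sums.

```python
import pandas as pd

players = pd.read_csv("active_players.csv")  # team, player, position, form (assumed)

# Rank teams by combined player form, keeping the active-player count.
pulse = (
    players.groupby("team")
    .agg(active_players=("player", "size"), team_form=("form", "sum"))
    .sort_values("team_form", ascending=False)
)

# The three most form-y players per team, for the last column.
top3 = players.sort_values("form", ascending=False).groupby("team").head(3)
```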
| Rank | Team | Active Players | Team Form | Top Form Players |
|---|---|---|---|---|
| 1 | Wolves | 21 | 68.2 | Toti DEF (8.0), Edozie MID (6.0), André MID (5.5) |
| 2 | Arsenal | 19 | 68.1 | Gabriel DEF (5.8), Saka MID (5.8), J.Timber DEF (5.5) |
| 3 | West Ham | 20 | 67.8 | Todibo DEF (9.0), Diouf DEF (6.6), Hermansen GKP (6.2) |
| 4 | Liverpool | 19 | 64.2 | Virgil DEF (7.2), Mac Allister MID (5.8), Frimpong DEF (5.5) |
| 5 | Bournemouth | 19 | 61.4 | Semenyo MID (7.0), Hill DEF (6.4), Truffert DEF (5.4) |
| 6 | Man City | 19 | 58.9 | O’Reilly DEF (9.5), Haaland FWD (6.0), Aït-Nouri DEF (5.6) |
| 7 | Man Utd | 19 | 58.0 | B.Fernandes MID (7.0), Šeško FWD (5.4), Mbeumo MID (4.6) |
| 8 | Chelsea | 22 | 57.3 | João Pedro FWD (9.2), Palmer MID (9.0), James DEF (4.7) |
| 9 | Sunderland | 22 | 55.6 | Ellborg GKP (10.0), O'Nien DEF (5.0), Ballard DEF (4.4) |
| 10 | Crystal Palace | 16 | 52.1 | Henderson GKP (6.2), Sarr MID (5.8), Lacroix DEF (5.7) |
| 11 | Nott'm Forest | 23 | 50.8 | Anderson MID (5.4), Gibbs-White MID (4.8), Morato DEF (4.0) |
| 12 | Everton | 16 | 50.7 | Dewsbury-Hall MID (6.8), McNeil MID (5.5), Garner MID (4.8) |
| 13 | Brighton | 22 | 50.3 | Van Hecke DEF (5.0), Gomez MID (4.6), Welbeck FWD (4.4) |
| 14 | Brentford | 18 | 48.8 | Damsgaard MID (4.8), O.Dango MID (4.6), Van den Berg DEF (4.4) |
| 15 | Leeds | 19 | 48.7 | Okafor MID (6.5), Stach MID (5.7), Gruev MID (3.8) |
| 16 | Newcastle | 19 | 46.8 | Bruno G. MID (7.0), Gordon MID (4.0), Thiaw DEF (4.0) |
| 17 | Fulham | 21 | 45.4 | Iwobi MID (6.2), Wilson MID (6.0), Raúl FWD (4.0) |
| 18 | Aston Villa | 20 | 42.0 | Mings DEF (4.0), Martinez GKP (3.8), Rogers MID (3.4) |
| 19 | Burnley | 18 | 36.5 | Anthony MID (5.2), Hannibal MID (5.0), Flemming FWD (4.8) |
| 20 | Spurs | 21 | 29.5 | Gray MID (4.6), P.M.Sarr MID (3.2), Solanke FWD (2.8) |
Coming good
Players whose 3-game average is outperforming their longer-term baseline.
- J.Timber (DEF, ARS): +2.6 (3G mean 8.0 vs 5G avg 5.4)
- Damsgaard (MID, BRE): +2.53 (3G mean 7.33 vs 5G avg 4.8)
- Stach (MID, LEE): +2.27 (3G mean 5.67 vs 5G avg 3.4)
- André (MID, WOL): +2.27 (3G mean 6.67 vs 5G avg 4.4)
- Mac Allister (MID, LIV): +2.2 (3G mean 8.0 vs 5G avg 5.8)
Coming too soon
Players whose short-term form has slipped under their baseline.
- Palmer (MID, CHE): -4.67 (3G mean 4.33 vs 5G avg 9.0)
- Bruno G. (MID, NEW): -2.8 (3G mean 0.0 vs 5G avg 2.8)
- Okafor (MID, LEE): -2.6 (3G mean 0.0 vs 5G avg 2.6)
- Mings (DEF, AVL): -2.53 (3G mean 0.67 vs 5G avg 3.2)
- Madueke (MID, ARS): -2.47 (3G mean 0.33 vs 5G avg 2.8)
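Both lists fall straight out of the rolling means from the first sketch. The delta is just short-term form minus the medium-term baseline, taken at each player's latest row (same assumed column names as before):

```python
import pandas as pd

df = pd.read_csv("player_gameweeks.csv").sort_values(["player", "gw"])
grp = df.groupby("player")["total_points"]
df["form_3g"] = grp.transform(lambda s: s.rolling(3).mean())
df["form_5g"] = grp.transform(lambda s: s.rolling(5).mean())

# Delta between short- and medium-term form, at each player's latest row.
latest = df.groupby("player").tail(1).copy()
latest["delta"] = latest["form_3g"] - latest["form_5g"]
coming_good = latest.nlargest(5, "delta")       # outperforming their baseline
coming_too_soon = latest.nsmallest(5, "delta")  # slipped under their baseline
```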
Predictive signal leaderboard
Of the signals we're looking at, which players are doing well? All the input features, mulched together into an overall score.
| # | Player | Pos | Club | Signal Score |
|---|---|---|---|---|
| 1 | João Pedro | FWD | CHE | 43.27 |
| 2 | Sarr | MID | CRY | 25.74 |
| 3 | Garner | MID | EVE | 20.24 |
| 4 | Anderson | DEF | NFO | 19.79 |
| 5 | Garnacho | MID | CHE | 19.68 |
| 6 | André | MID | WOL | 19.61 |
| 7 | B.Fernandes | MID | MUN | 19.6 |
| 8 | Rodrigo | MID | MCI | 19.41 |
| 9 | Wharton | MID | CRY | 18.62 |
| 10 | Palmer | MID | CHE | 18.25 |
| 11 | Gibbs-White | MID | NFO | 18.2 |
| 12 | M.Salah | MID | LIV | 16.98 |
| 13 | Tarkowski | DEF | EVE | 16.16 |
| 14 | Dewsbury-Hall | MID | EVE | 16.16 |
| 15 | Gordon | MID | NEW | 16.13 |
| 16 | Summerville | MID | WHU | 16.07 |
| 17 | Solanke | FWD | TOT | 16.01 |
| 18 | Alderete | DEF | SUN | 14.68 |
| 19 | J.Timber | DEF | ARS | 14.57 |
| 20 | Bernardo | MID | MCI | 14.47 |
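The post doesn't spell out the mulching, so here's one plausible recipe, under loud assumptions: z-score each feature so nothing dominates on raw scale, then take a weighted sum. The weights below are invented placeholders, not the real ones.

```python
import pandas as pd

latest = pd.read_csv("latest_snapshots.csv")  # hypothetical per-player snapshot
features = ["ict_index", "form_3g", "form_5g", "next_fixture_difficulty"]

# Standardise so no single feature dominates on raw scale.
z = (latest[features] - latest[features].mean()) / latest[features].std()

# Invented weights: harder fixtures should drag the score down, hence the minus.
weights = pd.Series({"ict_index": 1.0, "form_3g": 1.0,
                     "form_5g": 0.5, "next_fixture_difficulty": -0.5})
latest["signal_score"] = (z * weights).sum(axis=1)
leaderboard = latest.nlargest(20, "signal_score")
```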
Projected next-match points
And the juice - can we use that score to predict how a player will do next match? It runs their current signal score against whether the next match is home or away, its difficulty, and so on, and gives a points forecast.
| # | Player | Pos | Club | Next Opponent (difficulty) | Predicted Pts | Range |
|---|---|---|---|---|---|---|
| 1 | João Pedro | FWD | CHE | Home vs NEW 3 | 17.95 | 17.51 – 18.38 |
| 2 | Sarr | MID | CRY | Home vs LEE 2 | 14.55 | 14.11 – 14.98 |
| 3 | Anderson | DEF | NFO | Home vs FUL 2 | 11.59 | 11.16 – 12.03 |
| 4 | André | MID | WOL | Away vs BRE 4 | 10.92 | 10.48 – 11.36 |
| 5 | Rodrigo | MID | MCI | Away vs WHU 2 | 9.96 | 9.52 – 10.39 |
| 6 | Wharton | MID | CRY | Home vs LEE 2 | 9.68 | 9.25 – 10.12 |
| 7 | Gibbs-White | MID | NFO | Home vs FUL 2 | 9.36 | 8.93 – 9.8 |
| 8 | Garner | MID | EVE | Away vs ARS 5 | 9.17 | 8.73 – 9.6 |
| 9 | Tarkowski | DEF | EVE | Away vs ARS 5 | 8.98 | 8.54 – 9.41 |
| 10 | Palmer | MID | CHE | Home vs NEW 3 | 8.93 | 8.5 – 9.37 |
| 11 | Gabriel | DEF | ARS | Home vs EVE 3 | 8.42 | 7.98 – 8.86 |
| 12 | Dewsbury-Hall | MID | EVE | Away vs ARS 5 | 8.4 | 7.96 – 8.83 |
| 13 | Summerville | MID | WHU | Home vs MCI 4 | 8.39 | 7.96 – 8.83 |
| 14 | J.Timber | DEF | ARS | Home vs EVE 3 | 8.24 | 7.8 – 8.67 |
| 15 | M.Salah | MID | LIV | Home vs TOT 3 | 8.2 | 7.76 – 8.63 |
| 16 | Casemiro | MID | MUN | Home vs AVL 3 | 8.1 | 7.66 – 8.54 |
| 17 | Henderson | GKP | CRY | Home vs LEE 2 | 7.21 | 6.77 – 7.64 |
| 18 | Ballard | DEF | SUN | Home vs BHA 3 | 7.04 | 6.6 – 7.47 |
| 19 | Saka | MID | ARS | Home vs EVE 3 | 7.03 | 6.6 – 7.47 |
| 20 | Solanke | FWD | TOT | Away vs LIV 4 | 6.86 | 6.42 – 7.3 |
Model scorecard
So... how good is the model, if at all?
The MAE values shown in the scorecards are literal FPL points (they average how far the predictions miss by). During training we learn the link between a match's ICT signals and that same match's score; for future fixtures we reuse each player's latest snapshot and swap in the upcoming opponent's difficulty/home-or-away flag to make a forward-looking call.
Because most players only score a couple of points, we also slice the errors by actual and predicted score bands, track how often we flag genuine big hauls (≥8 pts), and check whether the model's top picks overlap with the actual top performers each gameweek.
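In code, that snapshot-and-swap scheme might look like the sketch below. Column names like next_fixture_difficulty and next_is_home are assumptions, as are the hyperparameters.

```python
import pandas as pd
from xgboost import XGBRegressor

hist = pd.read_csv("player_gameweeks.csv")  # assumed training history
features = ["ict_index", "form_3g", "form_5g", "fixture_difficulty", "was_home"]

# Training: learn the link between a match's signals and that match's points.
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(hist[features], hist["total_points"])

# Forward-looking call: each player's latest snapshot, with the *next*
# fixture's difficulty and home/away flag swapped in before predicting.
upcoming = hist.sort_values("gw").groupby("player").tail(1).copy()
upcoming["fixture_difficulty"] = upcoming["next_fixture_difficulty"]
upcoming["was_home"] = upcoming["next_is_home"]
upcoming["predicted_pts"] = model.predict(upcoming[features])
```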
Prediction scorecard
Back-tested on recent gameweeks. MAE is the average miss in FPL points; the hit-rate shows the share of predictions that landed within two points of reality.
| GW | MAE | Hit rate | Samples |
|---|---|---|---|
| 25 | 0.97 | 65.0% | 811 |
| 26 | 1.01 | 66.4% | 896 |
| 27 | 0.96 | 64.8% | 817 |
| 28 | 0.92 | 67.0% | 818 |
| 29 | 1.09 | 63.9% | 819 |
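Both numbers are trivial to reproduce from a back-test frame; a sketch assuming columns gw, predicted_pts, and actual_pts:

```python
import pandas as pd

bt = pd.read_csv("backtest.csv")  # gw, predicted_pts, actual_pts (assumed)
bt["abs_err"] = (bt["predicted_pts"] - bt["actual_pts"]).abs()

scorecard = bt.groupby("gw").agg(
    mae=("abs_err", "mean"),
    hit_rate=("abs_err", lambda e: (e <= 2).mean()),  # within 2 pts of reality
    samples=("abs_err", "size"),
)
```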
Largest misses help highlight outliers the model struggles with.
- Lewis-Potter (DEF, BRE), GW 17 vs WOL: Δ 18.54 pts (predicted 2.46 → actual 21.0)
- Hudson-Odoi (MID, NFO), GW 16 vs TOT: Δ 18.13 pts (0.87 → 19.0)
- Schade (MID, BRE), GW 18 vs BOU: Δ 17.85 pts (2.15 → 20.0)
- João Pedro (FWD, CHE), GW 29 vs AVL: Δ 17.83 pts (1.17 → 19.0)
- Eze (MID, ARS), GW 13 vs CHE: Δ 17.3 pts (predicted 19.3 → actual 2.0)
Big haul classification
How often the model correctly flags players expected to hit 8+ points.
- Precision (hauls we called correctly): 0.175
- Recall (share of all hauls we spotted): 0.1
- F1 (balance of precision & recall): 0.128

Predicted hauls: 280 · Actual hauls: 488 · True positives: 49
Scores run 0–1; higher is better. Precision/recall/F1 around 0.25–0.35 would be solid for noisy haul calls, while anything under ~0.1 means the model is mostly guessing.
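The three headline numbers fall out of binarising both columns at 8 points; a sketch reusing the assumed bt frame from the scorecard sketch:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

pred_haul = bt["predicted_pts"] >= 8   # hauls the model called
true_haul = bt["actual_pts"] >= 8      # hauls that actually happened

precision = precision_score(true_haul, pred_haul)  # 49 / 280 ≈ 0.175
recall = recall_score(true_haul, pred_haul)        # 49 / 488 ≈ 0.100
f1 = f1_score(true_haul, pred_haul)                # ≈ 0.128
```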
Top-10 overlap
How many of the actual top scorers we catch in the model's top picks each gameweek.
- Avg recall (actual top scorers recovered): 0.082
- Avg precision (how many picks really hauled): 0.082
Both metrics run 0–1; higher is better. A healthy shortlist would sit around 0.4–0.6 recall/precision, while numbers below ~0.2 suggest the picks aren’t much better than luck.
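The overlap is a per-gameweek set intersection. One detail worth noting: with ten picks compared against ten actual top scorers, recall and precision are the same count divided by ten, which is why the two averages coincide at 0.082. A sketch, reusing the assumed bt frame (it additionally needs a player column):

```python
import pandas as pd

def top10_overlap(week: pd.DataFrame) -> pd.Series:
    picks = set(week.nlargest(10, "predicted_pts")["player"])
    stars = set(week.nlargest(10, "actual_pts")["player"])
    hit = len(picks & stars)
    # With |picks| == |stars| == 10, recall and precision coincide.
    return pd.Series({"recall": hit / len(stars), "precision": hit / len(picks)})

overlap_by_gw = bt.groupby("gw").apply(top10_overlap)
print(overlap_by_gw.mean())  # the two averages quoted above
```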
Error by actual score bucket
Each bar shows how far off the predictions were (average absolute error) for players who actually landed in each points band, plus how many fell inside that band. In other words, it checks whether the model stays sharp for low scorers as well as the rare big hauls. Does the miss size stay sensible no matter how many points the player truly scored?
Lower is better: MAE around 1–2 points in most buckets is respectable; when errors regularly creep above 3–4 points the model is missing the mark.
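The bucketing itself is a one-liner with pd.cut; the band edges below are my guesses, not necessarily the chart's, and it reuses bt and abs_err from the scorecard sketch:

```python
import pandas as pd

# Guessed band edges; FPL scores can dip below zero, hence the -5 floor.
bands = pd.cut(bt["actual_pts"], bins=[-5, 2, 5, 8, 12, 30],
               labels=["≤2", "3-5", "6-8", "9-12", "13+"])
by_band = bt.groupby(bands, observed=True)["abs_err"].agg(mae="mean", n="size")
```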
Calibration at higher predictions
Points are grouped by what the model predicted (e.g. 6–9 points) and compared with what actually happened, so we can see if confident forecasts sit too high, too low, or on target. It is a gut check on whether the model is overhyping or under-calling good outings. When we predict 6–9 points, do the real scores usually end up in that range?
A well-calibrated model clusters near the diagonal with average miss under ~2 points; if points sit far above or below the band lines, the model is overconfident or under-confident.
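And the calibration check is the same trick on the other axis: group by what the model predicted, then compare mean actual to mean predicted in each band (band edges again assumed):

```python
import pandas as pd

# Group by what the model said, then see what actually happened in each band.
pred_band = pd.cut(bt["predicted_pts"], bins=[0, 3, 6, 9, 30],
                   labels=["0-3", "3-6", "6-9", "9+"])
calibration = bt.groupby(pred_band, observed=True).agg(
    mean_predicted=("predicted_pts", "mean"),
    mean_actual=("actual_pts", "mean"),
    n=("actual_pts", "size"),
)
```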
Top-10 overlap by gameweek
The two lines track, week by week, how many of the real top scorers appear in the model's top picks (recall) and how many of the model's picks actually went on to haul (precision). It reveals whether the shortlist consistently finds the right players or just gets lucky. Are the model's top choices reliably capturing the real stars each gameweek?
Good weeks hover near or above 0.5 on both lines; dips below ~0.2 hint the model’s weekly picks are more miss than hit.
| Metric | Value | What it tells us |
|---|---|---|
| Linear R² | 0.80 | How much variance the straight-line model captures using the core signals. |
| Linear MAE | 0.54 | Average miss (in FPL points) for the baseline regression. |
| XGBoost R² | 0.83 | How much variance the tree ensemble explains once we allow nonlinear interactions. |
| XGBoost MAE | 0.44 | Average miss for the boosted model; lower means sharper projections. |
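Both models are scored the same way on a held-out split; a sketch reusing hist and features from the training sketch, with the split and hyperparameters being assumptions:

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = hist[features], hist["total_points"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, model in [("linear", LinearRegression()),
                    ("xgboost", XGBRegressor(n_estimators=300, max_depth=4))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: R²={r2_score(y_te, pred):.2f}, "
          f"MAE={mean_absolute_error(y_te, pred):.2f}")
```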
Feature spotlight
And how much do all the features matter?
SHAP values show how hard each feature pushes predictions up or down, averaged across all of them. They apportion credit fairly, even when features interact.
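The usual way to get that "average impact" number out of a tree model is the mean absolute SHAP value per feature; a sketch with the shap package, reusing model, hist, and features from the training sketch:

```python
import numpy as np
import pandas as pd
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(hist[features])  # one row per prediction

# Mean |SHAP| per feature: the average size of its push, up or down.
mean_impact = (pd.Series(np.abs(shap_values).mean(axis=0), index=features)
               .sort_values(ascending=False))
shap.summary_plot(shap_values, hist[features])  # optional beeswarm view
```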
Glossary
The signals use a mix of official FPL metrics and statistical jargon. Here's a quick refresher.
- ICT Index
- Fantasy Premier League's blend of Influence, Creativity, and Threat metrics to gauge how involved a player is in decisive actions.
- Influence
- Measures how heavily a player affects match outcomes (goals, assists, key contributions). High influence means the player drives team results.
- Creativity
- Tracks the rate of chance creation—crosses, key passes, and set-piece threat. Assisters tend to spike here.
- Threat
- Quantifies how likely a player is to score based on shots and positioning inside dangerous zones.
- Fixture Difficulty
- Fantasy Premier League's rating (1 easiest, 5 hardest) estimating how tough the opponent is for that particular match.
- Rolling Mean
- Moving average across the last N games. A 3-game rolling mean smooths noisy weekly scores into a clearer form signal.
- Rolling Sum
- Moving total across the last N games. A 5-game sum captures medium-term consistency vs. short bursts.
- SHAP
- Shapley Additive exPlanations: a model-agnostic method that shows how much each feature pushed a prediction up or down.
- MAE
- Mean Absolute Error — the average absolute difference between predicted and actual points.
- R²
- Coefficient of determination. Shows how much of the variation in points the model explains (1.0 means perfect).
- XGBoost
- Extreme Gradient Boosting — an ensemble of shallow decision trees trained sequentially to reduce errors, great at spotting nonlinear patterns.