Our College Football Playoff Probabilities Are Live. Here’s How They Work.

Yesterday, we reintroduced Movelor, our college football rating system. Today, we've relaunched the older part of the model: our College Football Playoff probabilities.

We first built our College Football Playoff model in 2019, using data from all playoff selections prior to that season. We've tweaked it a little over the years, but the guts are the same. Across backtesting and live use, it has yet to miss a single playoff team.

Here’s how it works, how strong it is, and what you should know when referencing it.

CFP Formula

Our approach uses a core ranking formula built on six sets of metrics. This is the formula that has gone a perfect 36-for-36 at identifying playoff teams across the College Football Playoff's history. The six metrics:

Wins/Losses/Win Percentage

This one is fairly self-explanatory. We break it out into win percentage in addition to wins and losses to adjust for teams without conference championship games (when that was an issue), teams with games canceled by hurricanes, and, of course, the Covid season.

Adjusted Point Differential

This is more complicated.

Adjusted Point Differential (APD) is our metric that approximates the eye test, Vegas odds, other advanced ratings, rankings, and all the other explicit and implicit pieces that shape committee members' impressions of how good each team under consideration really is.

For a given team, APD looks at each opponent's average margin of victory or defeat (using a flat number for all FCS opponents) and compares that average to the opponent's scoring margin against the team in question. In practice: if Indiana has an average scoring differential of –10 points and Ohio State beat them by 20, Ohio State outperformed the average by ten points. Do that for each of Ohio State's games, average the results, and you have Ohio State's APD.
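
For the curious, here's a rough sketch of that computation in Python. The function names and the FCS flat number are placeholders for illustration, not the model's actual values.

```python
# Rough sketch of the APD computation. The FCS flat number is a
# placeholder assumption; the model's actual value isn't published.
FCS_FLAT_MARGIN = -20.0

def apd(games, opponent_avg_margins):
    """games: list of (opponent, our_margin) tuples, our_margin positive
    for a win. opponent_avg_margins: opponent -> season-average scoring
    margin; FCS opponents fall back to the flat number."""
    diffs = []
    for opponent, our_margin in games:
        opp_avg = opponent_avg_margins.get(opponent, FCS_FLAT_MARGIN)
        # An opponent that loses by 10 on average should lose to us by 10;
        # beating them by 20 means we outperformed the average by 10.
        diffs.append(our_margin - (-opp_avg))
    return sum(diffs) / len(diffs)

# The Ohio State/Indiana example from above:
print(apd([("Indiana", 20)], {"Indiana": -10}))  # 10.0
```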

Power Five/Group of Five Status

As you may have guessed, APD overestimates the committee's evaluation of teams outside the Power Five. So, we put a blanket adjustment into our overall CFP ranking formula that deducts points from all Group of Five teams, along with independents not named Notre Dame. This also accounts for any additional discounting the committee is doing, warranted or not.

Power Five Conference Championship

The committee values conference titles, but it's unclear whether it cares about those outside the Power Five. Our formula awards a bonus to teams that win a Power Five conference.

Three Best Wins/Three Best Losses

Strength-of-schedule metrics get a lot of airtime, but their construction can be arbitrary. What ultimately seems to matter is the quality of each team's best wins and the quality of its losses, if it has any. We add a margin-of-victory component to this piece, because a blowout loss looks worse than a narrow one, and a blowout win looks better.
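
To make that concrete, here's a hedged sketch of one way such a component could be scored. The opponent ratings (an assumed 0-100 scale), the top-three selection, and the margin weight are all illustrative assumptions; our actual weights aren't published.

```python
# Illustrative sketch of the three-best-wins/three-best-losses idea.
# The 0-100 opponent rating scale and margin weight are assumptions.

def wins_losses_component(results, rating, margin_weight=0.1):
    """results: list of (opponent, margin) tuples, margin > 0 for a win.
    rating: opponent -> quality rating, higher is better."""
    best_wins = sorted((r for r in results if r[1] > 0),
                       key=lambda r: rating[r[0]], reverse=True)[:3]
    losses = sorted((r for r in results if r[1] < 0),
                    key=lambda r: rating[r[0]], reverse=True)[:3]
    # Credit for beating good teams, a little more for beating them badly.
    credit = sum(rating[opp] + margin_weight * m for opp, m in best_wins)
    # Debit for losses: smaller when the opponent was good, larger for
    # blowouts (m is negative, so subtracting it grows the debit).
    debit = sum((100 - rating[opp]) - margin_weight * m for opp, m in losses)
    return credit - debit
```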

FPA (Forgiveness/Punishment Adjustment)

Our sixth variable is FPA, which we insert into our CFP formula each week the committee releases its rankings. It normalizes our model's impression of the field to wherever the committee says the field lies. The one time we insert FPA other than in reaction to rankings is what we call the Kelly Bryant Rule: if a team loses without its first-string quarterback and that quarterback will be back for the playoff, the impact of its worst loss is halved, in accordance with how the committee treated Clemson in 2017 after the Tigers' loss to Syracuse.
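
As a sketch, the Kelly Bryant Rule looks something like the following. Representing each loss's impact as a single negative number is our shorthand for illustration, not the formula's published internals.

```python
# Sketch of the Kelly Bryant Rule. Loss "impacts" as negative numbers
# are an illustrative assumption about the formula's internals.

def kelly_bryant_rule(loss_impacts, qb_out_for_loss, qb_back_for_playoff):
    """loss_impacts: one negative number per loss; more negative hurts more."""
    if not (loss_impacts and qb_out_for_loss and qb_back_for_playoff):
        return loss_impacts
    adjusted = list(loss_impacts)
    worst = adjusted.index(min(adjusted))  # the most damaging loss
    adjusted[worst] /= 2                   # halve its impact
    return adjusted

# Clemson 2017: lost to Syracuse without Kelly Bryant, Bryant back for
# the playoff, so that loss counts half.
print(kelly_bryant_rule([-8.0], True, True))  # [-4.0]
```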

Simulations

Our model, using the Movelor rating system, simulates the rest of the season 10,000 times, then tallies the results into probabilities. Each simulation is its own unique season, and Movelor is live within each one, adjusting to results as they happen. So if Navy upsets Notre Dame in one simulation, Notre Dame is expected to do much worse over the rest of that season than in another simulation where Notre Dame blows out Navy.
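
In schematic Python, the loop looks roughly like this. The win-probability function, the rating-update step, and the playoff selector are stand-ins for Movelor and the CFP formula, not their actual math.

```python
import random
from collections import Counter

def win_prob(ratings, home, away):
    # Stand-in: convert a rating gap into a win probability.
    return 1 / (1 + 10 ** ((ratings[away] - ratings[home]) / 20))

def select_playoff_field(wins, ratings, size=4):
    # Stand-in for the CFP ranking formula described above.
    return sorted(wins, key=lambda t: (wins[t], ratings[t]), reverse=True)[:size]

def playoff_odds(schedule, base_ratings, n_sims=10_000):
    made_playoff = Counter()
    for _ in range(n_sims):
        ratings = dict(base_ratings)  # every simulation starts fresh
        wins = Counter()
        for home, away in schedule:
            winner, loser = ((home, away)
                             if random.random() < win_prob(ratings, home, away)
                             else (away, home))
            wins[winner] += 1
            # Ratings update live within the sim, so an early upset
            # changes every later game in that same simulated season.
            ratings[winner] += 1.0
            ratings[loser] -= 1.0
        for team in select_playoff_field(wins, ratings):
            made_playoff[team] += 1
    return {team: n / n_sims for team, n in made_playoff.items()}
```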

Caveats

A few things to be wary of with our system:

  • At the moment, we have no conference tiebreakers programmed in. Those are purely random. As the season goes on, we will either fix this or adjust the randomness for known tiebreaker scenarios. (Update, 10/24: We have begun manually adjusting randomness to account for known tiebreakers. Our impressions of the current tiebreaker situation at any given time can be found here.)
  • We haven’t yet standardized our process of how exactly to insert FPA in response to the rankings, and so in our model’s simulations, FPA is just another random variable. It isn’t linked to specific results or specific timing which can shake the committee from precedent. We would like to automate this one day. Ideally, soon.
  • Last year, it appeared USC would make the playoff with a win over Utah in the Pac-12 Championship, but our model would not have projected it, because USC's résumé, by the standards the model had used to identify the previous 32 playoff invitees, was not playoff-caliber. Our model is blindly loyal to precedent; the committee is not, and in that instance, our model was about to be wrong. Utah bailed it out. This is another thing we have yet to fix.
  • We have not done a full, week-by-week backtest of the model, so our probabilities are uncalibrated. This is another thing we would eventually like to do, making sure that 10% of the things we say are 10% likely actually happen (a sketch of that check follows this list), but we have yet to do it.
  • APD is very simplistic. It works fine, but it is very simplistic. This would be a nice thing to ultimately improve upon, but do not expect any adjustment to this piece of the puzzle within this season.
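
That calibration check, when we get to it, would look something like the bucketing exercise below. The bucket width and the data shape are assumptions for illustration.

```python
from collections import defaultdict

# Sketch of the calibration check mentioned in the list above: bucket
# predictions by stated probability and compare to what actually happened.

def calibration_table(predictions, bucket_width=0.1):
    """predictions: list of (stated_probability, happened) pairs,
    where happened is True or False."""
    n_buckets = round(1 / bucket_width)
    buckets = defaultdict(list)
    for p, happened in predictions:
        buckets[min(int(p / bucket_width), n_buckets - 1)].append(happened)
    for b in sorted(buckets):
        outcomes = buckets[b]
        lo, hi = b * bucket_width, (b + 1) * bucket_width
        rate = sum(outcomes) / len(outcomes)
        # Well-calibrated: rate should land inside [lo, hi) in each bucket.
        print(f"{lo:.0%}-{hi:.0%}: {len(outcomes)} predictions, "
              f"{rate:.0%} happened")
```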

Overall, this is one of the best publicly published playoff predictors, with only one glaring poor performance in its history, and even that was ultimately covered up comfortably (because USC wasn't that good, as APD was pointing out). Also? We don't publish the model when there's a flaw significant enough that we know the probabilities are incorrect. That's a big distinction from other models: certain media outlets will happily throw out ridiculous numbers and call it statistics. If we see a ridiculous number, we fix the issue behind it before we go back out there, and if we don't fix it, we at least tell you it's ridiculous, why we think it's happening, and how we plan to fix it going forward. You can count on that from us.

The Barking Crow's resident numbers man. Was asked to do NIT Bracketology in 2018 and never looked back. Fields inquiries on Twitter: @joestunardi.