Will College Football’s New Schedule Metrics Change Anything?

On Sunday, we said there was an 85 to 95 percent probability we’d get you CFP Bracketology this morning. After a head cold for the baby and some related daycare shenanigans, the underdog won that round. I’d say there’s a 60 to 70 percent chance we have a bracket and playoff probabilities tomorrow, and a 10 to 20 percent chance those include the FCS by Friday. I hope some of you have established betting markets on when and what we’ll publish. If you have, don’t hesitate to reach out so we can manipulate them together for our profit. It’s long been a dream of mine to show up in a Matt Levine column. (Joking! About the manipulation.)

It’s easiest to understand the college football industry as geopolitics without death, and last week, the SEC and Big Ten brokered an informal treaty. The terms:

  • The SEC will add a ninth conference game beginning in 2026, bringing it even with the Big Ten and Big 12.
  • The College Football Playoff will alter the strength of schedule metric it shows its committee, adjusting it “to apply greater weight to games against strong opponents.”
  • The CFP will also add a “record strength” metric to the assortment it shows committee members.
  • The ACC—continuing its Todd Cleary role within the power conference family—can go to nine games or it can stay at eight. The other leagues don’t really care what the ACC does. (The Big 12 is definitely Gloria in this allegory, and I’m guessing NIT Stu will write a blogpost soon breaking down the other conferences.)

As most things are, this was a coup for SEC commissioner Greg Sankey. I don’t know how explicitly he’s said this, but the SEC has tried to add a ninth conference game for a while. It’ll make the sports product better (less chance of two great teams missing each other all season), and it’ll bring in more revenue. For similar reasons, we can assume athletic directors like it as well. Sankey, SEC presidents, and SEC ADs all either wanted this or should have wanted it. The fact that they got it while also extracting concessions from the Big Ten is a win-win.

Who didn’t want this? SEC coaches. This is going to make it harder for guys at Arkansas and Mississippi State and other lower-tier SEC programs to make bowl games. It’s going to make it harder for second-tier programs like Tennessee and Auburn and…Alabama?? (*ducks*)…to make the College Football Playoff. In the long run, this should make more seats hotter, at least until expectations adjust. Job-saving certifications (“made a playoff,” “made X straight bowl games”) will become harder to achieve.

I’ve kind of implied that the concessions—the adjusted strength of schedule metric and the introduction of record strength—won’t make a difference when it comes to playoff selections. To be clear: They won’t make a significant difference in and of themselves.


The thing about SOS (strength of schedule) and SOR (strength of record) is that there’s no one way to define them. That’s always been the case. In the NFL, it’s simple: Teams are close enough in quality that opponents’ raw win percentage works to calculate both of these metrics (or SOV, strength of victory, which the NFL uses). In college sports, it doesn’t work, which is what turned RPI into such a disaster in men’s basketball. Depending on which college sport you play, it’s possible to mildly game the selection system by scheduling decent teams with easy schedules or bad teams with hard schedules. Thankfully, the benefits are marginal. These machinations might gain or cost a team an NCAA Tournament or CFP berth, but they’re not swaying college football or basketball history. (Also, a lot of the vulnerable metrics are gone or are on their way out.)
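To see why raw win percentage breaks down as a schedule metric, here’s a toy illustration. All the records are invented: one schedule is full of decent teams with padded records against soft slates, the other is full of genuinely strong teams whose records were earned against brutal slates. A win-percentage-based SOS (RPI-style) rates the padded schedule as harder.

```python
# Hypothetical illustration of why raw opponent win percentage is a
# poor SOS in college sports: records depend on who each opponent
# played, so padded records can make a soft schedule look "hard."
# All numbers here are invented.

padded_schedule = [(10, 2), (9, 3), (10, 2)]    # decent teams, soft slates
gauntlet_schedule = [(8, 4), (7, 5), (9, 3)]    # strong teams, brutal slates

def win_pct_sos(schedule):
    """SOS as combined opponent win percentage (RPI-style)."""
    wins = sum(w for w, l in schedule)
    losses = sum(l for w, l in schedule)
    return wins / (wins + losses)

print(win_pct_sos(padded_schedule))    # ≈ 0.806
print(win_pct_sos(gauntlet_schedule)) # ≈ 0.667
```

The metric only sees the records, not the quality behind them, which is exactly the loophole schedulers could exploit.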

By the same token, these changes’ impact will also be marginal. Add SOR and adjust SOS, and maybe 2024 Penn State gets a harder quarterfinal matchup. Maybe Alabama (or Miami, though that was a different and weirder case) makes the 2024 CFP. But Ohio State was winning last year’s national championship. Notre Dame was going to make a run. Alabama fans were going to be uneasy with Kalen DeBoer entering this fall, and Penn State was going to hire Jim Knowles away from Columbus this past winter. Who makes the playoff is important to single programs. Who competes for championships and/or the heart of the average fan is what’s important to the sport.

We, however, are in the who-will-make-the-playoff business. So let’s get back to what these changes mean.


The SOS adjustment is simpler. If they’ve shared their formula, I haven’t seen it, but there are a few shapes it could take. In football—a small-sample sport, making raw win percentage even less useful for these kinds of metrics—most schedule metrics simply average together some ranking or power rating of all of a team’s opponents. Who determines which ranking or rating they use? Congratulations. You’re starting to see a problem that’s hiding in plain sight. We’ll get back to that piece of this soon, but first: When the CFP folks say they’ll “apply greater weight to games against strong opponents,” they might mean that in a very simple way. Maybe they take the three best opponents and multiply those rankings by 1.5 or something. They could also mean it in a more complicated way. The power ratings curve pivots close to each end, such that the gap between 1st and 10th is greater than the gap between 40th and 50th. Maybe they’re further accentuating that trend with a steeper curve, multiplying the gaps between teams.
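Since the actual formula hasn’t been published, here’s a sketch of both possibilities using invented opponent power ratings. Both schedules have the same plain-average SOS; weighting the top opponents (or curving the ratings) separates the schedule with elite opponents from the one with uniformly decent opponents.

```python
# Two hypothetical ways to "apply greater weight to games against
# strong opponents." The ratings, weights, and exponent are made up;
# the CFP's real formula hasn't been published.

def sos_simple_average(opponent_ratings):
    """Baseline: plain average of opponent power ratings."""
    return sum(opponent_ratings) / len(opponent_ratings)

def sos_top_weighted(opponent_ratings, top_n=3, weight=1.5):
    """Option 1: multiply the best few opponents' ratings by a weight."""
    ranked = sorted(opponent_ratings, reverse=True)
    weighted = [r * weight for r in ranked[:top_n]] + ranked[top_n:]
    return sum(weighted) / len(weighted)

def sos_curved(opponent_ratings, exponent=2.0):
    """Option 2: exaggerate gaps near the top with a convex curve."""
    curved = [r ** exponent for r in opponent_ratings]
    return sum(curved) / len(curved)

# Two schedules with identical average opponent ratings: one plays a
# pair of elite teams, the other plays uniformly decent ones.
boom_or_bust = [95, 90, 60, 55, 50, 50]
steady = [70, 68, 67, 66, 65, 64]

print(sos_simple_average(boom_or_bust), sos_simple_average(steady))  # tied
print(sos_top_weighted(boom_or_bust), sos_top_weighted(steady))      # elite-heavy schedule now rates harder
```

Either version nudges the rankings the same direction—toward schedules with marquee opponents—without reordering much else.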

If neither of these possible alterations seems like a big deal, you’re right. They aren’t. Again, there’ll be a marginal difference, but nobody’s schedule is going to move from 50th to 10th or anything like that, and whatever change does happen is going to be watered down because the committee will also see the SOS numbers ESPN and FOX put on broadcasts, numbers that might be based on a different calculation. If a team has a hard schedule, that will be known. If a team has an easy schedule, that will also be known. The numbers will be a little different, but this is not an earthquake.


Similarly, the introduction of SOR is a small deal. SOR, popularized by ESPN, is a good metric. It looks at a team’s record and at its schedule and calculates the likelihood that an average top-25 or average playoff team would have achieved that record playing the same opponents in the same locations. It’s holistic, it has a simple output, and it doesn’t care about margin of victory, satisfying the “no incentivizing blowouts” set of purists.

Its one shortcoming? You brought it up above. Like SOS, it’s tied to a subjectively chosen set of underlying rankings/power ratings. To calculate SOS or SOR, you need to start with a rating and/or ranking of how good every team is. In college sports, that’s a lot more complicated than raw win percentage. ESPN’s analytics department has gotten infamously opaque, but my best understanding is that its version of SOR is calculated using present-tense FPI ratings, meaning that at any given moment, ESPN’s SOR is based on its current FPI rating of every team’s opponents. The CFP’s version, simply called “record strength,” will come from SportsSource Analytics, so it probably won’t use FPI.

Will it use something better? Probably not. But it’ll have to use some set of ratings and/or rankings, and that set will be chosen arbitrarily, and that set will be in mild conflict with what ESPN and The Barking Crow and various other internet nerds publish under names resembling “record strength.” Many people seem to think that if the committee were replaced with SOR, the process would become completely objective. Those people are wrong. The CFP would still have to choose how to calculate SOR. Will it be more prominent this year in the eyes of the committee? Yes. But like SOS, SOR already exists in various conflicting forms. This too is not an earthquake.
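The mechanics of an SOR-style calculation can be sketched in a few lines, assuming you already have per-game win probabilities for a hypothetical benchmark team (say, an average top-25 team) against each opponent—which is exactly where the subjectively chosen ratings sneak in. The probabilities below are invented for illustration.

```python
# A minimal strength-of-record-style sketch. Inputs are hypothetical
# per-game win probabilities for a benchmark team against each
# opponent; these would come from some rating system (FPI or
# otherwise), which is the subjective choice discussed above.

def prob_at_least(win_probs, wins_needed):
    """P(benchmark team wins at least `wins_needed` of these games),
    computed by dynamic programming over the exact win distribution."""
    dist = [1.0]  # dist[k] = P(exactly k wins so far)
    for p in win_probs:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k] += prob * (1 - p)   # benchmark loses this game
            new[k + 1] += prob * p     # benchmark wins this game
        dist = new
    return sum(dist[wins_needed:])

# Team A went 10-2 against a brutal schedule.
tough_schedule = [0.9, 0.8, 0.7, 0.6, 0.5, 0.5, 0.4, 0.4, 0.6, 0.7, 0.8, 0.9]
# Team B went 11-1 against a soft schedule.
soft_schedule = [0.95] * 10 + [0.85, 0.8]

# The lower the probability, the more impressive the record.
print(prob_at_least(tough_schedule, 10))
print(prob_at_least(soft_schedule, 11))
```

A lower output means the record was harder for the benchmark team to achieve, so here the 10–2 team grades out ahead of the 11–1 team—the exact kind of comparison the SEC wants the committee looking at.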

In short, these are far from the only metrics committee members see. They won’t even be the only schedule and record-strength metrics committee members see. Committee members see the metrics the CFP shows them, but they also see metrics on ESPN, and on FOX, and in whatever college football coverage they read. The official CFP versions of SOS and record strength will no doubt help guide the conversation. They could conceivably be the deciding factor in an especially close race. But realistically, by the time the committee gets into that room, the die is mostly cast. It’s shaped by a thousand different inputs, many of which go unnoticed. Mostly, thankfully, it’s shaped by what teams do on the field in their twelve or thirteen opportunities.


Relative to others like it, the CFP committee has a narrow job. In basketball, so many teams play so many games that it’s impossible for committee members to thoroughly follow every team they’re selecting or seeding. The official data is far more important. In football, these guys are following four conferences, Notre Dame, and a couple mid-majors they usually find out about a month into the year. They know if Indiana has a tough schedule or an easy one. They know if Miami lost close to Syracuse or got blown out.

That isn’t to say the committee does a perfect job. Paradoxically, the narrowness of the assignment introduces a larger role for subjectivity, since it’s easier for these guys to think they have a handle on everything happening in the sport. There’s also the historic horse-race nature of college football rankings, a major hindrance to rationality in the playoff selection process. While CFP committees do show less deference to prior top 25s than their AP counterparts (“you can’t move down after a loss” sounds great in theory but only in theory), their rankings are still shaped by prior rankings in nonsensical ways.

Overall, though? The CFP committee knows what’s happening in college football, and it generally exhibits an approach best described as self-preservation. When there’s actually a hard question on the line, like whether to take 13–0 Florida State in 2023 or whether to drop SMU after a 2024 ACC Championship loss, the committee tends to do the thing that will least upend the industry of college football. Which, at long last, brings us to the real significance of these rankings.

What Greg Sankey and the SEC wanted was a world where the committee was a little less beholden to raw win–loss records. They’ll probably get it. Not because the new and adjusted metrics will totally shift committee members’ viewpoints, but because they’re an explicit mandate to the committee to give a little more leeway to teams who lose to great teams, plus a little more credit to teams who beat them. When committee members see record strength on their monitors, it’s an implicit reminder that they’re allowed to rank a 9–3 SEC (or Big Ten!) team above an 11–1 team from the ACC. (Or Big Ten!) That’s probably more important than the metrics themselves.

**

Again, we’re optimistic about getting you that CFP bracketology tomorrow morning. Apologies for the late post today, but as we said above: Welcome to The Barking Crow.

**

If you enjoy these posts and want to receive them more directly, we have two options for you. The first is to subscribe to our Substack, where we currently exclusively publish college football blog posts roughly six days a week. The second is to subscribe to our daily newsletter, a morning rundown of everything going on at The Barking Crow. Thanks!

**

The Barking Crow's resident numbers man. NIT Bracketology, college football forecasting, and things of that nature. Fields inquiries on Twitter: @joestunardi.