How Much We Trust Our Bracketology Model

DATE: MARCH 5, 2021. IF YOU ARE READING THIS IN A FUTURE SEASON, PLEASE GO FIND SOMETHING MORE RECENT.

We don’t want to mislead anyone. Our goal with all our predictive models is the opposite of misleading people: it’s to give good, accurate information. And so we thought it’d be worthwhile, as we head into the college basketball season’s last two pre-tournament weekends, to explain our own expectations of our model.

Our bracketology model was originally built to give a percent probability that each Division-I men’s basketball team would make the NCAA Tournament, and another percent probability that each would make the NIT. That was the goal when we built it, and it’s what the model would normally do best…were there not a pandemic going on.

With the probability of game cancellations large, inconsistent across time and teams, and of somewhat unknown consequence, we’ve declined to publish our model’s probabilities this year. We’ve only published the bracketologies. And the bracketologies, if we’re being completely honest, have a few issues.

Our goal in the long term is, of course, to iron out these issues. But they exist. And generally, they revolve around the same thing:

Our model doesn’t handle oddities very well.

At its final stage, when all the games have been completed, our model is pretty simple. It takes the “reflective” ratings on the team sheets, which are NET, KPI, and ESPN’s BPI SOR (I’ve talked previously about why “reflective” is a poor term with which to refer to NET, but it’s how the three are generally categorized opposite the “predictive” ratings of KenPom, BPI, and Sagarin, so we use the word here), and uses a weighted average of the three to seed teams. That’s it. Before the games are over, it’s projecting what will happen within those ratings, which is more complex, but once that piece is done, it’s a simple system.
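For the mechanically curious, here’s a rough sketch of that final stage in Python. To be clear, this isn’t our actual code, and the weights are placeholders rather than our real ones; it just illustrates the weighted-average idea:

```python
# Illustrative sketch of the model's final stage: order teams by a
# weighted average of their three "reflective" rankings.
# The weights below are placeholders, not our actual values.

WEIGHTS = {"net": 0.4, "kpi": 0.3, "sor": 0.3}  # hypothetical weights

def blended_rank(sheet: dict) -> float:
    """Weighted average of a team's NET, KPI, and SOR rankings."""
    return sum(WEIGHTS[metric] * sheet[metric] for metric in WEIGHTS)

def seed_order(field: dict) -> list:
    """Order teams from best (lowest) to worst blended ranking."""
    return sorted(field.items(), key=lambda kv: blended_rank(kv[1]))

# Example: three team sheets with their ranking in each system.
field = {
    "Team A": {"net": 1, "kpi": 3, "sor": 2},
    "Team B": {"net": 5, "kpi": 2, "sor": 4},
    "Team C": {"net": 2, "kpi": 6, "sor": 5},
}
for line, (team, sheet) in enumerate(seed_order(field), start=1):
    print(f"{line}. {team} (blended rank {blended_rank(sheet):.1f})")
```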

This isn’t a bad way to do it—there are plenty of bracketologies worse than ours (and we’ll get to exactly how good and bad we expect our bracketology to be)—but it doesn’t handle teams well when they, for example, lose a lot of close games and win blowouts, or do the opposite, or play an historically difficult schedule, or play an historically weak schedule, or play half a dozen fewer games than the average team because of a raging pandemic that’s killed millions of people worldwide. These are isolated situations, at the tail ends of various spectra, that are often unique—which makes it hard to find a historic precedent—and that sometimes provoke inconsistent logic from the selection committee, which is an inherently subjective body. Over the next few offseasons, we’ll be trying to better account for these kinds of outlier situations, but right now, that’s just not something our model handles well.
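To make “oddities” a little more concrete, here’s the kind of flagging we have in mind, sketched in Python. The thresholds are invented for illustration; the model doesn’t currently do anything like this:

```python
# Hypothetical outlier flags for a team's profile. The thresholds are
# invented for illustration, not tuned against committee behavior.

def oddity_flags(team: dict, field_avg_games: float) -> list:
    flags = []
    # Wins blowouts but loses close games (or vice versa).
    if abs(team["close_game_win_pct"] - team["blowout_win_pct"]) > 0.5:
        flags.append("close-game/blowout split")
    # Historically difficult or historically weak schedule.
    if team["sos_percentile"] > 0.99 or team["sos_percentile"] < 0.01:
        flags.append("extreme strength of schedule")
    # Far fewer games than the rest of the field (e.g., cancellations).
    if team["games_played"] <= field_avg_games - 6:
        flags.append("abnormally short schedule")
    return flags

team = {"close_game_win_pct": 0.20, "blowout_win_pct": 0.90,
        "sos_percentile": 0.995, "games_played": 17}
print(oddity_flags(team, field_avg_games=24))  # all three flags fire
```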

One of our model’s best assets is that it doesn’t bounce around early in the season like other bracketologies do. It’s fairly steady—it never got too high on Richmond, for example—and its November and December brackets, in our back-testing, end up reasonably close to the eventual real things. But by this part of the season, when the oddities have presented themselves, its accuracy can be summed up as follows:

NIT Bracketology Accuracy

In back-testing against prior years, our NIT Bracketology misses a team or two from a 32-team field in its final publication, made after the NCAA Tournament bracket is announced. That’s on par with or better than the other NIT Bracketologies on the internet, which we’d say is pretty good. Ours is also the only NIT Bracketology that incorporates automatic bids throughout the year, thereby giving the most realistic picture of the size of the at-large field, which is highly impactful for bubble teams and varies year to year. If you’re going to use an NIT Bracketology, we’d recommend using ours alongside John Templon’s, if he’s doing one this year (we haven’t seen one from him yet). Those two in combination give you a good NIT picture, especially since Templon, as a rule (to my knowledge), doesn’t include sub-.500 teams, whereas our model does, and it’s unclear, to us at least, whether the committee will include a sub-.500 team in the near future.
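To illustrate why the automatic bids matter: under the NIT’s usual rules, every regular-season conference champion that misses the NCAA Tournament earns an NIT bid, so each projected auto bid shrinks the at-large pool. A toy sketch (the field size is from the usual format; the names are made up):

```python
# Why automatic bids matter for the NIT bubble: each projected
# auto bid takes an at-large spot off the table.

NIT_FIELD_SIZE = 32  # the usual NIT format

def at_large_spots(projected_auto_bids: list) -> int:
    """At-large spots left after projected automatic bids are set aside."""
    return NIT_FIELD_SIZE - len(projected_auto_bids)

# If, say, 12 regular-season champs are projected to miss the NCAAT,
# only 20 at-large spots remain -- a very different bubble than 32.
print(at_large_spots(["Champ %d" % i for i in range(1, 13)]))  # -> 20
```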

NCAAT Bracketology Accuracy

I probably should have led with this, but I wanted to explain why it’s what it is first. Our NCAAT Bracketology is below average in its final accuracy. Its early-season accuracy? Pretty good. Its mid-season accuracy? Solid. By the last week? Meh. We expect its final iteration, published after the conclusion of conference championship games on Selection Sunday, to beat only about a quarter of the bracketologies on Bracket Matrix when measured by Bracket Matrix’s scoring system. We’re hoping to improve this over the years (which will make the early-season bracketologies even better), but this is where it’s at right now. In practice, that means that as the season gets older, you should trust our model’s NCAAT Bracketology less and less relative to the field, especially in cases in which the model is dealing with oddities. (Were there probabilities, we’d still recommend trusting those, but even then you’d want to be aware if, say, a team had the worst NCSOS in the country, or was four games under .500, or was shellacking great teams and narrowly losing to good ones with abnormal consistency.)
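For reference, scoring in the Bracket Matrix family credits each correctly included team and adds a bonus that shrinks as your projected seed drifts from the real one. The point values below are placeholders, not Bracket Matrix’s actual ones; this is just the shape of the thing:

```python
# Generic bracketology scoring sketch: credit for each correct team,
# plus a bonus that decays with seed error. Point values are
# placeholders, not Bracket Matrix's real ones.

def score_bracket(projected: dict, actual: dict) -> int:
    """projected/actual map team name -> seed (1-16)."""
    score = 0
    for team, seed in actual.items():
        if team in projected:
            score += 1  # correct team in the field
            miss = abs(projected[team] - seed)
            score += max(0, 2 - miss)  # +2 exact seed, +1 one line off
    return score

actual = {"Team A": 1, "Team B": 2, "Team C": 5}
projected = {"Team A": 1, "Team B": 4, "Team D": 6}
# Team A: 1 + 2 (exact seed) = 3; Team B: 1 + 0 (two lines off) = 1;
# Team C missed entirely; Team D was a wasted projection.
print(score_bracket(projected, actual))  # -> 4
```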

***

So, that’s where we’re at. Conduct yourselves accordingly. Please ask questions, too. It helps us to have to dig into oddities like the ones mentioned above and prepare ourselves to adjust for them in the future.
