How our Bracketology Model Works

As of a few hours ago, our College Basketball Bracketology is up and running. We’ll be adding a few features to it over the next few days, but the core of how it works isn’t changing.

Here’s where we’re getting our numbers:

We use the latest KenPom, Sagarin, and BPI ratings to simulate the rest of the college basketball season, including conference tournaments, 1,000 times. Once we have those results, we take the latest NET, KPI, and BPI SOR rankings and use an Elo-like system to adjust each of them based on the outcomes of games in each of the 1,000 simulations. This gives us an approximation of how likely a certain game is to remain a Quadrant 2 game, switch to a Quadrant 1 game, etc.
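To make the mechanics concrete, here's a minimal sketch of that simulation loop in Python. The team ratings, the `SCALE` constant, and the logistic win-probability form are all illustrative assumptions, not our actual blend of KenPom, Sagarin, and BPI:

```python
import random

# Hypothetical composite ratings -- stand-ins, not our real numbers.
RATINGS = {"Duke": 28.0, "Gonzaga": 30.5, "Penn State": 12.0}
SCALE = 10.0  # assumed: controls how rating gaps map to win probability

def win_prob(team_a, team_b):
    """Elo-like (logistic) win probability from the rating difference."""
    diff = RATINGS[team_a] - RATINGS[team_b]
    return 1.0 / (1.0 + 10 ** (-diff / SCALE))

def simulate_game(team_a, team_b, rng):
    """Flip a weighted coin for one game; return the winner."""
    return team_a if rng.random() < win_prob(team_a, team_b) else team_b

def simulate_season(schedule, n_sims=1000, seed=0):
    """Run the remaining schedule n_sims times, tallying wins per team."""
    rng = random.Random(seed)
    wins = {team: 0 for team in RATINGS}
    for _ in range(n_sims):
        for a, b in schedule:
            wins[simulate_game(a, b, rng)] += 1
    return wins
```

In the real model, each of the 1,000 simulated seasons would also feed the Elo-like re-ranking of NET, KPI, and BPI SOR, which is what lets a game's quadrant shift from one simulation to the next.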

We then use a formula derived from previous selection committees’ decisions to project the seed lists for the NCAAT and NIT in each of the 1,000 simulations. One part of this formula is a randomized error value, drawn from a normal distribution whose spread reflects how precisely the rest of the formula predicts previous brackets. Because our formula is fairly simple, our error value is fairly large. This is part of why you’ll see some results from our model that don’t make sense (somewhere between one and four simulations had Duke landing as an NCAAT 8-Seed). Still, we hope this error term captures the committee decisions that similarly don’t make sense.
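As a rough illustration of that error term, here's a sketch in Python. The weights in `committee_score` and the `ERROR_SD` value are hypothetical placeholders, not our fitted numbers:

```python
import random

ERROR_SD = 4.0  # assumed: std. dev. of the error, in seed-list spots

def committee_score(net, kpi, sor):
    """Illustrative stand-in for the committee formula: a weighted
    average of a team's rankings (lower is better)."""
    return 0.5 * net + 0.25 * kpi + 0.25 * sor

def simulated_seed_list_position(net, kpi, sor, rng):
    """One simulation's estimate: formula output plus normal noise."""
    return committee_score(net, kpi, sor) + rng.gauss(0.0, ERROR_SD)
```

Because the noise is drawn fresh in every simulation, a team near the bubble can land comfortably in the field in one run and miss entirely in another, which is how the occasional head-scratcher (like that Duke 8-seed) sneaks in.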

Our goal with this is to give you, the fan, the best reflection possible of each team’s postseason chances. In future seasons, we’ll want to grow more precise with our estimates, and we’ll want to increase our simulations to match the industry standard of 10,000, used by ESPN and FiveThirtyEight (not for bracketology, but for simulations of sports in general). Given our current time and equipment constraints, though, we’re doing the best we reasonably can.

If a projection has the “<1%” designation, it just means it appeared in at least one simulation, but it didn’t appear in enough to round up to 1%. If a projection has a “0%” designation, it didn’t happen in any of our simulations. This doesn’t mean it isn’t possible, but it’s so improbable that in 1,000 chances, it failed to occur 1,000 times.
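That display rule can be expressed in a few lines. This is a sketch of the logic, assuming we round to the nearest whole percent:

```python
def display_probability(count, n_sims=1000):
    """Format a simulation frequency the way the site shows it."""
    if count == 0:
        return "0%"            # never happened in any simulation
    pct = round(100 * count / n_sims)
    if pct == 0:
        return "<1%"           # happened, but rounds down to 0%
    return f"{pct}%"
```

So a projection that appeared in four of 1,000 simulations shows as "<1%", while one that never appeared shows as "0%".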

We’re aware that the sorting function doesn’t work well, and are hoping we might be able to fix it, along with some other unsavory aspects of the user experience (I just spent a frustrating half hour trying to get the site’s navigation menu to stop taking over the screen when you scroll to the left on a mobile device, to no avail). Bear with us, and let us know what we should most urgently improve.

How our brackets get made:

One of the features on its way is a predicted bracket for both the NCAAT and the NIT. This takes the median result for each team over the rest of the season and seeds teams according to that result. Some teams may appear in both brackets, because our NIT bracket will reflect the probability of bid thieves as well as the reality that not every conference tournament favorite wins their conference tournament. To determine how many bid thieves we estimate there to be, we’ll take the median NCAAT cut line from our 1,000 simulations and compare it to the seed list. To determine which teams to place as NIT automatic bids, we’ll take the median number of NIT automatic bids across our 1,000 simulations and assign them to the teams most likely to receive them.
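Here's a simplified sketch of that median-based seeding step. The team names, simulated positions, and cut line below are made up for illustration:

```python
from statistics import median

def median_positions(sim_results):
    """sim_results: {team: [seed-list position in each simulation]}.
    Returns each team's median position across the simulations."""
    return {team: median(positions) for team, positions in sim_results.items()}

def predicted_field(sim_results, cut_line):
    """Teams whose median position clears the (median) cut line,
    ordered from best median position to worst."""
    med = median_positions(sim_results)
    return sorted((t for t, p in med.items() if p <= cut_line), key=med.get)
```

In the real model, the cut line itself is the median NCAAT cut line across the 1,000 simulations, and the same machinery (with the NIT's automatic bids filled in first) produces the NIT field.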

We’re hoping to add a reflective bracket (the “where things stand” bracket that’s currently the industry standard), but that’s low on our priority list. If we do, we’ll be sure to distinguish between it and the predictive bracket.

Postseason Projections:

Our postseason projections (another feature on their way) will reflect how often, in our 1,000 simulations, each team reached significant stages of either the NCAAT or the NIT. Our goal with this is to, before the bracket is even finalized, let you know how likely it is that, say, Gonzaga will make the Final Four, or that Penn State will win the NIT.
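Computing those projections is just frequency counting over the simulations. A sketch, with hypothetical milestone labels:

```python
from collections import defaultdict

def stage_odds(sim_runs, n_sims=1000):
    """sim_runs: one set per simulation of (team, stage) milestones
    reached, e.g. ("Gonzaga", "Final Four"). Returns the share of
    simulations in which each milestone occurred."""
    counts = defaultdict(int)
    for run in sim_runs:
        for milestone in run:
            counts[milestone] += 1
    return {milestone: count / n_sims for milestone, count in counts.items()}
```

A milestone's share is then formatted with the same "<1%"/"0%" rules described above.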

Your Help

If you notice something off in our projections, please let us know. Our model was jury-rigged to handle conference tiebreakers, so there may be some flaws in our regular-season projections. Additionally, since I’m the person putting the brackets together (and doing so in a rush), it’s possible I’ll make a careless error in the placement of teams. I am far from perfect, and our model is far from perfect, but we all start somewhere. Thanks for joining us here at The Barking Crow.

-Joe Stunardi
@joestunardi
joestunardi at gmail.com

The Barking Crow's resident numbers man. Was asked to do NIT Bracketology in 2018 and never looked back. Fields inquiries on Twitter: @joestunardi.