How Our Bracketology Works (2022, Final Version)

I believe our bracketology formula is finalized, and with that, this year’s model is complete. We never did get the probabilities up and running, but in back-testing against the 2019 and 2021 NCAA Tournaments, our model performed better than roughly 40% of the bracketologies on Bracket Matrix, giving us an expectation that it will beat maybe 25% of its counterparts this year (we did, after all, design our formula to be successful looking backwards, and we’d guess there are exceptions we’ve missed that we’ll learn about this year). Getting better. Year over year. Soon, perhaps, we will be average. (Practically speaking, we expect to miss a team or two on each bubble, and we expect to miss a handful of seedings by more than one seed line. In back-testing, we missed five teams by more than a seed line in each of 2019 and 2021, and we missed a few teams each year on the NIT/CBI bubble but none either year on the NCAAT/NIT bubble.)

Our Model’s General Approach

Generally, the best way we’ve found to reflect the selection committee’s seeding decisions is to assign each team a raw score, based on its ratings in the six ratings systems on the committee’s team sheets, and to then add and subtract points from each team’s score based on various “exceptions.” For a long time, we tried to include these exceptions formulaically, making Q1 win percentage and nonconference strength of schedule and all the other bits variables within one big equation. This didn’t work. It doesn’t seem to happen this way in the actual committee room. What seems to happen instead is that variables like those don’t matter until they cross certain thresholds, and once past said thresholds, they can matter a lot. 330th-ranked nonconference strength of schedule? You’re probably fine. 350th? It’s over.

The Guts of the Formula

Going off that, then, here’s what we do:

Raw Score

The raw score is a combination of NET, SOR, KPI, and KenPom: thirteen parts NET, thirteen parts SOR, twelve parts KPI, and one part KenPom, with each ranking transformed onto a normal distribution, which makes the gap between 1st and 2nd significantly wider than the gap between 21st and 22nd, and so on. This is the backbone of our seeding formula. It excludes BPI and Sagarin because, while those do appear on the committee’s team sheets, they don’t explain much of the variance between our raw score and the eventual seed list. In other words, this raw score is the best one we could figure out.

Raw score is scaled so that the highest possible raw score is around 61.5 points, the raw score of a bubble team is somewhere around 24 points, and a team on the NIT/CBI bubble might have a score around 18 points (we use the same formula for the NCAAT and the NIT; we may change that in future years, but we find the NCAAT formula to work well enough in back-testing).
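For the curious, here’s a minimal sketch of how a raw score like this could be computed. The field size, function names, and exact rescaling are illustrative assumptions of ours, not the model’s actual implementation; the inverse-normal transform is just one way to get the “wider gaps at the top” behavior described above:

```python
from statistics import NormalDist

N_TEAMS = 358  # assumed size of the Division I field
WEIGHTS = {"NET": 13, "SOR": 13, "KPI": 12, "KenPom": 1}

def rank_to_normal(rank, n_teams=N_TEAMS):
    """Map a 1-based rank onto a standard normal curve, so the gap
    between 1st and 2nd is much wider than between 21st and 22nd."""
    percentile = 1.0 - (rank - 0.5) / n_teams  # midpoint convention
    return NormalDist().inv_cdf(percentile)

def raw_score(ranks):
    """Weighted 13/13/12/1 blend of the transformed rankings, rescaled
    so a team ranked 1st in all four systems scores exactly 61.5."""
    total = sum(WEIGHTS.values())
    z = sum(WEIGHTS[k] * rank_to_normal(ranks[k]) for k in WEIGHTS) / total
    return 61.5 * z / rank_to_normal(1)
```

Under these assumptions, a team ranked in the 40s-to-60s across the four systems lands in the low 20s, which is consistent with the bubble range described above.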

The exceptions are as follows:

Q1 Win Percentage

If a team’s Quadrant I win percentage is lower than 25%, they lose 1.25 points. If it’s below 15%, they lose an additional 1.4 points. If they didn’t play any Q1 games, they lose 5 total points.

Q1 Win Total

If a team has five or more Quadrant I wins, they gain 1.75 points.

Worst Rating-That-Matters

If the four rating systems included in our raw score (NET, SOR, KPI, KenPom) all have a team ranked 52nd or better, said team gains 0.45 points.

Wins Above/Below .500

This one can go both ways. If a team is 14 or more games above .500, they receive 1 point. If a team is exactly two games above .500, they lose 1 point. If a team is exactly one game above .500, they lose 2.5 points. If a team is exactly .500, they lose 4 points. If a team is below .500, they lose 5 points.

Nonconference Strength of Schedule

If a team’s nonconference strength of schedule is, by the metric used on the team sheet, 340th or worse, the team loses 3 points.

Q1/Q2 Win Total

If a team has ten or more combined Quadrant I and Quadrant II wins, they gain 1.2 points.

Q1/Q2 Win Percentage

If a team has a .500 record or better combined in Q1 and Q2 games, they gain 0.6 points.

Q2/Q3/Q4 Losses

If a team has no losses in Q2, Q3, and Q4 games, they gain 1 point.

Top 2 Ranking

If a team is ranked first or second by two or more of the four rating systems included in our raw score, they gain 3 points.

Closing Argument

If a team loses a Q4 game in its conference tournament, they lose 3 points.

Nonconference Record

If a team plays eleven or more nonconference games and wins all of them, they gain 1.5 points.

Penultimate AP Poll Adjustment

If a team receives no votes in the AP Poll released on the Monday prior to Selection Sunday, they lose 1.5 points. If they’re ranked 29th or better, they receive points equal to half the difference between 29.5 and their ranking. So, the top-ranked team receives 14.25 points and the 29th-ranked team (going by votes received, listed at the bottom of the poll) receives 0.25 points. This helped us a lot with ironing out last year’s seed list for teams on the top ten seed lines, but it hurt our 2019 performance, so we’re curious how it will help or hurt this year. The case for including it is that the AP Poll reflects the narrative surrounding each team as well as some degree of recency, both of which can at the very least subliminally affect the committee. The case against including it is that its predictive power last year may have been inflated by the pandemic-driven scheduling disparities between teams around the country. Remember how NET had Colgate ranked in the top ten? This helped correct for that.
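Putting the exceptions together, here’s a minimal sketch in Python of how they could be applied as a single adjustment to the raw score. The dictionary field names are hypothetical ones of ours, and we’ve made two assumptions: the Q1 penalties stack (per the “additional” wording), and `ap_rank` runs 1–29 by votes received, with `None` meaning no votes:

```python
def exception_adjustments(t):
    """Sum of the threshold 'exceptions' applied on top of the raw
    score. `t` is a team dict with hypothetical field names."""
    pts = 0.0

    # Q1 win percentage (penalties stack below 15%)
    if t["q1_games"] == 0:
        pts -= 5.0
    else:
        q1_pct = t["q1_wins"] / t["q1_games"]
        if q1_pct < 0.25:
            pts -= 1.25
        if q1_pct < 0.15:
            pts -= 1.4

    # Q1 win total
    if t["q1_wins"] >= 5:
        pts += 1.75

    # Worst rating-that-matters: all four systems 52nd or better
    if max(t["net"], t["sor"], t["kpi"], t["kenpom"]) <= 52:
        pts += 0.45

    # Wins above/below .500
    margin = t["wins"] - t["losses"]
    if margin >= 14:
        pts += 1.0
    elif margin == 2:
        pts -= 1.0
    elif margin == 1:
        pts -= 2.5
    elif margin == 0:
        pts -= 4.0
    elif margin < 0:
        pts -= 5.0

    # Nonconference strength of schedule
    if t["ncsos_rank"] >= 340:
        pts -= 3.0

    # Q1/Q2 win total and win percentage
    q12_wins = t["q1_wins"] + t["q2_wins"]
    q12_games = t["q1_games"] + t["q2_games"]
    if q12_wins >= 10:
        pts += 1.2
    if q12_games > 0 and q12_wins / q12_games >= 0.5:
        pts += 0.6

    # No losses in Q2, Q3, and Q4 games
    if t["q2_losses"] + t["q3_losses"] + t["q4_losses"] == 0:
        pts += 1.0

    # Top-2 ranking in two or more of the four systems
    if sum(r <= 2 for r in (t["net"], t["sor"], t["kpi"], t["kenpom"])) >= 2:
        pts += 3.0

    # Closing argument: Q4 loss in the conference tournament
    if t["conf_tourney_q4_loss"]:
        pts -= 3.0

    # Perfect nonconference record across eleven-plus games
    if t["nonconf_games"] >= 11 and t["nonconf_losses"] == 0:
        pts += 1.5

    # Penultimate AP Poll
    if t["ap_rank"] is None:
        pts -= 1.5
    elif t["ap_rank"] <= 29:
        pts += (29.5 - t["ap_rank"]) / 2

    return pts
```

Note the stacking: a team with, say, a 10% Q1 win rate loses both the 1.25 and the 1.4.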

Example

For an example of how this score works, let’s look at Wake Forest, whose pre-Selection Sunday season is over. Wake Forest, this morning, was ranked 45th in NET, 67th in KPI, 47th in SOR, and 37th in KenPom. They had a 2-4 record in Q1 games and a 3-3 record in Q2 games. Their nonconference strength of schedule was rated 343rd by the NET-based metric, and they did not go undefeated in nonconference play. They received no votes in the most recent AP Poll. Their final record is 14 games above .500.

Based on the four rating system rankings, their raw score comes out to 23.6, which was 52nd-best in the country this morning. They receive 1 bonus point for having an overall record 14 or more games above .500, but they lose 3 points for having a bad nonconference strength of schedule, and they lose 1.5 points for receiving no votes in the most recent AP Poll. This makes their new score 20.1, 58th-best in the country.

Bracketing

Once we have the automatic bids determined, the NIT’s upper cut line determined, and the teams lined up in order of seeding, it’s as simple as following our impression of the NCAA’s bracketing principles for each tournament, with one exception: We don’t move NIT teams across seed lines for geographic convenience, as the committee is allowed to do. We found our previous attempts at doing this in the model introduced unnecessary confusion and chaos to the projections.

Before Sunday

This explains what we’ll be doing on Sunday, when résumés are final. It doesn’t quite explain what we did yesterday, and today, and what we’ll be doing tomorrow and Sunday before games tip off.

Predictive vs. Reflective

Our model is predictive, not reflective, meaning we try to project where the bracket will end up, not where it stands right now. To do this, we no longer attempt to project the rating systems themselves. We could try predicting how NET, SOR, KPI, and KenPom will each change, but we haven’t tested our methods rigorously enough to do that with any confidence for NET, SOR, and KPI, and our KenPom method is imperfect.

We do, however, look at a team’s median expected results the rest of the way: the results that, in a large set of simulations, are better than half of the outcomes and worse than the other half. We then assign Q1/Q2/Q3/Q4 wins and losses (and thereby overall wins and losses) accordingly. The result is that our model trends a little more toward the reflective than we’d like, but we’d rather err on that side than overstate the impact of future results.
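As a sketch, a median expected result can be pulled from simulations like this (the win probabilities are hypothetical inputs, and the model’s actual simulation engine is more involved than a coin-flip loop):

```python
import random
from statistics import median

def median_expected_wins(win_probs, n_sims=10_000, seed=0):
    """Simulate the remaining schedule many times; return the median
    win total (better than half the sims, worse than the other half)."""
    rng = random.Random(seed)
    totals = [sum(rng.random() < p for p in win_probs) for _ in range(n_sims)]
    return median(totals)
```

For a team with remaining win probabilities of 90%, 60%, and 30%, the median result is two wins, even though no particular pair of wins is guaranteed.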

Automatic Bids

We predict automatic bids for the NCAA Tournament by simulating the remainder of each conference tournament many times and assigning the auto-bid to the favorite. We predict automatic bids for the NIT the same way, except instead of assigning auto-bids to tournament favorites, we identify the median expected number of remaining automatic bids and assign those to, in order, the teams most likely to receive them. So, if there are two expected automatic bids remaining and the four teams that could receive them are, respectively, 75%, 40%, 35%, and 32% likely to receive them, we give the bids to the 75% and 40% guys.
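A rough sketch of the NIT side, treating each team’s bid chance as independent for illustration (in reality the simulations are joint across conference tournament brackets, and the function name is ours):

```python
import random
from statistics import median

def project_nit_auto_bids(bid_probs, n_sims=10_000, seed=0):
    """Find the median number of remaining NIT auto-bids across many
    simulations, then hand that many bids to the most likely teams."""
    rng = random.Random(seed)
    counts = [sum(rng.random() < p for p in bid_probs) for _ in range(n_sims)]
    n_bids = int(median(counts))
    by_likelihood = sorted(range(len(bid_probs)),
                           key=lambda i: bid_probs[i], reverse=True)
    return by_likelihood[:n_bids]
```

With the 75/40/35/32 example, the median simulation produces two bids, which go to the 75% and 40% teams.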

Bid Thieves

We use a similar approach for bid thieves to the one we use for NIT automatic bids, looking at the median expected number of instances of thievery. We only do this for the NCAAT, and it affects only our NIT bracketology, by determining where we set the NIT’s upper cut line. The result is that if the median expected number of bid thieves is two, there will be two at-large teams present in both our NCAAT Bracketology and our NIT Bracketology.

***

I think that’s it. If you have a quarrel with the model, it may be a fair one. If you have a suggestion for the model, it may be a good one. Either way, we always want to hear from you, and we’re always appreciative of feedback and commentary. Transparency is a goal of ours, and our ultimate hope with all of this is to give fans the best idea possible of where their team is headed when it comes to the postseason.

Thanks, everybody.

The Barking Crow's resident numbers man. Was asked to do NIT Bracketology in 2018 and never looked back. Fields inquiries on Twitter: @joestunardi.