Our NIT Bracketology, NIT probabilities, and NCAA Tournament Bracketology are all updated coming out of last night’s games. Here’s what’s up, mostly on the NIT side.
Kansas State Gets a Home Game
There wasn’t a lot of movement in and around the NIT bracket. Oklahoma moved away from the NIT, but our model had them aimed at the NCAA Tournament anyway. Pitt dropped a bad one to NC State, but Pitt’s safely in NIT territory and the loss came on the road, which made it a wash.
The biggest outcome in our neck of the woods was Kansas State’s 54–49 upset of Cincinnati at Fifth Third Arena. This was enough to push the Wildcats ahead of TCU in median final KNIT, which takes the Big 12’s exempt bid away from the Horned Frogs and gives it to Kansas State.
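For those curious about the mechanics: the conference’s exempt bid goes to whichever eligible team holds the higher median final KNIT across our season simulations. Here’s a minimal sketch of that comparison, with invented rating distributions rather than our model’s actual numbers:

```python
import random
import statistics

# Invented final-KNIT distributions for two Big 12 teams. In the real
# model these come from full season simulations; here we just sample
# from made-up bell curves for illustration.
random.seed(0)
SIMS = 10_000
kansas_state_finals = [random.gauss(18.5, 2.0) for _ in range(SIMS)]
tcu_finals = [random.gauss(18.2, 2.0) for _ in range(SIMS)]

# The exempt NIT bid goes to the team with the higher median final KNIT.
kansas_state_median = statistics.median(kansas_state_finals)
tcu_median = statistics.median(tcu_finals)

winner = "Kansas State" if kansas_state_median > tcu_median else "TCU"
print(f"Kansas State median final KNIT: {kansas_state_median:.2f}")
print(f"TCU median final KNIT: {tcu_median:.2f}")
print(f"Big 12 exempt bid: {winner}")
```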
Wake Forest Gets a 1-Seed
In collateral action, K-State’s win knocked Cincy out of the NCAA Tournament’s First Four Out, pushing Wake Forest into that zone. This makes Wake Forest an automatic NIT 1-seed, which in turn pushes Boise State down to the 2-line. More on Boise State below.
North Texas vs. TCU
In more collateral action, TCU moving out of seeded territory and into the road-game mosh made the Horned Frogs the closest possible first-round matchup for North Texas, shifting that potential DFW showdown one round earlier. There was a little more bracket movement, but this consequence was the most fun.
Model Talk: The Mountain West—Specifically, Boise State and San Diego State
With our full model back in action and Selection Sunday approaching, we like to spend this space talking about one area where our model deviates from consensus. Today’s? The Mountain West.
When it comes to mid-majordom’s most mountainous and western conference, our model is lower than the consensus. This is happening across the board. Bracket Matrix has New Mexico as a 9-seed. We have the Lobos as an 11. Bracket Matrix has Utah State as a 9-seed. We have the Aggies playing in the First Four. As of yesterday’s update, every other bracket on Bracket Matrix had San Diego State in the field, and exactly half had Boise State in as well. Our model has the Aztecs in the NIT and Boise State needing to win the MWC Tournament.
Last year, we made our Seed List subjective, reflective of what we perceived as shortcomings in our model, hoping to bridge the gap between our model (helpful) and the consensus (more accurate over the long term). This approach worked fine. Our Seed List was more accurate on seedings than our model, but our model correctly picked all 68 NCAA Tournament teams, whereas our Seed List had Virginia missing the field.
We’re probably going to do that again this year. We know our model is imperfect, especially when it comes to seedings. The consensus has a much better track record than our model does. A lot of the point of models is to avoid groupthink, and while there’s plenty of value in that, the NCAA Tournament selection committee is a group that thinks.
So, I’d trust the consensus when it comes to the Mountain West more than I’d trust our model. That said…
One area of seeding in which our model was really right last year was the seedings of the Mountain West teams.
Our Seed List was wrong.
The consensus was wrong.
Our model—which is bad at seedings—was right.
How off was the consensus? Bracket Matrix missed Nevada’s seeding by three seed lines, pegging the Wolf Pack as a 7-seed before they were announced as a 10. Bracket Matrix missed Utah State, Boise State, and New Mexico by two seed lines each. Bracket Matrix missed Colorado State by one seed line, and while it got San Diego State right, that still leaves an average miss of 1.67 seed lines across the Mountain West Conference. Assuming Bracket Matrix reflects what the committee will do if the committee is consistent with its previous iterations, this means the Mountain West was systematically underseeded, relative to precedent. Our model expected this, for reasons we’ll explain below, reasons which may or may not be relevant this year.
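For transparency, the arithmetic behind that 1.67:

```python
# Bracket Matrix's seed-line misses on last year's Mountain West bids,
# as listed above.
misses = {
    "Nevada": 3,
    "Utah State": 2,
    "Boise State": 2,
    "New Mexico": 2,
    "Colorado State": 1,
    "San Diego State": 0,
}

average_miss = sum(misses.values()) / len(misses)
print(f"Average miss: {average_miss:.2f} seed lines")  # 1.67
```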
What happened?
The Mountain West exhibits some unusual statistical trends. It’s more binary than other conferences, with a thick pack up top and some terrible teams at the bottom. It’s a conference where home-court advantage matters more than it does arguably anywhere else. It’s the highest-level conference which plays most of its games at elevation. These things might not mean anything. They also might make the Mountain West hard to measure.
Going by kenpom’s measurement of home-court advantage, the Mountain West averages weaker home courts than the Big 12 and—by three hundredths of a point per game—the SEC. But. This includes Fresno State. Limit the sample to only the nationally relevant teams, that “thick pack up top,” and every Mountain West court grades out significantly stronger than places like Houston and Florida, no matter how good Houston and Florida’s teams currently are.
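To show how much one Fresno State can drag down a conference-wide average, here’s a toy version of that calculation. The home-court values are invented for illustration and are not kenpom’s actual numbers:

```python
# Invented home-court advantage values, in points per game.
mountain_west_hca = {
    "New Mexico": 4.0,
    "Utah State": 3.8,
    "Boise State": 3.7,
    "San Diego State": 3.5,
    "Colorado State": 3.4,
    "Nevada": 3.3,
    "Wyoming": 3.2,
    "UNLV": 3.0,
    "Air Force": 2.9,
    "San Jose State": 2.6,
    "Fresno State": 1.8,
}
relevant = {"New Mexico", "Utah State", "Boise State",
            "San Diego State", "Colorado State", "Nevada"}

full_avg = sum(mountain_west_hca.values()) / len(mountain_west_hca)
top_avg = sum(mountain_west_hca[t] for t in relevant) / len(relevant)

print(f"Full-conference average HCA: {full_avg:.2f}")   # dragged down
print(f"Thick-pack-up-top average HCA: {top_avg:.2f}")  # notably higher
```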
Are we about to say home-court advantage is a bad thing? Not exactly. Obviously, home-court advantage itself is good and only good. It helps you win games at home. But if a team plays a lot better at home than they do on the road…that could also just mean the team is bad away from home. Houston and Florida’s home-court advantages grade out weakly. Houston and Florida play good basketball on the road.
A simple explanation here is that road record might matter more to the committee than we or anybody else is realizing. But our model doesn’t directly account for road wins, while road wins get a lot of buzz around the bracketology world. Regarding last year’s Mountain West, our model was in line with the committee and out of line with the bracketology world. If road wins were really important and the Mountain West lacked them, we’d expect our road-wins-blind model to overrate the Mountain West’s chances, not underrate them.
Let’s keep looking.
Of the six Mountain West teams that made last year’s tournament, only two reached the second round. None beat a team seeded better than themselves.
Just one season.
Small sample.
In 2023, four Mountain West teams made the NCAA Tournament. Only San Diego State won a game, and while the Aztecs won a lot of them, reaching the national championship game, they beat only one team seeded better than themselves. It was a great win. They upset Brandon Miller’s Alabama. But that was the Mountain West’s only such win, and it remains the Mountain West’s only such win since 2018.
In 2022, the Mountain West went 0–4 in the NCAA Tournament.
In 2021, the Mountain West went 0–2 in the NCAA Tournament.
In 2020, there was no tournament.
In 2019, the Mountain West went 0–2 in the NCAA Tournament.
In 2018, Nevada beat a better-seeded team than itself, upsetting Cincinnati in the second round only to lose to Loyola in the Sweet Sixteen.
In 2017, the Mountain West was a one-bid league. Nevada lost its opener.
It’s hard to win games as a seed-line underdog, and this sample is really, really small. We’re not saying the Mountain West is worse than perceptions indicate. We’re simply saying it might be, specifically at the end of the season. We think the committee might have inadvertently stumbled upon this, too.
Ken Pomeroy wrote a piece earlier this week pointing out that this year’s SEC might be overrated. In it, he explained that most of what we measure as conference strength is merely how well given groups of teams play in November and December. There’s no way to measure a conference’s strength after New Year’s, when conferences start playing almost exclusively intra-conference games.
November and December are a meaningful sample, but they’re not even half the season. They’re certainly not the most important time of year to be playing well. How do good conferences perform in March? Per Pomeroy, they regress. Among the power conferences, “good” leagues—those whose strength rates highly through the predominantly November-and-December nonconference sample—perform more poorly in the postseason than kenpom (the model) expects. Pomeroy offers two possible explanations, and I’d encourage you to click that link if you’re interested enough to be reading this. The bottom line is that it’s possible for conferences to be aggregately overvalued by the systems—mainly kenpom—which serve as our “How good is this team?” source of truth.
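To make the measurement problem concrete: only inter-conference games say anything about how one league stacks up against another, and almost all of those happen in November and December. A rough sketch with invented results (this is not Pomeroy’s actual method):

```python
from collections import defaultdict

# Invented game log: (team_a, conf_a, team_b, conf_b, margin_for_a).
games = [
    ("Alabama", "SEC", "Purdue", "B1G", 7),
    ("Auburn", "SEC", "Houston", "B12", -3),
    ("Iowa State", "B12", "Illinois", "B1G", 5),
    ("Tennessee", "SEC", "Florida", "SEC", 12),  # intra-conference
]

# Average scoring margin in inter-conference play is a crude proxy for
# conference strength. Intra-conference results carry no cross-league
# signal, so we skip them. After New Year's, nearly every game gets
# skipped, and the proxy freezes.
margins = defaultdict(list)
for team_a, conf_a, team_b, conf_b, margin in games:
    if conf_a != conf_b:
        margins[conf_a].append(margin)
        margins[conf_b].append(-margin)

for conf, ms in sorted(margins.items()):
    print(f"{conf}: {sum(ms) / len(ms):+.1f} avg nonconference margin")
```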
Again, we’re not saying the Mountain West is bad. We’re not even saying it’s always worse in March than it is in December. (The Mountain West is not a power conference, so it was not part of Pomeroy’s study.) We’re saying it’s possible that the Mountain West is a little overvalued in terms of its quality.
On the surface, this doesn’t have a whole lot to do with bracketology, or with the committee’s process. How good teams are influences their bracketing fate, but the strength of a team and its position in the brackets are two separate things. I promise we will tie this back to bracketology shortly.
ESPN’s BPI includes a lot of variables in its calculation. Among those are travel distance and altitude. Last year, the altitude piece received a lot of attention, with people noticing that BPI rated Mountain West teams roughly 30% worse than kenpom did. This was mostly panned. 30% worse? That seems ridiculous, and I do believe, gun to my head, that BPI was wrong. But what if BPI wasn’t wrong? What if BPI caught an important variable? What if there’s something about all that elevation and all that corresponding home-court advantage that inflates mathematical perceptions of Mountain West teams?
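Purely as illustration (ESPN doesn’t publish BPI’s internals, and this is not its formula), an altitude adjustment of that flavor could look something like this:

```python
# Hypothetical set of high-elevation home venues.
HIGH_ALTITUDE_TEAMS = {"New Mexico", "Utah State", "Colorado State",
                       "Wyoming", "Air Force"}

def neutral_floor_rating(team: str, raw_rating: float,
                         altitude_discount: float = 0.30) -> float:
    """Discount a raw efficiency-style rating for teams whose results
    may be inflated by altitude. The 0.30 echoes the roughly 30% gap
    people noticed between BPI and kenpom; it is not a real parameter.
    """
    if team in HIGH_ALTITUDE_TEAMS:
        return raw_rating * (1 - altitude_discount)
    return raw_rating

print(neutral_floor_rating("New Mexico", 20.0))  # 14.0
print(neutral_floor_rating("Houston", 30.0))     # 30.0
```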
I would hope that somebody is tracking the performances of kenpom, BPI, and other systems, but I haven’t done it and I’m unaware of anyone who has. There’s a way to test a lot of this out, and I am not capable of doing it today. Proceeding, then:
The popular explanation for why Bracket Matrix and the committee disagreed so strongly about the Mountain West’s seedings is that Jeff Sagarin stopped publishing his college basketball ratings, and this made BPI more prominent to the committee than it used to be. These days, there are usually three “predictive” ratings on the team sheet. Last year, there were only two. One of them was BPI, which was not particularly fond of Mountain West teams. This is a fair explanation. I do think it might be to blame. But our model doesn’t really consider BPI. Last year, it only considered it indirectly, through SOR (which is not a predictive rating but is calculated based on BPI). Why was our model right? Will it be right again?
This year, the team sheets are back to three predictive ratings. Sagarin’s has been replaced by Bart Torvik’s, and comically, at least in the context of this blog post, ESPN appears to have axed the altitude piece of BPI. ESPN either fixed a bug or chose to follow the herd. Maybe it’s a little bit of both. The prevalent thought, then, appears to be that the committee will gauge Mountain West teams “normally” again, that this altitude thing won’t hold them back, that 2024’s bracket was weird. The prevalent thought is that models like ours which systematically underrate the Mountain West, relative to consensus, are wrong. Maybe so. But last year, our model was right, and last year, our model didn’t incorporate BPI. It cared about SOR, but SOR was one of three résumé ratings, as will be the case again this year. And this year, SOR should be evaluating the Mountain West fairly conventionally anyway, with BPI’s altitude variable evidently flattened. What is going on??
To be completely honest, I don’t know. I have guesses, but this isn’t one of those things where other bracketologists are pointing to variables which simply don’t exist in my model, like how last year I didn’t have any deduction for teams with four or more Q3 losses, or any bonus for teams with Q1A wins. This isn’t one of those things where a team’s upcoming schedule is clearly advantageous, like when a team needs one more Q1 win to cross an arbitrary but important threshold and they’re going to play three more Q1 opponents. It’s also not one of those things where I can point to how distinctly my model treats selection and seeding. My model is paradoxically higher than anyone on VCU’s seeding but lower than most on VCU’s chances of making the tournament as an at-large. With the Mountain West, this is happening with both selection and seeding.
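For what it’s worth, adjustments like those two are simple to bolt onto a model. A minimal sketch with made-up weights, not our model’s actual values:

```python
def resume_adjustment(q1a_wins: int, q3_losses: int) -> float:
    """Hypothetical resume tweaks of the kind described above: a bonus
    per Q1A win, and a flat deduction once a team reaches four or more
    Q3 losses. Weights are invented for illustration."""
    bonus = 0.5 * q1a_wins
    penalty = 1.0 if q3_losses >= 4 else 0.0
    return bonus - penalty

print(resume_adjustment(q1a_wins=3, q3_losses=4))  # 0.5
print(resume_adjustment(q1a_wins=1, q3_losses=2))  # 0.5
```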
For the moment, I’d say San Diego State fans should be concerned by the outputs of The Barking Crow’s model, but not completely worried. Ditto Boise State fans and Utah State fans and New Mexico fans (I’ve got the Lobos with a 4-in-5 shot of making the field if they don’t win the MWC Tournament, and the Aggies at 50/50 in that scenario). This model sees things the consensus misses, but the consensus has the better track record. For our purposes, I’m going to keep looking for explanations. If today were Selection Sunday and ours were the only bracketology on Bracket Matrix with San Diego State out of the field, I’d probably put the Aztecs on that subjective Seed List and give Texas the boot. With eleven days to go instead, I’m interested in seeing it play out.
**
