What’s Wrong With ESPN’s BPI?

I posted this on Twitter, so if you already saw it there, you’re not getting much new here. The impetus for this is Kerry Miller highlighting that the likeliest reason BPI is so low on Missouri this year is an aggressive weighting of preseason ratings.

Including preseason ratings in late-season ratings isn’t inherently bad. In a lot of systems, preseason ratings retain predictive value deep into the season, so including them makes the ratings more accurate, and accuracy is good. The problem here is that BPI seems to be overweighting preseason expectations.

Even that wouldn’t be a huge problem if BPI’s architects had intentionally chosen to give those ratings a lot of weight based on factors like BPI’s performance in recent years. The problem here—or rather, my guess at the problem—is that I don’t think BPI has active architects. ESPN seems to be updating the metric sparingly, haphazardly, and in secret (elevation, for example, may have been quietly dropped from the model this year). In 2016, this level of preseason weight probably made sense. Since then, the transfer portal has upended college basketball rosters. My guess is that no one ever checked whether BPI was still weighting its preseason expectations properly.
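To make the weighting question concrete, here’s a minimal sketch of how a system might blend a preseason prior with in-season results. To be clear, BPI’s actual formula isn’t public; the function and the `prior_weight_games` parameter below are hypothetical, just one common way to let a prior’s influence shrink as games pile up:

```python
# Hypothetical illustration only: BPI's actual formula isn't public.
# A common approach blends a preseason prior with an in-season rating,
# letting the prior's weight shrink as games are played.

def blended_rating(preseason_rating: float,
                   in_season_rating: float,
                   games_played: int,
                   prior_weight_games: float = 8.0) -> float:
    """Mix a preseason prior with in-season results.

    prior_weight_games acts like a count of "phantom games" of prior
    evidence: the larger it is, the longer preseason expectations dominate.
    """
    w_prior = prior_weight_games / (prior_weight_games + games_played)
    return w_prior * preseason_rating + (1 - w_prior) * in_season_rating


# 30 games into the season, a prior worth 8 phantom games still supplies
# about 21% of the rating; a prior worth 20 phantom games supplies 40%.
# The latter is the kind of overweighting being described here.
print(blended_rating(0.0, 10.0, games_played=30))                           # ~7.9
print(blended_rating(0.0, 10.0, games_played=30, prior_weight_games=20.0))  # 6.0
```

If a system sets that prior weight for one era of roster continuity and never revisits it, the preseason rating keeps claiming a share of the final number that the sport no longer justifies.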

Again, this is all just a guess. But the fact we even have to guess is frustrating. Without further ado…what I think is going on with BPI, copied verbatim from that Twitter thread:


My half-guess understanding of ESPN’s college basketball analytics problem is that ESPN reduced its analytics headcount and people stopped trying to proactively update systems like BPI. Any updates—like the potential elevation one this year—are reactive and a little slapdash.

They also stopped being transparent when their systems stopped being new. This might be about headcount too, but I think it’s more about the market. In the early 2010s, analytics were marketable. Now, they’ve mostly been replaced by betting odds in the public diet.

This is mostly fine. Betting odds are more accurate for single games. But BPI doesn’t only affect ESPN’s credibility. BPI is prominent. It’s on the bottom line. It’s on broadcasts. (Broadcasters are often asked to cite metrics they do not understand.) It’s on the team sheets.

BPI plays some role in teams’ outcomes. It’s not a huge role—last I checked, BPI wasn’t strongly predictive of a team’s selection/seeding—but it’s a factor. Less importantly (annoying for me, but unimportant), BPI impacts fans’ perceptions of analytics and their accuracy.

It’d be one thing if BPI were just spitting hot takes. Outliers are good. They can be useful. But when they’re outliers due to systematic flaws that only get updated a year late under the cover of darkness, that’s bad. The lack of transparency is rough.

I wish ESPN would annually publish a post explaining in detail how BPI and SOR work. If they’re worried about losing proprietary knowledge, whatever, don’t include the formulas. But give people something on an annual basis. Explain what you’re doing. Take some accountability.

Should the NCAA take BPI and BPI-based SOR off the team sheets? Yes. Axe BPI and find someone to make a kenpom-based SOR. I don’t think you even need to replace BPI. Do I get why the NCAA hasn’t done this? Yes. Year-over-year consistency is good. Give teams a steady target.

Thankfully, BPI was never indispensable. What I’m more worried about is something similar happening with FPI and with football SOR. FPI isn’t popular, but it has a ridiculously good track record. Would be a shame to lose that.

**

