Our NHL Model Is Delayed

We were hoping as recently as a few hours ago to roll out our NHL model today, with the first game starting right now.

It isn’t happening.

So, as part of the ongoing effort to keep myself accountable for expanding the scope and quality of our models overall (I’m writing these posts as much for myself as for you), here’s what happened. I’ve broken it down into three failures: logistical, technological, and statistical. I’ll cover those and then talk through what comes next.

First, the logistical.

One of the consequences of this not being a full-time job is that there are other jobs, which have their own demands. Sparing you a long story, things came up in those other jobs last week, and they delayed the actual building of the model. It’s a boring story, but it’s a common theme, and one we, as an organization, need to figure out how to preempt (and we’re making progress on that).

Then, the technological.

I wrote previously on here about how limited we are in terms of technological experience. Our models run on Microsoft Excel, even though we’re pretty sure there are much faster and more flexible platforms for them. Having begun learning Python in recent weeks, I’d hoped to use it to gather the data, but my hope that I could quickly figure out how to mine a couple hundred tables’ worth of data went unanswered. Figuring that out took too much time, and we ended up reverting to manually downloading and reassembling the tables. That wasn’t a huge hit time-wise, but it does mean we aren’t where we wanted to be on that avenue of improvement.
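For anyone curious what the Python attempt looked like, here’s a minimal sketch of the kind of table-scraping I was trying to get working. It’s an illustration, not our actual code: the URL pattern is a placeholder, and it assumes the stats live in plain HTML tables that pandas can read.

```python
# Minimal sketch of pulling a pile of HTML stat tables into one dataset.
# The URLs below are placeholders, not the actual source we pull from.
import pandas as pd

def grab_tables(urls):
    """Read every HTML table on each page and stack them into one DataFrame."""
    frames = []
    for url in urls:
        # read_html returns one DataFrame per <table> element on the page
        for table in pd.read_html(url):
            table["source_url"] = url  # remember where each row came from
            frames.append(table)
    return pd.concat(frames, ignore_index=True)

# Hypothetical usage: a couple hundred pages' worth of tables
# urls = [f"https://example.com/nhl/stats?page={i}" for i in range(1, 201)]
# data = grab_tables(urls)
# data.to_csv("nhl_tables.csv", index=False)
```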

We still would’ve been fine. The model was on track for an afternoon rollout. But then we hit the statistical failure: The model didn’t work.

It simulated hockey games just fine. There was rhyme and reason to how it did it, and that rhyme and reason made sense when written out in the design plan. But the design was flawed. We want this model to account for the fact that teams get better or worse as a season progresses. Basically, we want the model to be able to learn, and to be aware that it will keep learning as each season goes on. Due to a design flaw of my own, though, the model’s humility ended up being too strong: by midseason, it was assuming every team was about the same, quality-wise. We initially thought this just meant some variables needed adjusting (a possibility we knew existed when we first built the thing), but it turns out the problem is deeper than that, and fixing it requires a new structural approach to how we evaluate teams’ ability and then simulate the games themselves.
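To make the flaw a little more concrete, here’s a rough Python illustration. This is not our model; the numbers and the update rule are made up. It just shows how, if the term that regresses ratings toward the league average (the “humility”) is too strong, even consistently great and consistently bad teams get squeezed toward the middle within half a season.

```python
# Illustration of the failure mode, not our actual model. A team's rating is
# nudged toward each game result, then shrunk toward the league average.
# With an overly aggressive shrink factor, every team drifts to roughly
# average within half a season.

LEAGUE_AVG = 0.0

def update_rating(rating, game_result, learn_rate=0.10, shrink=0.50):
    """One post-game update: learn from the result, then regress to the mean."""
    rating += learn_rate * (game_result - rating)   # learn from the game
    rating += shrink * (LEAGUE_AVG - rating)        # "humility" toward average
    return rating

# A genuinely strong team (+1.0) and a genuinely weak team (-1.0), each
# posting results that match their true quality, over 41 games:
strong, weak = 1.0, -1.0
for _ in range(41):
    strong = update_rating(strong, game_result=1.0)
    weak = update_rating(weak, game_result=-1.0)

print(f"strong: {strong:.2f}, weak: {weak:.2f}")
# Both land within about 0.1 of the league average, despite a true gap of 2.0.
```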

So, the model isn’t here. We’re going back to the drawing board, and it’s unclear how long we’ll be there. The model might be here in a few days. It might be here in a few weeks. We might end up scrapping it. There’s incentive to make it happen, now that Stu’s a Senators fan, so if this is something you’re hoping to see, hold onto that hope. In the meantime, we’ll keep you posted with updates like these.

***

In other model news, one logistical victory was a change to how we operate our college basketball model. Again, sparing you the details (because they’re boring), we didn’t change anything about the model itself, but by shifting some priorities around, we were able to increase the simulation count from 1,000 to 10,000 when we ran it after Monday’s games, which lessens the chance of a small-sample outlier popping into our brackets. So the week wasn’t a total failure on the model front (and the work we did on the NHL model was worthwhile, though fruitless so far).
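As a rough back-of-the-envelope for why the bump matters: the noise on a simulated probability shrinks with the square root of the number of runs, so ten times the simulations cuts it by a factor of about three. A quick sketch (the 30% outcome here is hypothetical):

```python
# Why more simulations help: the sampling noise on a simulated probability
# falls with the square root of the number of runs.
from math import sqrt

def standard_error(p, n):
    """Standard error of a simulated frequency for an outcome with true probability p over n runs."""
    return sqrt(p * (1 - p) / n)

# Hypothetical outcome that truly happens 30% of the time:
for n in (1_000, 10_000):
    print(f"{n:>6} sims -> standard error {standard_error(0.30, n):.4f}")
# 1,000 sims give noise of about +/-1.4 points; 10,000 cut it to about +/-0.5.
```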

