
Yahoo’s Buzzing the Net — “World Junior 2015: Finland’s Power Play Woes Provide an Opportunity for Canada”

In two games, Finland has had eleven power play opportunities (including over a minute of 5-on-3) and scored on none of them. They've hardly generated any of the scoring chances we'd normally expect from a power play. When two Finns collided during a simple power play breakout against the U.S., Gord Miller remarked that the team looked more collected in 5-on-5 play. The numbers bear that out: against the U.S., Finland had 12 shot attempts in 10 minutes of power play time, a meager 1.2 shot attempts per minute. That would put them dead last in the NHL this year, behind even Buffalo. I watched all of Finland's power plays thus far to try to discover why a team that's pretty good at even strength implodes with the man advantage…

Yahoo’s Buzzing the Net — “World Junior 2015: Eichel a statistical nightmare for opponents as Finnish opposition falls flat”

If you follow me on Twitter, you probably know I’ve accepted a position doing CHL stats writing with Yahoo’s Buzzing the Net. This is my first post for them. I’ll try to remember to link all of them here, for my own records if not for anyone else’s viewing (hi mom!).

Yesterday, I asked my Twitter followers to pick a game for me to look at from an analytics perspective. They chose Finland versus the USA, my first look at Jack Eichel in action, and what glorious action it was!

I looked at each team's even strength shot attempts (missed, blocked, or on net), and who was on the ice for them. Think of it as a plus-minus, only instead of tracking goals scored for and against, it tracks shot attempts. (You probably know this as Corsi.) Say the U.S. attempts a shot that misses the net. Even though it missed, it's a positive sign that the players on the ice made good plays to get into the offensive zone. In the long run, those plays add up to more goals for than against, but goals are very fluky (just think about that wonky Finnish goal!). So shot attempts do a better job of showing us which players are likely to drive more goals for than against. That's the short version of the puck possession story; I'll write more on this in the future. For now, let's look at some numbers…
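For anyone curious about the mechanics, here's a minimal sketch of how that kind of on-ice tally works. The events and player groupings are invented for illustration; real tracking data records full five-skater units, strength state, and more.

```python
from collections import defaultdict

# Each even-strength shot attempt records which team attempted it and which
# skaters were on the ice for each side. These events are made up.
events = [
    {"attempting_team": "USA", "on_ice": {"USA": ["Eichel"], "FIN": ["Honka"]}},
    {"attempting_team": "FIN", "on_ice": {"USA": ["Eichel"], "FIN": ["Honka"]}},
    {"attempting_team": "USA", "on_ice": {"USA": ["Eichel"], "FIN": ["Honka"]}},
]

def shot_attempt_plus_minus(events):
    """Plus-minus style tally: +1 for every attempt a player's team takes
    while he's on the ice, -1 for every attempt the other team takes."""
    tally = defaultdict(int)
    for ev in events:
        for team, skaters in ev["on_ice"].items():
            delta = 1 if team == ev["attempting_team"] else -1
            for player in skaters:
                tally[(team, player)] += delta
    return dict(tally)

print(shot_attempt_plus_minus(events))
# {('USA', 'Eichel'): 1, ('FIN', 'Honka'): -1}
```

A player who is consistently on the ice for more attempts for than against ends up with a positive tally, which is the plus-minus analogy above.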

Zone-Adjusted Player Usage Charts

I wrote a piece for Progressive Hockey about adjusting cross-team player usage charts for differing zone starts among teams.

Is WHL Goaltenders' Save Percentage Affected by Team Defense?

After I published my WHL goalie confidence intervals, a few people questioned the validity of a non-team-specific method of evaluating goaltending. It's a fair criticism, given that the performance of goaltenders varies more widely in the WHL than in the NHL. The standard deviation of career sv% among WHL goalies active in 2013-14 was .01396; the same figure for the NHL was .009543.

Basically, there's a theory that shot quality matters more in the WHL because the spread in team defense is so much wider. The good defensive teams are really good, and the bad defensive teams are really bad. Because of this, the effects of shot quality (distance from the goal, breakaways, shots from the home plate area) that are so small as to be a non-issue at the NHL level come into play in the WHL. My theory is that there are simply more bad goalies, and that it's harder for WHL teams than for NHL teams to replace one.

How do you test whether team defense plays a role in boosting or lowering a goalie's save percentage? A Ryan Miller on 2012-13 Buffalo situation (a good goalie stranded on a bad team) is rare in the WHL. For the most part, bad goalies (however much of that can be attributed to team play) seem to congregate on bad teams.

I ended up taking goalies who were traded during their careers and comparing their save percentage splits in the context of how good each of their teams was.

Process: I peer-pressured Josh Weissbock into grabbing me data for all WHL goalies traded from 2000 to now (all the cool kids are doing it!). If a goalie had multiple seasons with one team prior to being traded, I combined them (weighting by games played per season) to estimate an overall save percentage while he played for that team. I also found the percentage of games won for each season and combined those into an overall win%, to roughly estimate how good each team was pre- or post-trade.
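Here's a minimal sketch of that combining step, assuming per-season rows with games played, save percentage, and win percentage. The field names and numbers are invented for illustration, not the actual scraped data.

```python
def combine_seasons(seasons):
    """Games-played-weighted average of save% and win% across the seasons
    a goalie spent with one team (pre- or post-trade)."""
    total_games = sum(s["games_played"] for s in seasons)
    sv_pct = sum(s["save_pct"] * s["games_played"] for s in seasons) / total_games
    win_pct = sum(s["win_pct"] * s["games_played"] for s in seasons) / total_games
    return sv_pct, win_pct

# Invented example: two seasons with the old team, one with the new team.
pre_trade = [
    {"games_played": 45, "save_pct": 0.898, "win_pct": 0.40},
    {"games_played": 20, "save_pct": 0.905, "win_pct": 0.45},
]
post_trade = [{"games_played": 55, "save_pct": 0.912, "win_pct": 0.62}]

pre_sv, pre_win = combine_seasons(pre_trade)
post_sv, post_win = combine_seasons(post_trade)

# These two deltas are what get compared across all traded goalies.
print("change in save%:", round(post_sv - pre_sv, 4))
print("change in win%:", round(post_win - pre_win, 3))
```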

I essentially looked at the change in save percentage each goalie experienced after being traded and compared it with the change in win percentage. If better teams help their goalies save a higher proportion of shots, we should see a positive relationship: as the change in win% (i.e. how good your new team is compared to your old team) increases, your change in save percentage should also increase.


The dotted line means nothing. Well, it does, but don't let the positive slope fool you into thinking there's a giant correlation. What matters is that R^2 value of .0313. It tells us the data points are scattered so far from the line that only about 3% of the variation in save percentage change is explained by the change in win percentage. That's a really weak relationship!


The p-value tells us how often we'd see a relationship at least this strong purely by chance if there were really no connection between win% change and save% change. Here, that's about 15% of the time. The usual threshold for statistical significance is .05: a result that would show up by chance only 1 in 20 times. So while there may be a relationship, it isn't statistically significant; we can't say with 95% confidence that a goalie moving to a better team will post a better save percentage.
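If you want to reproduce this kind of check on your own data, here's a sketch of the regression using scipy and entirely made-up numbers (each pair stands for one traded goalie).

```python
from scipy import stats

# Made-up numbers: each pair is one traded goalie, with the change in his
# team's win% and the change in his save% after the trade.
win_pct_change = [0.20, -0.05, 0.10, 0.30, -0.15, 0.05]
save_pct_change = [0.004, -0.006, 0.010, -0.002, 0.001, -0.008]

result = stats.linregress(win_pct_change, save_pct_change)
print("slope:", result.slope)
print("R^2:", result.rvalue ** 2)  # share of the variation in save% change explained
print("p-value:", result.pvalue)   # how often a fit this strong arises by chance alone
```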

There may be the barest hint of a relationship there, but it’s quite tenuous, and even if we had better stats available to evaluate team defense I doubt it would be strong enough to explain, say, a Taran Kozun. For now it doesn’t seem that goalies on terrible teams are disadvantaged to a significant extent, nor that those on good teams have an inflated save percentage.

Data via Google Drive and Dropbox

Confidence Intervals for NHL Goaltenders

I wrote a post for Progressive Hockey about using confidence intervals for NHL goalies, similar to my post about WHL goaltending. If you didn't really understand what's going on in a confidence interval the first time, I explained it better this time, promise. IMO (having written it), this one deserves a read.

As always, I put my data below. If you open up that spreadsheet, there’s a pretty cool feature where you can change the probability of success you’re willing to accept (for the risk lovers among you).

You can also add any missing goalies; you just need their shots faced and save percentage. Select the three confidence interval equation cells (Low CI, High CI, CI Width) and drag them down from the bottom right-hand corner of the rightmost cell to fill the cells of your new row.
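If you'd rather compute those cells outside the spreadsheet, here's a minimal sketch assuming the interval is the standard normal approximation to a binomial proportion; the spreadsheet's actual formula may differ slightly.

```python
import math

def save_pct_confidence_interval(shots_faced, save_pct, z=1.96):
    """Approximate confidence interval for a goalie's save percentage using
    the normal approximation to a binomial proportion. z = 1.96 gives a 95%
    interval; a smaller z gives a narrower (less confident) interval.
    This is a guess at what the Low CI / High CI / CI Width cells compute."""
    std_err = math.sqrt(save_pct * (1 - save_pct) / shots_faced)
    low = save_pct - z * std_err
    high = save_pct + z * std_err
    return low, high, high - low

# e.g. a goalie with a .920 save percentage on 600 shots faced
print(save_pct_confidence_interval(600, 0.920))
```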

Dropbox / Drive

Confidence Intervals for WHL Goaltenders

I first saw confidence intervals used for goalie save percentage last year, after the Roberto Luongo trade left Eddie Lack in the starting role in Vancouver. It may have been Eric Tulsky who tweeted out a cautionary 95% confidence interval showing that (I don't remember the actual statistics, so humor me with my guesstimates) 20 games at a .920 save percentage left room for Lack to develop into anywhere between a .900 save percentage goalie and a .940 one, or something of that kind.

The thing is, most of the time, goalies are voodoo. For a while I've mostly ignored goalies because they're tough to predict compared with individual player possession statistics. Even with a full season of data, it's hard to know what they'll put up the next year. Look at this year's Vezina finalist Semyon Varlamov. Here are his last five seasons' save percentages, most recent first: .927, .903, .913, .924, .909. If there's this much variation at the season level, what the heck does that say about how individual games go, or even playoff series?

Why does this happen? To put it in a blunt and unsavory manner: there’s a lot of luck in hockey. I know it screws up our well-crafted sports narrative, but truthfully there are lucky bounces, deflections, and all kinds of random chance involved.
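To get a feel for how much season-to-season swing chance alone can produce, here's a quick simulation. It assumes a goalie whose true talent never changes from .915 and who faces about 1,700 shots a season; both numbers are made up for illustration.

```python
import random

random.seed(1)
TRUE_TALENT = 0.915      # assume the goalie's underlying ability never changes
SHOTS_PER_SEASON = 1700  # roughly a starter's workload; a made-up round number

# Every shot is saved with probability .915, independent of everything else.
# Any spread in the season-by-season results below is pure chance.
for season in range(1, 6):
    saves = sum(random.random() < TRUE_TALENT for _ in range(SHOTS_PER_SEASON))
    print(f"season {season}: {saves / SHOTS_PER_SEASON:.3f}")
```

Run it a few times and the simulated seasons bounce around by a point or more of save percentage from nothing but shot-by-shot randomness, even though the goalie's ability never moves.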

Basically, the message is: you know less than you think you do about how good a goaltender is. A 95% confidence interval tells you that, 19 times out of 20, an interval built this way will contain the goaltender's true save percentage talent. It's the statistics equivalent of "I am basically sure this goaltender is about this good." I may not stake my life on it, but I would bet $20 confidently.


Decoding the Draft: The Effect of Height on Draft Position

I have something of a vested interest in researching how NHL teams view short players, and to what extent any height bias is warranted. In recent years, a number of the Winterhawks' most prolific players have been on the shorter end of things when drafted. Sven Baertschi (5'10"), Brendan Leipsic (5'8"), Nic Petan (5'8"), Oliver Bjorkstrand (5'10"), and Derrick Pouliot (a defenseman, 5'11") all come to mind.

There are a number of intriguing perspectives on teams’ motivations for preferring tall players, and whether they are justified. This Arctic Ice Hockey question post aggregates a number of observations from members of the analytics community.

(When I was introducing my dad to this project (he’s an NBA fan), he pointed out, “Teams like to draft the player with the most potential. If you draft the small guy and he’s a bust, there’s not much you can say. If you pick the big player, at the end of the day you can still say you drafted the most athletic guy.” So at the very least, there’s the cover-your-ass perspective to be considered.)


