After I published my WHL goalie confidence intervals, a few people questioned the validity of a non-team-specific method of evaluating goaltending. It’s a fair criticism, given that goaltender performance varies more widely in the WHL than in the NHL: the standard deviation of career sv% among WHL goalies active in 2013-14 was .01396, versus .009543 in the NHL.
Basically, there’s a theory that shot quality matters more in the WHL because the distribution of team defense is so much greater. The good defensive teams are really good, and the bad defensive teams are really bad. Because of this, the effects of shot quality (distance from the goal, breakaways, shots from the home plate area) that are so small as to be a non-issue at the NHL level come into play in the WHL. My theory is that there are just more bad goalies, and it’s harder for WHL teams than for NHL teams to replace one.
How do you test whether team defense has a role to play in boosting or lowering a goalie’s save percentage? A Ryan Miller in 2012-13 Buffalo situation is rare in the WHL. For the most part, bad goalies (however much of that can be attributed to team play) seem to congregate on bad teams.
I ended up taking goalies who were traded during their careers and comparing their save percentage splits in the context of how good each team was.
Process: I peer pressured Josh Weissbock into grabbing me data for all WHL goalies traded from 2000 to now (all the cool kids are doing it!). If a goalie had multiple seasons with one team prior to being traded, I combined them (weighting by games played per season) to estimate an overall save percentage while he played for that team. I also found the percentage of games won in each season and combined those into an overall win%, to roughly estimate how good each team was pre- or post-trade.
I essentially looked at the change in save percentage each goalie experienced after being traded and compared it with the change in win percentage. If better teams help their goalies save a higher proportion of shots, we should see a positive relationship: as the change in win% (i.e. how good your new team is compared to your old team) increases, your change in save percentage should also increase.
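The weighting-and-differencing described above can be sketched in a few lines. The goalie, the numbers, and the team win percentages below are all invented for illustration; the real inputs are the traded-goalie splits Josh pulled.

```python
# Sketch of the trade-split comparison described above.
# All numbers here are illustrative assumptions, not the actual data.

def weighted_sv_pct(stints):
    """Combine per-season (games, sv%) stints into one save percentage,
    weighted by games played."""
    total_games = sum(g for g, _ in stints)
    return sum(g * sv for g, sv in stints) / total_games

# Hypothetical goalie: two seasons before the trade, one after.
pre_trade  = [(40, 0.905), (55, 0.912)]   # (games played, sv%) per season
post_trade = [(60, 0.921)]

pre_sv  = weighted_sv_pct(pre_trade)
post_sv = weighted_sv_pct(post_trade)

# Win% is combined the same way to measure team quality pre- and post-trade.
pre_win, post_win = 0.42, 0.61            # illustrative team win percentages

delta_sv  = post_sv - pre_sv              # change in save percentage
delta_win = post_win - pre_win            # change in team win percentage
print(f"Δsv% = {delta_sv:+.4f}, Δwin% = {delta_win:+.2f}")
```

Each traded goalie contributes one (Δwin%, Δsv%) point to the scatterplot that follows.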
The dotted line means nothing. Well, it does, but don’t let the positive slope fool you into thinking there’s a giant correlation. What matters is that R^2 value of .0313. It tells us that the data points are scattered so far from the line that only about 3% of the change in save percentage is explained by the change in win percentage. That’s a really weak relationship!
The P-value tells us how often a relationship at least this strong would show up purely by chance if there were actually no connection between win% change and save% change: here, about 15% of the time. The usual threshold for significance is .05 or less–a result that would arise by chance only 1 in 20 times. So while there’s a hint of a relationship, it isn’t statistically significant: we can’t say with 95% confidence that a goalie moving to a better team will post a better save percentage.
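For anyone who wants to replicate the regression, this is the kind of fit that produces the R^2 and p-value quoted above. The data points below are invented placeholders, not the actual traded-goalie dataset.

```python
# A minimal version of the regression behind the R^2 and p-value above.
# The (Δwin%, Δsv%) points here are made up for illustration only.
from scipy.stats import linregress

delta_win = [-0.20, -0.10, -0.05, 0.00, 0.05, 0.10, 0.15, 0.25]
delta_sv  = [-0.004, 0.006, -0.010, 0.002, -0.003, 0.008, -0.001, 0.005]

res = linregress(delta_win, delta_sv)
print(f"slope = {res.slope:.4f}")
print(f"R^2   = {res.rvalue**2:.4f}")  # share of Δsv% explained by Δwin%
print(f"p     = {res.pvalue:.4f}")     # chance of a fit this strong with no true relationship
```

With a real dataset of traded goalies, an R^2 near .03 and a p-value near .15 would come out of exactly this call.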
There may be the barest hint of a relationship there, but it’s quite tenuous, and even if we had better stats available to evaluate team defense I doubt it would be strong enough to explain, say, a Taran Kozun. For now it doesn’t seem that goalies on terrible teams are disadvantaged to a significant extent, nor that those on good teams have an inflated save percentage.
I wrote a post for Progressive Hockey about using confidence intervals for NHL goalies, similarly to my post about WHL goaltending. If you didn’t really understand what’s going on in a confidence interval the first time, I explained it better this time–promise. IMO (having written it), this one deserves a read.
As always, I put my data below. If you open up that spreadsheet, there’s a pretty cool feature where you can change the probability of success you’re willing to accept (for the risk lovers among you).
You can also add any missing goalies; you just need their shots faced and save percentage. Select the three confidence interval equation cells–Low CI, High CI, CI Width–and drag them down from the bottom right-hand corner of the rightmost cell (the cursor turns into the fill handle) to fill the cells of your new row.
I first saw confidence intervals used for goalie save percentage last year, after the Roberto Luongo trade left Eddie Lack in the starting role in Vancouver. It may have been Eric Tulsky who tweeted out a cautionary 95% confidence interval showing that (I don’t remember the actual statistics, so humor me with my guesstimates) 20 games at a .920 save percentage left room for Lack to develop into anything between a .900 and a .940 save percentage goalie, or something of that kind.
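The guesstimated interval above can be sanity-checked with a normal-approximation binomial confidence interval. The shot count is my assumption (20 games at roughly 30 shots per game ≈ 600 shots), not a number from the original tweet.

```python
# Rough check of the interval described above, using a normal-approximation
# binomial confidence interval. The shot count is an assumed ballpark figure.
from math import sqrt

sv_pct = 0.920   # observed save percentage
shots  = 600     # assumed shots faced over ~20 games

se = sqrt(sv_pct * (1 - sv_pct) / shots)   # standard error of a proportion
margin = 1.96 * se                          # 95% confidence margin

low, high = sv_pct - margin, sv_pct + margin
print(f"95% CI: {low:.3f} to {high:.3f}")   # roughly .898 to .942
```

Reassuringly, the sketch lands close to the .900-to-.940 range in my guesstimate.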
The thing is, most of the time, goalies are voodoo. For a while, I’ve kind of ignored goalies because they’re tough to predict in comparison with individual player possession statistics. Even with a full season of data, it’s hard to know what they’ll put up the next year. Look at this year’s Vezina finalist Semyon Varlamov. Here are his last five seasons’ save percentages, most recent first: .927, .903, .913, .924, .909. If there’s this much variation at the season level, what the heck does that say about how individual games go, or even playoff series?
Why does this happen? To put it in a blunt and unsavory manner: there’s a lot of luck in hockey. I know it screws up our well-crafted sports narrative, but truthfully there are lucky bounces, deflections, and all kinds of random chance involved.
Basically, the message is: you know less than you think you do about how good a goaltender is. A 95% confidence interval tells you that if we built intervals like this over and over, 19 of 20 would contain the goaltender’s true save percentage talent. It’s the statistics equivalent of “I am basically sure this goaltender is about this good.” I may not stake my life on it, but I would bet $20 confidently.
I have something of a vested interest in researching how NHL teams view short players, and to what extent any height bias is warranted. In recent years, a number of the Winterhawks’ most prolific players have been on the shorter end of things when drafted. Sven Baertschi (5’10”), Brendan Leipsic (5’8”), Nic Petan (5’8”), Oliver Bjorkstrand (5’10”), and Derrick Pouliot (defenseman, 5’11”) all come to mind.
There are a number of intriguing perspectives on teams’ motivations for preferring tall players, and whether they are justified. This Arctic Ice Hockey question post aggregates a number of observations from members of the analytics community.
(When I was introducing my dad to this project (he’s an NBA fan), he pointed out, “Teams like to draft the player with the most potential. If you draft the small guy and he’s a bust, there’s not much you can say. If you pick the big player, at the end of the day you can still say you drafted the most athletic guy.” So at the very least, there’s the cover-your-ass perspective to be considered.)
Prior to its going dark, Extra Skater was the sole repository of advanced junior hockey stats. Fortuitously, Tyler Hunnex (@TylerHunnex, letsgobirds.com), a Seattle Thunderbirds blogger, had downloaded the entire WHL archive in Excel and was kind enough to send them along to me. Additionally, I was able to copy the entries for the top 1000 point getters in the CHL in 2013-14 from archive.org. Initially I only shared the links on Twitter, but I’m glad to post them permanently for downloading via Google Drive and Dropbox. Unfamiliar statistics are defined in the Glossary sheet of the CHL stats workbook.
Two evenly matched teams met in the WHL Final this year. Each team scored 20 goals over the seven-game series, four of them on the power play. Their save percentages were .918 and .916. If all you saw were these numbers, you wouldn’t know anything about the Winterhawks or the Oil Kings–their best lines and players, their style of offense, the efficiency of their defense. Looking at more telling numbers can supplement merely watching the series and give us an understanding of the above qualities.