In my last post I looked at what turned out not to be much of a disagreement between Dave Berri and Neil Paine about what happens to teams when they lose their leading scorer. To recap, Berri said that replacing a team’s leading scorer (using one definition) with an average player would have cost a team about 3 wins on average last year. Neil Paine said (with a little translating on my part) that replacing a team’s leading scorer (using a different definition) with somebody unknown (namely, however the team actually shifted player minutes around in real life) would cost teams 2 to 2.5 wins last year, and perhaps closer to 5 in general over the last 25 years. Today Paine put up another analysis, this time looking only at ‘inefficient scorers’. What did this change?
First, a quick summary of Neil’s update: he found leading scorers who were below average in either offensive rating or true shooting percentage. He also switched from looking at offense only to the overall change in efficiency differential. In the end, Neil found that losing even an inefficient leading scorer costs a team 1.2 points of differential. Neil again makes the case that replacing a high-usage player is hard to do.
A few notes. First, this analysis is still not the same as Berri’s. Berri assumed that a leading scorer is replaced by an average player; it is unclear who replaces him in Neil’s analysis. It’s likely a conglomeration: some players who already play will absorb more usage, and some guys who don’t play (or play little) will pick up minutes as well. But it’s hard to evaluate the quality of that replacement usage relative to an average player. Second, doing similar math to what I did last time, we can guess that the 1.2 points of efficiency change are only worth a few wins. In fact, if we assume that the leading scorers from Neil’s first analysis were average defenders (and thus didn’t change the differential beyond what Neil found for offense), the effect has shrunk (if leading scorers are in general good defenders, then the effect has shrunk even more). And this is what we would expect: if you look at leading scorers who are probably worse players (by virtue of being below average in offensive rating or TS%), losing them should hurt their teams less.
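To make the “few wins” arithmetic concrete, here is a minimal sketch. It assumes the common rule of thumb that one point of per-game scoring differential is worth roughly 2.7 wins over an 82-game season; that multiplier is my assumption, not a figure from either post.

```python
# Rough conversion from per-game point differential to wins.
# WINS_PER_POINT is an assumed rule-of-thumb value (~2.7 wins per
# point of differential over 82 games), not a number from Neil's post.
WINS_PER_POINT = 2.7

def differential_to_wins(diff_points, wins_per_point=WINS_PER_POINT):
    """Estimate the win value of a change in per-game differential."""
    return diff_points * wins_per_point

print(differential_to_wins(1.2))  # about 3.2 wins
```

Under that assumption, 1.2 points of differential is only about 3 wins, which matches the “few wins” guess above.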
A few new issues have also arisen. One is using players who are below average in either offensive rating or TS%. I’m not sure why this criterion was chosen. If you wanted to look only at low-efficiency shooters, I would have stopped at TS%. Offensive rating folds in a variety of contributions, including rebounding and assists on top of scoring. If a player is below average in offensive rating, it would make a lot of sense that losing him should actually help the team; losing a player who is below average in TS% is less clear-cut, because he could make up for his shooting by contributing in other facets of the game (assists, rebounding, steals, blocks, and so on). Without knowing which player is on the list for which reason, it’s hard to tell which group contributes more to the apparent benefit of ‘inefficient scorers’. Some of the big positive numbers come from players who were above average according to bball-reference’s Win Shares, like 2005 Rip Hamilton (12.1), 2004 Zach Randolph (24.9), and even 1999 Allen Iverson (17.8), despite the fact that they were presumably below average in TS%. Obviously the change in differential may be disproportionate to their Win Shares, but again we don’t know who they were replaced by. If an average player is replaced by someone terrible, the team could easily end up with a big change in differential.
Another issue is raised by an analysis done by ElGee. He found that if a team had an above-average offensive rating without its leading scorer, it was better off without that scorer. That makes sense: if the team is above average playing without an ‘inefficient’ guy, then the scorer is likely holding it back. This result runs counter to Neil’s claim; a good team does not want an inefficient scorer. On the other hand, if the team was below average without its scorer, it played *better* with the scorer. ElGee takes that as evidence that an inefficient scorer is helping a bad team, but that too is not surprising. If a team is below average, then even an inefficient scorer may be better than the other guys it has. Additionally, if the inefficient scorer is actually a positive (like Rip or Iverson above), then it would make a lot of sense that removing him would leave a below-average team behind.
In short, I don’t think the conclusions have changed from last time. Leading scorers, even some who might be described as ‘inefficient’, can indeed help their teams win. However, scoring is not the only thing that leads to winning; taking other factors into account would do a much better job of identifying who helps a team win. Again, I don’t think there’s a real disagreement here.
So what would I do if I had the data and wanted to be a bit more convinced by Neil’s analysis? If I were just going to redo the second post, I would limit the sample to players who were below average in TS%. Those players are, very reasonably, “inefficient scorers”. But this still leaves the issue that some of those players could help their team win via other routes, creating the illusion that inefficient shooting can be helpful in some cases. For that reason I would include some measure of overall player productivity as a predictor in a regression (I won’t even argue too much if it’s Win Shares).
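As a sketch of that sample restriction, here is what the filter would look like; the player records and field names are hypothetical stand-ins, not real data.

```python
# Hypothetical leading-scorer records; names, TS% values, and the
# league-average figure are all made up for illustration.
leading_scorers = [
    {"name": "Player A", "ts_pct": 0.510, "league_avg_ts": 0.540},
    {"name": "Player B", "ts_pct": 0.565, "league_avg_ts": 0.540},
    {"name": "Player C", "ts_pct": 0.525, "league_avg_ts": 0.540},
]

# Keep only players below the league average in TS% -- the proposed
# definition of an "inefficient scorer".
inefficient = [p for p in leading_scorers
               if p["ts_pct"] < p["league_avg_ts"]]
print([p["name"] for p in inefficient])  # ['Player A', 'Player C']
```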
It would look something like team efficiency differential = WS + Player On + intercept, where Player On would be a categorical variable marking whether the differential comes from time when the player was available. WS would be the win shares earned by the player in the minutes he played, and a negative win shares number estimated from what he would have earned had he played the minutes he missed. For example, the first player in Neil’s list is Dominique Wilkins. The Hawks had a differential of 2.6 when he played, and he generated 10.8 WS, so that line of data would be 2.6 = 10.8 + 1 + (intercept). When he didn’t play, the team had a differential of .5; he missed about 400 possessions, which is a bit over 4 games, and in 4 games Dominique would have produced about half a win. So that line would be .5 = -.5 + 0 + (intercept). We want the missing wins to be negative because they were essentially taken away from the team when Dominique missed that time. And so on for all the other players. If the beta weight for Player On is positive even after accounting for a player’s productivity, that would be evidence that inefficient shooting can still contribute to a team’s winning. This regression could include all the players from Neil’s original article so that a good range of TS% is represented.
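A minimal sketch of that regression, fit by ordinary least squares. The two Wilkins rows come from the numbers above; the rows for the other two players are invented purely so the fit has more than one player, and the real analysis would use every player in Neil’s sample.

```python
import numpy as np

# Each player contributes two rows: one with him on (player_on = 1,
# WS = his win shares) and one with him off (player_on = 0,
# WS = negative estimated missed wins).
rows = [
    # (differential, WS, player_on)
    ( 2.6, 10.8, 1),   # Wilkins on  (numbers from the post)
    ( 0.5, -0.5, 0),   # Wilkins off (missed wins entered as negative)
    ( 4.0,  9.0, 1),   # hypothetical player B on
    ( 1.0, -1.0, 0),   # hypothetical player B off
    (-1.0,  5.0, 1),   # hypothetical player C on
    (-2.5, -0.7, 0),   # hypothetical player C off
]
y = np.array([r[0] for r in rows])
X = np.array([[r[1], r[2], 1.0] for r in rows])  # WS, player_on, intercept

# Ordinary least squares via numpy's least-squares solver.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ws_coef, on_coef, intercept = beta

# A positive on_coef after controlling for WS would suggest the leading
# scorer adds something beyond his measured productivity.
print(ws_coef, on_coef, intercept)
```

With real data, the sign and size of the Player On coefficient is the quantity of interest, not the toy numbers here.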
Alternatively, Neil could look only at leading scorers who were below average on WS48. If a player is below average in general, there is not as much concern that he is helping his team win in ways besides scoring (although he could still be better than the guy who replaces him when he doesn’t play). If a below-average player is your leading scorer and your team still does better with him on the court, that would be pretty impressive.