Recently Dave Berri put up a post examining what would have happened this past year if every NBA team lost its top scorer. Neil Paine then put up a post looking at what happens to team offense when a team loses its leading scorer. The tone of Paine's article suggested it was a refutation of Berri's post (although that became less clear in the comments), so I thought I would go through both and see how they actually compare.
Berri starts his piece by noting that a few teams have recently lost prized scorers (such as Iverson, Carmelo, and Rudy Gay) and were predicted to fail afterward. Curiously, they did not fail. He points out that if you look at Wins Produced, this is not surprising; scoring is not the only thing that leads to winning. He then produced a table showing how many wins each team was expected to have this season and how many it would have been expected to get if its leading scorer (determined by most points scored) were replaced by an average (WP48 = .100) player. Berri notes that 19 teams would be worse off without their leading scorer, 4 very much so. 11 teams would be better off with an average player, 1 very much so (take a bow, Bargnani).
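Since Wins Produced credits a player with WP48 × minutes / 48 wins, the replacement exercise Berri describes is easy to sketch. The function name and the sample numbers below are mine, for illustration only, not Berri's actual figures:

```python
# Sketch of the replacement calculation described above (my own illustration).
# Wins Produced credits a player with WP48 * minutes / 48 wins, so swapping
# him for an average player (WP48 = .100) costs the team the gap between them.

def wins_lost_if_replaced(wp48, minutes, avg_wp48=0.100):
    """Wins a team gives up by replacing this player with an average one."""
    return (wp48 - avg_wp48) * minutes / 48.0

# Made-up example: a leading scorer with a WP48 of .150 over 3,200 minutes
# is worth roughly 3 wins more than an average player.
print(round(wins_lost_if_replaced(0.150, 3200), 1))  # → 3.3
```

A below-average leading scorer (WP48 under .100) comes out negative here, which is exactly the Bargnani case in Berri's table.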
The point, in general, is that leading scorers are generally above average, but only generally. If all you look at is scoring, then sometimes you'll make a good decision and sometimes you won't. Broadly speaking, you would be right to play your leading scorer; if you add up the numbers in the table, teams would lose about 96 wins in total, or a little over 3 per team, if they replaced their leading scorers with average players. But that isn't true for every team; Toronto should be looking for other options (as in almost any other option). So the leading scorer is not, by himself, critical to team success. In contrast, Berri's previous post found that if you replaced each team's *best* player as measured by Wins Produced with an average player, teams would lose about 223 wins (by my calculation from Berri's table), or about 7.5 per team. That's over twice what you lose by replacing the leading scorer. Again: scoring is not always important, but good scorers can be good players.
In his post Neil Paine deemed a player the leading scorer if he played in over half of the team's games and had the team's highest points per game. I would imagine that creates the same set as Berri's, although it might not 100% of the time. Paine also looked at the past 25 years instead of just this season. He gathered each team's offensive rating (points scored per 100 possessions) in games played by the leading scorer and compared it to the offensive rating in games the leading scorer missed. The ratings were adjusted for what an average team would have done against that opponent and weighted by the number of games the leading scorer missed. Neil found that teams lose 2 points per 100 possessions when their leading scorer doesn't play, although the number was only .9 per 100 this past year. Conclusion: the leading scorer is important.
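The with/without comparison can be sketched roughly as a games-missed-weighted average of the differences in adjusted offensive rating. The function and the sample figures below are my own illustration, not Paine's actual code or data:

```python
# Rough sketch (my own, not Neil Paine's code) of a weighted with/without
# comparison: for each team-season, take the opponent-adjusted offensive
# rating with and without the leading scorer, and weight the difference
# by how many games the scorer missed.

def weighted_ortg_drop(team_seasons):
    """team_seasons: list of (ortg_with, ortg_without, games_missed) tuples."""
    numerator = sum((with_ - without) * missed
                    for with_, without, missed in team_seasons)
    denominator = sum(missed for _, _, missed in team_seasons)
    return numerator / denominator

# Made-up example: one team misses its scorer for 10 games, another for 5.
sample = [(110.0, 107.5, 10), (108.0, 107.0, 5)]
print(weighted_ortg_drop(sample))  # → 2.0 points per 100 possessions
```

Weighting by games missed keeps a team whose scorer sat out one meaningless game from counting as much as a team that played a quarter of its season without him.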
Neil’s analysis is not the same as Berri’s. First, there’s no connection to winning. Berri’s WP analysis includes defense (even though plenty of people tell you it doesn’t!) to allow for a connection to team wins. Offensive rating alone ignores defense, so we don’t know how the teams actually did without their leading scorer in Neil’s analysis. Second, Berri ‘replaced’ the leading scorer with an average WP player, while Neil looked at what happened with whoever actually replaced the leading scorer. While Neil’s analysis looks to be more practically useful, it also has flaws; it could easily be the case that lots of things change when the leading scorer doesn’t play besides just the leading scorer not playing. For example, teams will rest players at the end of the year. That means that not only is the leading scorer not playing, but probably the top two or three scorers. Berri’s analysis explicitly removes only the leading scorer’s influence, whereas we don’t know that to be the case for Neil’s; the change in offensive rating is really more of a highest-possible-impact measure.
In response to that first issue, let’s see what a leading scorer might be worth in terms of wins under Neil’s numbers. The average pace this season was 92.1 possessions per game, so to account for some overtime let’s bump it to 95 (perhaps an overly large bump, but that isn’t too important). At that pace, a team losing 2 points per 100 possessions is losing a little under 2 points per game (1.9, to be exact); this past year’s .9 per 100 works out to about .85 points per game. The general conversion from point differential to wins says that a point of per-game differential is worth about 2.5 wins over a season, so leading scorers this past year were worth a bit over 2 wins to an average team, and over the past 25 years they were worth a little under 5 wins. Those numbers aren’t too different from the 3 we got from Berri.
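The arithmetic in that conversion is easy to check. A small sketch, where the pace and wins-per-point constants are the rules of thumb used above, not anything official:

```python
# Back-of-the-envelope conversion from the paragraph above:
# points per 100 possessions -> points per game (at ~95 possessions)
# -> wins (at roughly 2.5 wins per point of per-game differential).

PACE = 95              # possessions per game, bumped up to allow for overtime
WINS_PER_POINT = 2.5   # common rule-of-thumb conversion for point differential

def wins_from_ortg_drop(drop_per_100):
    """Approximate wins a team loses from a given offensive-rating drop."""
    points_per_game = drop_per_100 * PACE / 100.0
    return points_per_game * WINS_PER_POINT

print(round(wins_from_ortg_drop(2.0), 2))  # 25-year figure: → 4.75 wins
print(round(wins_from_ortg_drop(0.9), 2))  # this past year: → 2.14 wins
```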
So I don’t see that we have a disagreement here. According to Neil, leading scorers are worth 4 or 5 wins per season in terms of their offensive contribution beyond whoever comes off the bench; we would need to look at the effect on defensive rating to see whether that number goes up or down for their total contribution to winning. Dave Berri says the number is about 3 this year when you account for both offense and defense relative to an average player. The analyses don’t strictly agree, but they’re also different approaches using different measures on different data sets. Given that, it’s impressive that they came out so close. But I think Berri’s overarching point still stands: it would be better to account for everything a player does than to just look at his scoring.