Thanks to Borders going out of business, I was able to pick up Scorecasting about a week ago and started reading it tonight. I’m sure that it will generate a few ideas for posts (I already have one in the back of my mind), but first I have a question.
In one of the early chapters the authors talk about the effect of taking an NBA player out when he has five fouls; they argue that players should be left in longer, since the risk of fouling out is outweighed by the benefit of having the better player in the game. They put that benefit at about half a point per game. Now, we know that half a point per game should be worth about a win from the standard point-differential-to-wins equation. But later in the same paragraph, the authors say they estimated that leaving the player in raises the chances of winning by 12%, which is worth a couple of games over the course of a season. How can these three numbers all be true?
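For what it’s worth, the conversion I have in mind is the usual linear rule of thumb; the exact coefficient varies by estimate, but it typically lands somewhere around 2.5 to 3 wins per point of season-long differential:

$$\text{expected wins} \approx 41 + c \cdot \Delta, \qquad c \approx 2.7$$

where $\Delta$ is the team’s average scoring margin in points per game, so $\Delta = 0.5$ implies roughly 1.4 extra wins.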
The half point per game is, at its most generous, maybe 1.5 extra wins. Raising your chances of winning by 12% is huge: assuming an otherwise average team, you would move from 41 wins to nearly 51, or about 10 extra wins. Neither of these numbers is ‘a couple extra wins’. Perhaps there just aren’t that many games where a starter picks up five fouls, in which case the 12% would only apply occasionally, dampening the impact? Even so, the ‘couple’ doesn’t seem to line up with the half point per game. Maybe in the end it’s a semantic issue, driven by the authors summarizing research without spelling out where the numbers come from? If anyone has any insight, I’d appreciate it.
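In the meantime, here’s the back-of-envelope arithmetic as a quick Python sketch. The 2.7 wins-per-point figure is the rule of thumb above, not a number from the book, and the 20% five-foul frequency is a placeholder I made up purely to show how the dilution would work:

```python
# Back-of-envelope check on the three numbers from the book.

GAMES = 82
WINS_PER_POINT = 2.7  # rule-of-thumb conversion, not a figure from Scorecasting

# Claim 1: leaving the player in is worth +0.5 points per game.
wins_from_differential = 0.5 * WINS_PER_POINT
print(f"+0.5 ppg -> about {wins_from_differential:.1f} extra wins")

# Claim 2: +12% chance of winning, if it applied to every game.
wins_if_every_game = 0.12 * GAMES
print(f"+12% in all games -> about {wins_if_every_game:.1f} extra wins")

# Claim 3: +12%, but only in games where a starter actually picks up
# five fouls. The 20% frequency is a made-up placeholder, purely to
# illustrate how dilution shrinks the effect.
five_foul_rate = 0.20
wins_diluted = 0.12 * GAMES * five_foul_rate
print(f"+12% in {five_foul_rate:.0%} of games -> about {wins_diluted:.1f} extra wins")
```

Under that invented 20% assumption, the diluted 12% comes out right at two wins, i.e. ‘a couple’, while the half point per game implies closer to 1.4, so even then the numbers only roughly agree.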