As I mentioned last time, I enjoy the work that Brian Burke has done over at Advanced NFL Stats. One of the cool things he’s done is take play-by-play data and use it to estimate a team’s probability of winning given how many points the team with the ball is ahead or behind, the time left in the game, the down and distance to go, and where the team is on the field (you can use his calculator here). Using this, you can calculate whether a play helped or hurt you. Say, for example, that it’s first and 10, your team has the ball on the opponent’s 30, you’re down four, and there are 2 minutes left in the game. Your team has a win probability of 45% right now (update: I’m not sure this is right; I had a typo, but right now the calculator page isn’t working. You should still get the idea). Say the coach dials up a run and you pick up 7 yards, making it 2nd and 3 at the 23 with 1:50 left after a time-out gets called. Now the win probability is 54%, so you just gained 9%; Brian would call that a WPA (win probability added) of .09. If instead the running back doodles around in the backfield and loses a yard, making it 2nd and 11 from the 31, the win probability drops to 38%, so the WPA is -.07; that was bad.
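The arithmetic here is just a difference of win probabilities, which is easy to sketch (a toy example; the probabilities are the ones from the scenario above, not real calculator output):

```python
def wpa(wp_before, wp_after):
    """Win Probability Added: the change in win probability from one play."""
    return wp_after - wp_before

# 7-yard run: win probability goes from 45% to 54%
print(round(wpa(0.45, 0.54), 2))   # 0.09

# Lost yard: win probability falls from 45% to 38%
print(round(wpa(0.45, 0.38), 2))   # -0.07
```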
The way WPA is framed, it sounds like a good way to compare the quality of plays. And indeed that’s what Brian did in a recent post looking at Super Bowl plays. For example, the largest WPA he lists in terms of absolute value (positive or negative) is Norwood’s missed field goal in Super Bowl 25, taking the Bills from a probable favorite (67%) to definite losers (0%). Most of the big plays have WPAs of plus or minus 20-some percent. But are all WPAs created equal? As I said about probabilities last post, probabilities are not linear: moving from .45 to .55 is not the same as moving from .85 to .95. Instead, the appropriate way to compare changes in probabilities is to use the odds ratio. As I said last time, the odds is p/(1-p), and the odds ratio is just what it sounds like: the ratio of two odds. So in this case, p = .45 corresponds to odds = .8181…, p = .55 corresponds to odds = 1.2222, and the odds ratio is 1.2222/.8181 = 1.494. So that change of 10% in probability means that the odds of winning (not the probability of winning) are now about 1.5 times higher than they were. But the odds ratio for moving from .85 to .95 is 3.35. So the odds of winning have increased more in the second case even though the change in probability is the same, because it’s harder to increase probability as you get closer to 1. What WPA would you need, starting at .45, to get an odds ratio of 3.35? You solve 3.35 = [p/(1-p)]/[.45/(1-.45)], which a little algebra will tell you gives p = .7329. In other words, you need a WPA of almost .3 in the middle of the range to match the impact of a WPA of .1 when you’re that close to winning. It would be the same at the other end, moving from .05 to .15, because odds are symmetric.
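If you want to check the arithmetic yourself, here’s a small sketch of the odds-ratio calculations above (the function names are mine, not Brian’s):

```python
def odds(p):
    """Convert a probability to odds: p/(1-p)."""
    return p / (1 - p)

def odds_ratio(p_before, p_after):
    """Ratio of the odds after to the odds before."""
    return odds(p_after) / odds(p_before)

def p_needed(p_before, target_ratio):
    """Solve target_ratio = odds(p)/odds(p_before) for p."""
    o = target_ratio * odds(p_before)
    return o / (1 + o)

print(round(odds_ratio(0.45, 0.55), 3))                   # 1.494
print(round(odds_ratio(0.85, 0.95), 3))                   # 3.353
print(round(p_needed(0.45, odds_ratio(0.85, 0.95)), 4))   # 0.7329
```

The last line recovers the .7329 in the text: starting from a 45% win probability, that’s the probability you’d need to reach to multiply your odds by the same 3.35 factor as the move from .85 to .95.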
Odds isn’t really an intuitive measure, whereas most people at least feel like they have a sense of what 66% or 17% means, so it’s tempting to dismiss odds. But there might be practical situations where the distinction matters. Say you’re the general manager of an NBA team with your eye on a particular free agent. Some analysis you’ve done says that if you sign this guy, it’ll improve your team’s point differential (the points you score minus the points your opponent scores) by 2. That’s nothing to sneeze at; the playoff teams in the Western Conference were separated by only 2.6 last year, and point differential relates strongly to winning. On the other hand, everyone else knows it too, and so signing the free agent will tie up your cap space for the next 5 years. Should you make the move? The answer, at least in part, depends on how good your team already is, which we’ll see when I cover winning in the playoffs in the next post.