Phil has a post questioning how bad NFL expert picks are. Namely, Freakonomics claimed that three prominent sports sources were only 36% accurate at picking division winners before the season started. Since there are four teams in each division, dumb luck says that you should be correct 25% of the time. If you assume that one team in each division is clearly the worst, you bump up to one in three (33%), and so the ‘experts’ are barely above chance. Let’s do some pros and cons.
Pro: year-to-year win totals for NFL teams are basically at chance. That is, you would do about as well as anyone else if you predicted every team to finish 8-8. If that’s true, then it wouldn’t be very wise to dismiss any team as a division winner, and ‘true’ chance is 25%. So the experts are at least above guessing at random.
Con: over the last three years, the average division winner has won 11 games; the second-place team has won 9.5, the third 6.9, and the fourth 4.4. The last-place team in a division is thus typically much worse than even the second-place team. Ignoring those teams is probably a decent decision, and so the experts are back to chance.
Pro: like I said, NFL teams regress quite a bit, such that they’re all predicted to be basically average the next season. If you use the regression equation from my post in the first ‘pro’ section, the four division slots project to win 8.8, 8.4, 7.7, and 7.1 games. Not so different now, huh? Also, in each of the past five seasons at least one last-place team has won its division the following year (from 2005 to 2010, the Saints have actually finished 4th, 1st, 3rd, 4th, 1st, and 2nd; good luck picking the NFC South). So you probably should give every team at least some consideration, and so experts are somewhat above chance.
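I won’t reproduce the full regression equation here, but those projections are consistent with a simple shrink-toward-the-mean form: shrink last season’s win total about a quarter of the way back from the league average of 8. A sketch (the 0.26 shrinkage factor is back-solved from the four numbers above, not taken from the original post):

```python
# Shrinkage toward the league mean of 8 wins in a 16-game season.
# The 0.26 factor is back-solved from the projections quoted above
# (8.8, 8.4, 7.7, 7.1), not taken from the original regression post.
LEAGUE_MEAN = 8.0
SHRINK = 0.26

def project_wins(last_year_wins: float) -> float:
    """Regress last season's win total toward the league average."""
    return LEAGUE_MEAN + SHRINK * (last_year_wins - LEAGUE_MEAN)

# Average wins by division finish over the last three seasons (from above)
for wins in (11.0, 9.5, 6.9, 4.4):
    print(f"{wins:4.1f} wins last year -> {project_wins(wins):.1f} projected")
```

Run it and the four division slots come back as 8.8, 8.4, 7.7, and 7.1 — exactly the spread quoted above.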
Con: out of the 40 division winners in the past five seasons, 20 also won their division the season before. So 50% might be a better ‘random’ level (just pick all the same division winners), and experts are actually below it. Bad form, experts.
Phil notes that evaluating the 36% number depends on how much of a role you think luck played. If there’s a lot of luck and underdogs come through often, then you would actually predict experts to do poorly since all they can do is pick talent. I’m not sure if there’s a best way to decide who the underdogs are, but I think that last con is fairly damning. Experts presumably look at those kinds of things and should know that division winners repeat half the time.
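As a rough sanity check on that last con: the Freakonomics piece doesn’t say (at least not here) how many picks sit behind the 36% figure, so suppose, purely hypothetically, it were one season of 8 divisions times 3 sources. You can then ask how surprising 36% accuracy would be if the experts were really operating at the 50% just-pick-the-repeaters baseline:

```python
from math import comb

def p_at_most(k: int, n: int, p: float = 0.5) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical sample: 8 divisions x 3 sources = 24 picks in one season.
# (The actual sample size behind the 36% figure isn't given here.)
n = 24
k = round(0.36 * n)  # ~9 correct picks
print(f"P(at most {k} of {n} correct at a true 50% rate): {p_at_most(k, n):.3f}")
```

With a sample that small, 36% is low but not damningly so; the con gets sharper the more seasons of picks the 36% actually covers.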
The Freakonomics guy claims that the poor performance is due to risk aversion: experts pick most, but not all, playoff teams to repeat even though the actual results are a 50/50 split. I’m guessing that when the writer said risk aversion he didn’t have the usual economics meaning in mind; he probably meant that the experts are ‘playing it safe’ by taking more playoff repeaters than you actually see. Instead, the writer fell into his own behavioral economics flaw, which is probability matching. Let’s say you knew that half of the playoff teams would repeat. Should you try to pick half of them and six non-playoff teams for next year, as the writer suggests? No! It’s like calling flips of a coin that comes up heads 70% of the time. If you guess heads on 70% of your calls, your guesses are uncorrelated with the actual flips, so you’re right only 0.7 × 0.7 + 0.3 × 0.3 = 58% of the time; guessing heads every time gets you 70%. Likewise, since you have no real insight into which playoff teams will repeat, picking only half of them (plus six non-playoff long shots) is strictly worse than just picking all of last year’s playoff teams, which nets you the 50% of repeaters automatically.
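A quick simulation makes the probability-matching penalty concrete (the 70/30 coin is an arbitrary illustration of the general point; nothing here comes from the Freakonomics piece):

```python
import random

random.seed(0)

# Probability matching vs. always guessing the majority on a biased coin.
# 70/30 is an arbitrary illustrative bias.
P_HEADS = 0.7
N_FLIPS = 100_000

flips = [random.random() < P_HEADS for _ in range(N_FLIPS)]

# Matcher: guesses heads 70% of the time, uncorrelated with the flips.
match_correct = sum(f == (random.random() < P_HEADS) for f in flips)

# Maximizer: guesses heads every single time.
always_correct = sum(flips)

print(f"probability matching: {match_correct / N_FLIPS:.3f}")  # ~0.58
print(f"always heads:         {always_correct / N_FLIPS:.3f}")  # ~0.70
```

Matching the frequencies lands near 0.7² + 0.3² = 58%, while always taking the more likely side lands near 70% — the same gap as mixing non-playoff picks in versus taking every playoff team to repeat.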
As a side note, Phil isn’t quite right in his digression into the point spread. The spread set by a casino (or whoever else) is a combination of how the bookmaker thinks the game will turn out and how money is bet on either side. Part of the casino’s goal is to keep the money roughly even: since the vig is included in every bet, the casino can use the losers’ money to pay the winners and still keep a cut of all the action. This is why the spread moves in the week before a game. Part of that might be due to injuries and the like, but just as much of it comes in response to the money coming in. If the Lions were three-point favorites and everyone thought they would cover, the casino would move the line up to 4 to draw more money onto their opponent. Similarly, casinos take moneyline bets if you want to simply pick the winner. So Phil’s account of reasonable people disagreeing about the spread also applies to picking winners: if you think the Bills have only a 20% chance to win and I think they have a 25% chance, we could make different bets depending on what the moneyline is. That’s being a little picky; we would both obviously expect the Patriots to win, but Phil isn’t quite right that we would have to bet the same way.
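To see how two people with different probability estimates can land on opposite sides of the same winner-pick: a moneyline bet has positive expected value only if your estimated win probability beats the break-even probability implied by the odds. A sketch using American-style odds (the +375 line on the Bills is a made-up number for illustration):

```python
def implied_prob(moneyline: int) -> float:
    """Break-even win probability implied by American moneyline odds."""
    if moneyline > 0:   # underdog: +375 pays $375 profit on a $100 stake
        return 100 / (moneyline + 100)
    else:               # favorite: -300 requires a $300 stake to win $100
        return -moneyline / (-moneyline + 100)

# Hypothetical moneyline on the underdog Bills: +375 -> break-even ~21%.
breakeven = implied_prob(375)

# One bettor puts the Bills at 20%, the other at 25%.
for estimate in (0.20, 0.25):
    decision = "bet the Bills" if estimate > breakeven else "pass"
    print(f"estimate {estimate:.0%} vs break-even {breakeven:.1%}: {decision}")
```

At +375 the break-even is about 21%, so the 20% bettor passes while the 25% bettor takes the underdog — both still expect the Patriots to win, but they bet differently.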
Finally, Phil disagrees that people are bad at predicting the future in sports. To be fair to the Freakonomics guy, he was looking at prominent media experts, and given the discussion above, it seems reasonable to say they are fairly dismal. Phil’s point about humans setting the Vegas lines is solid except for my caveat above: the Vegas spreads (and other bets) respond to how the money comes in, so the line is a moving target that absorbs the information in the betting population. To be a consistent winner, then, you really have to beat the wisdom of the crowd. That’s hard: most of the popular ‘systems’ out there (like betting on your favorite team) carry no real information, and even bettors with real information still have to beat both the crowd and all the noise in a single football game. But there are professional sports gamblers, so it is possible to consistently beat the spread.
Overall, I think it’s true that ‘experts’ are pretty crummy at making predictions. Every year TMQ has an article pointing out the incorrect predictions prominent people make, and there are plenty of them. Following Phil’s discussion, maybe we should redefine who the experts are. The guys in Vegas are the experts, as are the guys who manage to beat them over the course of multiple years. But the people they trot out to put on TV aren’t as expert as they would like you to think.