Embrace the Chaos: the NFL Playoffs and Variance

By Andrew Mooney

For the interested fan or gambler, the NFL playoffs can be infinitely maddening to comprehend. Those Cincinnati Bengals who had won seven of their last eight? Well, they’re out, held to just -6 passing yards in the first half by the team that allowed 68 points to the Jaguars and Lions in consecutive games. And let’s not forget the 2008 Super Bowl, in which arguably the best team in history lost to the worst team ever to win a championship. If you’re skeptical of the influence game-to-game variance holds in the NFL, the playoffs have a way of making you a believer.

This post from Football Perspective lends further credence to the idea that playoff results have been getting even more random in recent years, edging closer to 50-50. We’ve long known parity is a very real phenomenon in the NFL, but is there a way to cut through all the randomness and noise to determine what makes teams win in the playoffs?

To start with, I considered upsets. In college basketball, an underdog stands the best chance of shocking a more highly rated team in the NCAA tournament if it employs a high-variance style of play: pressing, shooting lots of three-pointers, and so on. A strategy like this is termed “high variance” because it can lead to extreme outcomes; a hot shooting night and a sloppy opponent can lay the groundwork for a Cinderella story, but if the threes aren’t falling and the favorite takes care of the ball, the underdog is in for a blowout.

I adapted this theory to the NFL, hypothesizing that a team with greater variation in its regular season play stood a better chance of getting hot enough to pull off a few upsets. For every playoff team of the last ten years, I took the variance of its point differentials (margins of victory or defeat) across its 16 regular season games. Next, I ran a few regressions to determine whether variance was a significant predictor of playoff wins or upset wins, which I defined as any victory by a team seeded two or more places lower than its opponent. In each test, I controlled for each team’s point differential per game from its regular season, as a proxy for team quality.
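To make the procedure concrete, here’s a minimal sketch of how a test like this might look in Python. The data structures are hypothetical: a DataFrame `games` with one row per team per regular-season game (columns `team`, `season`, and `point_diff`) and a DataFrame `playoffs` with one row per playoff team-season, with `playoff_wins` and `upset_wins` already tallied. None of these names come from the original analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Per team-season: mean and variance of the 16 regular-season margins
season_stats = (
    games.groupby(["team", "season"])["point_diff"]
         .agg(mean_diff="mean", var_diff="var")
         .reset_index()
)

# Attach each playoff team's regular-season profile to its playoff record
df = playoffs.merge(season_stats, on=["team", "season"])

# Does variance predict playoff wins, controlling for team quality
# (mean point differential per game)?
wins_model = smf.ols("playoff_wins ~ var_diff + mean_diff", data=df).fit()
print(wins_model.summary())

# Same test with upset wins as the outcome
upsets_model = smf.ols("upset_wins ~ var_diff + mean_diff", data=df).fit()
print(upsets_model.summary())
```

If variance mattered the way the basketball analogy suggests, the coefficient on `var_diff` would come out positive and significant after holding `mean_diff` fixed.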

My results indicated no such relationship. The variance of a team’s week-to-week performance in the regular season had no measurable effect on the number of playoff wins or upsets it pulled off, even when I split the data into thirds and quarters ordered by mean point differential. In other words, even for the worst teams to make the playoffs, regular season variance played no role in identifying a potential Cinderella.
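The subgroup check itself is simple to express; here’s a sketch of the tercile version, reusing the hypothetical `df` and imports from the snippet above:

```python
# Split playoff teams into terciles of regular-season quality and
# re-run the variance regression within each group (quartiles work
# the same way with q=4).
df["quality_group"] = pd.qcut(df["mean_diff"], q=3,
                              labels=["bottom", "middle", "top"])

for group_name, group in df.groupby("quality_group", observed=True):
    fit = smf.ols("playoff_wins ~ var_diff", data=group).fit()
    print(group_name, fit.params["var_diff"], fit.pvalues["var_diff"])
```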

The difference, as I see it, is that variance in college basketball is largely a product of a team’s style of play, which isn’t necessarily true in football. Few NFL offenses and defenses stray far from the mainstream in concepts and schemes, something that can’t be said for college basketball. In the NFL, game-to-game variance might instead be the result of factors that aren’t necessarily predictive: turnovers, luck, and the like.

I did find one variable that was slightly predictive of playoff success, however: team quality, as measured by mean point differential. Teams in the top third of mean point differential averaged about half a playoff win more than teams in the bottom third. This is about as hammer-to-the-head obvious as it comes; better teams do better in the playoffs. That’s the NFL playoffs for you: the only things we can know for sure are the ones you could ask of a six-year-old.
