NBA MVP Voting: How Playing on the West Coast and Late-Season Surges Affect the Race

Kevin Durant accepts his 2014 MVP award

By Barrett Hansen

The NBA’s Most Valuable Player (MVP) award has had its fair share of controversy in its half-century history. As in baseball, football and hockey, the MVP is currently determined by a panel of sportswriters. The root of the dilemma for these voters is that there are no set criteria for determining the winner. Part of the problem is semantics: should “valuable” be interpreted literally to mean the player who meant the most to his team, or simply the one with the best performance? Another issue is how to treat team success: does the player have to play for the best team, or should a candidate be marked down for having another superstar on his team who made life easier? Finally, there is the question of evaluation: what role should the “eye test” play, and, more recently, should advanced metrics overtake traditional box score statistics for evaluating excellence?

Faced with all these questions, voters eventually have to make a choice that may have some analytical backing but is, in the end, highly subjective. Knowing this, it is highly likely that voters succumb to one of the many cognitive biases when casting their ballots. In this analysis, we examine two possible biases. First, we will look at a possible “East Coast” bias against players on teams based in the Pacific or Mountain time zones: since more people, and therefore more MVP voters, are concentrated on the East Coast, they may watch East Coast players in real time but see only the highlights of those on West Coast teams. The second bias we will consider is a recency bias, the tendency to overweight recent performance, which could leave voters swayed by particularly strong finishes to the season. Given that every game counts the same, we have to hope this does not factor in too much; the MVP should recognize players who have put together an outstanding body of work regardless of whether their final month is as impressive as the rest of their year.

Basketball Reference has a fantastic MVP predictor model, on which I base my own. Justin Kubatko, the author of the site’s MVP predictor, found that four key variables (points per game, assists per game, rebounds per game, and team wins) almost entirely predict the MVP award. Kubatko’s model has correctly predicted two-thirds of all MVPs since 1955. In an age of advanced analytics, a reliance on simple box score statistics like these may seem archaic. However, the objective of the model is not to determine the top player, but to predict how the voters will cast their ballots, and those voters clearly still rely on the more traditional statistics.
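For a sense of what such a predictor might look like in code, here is a rough sketch. This is not Kubatko’s actual specification; the file name, column names, and the logistic form are all assumptions for illustration.

```python
# Rough sketch of a Kubatko-style MVP predictor built on the four box score
# inputs; the data file, column names, and model form are assumptions.
import pandas as pd
import statsmodels.api as sm

candidates = pd.read_csv("mvp_candidates.csv")  # hypothetical: one row per candidate-season

X = sm.add_constant(candidates[["ppg", "rpg", "apg", "team_wins"]])
y = candidates["won_mvp"]  # 1 if the player won the MVP that season

model = sm.Logit(y, X).fit()

# Predicted winner in each season = the candidate with the highest fitted probability.
candidates["p_mvp"] = model.predict(X)
predicted_winners = candidates.loc[candidates.groupby("season")["p_mvp"].idxmax()]
```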

To determine the impact of these two biases on MVP voting, I gathered MVP voting data and statistics split by month. I collected the four statistics used in the Basketball Reference model (points, rebounds, assists, and wins), and also added blocks, steals and turnovers for good measure. Since the latter three stats were first collected in 1986, my dataset spans the years 1986-2014, all of which falls after the MVP vote changed format from a player poll to a sportswriter panel in 1981. I omit the lockout-shortened 1998-1999 and 2011-2012 seasons to remain consistent.
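A sketch of how that dataset might be assembled follows; the file name and column names here are assumptions, not the actual source format.

```python
# Sketch of assembling the 1986-2014 candidate data; file and column names
# are assumptions about how the monthly splits were exported.
import pandas as pd

splits = pd.read_csv("mvp_monthly_splits_1986_2014.csv")

# Drop the lockout-shortened seasons so every season has the same five periods.
LOCKOUT_SEASONS = {1999, 2012}  # seasons identified by their ending year
splits = splits[~splits["season"].isin(LOCKOUT_SEASONS)]
```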

To assess whether voters overweight recent performance, I measure the six individual statistics and one team statistic across five distinct time periods. Because of the structure of the NBA season, these periods roughly, but not exactly, correspond to calendar months (see Table 1, and the sketch that follows it, for reference). In total there are 35 measures of individual and team attributes.

Table 1

Month    October   November   December   January   February   March   April
Period   1         1          2          3         4          5       5
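A minimal sketch of the Table 1 month-to-period mapping, assuming the monthly splits carry a column named "month" (that column name is an assumption):

```python
# Month-to-period mapping from Table 1; the "month" column name is assumed.
MONTH_TO_PERIOD = {
    "October": 1, "November": 1,
    "December": 2,
    "January": 3,
    "February": 4,
    "March": 5, "April": 5,
}

def add_period(splits):
    """Attach the five-period label to a table of monthly splits."""
    splits = splits.copy()
    splits["period"] = splits["month"].map(MONTH_TO_PERIOD)
    return splits
```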

To measure the East Coast bias, I created an indicator for playing on a western team. I define “Western” as being located in either the Pacific or Mountain Time Zone; this includes Portland, Sacramento, Golden State, both Los Angeles teams, Phoenix, Denver, Utah, and Seattle (before the team moved to Oklahoma City). This additional attribute brings the total covariate count to 36.
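A minimal sketch of that indicator, using Basketball Reference-style franchise codes (the abbreviations are an assumption):

```python
# "Western" teams per the definition above: Pacific or Mountain time zone.
WESTERN_TEAMS = {"POR", "SAC", "GSW", "LAL", "LAC", "PHO", "DEN", "UTA", "SEA"}

def is_western(team: str) -> int:
    """Return 1 if the team plays in the Pacific or Mountain time zone.

    Seattle (SEA) counts; Oklahoma City (OKC) does not after the relocation.
    """
    return int(team in WESTERN_TEAMS)
```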

With a dataset of 340 MVP nominees and winners, I ran an ordinary least squares (OLS) regression with rank in the MVP voting as the dependent variable. I did not use a binary indicator for winning the award because I wanted to see whether these biases affect the full ranking, not just the eventual winner. I chose rank over voting share because the share is highly skewed toward the winner, which creates highly nonlinear relationships between the predictors and the outcome. Keep in mind that a rank of one is the best, so negative regression coefficients imply a positive relationship between that predictor and receiving votes.
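A minimal sketch of this regression, assuming the 36 covariates live in a DataFrame with columns named PPG1 through WP5 plus West, and a rank column holding the final voting position (the file name and column naming scheme are assumptions that mirror the tables):

```python
# Sketch of the rank regression; the data file and column names are assumed.
import pandas as pd
import statsmodels.api as sm

mvp = pd.read_csv("mvp_candidates_by_period.csv")  # hypothetical assembled dataset

# 7 stats x 5 periods = 35 columns, plus the West indicator = 36 covariates.
stats = ("PPG", "RPG", "APG", "SPG", "BPG", "TPG", "WP")
covariates = [f"{s}{p}" for s in stats for p in range(1, 6)] + ["West"]

X = sm.add_constant(mvp[covariates])
ols = sm.OLS(mvp["rank"], X).fit()

print(ols.summary())  # negative coefficients mean a better (lower) final rank
```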

The key findings from the OLS model can be found in Table 2. For simplicity, only terms with a p-value of less than 0.15 were included.

Table 2

Covariate   Coefficient   Standard Error   95% Confidence Interval   p
PPG2        -0.105        0.069            (-0.231, 0.021)           0.10
PPG5        -0.117        0.057            (-0.228, -0.006)          0.04
APG3        -0.463        0.173            (-0.803, -0.124)          0.008
TPG5         0.432        0.269            (-0.096, 0.962)           0.11
WP1         -3.233        1.206            (-5.608, -0.859)          0.008
WP2         -3.661        1.096            (-5.817, -1.505)          0.001
WP3         -2.559        1.120            (-4.763, -0.354)          0.02
WP4         -4.032        1.147            (-6.290, -1.776)          0.001
WP5         -2.331        1.048            (-4.393, -0.270)          0.03
West         0.742        0.333            (0.087, 1.396)            0.03

In the table above, each covariate combines a statistic with the period in which it was measured: PPG is points per game, APG assists per game, TPG turnovers per game, and WP winning percentage, so PPG1 is points per game in period one. West is the indicator for playing on a western team. As the table shows, winning percentage is the most important factor: each period’s coefficient is significant at the 5% level. Rebounds, steals and blocks have no period with a p-value below 0.15 (the lowest is RPG2 at 0.2). Assists are most significant in period 3. Turnovers and points have their lowest p-values in period 5: more turnovers in the final period push a player down the rankings, at a significance level just outside the 10% mark, while more points push a player up the rankings at better than the 5% level. Both results indicate that strong play in the final period improves a player’s MVP ranking, which is consistent with our hypothesis that voters overweight recent performance. Finally, the indicator for playing on the west coast is positive and statistically significant at the 5% level, which supports our other hypothesis of an East Coast bias.

These results indicate fairly strongly that there is a bias against west coast players in MVP voting: all else being equal, players on teams located in the Pacific and Mountain Time Zones finish 0.74 spots worse in the final rankings. For the reasons outlined in the introduction, this result could well be driven by voters finding it harder to recall moments of excellence from players they see on television less often. An alternative explanation is a home-team bias, in which voters simply favor players they like better. Following the same logic that more voters likely live on the east coast, they may prefer east coast teams and therefore carry a subconscious bias toward players on those teams. Further research that incorporates voter location and team preferences could help separate these explanations.

Performance toward the end of the season also appears to have a strong effect: points and turnovers per game in the final period are closely associated with a player’s position in the MVP race. Points is the most widely quoted individual statistic, so the significance of PPG in period 5 fits this interpretation well. Winning percentage, by contrast, has its highest p-value in the final period, which suggests that strong finishes are rewarded only at the individual level, not the team level.

This study presents a strong case for both an East Coast and recency bias in NBA MVP voting. It should motivate west coast teams to distribute highlights of late night games to MVP voters so as to give their players a better chance in the final voting, and should encourage voters to look beyond the most recent point and turnover totals when casting their ballots. Check back in after the weekend for HSAC’s predictions for this year’s MVP race.
