By Julian Ryan
The 2013 NBA Finals was one of the most exciting series in recent memory and came within one shot of going the other way. It saw the San Antonio Spurs, boasting just two top-15 picks in Tim Duncan and Kawhi Leonard on the roster (and I suppose T-Mac), matched up against the juggernaut Miami Heat, who had six players on their roster who had been drafted in the top six.
It is perhaps one of the great achievements in North American sports that the small-market Spurs have won four championships and reached five Finals since Duncan was drafted in 1997, all without missing the playoffs once in that span and while winning seventy percent of their games. They have long been overshadowed by the rival Lakers in seducing free agents, and their highest draft pick since Duncan has been James Anderson, taken 20th overall in 2010. Maybe Popovich and Duncan are just that good, or perhaps the Spurs simply use the one resource they do have access to more efficiently: the draft.
It has been shown that NFL general managers are ultimately unable to systematically beat the draft. Teams frequently choose players who outperform their pick number and vice versa, but over time no team has consistently had more successes than failures. If this result also held for the NBA, however, it would be almost impossible for the Spurs to have maintained their level of success for such a long period of time, which makes me doubt that it does.
To test my theory, I followed a methodology similar to the one our own Kevin Meers used for the NFL, which has since been reproduced for the NBA. Going back to the 1995 draft, I took career win shares for each draft pick from Pro Basketball Reference as an estimate of each player's career value. From there, I wanted to evaluate how well each pick performed relative to its draft class, so I calculated a z-score for each player using the mean and standard deviation of career win shares in that year's draft. I then averaged these values for each pick slot across the eighteen drafts in my dataset to produce an expected value for each draft position. In essence, I now had a measure of how much each draft pick is expected to under- or over-perform the average member of its draft class.
If we know how each pick is expected to perform relative to its draft class, we have a crude measure of the success or failure of any given selection. The 17th overall pick, for example, has an expected value 0.267 standard deviations of career win shares above the average of his class, so Roy Hibbert, whose win shares z-score is 1.150, outperformed his pick by 0.883 standard deviations. This "value over expected z-score" ultimately measures the quality of a draft pick by a team. I averaged this metric over all players drafted since 1995 for each team, then converted that average to a z-score by dividing by its standard error, to see whether any teams are consistently beating the draft.
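To make the pipeline concrete, here is a minimal sketch in Python of how these steps could be computed. The file and column names (draft_year, pick, team, career_ws) are placeholders I have assumed, and the code approximates the approach described above rather than reproducing the original analysis.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per drafted player, 1995-2012, with columns
# draft_year, pick, team, and career_ws (career win shares).
picks = pd.read_csv("draft_picks_1995_2012.csv")

# Step 1: standardize each player's career win shares within his draft class.
by_class = picks.groupby("draft_year")["career_ws"]
picks["ws_z"] = (picks["career_ws"] - by_class.transform("mean")) / by_class.transform("std")

# Step 2: expected z-score for each pick slot, averaged over the eighteen drafts.
expected_by_pick = picks.groupby("pick")["ws_z"].mean()

# Step 3: each player's value over expectation, then a per-team z-score
# (mean value over expectation divided by its standard error).
picks["value_over_expected"] = picks["ws_z"] - picks["pick"].map(expected_by_pick)

def team_z(values):
    return values.mean() / (values.std(ddof=1) / np.sqrt(len(values)))

team_scores = picks.groupby("team")["value_over_expected"].apply(team_z)
print(team_scores.sort_values(ascending=False))
```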
The graph shows that while most teams hover around a z-score of zero, meaning their average pick performs at his expected value, a few teams do indeed buck this trend. The three red teams, the Suns, the Clippers, and the Spurs, are able to reject the null hypothesis that their picks have an expected value of zero at the 5 percent significance level, while Portland is able to reject the null hypothesis at the 10 percent level.
This would imply that over the past 18 years, only the Spurs and Suns have been able to consistently draft successfully at a statistically significant level, which offers a partial explanation for those franchises' success over the past decade. Similarly, the Clippers' appalling draft record explains to some degree why the franchise toiled in obscurity for years despite having the LA name to attract free agents.
Let us take a closer look at the supposedly good teams. The Spurs have had 34 picks in my dataset, while the Suns have had 30. For both teams, the average draft pick has outperformed his position by 0.325 standard deviations of career win shares. The standard deviation of career win shares for players from the late '90s, most of whom are no longer adding wins, is 34.9, so very approximately the Spurs and Suns get just over eleven extra career wins per pick compared to what they should be getting (assuming they keep the player or trade him for equal value).
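Spelling out the arithmetic behind that figure: 0.325 standard deviations per pick multiplied by 34.9 win shares per standard deviation works out to roughly 11.3 extra career win shares per pick.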
However, in spite of this, very few of either team's picks actually pan out. Just 13 of the Suns' picks and 11 of the Spurs' have outperformed expectation. How can the Spurs be mining value from the draft if less than a third of their picks are "successful"?
The key to both teams' draft strategy is variance. Across all thirty teams, there is a strong positive correlation of 0.67 between the variance of a team's draft-pick outcomes and its average value over expectation. Teams that, by accident or design, have a wide spread of successes and failures do better on average.
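Continuing the earlier sketch (same hypothetical picks table and column names), that correlation could be computed along these lines:

```python
# Per-team average and variance of the value-over-expectation scores,
# using the picks table from the sketch above.
per_team = picks.groupby("team")["value_over_expected"].agg(["mean", "var"])

# Correlation between spread and average quality; the article reports about 0.67.
print(per_team["var"].corr(per_team["mean"]))
```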
The Suns have the second-highest variance (behind the Hornets, who have been the third most successful team) and can largely attribute their success to excellent lottery picks in the late '90s and early 2000s. Steve Nash, Shawn Marion, Amar’e Stoudemire, and Luol Deng stand out as high picks who considerably over-performed their position amid a sea of average picks. The Suns have cooled in recent years with several disappointing drafts (which perhaps explains the current state of their roster), so perhaps their superb run was just a fluke. However, if we compare the sampling distribution of the variance of their picks to a chi-squared distribution with the requisite degrees of freedom, the Suns' picks return a p-value of 0.013. This suggests only a 1.3% chance of such variance occurring naturally in a sample of that size, given the variance of the population of draft picks.
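One plausible form of that test is the standard chi-squared test of a sample variance against a known population variance. A sketch, assuming the value-over-expectation scores are roughly normal and that the test is upper-tailed:

```python
import numpy as np
from scipy import stats

def variance_p_value(values, population_variance):
    """Upper-tail p-value for the null that `values` come from a population
    with the given variance, via the chi-squared distribution of the
    scaled sample variance with n - 1 degrees of freedom."""
    n = len(values)
    statistic = (n - 1) * np.var(values, ddof=1) / population_variance
    return stats.chi2.sf(statistic, df=n - 1)

# Hypothetical usage: suns_scores holds the Suns' value-over-expectation
# numbers and all_scores holds every pick's; the article reports p = 0.013.
# p = variance_p_value(suns_scores, np.var(all_scores, ddof=1))
```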
A similar test on the Spurs' picks, though, gives a p-value of 0.53 and does not suggest an overarching high-variance strategy. A closer look at the data, however, reveals an even more refined picking strategy. As mentioned at the top, the Spurs have had essentially only late first-round and second-round picks to play with. Their picks have the seventh-highest variance overall, but that is hardly statistically significant. The variance of their second-round picks, though, is the highest by a large distance. In the first round, where the general variance of performance is much higher, they have been reasonably conservative: their late first-round picks have been fairly average in terms of expectation (with the notable exception of Tony Parker) and low in terms of variance. As a result, the Spurs have the highest "worst draft pick" of any team, because they rarely chance their arm in the first round, where the vast majority of true busts are found.
In the second round, though, the Spurs seem to go all out. Of their 21 second-round draft picks, only four have outperformed expectation. Those four, however, are DeJuan Blair, Goran Dragic, Luis Scola, and of course Manu Ginobili, who more than make up for the 17 other "busts". The p-value for the variance of their second-round picks, compared against the variance of second-round picks alone (which is lower on average than that of the whole draft), is a very low 0.0085. By rolling the dice on very high-variance players in the second round, the Spurs sacrifice the solid roster filler that is typically available there in favor of players with upside, hoping for the best. They couple this with very stable first-round picking, so that every year they get at least the typical rotation player you can expect at the bottom of the first round and then shoot for the moon in the second round.
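The same variance test can be run on the second-round picks alone, with the variance of all second-round picks as the null. A rough sketch continuing the earlier ones; the cutoff at pick 30 and the "SAS" team code are my assumptions, not details from the article:

```python
from scipy import stats

# Treat picks after 30 as second-rounders (a simplification: the first round
# had 29 picks in some of these drafts).
second = picks[picks["pick"] > 30]
spurs_second = second.loc[second["team"] == "SAS", "value_over_expected"]

# Chi-squared variance test against the second-round population variance.
n = len(spurs_second)
stat = (n - 1) * spurs_second.var(ddof=1) / second["value_over_expected"].var(ddof=1)
print(stats.chi2.sf(stat, df=n - 1))  # the article reports p = 0.0085
```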
Neither the Suns nor the Spurs have more successes than failures, and in this sense neither team "beats" the draft. However, both teams have systematically captured additional value by pursuing different high-variance strategies. The Spurs' second-round gambles appear to have little to lose and a lot to gain, and even though those gains are rare, over the past few years the approach seems to have been working out for them.
Nice idea!
Two thoughts:
First, you’d expect to see at least 2 or 3 teams with significant results (you are testing 30 of them, after all).
Second, I wonder if it's possible to standardize by the total win shares produced by each draft. The 18th pick in a great draft is much more likely to yield a valuable player than the 18th pick in a poor one, regardless of front-office ability.
Isn't the "p-value approach" prey to the multiple-hypotheses problem? With 30 teams in the league, we should expect 1.5 of them to pass the 5% significance level by chance.
Also, are you counting Luol Deng for the Suns?
If you have a random sample of 30 hypothesis tests, how many would you expect to reject the null hypothesis at the 10% and 5% levels due to pure chance?
If you sampled the drafts randomly (random teams and random drafting positions) and did so many times, you would develop an expected standard deviation of team successes. How does the observed distribution of team successes compare with a true random sampling effect?
What is the annual turnover/attrition rate for NBA players? What percent of all draft picks end up on NBA rosters? What percent of Division I NCAA basketball players get signed by an NBA team?