By Harrison Chase and Nathaniel Ver Steeg
In our last post, we looked at player progression in the NBA. We considered many variables, but two that we did not talk about are coaches and teams. It is reasonable to conclude that some coaches and organizations are better at developing players than others – or at least we should hope so, given that player development is often brought up as a coach’s strength or weakness. As we finished the model, we began to wonder which coaches and teams had been best at developing players over the past thirty or so years. The purpose of this post is to present a ranking of the best coaches and teams at developing players.
Because we only want to look at player development, we restrict the sample to players who played for the same coach (and team) in both years of each consecutive-year pair. If we did not, we would have to determine which coach (from the first or second year of the sample) got credit for the improvement. It would also bring into play the confounding factor that some coaches may be better at bringing out the best in their players – not necessarily speeding up their progression, just using them in the right ways. To sidestep that question, it was simpler to look only at instances where the same player played for the same coach and team in consecutive years.
The most obvious way to compare teams and coaches would be to include them in our model as factors. However, that didn’t end up improving the AIC of the model (with 30 teams and 100+ coaches, we would end up overfitting). So that raises the question: can coaches even affect player development? Coaches clearly can affect player BPM in other ways – for example, it seems like every player who joins the Spurs jumps from benchwarmer to valuable contributor. But, as mentioned before, that may be more a matter of the Spurs using a player in ways that best suit him, rather than speeding up his progression. The inclusion of every team in the model wasn’t significant, suggesting there’s no significant difference between all teams/coaches. Adding in just one term – for example, a dummy variable for being coached by Gregg Popovich – was significant at the 0.05 level. But that is slightly misleading, as we are picking a coach who happens to be one of the best of all time. With over a hundred coaches to choose from, there is a good probability that by random chance alone one (or more) of them would show up as significant. Therefore, we would want to use a Bonferroni correction or something similar to adjust for multiple comparisons. After doing that, the inclusion of Gregg Popovich (or any other coach) is not even close to being significant. That raises the question again: can coaches and teams even affect player development?
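To make the multiple-comparisons issue concrete, here is a minimal sketch (in Python rather than the R we used, and with made-up numbers – the coach count and p-value below are illustrative assumptions, not values from our model):

```python
# Sketch of a Bonferroni correction with illustrative, made-up numbers.
n_coaches = 100   # roughly the number of coach dummy variables we could test
alpha = 0.05      # nominal significance level
p_value = 0.03    # hypothetical p-value for a single coach's dummy variable

# Tested alone, the coach looks significant at the 0.05 level.
naive_significant = p_value < alpha

# Bonferroni: divide alpha by the number of comparisons performed.
bonferroni_alpha = alpha / n_coaches   # 0.0005
corrected_significant = p_value < bonferroni_alpha

print(naive_significant, corrected_significant)  # True False
```

A p-value that clears 0.05 in isolation falls far short of the corrected threshold, which is exactly what happened with the coach dummies in our model.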
So far the answer appears to be no, but that is a question far beyond the scope of our model. As we just want a ranking, we can get creative and instead look at the sum of the residuals from the original model associated with each team and coach.
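As a sketch of that bookkeeping (shown in Python with fabricated residuals – the team labels and numbers below are placeholders, not output from our model):

```python
from collections import defaultdict

# Fabricated (team, residual) pairs standing in for the model's residuals;
# a positive residual means a player improved more than the model predicted.
residuals = [
    ("SAS", 1.2), ("SAS", 0.8), ("SAS", 0.1),
    ("PHX", 0.9), ("PHX", 0.6),
    ("NYK", -0.7), ("NYK", -0.4),
]

totals = defaultdict(float)
counts = defaultdict(int)
for team, r in residuals:
    totals[team] += r
    counts[team] += 1

# Rank teams two ways: by total residual, and by average residual
# (averaging accounts for teams with more young players passing through).
by_total = sorted(totals, key=lambda t: totals[t], reverse=True)
by_average = sorted(totals, key=lambda t: totals[t] / counts[t], reverse=True)
```

Note that the two orderings can disagree: in this toy data San Antonio leads on total residual, while Phoenix leads on average, which is why the tables below report both.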
First let’s look at teams. A table ranking every team, both by total residual and by average residual (since some teams had more young players pass through than others), is below.
This ranking pretty clearly passes the smell test, with teams like Phoenix, Houston, and, of course, San Antonio topping the list. Meanwhile, teams like Brooklyn, Philadelphia, and New York are stuck at the bottom. Again, this list doesn’t necessarily mean that teams at the top are better at developing players, just that over the past 30 or so years their players have developed faster, whether because of good coaching, smart drafting, or just dumb luck. And of course this list isn’t truly informative, as an organization in and of itself can’t be better at developing players – it’s the people there that do so. With that in mind, below are the ten coaches (using the same method, but only looking at coaches with more than two players coached) associated with the most player development.
Again, this list makes a ton of sense. The coaches with the highest total residuals are well-regarded coaches who have been in the league for a long time, topped off by maybe the best two coaches of the past thirty years. Meanwhile, the coaches with the highest averages are for the most part current coaches who have only been employed for a few years, and therefore may be getting a little lucky and might experience some regression to the mean in the coming seasons.
Now let’s look at the worst coaches.
The coaches with the worst averages aren’t very recognizable, save Alvin Gentry (most recently coached Phoenix), Lawrence Frank (ex-Pistons coach), and, surprisingly, Rick Carlisle (current head coach of the Dallas Mavericks). The coaches with the worst sums should be a little more recognizable, as accumulating a very low total means they probably coached for more than a few years, and the list includes (in addition to Frank, Gentry, and Carlisle) Lenny Wilkens, Doug Collins, and our good friend Isiah Thomas.
Although adding the coaches and teams as factor variables in R didn’t improve the AIC of the model, as it greatly overfit, the rankings acquired by just looking at the sum of the residuals make sense. Once again, it is important to note that these rankings do not imply causation, but rather a correlation between certain coaches/teams and player progression (or lack thereof). And these rankings are by no means perfect, as they don’t look at players who dropped out of the NBA or who got traded. Potential ideas for a future post, perhaps? And finally, this post does not answer the perhaps more important question of whether coaches/teams can actually affect player development. Although there is no statistically significant evidence to support the notion that they can, the rankings created by looking at the residuals do make sense anecdotally.