By Harrison Chase

In the summer of 2012, the World Champion Miami Heat convinced Ray Allen, the all-time league leader in 3-pointers made and one of the best shooters in the league, to leave Boston for a deal worth roughly $3 million a year – less than half of what the Celtics were offering. In the summer of 2014, the Orlando Magic, coming off a season in which they had the third-worst record in the league, signed Ben Gordon, a player who had been waived by the Charlotte Bobcats the year before and had posted two straight seasons of negative win shares, to a two-year deal worth $4.5 million a year – which by my estimation was roughly $4.5 million more than any other team was willing to pay him.

How did these two players, one an all-time great, the other not so much, sign roughly equivalent deals in free agency? Clearly it has to do with the teams they signed with. The Heat, as the favorites to win the title again the next year, were able to convince Allen to sign with them for far less than his market value, while the Magic, destined for the lottery once again, were forced to pay Gordon more than he was probably worth.

This idea that bad teams must overpay in free agency while good teams don’t have to—a loser’s tax, so to speak—is nothing new: in fact, I got the idea for this piece while reading an article by Zach Lowe in which he explained how the Charlotte Bobcats, in order “to attract even quality midlevel veteran free agents,” must “remove the stench of historic awfulness” that comes with losing year after year… after year. With this idea out there, I decided to try to determine analytically whether this Loser’s Tax does in fact exist, and, if so, how large it is.

To do this, I first compiled a list of all the free agents from the last six years (from the free agency of 2008 through that of 2013), as well as the size and length of the contract each player signed. I also adjusted the salaries according to the year in which they were signed, using the salary cap as the base number. For example, Andre Iguodala signed a contract worth $48 million in the 2013 offseason; the cap grew from $58.679 million to $63.065 million this offseason, meaning his adjusted salary grows to $51.59 million.
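To make the adjustment concrete, here is a minimal sketch of that calculation in Python (the function name is mine; all figures are in millions of dollars):

```python
# Hypothetical helper: re-express a past contract in current-cap dollars
# by scaling its value by the growth of the salary cap.
def cap_adjusted_salary(raw_total, cap_at_signing, current_cap):
    """Scale a contract's dollar value by salary-cap growth."""
    return raw_total * current_cap / cap_at_signing

# Andre Iguodala's 2013 deal: $48M signed under a $58.679M cap,
# re-expressed under the current $63.065M cap.
adjusted = cap_adjusted_salary(48.0, 58.679, 63.065)
print(round(adjusted, 2))  # ~51.59
```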

Next, I had to come up with a way to estimate what these players’ production would be over the length of their contracts. To do this I used Basketball Reference’s simple projection system on win shares. I had to discard a number of players who, due to injury or playing abroad, missed a full year or so of basketball, but the remaining data set was still fairly large (478 players). Although the simple projection system is not perfect (its aging curve may be very general), it has a pretty accurate track record of predicting a player’s future performance.
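For intuition, the heart of a projection like this – a recency-weighted average of a player’s recent seasons – can be sketched as follows. The 3/2/1 weights here are purely illustrative; Basketball Reference’s actual system has its own weights plus aging and regression-to-the-mean adjustments not shown here:

```python
def project_win_shares(last_three_seasons, weights=(3, 2, 1)):
    """Recency-weighted average of win shares; most recent season first.
    Illustrative only -- the real system also regresses to the mean
    and applies an aging adjustment."""
    total = sum(w * ws for w, ws in zip(weights, last_three_seasons))
    return total / sum(weights)

# A player trending upward: 6.0 WS last year, 4.5 the year before, 3.0 before that.
print(project_win_shares([6.0, 4.5, 3.0]))  # (18 + 9 + 3) / 6 = 5.0
```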

In order to determine whether good teams were able to persuade players to sign for less, I looked up the previous season’s winning percentage of the team each player signed with. Obviously, the prior year’s winning percentage does not tell the whole story about how good and ready to win a team is (other free agency moves may greatly impact a team’s outlook), but it provides a decent benchmark.

Finally, I assumed that teams would underpay or overpay free agents by a percentage of their true worth, not by a fixed amount. Putting all these things together, we can model a player’s yearly salary with an equation of the form:

S = A × WS + B × (WP − 0.5) × WS + C

Here, S is a player’s salary, WS is the average yearly win shares a player is projected to produce over the length of his contract, and WP is his new team’s winning percentage. That means A is the dollar amount that an average team pays for an additional yearly win share, B is the change in that price with team winning percentage—the coefficient of the Loser’s Tax—and C, as the y-intercept, is the amount a team would pay a player who produced zero win shares (which theoretically would be $0 but of course will be a bit higher).

If there is no such thing as a Loser’s Tax, then a team’s winning percentage would have no effect on how much it pays for a player, and the B coefficient would equal zero. We can use linear regression to find the best values for A, B, and C; the results are displayed in this table:
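As a sketch of how that regression can be run, here is a least-squares fit with NumPy on synthetic stand-in data (the real 478-player contract data set is not reproduced here, so the numbers below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 478  # same size as the article's sample

# Synthetic stand-in data with known "true" coefficients.
ws = rng.uniform(0.0, 7.0, n)          # projected yearly win shares
wp = rng.uniform(0.2, 0.8, n)          # signing team's prior winning pct
true_a, true_b, true_c = 1.3, -0.5, 0.8
salary = (true_a + true_b * (wp - 0.5)) * ws + true_c + rng.normal(0, 0.3, n)

# Design matrix for the model S = A*WS + B*(WP - 0.5)*WS + C
X = np.column_stack([ws, (wp - 0.5) * ws, np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(X, salary, rcond=None)
print(round(a, 2), round(b, 2), round(c, 2))  # close to 1.3, -0.5, 0.8
```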

C (the y-intercept) was positive but not statistically significant (p-value of 0.33). This is somewhat encouraging, as it means teams are not systematically giving money to players who contribute nothing. A is obviously quite significant: a player’s projected win shares are clearly very predictive of his salary. Finally, B was significant at the 5% level, suggesting the existence of a Loser’s Tax. In words, its value of -0.48954 means that a team with a winning percentage of 0.6 would pay 1.30359 − (0.6 − 0.5) × 0.48954 = $1.25 million per projected win share, rather than the average team’s $1.30 million (a discount of 3.8%).
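That calculation can be checked directly from the fitted coefficients:

```python
a = 1.30359   # $M per projected win share for an average (.500) team
b = -0.48954  # Loser's Tax coefficient

def price_per_win_share(win_pct):
    """Dollars (in $M) a team pays per projected yearly win share."""
    return a + b * (win_pct - 0.5)

p = price_per_win_share(0.6)
discount = 1 - p / a
print(round(p, 2), round(discount * 100, 1))  # prints: 1.25 3.8
```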

Although this seems to settle the issue of a Loser’s Tax, we must consider that the actual relationship may not be linear, as this model assumes. To test this, I stratified the players by the number of win shares they were projected to produce: one group for those projected under 1 win share, a second for those projected between 1 and 1.5, a third for those between 1.5 and 2, and so forth. I then plotted what each group was being paid yearly, and got this graph.

As you can see, the relationship is linear up to about seven win shares, after which it becomes much more erratic. This is mostly due to small sample sizes – very few players who hit free agency are projected to produce 7 or more win shares, so each one has a very pronounced effect. You can see a table of sample sizes for each WS range below.

As you can see, only 35 players in the sample have 7 or more projected win shares. Spread across 15 groups, that’s roughly 2.3 players per group, far too few for meaningful extrapolation.
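The per-group counts and averages come from a simple binning pass, which can be sketched like so (the five players here are made-up stand-in data, and the uniform half-win-share bins are a simplification):

```python
from collections import defaultdict

# (projected yearly win shares, yearly salary in $M) -- illustrative data only
players = [(0.8, 1.5), (1.2, 2.1), (1.4, 2.4), (1.9, 3.0), (4.6, 6.9)]

bins = defaultdict(list)
for ws, salary in players:
    # Key each player by the lower edge of his 0.5-wide bin: 0.0, 0.5, 1.0, ...
    bins[int(ws // 0.5) * 0.5].append(salary)

avg_by_bin = {edge: sum(s) / len(s) for edge, s in sorted(bins.items())}
print(avg_by_bin)  # {0.5: 1.5, 1.0: 2.25, 1.5: 3.0, 4.5: 6.9}
```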

For reference, 41 players notched at least 7 win shares over the 2013-2014 season. Lance Stephenson and Chandler Parsons were just over 7, while Klay Thompson and Kyrie Irving clocked in just under. The bulk of NBA players fall under this threshold, so I felt comfortable dealing only with the data points for players projected below 7 win shares. Adjusting the previous figure, the relationship now looked like this:

Rerunning the analysis on this subset produces these coefficients:

These coefficients are fairly similar to those from the model of all players’ contracts; both the average price a team pays per win share and the y-intercept were basically the same. However, the effect of a team’s winning percentage had nearly twice as large an impact (going from -0.49 to -0.9) when looking only at players projected to produce fewer than seven win shares. Furthermore, the p-value of this coefficient was well below even a 0.001 alpha, providing strong evidence that the Loser’s Tax is in fact real.

So what does this mean? Let’s look at an example: Carl Landry, who hit free agency in 2013. Over the next four years, Landry was projected to produce just under 18.5 win shares, or roughly 4.6 win shares a year. Using our model, an average team would have been able to sign Landry for roughly $6.1 million a year over those four years. A team like the Spurs, who had a winning percentage of 70.7% the year before, would project to be able to sign Landry for $5.2 million a year, a discount of nearly a million dollars. On the other hand, a team like Sacramento, whom he eventually signed with, had a winning percentage of 34% the year before and would project to have to pay a little over $6.75 million a year. In actuality, Landry signed a four-year deal worth $27.9 million (in adjusted salary; his real salary was $26 million). That’s roughly $7 million a year, very close to what our model predicted.
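The Landry numbers can be reproduced from the model’s coefficients (figures in $M; the $6.1M average-team baseline is taken from the text above):

```python
b = -0.9          # Loser's Tax coefficient from the under-7-win-share model
proj_ws = 4.625   # Landry's projected yearly win shares (~18.5 over four years)
avg_price = 6.1   # $M/year an average (.500) team would pay him, per the model

def landry_price(win_pct):
    """Model's predicted yearly salary ($M) for a team with this winning pct."""
    return avg_price + b * (win_pct - 0.5) * proj_ws

print(round(landry_price(0.707), 2))  # Spurs: 5.24
print(round(landry_price(0.340), 2))  # Kings: 6.77
```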

So it seems the Loser’s Tax exists and is very significant for players projected below 7 win shares yearly. But what about those projected above 7 win shares? When I ran a regression on only those players, a team’s winning percentage was not significant at all. This could be due to several factors. First, there might simply not be enough data points to form any valid conclusion: as seen in the table above, the sample size drops off dramatically past 7 projected win shares. Second, these players may not have to worry about the strength of the team they would be joining, as they know they are talented enough to turn whatever team they land on into a contender.

This points to a possible confounder: players might want to play for Chicago or Miami not because they’re good teams but because they’re good places to live. However, there are enough counterexamples that this doesn’t really hold—for example, the Knicks have been dreadful recently, and no one wants to live in San Antonio (highs of 110 degrees, anyone?). Furthermore, the extreme significance of a team’s winning percentage suggests that even if there are other confounding variables, a Loser’s Tax does in fact exist.

A final reason the Loser’s Tax might only manifest among players below 7 win shares is that this group contains mostly role players. Because these players are pretty much known commodities, teams can gauge their value much more easily. On the other hand, teams might be more divided on the long-term value of 7+ win share players, which can lead to variations in salary offers far larger than the Loser’s Tax itself.

For role players, the Loser’s Tax is fairly large: by winning eight more games, a team has been able to reduce the amount it has to pay free agents per projected win share by about $90,000 – or roughly 7%. That may not seem like a lot, but when you consider how it allows a team like Miami to sign a player like Ray Allen for less than a team like Orlando has to pay Ben Gordon, you can see how it can make a big difference.
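That back-of-the-envelope figure can be checked directly (eight extra wins in an 82-game season, using the -0.9 coefficient from the under-7-win-share model against the average team’s price per win share):

```python
a = 1.30359   # $M per projected win share for an average (.500) team
b = -0.9      # Loser's Tax coefficient, under-7-win-share model
extra_wins = 8
delta_wp = extra_wins / 82            # eight extra wins in an 82-game season

saving = -b * delta_wp                # $M saved per projected win share
print(round(saving * 1000))           # thousands of dollars: 88 (~$90,000)
print(round(saving / a * 100, 1))     # percent discount: 6.7 (~7%)
```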
