This season, the hyped Cleveland Browns have failed to meet expectations, starting with quarterback Baker Mayfield. Last year, Mayfield broke the rookie record for passing touchdowns, but in his second year he has seen drops in nearly every significant statistic: decreases in completion percentage, yards per attempt, and touchdown rate, along with an increase in interception rate. When a second-year player sees a decrease in performance, people sometimes explain it as a “sophomore slump” – the idea that players generally see their performance decline in their second year.
Beyond Baker Mayfield, we have seen this in another notable name: RGIII, who was Offensive Rookie of the Year in 2012 with the 7th-best QBR in the league (69.4 – the best mark for a rookie in NFL history), but fell to 23rd in his sophomore campaign with a QBR of 51.8. RGIII’s case could plausibly be explained by his injury in the playoffs of his rookie year, which may have lessened his speed and agility and made him less effective as a dual threat in his second season. Not needing to fear his running ability, opposing defenses could drop more players into coverage, leading to more interceptions and less efficiency through the air.
Yet another example of a sophomore slump is in Dak Prescott. He threw many more interceptions, jumping from only 4 in his Pro Bowl rookie year to 13 in his sophomore year. In his rookie year, he posted an adjusted yards gained per pass attempt of 8.6, which was third in the entire league behind only MVP Matt Ryan and Tom Brady. In his second year, Dak declined to 6.5 AY/A, 21st among quarterbacks. In other words, he contributed 25% less yardage per throw in comparison to his rookie year.
On the other hand, we have also seen cases such as Lamar Jackson this year and Adrian Peterson, who followed an outstanding rookie season with an even better sophomore year. As a rookie, Peterson rushed for 1,341 yards on 238 carries – a very impressive 5.6 yards per carry – earning a Pro Bowl selection and Offensive Rookie of the Year honors. He then actually improved in his second year, leading the league in rushing with 1,760 yards on 363 carries (4.8 yards per carry) and becoming only the fifth player in history to reach 3,000 rushing yards in his first two seasons.
With these conflicting cases, we examined whether the sophomore slump exists and whether it differs by position. Given that the sophomore slump narrative is typically applied to players who overperformed in their rookie year, we focused on players who had above-average rookie seasons. We also required that players had played in at least 10 games in both their first and second seasons. This method reduces the effects of injuries, small sample sizes, etc. Ideally, we attain a balance between confidence in the accuracy of player statistics and a minimal level of restriction (to allow for a sufficiently large sample).
For quarterbacks, the performance statistic we examined was QBR, a statistic created by ESPN that incorporates all of a quarterback’s contributions to winning, including passing, rushing, turnovers, and penalties. For all other position groups (which do not have QBR), we used Pro Football Focus’ player grade, which incorporates a range of advanced statistics and is built from Pro Football Focus’ staff charting and grading every player on every play in every game.
The figure below charts the performance of quarterbacks across their careers:
We can see from the green line (quarterbacks who performed better than average in their first year) that there is indeed a sophomore slump. From year one to year two, QBR declines by 12.1%, a drop found to be statistically significant by a paired t-test comparing each player’s performance in his first and second years. This slump among high-performing rookies also runs against the natural trend for quarterbacks, who on average improve from their first to second year, as shown by the red line (all quarterbacks). In addition, these quarterbacks see a slight rebound in performance in year three, fitting the idea of a true “slump.” Interestingly, the quarterbacks represented by the green line on average outperform other quarterbacks in every single year of their careers (except year ten), even though we selected this group solely on first-year performance – meaning that first-year performance is highly indicative of (relative) career performance.
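The paired t-test described above can be sketched as follows. The QBR values here are illustrative placeholders, not the article’s actual data set; the point is the structure of the test, which pairs each quarterback’s year-one and year-two numbers:

```python
from scipy import stats

# Hypothetical year-1 and year-2 QBR for above-average rookie QBs
# (illustrative values only -- not the actual data behind the chart).
qbr_year1 = [69.4, 62.0, 71.3, 58.8, 65.1, 60.2, 67.5, 63.9]
qbr_year2 = [51.8, 55.4, 66.0, 54.1, 59.7, 61.0, 58.2, 57.5]

# Paired t-test: is each player's year-1 to year-2 change
# significantly different from zero?
t_stat, p_value = stats.ttest_rel(qbr_year1, qbr_year2)

decline = 1 - sum(qbr_year2) / sum(qbr_year1)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean decline = {decline:.1%}")
```

A paired test is the right choice here because each player serves as his own control: it tests the distribution of within-player differences rather than comparing two independent groups.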
Next, we examined other offensive players: running backs, wide receivers, and tight ends.
As seen in the graphs above, we once again observe a decrease in performance from the first to second year for those who performed above average in their rookie seasons. Running backs, wide receivers, and tight ends saw their performance (measured by PFF grade) decrease by 5.6%, 3.1%, and 6.1% respectively, all statistically significant. Despite the case of Adrian Peterson, running backs overall do show a sophomore slump. Like the quarterbacks, these position groups naturally tend to improve in their second year, but these high-performing rookies slumped against the trend. While the tight ends matched the quarterbacks with a rebound in their third year, the running backs and wide receivers actually saw a further decline in performance in their third year.
We did the same analysis for the defensive positions of cornerbacks and safeties, shown below:
We observe a sophomore slump for these positions as well, with cornerback and safety performance decreasing by 8.2% and 9.4% respectively, both statistically significant. Unlike the other positions, rookies at these positions overall (not just the outperforming rookies) actually tended to decline in performance in their second year.
In conclusion, rookies across many offensive and defensive positions who perform above average in their rookie seasons tend to decline in their second season – by 3% to 12% on average, depending on position. However, most display a rebound in performance in subsequent years.
If you have any questions for Matty about this article, please feel free to reach out to him at chengm@college.harvard.edu
As I wrote about in my last post, the NBA’s 3-point frenzy has only just begun. Long gone are the days of Kobe’s fadeaways and Duncan’s bank-shots; now it’s Dame’s pullup threes, Harden’s stepback threes, and Westbrook’s… well, attempted threes. Fewer and fewer teams have spots for those who struggle to shoot the long ball, and as younger generations of hyper-efficient talent enter the league, midrange specialists will slowly phase out of the NBA.
I love watching the old Warriors’ patented 3-point barrages as much as the next guy, but this has gone too far. The 3-point line has made all two-pointers beyond the dunk obsolete, and it’s created a league where Davis Bertans can be a more valuable player than DeMar DeRozan. But it’s not too late; we can still save talented players like DeRozan from ending up on the sidelines, and we don’t even have to make 3-point marksmen exceptionally less valuable to do so. It’ll just take one simple rule change.
Let’s pretend Damian Lillard has the option to take one of two shots: he can either take a 3-pointer, which he’ll make 37% of the time, or he can step in and take an 18-foot jumper, which he’ll make 47% of the time. If you know anything about expected values (or about Damian Lillard), you’d know that he’d most often choose to shoot the 3. Despite the huge increase in shot percentage, Dame’s long two-pointer will only get him .94 points per shot, while the three-pointer will get him 1.11 points per shot. The extra point he gains from shooting behind the arc is so valuable that it outweighs any realistic increase in shot percentage from shooting closer in – anything short of a layup. In Dame’s case, there is no benefit to shooting the 47% two-pointer when he can make the three-pointer 37% of the time; that is, unless we implement the “make-it-take-it” rule.
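The expected-value comparison is just make probability times shot value; a minimal sketch using the percentages from the example:

```python
def points_per_shot(make_pct: float, shot_value: int) -> float:
    """Expected points from a single shot attempt."""
    return make_pct * shot_value

# Dame's two options, using the percentages from the example above
three = points_per_shot(0.37, 3)   # the 3-pointer
two = points_per_shot(0.47, 2)     # the 18-foot jumper

print(f"three: {three:.2f} points per shot, long two: {two:.2f} points per shot")
```

Under the current rules, these two numbers are the whole story: whichever shot has the higher expected value is the better choice.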
The make-it-take-it rule is simple: if you make a shot, you get to keep possession of the ball. If you’ve ever played pickup basketball in the halfcourt, you’ve probably played with it before. In a full-court setting, I imagine the scoring team would just take the ball out of bounds after a made shot to start a possession towards the opposite end of the court. It’s frustrating to be on the wrong side of the ball when the offense is hot, but it’s thrilling to be on the right side of the ball during a comeback. Yet above all, what makes this rule so interesting is its unique impact on the value of the 3-point shot.
Let’s return to the Damian Lillard example, but this time he’s playing with the make-it-take-it rule. The two-pointer will still grant him .94 points per shot, and the three-pointer will still grant him 1.11 points per shot. But now, there’s a benefit to taking the two instead of the three: if he takes the two-pointer, he has a 47% chance of being able to shoot again. If he takes the three, he only has a 37% chance of being able to shoot again. So even if the three-pointer is worth more points per shot, the two-pointer will give him a better chance to extend the possession and take a second shot.
So which of these two shots is preferable now? Now that possessions can contain more than one made shot, we can’t just compare his expected points per each shot and call it a day. Somehow, we need to include the number of future shots that the two-pointer and the three-pointer might allow him to take. So let’s imagine two scenarios: one where Dame only shoots the three, and one where he only shoots the two. Since we know how often he’ll make each shot, and we know how much each shot is worth, we can actually find the true expected points per possession in each of these scenarios. If we assume that each of Dame’s shots are independent, and that he will keep shooting until he misses, then the number of shots he takes in each scenario will follow a geometric distribution. Take the mean of the distribution, multiply it by the worth of each shot (2 or 3), and you have the expected points per possession of each shot.
Only taking three-pointers, Damian Lillard will score 1.76 points per possession. Only taking two-pointers, Lillard will score… 1.77 points per possession. Ever so slightly, it is suddenly more valuable for Lillard to take the long two instead of the three. We didn’t have to move the three-point line or make it worth a different number of points, and yet, we’ve managed to make the midrange useful again.
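The calculation above can be sketched in a few lines. If each shot is independent and the possession ends at the first miss, the number of makes before a miss follows a geometric distribution with expected value p/(1 − p):

```python
def points_per_possession(make_pct: float, shot_value: int) -> float:
    """Expected points per possession under make-it-take-it,
    assuming independent shots and shooting until the first miss.
    The expected number of makes before a miss is p / (1 - p)."""
    expected_makes = make_pct / (1 - make_pct)
    return shot_value * expected_makes

# Dame's two scenarios: threes only vs. long twos only
ppp_three = points_per_possession(0.37, 3)
ppp_two = points_per_possession(0.47, 2)
print(f"threes only: {ppp_three:.2f} ppp, twos only: {ppp_two:.2f} ppp")
```

The key design feature of the rule shows up here: the make percentage appears in the denominator as well as the numerator, so higher-percentage shots compound across the possession rather than just paying off once.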
Let’s look at this a little deeper. Under the current system, teams have to shoot 50% better from inside the arc to make their two-pointers just as efficient as their three-pointers (so if a team normally shoots 40% from behind the arc, it should only take a two-pointer if it can make it at least 60% of the time). How does this change with the new rule? After doing a little algebra, we learn that the efficiency of the two-pointer equals that of the three-pointer when 2·p2/(1 − p2) = 3·p3/(1 − p3), where p2 and p3 are the two-point and three-point percentages; solving for p2 gives p2 = 3·p3/(2 + p3).
So under the make-it-take-it system, if you shoot 40% from three, then you only need to shoot 50% from two to make your two-pointer just as valuable as your three. If you shoot 30% from three, then you only need to shoot 39% from two. These two-point percentages are far more achievable than what the current system requires, and thus, shots from inside the arc become much more valuable.
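The breakeven condition can be checked directly. Setting the two points-per-possession expressions equal and solving for the two-point percentage gives a simple formula:

```python
def breakeven_two_pct(three_pct: float) -> float:
    """Two-point percentage at which a two-pointer matches a given
    three-point percentage in points per possession under
    make-it-take-it. Derived from 2*p2/(1-p2) = 3*p3/(1-p3)."""
    return 3 * three_pct / (2 + three_pct)

# The two cases from the text: a 40% and a 30% three-point shooter
print(f"40% from three -> {breakeven_two_pct(0.40):.0%} from two")
print(f"30% from three -> {breakeven_two_pct(0.30):.0%} from two")
```

Compare this to the current system, where the breakeven is simply 1.5 times the three-point percentage (60% and 45% for the same two shooters).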
We can visualize this new “efficiency landscape” by creating a heatmap on 3P% against 2P%, where each pixel value contains the difference in points-per-possession for a shooter with the corresponding shooting percentages. To help explain the map, I’ve included the example of Damian Lillard, as well as the efficiency landscape of the NBA under its current rules. The area in green is where it’s more efficient to shoot a 2-pointer, and the area in red is where it’s more efficient to shoot a 3-pointer. Notice how much more green space we’ve created in the Make-It-Take-It NBA:
You can think of this as an offensive decision maker. Pretend that LeBron drives in the lane, and he has the choice to either a) take a contested midrange floater, which he’ll make 60% of the time, or b) kick the ball out to Danny Green for a wide-open corner three, which he’ll make about 55% of the time. We look at the plot above, find the point where the 2PT percentage is 0.60 and the 3PT percentage is 0.55, and see which side of the line the point falls on. In this case, under both systems, the point falls on the red side, meaning LeBron should kick the ball out to Danny Green for the open three.
By now, it’s easy to see how the make-it-take-it rule adds value to the midrange jump shot. But how will this rule actually affect teams’ shooting preferences? Just as we had done before, we can calculate the expected value of each shot just by using the probability that a given player can make that shot. Average shot value tends to be a good proxy for how often players take each shot, so as long as those probabilities don’t change much under the new ruleset, it’s reasonable to think that the league’s new shot chart might look something like this:
In the current NBA, shots at the three-point line are just as valuable as shots just 7 feet from the basket. The average player needs to shoot from within dunking range to be more efficient from inside the arc than outside it. In our new version of the NBA, that line has been moved out to about 14 feet – as in, the average player needs to shoot from within 14 feet to be just as efficient from two as he is from three. Above-average shooters can extend this line even further, which is ultimately what gives midrange specialists a place in the make-it-take-it NBA.
Notice my one caveat to the shot chart on the right: it is only perfectly accurate as long as players shoot just as well from each spot on the floor as they do now. As offenses and defenses shift their playstyles in search of a new dominant strategy, it’s difficult to say how this might impact shot percentages. Since shots underneath the rim are even more valuable with the make-it-take-it rule, maybe defenses will put greater emphasis on rim protection, thus giving offenses more open looks from the mid and long-range. Maybe the new rule will reduce the number of plays in transition, thus further decreasing shot percentages around the rim. Maybe the rule will force defenses to cover each area of the court more actively, raising shot percentages everywhere.
It’s difficult to say exactly what will happen, but that’s part of what makes this rule so interesting. Now that midrange shots are valuable, each team can better tailor its offense to the strengths of its personnel. No longer must teams put four shooters and a center on the floor to win; teams like the Spurs can feel more comfortable letting their stars dominate the midrange, and the Sixers no longer have to pretend that Ben Simmons is going to shoot threes. Yet still, under the new system, sharpshooters will be incredibly useful for stretching the floor and creating space for others. Essentially, we’re giving each team a broader scope of strategies it can choose from, which makes for a more interesting NBA.
For all the praise I’ve given the make-it-take-it rule, I haven’t talked much about its implementation. But, for the most part, I don’t think it’s too difficult to imagine: Team A scores, they take the ball out of bounds, and they in-bound it again towards the opposite hoop. They shoot again and miss; Team B gets the rebound, dribbles back down the floor, and tries to score on the first hoop. Basically with each made shot, the teams switch directions. Altogether, it’s a pretty thin layer of complexity for its elegant impact on 3-point mania.
Things get a little trickier when we start thinking about free-throws. One potential problem with the make-it-take-it rule is that fouling bad free-throw shooters becomes an easy way to regain possession of the ball, and no one likes to watch a game of Hack-a-Shaq. So what I propose is this:
The third part to this rule is powerful. It means that if you foul an average (76%) free-throw shooter in the bonus, then you only have about a 5% chance of gaining possession of the ball (accounting for offensive rebounds). It seems harsh to the fouling team, but the rule has one major benefit: it strongly disincentivizes fouling at the end of games, and instead encourages tougher defense. Fans, refs, and players alike would no longer have to suffer through hour-long fourth quarters caused by relentless fouling. If a team is up by 8 and has the ball with 24 seconds remaining, then as long as they can play keep-away for 24 seconds, the game is over. No intentional fouling to be had. But if the losing team has the ball, maybe, just maybe, they can put together a string of offensive miracles and pull off the win.
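As a rough check on that 5% figure, here is a back-of-the-envelope sketch. The reading of the rule assumed here (the fouling team regains possession only if both free throws miss and it secures the rebound) and the rebounding rate are my assumptions, not the author's stated numbers:

```python
ft_pct = 0.76      # league-average free-throw percentage (from the text)
dreb_rate = 0.86   # ASSUMED defensive rebound rate on missed free throws

# One plausible reading of the rule: the fouling team regains the ball
# only if BOTH free throws miss AND it wins the ensuing rebound.
p_regain = (1 - ft_pct) ** 2 * dreb_rate
print(f"chance the fouling team regains possession: {p_regain:.1%}")
```

Under these assumptions the fouling team comes away with the ball only about one time in twenty, which matches the ballpark claimed above and explains why intentional fouling becomes such a losing strategy.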
Of all the ways to bring the midrange back into basketball, implementing the make-it-take-it rule is easily the most economically feasible. If we wanted to extend the 3-point line, we would have to re-paint every NBA regulation court in the world. If we wanted to make the 3-point shot worth just 2.5 points, we would have to add a decimal place to every scoreboard in the world. The make-it-take-it rule, on the other hand, requires nothing more than a pen, and for Adam Silver to stumble upon this post.
Is the make-it-take-it rule a little silly? Maybe. Am I convinced that it definitely couldn’t work? Not one bit. There are still plenty of things to sort out with this rule, so if you have any thoughts about it, feel free to reach out on Twitter (@ejohnsson50) or email (ejohnsson@college.harvard.edu). I look forward to hearing other ideas.
Since its birth in 1979, we’ve watched the 3-point shot evolve from a signal of desperation to a cornerstone of offenses around the league. Any fan of the sport knows that teams have gradually embraced the long ball in place of the mid-range, and they show no sign of slowing down. Every year since the 3-point arc was extended to 23.75 feet in 1997, the league has steadily and substantially increased its 3-point attempt rate. To fans, players, and coaches alike, it is clear that basketball will only continue to drift beyond the arc.
Luckily, we don’t need a crystal ball to see what a future NBA might look like. Over the past decade, teams have consistently upped their appetite for threes when it matters most: clutch time. With less than 5 minutes remaining and within 5 points of their opponents, teams have always been more willing to step behind the arc – and not just because of last-second heaves. If we remove all shots taken in the last second of the game, and all shots beyond 40 feet, the league still shoots far more threes in the clutch than it does otherwise (for reference, Damian Lillard’s shot to finish off the Thunder was 38 feet from the basket).
If what happens in clutch time is any indicator of the future of basketball, then we’ve only scratched the surface of 3-point shooting in the NBA. When games get tight down the stretch, teams become increasingly reliant on the 3-ball – even if they shoot it less efficiently. In fact, in the 2018-2019 NBA season, only four teams took fewer 3-point shots in the clutch than their starters did in the first quarter:
It appears that teams hoist up more threes in the clutch, attempting to raise their efficiency when it matters most. But what shots are these new 3-pointers replacing, and who shoots them? Are coaches just putting more three-point specialists on the floor in clutch time, or are players choosing to take more of their shots from deep?
To answer the first question, we can look at the league’s shot selection in the clutch compared to the shot selection in the opening minutes of each game. If we divide the court into six different zones, we can compare the frequency of shots from each zone during the two time periods. On the left is a shot chart of all shots taken in the 2018-19 NBA season, colored by zone. On the right is the whole league’s shot distribution, in and out of clutch time. As you’ll notice, those new threes aren’t just repurposed long twos; teams are taking more threes, even in favor of shots from the restricted area.
Of course, there’s a second way to interpret this information. Instead of offenses deliberately changing their shot selection down the stretch, perhaps defenses tighten up and force their opponents to take deeper shots. This might explain why, on average, players seem to raise their three-point attempt rate by about 3.5 percentage points (.035) in the clutch. This difference is statistically significant (p < .05), and is shown by the blue line on the plot below. Only players with at least 40 shot attempts in the clutch are shown:
Note: If we lower that threshold, the trend actually gets stronger; we start to capture players like Wesley Matthews who appear to function almost exclusively as sharpshooters in the fourth quarter, but don’t get nearly as many looks as other, more ball-dominant stars. For the sake of having a more accurate sample, these players were left out.
Players above the zero-line shot more threes in the clutch, and players below the line shot fewer. As expected, most players fell above the zero-line. Even Anthony Davis, who normally took just 13% of his shots from 3, jacked his 3-point attempt rate up to 30% when games were tight down the stretch. Though, there’s good reason to think that some of these additional threes were forced by the defense; in the clutch across the league, teams tended to shoot more threes in the form of pullups and step-backs, rather than straight up jump-shots. Knowing that these shots are generally less efficient than standard J’s are, it’s likely that clutch defenses are forcing their opponents into tough one-on-one scenarios (of course, some stars are more comfortable handling this than others are).
In reality, all this is probably the result of both offenses changing their playstyles to maximize efficiency, and defenses getting more stingy in the fourth. When defenses lock up at the end of games, they force opponents to lean on their biggest stars to generate points, many of whom rely on stepback and pullup threes to score most efficiently in one-on-one scenarios. But of course, stepbacks and pullups are less efficient than the types of threes that teams might get when they’re not in the clutch. Not to mention, a few of these shots are coming when offenses have no choice but to shoot threes, just to have a chance at winning the game. Though, with more granular data containing the position and orientation of each defender on the floor, we could ask whether or not clutch threes are more-heavily contested than others are. In turn, this might tell us which side of the ball is really dictating the hyper long-range brand of basketball we see in clutch time.
If you have any questions about this post, you can reach Erik on Twitter (@ejohnsson50) or email (ejohnsson@college.harvard.edu).
Bonus: In case you’re interested, check out the players who saw the biggest changes in 3-point frequency when it mattered most. As you’ll notice, AD isn’t the only player who might have taken some ill-advised threes in the clutch. Many players at the top of the scatter plot might have benefited from dishing the ball to a teammate more often (looking at you, Russ). Remember that some of the players (especially near the bottom) on this list might have misleading 3-point percentages in clutch time. Julius Randle only took five threes in clutch time, and he happened to make four of them. Giannis shot seven, and made three. No matter how hard they try, neither of these players will shoot that well in the long run, at any point in the game (sorry Giannis).
The Dallas Mavericks finished the 2018-2019 season with a record of 33-49. Despite finishing 15 games back of the #8 seed in the Western Conference, the Mavs experienced a surprising number of bright spots. They started the season with a record of 15-11, including an 8-4 record in the month of November. Their rookie sensation Luka Doncic won the Rookie of the Year award, winning 98 of the 100 possible first-place votes. Dallas also acquired 7’3” All-Star Kristaps Porzingis from the Knicks, who missed the entirety of this past season recovering from a torn ACL. Despite their attempted tank job at the end of the season (7-18 after the All-Star break), the Mavs were unable to keep their top-five protected pick in the 2019 draft because of the Luka Doncic / Trae Young swap in 2018. But as coach Rick Carlisle said after the Doncic trade last year, “Future draft picks to me are of very little interest at this point. We’ve got to take this group and move these guys forward.”
Dallas entered 2019 free agency with over $30 million in cap space, including the $17.1 million restricted free agent cap hold of Kristaps Porzingis. They could have feasibly traded away 2017 1st-round pick Justin Jackson to get the requisite cap space to sign a player with 7-9 years of experience (like Kemba Walker, Tobias Harris, or Khris Middleton) to a max contract starting at $32.7M this season. Operating as an under-the-cap team, however, would have rendered them unable to use their mid-level, bi-annual, and trade exceptions, which totaled over $35M. The trade exception they acquired when they traded Harrison Barnes to the Kings last February totaled over $21M and was the largest trade exception in the league heading into the 2019 offseason, so giving that up to operate as an under-the-cap team would presumably have been a difficult pill for the Mavericks to swallow.
At the NBA Summer League in Las Vegas this past July, Mark Cuban, owner of the Mavericks, admitted that the Mavs’ Plan A was to sign Kemba Walker to a max contract, their Plan B was to use cap space to sign Danny Green and presumably another player in the $14-18M range, and their Plan C was the plan they actually executed, which was to operate as an over-the-cap team and use their various exceptions.
Once Kemba Walker signed with the Celtics and it became known that Danny Green was deciding between the Raptors and Lakers based on Kawhi Leonard’s decision, the Mavericks regrouped and decided to operate in free agency as an over-the-cap team. This allowed them to sign their own restricted free agents, Maxi Kleber and Dorian Finney-Smith, to contracts irrespective of the salary cap (Kleber and Finney-Smith have salaries this year of $8M and $4M respectively). They re-signed Kristaps Porzingis to a 5-year max contract starting at $27.29M this season. They used most of their mid-level exception on Seth Curry and 2nd-round pick Isaiah Roby to sign them both to four-year contracts and used most of their bi-annual exception to sign Boban Marjanovic to a two-year contract. Finally, they used a little less than half of their ginormous trade exception to absorb the contract of Delon Wright in a sign-and-trade, trading two future 2nd round picks to the Grizzlies. Though it did not affect their 2019-2020 financial situation, they also signed Dwight Powell to a 3 year / $33M extension that runs through the 2022-2023 season, locking up the Canadian big man through his age 31 season.
In the years following their championship season in 2011, the Mavericks unsuccessfully attempted to use cap space to lure top free agent talent to Dallas to foster a new era of Mavericks basketball as Dirk Nowitzki aged out of his prime. Despite Nowitzki’s acceptance of massive pay cuts in the later years of his career, the Mavericks were never able to pair him with a productive star player following his Finals MVP performance in his age 33 season. Whether it was Deron Williams in 2012, Dwight Howard in 2013, Carmelo Anthony in 2014, De’Andre Jordan in 2015, or Mike Conley and Hassan Whiteside in 2016, the Mavericks have had meetings with top free agents but have been unable to sign any of them in recent memory. One could argue that the Mavs were lucky to not be financially strapped by these hefty contracts for players past their primes (Williams and Anthony were waived before the end of their contracts), but the team’s lack of star power in recent years has contributed to not being able to get out of the first round of the playoffs and a winning percentage of just 47% since their championship season.
As seen in the graph above, the Mavs have been under the luxury tax threshold every season since their Finals victory except for the year immediately following their championship season. This is certainly not because of a lack of funding; Mavericks owner Mark Cuban and the Dallas Mavericks organization both rank in the top ten in richest owners and most valuable organizations, respectively, in the NBA. It is reasonable to assume that if any marquee free agent ever signed with the Dallas Mavericks (let’s not count Chandler Parsons and Harrison Barnes in this exercise), the Mavericks would gladly have paid the tax for a championship contender, as they did nearly every year in the 2000s.
Following a 24-58 season, the Mavs’ fortunes finally changed during the 2018 draft when they traded their #5 pick as well as a top-five protected 2019 pick to the Atlanta Hawks for the #3 pick. The Mavericks selected Slovenian “Wonder Boy” Luka Doncic with that pick, giving up the picks that turned into Trae Young and Cam Reddish. Both teams seem to be happy with the trade so far, and the careers of Young and Doncic will be linked for years to come. Luka had a wonderful rookie season for a 19-year-old, filling up the stat sheet with 21 points, 8 rebounds, and 6 assists per game. Doncic’s usage rate increased each month of the season, and following the January 31st trade with New York involving three of Dallas’ starters, it skyrocketed to third highest in the league behind James Harden and D’Angelo Russell. His shooting efficiency left room for improvement, though, dropping to 42% from the field, 28% from three-point range, and only 68% from the free throw line after the big trade at the end of January. Still, Luka had a very promising first year in the league, and was recently named one of the top three players that NBA front offices would start a franchise with, along with Giannis Antetokounmpo and Anthony Davis, as well as the third-best international player in just his age-20 season, behind Antetokounmpo and Nikola Jokic.
As previously mentioned, the Mavericks and New York Knicks agreed to a blockbuster trade on January 31, 2019. The Mavericks traded Dennis Smith Jr, Wesley Matthews, De’Andre Jordan, their unprotected 2021 first-round pick and their top-10 protected 2023 first-round pick to the Knicks in exchange for the injured Kristaps Porzingis, Tim Hardaway Jr, Courtney Lee, and Trey Burke. More specifically, the Mavericks acquired an (injured) 7’3” “unicorn” who was named an All-Star at the age of 22 in exchange for two of their first-round picks, their #9 pick in the 2017 draft Dennis Smith Jr, and over $45 million in potential cap space for the 2019 offseason. This trade signified the end of the Mavericks hoarding cap space with the hope of signing marquee free agents; instead, they traded for a star. Ironically enough, the Knicks agreed to this trade to create the room to sign two max free agents this past summer, and we all know how that turned out. Despite not playing at all this past season, the Mavericks agreed to a 5 year / $158M contract with Kristaps Porzingis with a player option for the 2023-2024 season. On top of that, the contract is fully guaranteed and there are no injury provisions in the contract like there are in Joel Embiid’s current contract. With how much they traded away for him and how much they paid him, it is clear the Mavericks strongly believe in the (newly swole) Dirk-like big man.
The Dallas Mavericks truly think they have the newer, better version of Dirk Nowitzki and Steve Nash. The Mavs probably still regret not matching Phoenix's offer to Steve Nash in 2004 free agency, as he turned into a two-time league MVP after leaving Dallas. As Porzingis and Doncic enter their age-24 and age-20 seasons, respectively, with each under contract for years to come, Dallas knows that the supporting cast the organization surrounds its two European stars with is just as important as the development of Porzingis and Doncic. By giving multi-year deals to many of their role players this past offseason (Powell, Wright, Kleber, Curry, and Finney-Smith all signed three- or four-year contracts), they effectively wiped out their potential 2021 cap space for a loaded free agent class that could include superstars like Giannis Antetokounmpo, Kawhi Leonard, Paul George, LeBron James, Victor Oladipo, and Blake Griffin. In the past, the Mavericks would have hoarded this cap space in the hope that a star would choose Dallas in free agency. But now, with two players they view as franchise cornerstones in Doncic and Porzingis, the organization is focused on surrounding its young stars with the best supporting cast possible.
For the 2019-20 season, the Mavericks have the largest usable trade exception in the league, totaling $11.83M. With this exception, they can absorb a player's contract in a trade without sending out salary in return. It is probably not a coincidence that the Mavericks' current salary total plus their large trade exception comes to just $400K less than the luxury tax threshold. With a plethora of capable guards and big men on their roster, the Mavs figure to use the trade exception on a versatile wing who could match up with the likes of LeBron James, Kawhi Leonard, or Paul George in a playoff series. Players who fit this mold with salaries around the amount of the trade exception include Tony Snell, Jae Crowder, Robert Covington, Norman Powell, and C.J. Miles. Though the Mavericks do not have many draft picks to trade, they could be a valuable trade partner for a team trying to get under the luxury tax.
Nevertheless, the Mavericks now have a bright future with two franchise cornerstones to lead them in the post-Dirk era. After years of failed free agency pitches and mediocre teams following their championship in 2011, the Mavericks are poised to be competitive in the deep Western Conference for years to come.
If you have any questions for Buddy, you can email him at jamesscott@college.harvard.edu. You can also check out his site at buddyscottnba.home.blog
References
Basketball Reference
NBA.com stats page
Basketball Insiders Cap Sheets
Spotrac
Forbes Sports
Every year, the MLB showcases its best players in the All-Star game. The game is never without controversy, as certain players who seem to deserve the honor are left off the league rosters ("snubs"), whereas others having not-so-stellar seasons find a way into the game. This is in part due to how the teams are selected: fans vote for players they think deserve the honor, and the player with the most votes at each position earns a starting spot on that league's team. The players then vote for their peers to fill out the reserve rosters.
As one may expect, fans often “stuff the ballot box” for home-team players, such as in the notorious 2015 All-Star game, when the American League starting roster featured four players from the Kansas City Royals (at one point in balloting, eight of nine players were Royals).
The overall impact may seem marginal, especially given that a player who misses out on the starting lineup still has a shot at the reserve roster, which is decided by peers. Nonetheless, it's worth investigating the structure of the All-Star game starting rosters. In particular: which deserving players did fans leave off, and which less deserving players did they vote in?
To conduct this analysis, I used pybaseball, an open-source Python library for baseball statistics, to pull statistics on the first half of each season for every position player in the MLB, starting from 2008 (the farthest back pybaseball's database goes). Players who played fewer than 45 games or recorded fewer than 135 at-bats were removed. I then marked all players as All-Star starters or not based on Baseball Reference data. An example entry looks like this:
While 2008 is an arbitrary cutoff enforced by the pybaseball package, it doesn't necessarily compromise our analysis: (1) online voting has dramatically changed fan voting, so 2008 seems like a reasonable starting point, and (2) the criteria fans use to judge players have also changed; some fans nowadays care about WAR, which wasn't the case in the 1950s, for example. Nonetheless, we should be mindful of the limitations of our data.
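The filtering step described above can be sketched in a few lines. This is an illustrative sketch, not the article's actual code: the field names (`G`, `AB`) are my own shorthand, and in the real pipeline the rows would come from pybaseball's season pulls rather than being typed in by hand.

```python
# Hypothetical player rows; in practice these would come from pybaseball.
MIN_GAMES = 45
MIN_AT_BATS = 135

def filter_qualified(players):
    """Keep only players with enough playing time to be considered."""
    return [p for p in players
            if p["G"] >= MIN_GAMES and p["AB"] >= MIN_AT_BATS]

players = [
    {"Name": "A", "G": 80, "AB": 300},  # qualifies
    {"Name": "B", "G": 44, "AB": 200},  # too few games
    {"Name": "C", "G": 60, "AB": 120},  # too few at-bats
]
print([p["Name"] for p in filter_qualified(players)])  # → ['A']
```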
To start, let’s look at how All-Star starters and the rest of the league differ across various statistics (all stats are converted to rates to control for All Stars having more plate appearances, so HR is actually HR/Plate Appearance):
Figure 1: Percentage Difference (where 1.0 represents 100% difference) between All-Star Starters and Rest of League across various statistics. Note that All-Star starters were intentionally walked at a far greater rate than others.
We see a large difference in intentional walks — All-Star starters are intentionally walked nearly 150% more than average — which makes sense, as pitchers intentionally walk batters when they believe they have better chances against the next batter (indicating the current batter is stronger). Interestingly, All-Star starters had fewer sacrifice hits than the rest of the league, perhaps because they got their RBIs while also getting on base (as opposed to sacrifice hits). All-Star starters also struck out at a lower rate, as expected.
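The rate conversion and percentage-difference calculation behind Figure 1 can be written out explicitly. The numbers here are made up, chosen only to reproduce the roughly 150% intentional-walk gap described above:

```python
def per_pa(stat, plate_appearances):
    """Convert a counting stat to a rate, controlling for playing time."""
    return stat / plate_appearances

def pct_difference(starters_rate, rest_rate):
    """Percentage difference of starters vs. rest of league (1.0 = 100%)."""
    return (starters_rate - rest_rate) / rest_rate

# Illustrative: starters intentionally walked in 10 of 1000 PAs vs. 4 of
# 1000 for everyone else → a 150% difference.
print(pct_difference(per_pa(10, 1000), per_pa(4, 1000)))  # → 1.5
```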
We can formalize these ideas in a table using a two-sample t-test, which tests how plausible it is that two datasets come from the same model. The p-value indicates how often we would expect two datasets to differ as much as ours do if both groups really followed the same model.
For example, if we get a p-value of 0.01 for the difference in home runs between All-Star starters and everyone else, then if All-Star starters and other players both had the same probabilities to hit home runs, we’d expect to observe the differences we see between the two groups about 1% of the time.
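That interpretation can be made concrete with a small Welch-style two-sample t-test. This is a self-contained sketch: for large samples it approximates the p-value with the normal distribution, whereas the actual analysis would presumably use a proper t-distribution (e.g. `scipy.stats.ttest_ind`):

```python
import math
from statistics import mean, variance

def welch_t_pvalue(sample_a, sample_b):
    """Welch two-sample t-statistic with a two-sided p-value from the
    normal approximation (reasonable when both groups are large)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se = math.sqrt(va / na + vb / nb)
    t = (mean(sample_a) - mean(sample_b)) / se
    p = math.erfc(abs(t) / math.sqrt(2))  # equals 2 * (1 - Phi(|t|))
    return t, p

# Illustrative HR/PA rates for starters vs. rest of league (made up):
hr_starters = [0.045, 0.050, 0.061, 0.038, 0.055, 0.049]
hr_rest     = [0.020, 0.031, 0.025, 0.018, 0.027, 0.022]
t, p = welch_t_pvalue(hr_starters, hr_rest)
```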
Here are some p-values from our two-sample t-test (we convert all stats like home runs to rates like home runs per plate appearance to control for All Stars having more plate appearances):
Statistic | p-value |
Age | 0.26 |
Games | 2.09×10^-31 |
At Bats | 8.66×10^-57 |
Runs/PA | 3.25×10^-57 |
HR/PA | 5.26×10^-31 |
RBI/PA | 1.14×10^-36 |
Avg | 3.11×10^-53 |
SB/PA | 0.39 |
These are some ridiculously low p-values, but this is exactly what we would expect: All-Star starters are supposed to be the best of the best, so we would expect them to hit home runs, drive in runs, and get on base at significantly higher rates. It's worth noting, though, that voters don't seem to care as much about speed (stolen bases had a p-value of 0.39) or age (perhaps countering the notion that the All-Star game features well-past-their-prime players who find a way in because of generous fans).
Figure 2: Age Distribution of All-Star starters (red) vs rest of MLB (blue)
To find the snubs and fairies, we need a method to predict All-Star starters. Since being an "All-Star" is a subjective notion, any metric we use, such as WAR, VORP, or HRs, is subject to our own biases. Our goal is instead to see who the fans left out by their own metrics. To do so, we train three machine learning models — Logistic Regression, k-Nearest Neighbors (kNN), and Random Forest — to predict whether a player is an All-Star starter or not. In each case, we train the model on 80% of the data, make predictions on the remaining 20%, and repeat this five times so that every point in the dataset gets a prediction (except for Logistic Regression, where the model is trained on all the data).
Note that this means our “snubs” and “fairies” list relies on the imprecision of the model: if our model were perfect and classified each player as a starter or not correctly, then there would be no snubs or fairies! If we visualize each player as a 0 or 1 in this space defined by various statistics (each axis is a different statistic, like HRs, RBIs, ABs, etc.), then what we’re looking for is essentially 0s surrounded by a lot of 1s (snubs) and 1s surrounded by a lot of 0s (fairies).
Figure 3: We want to find a model to distinguish the All-Star starters (orange) from the rest (grey). Here we plot an example with two statistics: HR/PA and Batting Average. As we add more and more statistics, our model will try to distinguish between the two clusters of data (this plot will become multi-dimensional in our model).
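The "a 0 surrounded by 1s" intuition can be illustrated with a toy nearest-neighbors vote. This is not the scikit-learn model behind the analysis, just a minimal sketch in a made-up two-statistic feature space (HR/PA, AVG):

```python
import math

def knn_vote(point, data, labels, k=3):
    """Fraction of the k nearest neighbors labeled 1 (All-Star starter)."""
    order = sorted(range(len(data)),
                   key=lambda i: math.dist(point, data[i]))
    return sum(labels[i] for i in order[:k]) / k

# Toy feature space: (HR/PA, AVG); label 1 = voted-in starter.
data   = [(0.060, 0.320), (0.058, 0.315), (0.062, 0.325),  # starters
          (0.010, 0.240), (0.012, 0.235), (0.009, 0.250)]  # non-starters
labels = [1, 1, 1, 0, 0, 0]

# A non-starter whose stats sit in the starter cluster — a "snub":
print(knn_vote((0.059, 0.318), data, labels, k=3))  # → 1.0
```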
Let's start by looking at the players that all three models agree were snubs (there are a lot: 20+!):
No Love: Players Classified as Snubs by KNN, RF, and Logistic Regression
Player (Team) | Year | HR | RBI | AVG |
Paul Goldschmidt (Arizona Diamondbacks) | 2018 | 18 | 48 | 0.274 |
Melky Cabrera (San Francisco Giants) | 2012 | 7 | 39 | 0.354 |
Martin Prado (Atlanta Braves) | 2010 | 7 | 36 | 0.355 |
Carl Crawford (Tampa Bay Rays) | 2010 | 7 | 40 | 0.316 |
Jose Altuve (Houston Astros) | 2015 | 7 | 35 | 0.302 |
We can also look at this from another perspective: whereas kNN and Random Forest are pure classification algorithms, logistic regression outputs a probability that a player is an All-Star starter. If this probability is greater than 0.5, we classify him as an All-Star starter.
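That thresholding step looks like this in code. The coefficients and feature set below are invented for illustration; the real model is fit on the full collection of statistics:

```python
import math

def starter_probability(features, coefs, intercept):
    """Logistic model output: P(All-Star starter) = sigmoid(w·x + b)."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

def classify(prob, threshold=0.5):
    """Call a player a predicted starter when probability exceeds 0.5."""
    return prob > threshold

# Hypothetical model over (HR/PA, AVG) with made-up coefficients:
prob = starter_probability([0.05, 0.320], coefs=[40.0, 10.0], intercept=-6.0)
print(round(prob, 3), classify(prob))  # → 0.31 False
```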
No Love: Snubs with Highest Logistic Regression Probability to Be an All-Star Starter
Player (Team) | Year | HR | RBI | AVG | All-Star Starter Probability | Lost Out To (which player was the starter?) |
Victor Martinez (Detroit Tigers) | 2014 | 21 | 55 | .328 | 89.3% | Nelson Cruz (Baltimore Orioles) |
Justin Morneau (Minnesota Twins) | 2009 | 20 | 67 | .320 | 85.9% | Mark Teixeira (New York Yankees) |
Miguel Cabrera (Detroit Tigers) | 2011 | 17 | 67 | .324 | 85.8% | Adrian Gonzalez (Boston Red Sox) |
Miguel Cabrera (Detroit Tigers) | 2012 | 18 | 56 | .323 | 85.1% | Prince Fielder (Tigers) |
Joey Votto (Cincinnati Reds) | 2017 | 24 | 61 | .312 | 83.4% | Ryan Zimmerman (Washington Nationals) |
Some of these snubs are truly head-scratching. Rafael Devers' exclusion from the All-Star roster, both starting and reserve, for example, was hotly contested in MLB circles this year. Others, however, simply took a backseat to other outstanding players at their position. First basemen are generally renowned for their batting abilities and don't have many defensive responsibilities, so it's no surprise that four of the five logistic regression snubs were first basemen, who all took a backseat to other first basemen having excellent seasons.
Moreover, all three of our models agreed that the Tigers, Diamondbacks, and Blue Jays had the most snubs, so get on it Detroit, Arizona and Toronto! Support your players!
And who were the biggest fairies? Well, our models agree on a lot of them — 71 to be precise! Here’s a list of 5:
Free Pass: Players Classified as ‘Fairies’ by KNN, RF, and Logistic Regression
Player (Team) | Year | HR | RBI | AVG |
Jackie Bradley Jr. (Boston Red Sox) | 2016 | 13 | 53 | .294 |
Chase Utley (Philadelphia Phillies) | 2014 | 6 | 40 | .286 |
Rafael Furcal (St. Louis Cardinals) | 2012 | 5 | 32 | .274 |
Alcides Escobar (Kansas City Royals) | 2015 | 2 | 28 | .277 |
Joe Mauer (Minnesota Twins) | 2010 | 3 | 34 | .310 |
Free Pass: Fairies with Lowest Logistic Regression Probability to Be an All-Star Starter
Player (Team) | Year | HR | RBI | AVG | Prob |
Scott Rolen (Cincinnati Reds) | 2016 | 4 | 32 | .256 | 0.8% |
Derek Jeter (New York Yankees) | 2014 | 2 | 21 | .268 | 1.1% |
Dan Uggla (Atlanta Braves) | 2012 | 11 | 43 | .229 | 1.1% |
Yadier Molina (St. Louis Cardinals) | 2015 | 5 | 25 | .278 | 1.3% |
Salvador Perez (Minnesota Twins) | 2010 | 13 | 34 | .263 | 1.8% |
And which teams had the most fairies? The St. Louis Cardinals, with a “whopping” 6 over the past 11 years, so congrats Cardinals fans? And sorry Royals fans, even your brave efforts in 2015 weren’t quite enough…
Nevertheless, there are a couple of limitations to our model worth discussing.
There are various methods to possibly correct for these limitations. For example, we could try to adjust for the quality of the season; if home runs were up across the MLB in 2015, for example, we would want to make each home run count less. We could try standardizing each statistic by year, but this also runs into problems: do we standardize with respect to the entire league for that year? Do we standardize only with respect to players who meet a certain baseline (i.e. do we want to standardize home runs and include a bunch of pinch hitters with lots of at-bats)? The general fear here is overfitting; if we adjust for year, position, park, etc., we may begin to model the noise in our dataset, although admittedly adding one of these may help our model.
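Standardizing a statistic within each season, the first correction mentioned above, is straightforward to sketch. The rows below are made up; the point is that the same HR total earns a different z-score in different offensive environments:

```python
from collections import defaultdict
from statistics import mean, stdev

def standardize_by_year(rows):
    """Z-score each player's HR total within his own season, so a home
    run in a high-offense year counts for less. rows = (year, player, hr)."""
    by_year = defaultdict(list)
    for year, _, hr in rows:
        by_year[year].append(hr)
    out = []
    for year, player, hr in rows:
        mu, sd = mean(by_year[year]), stdev(by_year[year])
        out.append((year, player, (hr - mu) / sd))
    return out

rows = [(2014, "A", 10), (2014, "B", 20), (2014, "C", 30),
        (2015, "D", 20), (2015, "E", 30), (2015, "F", 40)]
# 20 HR is league-average in this toy 2014 (z = 0) but below average in 2015.
```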
It’s my belief, however, that having two exceptionally strong shortstops (or multiple exceptionally strong players in a year) still warrants labeling one as a snub — good hitters shouldn’t be penalized just because they happen to play the same position and same year as a slightly stronger player.
Nonetheless, our model showed that All-Star starters, for the most part, are indeed very good players and while a few players get snubbed every year, fans on the whole select players with strong seasons.
If you have any questions for Shuvom about this article, please feel free to reach out to him at ssadhuka@college.harvard.edu
The off-season is a time in hockey when teams can make large improvements in order to win more games in the following year. One way to do this, and probably the most notable, is through the player movement that occurs during the off-season. However, a second way is to analyze and improve the tactics a team will deploy for the next season. I focused on one area of the game that I think has room for improvement: special teams' strategies.
There is a hypothesis in the NHL that during a short-handed situation, the team with fewer players on the ice should focus primarily on defense, preventing the other team from scoring a goal. In case you are unfamiliar with the rules, a power play occurs in hockey when one team receives a penalty for violating the rules of the game. When this happens, the penalized team has one less player on the ice, so they are considered "shorthanded." With fewer players on the ice, a common strategy is to ice the puck immediately on every possession, rather than attempt to score a goal ("icing" is to send the puck from one's own defensive zone to the other end of the ice). Normally, icing the puck results in a stoppage: the faceoff comes back to the offending team's defensive zone, and that team cannot change players. However, a team killing a penalty is allowed to ice the puck without this consequence.
When talking with the Harvard Hockey coaching staff and former assistant coach Rob Rassey, we had the idea that it might be beneficial to focus on offense, rather than defense, in a shorthanded situation because of the emotional boost that scoring a shorthanded goal would give a team in the middle of a game. It might also create negative sentiment within the team on the power play. While a goal at any strength is technically worth the same on the scoreboard, we might be able to say that shorthanded goals are more "valuable" than power-play or even-strength goals if this emotional effect indeed impacts a team's chances of winning. Therefore, I set out to determine whether there was a significant improvement in a team's probability of winning if they had scored a short-handed goal, relative to a power-play or even-strength goal. If we find a significant result, then not only would this signal that teams should be more aggressive in shorthanded (or penalty-kill) situations, but also that teams on a power play might want to be less aggressive.
In order to answer the question of whether shorthanded goals lead to wins more often than even strength or power play goals do, I retrieved data from hockey-reference.com and hockeyeloratings.com. This gave me a full dataset of each goal scored from the 2005-2006 through the 2017-2018 seasons with the following variables: the score of the game at the time each goal was scored, the final score of the game, and the Elo ratings of each team playing. Elo ratings are simply used to account for the strength of the two teams playing. I also added a column for the current goal differential in the game by subtracting the current away score from the current home score. If a game went to a shootout there were not extra observations in the dataset, but the final score is indicative of which team won. For example: if the Stars beat the Bruins in a shootout 2-1, this dataset would only include two observations for each goal that was scored in regulation.
I decided to use logistic regression for this analysis since my response variable was binary. To check whether there was any validity to this theory, I first ran a simple regression using only the strength of the goal scored (power-play, shorthanded, or even-strength) to predict the winner of the game. This model had shorthanded goals as the reference level, and the coefficients for even-strength and power-play goals were significant and negative. Thus, I knew that more investigation was needed, but at least there was some evidence that there might be a significant difference in a team's winning percentage based on the strength of the goal scored.
Next, I created both a win probability model and a model that calculated the probability that a team would win a game given that it had just scored a goal. The response variable for my first model was whether or not the scoring team won the game. The predictors for this logistic regression included a factor for the strength of the goal (with even-strength as the reference level), the current score of each team, the time and period of the goal, and the Elo ratings of the two teams, plus an interaction term between the two teams' Elo ratings. For the second logistic model, predicting the home team's win probability, the predictors included the current difference in score, the time of the goal, the home and away teams' Elo ratings, an interaction between the time of the goal and the current goal differential, and an interaction between the two teams' Elo ratings.
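As a sketch of how one row of predictors for the scoring-team model might be assembled — the variable names are mine, the actual regression was presumably fit with standard statistical software — note the dummy coding with even-strength as the reference level and the Elo interaction as a simple product:

```python
def goal_features(strength, home_score, away_score, minute, period,
                  elo_home, elo_away):
    """Build one predictor row. Strength is dummy-coded with
    even-strength as the reference level (both dummies zero)."""
    return {
        "shorthanded": 1 if strength == "SH" else 0,
        "power_play":  1 if strength == "PP" else 0,
        "home_score": home_score,
        "away_score": away_score,
        "minute": minute,
        "period": period,
        "elo_home": elo_home,
        "elo_away": elo_away,
        "elo_interaction": elo_home * elo_away,  # interaction term
    }

row = goal_features("SH", 1, 1, 10, 2, 1500, 1500)
```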
For the model that predicted the home team's win probability, the results matched our expectations. For example, with the game tied halfway through between two teams of average Elo rating, the model gave the home team a 50% win probability. For the model that predicted whether the scoring team would win, a team that tied the game 1-1 halfway through against an average opponent won 69% of the time if the goal was shorthanded, 67% of the time if it was even-strength, and only 64% of the time if it was scored on a power play.
Looking at the summary produced by the model for whether or not the scoring team would win the game, we note that the factor for a shorthanded goal is positive, with a p-value of .0479 < .05. We also see that the factor for a power-play goal is negative.
The most important takeaway from the above model is that, holding the other variables constant, there is a positive and significant relationship between scoring a shorthanded goal and winning. This suggests that scoring a shorthanded goal leads to a higher chance of winning the game than an even-strength goal does. Since the coefficient for power-play goals is negative, we also conclude that short-handed goals lead to a significantly higher win probability than power-play goals do, holding other factors constant. The plot below illustrates this relationship:
Looking at this plot, we can see that the red line — the win probability when a team scores a shorthanded goal — is consistently above the lines for the other two goal types. This might be caused by the emotional benefit of scoring a shorthanded goal and the deflating emotional consequence of giving one up. The curves were fit using a smoother.
There are still some confounding variables that could have skewed the results. One is the fact that when a team scores a shorthanded goal, it is still killing a penalty. In theory, being a player short should decrease its winning percentage, since the other team has a higher chance of scoring while on the power play. Contrast this with scoring an even-strength or power-play goal, after which the game is at even strength or, in some circumstances, the scoring team is still on the power play. Another factor is that the model cannot take into account how aggressive a penalty kill or power play will be. Finally, teams that give up shorthanded goals — something that should not happen — might simply be having a bad night: an unfocused team turns the puck over more often and gives the other team too much time and space. And of course, teams that give up more shorthanded goals might be weaker defensively than their Elo rating would suggest. I'm skeptical, though: a team's defensive strength is naturally baked into its Elo rating, and some of the best teams in the league gave up the most shorthanded goals. More investigation may be necessary, but assuming these confounding variables even out, the results of the model may still be valid. To account for them properly, we would need a more comprehensive dataset.
The next question, which must be answered with more granular data, is whether the benefit of scoring more goals on the power play still outweighs the risk of giving up a shorthanded goal, given that power-play goals are less likely to lead to wins. To answer it, a dataset with the formation of each power-play unit and the times it was on the ice should be collected. With player tracking data available in the NHL next year, an answer to this question will be possible.
If you have any questions about this post, feel free to contact Paul at pmarino@college.harvard.edu
One of America's greatest borderline sporting events is almost upon us. On Independence Day, about twenty brave souls will compete in the Nathan's Hot Dog Eating Contest. The event has been held annually since 1978, although one-off events have allegedly taken place since 1916 (including one in 1967 where a man ate 127 hot dogs in an hour). Even though robust data on participants who finished outside the top slots only date back to 2004, Nathan's has kept a record of each winner and the number of hot dogs and buns eaten (often abbreviated as HDB). The data tells a story that is uniquely American yet owes a great deal to a wave of Japanese competitors. Below are 9 charts that explain the Nathan's Hot Dog Eating Contest.
The official Nathan’s website keeps results of their contest dating back to 1972, which, for our purposes, began the “modern era” of hot dog eating. To study the world of hot dog eating, I started with a dataset of Nathan’s eaters from 2002-2015 from former HSACer Daniel Alpert ‘18 and expanded it using the official results to include all eaters in the modern era. As I found out, the early years were unstable and marred by organizational changes. Jason Schechter’s 1972 feat of 14 HDB in 3.5 minutes stood as the record for seven years, even as the time grew to 6.5 minutes and then 10 minutes. The best example of how loose things were run in the contest’s rise to relevance is the 1981 edition. After a dismal 1980 contest that saw a tie for first place with 9 HDB in 10 minutes, The New York Times recounted how Thomas DeBarry “downed 11 hot dogs in five minutes and then rushed off with his family to attend a barbecue.” That the years 1972-1998 had written records of only the winners goes to show that Nathan’s didn’t take itself too seriously at that point.
The contest continued to find its footing before settling on a 12-minute limit in 1988. From there, the record increased at a fairly linear pace. The relative parity of the 1970s and early 1980s began to fade away, though, signaling the potential for eating powerhouses. Between 1988 and 1998, there were 5 repeat winners. International eaters began to enter the contest. After the first female victory in 1984 (by Birgit Felden, a West German judo practitioner who claimed to have never eaten a hot dog before the contest), there was a 13-year wait before another international winner. Hirofumi Nakajima's victories in 1997 and 1998 as well as Kazutoyo Arai's 2000 victory signaled that more was to come. But for now, the record stood at 25.125 HDB, set by Arai in 2000. Unbeknownst to the Coney Island crowds, the competition was about to have a watershed moment.
In the span of 12 minutes, Japanese eater Takeru Kobayashi more than doubled the competition record by eating 50 hot dogs and buns, ushering in a new era of the Nathan’s Hot Dog Eating Contest. Arai ate at a rate of 2.09 HDB/min. Kobayashi ate twice as fast at a rate of 4.17 HDB/min. He claims his secret was largely psychological, but he brought new techniques (such as the Solomon Method) to the competition that outlasted him. After 6 straight titles, Kobayashi won a share of the crown in 2008 but never won again. He was infamously arrested after attempting to go on stage in 2010 and had a contract dispute with Major League Eating that continues to this day.
The man who carried on his legacy is fascinating in his own right, but it is also worth questioning whether Kobayashi's success trickled down to average competitors. Luckily, Nathan's has increasing amounts of data available for the Kobayashi years and beyond, allowing me to compare middle-of-the-pack eaters with previous champions. I decided to average the results of each year and weight each point in the graph below by the number of eaters in my dataset. There is a clear jump due to Kobayashi breaking out in 2001, and while there seems to be a drop afterwards, this can be attributed to an increase in contest data from Nathan's.
The graph makes two things clear: Nathan’s data has improved substantially in the last ten years, and the average HDB has followed the trend started in the 1990s. But since the number of recorded competitors has grown, this trend means that even the average eater today would still beat out the champions of the pre-Kobayashi era. Since there is only data on winners until 1999, this difference in ability across eras is even more pronounced. Similar to how medal-winning swimmers in the early Olympics had times that would not even allow them to qualify today, the champions of the 1970s are similarly flat-footed compared to today’s eaters. Some of this is due to training. Competitors take the contest much more seriously today, as the mere existence of Major League Eating demonstrates. Some of this may also be due to the increased prestige of the event, now drawing eaters from all over the world.
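The per-year averaging behind the graph amounts to the following sketch. The 1999 and 2004 fields below are illustrative stand-ins, not the full recorded results:

```python
def yearly_average_hdb(results):
    """Average HDB for each year, plus the field size, which the
    scatter plot uses to weight each point."""
    return {year: (sum(hdbs) / len(hdbs), len(hdbs))
            for year, hdbs in results.items()}

# Hypothetical partial fields: a winner-only year vs. a multi-eater year.
results = {1999: [20.25],
           2004: [53.5, 38.0, 32.0, 25.5]}
print(yearly_average_hdb(results))  # → {1999: (20.25, 1), 2004: (37.25, 4)}
```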
That said, I wanted to confirm this trend was evident even under different rule changes. Although the time was actually reduced to 10 minutes in 2008, I made the same plot as above but using HDB/min on the y-axis. The first two contests have abnormally high HDB/min since the contest was 3.5 minutes, but that number began to dip as the contest time increased. That could point to the event being more of a sprint in its early incarnations before transforming into the endurance test it is today. After timing stabilized in 1988, it appears that the average eaters of today beat out the champions of yesterday.
That slower pre-Kobayashi growth trend is still evident in two areas of today's competition. Nathan's introduced a separate women's competition in 2011. After Sonya Thomas took home the first three titles, Miki Sudo has reigned supreme over the field since 2014, as she gears up to claim her sixth consecutive victory. While Birgit Felden was the first female victor in the modern era in 1984, the women's competition tends to see a lower HDB count than the men's. But this count is still high relative to the pre-Kobayashi eaters. Indeed, the women's champions today seem to be where we would expect the overall competition to have ended up by now had Kobayashi not revolutionized the event in 2001. I chose to analyze champions only, since they provide the only reliable data pre-1999, but I would expect the trend to be similar when judging the entire field.
Next, I took the pre-Kobayashi champions and graphed their HDB with the eaters who finished in places 3-5 since 2001. The trend curves upward only a bit sharper than with the average eater, which means that outliers in either direction — good or terrible — skew the results. That the fifth-best eater today is eating around the average in a field of 20 is evidence that the ridiculous surge in eating ability may be concentrated at the very top, while the average eaters are still improving.
Today, nobody is pushing the record more than Joey Chestnut. After beating the record by more than 12 HDB in 2007, he has gone on to win 11 of the last 12 editions of Nathan’s Hot Dog Eating Contest. He has broken his own record 5 times and is the heavy favorite heading into the 2019 contest.
The easiest way to visualize his dominance is to plot the 20 all-time best performances at the Nathan’s Hot Dog Eating Contest. Kobayashi has 3 entries on the list. Carmen Cincotti and Matt Stonie (who broke Chestnut’s streak in 2015) each have 2, and Pat Bertoletti has 1. The other 12 belong to Chestnut.
His dominance still stands when measured in HDB/min. In fact, the only difference is that Kobayashi’s 2007 second-place finish (63 HDB) falls off since the competition was still 12 minutes at the time. Tim “Eater X” Janus’ fourth-place finish in 2009 (53 HDB) enters, which points to a spectacular feat.
The top four finishers in 2009 were among the 20 greatest performances of all time at Nathan’s Hot Dog Eating Contest. Even then, the gap from the top was still enormously high. Janus finished 15 HDB away from Chestnut’s record-breaking 68. It was Kobayashi’s last year at the contest, after which he would begin his contract dispute with Major League Eating. It was the highest HDB for Kobayashi, Bertoletti, and Janus. Chestnut, on the other hand, has beaten 68 HDB 4 times since then. 2009 was, for all intents and purposes, the greatest Nathan’s Hot Dog Eating Contest of all time.
That is not to take anything away from this year’s upcoming contest. As the number of hot dogs eaten continues to increase, perhaps Chestnut will find new challengers awaiting him on the Coney Island boardwalk. Or maybe the world will get to witness a true master of the craft on display with another dominant performance. Regardless, Nathan’s Hot Dog Eating Contest is a semi-sporting event with a mostly-storied history that started to tell a great story once someone had the thought to start writing it down. And that story will continue this Independence Day on the Coney Island boardwalk, as a handful of men and women attempt to push their stomachs to the limit in front of cheering crowds.
If you have any questions for Jack about this article, please feel free to reach out to him at jackschroeder@college.harvard.edu
For the past few years, the NBA has seen a fair number of teams tanking; that is, teams lowering their level of play and purposely losing games in the hopes of receiving a higher draft pick through the lottery (the worst teams enter a lottery to determine the exact draft order). The NBA draft is traditionally viewed as more top-heavy than other leagues’: LeBron James, Kevin Durant, Tim Duncan, Anthony Davis, Kobe Bryant, and Allen Iverson were all selected through the lottery, with all but Kobe going in the top two picks. As such, teams like the Philadelphia 76ers accumulated poor records to receive higher draft picks, drafting stars like Joel Embiid and Ben Simmons in the process.
In an effort to reduce tanking, the league rolled out a new lottery system in which the very worst teams in the standings were given lower odds of receiving a top-4 pick. The first lottery under the new system awarded the first pick to the 9th-seeded New Orleans Pelicans, a result hailed as a win for the league and a loss for tanking.
Figure 1: Odds Under New Draft Lottery vs. Old (from ESPN)
Compared to the old system, each of the fourteen lottery teams now has a chance at drawing a top-4 pick, lowering the probability that the very worst teams get one. So, given this new system, what is the expected value added to a team by tanking? How good of a player can the worst team expect to receive, compared to the old draft lottery system?
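The expected value of a lottery seed is just each pick’s expected value weighted by that seed’s odds of landing the pick. A minimal Python sketch of the idea, using a made-up three-pick lottery (the odds and pick values below are purely illustrative placeholders, not the article’s actual inputs, which come from the fourteen-team odds tables in Figure 1):

```python
def expected_value(pick_odds, pick_values):
    """E[win shares | seed] = sum over picks of P(pick | seed) * E[WS | pick]."""
    return sum(p * v for p, v in zip(pick_odds, pick_values))

# Hypothetical seed in a three-pick lottery: a 30% shot at pick 1,
# 45% at pick 2, 25% at pick 3, with made-up expected career win
# shares for each pick.
odds = [0.30, 0.45, 0.25]
values = [80.0, 65.0, 50.0]

print(expected_value(odds, values))  # 0.30*80 + 0.45*65 + 0.25*50 = 65.75
```

Running the same weighted sum for each seed under the old and new odds tables yields the seed-by-seed comparison below.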
HSAC has previously analyzed the expected career win shares by pick and seed; I conduct a similar analysis here, covering all drafts from 1985 (the first year of the lottery system) through 2011. 2011 was chosen as the end date because it allows enough time for an “average” NBA player to reach his peak (players drafted more recently have not recorded enough data for us to know how good they will be). A few additional tweaks have been made from the previous HSAC article:
Figure 2: Histogram of Career Win Shares for Players Drafted 11th Overall, 1985-2011
Win shares tend to be right-skewed for each draft pick, including the 11th pick shown above. Below is a table summarizing the results when we simulate each draftee’s value from a gamma distribution:
Figure 3: Table of Draft Value Simulation Results
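The simulation step can be sketched as follows: fit a gamma distribution to a draft slot’s historical win-share totals and draw simulated careers from it. The sketch below uses a simple method-of-moments fit, which is one reasonable choice but may differ from the article’s actual fitting procedure:

```python
import random

def simulate_career_ws(ws_history, n_sims=10_000, seed=0):
    """Simulate career win shares for one draft slot by fitting a gamma
    distribution to that slot's historical totals (method of moments:
    shape = mean^2 / var, scale = var / mean), then sampling from it."""
    n = len(ws_history)
    mean = sum(ws_history) / n
    var = sum((x - mean) ** 2 for x in ws_history) / n
    shape = mean ** 2 / var   # gamma shape parameter k
    scale = var / mean        # gamma scale parameter theta
    rng = random.Random(seed)
    # random.gammavariate takes (shape, scale); mean of draws = shape * scale
    return [rng.gammavariate(shape, scale) for _ in range(n_sims)]
```

A gamma distribution is a natural choice here because it is right-skewed and supported on positive values, matching the histogram above.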
And here are just the expected win shares for each draft pick under the new and old systems, shown as a line plot:
Figure 4: Trends in Expected Career Win Shares by Draft Seed, New vs Old
We see that expected career win shares for the first four seeds have decreased under the new system. Simultaneously, the value of seeds six through nine has noticeably increased. After the 9th seed, the draft pick values under the two systems begin to converge.
The uptick in win shares at the nine seed is due to some exceptionally strong players being selected ninth overall (and the ninth seed has the best shot at the ninth pick): Tracy McGrady, Dirk Nowitzki, Shawn Marion, Amar’e Stoudemire, Gordon Hayward, Kemba Walker, and DeMar DeRozan were all drafted 9th overall. Compare this to the best players selected eighth overall — Jamal Crawford, Andre Miller, and Rudy Gay — and it’s clear that the ninth pick is a historical anomaly.
From the table, we also see large standard deviations in the career win shares. This reflects uncertainty from two sources: first, the lottery itself, as being the worst team doesn’t guarantee receiving the top pick; second, once a team lands a pick, there is variance in that pick’s career win shares. For example, even if we know a team got the fifth overall pick, there is still a lot of uncertainty in how good the drafted player will turn out to be.
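These two layers of uncertainty combine through the law of total variance: the overall variance of a seed’s win shares equals the average within-pick variance plus the variance of the picks’ expected values. A quick check with made-up numbers (every value below is hypothetical, chosen only to illustrate the decomposition):

```python
# Law of total variance for a seed's win shares:
#   Var(WS) = E[Var(WS | pick)] + Var(E[WS | pick])
# Toy two-pick lottery with made-up means and variances per pick.

pick_prob = [0.6, 0.4]     # P(pick | seed), hypothetical
pick_mean = [40.0, 25.0]   # E[WS | pick], hypothetical
pick_var = [400.0, 225.0]  # Var(WS | pick), hypothetical

# Average variance within a pick (player uncertainty)
within = sum(p * v for p, v in zip(pick_prob, pick_var))               # 330.0

# Variance across picks' expected values (lottery uncertainty)
e_mean = sum(p * m for p, m in zip(pick_prob, pick_mean))              # 34.0
between = sum(p * m ** 2 for p, m in zip(pick_prob, pick_mean)) - e_mean ** 2  # 54.0

total_var = within + between  # 384.0: both layers combined
```

Here most of the spread comes from player uncertainty rather than the lottery itself, which is consistent with the large standard deviations in the table.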
Moreover, even when we simulate other value-added metrics, such as BPM or VORP, we get similar results — the new lottery system hurts the top four seeds, helps the middle seeds, and has roughly no effect on the bottom seeds.
Figure 5: Expected VORP by Draft Seed Under the New and Old Systems
Overall, however, the differences are marginal. The second seed dropped from 63.07 expected career win shares to 53.87, roughly the difference between Richard Hamilton, a three-time All-Star, and Caron Butler, a two-time All-Star. Moreover, the standard deviation and skewness in each seed’s career win shares partly overpower any change in expected win shares. Considering that the largest difference in expected career win shares is only about 10 win shares, teams still have strong incentives to tank, even if the restructured NBA draft lottery makes it less likely for them to receive the very best picks.
If you have questions for Shuvom about this article, please feel free to reach out to him at ssadhuka@college.harvard.edu.
With the Boston Bruins set to take on the St. Louis Blues in the Stanley Cup Finals tonight, Boston is on the verge of becoming the first city since Detroit in 1935 to hold at least three of the four major North American professional sports titles (sorry, MLS). Earlier this postseason, I wrote an article detailing Boston’s chances of achieving the “Boston Slam” and holding all four titles at the same time, then tracked those chances on the HSAC Twitter page throughout the two conference semifinal series. Unfortunately, the chances never got much higher than 1%, as the Celtics flamed out in the Eastern Conference semifinals to the Milwaukee Bucks in 5 games.
However, the Bruins have just kept on winning, taking their last seven games and completing an impressive sweep of the Carolina Hurricanes in the Eastern Conference Finals to claim the Prince of Wales Trophy. During the excruciatingly long gap between the Conference Finals and the Stanley Cup (11 days for the Bruins, 6 for the St. Louis Blues), I was asked on Twitter what the chances were of one city winning three of the four major titles in the same year.
I immediately realized that this was an interesting combinatorics problem that did not require super sophisticated math, and decided to set out on finding an answer.
There were some interesting things to think about with this problem. First, there are effectively five ways for a city to hold three out of four championships: it can win all four in the same year, or it can win exactly three out of four, leaving one sport out (four combinations). Second, some cities have teams in exactly three of the four major sports (like Atlanta, Pittsburgh or Houston) and thus can only hold at least three out of four if all of their teams win. Meanwhile, some cities have exactly one team in each of the four sports (like Boston, Philadelphia or Detroit) and others have more than one team in at least one sport (like New York, Los Angeles or Chicago). Some cities (like Seattle, St. Louis or Baltimore) do not have teams in at least three sports and thus are ineligible for the feat altogether, so each city has a different chance of attaining it. Finally, it is impossible for more than one city to achieve this in the same year (that would require at least six championships!), so it is sufficient to sum each individual city’s probability of winning at least three out of four.
In order to compute each city’s chances of winning at least three out of four titles, I made a simplifying assumption that within a given league, each team’s probability of winning the championship is uniform. Thus, the Patriots are assumed to have a 1/32 chance of winning the Super Bowl, while the New York Yankees have a 1/30 chance of winning the World Series. This is not a totally reasonable assumption, as it ignores that teams in “big markets” can spend more money and thus have a higher probability of winning a championship than teams in smaller markets. However, in the absence of any formal modeling of this effect, a uniform distribution will have to do.
I wrote the following function in R to determine the chances of a city winning at least three out of four, given the number of teams that city has in each individual league, based on the five potential combinations described above.
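The R function itself did not survive in this copy of the post, so here is a Python sketch of the computation it describes: enumerate every win/lose outcome across the leagues and sum the probabilities of the outcomes with at least three titles, under the uniform-probability assumption (league sizes as of 2019):

```python
from itertools import product

# League sizes at the time of writing, in the order (NHL, NBA, NFL, MLB)
LEAGUE_SIZES = (31, 30, 32, 30)

def prob_at_least_three(team_counts, league_sizes=LEAGUE_SIZES):
    """P(a city holds >= 3 titles), assuming each team in a league has a
    uniform, independent chance of winning that league's championship."""
    p = [n / size for n, size in zip(team_counts, league_sizes)]
    total = 0.0
    # Enumerate every combination of league outcomes (1 = city wins that
    # league's title) and keep the ones with at least three titles.
    for outcome in product((0, 1), repeat=len(p)):
        if sum(outcome) >= 3:
            prob = 1.0
            for won, pi in zip(outcome, p):
                prob *= pi if won else 1.0 - pi
            total += prob
    return total

# Boston: one team in each of the four leagues
boston = prob_at_least_three((1, 1, 1, 1))
```

Because the function enumerates outcomes for however many leagues it is given, the same code also handles the three-league 1935 scenario discussed below, e.g. `prob_at_least_three((1, 1, 1), (8, 9, 16))` for Detroit.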
For example, to compute the probability of New York doing this, you would feed (3,2,2,2) into the function because New York has 3 NHL teams (yes, the Devils count), 2 NBA teams (yes, the Knicks also count even though New Yorkers would prefer they didn’t exist), 2 NFL teams (technically, although both play in New Jersey) and 2 MLB teams. For Boston, you would feed in (1,1,1,1), and for Atlanta you would feed in (0,1,1,1) since Atlanta does not have an NHL team (#RIPThrashers).
In this analysis, we used the 20 cities that had a team in at least three of the four major professional sports leagues. Some subjective judgements were made in terms of what counted and what didn’t. For example, I decided that the Green Bay Packers were a Milwaukee team despite being a two-hour drive away, and I considered all teams in the San Francisco Bay Area to be from the same city, thus grouping together San Francisco, Oakland and San Jose. On the flip side, I decided that teams from Nashville/Memphis and Charlotte/Raleigh should not be combined, so none of the teams from those four cities were considered in this analysis.
After computing the above function for each city, I found each individual city’s probabilities of winning at least three out of four.
When we sum up the individual probabilities, we get 0.43%. Thus, ignoring the “big market” effect and assuming all franchises in a given league have a uniform probability of winning the championship, we would expect one city to win at least three out of four titles about once every 227 years.
As a note, these calculations will be slightly altered when the new Seattle NHL team enters the league in 2021/22, giving Seattle 3 teams and adding to the denominator for the NHL calculations.
It is also interesting to calculate the probability of Detroit achieving the same feat in 1935. Back then, there was no NBA, the MLB had 16 teams, the NHL had 8 (the New York Americans and Montreal Maroons had yet to fold to leave the traditional Original Six), and the NFL (in the pre-Super Bowl era) had 9. Four cities (New York, Boston, Chicago and Detroit) had at least one team in all three leagues. New York was boosted by 3 MLB teams (the Yankees, New York Giants and Brooklyn Dodgers), 2 NFL teams (New York Giants and Brooklyn Dodgers) and 2 NHL teams (Rangers and Americans), while Chicago was boosted by having two teams in both the NFL (Bears and Cardinals) and MLB (Cubs and White Sox), and Boston had 2 MLB teams (Red Sox and Braves). Thus, Detroit’s probability of winning all three was 0.08%, Boston’s was 0.17%, Chicago’s was 0.35% and New York’s was 1.04%. Summing all four, the probability of some city winning three championships in 1935 was 1.65%, and we would expect it to happen once every 60 years given the league compositions of 1935.
Another interesting question is when the probability of a city winning at least three out of four peaked, and how high that probability was. This was likely in 1947/48, during the 2nd year of the NBA. The NBA had 8 teams, the NHL 6, the MLB 16 and the NFL 10. Six cities had the chance to win three titles, and Chicago (boosted by having two teams in both the MLB and NFL) had the highest individual probability at 1.25%. The overall probability of one city winning at least three out of four was 3.37%, meaning we would expect this to happen about once every 29 years.
It would be interesting to control for the big market effect to redo these calculations. If you have any ideas for how to go about this, or have any questions/comments about the article, please feel free to reach out to me on Twitter @andrew_puopolo.
In June 2008, Boston stood atop the sports world; the Celtics had just won their first championship since 1986 with their new “Big 3”, the Red Sox were reigning World Series champs with David Ortiz and Manny Ramirez, one of the best 3-4 punches in MLB history, and the Patriots were one drive away from becoming the first team to ever go 19-0. The Bruins, on the other hand, were not quite at the same heights as the other Boston teams, leading to jokes around the country thanking the Bruins for giving other cities a chance.
Despite all this dominance, Boston held only 2 of the 4 major professional sports league championships that year. Only once has a city controlled three of the four major championships at the same time: in 1935, the city of Detroit had the NFL Champion Lions, the World Series winning Tigers and the Stanley Cup Champion Red Wings. No city has ever held all four titles at once. The closest a city has come to true dominance was in 2002, when Los Angeles had the winners of the NBA (Lakers) and MLB (Angels), as well as of the smaller MLS (Galaxy) and WNBA (Sparks).
Fast forward to April 2019, and Boston once again controls two of the four major championships. In October, the Red Sox saw off the Los Angeles Dodgers in five games to win their fourth World Series title since breaking the curse in 2004. In February, the Patriots defeated the Los Angeles Rams 13-3 to win their sixth Super Bowl since 2002 and continue their awe-inspiring dynasty.
Currently, Boston’s other two major professional sports teams are in the midst of playoff runs. The Bruins defeated the Toronto Maple Leafs in Game 7 of a very tense opening round series, and fortunately do not have to face the record-setting Tampa Bay Lightning in the second round, as the Lightning were swept by the Columbus Blue Jackets. The Celtics are coming off a sweep of the Indiana Pacers, and face the Greek Freak and the Milwaukee Bucks in Round 2. It is very plausible that Boston could become the first city since 1935 to control three of the four major championships.
In 2015, on the back of the famous Seahawks-Patriots Super Bowl, former HSAC Co-President Harrison Chase dubbed Boston “The Most Successful Sports City Of the 21st Century”. However, if Boston were to win one (or both) of the championships currently up for grabs, it would earn the right to call itself “TitleTown.”
We wanted to take a look at this possibility and quantify the probability of Boston holding three or four major professional sports championships at the same time this year. To do this, we simulated the remainder of the NBA and NHL playoffs 100,000 times using a Glicko model fit separately for the NBA and NHL, which has been used to generate the predictions showcased here and here. The ratings for each team have been updated to reflect the game results of the first round; thus, we now consider the Columbus Blue Jackets a stronger team than we did in our previous simulations. If you are interested in the technical details of this Glicko model, please reach out to me on Twitter @andrew_puopolo or by email at andrewpuopolo@college.harvard.edu.
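The series-level part of such a simulation can be sketched as a Monte Carlo over best-of-7 outcomes. The sketch below assumes a single fixed per-game win probability; the actual model derives per-game probabilities from the Glicko ratings of the specific matchup and updates them as results come in:

```python
import random

def simulate_series(p_game, n_sims=100_000, seed=1):
    """Estimate P(winning a best-of-7 series) by Monte Carlo, given a
    fixed per-game win probability. A simplification of the full model,
    which varies p_game by opponent and updates ratings between games."""
    rng = random.Random(seed)
    series_won = 0
    for _ in range(n_sims):
        wins = losses = 0
        # Play games until one side reaches 4 wins
        while wins < 4 and losses < 4:
            if rng.random() < p_game:
                wins += 1
            else:
                losses += 1
        if wins == 4:
            series_won += 1
    return series_won / n_sims
```

Under this simplification, a 60% per-game favorite wins a best-of-7 series about 71% of the time (the exact value is p⁴(1 + 4q + 10q² + 20q³) ≈ 0.710 with q = 0.4), which is why a team only needs to be a modest game-level favorite in each round to rack up series wins.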
First, we looked at each team’s current probability of reaching each of the three subsequent rounds of the playoffs:
The distributions for both teams are quite different. In each series, the Bruins are considered favorites and the Celtics are considered underdogs.
Next, we will take a look at the conditional probability of the Bruins and Celtics winning the Championship given they advance to each round of the playoffs.
This table tells us, for example, that the Celtics would have a 28.4% chance of winning the NBA Finals if they won the Eastern Conference. The first row of the table matches the last row of the previous table, as both measure current title odds. These probabilities are not exact, as they depend on the opponents drawn in subsequent rounds. For example, if the Celtics were to beat the Bucks, their title odds would likely be greater than 10% if their Eastern Conference Finals opponent were the Philadelphia 76ers, and less than 10% if it were the Toronto Raptors.
Finally, we will take a look at Boston’s path to potential dominance, and how the probability of Boston holding 3 or 4 championships at the same time changes as each team progresses through the playoffs. Since the NHL playoffs are generally a week ahead of the NBA playoffs, each Bruins series is likely to wrap up before the Celtics series, and we will consider six distinct “steps” between Boston and history, namely each series in each sport. If either team is eliminated, then the probabilities of Boston winning a third championship are the same as the relevant probability in the previous chart. Each Celtics probability assumes that the Bruins have already won their series in the current round.
These probabilities represent Boston’s chances of attaining Titletown status after each “step” of the process.
Overall, the chances of Boston taking home a championship in either basketball or hockey this year are still relatively low, as the Celtics are unlikely to raise Banner 18. However, this storyline will be an interesting one to follow if both Boston teams get past their second-round opponents.
If you have any questions or comments, please reach out to Andrew on Twitter @andrew_puopolo or by email at andrewpuopolo@college.harvard.edu.