A History of MLB Parity

By David Roher

[Graph: standard deviation of MLB team win totals by year, 1901–present]

A league is considered to have parity if there is not a wide difference in talent level between the best and worst teams. An easy way to measure that is to find the standard deviation of win totals among the teams in the league for any given year. Thanks to the amazing Baseball Reference, I was able to compile the standard deviation for every year from 1901 on (a short sketch of the computation appears at the end of the post). Basically, the way to interpret the graph is that the smaller the standard deviation, the greater the parity in that year. There's a lot of interesting stuff to find in this graph, and I hope that some of you will point it out in the comments. Here are some of my thoughts:

– There is a clear upward trend in parity (downward trend in SD) over the course of MLB history. Parity was lowest in the early part of the dead ball era, but it increased throughout that era and into the first part of the 1920s. After a level period of about 20 years, there was a steep upward parity trend until the 1980s. From that point on, parity decreased until the turn of the 21st century. We currently appear to be on another upswing in parity (though not as steep a one as the best-fit curve would indicate).

– My best guess for the cause of this long-run increase in parity is the rise in overall talent level. There's no question that players today are stronger and more talented than their earlier counterparts. This isn't to diminish those who succeeded before the current era at all – they didn't get the same benefits of improvements to equipment, training and, uh, nutrition that today's guys do. But when everyone gets more talented on average, the playing field gets leveled. Think about college sports compared to professional leagues – in college, a strong or weak strength of schedule can make or break your season, while that is true only to a much lesser extent in the pros.

– It's interesting that parity stopped its mid-century increase right around the beginning of the free-agent era, though it is unclear what effect, if any, free agency has had. Also worth noting is that the decrease in parity stopped right around 2001, and that revenue sharing was implemented one year later. But there's not enough evidence here to tell whether those two things are related.

What do people think? I would hesitate to look at specific years, as I think a lot of that is noise. But I think the trends might be interesting. Additionally, how else can we measure parity?
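For anyone who wants to reproduce the measurement, here is a minimal sketch of the computation in Python. The `wins_by_year` mapping is a hypothetical stand-in for data compiled from Baseball Reference; only the final dictionary comprehension is the method described above.

```python
import statistics

# Hypothetical input: season -> every team's win total that year,
# compiled from Baseball Reference's yearly standings pages.
wins_by_year: dict[int, list[int]] = {
    # 1901: [...], 1902: [...], ..., 2009: [...]
}

def parity_by_year(wins):
    """Standard deviation of team win totals for each season.
    A smaller SD means more parity that year."""
    return {year: statistics.stdev(team_wins) for year, team_wins in wins.items()}

sd_by_year = parity_by_year(wins_by_year)
```

Plotting the resulting values by year (with a smoothing curve laid over them) should reproduce the shape of the graph above.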


17 Comments

  • #1. Be careful with curve fitting (6 degrees is pretty high), and some of the bumps in your curve appear to be artifacts of the fit.
    #2. You could try comparing the top and bottom fourths of the league. It's a common statistical technique.
    #3. Is this standardized to 162 games? If not, the earlier seasons would seem to have even less parity; if so, some of the additional variance could be a result of the smaller sample size.

    • Hey Alex, thanks for reading.

      1. Thanks, I was a little bit unsure – I kind of wanted the curve to act as a smoothed out moving average.

      2. I’ll take a look. But I think that parity plays a really big role in the two middle quartiles in baseball, since they contain all of the teams on the cusp of being competitive.

      3. Yes, it does have a standardization.
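As an aside, one common way to do that kind of standardization (the post doesn't say exactly which method it uses, so treat this as an illustration) is to rescale each team's wins to a 162-game pace before taking the standard deviation:

```python
import statistics

def sd_standardized_to_162(team_wins, games_per_team):
    """SD of win totals after scaling each team to a 162-game pace,
    so that 154-game and 162-game seasons are on the same scale."""
    scaled = [wins * 162 / games_per_team for wins in team_wins]
    return statistics.stdev(scaled)
```

Rescaling fixes the units, but not the sample-size point raised in the comment: with fewer games, win totals are inherently noisier, which inflates the SD somewhat even after scaling.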

    • From 1981-1986, six of the seven teams in the American League East each won the division title…it was excellent parity except for the Cleveland Indians.

      Your measurement ignores which team wins and which team loses.

      The point is that there doesn’t need to be parity within a season.

      The lack of parity in MLB is evident in which teams consistently go to the playoffs and World Series, and which teams consistently lose. There is little point in caring about baseball when you already know which teams are going to win. It's like watching a rerun of a lottery drawing.

  • This seems like a good way to measure how close teams are in quality during the same season.

    Sometimes when people talk about parity they're talking about a different concept: how feasible it is for a losing franchise to 'turn it around' and become a winning franchise. Measuring this concept requires some sort of comparison between years – perhaps seeing how a team's record correlates with its record 5 years down the road. The stronger the correlation, the less hope a bad team can have for the future.
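A minimal sketch of that between-season measure, assuming a hypothetical {(team, year): winning percentage} mapping (the data structure and function name are illustrative, not something from the post):

```python
import statistics

def record_persistence(win_pct, lag=5):
    """Pearson correlation between a team's winning percentage in one season
    and its winning percentage `lag` seasons later. A correlation near 1
    means bad teams tend to stay bad; near 0 means fortunes turn over quickly."""
    pairs = [(pct, win_pct[(team, year + lag)])
             for (team, year), pct in win_pct.items()
             if (team, year + lag) in win_pct]
    earlier, later = zip(*pairs)
    return statistics.correlation(earlier, later)  # Python 3.10+
```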

  • What about the issue of expansion? Did you standardize for the number of teams? In the earlier days of MLB, when there appears to be less parity, there were also fewer teams, which makes the standard deviation a noisier estimate than it is for the 30-team league we have today.

  • Nice graph. When you look at team competitiveness, you are really looking at the combined effect of two factors: 1) distribution of talent among players, and 2) how talent is organized by team. You may want to look at these separately, by charting SD of hitter and pitcher performance over time.

    The variance in player talent definitely declines over the history of the game, as overall quality rises. The pace of that decline probably varies depending on whether the talent pool is expanding particularly fast or slow – e.g. racial integration in the 1960s/70s probably had a major equalizing impact. Major changes in how the game is played can stop or reverse the general downward trend in SD, creating more variance – Ruth's discovery of the home run in the 1920s, for example (or the addition of the 3-point shot in the NBA). And we would expect the rate of compression to slow down eventually, as player talent approaches some maximum possible level – we may be seeing that after about 1980. Dan Fox had one or two great articles on this at Baseball Prospectus (sub. required), showing the decline in SD for hitters over time.

    How equally talent is distributed among teams, though, is a separate issue. Maybe free agency created more inequality, at least through 2000. Then again, maybe the late-1990s Yankees were just a fluke that creates this illusion. I think this is harder to measure, but maybe there's a way to adjust the SD in team win% for the underlying level of talent inequality, to measure "organizational competitiveness." It would be interesting to try.

  • I'd argue that looking at the SD in wins from 1900 to now isn't the best way of doing things. While I agree with the general findings of your graph, there have been extremely large changes in the number of games per season, as well as in the number of teams, that could result in more variable SDs in earlier eras. Why not use the Noll-Scully measure (sketched briefly after this comment)? This sort of thing has been done a LOT in the sports economics literature, though it's always interesting to look at and think about new ways to measure it.

    If you're truly interested in competitiveness, I would recommend reading the abundant literature on this exact topic by Rodney Fort and Young Hoon Lee. You can get most of their papers here:

    http://www.rodneyfort.com/Academic/Academic.html

    A full time-series treatment of competitive balance is there, as well as work relating it to league policies (there doesn't seem to be much of a relationship) and to league attendance.
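For readers who haven't run into it, the Noll-Scully measure mentioned above compares the observed spread in winning percentage to the spread an ideal league of evenly matched, coin-flip teams would show. A minimal sketch (function name and inputs are illustrative):

```python
import math
import statistics

def noll_scully(win_pcts, games_per_team):
    """Ratio of the actual SD of winning percentage to the 'idealized' SD
    of a league of perfectly even teams, 0.5 / sqrt(games per team).
    A ratio of 1.0 means the spread in records is what coin-flip luck alone
    would produce; larger values mean less competitive balance. Because the
    idealized SD already accounts for schedule length, the ratio is
    comparable across eras with different numbers of games."""
    return statistics.stdev(win_pcts) / (0.5 / math.sqrt(games_per_team))
```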
