By Adam Gilfix

The phrase “pitching [and defense] wins championships” is tossed around a lot, especially in the MLB around this time of year. Clearly, some teams (read: Blue Jays, Royals, and Rangers) took that to heart at the trade deadline, bartering for some of the best hurlers in the league in order to bolster their championship odds. So far, their bets appear to be paying off (less so for Kansas City and Cueto)… but only one of those three American League teams can make the World Series in late October. According to Baseball Prospectus, the Blue Jays’ playoff and World Series chances have risen from 37.2% and 3.3%, respectively, the day before the David Price trade to 99.3% and 15.6% today:

Setting titles aside, I wanted to determine* if pitching and defense really are more important than hitting in terms of winning games. Presumably, if a team can win many more games, it is superior and has a better chance at winning the Fall Classic. I went about this in a very simple way: regressing winning percentage against runs allowed per game (RA/G) and runs scored per game (RS/G) for each team in the MLB across multiple time frames in baseball’s storied past. Using percentage and per-game metrics is important because season length has varied over the last century.
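The regression setup can be sketched in a few lines. To keep this self-contained, the eight “teams” below are synthetic: their winning percentages are generated exactly from the 2015 coefficients this post reports (-0.105 for RA/G, +0.068 for RS/G), so the least-squares fit recovers those values rather than analyzing the real Baseball Reference data:

```python
import numpy as np

# Hypothetical per-game figures for eight teams (illustrative, not real)
ra_g = np.array([3.2, 4.1, 3.9, 4.5, 4.3, 3.5, 4.0, 4.6])  # runs allowed per game
rs_g = np.array([4.5, 3.9, 4.2, 4.8, 3.6, 4.0, 4.4, 3.8])  # runs scored per game

# Synthetic winning percentages built from the 2015 coefficients discussed
# in the post, centered at a 4.0 run environment and a .500 record
win_pct = 0.5 - 0.105 * (ra_g - 4.0) + 0.068 * (rs_g - 4.0)

# Ordinary least squares: win_pct ~ intercept + b_ra * RA/G + b_rs * RS/G
X = np.column_stack([np.ones_like(ra_g), ra_g, rs_g])
(intercept, b_ra, b_rs), *_ = np.linalg.lstsq(X, win_pct, rcond=None)
print(f"RA/G coefficient: {b_ra:.3f}, RS/G coefficient: {b_rs:.3f}")
```

With real season data the fit would of course not be exact, but the signs and relative magnitudes of the two coefficients are the quantities of interest.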

First and foremost, all of the t-values above are well inside their respective critical regions, yielding p-values < 0.001; for each time period, both of our independent variables (RA/G and RS/G) are highly significant predictors of the dependent variable (winning percentage). As expected, RA/G has a negative coefficient (the fewer runs allowed, the more likely a team is to win), and RS/G has a positive coefficient. These coefficients are interpreted as follows (using 2015 as an example): for every additional run allowed per game, all else equal, a team is expected to have its winning percentage decrease by 0.105; for each extra run scored per game, *ceteris paribus*, a team’s winning percentage should increase by 0.068.

The fact that the magnitude of the RA/G coefficient is greater than that of the RS/G coefficient – as is the magnitude of the t-value for RA/G compared to RS/G – signifies that allowing runs impacts winning percentage more than scoring them does. Thus, pitching and defense are more important than offense, though luck is always a factor as well – as seen in the fact that the adjusted R^{2} values aren’t quite 1. As you can see above, pitching appears to have become more important over time: the difference in magnitude between the RA/G and RS/G coefficients has grown in the Wildcard Era, with 2015 a prime example of the recent increased significance of pitching/defense.

Given this more substantial impact of pitching and defense on winning, we can turn to the data to find the best teams in baseball history on the non-offensive side of the game. I chose to continue using runs allowed per game as the main metric of team strength because other indicators (ERA, FIP, etc.) are tailored to measure slightly more specific things, stripping away or limiting certain factors, like fielding. Runs allowed per game is an all-encompassing statistic that gives a good sense of a team’s pitching (both starters and relievers) as well as its defense. Since I have data from Baseball Reference on each team’s RA/G for every year of Major League Baseball, dating back to 1871, finding the best defensive (referring to both pitching and fielding) teams in history was as simple as sorting. Moreover, I created “RA/G-” by dividing each team’s RA/G by the league average for that year and then multiplying by 100 (so 100 is average and below 100 is better than average), thereby adjusting for the run-scoring environment. I sorted on this new measure as well.
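The RA/G- adjustment is a one-liner per team. As a minimal sketch (the four team values here are illustrative, and for simplicity the “league average” is taken over just these four teams rather than a full 30-team league):

```python
# Illustrative RA/G values for a handful of teams in one season
ra_per_game = {"STL": 2.94, "PIT": 3.40, "CHC": 3.61, "LAD": 3.52}

# League average RA/G for the season (here, just the mean of our sample)
league_avg = sum(ra_per_game.values()) / len(ra_per_game)

# RA/G-: 100 is league average; below 100 is better than average
ra_minus = {team: 100 * ra / league_avg for team, ra in ra_per_game.items()}

# Sort ascending: the lowest RA/G- is the best run-prevention team
for team, val in sorted(ra_minus.items(), key=lambda kv: kv[1]):
    print(f"{team}: {val:.1f}")
```

By construction, the RA/G- values average to 100 within a season, so teams from very different run-scoring environments can be compared on the same scale.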

In both instances, if the season had ended after the games on August 31, the 2015 Cardinals would rank 25th all-time. If it holds up, the Cards will be one of just 26 teams in baseball history to allow fewer than 3 runs per game – St. Louis would be just the 3rd team to do so in the last 95 years, and the first since the 1972 Baltimore Orioles and Oakland Athletics. There’s a good reason the Cardinals are dominating the rest of baseball right now, and that is clearly their pitching.

However, there is another way to rank each team in MLB history using RA/G that also takes into account the performance of the rest of the League in each given year. That method is standardizing (normalizing, z-scoring) the metric such that 0 is average, 1 is one standard deviation above the mean (worse, in the case of RA/G), -1 is one standard deviation below, and so forth.
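A minimal sketch of that standardization, using illustrative RA/G values for one season (not real team data) and the population standard deviation, since a season's teams make up the whole league:

```python
import statistics

# Illustrative RA/G values for one season's teams; the first entry is
# a standout run-prevention club
ra_g = [2.94, 3.40, 3.61, 3.52, 4.10, 4.25, 3.90, 4.45]

mean = statistics.mean(ra_g)
stdev = statistics.pstdev(ra_g)  # population st. dev. across the league

# z-score: 0 is league average; negative is better (fewer runs allowed)
z_ra_g = [(x - mean) / stdev for x in ra_g]
print([round(z, 2) for z in z_ra_g])
```

After this transformation the league's values have mean 0 and standard deviation 1 by construction, so the most negative z-score marks the season's best run-prevention team.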

To ensure that taking z-scores of runs allowed per game for each year was reasonable, I tested the normality of each year’s collection of RA/G values using the Shapiro-Wilk test. The results were encouraging. Very few years yielded p-values below 0.05 (which would imply we can reject the null hypothesis that the data is normal), whereas most years suggested normality, including all of the last 52 (many with p-values much greater than 0.05, meaning even less reason to reject the null hypothesis of normal data). I also used a Q-Q plot in Python to verify the normality of the data. The plot for each year included an R^{2} value, where a value of 1 corresponds to perfectly normal data; the average of these R^{2} values was a promising 0.972. As an example, for the 2015 data, the Shapiro-Wilk test gave a 0.99 p-value, indicating normality. Moreover, here is the quantile-quantile plot for this season’s data, as well as that for 2012 (the best example from recent years), below:
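Both normality checks are available in SciPy. As a sketch, the snippet below runs them on an idealized stand-in for a 30-team season: exact normal quantiles with a plausible mean and spread (illustrative values, not the real RA/G data), so both checks pass cleanly:

```python
import numpy as np
from scipy import stats

# Idealized 30-team "season": exact normal quantiles with a plausible
# RA/G mean and spread (illustrative, not real data)
probs = (np.arange(1, 31) - 0.5) / 30
ra_g = stats.norm.ppf(probs, loc=4.0, scale=0.45)

# Shapiro-Wilk: a p-value above 0.05 means we do not reject normality
stat, p_value = stats.shapiro(ra_g)
print(f"Shapiro-Wilk p-value: {p_value:.3f}")

# Q-Q (probability) plot fit: r**2 near 1 means the sample quantiles
# track the theoretical normal quantiles closely
(osm, osr), (slope, intercept, r) = stats.probplot(ra_g, dist="norm")
print(f"Q-Q plot R^2: {r**2:.4f}")
```

On real yearly RA/G data the p-values and R^{2} values vary year to year, which is exactly what the per-season testing described above is checking.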

Given all of the results discussed above, it seemed sensible** to go ahead and normalize each season’s runs allowed per game data. I thereby obtained a z-score for each team’s RA/G, a metric I labeled Z-RA/G. A team’s run-preventing prowess, relative to the League, is thus given by its z-scored RA/G, such that a lower value signifies a greater number of standard deviations better than the league average. Without further ado, here are the **best pitching/defensive teams** – those at least 2 standard deviations better than that year’s average – in **Major League Baseball history according to the Z-RA/G metric**:

As per the title of this post, the 2015 Cardinals are arguably the best team in MLB history when it comes to pitching and fielding – relative to the run-scoring environment. By the way, they’ve done that while missing their best pitcher for almost the entire season. I like this method for a variety of reasons, but one of the most interesting is that this normalization ranking is kind to the 1990s Atlanta Braves. Commonly considered one of the best pitching staffs of any time period, the ’90s Braves appear twice (1993, 1997) in the top 5, three times (’93, ’97, ’98) in the top 10, and four times (’93, ’95, ’97, ’98) in the top 15. Given that Z-RA/G ranks highly such a prolific pitching team (one that finished first in the MLB in RA/G every year from 1992 to 2002 and ranked in the top two in WHIP from 1991 to 2000), I am a fan of the standardization process here.

A very similar way to rank teams is to compare their RA/G using z-scores over 5-year periods (which I will call Z5-RA/G), starting with 1901-1905 and running through 2011-2015, as opposed to the annual comparisons to League averages (Z-RA/G). I excluded the earlier years here because the normality of their data was slightly less consistent. Doing this, St. Louis still comes out very near the top, with the familiar 1997 and ’98 Braves as well as the extremely high-ranking 1981 Astros (second all-time in Z-RA/G above) in the mix:
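The only change from Z-RA/G is the pooling: all team values inside a five-year window are standardized against the pooled mean and standard deviation. A minimal sketch for one window, with an assumed `{year: [team RA/G values]}` structure and illustrative numbers (three "teams" per year rather than a full league):

```python
import statistics

# Illustrative RA/G values for one five-year window (not real data);
# the 2.94 entry plays the role of a standout run-prevention team
ra_by_year = {
    2011: [3.9, 4.2, 4.5],
    2012: [3.8, 4.1, 4.6],
    2013: [3.7, 4.0, 4.4],
    2014: [3.6, 4.3, 4.2],
    2015: [2.94, 4.0, 4.5],
}

# Pool every team-season in the window, then z-score against the pool
pooled = [x for vals in ra_by_year.values() for x in vals]
mean, sd = statistics.mean(pooled), statistics.pstdev(pooled)
z5 = {yr: [(x - mean) / sd for x in vals] for yr, vals in ra_by_year.items()}

# The most negative Z5-RA/G in the window is its best run-prevention team
best = min(min(vals) for vals in z5.values())
print(f"Best Z5-RA/G in window: {best:.2f}")
```

Because the baseline is a five-year pool rather than a single season, a dominant team is measured against a broader run-scoring environment, which smooths out one-year flukes in the league average.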

*All data for this post is up-to-date through the afternoon of 9/1/2015

**To be fair, histograms of yearly runs allowed per game show that the data is sometimes slightly right-skewed, but the results are still meaningful
