Why Any NFL GM Who Reads Bleacher Report Should Be Fired: A Response to “Why Any NFL GM Who Uses the Harvard NFL Pick Value Chart Should Be Fired”

In April, a Bleacher Report (BR) writer attempted to critique a new pick value chart for the NFL draft that I created and HSAC published. There are legitimate criticisms of my work worth exploring, many of which our commenters have raised: the need to include replacement level, the strength of Career Approximate Value (CAV) as a statistic, and the skew of the underlying data. Those, however, are not the criticisms levied in BR’s article. This post counters BR’s critiques in hopes of ending what some have called “HSAC’s first beef”.

BR’s article has four main components. First, they question the logic of creating a draft value chart at all; they then levy three claims: that pick x is worth more than the career performance of the average player picked at pick x; that past failures at pick x have no bearing on the value of pick x; and that predicting a player’s reaction to his first contract is hard but does not diminish the value of rookies.

BR appears to question the validity of having a system to evaluate draft picks because “every team will value picks differently”. They argue that because teams rank players differently on their boards, certain picks are worth more to some teams than to others. This claim may be true in specific cases: the Colts probably valued the first overall pick more than the Packers did this year because the Colts needed a quarterback (Andrew Luck) and the Packers already had one (Aaron Rodgers). However, BR’s claim implies that teams deviate significantly from Jimmy Johnson’s chart, which lists the accepted market values of draft picks. As Cade Massey and Richard Thaler have shown, teams do not stray from that chart. So while the true value of NFL draft picks (what I tried to find) has been “elusive”, the market value of those picks has not. The problem is that those market values are systematically wrong.

In their next criticism, BR draws the wrong conclusion from an important phenomenon. They write that a team with the 10th overall pick expects the player it picks to perform better than the historically average 10th overall pick. This expectation is called overconfidence, and it is rampant in the NFL draft. In a different article, I found that players selected as the result of a trade up (players that teams are especially confident in) performed worse than players who were selected normally (for details, read the article). Teams, then, tend to be overconfident in their ability to draft the best players and so overvalue the right to choose early in the draft. Teams should not use “the expected value of the player acquired in the mind of the team” because that mind is probably overconfident. They can avoid this bias by using the average value of the players historically picked in that slot.
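To make “the average value of the players historically picked in that slot” concrete, here is a minimal sketch of that calculation in Python. It assumes a hypothetical file, draft_history.csv, with one row per drafted player and columns pick and cav; the file name, columns, and code are illustrative only and are not the actual data or methodology behind the HSAC chart.

```python
# A minimal sketch of the "historical average per slot" idea, not HSAC's actual code.
# Assumes a hypothetical draft_history.csv with columns "pick" and "cav"
# (one row per drafted player: his draft slot and Career Approximate Value).
import csv
from collections import defaultdict

def average_cav_by_pick(path="draft_history.csv"):
    totals, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            pick = int(row["pick"])
            totals[pick] += float(row["cav"])
            counts[pick] += 1
    # A slot's baseline value is simply the average CAV of the players taken there
    return {pick: totals[pick] / counts[pick] for pick in sorted(totals)}

if __name__ == "__main__":
    chart = average_cav_by_pick()
    for pick in sorted(chart)[:10]:
        print(f"Pick {pick}: average CAV {chart[pick]:.1f}")
```

Smoothing these slot averages into a curve, or subtracting a replacement level as some commenters have suggested, would be natural refinements, but the baseline is the historical average rather than any one team’s expectations.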

While BR argues otherwise, past failures at a pick slot are important in large part because they help combat the overconfidence bias. I will borrow BR’s analogy: downtown real estate. If a string of bars rents a prime spot and they all fail, the underlying potential of the real estate is not affected. However, the high rent surely contributed to those bars failing; if the rent were half as high, those bars would probably still be in business. If I were an entrepreneur thinking of opening a bar on this piece of real estate, I would want to know the base rate of success for bars there. If the base rate of success is low, then the risk is higher, and vice versa.

To put this analogy in football terms, while the underlying potential of high picks remains high even if teams fail to fulfill that potential, teams should remember prior failures so that they are not overconfident in themselves.

Finally, BR again draws a strange conclusion from a real experience: it is difficult to predict a player’s reaction to his first big contract. As BR writes, “predicting how young, talented football players will perform in the pros is an inexact science.” For this reason, there is more uncertainty around rookies, which should lessen the value of the earliest picks (as I found).

Instead of drawing this conclusion, however, BR muddles through two separate, unrelated claims: 1) the uncertainty of a rookie’s first contract doesn’t lessen the value of early draft picks as much as I found, and 2) because teams generally pick better players earlier, the entire process is valid. To support claim 1, BR needs to provide evidence, which they fail to do beyond simply stating claim 2, which is unrelated. Beyond being tangential, claim 2 also seems naive. Yes, teams generally pick better players earlier. However, you can almost make a Pro Bowl team out of undrafted players.* The claim ignores that the scouting process remains far from perfect.

One phrase from BR’s critique sums up the underlying issue in the article: “[the HSAC chart] just feels wrong on its face”. Just because something is counterintuitive does not make it false. The introduction of sabermetrics into Major League Baseball also felt wrong on its face to many people inside MLB because of their traditional beliefs about how to run a baseball team. That’s the beauty of empiricism: even if a conclusion feels wrong, careful analysis can demonstrate that it is right.

*Roster of undrafted players:

QB: Tony Romo, RB: Arian Foster, Fred Jackson, FB: Vonta Leach, WR: Wes Welker, Brandon Lloyd, Miles Austin, Victor Cruz, TE: Antonio Gates, LT: Jason Peters, G: Brian Waters, Brandon Moore, C: Jeff Saturday, RT: Tyson Clabo

DT: Pat Williams, DE: Cullen Jenkins, Mike DeVito, LB: James Harrison, Antonio Pierce, London Fletcher, Cameron Wake, CB: Brent Grimes, Tramon Williams, S: Jim Leonhard, Ryan Clark

K: Adam Vinatieri, P: Britton Colquitt, KR: Josh Cribbs


7 Comments

  • Fantastic post. Great counter-argument. I agree with most of the piece, but no point more so than something being counterintuitive not necessarily meaning it’s wrong. People just need to look outside the box more and look at facts rather than what they think they know.

  • If teams don’t deviate from the chart, how could the Patriots have moved down from 62 to 90 for less than the Jets paid to move from 47 to 43? What a team will pay for a pick is partly what the pick is worth and partly how much they value the player they are targeting/how many players they would be satisfied with at those picks.

    Also, I’m not really sure how the rent analogy applies other than a bigger contract possibly giving a player more incentive to give less than full effort, which I address in my article.

    As far as the value of early picks not being lessened by uncertainty, I believe the price the Browns and Redskins paid just to move up within the top six basically proves that teams will pay what they think a player is worth when trading up, as opposed to what a chart that averages out the careers of the players picked at that slot says the pick is worth.

    As far as teams having overconfidence in their ability to evaluate players correctly, well, if they weren’t supremely confident in that ability, why would they ever get into that business? They are paid to make those decisions; I would hope they believe every one of them is correct when they make them. If not, they are unfit for the job.

    • On 1: As I wrote, teams do deviate on a micro-level. However, when you look at all of the data, not one or two examples, teams do not deviate from the chart. Read the paper I linked to for more info.

      2: The rent analogy is lifted directly from your article. I interpreted the analogy as meaning that the potential of a high draft pick (like land downtown) is not influenced by prior busts. I argued that the base rate of success at that pick (or land downtown) matters when evaluating how much it’s worth. It has nothing to do with having a bigger contract.

      3: I think you’re probably right: teams probably do pay what they think a player is worth. That does not mean that they SHOULD pay that price. Teams are overconfident in their abilities to evaluate players, so they overpay for early picks.

      4: You successfully identified a potential source of overconfidence, but you haven’t shown why being overconfident would be a good thing. I’m also not sure whether you think that (A) confidence is what should determine employment in the NFL, or (B) GMs have a perfect ability to predict players’ abilities. (A) would allow a supremely confident idiot to run a team; (B) does not exist.

  • Of course GMs are overconfident in their abilities, but they had better check their egos or they will be just like a superstar athlete who tries to get by on natural talent rather than hard work. Teams actually need a statistical basis to be confident; otherwise they are just more idiots who will have a low success rate, just like baseball before “moneyball” and sabermetrics changed the game…

    The NFL needs more analytics, and more advanced ways of measuring how strong the correlation is between their rankings and how players pan out. They need to assess the probability that their decisions will result in wins and find more advanced ways to measure that.

    If the NFL were perfect, it wouldn’t only now be becoming a passing league. Teams like the Packers averaged nearly 10 yards per pass attempt last year. No RB in history has come close to putting up those kinds of numbers per attempt. Yet “old school” NFL thinking will have you run the ball, always punt on 4th down, and never attempt onside kicks unless behind. Well, advancednflstats.com, among many others, has done a lot of research showing that strategy to be vastly incorrect, and only brave innovators like Bill Belichick have begun to challenge it. If anyone on an NFL staff really knew what they were doing, it wouldn’t be a copycat league and they wouldn’t be so dependent upon “what works for other teams”.

  • The NFL front offices just don’t have enough experience. If you were to find a random kid who plays virtual GM in Madden and spends a lot of his time on fantasy football, you could probably find someone with more simulated experience. Not all of it would translate and there would be some sort of learning curve, but to act like the people in those jobs are always right just because they have the job is ridiculously stubborn and stuck in old ways… adapt or die.

  • Teams should actually construct their own trade value chart based partially on self-evaluation of their own draft boards and how those players panned out, but because not many GMs are with a team long enough, they should put significant weight on this particular chart you’ve mentioned… As they build a larger sample size over the years, they can shift more and more weight toward their own predictions and how those have panned out.
    Additionally, they might try ranking individual players by where in an average draft they think each player is worth taking (e.g., Trent Richardson is perhaps worth the 4th overall pick and should be given a value of something like 400 points according to the chart). Then, as players fall, a team can decide how far a player needs to fall to justify the pick… but perhaps a percentage of the value, say 30%, should be weighted toward what the current pick is worth, so that they don’t trade up too far on their own evaluations, since those evaluations of talent are nowhere near 100% accurate (a rough sketch of this blending idea follows this comment).

    It’s very possible that the average player drafted is overvalued because losing teams have losing GMs who make losing decisions and continue to get higher picks, which skews the data…

    Regardless, using the same methods, one could construct a chart based on the past accuracy of an individual team’s draft board(s) and make decisions accordingly.
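To make the weighting idea in the comment above concrete, here is a rough sketch of the blend the commenter describes: grade a player by the slot where a team’s own board places him, then mix that slot’s chart value with the value of the pick actually on the clock, roughly 70/30. The chart numbers below are hypothetical placeholders, not values from the HSAC or Jimmy Johnson charts.

```python
# A rough sketch of the blending idea in the comment above; the chart values here
# are hypothetical placeholders, not real HSAC or Jimmy Johnson numbers.

def blended_value(chart, current_pick, team_ranked_slot, chart_weight=0.3):
    """Anchor 30% of the valuation to the pick currently on the clock and 70% to
    the slot where the team's own board says the player belongs."""
    return chart_weight * chart[current_pick] + (1 - chart_weight) * chart[team_ranked_slot]

# Hypothetical slot values for illustration only
chart = {4: 400, 22: 120}

# A player graded as a 4th-overall talent who is still available at pick 22:
# 0.3 * 120 + 0.7 * 400 = 316 points
print(blended_value(chart, current_pick=22, team_ranked_slot=4))
```

The 30% anchor toward the current pick is the commenter’s hedge against a team trusting its own evaluations too heavily when deciding whether to trade up.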
