### Analysis

The NFL draft is a kind of auction, with auction-like dynamics. It’s also akin to a marriage: it only takes one, not a crowd, to get married, and the opinion of the one outweighs the many. When analyzing the draft, I’ve been known to say things like: between three players of the same true value, the one that gets drafted is the one whose value is most overestimated (1). I’ve also said things like: one scouting opinion isn’t important, but the envelope of opinions is. The distribution of those opinions is crucial to knowing when a player can be drafted (2).

The distribution of player rankings can affect the possible draft positions of a player. The curves were hand drawn on a brand new pen tablet, so they’re not perfect. Imagine the purple curve with more extensive tails.

In the diagram above, there are three distributions, with different peaks, means and spreads. Player A, in black, has a tight distribution of values and barring any issues with uniqueness of position, there is a consensus where he will be drafted. Player B, in red, has a broader distribution, but is unlikely to suffer more than half a round of variance in draft position. Player C, in purple, has an envelope encompassing two whole rounds of the draft. It’s the player C types that create a lot of controversy.
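The three envelopes can be sketched numerically. A minimal simulation, with invented means and spreads chosen to mirror the diagram (one round taken as ~32 picks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical players mirroring the diagram (names and numbers invented):
# (mean draft slot, spread in picks); one round is roughly 32 picks.
players = {
    "A (tight consensus)":   (20, 3),
    "B (half-round spread)": (40, 8),
    "C (two-round spread)":  (70, 21),
}

envelopes = {}
for name, (mean, sd) in players.items():
    draws = rng.normal(mean, sd, 10_000)        # sampled team valuations
    lo, hi = np.percentile(draws, [2.5, 97.5])  # 95% envelope of draft slots
    envelopes[name] = (lo, hi)
    print(f"{name}: picks {lo:.0f} to {hi:.0f}")
```

Player C's envelope spans roughly two rounds of picks, which is exactly the kind of spread that creates controversy.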

Travis Frederick and ZZ Top: Separated at birth?

The player the Dallas Cowboys drafted in the first round, Travis Frederick, is exactly one of those types. He was highly ranked before the combine, but suffered because of his bad 40 time. People like Gil Brandt, who had him ranked 27th best at the time, dropped him as a consequence. Perusing various links, such as this one, you see rankings ranging from 31 (Gary Horton) into the 90s. Now please note that draft pundits really don’t count; NFL teams do. But for the sake of argument, we’re using media scouts as an estimator of the envelope of NFL opinions. And that envelope of values encompasses two whole rounds of variance.

So, what happens when you must have a player whose valuation envelope is a broad distribution? This player must be taken pretty far from the mean, in the tails of the “high value” side, or else you risk losing him (3). What is guaranteed, though, is that the pundits on the other side of the mean from you will undoubtedly scream bloody murder. That’s because a draft pundit’s opinion is his lifeblood: pundits make their money validating and defending that opinion, usually in print, and sometimes on television. That it’s one opinion of many doesn’t matter if that’s how you make your living. So of course pundits will scream.

2013 was a draft with few good players. If estimates are valid that there were only 16 or so players truly worth a first round pick, then by default you’re overdrafting your quality by half a round by the middle of the first round. If the span of Frederick’s valuations really ran from, say, the mid second round to the beginning of the fourth, then the so-called overdraft is no overdraft at all; it’s entirely the product of three things: the perceived need for the player, a valuation envelope so broad that Dallas had to draft him in the tails of the distribution, and the overall lack of talent in the draft that led to overdrafting in general.

~~~
Footnotes

1. Jonathan Bales, before he became a New York Times contributor, favored this comment (common sense, IMO) and used it to help validate a pet drafting theory of his. I never saw enough rigor in his theory to separate it from the notions of BPA or need, as it was more a collective efficiency concept. IMO the notion hardly led to the invalidation of BPA or needs based drafting.

2. In the early 2000s, I wrote a Monte Carlo simulator of the draft, which explicitly used those distributions to estimate where players would be drafted. More discussion of that code, released as a Sourceforge project, is here.

3. Let me note that in “must have” situations, teams whose draft record no one complains about (New England, say) will draft players above their worth. Belichick’s rationale, given in the link, is instructive. An excerpt is:

Now, the question is always, ‘How much do they like him and where are they willing to buy?’ I’m sure for some teams it was the fourth round. For some teams it was the third round. But we just said, ‘Look, we really want this guy. This is too high to pick him, but if we wait we might not get him, so we’re going to step up and take him.’

PS – tskyler, a Cowboys Zone forum contributor, has a very nuanced fan analysis of the Frederick draft here, one worth reading.

This has been part of an ongoing conversation among Dallas fans, and perhaps among fans of any of the 9 teams, from the Redskins to the Patriots to the Vikings, that traded up in the first round of the 2012 NFL draft. There are some new tools for the analyst and the fan, among them: (1) Pro Football Reference’s average AV per draft choice list, (2) Pro Sports Transactions’ NFL draft trade charts, and (3) Jonathan Bales’ article on DallasCowboys.com, where he analyzes a series of first round trades up from 2000 to 2010. He concludes that in general, the trade up does not return as much value as it gives.

I suspect that Jonathan’s conclusion is also evident in the fantasydouche.com plot we reposted here. The classic trade chart of Jimmy Johnson really does overvalue the high end draft choices. You’re not paying for proven value when you trade up, but rather potential. I suspect, by the break even metric we chose (comparing relative average AVs), that many draft trades never pay off, in part because people pay too much for the value they receive. This is most evident in trading a current second or third and a future first for a current first round draft choice. These trades tend to be failures almost by design, and smack ultimately of desperation, true even when the player obtained (e.g. Jason Campbell) actually has some skills.

That said, how many of these players exceed the average abilities of the slot in which they were drafted? Now that we have the PFR chart, this is another question that can be asked of the first round players. Note that Jonathan Bales’ study doesn’t really answer the question of how good the player becomes, in part because the time frame chosen doesn’t allow the player adequate development. I started in the year 1995 and ended in the year 2007. I identified 67 players in that time frame, and I compared the AV for each player as given by the weighted average on the PFR player page. I’ll note that the player page and the annual draft pages do not agree on players’ weighted career accumulated value, so I assumed the personal pages were more accurate.

As far as a scale, we’re using the following:

| AV relative to average | Ranking |
| --- | --- |
| -25 AV or worse | Bust |
| -24 to -15 AV | Poor |
| -14 to -5 AV | Disappointing |
| -4 to +4 AV | Satisfactory |
| +5 to +14 AV | Good |
| +15 to +24 AV | Very Good |
| +25 AV and up | Excellent |
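The scale reduces to a small lookup function. A sketch of the bins above (Eli Manning's +24 is used as a check case):

```python
def rank_relative_av(delta_av: int) -> str:
    """Map (player AV minus slot-average AV) to the article's scale."""
    if delta_av <= -25:
        return "Bust"
    if delta_av <= -15:
        return "Poor"
    if delta_av <= -5:
        return "Disappointing"
    if delta_av <= 4:
        return "Satisfactory"
    if delta_av <= 14:
        return "Good"
    if delta_av <= 24:
        return "Very Good"
    return "Excellent"

print(rank_relative_av(24))   # Eli Manning at +24: Very Good, one point shy
print(rank_relative_av(-30))  # deep bust territory
```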

Note there are some issues with the scale. Plenty of players from 1995 through 2007 are still playing, and their rankings are almost certainly going to change. In particular, Eli Manning at +24 and Jay Cutler at +23 have a great chance to end up scored as Excellent before the next season is over. Jason Campbell is at +19, and if he starts for a team for one season, he will end up with a ranking of Excellent. Santonio Holmes (+19) also has a shot at the Excellent category.

Players in the years 2006 and 2007 in lower categories (Manny Lawson at +7, Joe Staley at +4, Anthony Spencer at 0 ) could end up as Very Good, perhaps even Excellent if their careers continue.

The scoring ended up as follows:

| Scale | Number | Percent as Good | Percent as Bad |
| --- | --- | --- | --- |
| Excellent | 14 | 20.9 | 100.0 |
| Very Good | 9 | 34.3 | 79.1 |
| Good | 13 | 53.7 | 65.7 |
| Satisfactory | 10 | 68.7 | 46.3 |
| Disappointing | 7 | 79.1 | 31.3 |
| Poor | 5 | 86.6 | 20.9 |
| Bust | 9 | 100.0 | 13.4 |
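The cumulative columns follow directly from the raw counts, which is an easy consistency check on the table:

```python
# Raw category counts for the 67 first-rounders, 1995-2007 (from the table).
counts = {"Excellent": 14, "Very Good": 9, "Good": 13, "Satisfactory": 10,
          "Disappointing": 7, "Poor": 5, "Bust": 9}

total = sum(counts.values())  # should be 67 players
running = 0
for scale, n in counts.items():
    running += n
    pct_good = 100 * running / total                  # "percent as good": this tier or better
    pct_bad = 100 * (total - running + n) / total     # "percent as bad": this tier or worse
    print(f"{scale:13s} {n:2d} {pct_good:5.1f} {pct_bad:5.1f}")
```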

Data came from the sources above. A PDF of these raw data is here:

Update: Increased the dates of players considered from 2000-2007 to 1995-2007. Moved Ricky Williams back to 1999.

I ran into the Massey-Thaler study via Google somehow, while searching for ideas on the cost of an offense, then ran into it again, in a much more digestible form, through Benjamin Morris’s blog. Brian Burke has at least 4 articles on the Massey-Thaler study (here, here, here and, most importantly, here). Incidentally, the PDF of Massey-Thaler is available through Google Docs.

The surplus value chart of Massey-Thaler

Pro Football Reference talks about Massey-Thaler here, among other places. LiveBall Sports, a new blog I’ve found, talks about it here. So this idea, that you can gain net relative value by trading down, certainly has been discussed and poked and prodded for some time. What I’m going to suggest is that my results on winning and draft picks are entirely consistent with the Massey-Thaler paper. Total draft picks correlate with winning. First round draft picks do not.

One of the points of the Massey-Thaler paper is that psychological factors play into the evaluation of first round picks, that behavioral economics is heavily in play. To quote:

We find that top draft picks are overvalued in a manner that is inconsistent with rational expectations and efficient markets and consistent with psychological research.

I tend to think that’s true. It’s also an open question just how well draft assessment ever gets at career performance (or even whether it should). If draft evaluation is really only a measure of athleticism and not long term performance, isn’t that simply encasing the Moneyball error in steel? Because, ultimately, BPA only works the way its advocates claim if the things that draft analysts measure are proportional enough to performance to disambiguate candidates.

To touch on some of the psychological factors, and, for now, just to show in some fashion the degree of error in picking choices, we’ll look at the approximate value of the first pick from 1996 to 2006, and then the approximate value of possible alternatives. To note, a version of this study has already been done by Rick Reilly, in his “redraft” article.

| Year | Player | AV | Others | AVs |
| --- | --- | --- | --- | --- |
| 1996 | Keyshawn Johnson | 74 | #26 Ray Lewis | 150 |
| 1997 | Orlando Pace | 101 | #66 Ronde Barber, #73 Jason Taylor | 114, 116 |
| 1998 | Peyton Manning | 156 | #24 Randy Moss | 122 |
| 1999 | Tim Couch | 30 | #4 Edgerrin James | 114 |
| 2000 | Courtney Brown | 28 | #199 Tom Brady | 116 |
| 2001 | Michael Vick | 74 | #5 LaDainian Tomlinson, #30 Reggie Wayne, #33 Drew Brees | 124, 103, 103 |
| 2002 | David Carr | 44 | #2 Julius Peppers, #26 Ed Reed | 95, 92 |
| 2003 | Carson Palmer | 69 | UD Antonio Gates, #9 Kevin Williams | 88, 84 |
| 2004 | Eli Manning | 64 | #126 Jared Allen, #4 Philip Rivers, #11 Ben Roethlisberger | 75, 74, 72 |
| 2005 | Alex Smith | 21 | #11 DeMarcus Ware | 66 |
| 2006 | Mario Williams | 39 | #60 Maurice Jones-Drew, #12 Haloti Ngata | 60, 55 |

If drafting were accurate, then the first pick should be the easiest. The first team to pick has the most choice, the most information, the most scrutinized set of candidates. This team has literally everything at its disposal. So why aren’t the first picks better performers? Why is it that across the 11 year period depicted, there are only 2 sure fire Hall of Famers (100 AV or more) and only 1 pick that was better than any alternative? Why?
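The table reduces to a quick tally. Using the AV numbers above, with each year's best alternative:

```python
# (first-pick AV, best alternative AV) pairs taken from the table above.
first_picks = {
    1996: (74, 150), 1997: (101, 116), 1998: (156, 122), 1999: (30, 114),
    2000: (28, 116), 2001: (74, 124),  2002: (44, 95),   2003: (69, 88),
    2004: (64, 75),  2005: (21, 66),   2006: (39, 60),
}

beat_alternatives = [y for y, (av, alt) in first_picks.items() if av > alt]
hof_calibre = [y for y, (av, _) in first_picks.items() if av >= 100]
print(beat_alternatives)  # only 1998 (Peyton Manning) beat every alternative
print(hof_calibre)        # 1997 and 1998: the two 100+ AV first picks
```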

My answer is (in part) that certain kinds of picks, QBs especially, are prized at #1 (check out the Benjamin Morris link above for why), and that QB is the hardest position to draft accurately. Further, though teams know and understand that intangibles exist, they’re not reliably good at tapping into them. Finally, drafting at any position in the NFL draft, not just number 1, has a high degree of inaccuracy (here and here).

In the case of Tom Brady, the factors are well discussed here. I’d suggest that decoy effects, as described by Dan Ariely in his book Predictably Irrational (p 15, pp 21-22), affected both Tom Brady (compared to Drew Henson) and Drew Brees (compared to Vick). Further, Vick was so valued the year he was drafted that he surely affected the draft position of Quincy Carter, and perhaps Marques Tuiasosopo (i.e. a coattail effect). If I were to estimate the coattail effect for Quincy Carter, it would be about two rounds of draft value.

How to improve the process? Better data and deeper analysis help. There are studies that suggest, for example, that the completion percentage of college quarterbacks is a major predictor of professional success. As analysts dig into factors that more reliably predict future careers, modern in-depth statistics will aid scouting.

Still, 10 year self studies of draft patterns are beyond the ken of NFL management teams with 3-5 year plans that must succeed. Feedback to scouting departments is going to have to cycle back much faster than that. For quality control of draft decisions, some metric other than career performance has to be used. Otherwise, a player like Greg Cook would have to be treated as a draft bust.

At some point, the success or failure of a player is no longer in the scout’s hands, but in the coaches’, and the Fates’. Therefore, a scout can only be asked to deliver the kind of player his affiliated coaches are asking for and defining as a model player. It’s in the ever-refined definition of this model (and how real players can fit this abstract specification) that progress will be made.

Now to note, that’s a kind of progress that’s not accessible from outside the NFL team. Fans consistently value draft picks via the tools at hand, career performance, because that’s what they have. In so doing, they confuse draft value with player development and don’t reliably factor the quality of coaching and management out of the process. And while the entanglement issue is a difficult one in the case of quarterbacks and wide receivers, it’s probably impossible to separate scouting from coaching and sheer player management skills using the kinds of data Joe Fan can gain access to.

So, if scouting isn’t looking directly at career performance, yet BPA advocates treat it as if it does, what does it mean for BPA advocates? It means that most common discussions of BPA theory incorporate a model of value that scouts can’t measure. Therefore, expectations don’t match what scouts can actually deliver.

I’ve generally taken the view that BPA is most useful when it’s most obvious. In subtle cases of near equal value propositions, the value of BPA is lost in the variance of draft evaluation. If that reduces to “use BPA when it’s as plain as the nose on your face, otherwise draft for need”, then yes, that’s what I’m suggesting. Empirical evidence, such as the words of Bobby Beathard, suggests that’s how scouting departments do it anyway. Coded NFL draft simulations explicitly work in that fashion.

Update 6/17: minor rewrite for clarity.

When trying to value drafts, we tend to think in only one direction: how to get as much talent as possible for the kinds of draft picks we have. There is another kind of optimization that often goes under the radar, and that is having the coaching talent and foresight to construct a winning offense that doesn’t require extreme athletes. If, for example, you can get the same caliber running game out of 4th round draft choices that other coaches would need a mid first round choice to produce, you’ve lowered the cost of the offense (Mike Shanahan and his zone blocking-cutback running scheme). If you can get high quality play out of quarterbacks with modest physical skills by making their reads simpler and their jobs easier, you’ve lowered the cost of your quarterbacks (the West Coast offense). And if, by looking for smaller players with plenty of speed (drafting strong safety-linebacker tweeners at linebacker, putting linebackers at defensive end and defensive ends at defensive tackle), you markedly increase your team speed while fruitfully using so many tweeners, you’ve cut the cost of your defense (the Miami 4-3; notes here and here and here. Coach Hoover talks about it here, defending the flexbone, and I’m pretty sure the Penn State defense, described here, is a derivative of this defense as well).

You can probably formalize the cost of an offense (or defense) by treating the draft as a market and assigning the players on a team their draft value, either by methods we touched on here, by a fit to a Weibull distribution (as shown in figure 1 of this manuscript), or by analogy using AdamJT13′s chart here. To note, the cost of a free agent in this context is zero, since no draft choice was spent purchasing him. I don’t claim ideas like these are original to me. LiveBall Sports, a very nice multisport site with an analytics bent, has a 2 part series (NFC and AFC) evaluating the usage of free agents, and the language of the author, Greg Trippiedi, makes it clear he’s thinking in terms of draft value. How valuable are these no-cost free agents? Please recall that in this article, we quote Bobby Beathard as saying the first Super Bowl team under his watch with the Redskins had 26 free agents on the roster. But it also had excellent coaches, who could turn sow’s ears into.. well.. Hawgs.
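A minimal sketch of the idea: price a roster in draft capital. The slot-value curve below is a toy exponential stand-in, not Jimmy Johnson's actual chart or any published fit, and the roster is hypothetical; free agents cost zero by construction.

```python
import math

def slot_value(pick: int) -> float:
    """Toy trade-chart stand-in (assumption): 3000 points at pick 1,
    decaying smoothly with slot."""
    return 3000 * math.exp(-0.0145 * (pick - 1))

# Hypothetical offense: (position, draft pick used, or None for a free agent).
offense = [("QB", 11), ("RB", 110), ("WR1", 36), ("WR2", None),
           ("TE", 75), ("LT", 8), ("LG", None), ("C", 130),
           ("RG", 98), ("RT", 44), ("WR3", None)]

# Free agents contribute nothing: no draft choice was spent purchasing them.
cost = sum(slot_value(p) for _, p in offense if p is not None)
print(f"draft-capital cost of this offense: {cost:.0f} points")
```

The same accounting applied to two schemes that produce equal points per game would show which coaching staff fields the cheaper offense.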

Since a player that makes a roster is occupying a slot that others could also occupy, I suspect a true valuation of the cost of a player would also have to include development time. If it takes 5 years for a player to become a starter (or a major rotation player), there is the cost of his draft choice and the time cost of his development. Both need to be assessed as part of his cost. A player that never starts, never plays, and merely occupies space becomes a dead weight cost.

One final issue. Dynasties can’t be constructed from expensive players. Think about it: dynasties don’t have particularly good draft position; winning in the early years guarantees that. The average player lasts about four years. So in general, a dynasty will have a few elite players with long careers and a large corps of pretty good, inexpensive players. If the costs of the team model can’t be lowered adequately, sustained winning can’t be achieved; replacement players will simply come at an unsustainable cost.

I’ve gotten some interesting feedback with regard to my first noise simulation study (see here and here), and wanted to touch on some ideas, and then get to a point I actually consider important. I’ve been talking about draft noise, or scouting error, or draft error, without exactly defining the phrase. I’m probably not going to satisfy the definition hounds here either, but I’ll approach the notion of value and draft value, and then say that what I’m calling draft noise might also fruitfully be called draft variability, since really it’s the variability of the value of a potential draftee that we’re after.

We know this exists. We know draft value, in this context, changes. This value could be related to athletic potential, but that’s another abstract quantity that isn’t measurable, and relating the two misses the point that the draft is a market. And as it’s a market, the currency is the draft slot used to ‘purchase’ a player. Pretty simple, huh? Just as you might pay 50 cents to purchase an apple, Reggie Bush is worth the #2 draft slot. That’s Reggie Bush’s value, in the context of the draft.

One approach to getting at the value and variability of players we discussed here, in the context of Monte Carlo draft simulations. Simply collect a group of scouts, let them all rank the players, take the mean and standard deviation of their estimates, and you’ve got a normally distributed estimate of the draft variability of the player. Another was the ‘model the envelope of the apparent noise’ approach of the first simulation study. Rick Reilly uses a third approach in his recent re-draft article, one which I believe really isn’t valuing the draft as a marketplace: he’s assessing the players’ performance after they were drafted. Mel Kiper’s language in this article, by contrast, is utterly on the mark.
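The first approach is a few lines of code. A sketch with an invented board of ten scout rankings, loosely shaped like the Frederick spread of 31 into the 90s:

```python
import statistics

# Hypothetical: each scout's rank for one player (illustrative numbers only).
scout_ranks = [31, 40, 48, 55, 60, 62, 70, 78, 85, 92]

mu = statistics.mean(scout_ranks)      # center of the valuation envelope
sigma = statistics.stdev(scout_ranks)  # spread of the envelope, in picks
print(f"player valued around pick {mu:.0f}, sd {sigma:.0f} picks "
      f"(~{sigma / 32:.1f} rounds of variability)")
```

The pair (mu, sigma) is exactly the normally distributed estimate a Monte Carlo draft simulator can sample from.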

the obvious complaint we always hear is, “You can’t really grade a draft for a few years.” Not true. You can’t assess the performance of the players drafted for a few years, but you can assess the degree to which teams maximized value while filling needs.

To flip back to an analogy: I buy an apple. I pay 50 cents for the apple. I bite into it, get bruised flesh in my bite, and say, “That was a lousy apple for the price.” That assessment doesn’t change the price. The price was 50 cents. Since the price exists, the kind of analysis that Mel does is legitimate. He’s not analyzing the post draft performance of a player, he’s analyzing the market, and too much “draft” analysis forgets what the draft is.

One critique of my previous noise model is that there wasn’t a unique valuation for each individual team. Well, this is the deal: building per-team valuations into the model would have missed the point of the original study, which was to get a rough measure of the variability. However, the critique points out something else: that draft noise, no matter how accurately teams scout, is never completely going away. In a word, it’s irreducible.

Let’s pick a player out of the blue, call him, oh, Von Miller, and say that scout A from team A and scout B from team B each rank Von Miller as the 4th best athlete of 2011. The scout from team A says, “We look for large linemen and large linebackers, and Von Miller isn’t big enough. On our scale, he’s worth a 15th pick.” Scout B from team B says, “We play a Miami 4-3, and we optimize our teams for pursuit and speed, and Von Miller is faster than greased lightning. We rank him 3rd, because he is a perfect fit to our requirements.” And therein lies a source of variability that will never go away.

Teams have physical and athletic requirements based on the offenses they play and the kinds of players they know can play in them. Scouts are taught to seek and value players based on those requirements. This creates variability in the market, hence noise. As we’ve said previously, that noise can be exploited. And that noise is never going away, no matter how accurately scouts slot players to their system.
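The Von Miller example can be sketched as a scheme-weighted board. All weights below are invented for illustration; the point is only that the same consensus athlete rank lands at very different slots once scheme requirements are applied:

```python
def team_value(athlete_rank: int, speed: float, size: float, scheme: dict) -> float:
    """Effective board slot (lower is better): the scheme's weights shift
    the board, not the athlete. Weights and scales are hypothetical."""
    return athlete_rank - scheme["speed"] * speed + scheme["size_need"] * (1 - size)

# A consensus 4th-best athlete who is fast but undersized (invented numbers).
player = {"athlete_rank": 4, "speed": 0.95, "size": 0.40}

big_scheme = {"speed": 2.0, "size_need": 20.0}   # team A: wants large linebackers
miami_43 = {"speed": 12.0, "size_need": 1.0}     # team B: optimized for pursuit

v_big = team_value(player["athlete_rank"], player["speed"], player["size"], big_scheme)
v_fast = team_value(player["athlete_rank"], player["speed"], player["size"], miami_43)
print(f"team A's effective slot: {v_big:.1f}")   # pushed toward the mid first round
print(f"team B's effective slot: {v_fast:.1f}")  # pulled above his raw athlete rank
```

No amount of scouting accuracy removes the spread between v_big and v_fast; it comes from the schemes, not from measurement error.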

Super short summary: more accurate drafting is more effective drafting.

Summary for Statheads: Improving the draft accuracy of a single team improves the quality of that team’s draft choices across the entire draft. Simulations at draft error levels of 0.8 and 0.6 rounds show that the effect is on the order of 7 and 5 picks respectively. In other words, a team picking 12th at a noise level of 0.8 that picks twice as accurately as the norm has picks equivalent to a team slotted into the 5th position. At a noise level of 0.6, its picks would be equivalent to a team picking in the 7th position. The implications of these findings, based on PFR’s approximate value stat and draft round, are discussed.

Recently, we posted data showing that the draft error of NFL teams can be estimated from the kinds of reaches observed in the draft, and our estimated range of error was from 0.5 to 1.0 rounds of error per draft pick. Taking these ideas further, I wanted to examine what would happen to a team that picked twice as accurately as its peers; that is, the error of its scouts is half that of all the other teams. What advantages would it gain?
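A toy version of the experiment, not the original simulator: each team drafts its perceived best available, where perceived value is true value plus normal noise in picks, and one team scouts with half the noise. Everything here (board size, serial first-round-only draft, slot 12 for the sharp team) is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_round_haul(noise_picks=0.8 * 32, sharp_team=11, n_teams=32,
                     n_players=224, trials=300):
    """Toy serial draft. True value = board index (0 is best). Each team's
    private board adds N(0, noise) in picks; `sharp_team` (drafting 12th)
    gets half the noise. Returns the mean true rank of that team's pick."""
    scale = np.where(np.arange(n_teams) == sharp_team, 0.5, 1.0)[:, None]
    got = []
    for _ in range(trials):
        noise = rng.normal(0.0, noise_picks, (n_teams, n_players)) * scale
        boards = np.argsort(np.arange(n_players) + noise, axis=1)
        taken = np.zeros(n_players, dtype=bool)
        for team in range(n_teams):          # one serial first round
            for p in boards[team]:           # perceived best still available
                if not taken[p]:
                    taken[p] = True
                    if team == sharp_team:
                        got.append(int(p))
                    break
    return float(np.mean(got))

print(f"sharp team at slot 12, mean true rank of its pick: {first_round_haul():.1f}")
```

The sharp team's pick lands, on average, well above its slot, which is the "effective draft position" improvement plotted in the figures below.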

Figure 1. Pick improvement as a function of round at draft error = 0.8 round

Figure 2. Pick improvement as a function of round at draft error = 0.6 round.

The charts above plot “effective draft position” (i.e. improvements in the value of draft picks, ranked by the draft position, or slot, at which they should have been picked) as a function of round, for teams with improved drafting ability. This term can be converted, using Pro Football Reference’s formula for estimated approximate value per slot, into a difference in estimated approximate value for such a choice; those plots are given below.
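The conversion is mechanical once you have a value-per-slot curve. PFR's actual published fit is in their post; the exponential below is only an assumed stand-in with invented constants, to show the arithmetic:

```python
import math

def expected_av(pick: int) -> float:
    """Stand-in for PFR's expected-AV-per-slot curve (assumption, not their
    fit): ~45 career AV at pick 1, halving every ~64 picks."""
    return 45 * 2 ** (-(pick - 1) / 64)

# The 0.8-round case above: a sharp team at slot 12 drafts like a team at slot 5.
gain = expected_av(5) - expected_av(12)
print(f"estimated AV gain for that first pick: {gain:.1f}")
```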

Figure 3. AV improvement as a function of round at draft error = 0.8 round.

Figure 4. AV improvement as a function of round at draft error = 0.6 round.

These results are not unique to these particular error levels: we also calculated estimated AV improvements for a team picking 10th and one picking 20th at an error level of 0.4 rounds. 0.4 is so low, in my opinion, as to be unbelievable, but even then, you can see advantages to the team that drafts well.

One last point. Notice the jump in advantage from the 3rd to the 4th round using our model of drafting? That jump is a function of less intense drafting of those players whose first ranking is less than 8.0, and is therefore a product of a specific feature of the model. The notion that good teams improve as scouting resources become more scarce, however, is not.

Good teams should be expected to do markedly better as fewer scouting resources are applied to each player. Where that happens in the real NFL is beyond the scope of this study, but that it almost certainly does happen seems evident. Teams that are expert at drafting will show their expertise more and more as the draft goes on. Or, said another way, anyone with a copy of USA Today or an ESPN Insider subscription can draft a first rounder; it takes a really good team to take best advantage of late round draft choices.

Super short summary: Head scratching moments in the NFL draft are useful clues to the average error in the draft.

Summary for statheads: A simple, efficient market model of drafting can account for commonly observed reaches in the first round if the average error per draft pick is between 0.8 and 1.0 round. The model yields asymmetric deviations from optimal drafting even when the error is itself described by a normal distribution. This model cannot account for busts or finds; players such as Terrell Davis, Tony Romo, or Tom Brady are not accounted for by this model. I conclude that drafting in the most general sense is not efficient, even though substantial components of apparent drafting behavior can be analyzed by this model.

Introduction

There are 4 typical ways to describe a draft choice. The first is by the number of the choice (Joe Smith is the 39th player chosen in the draft). The second is by a scale, usually topping out at 10 and going down one point for every round. On such a scale the ideal player is a 10.0, a very promising player a 9.0, the first pick of the third round an 8.0, and so forth. Ourlads uses a similar device to rank players as draft candidates. The third way is by the market value of the slot used, and the best known representative of that kind of methodology is Jimmy Johnson’s trade value chart. The fourth way is by the historically derived value of players drafted at that position, and Pro Football Reference has done that here. Note: another interesting attempt at an AV value chart is here.
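The first two schemes interconvert directly, since the 10-point scale drops one point per round of 32 picks. A sketch:

```python
def pick_to_grade(pick: int) -> float:
    """Convert a draft slot to the 10-point scale: pick 1 -> 10.0,
    pick 65 (first pick of round 3) -> 8.0."""
    return 10.0 - (pick - 1) / 32

print(pick_to_grade(1))    # ideal player
print(pick_to_grade(65))   # first pick of the third round
print(pick_to_grade(39))   # Joe Smith, the 39th player chosen
```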

Every draft has a moment where you see a player drafted and wonder what drove a team to take him. In the 2011 draft I can recall, off the top of my head, at least four head scratching moments: the draft by San Francisco of Aldon Smith (Ourlads 8.99, but rising), by Tennessee of Jake Locker (Ourlads 9.15, considered by many a late first or second rounder), by Seattle of James Carpenter (Ourlads 7.05), and the draft by New England of Ras-I Dowling (Ourlads 7.82, but perhaps scheme related). All four left me wondering. Perhaps they did the same to you, perhaps not. But what I’m getting at is that the number of these moments defines an error level by its recognizable tails, and using that, we can backtrack to an estimate of the actual error involved in selecting players.

If, say, the first round of 2011 was typical of all rounds of the NFL draft, and there was at least one truly puzzling reach in every round of the draft, and let’s say the puzzlers involved a reach of at least a round or more of value, then any noise model of the NFL draft has to be at least that noisy, else it is unrealistic. If, for the sake of argument, we assume the baseline draft model is efficient, then we add the assumptions that there are no systematic errors in drafting and that drafting errors are normally distributed. So, if there is 1 reach per round of 1.0 rounds or more, then there should be 7 in the whole draft, and 7000 in 1000 simulated drafts. We set out to build a simulator and test these principles.
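A stripped-down version of that simulator fits in a few lines. This toy uses a single noisy consensus value per player rather than the full model in the post, so its counts are illustrative only; it does show reaches growing with the error level:

```python
import numpy as np

rng = np.random.default_rng(2)

def reaches_per_draft(error_rounds, trials=1000, n_players=224):
    """Efficient-market toy: draft order = sort of (true slot + N(0, error)),
    with error in picks. Counts players taken >= 1 round (32 slots) ahead
    of their true value. A simplification of the post's model."""
    sigma = error_rounds * 32
    total = 0
    for _ in range(trials):
        perceived = np.arange(n_players) + rng.normal(0.0, sigma, n_players)
        order = np.argsort(perceived)  # player index drafted at each slot
        total += np.sum(order - np.arange(n_players) >= 32)
    return total / trials

for err in (0.5, 0.8, 1.0):
    print(f"error {err} rounds: ~{reaches_per_draft(err):.1f} reaches per draft")
```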

Again, I have to credit Chris Malumphy for originally making this suggestion. The Pearson correlation coefficient is described here, and can be thought of as a measure of how tightly clustered around a line a data set is. Right around a Pearson coefficient of 0.4, a data set shows a kind of elliptical shape, as in this diagram, borrowed from the Wikipedia site.

Diagram from Wikipedia article on correlation.

The reason this intro is important is that the data set from 1994 to 2010 is correlated: winning percentage is correlated with draft picks per year, with a Pearson correlation coefficient of 0.378. Yes, that’s significant.

The statistical test for significance of such a data set is found, among other places, here. For a data set with 32 points, the critical value at the 0.05 level for the two tailed test is about 0.35. In other words, with a correlation coefficient of 0.378, I have better than 95 percent confidence that this correlation is real, and not a product of chance.
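The check can be done by hand with the standard t-test on a correlation coefficient, which also recovers the ~0.35 critical value quoted above:

```python
import math

# Significance of r = 0.378 with n = 32 teams, two-tailed at 0.05.
r, n = 0.378, 32
t = r * math.sqrt((n - 2) / (1 - r ** 2))         # t statistic, df = n - 2
t_crit = 2.042                                    # two-tailed t at 0.05, df = 30
r_crit = t_crit / math.sqrt(n - 2 + t_crit ** 2)  # back out the critical r
print(f"t = {t:.2f} vs critical {t_crit}; critical r = {r_crit:.3f}")
```

The observed t exceeds the critical value, so the correlation clears the 0.05 bar.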

Total correlations and summed round per year correlations

To note, that’s a small correlation over a very long period of time. If in fact it’s a small effect, then when you divide the data set into smaller chunks, the correlation should shrink. And if you calculate correlations for the 1994-2004 and 2000-2010 data sets, you do in fact get smaller correlations than for the 17 year data set. Indeed, in the first set, the correlation is positive *only* if you throw out the Texans, who are a huge outlier.

My gut reaction: in the sciences, they call this a “publishable result”. I could start making the posters now, if there were some kind of Georgia Academy of Sciences. Who knows, it might not hold up under the scrutiny of harder core analysis, but for now, it’s nice to know that the trend I saw in the data has a physical representation. In geometric terms, the data set is elliptical.

The lifespan of the effect is interesting. It’s not viewable in the typical life span of a coach, or a player. It’s only viewable on the time scale of a dynasty. It’s a small effect of dynastic scale. Pretty cool, that.

Update: editing some bad sentences. Replaced plot that cut off some poorly performing teams.

I’ve had this book a while, but really haven’t had a chance to show it off.

“Hail Victory” is an oral history of the Washington Redskins, written by Thom Loverro, a writer for the Washington Times. It’s smaller than Pete Golenbock’s oral history of the Cowboys by a few hundred pages, and as a consequence, coverage of certain periods can be spotty.

But to give an example of the kinds of insights this book does have, here is a quote from page 180 talking about the beginnings of the 1982 season.

Gibbs made it clear he was going to use youngsters over veterans who didn’t produce. He cut running back Terry Metcalf, whom  he had been close to from their days in Saint Louis. He made backup linebacker Rich Milot a starter, as well as rookie cornerback Vernon Dean. He cut receiver Carl Powell, a top draft choice, in favor of unheralded Alvin Garrett. He brought in veteran defensive end Tony McGee to replace Mat Mendenhal and shore up the pass rush.

I bought “Hail Victory” initially to help answer the question of George Allen’s five man line back in 1972, but it was no help there. It’s going to be a terrific help as I chase down information on my next element of interest, Bobby Beathard. And he’s interesting because Washington is the ultimate counter example of the group “A” teams I’ve been so fascinated by recently.

What’s a group “A” team? It’s one of the four I’ve circled on this plot:

I’m thinking now that there are clusters of teams with distinct draft strategies. The four in group “A” are New England, Green Bay, Pittsburgh, and Philadelphia; I spoke about their apparent habits here. Groups “B” and “C” are unstudied so far. Group B teams are Denver and Indianapolis. Group C teams are Minnesota and the New York Giants. Left of group B is a cluster of 8 teams, which might as well be named group D for now. And down by its little lonesome, right at the 6.5 player/year line, is Washington.

My guess is that Bobby Beathard, the former general manager of the Washington Redskins, is the ultimate counterexample for the type “A” team.

Some things to note. Bobby was a quarterback in college, and then a scout before he entered the NFL. He scouted for Kansas City in the late 1960s, was the director of player personnel for the Miami Dolphins during their peak, and in 1978, when Jack Kent Cooke was the majority owner of the Skins, he became their general manager.

There is an excellent interview with Bobby Beathard on the site Burgundy and Gold Obsession. There is a section from that interview that really stands out, and it’s the same kind of emphasis that Brian Billick has attributed to the Belichick era with New England. Bobby is responding to a question in this excerpt (emphasis is mine).

There should be a relationship where the personnel people and the coach are really together. We knew exactly what type of player each Redskin position coach wanted. We knew what kind (offensive line coach) Joe Bugel wanted, we knew what kind (linebackers coach) Larry Peccatiello, (defensive coordinator) Richie Petitbon wanted. I think on our first Super Bowl team we had 26 kids who weren’t drafted, we just signed them as free agents. It didn’t matter who we brought in. Those guys coached the dog out of them. When I was with (head coach) Kevin Gilbride in San Diego, he’d make up his mind before he even got to minicamp, ‘I don’t want that guy, I don’t want this guy, I don’t want that guy.’ And it became impossible to satisfy him with anybody. The exact opposite was Joe and his staff. Having a staff like that really helps the organization.

What’s very intriguing is this emphasis on the “back end” of the draft, or in this case, post-draft free agents. It’s also the notion that the coaches tell the scouts what kind of players to get, and the scouts go out and get exactly those kinds of players. The fit helped make the Redskins of the 1980s successful. And in another form, it’s the same back end emphasis you see in the type “A” teams.

With regard to the best possible athlete versus need question, Bobby said this:

Sometimes you get into that situation when you have the philosophy which we did, you have to take the best one on the board, regardless of position. We always hoped when we picked there would be two or three good players available at different positions, so you’d at least get to take closer to your need. But if there’s just one there, and he’s outstanding, and you have a great grade on the guy and the next athlete on the board doesn’t have that kind of grade, you have to go with the highest-graded player.

And that seems to be a common theme: BPA of course, but need when there are two or three attractive alternatives.

This is a follow up piece to my previous post on draft trends and football teams. I have some new charts, some new ways of looking at the data. I’ve found some new analysis tools (such as the fitting machine at zunzun.com). What I don’t have — I’ll be upfront about this — is one true way to draft. The data that I have don’t support that.

We’ll start with some comments from Chris Malumphy of drafthistory.com, almost all constructive. I wrote him about my work and he replied. He says in part:

What would also be interesting is how many “compensatory” picks are included in each team’s totals. I believe that New England is among the leaders in receiving compensatory picks (which were first awarded in 1994 or so). I’ve frequently suggested that compensatory picks are contrary to the initial purpose of the draft, which was to award high picks to the poorest teams, in the hope that they would improve. Compensatory picks typically go to teams that decide not to sign their own free agents, which often means teams like New England let relatively good, but perhaps overpriced or problematic players go, knowing that they will get extra draft picks in return. Typically, poor teams aren’t in the position to make “business” decisions like that.

Yes, that’s a really good point. I probably won’t be able to get to anything like that off the bat, unless someone suggests a good exhaustive resource for all compensatory picks ever awarded. Anyone have a guess?

I resheeted these data by round, and the result isn’t as visually interesting a spreadsheet. For one, it doesn’t make the point that top 10 picks are almost inversely related to winning. For another, there appears to be something “magical” about the 181-and-down draft bin, which just sticks out when it is sheeted. Perhaps that’s a side effect of the “compensatory pick” issue that Chris raises.

When I plot the data, we get an interesting trend line, but the fitted slope parameter isn’t statistically significant: its confidence interval includes zero.

The fit, as reported by zunzun.com (a nice online tool for fitting small data sets), is:

y = a + bx

Fitting target of sum of squared absolute error = 1.0770302448975281E+01

a = 6.5737988655760553E+00
b = 2.9998264270189301E+00

and the error analysis is:

Degrees of freedom (error): 30.0
Degrees of freedom (regression): 1.0
R-squared: 0.143164205216
Model F-statistic: 5.01254287301
Model F-statistic p-value: 0.0327335138151
Model log-likelihood: -27.9829397908
AIC: 1.87393373693
BIC: 1.96554223085
Root Mean Squared Error (RMSE): 0.58014821514

Coefficient a std error: 2.13460E+00, t-stat: 3.07964E+00, p-stat: 4.40691E-03
Coefficient b std error: 4.23686E+00, t-stat: 7.08030E-01, p-stat: 4.84392E-01

Coefficient Covariance Matrix
[ 12.69186065 -24.87944877]
[-24.87944877 50.00139868]
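If you want to run this kind of fit yourself, scipy’s linregress produces the same quantities: slope, intercept, R-squared, and the slope’s standard error. Here’s a minimal sketch; note the x and y values below are made up for illustration, not the actual team data behind the plot.

```python
import numpy as np
from scipy import stats

# Hypothetical data for illustration only: some per-team draft metric (x)
# versus average wins (y). Not the real data set from the post.
x = np.array([0.40, 0.45, 0.50, 0.52, 0.55, 0.60, 0.65, 0.70])
y = np.array([6.0, 7.5, 7.0, 9.0, 8.5, 9.5, 10.0, 11.0])

# Ordinary least-squares fit of y = a + b*x
fit = stats.linregress(x, y)

a, b = fit.intercept, fit.slope
r_squared = fit.rvalue ** 2   # compare to the "R-squared" line above
se_b = fit.stderr             # standard error of the slope b
t_stat = b / se_b             # t-statistic for the hypothesis b = 0
```

The same slope/standard-error ratio is what drives the significance discussion below the error analysis.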

I don’t know much statistics, but I do know that when the relative error of a fitted parameter exceeds 100% (and here 4.237/3.000 × 100 = 141.2%), the parameter isn’t significant: the slope can’t be distinguished from zero.
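As a sanity check, the reported t-stat and p-value for the slope can be recomputed from just the coefficient, its standard error, and the error degrees of freedom. A short Python sketch, with every number copied from the zunzun output above:

```python
from scipy.stats import t as t_dist

# Figures copied from the zunzun.com output above
b = 2.9998264270189301   # fitted slope
se_b = 4.23686           # slope standard error
df_error = 30            # degrees of freedom (error)

t_stat = b / se_b                                # ~0.708, matches the reported t-stat
p_value = 2 * t_dist.sf(abs(t_stat), df_error)   # two-sided p, ~0.484 as reported
rel_error = se_b / b * 100                       # ~141%, well over 100%
```

The p-value of roughly 0.48 says the same thing as the relative-error rule of thumb: with these data, the slope can’t be distinguished from zero.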

Take home? These data are useful for examining the draft strategies of select winning teams. They are not a mantra for how to draft. If you want to look in depth at the draft strategies of the Vikings versus the Patriots, probably the two most extreme cases in the data set, you’re likely to glean some insight. But the draft methods of one team, or even three or four, aren’t the one and only way to win in the NFL.
