This is a follow-up piece to my previous post on draft trends and football teams. I have some new charts and some new ways of looking at the data, and I’ve found some new analysis tools (such as the fitting machine at zunzun.com). What I don’t have — I’ll be upfront about this — is one true way to draft. The data I have don’t support that.
We’ll start with some comments from Chris Malumphy of drafthistory.com, almost all constructive. I wrote him about my work and he replied. He says in part:
What would also be interesting is how many “compensatory” picks are included in each team’s totals. I believe that New England is among the leaders in receiving compensatory picks (which were first awarded in 1994 or so). I’ve frequently suggested that compensatory picks are contrary to the initial purpose of the draft, which was to award high picks to the poorest teams, in the hope that they would improve. Compensatory picks typically go to teams that decide not to sign their own free agents, which often means teams like New England let relatively good, but perhaps overpriced or problematic players go, knowing that they will get extra draft picks in return. Typically, poor teams aren’t in the position to make “business” decisions like that.
Yes, that’s a really good point. I probably won’t be able to get to anything like that off the bat, unless someone suggests a good exhaustive resource for all compensatory picks ever awarded. Anyone have a guess?
I re-sheeted these data by round, and the result isn’t as visually interesting a spreadsheet. For one, it doesn’t make the point that top-10 picks are almost inversely related to winning. For another, there appears to be something “magical” about the 181-and-down draft bin that sticks out when it is sheeted, and perhaps that’s a side effect of the “compensatory pick” issue that Chris raises.
When I plot the data, we get an interesting trend line, but the fitted slope parameter turns out not to be statistically significant.
The fit, as reported by zunzun.com (a nice online tool for fitting small data sets), is:
y = a + bx
Fitting target of sum of squared absolute error = 1.0770302448975281E+01
a = 6.5737988655760553E+00
b = 2.9998264270189301E+00
and the error analysis is:
Degrees of freedom (error): 30.0
Degrees of freedom (regression): 1.0
R-squared adjusted: 0.114603012057
Model F-statistic: 5.01254287301
Model F-statistic p-value: 0.0327335138151
Model log-likelihood: -27.9829397908
Root Mean Squared Error (RMSE): 0.58014821514
Coefficient a std error: 2.13460E+00, t-stat: 3.07964E+00, p-stat: 4.40691E-03
Coefficient b std error: 4.23686E+00, t-stat: 7.08030E-01, p-stat: 4.84392E-01
Coefficient Covariance Matrix
[ 12.69186065 -24.87944877]
I don’t know much statistics, but I do know that when the relative error of a fitted parameter exceeds 100% (and here 4.237/3.000 × 100 = 141.2%), the parameter isn’t significant. The reported p-value on coefficient b, about 0.48, says the same thing.
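For the skeptical, the slope’s t-statistic, p-value, and relative error can be recomputed from just the coefficient and its standard error in the zunzun output above. A minimal sketch in Python (SciPy is just my choice for an independent check; it isn’t part of the zunzun workflow):

```python
# Recompute the significance numbers for coefficient b from the fit report:
# slope, its standard error, and the error degrees of freedom.
from scipy import stats

b = 2.9998264270189301   # fitted slope (coefficient b) from the report
se_b = 4.23686           # standard error of b from the report
df = 30                  # degrees of freedom (error) from the report

t_stat = b / se_b                          # ~0.708, matching the reported t-stat
p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided p ~0.48, matching the report
rel_error = 100 * se_b / abs(b)            # ~141%: well past the 100% rule of thumb

print(t_stat, p_value, rel_error)
```

Both routes agree: a slope whose standard error exceeds the slope itself can’t be distinguished from zero at any conventional level.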
Take-home? These data are useful for examining the draft strategies of select winning teams. They are not a mantra for how to draft. If you want to look in depth at the draft strategies of the Vikings versus the Patriots, probably the two most extreme cases in the data set, you’re likely to glean some insight. But the draft methods of one team, or even three or four, aren’t the one and only way to win in the NFL.