Modeling


As I understand it, the Patriots have traded Sony Michel to the Rams for a fifth-round choice and a sixth-round choice. So is it a good trade? A bad trade? What does AV have to say? By the old 2012 Pro Football Reference charts, Sony’s draft position was worth 21 AV. He gave the Patriots 16 AV of play over the past three years, so to break even, all they have to do is recover 5 AV from the picks. Guessing the 5th as pick 150 and the 6th as pick 180, my tools value the incoming picks at 13 AV, making the trade a net positive of 8 AV. Given per-position errors in the charts, this could be anywhere from break even to a small win for the Pats.
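In code, the ledger is trivial (the 21, 16, and 13 AV figures are the chart-based estimates from the paragraph above, not official numbers):

```python
# Rough AV ledger for the Michel trade. All values are this post's
# chart-based estimates (old 2012 PFR chart), not official figures.
michel_pick_value = 21     # AV a pick at Michel's 2018 draft slot is worth
michel_av_delivered = 16   # AV he gave the Patriots over three seasons
incoming_picks_value = 13  # guessed value of the 5th (~#150) and 6th (~#180)

net_for_patriots = michel_av_delivered + incoming_picks_value - michel_pick_value
print(net_for_patriots)  # 8, i.e. a net positive of 8 AV
```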

I wrote the first version of my Open Source Draft Simulator, in C++, in time to analyze the 2001 draft. Later, in 2007, while job hunting, I rewrote the simulator in Ruby to show prospective employers that I could pick up a new language. I didn’t get the job. The Ruby simulator isn’t as statistically versatile, but it works on multiple sports.

I pulled out that ten-year-old code, partly to see if it still works, and partly to see if I could make use of the data I had received from Ourlads. Ourlads publishes a 32-team needs list, which in general is the hardest part of setting up a draft simulator.

The Ruby code, as downloaded, depends on the module 'rdoc/usage'. It isn't essential, and I recommend you comment out or delete the line that reads require 'rdoc/usage'. At that point you'll have a working program. If the warnings at startup bother you, remove the -w flag from the shebang (first) line.

On Linux, create all the files and then get rid of the ^Ms at the end of the lines; I originally developed this sim on Windows. You can strip the ^M characters in place with something like perl -pi -e 's/\r//g' *.rb.

Data sources? Sports Illustrated has a top 100 list that works well. The top 100 list from NFL Draft Scout also yields useful results. I used Ourlads as my ‘serious’ set of needs, but Lance Zierlein has a set, as do other sites.

A typical rule file in my current setup is:

#
# rule file for Cleveland Browns.
#
rule need
#
needlist QB RB OL DB DE
#
cond QB max 1 high 
#
cond RB max 1

Of note: with the SI top-player set, if you don’t mark QB as a “high” need, you’ll end up drafting Saquon Barkley number one. That’s one of the things I like about my own code: slight changes in the needs of a single team can ripple through the entire draft.

A typical mock draft using this setup is:

ruby rubysim.rb -y 2018 -s football

This mock draft was made by rubysim.rb on 2018-04-16


Round 1.

1. Cleveland Browns select Sam Darnold, QB.
2. New York Giants select Bradley Chubb, DE.
3. New York Jets select Baker Mayfield, QB.
4. Cleveland Browns select Saquon Barkley, RB.
5. Denver Broncos select Josh Allen, QB.
6. Indianapolis Colts select Quenton Nelson, G.
7. Tampa Bay Buccaneers select Minkah Fitzpatrick, S.
8. Chicago Bears select Roquan Smith, LB.
9. San Francisco 49ers select Calvin Ridley, WR.
10. Oakland Raiders select Denzel Ward, CB.
11. Miami Dolphins select Vita Vea, DT.
12. Buffalo Bills select Josh Rosen, QB.
13. Washington Redskins select Josh Jackson, CB.
14. Green Bay Packers select Derwin James, S.
15. Arizona Cardinals select Connor Williams, OT.
16. Baltimore Ravens select Mike McGlinchey, OT.
17. Los Angeles Chargers select Tremaine Edmunds, LB.
18. Seattle Seahawks select Marcus Davenport, DE.
19. Dallas Cowboys select Da'Ron Payne, DT.
20. Detroit Lions select Harold Landry, DE.
21. Cincinnati Bengals select Leighton Vander Esch, LB.
22. Buffalo Bills select Courtland Sutton, WR.
23. New England Patriots select Derrius Guice, RB.
24. Carolina Panthers select Isaiah Oliver, CB.
25. Tennessee Titans select Maurice Hurst, DT.
26. Atlanta Falcons select Taven Bryan, DL.
27. New Orleans Saints select Christian Kirk, WR.
28. Pittsburgh Steelers select Rashaan Evans, LB.
29. Jacksonville Jaguars select Kolton Miller, OT.
30. Minnesota Vikings select Arden Key, DE.
31. New England Patriots select Isaiah Wynn, G.
32. Philadelphia Eagles select Justin Reid, S.

Round 2.

33. Cleveland Browns select James Daniels, C.
34. New York Giants select Lamar Jackson, QB.
35. Cleveland Browns select Mike Hughes, CB.
36. Indianapolis Colts select Jaire Alexander, CB.
37. Indianapolis Colts select Ronnie Harrison, S.
38. Tampa Bay Buccaneers select Carlton Davis, CB.
39. Chicago Bears select D.J. Moore, WR.
40. Denver Broncos select Hayden Hurst, TE.
41. Oakland Raiders select Donte Jackson, CB.
42. Miami Dolphins select Ronald Jones II, RB.
43. New England Patriots select Mike Gesicki, TE.
44. Washington Redskins select Will Hernandez, G.
45. Green Bay Packers select Orlando Brown, OT.
46. Cincinnati Bengals select Billy Price, C.
47. Arizona Cardinals select Chukwuma Okorafor, OT.
48. Los Angeles Chargers select Rasheem Green, DT.
49. Indianapolis Colts select Sam Hubbard, DE.
50. Dallas Cowboys select James Washington, WR.
51. Detroit Lions select Brian O'Neill, OT.
52. Baltimore Ravens select Jessie Bates, S.
53. Buffalo Bills select Deon Cain, WR.
54. Kansas City Chiefs select Tim Settle, DT.
55. Carolina Panthers select Lorenzo Carter, DE.
56. Buffalo Bills select Martinas Rankin, OT.
57. Tennessee Titans select Armani Watts, S.
58. Atlanta Falcons select Harrison Phillips, DT.
59. San Francisco 49ers select Uchenna Nwosu, LB.
60. Pittsburgh Steelers select Dallas Goedert, TE.
61. Jacksonville Jaguars select Anthony Averett, CB.
62. Minnesota Vikings select DeShon Elliott, S.
63. New England Patriots select Tyrell Crosby, OT.
64. Cleveland Browns select Ogbonnia Okoronkwo, DE.

Round 3.

65. Buffalo Bills select Darius Leonard, LB.
66. New York Giants select Sony Michel, RB.
67. Indianapolis Colts select Desmond Harrison, OT.
68. Houston Texans select Mark Andrews, TE.
69. New York Giants select Mason Rudolph, QB.
70. San Francisco 49ers select Dante Pettis, WR.
71. Denver Broncos select Kerryon Johnson, RB.
72. New York Jets select Nick Chubb, RB.
73. Miami Dolphins select Jerome Baker, LB.
74. San Francisco 49ers select Equanimeous St. Brown, WR.
75. Oakland Raiders select Malik Jefferson, LB.
76. Green Bay Packers select Michael Gallup, WR.
77. Cincinnati Bengals select Ian Thomas, TE.
78. Washington Redskins select Frank Ragnow, C.
79. Arizona Cardinals select Geron Christian, OT.
80. Houston Texans select Kyzir White, S.
81. Dallas Cowboys select Jamarco Jones, OT.
82. Detroit Lions select Jeff Holland, DE.
83. Baltimore Ravens select Josh Sweat, DE.
84. Los Angeles Chargers select Trenton Thompson, DT.
85. Carolina Panthers select D.J. Chark, WR.
86. Kansas City Chiefs select Braden Smith, G.
87. Los Angeles Rams select Kemoko Turay, DE.
88. Carolina Panthers select Dorance Armstrong Jr., DE.
89. Tennessee Titans select Tarvarus McFadden, CB.
90. Atlanta Falcons select Chad Thomas, DE.
91. New Orleans Saints select Jordan Lasley, WR.
92. Pittsburgh Steelers select Shaquem Griffin, OLB.
93. Jacksonville Jaguars select Rashaan Gaulden, CB.
94. Minnesota Vikings select Tre'Quan Smith, WR.
95. New England Patriots select Anthony Miller, WR.
96. Buffalo Bills select Simmie Cobbs Jr., WR.
97. Arizona Cardinals select Joseph Noteboom, OT.
98. Houston Texans select Nick Nelson, CB.
99. Denver Broncos select Rashaad Penny, RB.
100. Cincinnati Bengals select Jaylen Samuels, RB.

Summary: Replacing Wentz with Foles removes about 6.5 points of offense from the Philadelphia Eagles, turning a high-flying offense into something very average.

Last night the Atlanta Falcons defeated the LA Rams, so now we’re faced with the prospect of the Falcons playing the Eagles. I have an idiosyncratic playoff model, one I treat as a hobby. It is based on three static factors: home field advantage, strength of schedule, and previous playoff experience. It values the Eagles at 0.444 and the Falcons at 1.322, so the difference is -0.878 (a win probability expressed in logits). The inverse logit of -0.878 is 0.294, which is the probability of the Eagles winning; the corresponding point spread estimate is a 6.5-point advantage for the Falcons.
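The logit-to-probability step can be sketched in a few lines of Python. The ratings are the ones quoted above; since the model's logit-to-point-spread conversion isn't shown here, this sketch stops at the win probability:

```python
import math

def inv_logit(x):
    """Convert log-odds (logits) into a probability."""
    return 1.0 / (1.0 + math.exp(-x))

eagles, falcons = 0.444, 1.322
diff = eagles - falcons             # -0.878 logits, Eagles minus Falcons
print(round(inv_logit(diff), 3))    # 0.294, the Eagles' win probability
```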

Another question a Falcons or Eagles fan might have is: how much is Carson Wentz worth as a QB, in points scored? We can use Pro Football Reference’s adjusted yards per attempt (AYA) stat to estimate this, and also to estimate how much better Carson Wentz is than Nick Foles. We have done these kinds of analyses before, for Matt Ryan and Peyton Manning.

Pro Football Reference says that Carson Wentz has an AYA of 8.3 yards per attempt; Nick Foles has an AYA of 5.4. Now let’s calculate the overall AYA for every pass thrown in the NFL, with stats from Pro Football Reference.

(114870 yards + 20*741 TDs - 45*430 INTs) / 17488 attempts
= (114870 yards + 14820 TD “yards” - 19350 INT “yards”) / 17488 attempts
= 110340 net yards / 17488 attempts
= 6.31 yards per attempt, to three significant digits

So about 6.3 yards per attempt. Carson Wentz is 2 yards per attempt better than average, and Nick Foles is 0.9 yards per attempt worse. The magic number is 2.25, which converts yards per attempt into points scored per thirty passes. So Carson, compared to Foles, is worth 2.9 * 2.25 = 6.5 points per game, and 4.5 points per game more than the average NFL quarterback.
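The whole calculation fits in a few lines (the numbers are the ones given above; 2.25 is the post's conversion factor, not something derived here):

```python
# League-wide AYA from the season totals, then AYA gaps converted to
# points per game at thirty pass attempts per game (factor of 2.25).
league_aya = (114870 + 20 * 741 - 45 * 430) / 17488   # about 6.31

wentz_aya, foles_aya = 8.3, 5.4
pts_per_aya = 2.25   # points per game per yard-per-attempt of AYA

print(round((wentz_aya - foles_aya) * pts_per_aya, 1))   # 6.5 vs. Foles
print(round((wentz_aya - league_aya) * pts_per_aya, 1))  # 4.5 vs. average
```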

This doesn’t completely encompass Carson Wentz’s value. According to ESPN’s QBR stat, he accounted for 10 expected points on the ground over 13 games, so he nets about 0.8 points a game rushing as well.

Now, back to some traditional stats. The offensive SRS assigned to Philadelphia by PFR is 7.0, with a defensive SRS of 2.5. If Carson Wentz is worth between 6.5 and 7.3 points per game, then losing him reduces Philadelphia’s offense to something very average, about 0.5 to -0.3. That high-flying offense is almost completely transformed by the loss of its quarterback into an average one.

Note: logits are to probabilities what logarithms are to multiplication. Rather than multiplying probabilities and chaining them transitively, you just add the logits and convert back at the end, much as logarithms let you add instead of multiply.
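A two-line illustration of the note, with made-up ratings: if A is 0.5 logits better than B, and B is 0.7 logits better than C, then the A-versus-C comparison is just the sum.

```python
import math

def inv_logit(x):
    """Convert log-odds (logits) into a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Made-up ratings: A is +0.5 logits over B, B is +0.7 over C.
# Chaining the two comparisons is just addition in logit space:
p_a_over_c = inv_logit(0.5 + 0.7)
print(round(p_a_over_c, 3))  # 0.769
```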

One of the ESPN folks posted FPI odds today, retweeted by Ben Alamar. The numbers are very different from those of my playoff formulas, and the nature of those odds made me suspect that FPI is intrinsically an offensive stat, with the advantages and disadvantages that entails.

One issue I’ve had with offensive stats is that every one I’ve looked at, in terms of predicting playoff performance, fits only to confidence intervals on the order of 85%. Whatever the flaws of my formulas, they fit to confidence intervals of 95%; the effects they touch on are real.

But still, the purpose of this post is to compare FPI odds to the odds generated by some common offensive stats: Pythagorean expectation, SRS, and median point spread, each as calculated by my Perl code.

Results are below.

FPI Odds versus Other Offensive Stats

Game                       FPI    Pythag   Simple Ranking   Median Pt Spread
Kansas City – Tennessee    0.75   0.75     0.79             0.73
Jacksonville – Buffalo     0.82   0.89     0.86             0.73
Los Angeles – Atlanta      0.62   0.75     0.74             0.68
New Orleans – Carolina     0.70   0.73     0.74             0.78


The numbers correlate too well for FPI not to have a large offensive component in its character. In fact, Pythagorean odds correlate so well with FPI that I’m hard pressed to say what advantage FPI offers the generic fan.

Note: the SRS link above points out that PFR has added a home field advantage component to their SRS calculations. I’ll note that our SRS was calibrated against PFR’s pre-2015 formula.


This question came up while I was looking up the last playoff appearance of seven probable NFC playoff teams. Both New Orleans and Philadelphia last played in the playoffs four years ago, in 2013. And then the thought popped into my head: “But Drew Brees is a veteran QB.” The idea seems intuitive; actually constructing such a definition, and later testing it with a logistic regression, is the rub.

There are any number of QBs a fan can point to and say the QB mattered: Roger Staubach in the 1970s, Joe Montana in the 1980s, Ben Roethlisberger in the 21st century, Eli Manning in 2011, Aaron Rodgers last year. But plenty of questions abound. If a veteran QB is an independent variable whose presence or absence changes the odds of winning a playoff game, what tools do we use to define such a person? And what tools would we use to eliminate entanglement, in this case between the team’s overall offensive strength and the QB himself?

The difference between a good metric and a bad one can be seen in the effect of the running game on winning. The correlation between rushing yards per carry and winning is pretty small; the correlation between run success rate and winning is larger. In short, reliably converting 3rd-and-1 contributes more to success than running 5 yards a carry instead of 4.

At this point I’m just discussing the idea. With a definition in mind, we can run single-independent-variable logistic regression tests. Then, with a big enough data set (15 years of playoff data should be enough), we can start testing three-independent-variable logistic models (QB + SOS + PPX).
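As a sketch of what the single-variable test would look like, here is a from-scratch logistic fit on invented playoff results. A real test would use actual game data and a stats package; the win rates below are made up purely to show the mechanics.

```python
import math

def fit_logistic_1var(xs, ys, lr=0.1, steps=5000):
    """Fit P(win) = 1/(1 + exp(-(b0 + b1*x))) by gradient ascent."""
    b0 = b1 = 0.0
    n = float(len(xs))
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p            # gradient of the log-likelihood wrt b0
            g1 += (y - p) * x      # ... and wrt b1
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Invented data: x = 1 if the team started a "veteran" QB, y = 1 if it won.
# Veteran teams go 38-22 here; non-veteran teams go 26-34.
xs = [1] * 60 + [0] * 60
ys = [1] * 38 + [0] * 22 + [1] * 26 + [0] * 34

b0, b1 = fit_logistic_1var(xs, ys)
print(round(b1, 2))  # a positive b1 means "veteran QB" raises the win odds
```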

I’ve been curious, since I took on a new job and a new primary language at work, about the extent to which I could add Python to the set of tools I use for football analytics. For one thing, the scientific area where the analyst most needs help from experts is optimization theory and algorithms, and at this point the developments in Python are more extensive than those in Perl.

To start, you have the scipy and numpy packages, with scipy.optimize offering diverse tools for minimization and least squares fitting. Logistic regressions in Python are discussed here, and lmfit provides some enhancements to the fitting routines in scipy. But first we need to be able to read and write existing data, and from there write the SRS routines. The initial routines are based on my original SRS Perl code, so don’t be surprised if code components look very familiar.

This code uses an ORM layer, SQLAlchemy, to get to my existing databases. To create the class used to fetch the data, we used a Python executable named sqlacodegen, set up in a virtual environment. Its output was:

# coding: utf-8
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
metadata = Base.metadata

class Game(Base):
    __tablename__ = 'games'

    id = Column(Integer, primary_key=True)
    week = Column(Integer, nullable=False)
    visitor = Column(String(80))
    visit_score = Column(Integer, nullable=False)
    home = Column(String(80))
    home_score = Column(Integer, nullable=False)

Which, with slight modifications, can be used to read my data. The whole test program is here:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from pprint import pprint

def srs_correction(tptr = {}, num_teams = 32):
    # Shift every team's SRS and SOS so the league-average SRS is zero.
    total = 0.0
    for k in tptr:
        total += tptr[k]['srs']
    total = total/num_teams
    for k in tptr:
        tptr[k]['srs'] -= total
        tptr[k]['sos'] -= total
def simple_ranking(tptr = {}, correct = True, debug = False):
    # Initialize each team's SRS to its margin of victory (MOV), then
    # iterate SRS = MOV + SOS until the SOS terms stop changing.
    for k in tptr:
        tptr[k]['mov'] = tptr[k]['point_spread']/float(tptr[k]['games_played'])
        tptr[k]['srs'] = tptr[k]['mov']
        tptr[k]['oldsrs'] = tptr[k]['srs']
        tptr[k]['sos'] = 0.0
    delta = 10.0
    iters = 0
    while delta > 0.001:
        iters += 1
        if iters > 10000:  # bail out rather than loop forever
            return True
        delta = 0.0
        for k in tptr:
            # A team's SOS is the average SRS of its opponents.
            sos = 0.0
            for g in tptr[k]['played']:
                sos += tptr[g]['srs']
            sos = sos/tptr[k]['games_played']
            tptr[k]['srs'] = tptr[k]['mov'] + sos
            newdelta = abs(sos - tptr[k]['sos'])
            tptr[k]['sos'] = sos
            delta = max(delta, newdelta)
        for k in tptr:
            tptr[k]['oldsrs'] = tptr[k]['srs']
    if correct:
        srs_correction(tptr)
    if debug:
        print("iters = {0:d}".format(iters))
    return True


year = "2001"
userpass = "user:pass"

nfl = "mysql+pymysql://" + userpass + "@localhost/nfl_" + year
engine = create_engine(nfl)

Base = declarative_base(engine)
metadata = Base.metadata


class Game(Base):
    __tablename__ = 'games'
    id = Column(Integer, primary_key=True)
    week = Column(Integer, nullable=False)
    visitor = Column(String(80))
    visit_score = Column(Integer, nullable=False)
    home = Column(String(80))
    home_score = Column(Integer, nullable=False)

Session = sessionmaker(bind=engine)
session = Session()
res = session.query(Game).order_by(Game.week).order_by(Game.home)

# Accumulate per-team games played, cumulative point spread, and opponents.
tptr = {}
for g in res:
#    print("{0:d} {1:s} {2:d} {3:s} {4:d}".format( g.week, g.home, g.home_score, g.visitor, g.visit_score ))
    if g.home not in tptr:
        tptr[g.home] = {}
        tptr[g.home]['games_played'] = 1
        tptr[g.home]['point_spread'] = g.home_score - g.visit_score
        tptr[g.home]['played'] = [ g.visitor ]
        tptr[g.visitor] = {}
        tptr[g.visitor]['games_played'] = 1
        tptr[g.visitor]['point_spread'] = g.visit_score - g.home_score
        tptr[g.visitor]['played'] = [ g.home ]
 
    else:
        tptr[g.home]['games_played'] += 1
        tptr[g.home]['point_spread'] += (g.home_score - g.visit_score)
        tptr[g.home]['played'] += [ g.visitor ]
        tptr[g.visitor]['games_played'] += 1
        tptr[g.visitor]['point_spread'] += ( g.visit_score - g.home_score )
        tptr[g.visitor]['played'] += [ g.home ]

simple_ranking( tptr )
for k in tptr:
    print("{0:10s} {1:6.2f} {2:6.2f} {3:6.2f}".format( k, tptr[k]['srs'],tptr[k]['mov'], tptr[k]['sos']))

The output was limited to two digits past the decimal, and to that precision my results are identical to those of my Perl code. The routines should look much the same. The only real issue is that you have to float one of the numbers when calculating margin of victory, as the two inputs are integers; Python isn’t as promiscuous about type conversion as Perl is.
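The float() call is there because `/` on two integers truncates in Python 2, where this style of code also had to run; forcing one operand to float gives real division everywhere:

```python
# In Python 2, 7 / 2 == 3 (truncating integer division).
# Wrapping one operand in float() gives real division in both versions:
print(7 / float(2))   # 3.5 everywhere
print(7 // 2)         # 3 -- Python 3's explicit truncating division
```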

One last note: although we import pprint, at this point we’re not using it. With the kind of old-fashioned debugging skills I have, I use pprint the way a Perl programmer might use Data::Dumper, to look at data structures while developing a program.

Update: the original Doug Drinen post about the Simple Ranking System has a new URL. You can now find it here.

Odds for the 2015 NFL playoff final, presented from the AFC team’s point of view:

SuperBowl Playoff Odds

Prediction Method         AFC Team        NFC Team           Score Diff   Win Prob   Est. Point Spread
C&F Playoff Model         Denver Broncos  Carolina Panthers   2.097       0.891       15.5
Pythagorean Expectations  Denver Broncos  Carolina Panthers  -0.173       0.295       -6.4
Simple Ranking            Denver Broncos  Carolina Panthers  -2.3         0.423       -2.3
Median Point Spread       Denver Broncos  Carolina Panthers  -5.0         0.337       -5.0


Last week the system went 1-1, for a total record of 6-4. The system favors Denver more than any other team, and does not like Carolina at all. Understand: when a team makes it to the Super Bowl easily, and a predictive system gave it about a 3% chance of getting there in the first place, it’s reasonable to conclude that in that instance the system really isn’t working.

So we’re going to modify our table a little and add some other predictions and predictive methods. The first is the good old Pythagorean formula. We best-fit the Pythagorean exponent to the data for the year, so there is good reason to believe it is more accurate than the old 2.37; it favors Carolina by a little more than six points. SRS directly gives a point spread, which can be back-calculated into a 57.7% chance of Carolina winning. Likewise, using median point spreads to predict the Denver–Carolina game gives Carolina a 66.3% chance of winning.
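As a sketch of how Pythagorean expectations become head-to-head odds, here is one standard (log5-style) combination. I'm using the classic 2.37 exponent and 2015 regular-season totals as I recall them (Denver 355-296, Carolina 500-308; check PFR), so the result differs somewhat from the table's 0.295, which comes from the year-fitted exponent:

```python
def pythag(pf, pa, exponent=2.37):
    """Pythagorean expectation: pf^x / (pf^x + pa^x)."""
    return pf ** exponent / (pf ** exponent + pa ** exponent)

def matchup(p_a, p_b):
    """Log5-style probability that team A beats team B."""
    return p_a * (1 - p_b) / (p_a * (1 - p_b) + p_b * (1 - p_a))

denver = pythag(355, 296)    # about 0.61
carolina = pythag(500, 308)  # about 0.76
print(round(matchup(denver, carolina), 3))  # Denver's win probability
```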

Note that none of these systems predicted the outcome of the Carolina–Arizona game. Arizona played a tougher schedule and was more of a regular-season statistical powerhouse than Carolina. Arizona, however, began to lose poise as it worked its way through the playoffs, and it lost a lot of poise in the NFC championship game.

Odds for the third week of the 2015 playoffs, presented from the home team’s point of view:

Conference Championship Playoff Odds

Home Team          Visiting Team         Score Diff   Win Prob   Est. Point Spread
Carolina Panthers  Arizona Cardinals     -1.40        0.198      -10.4
Denver Broncos     New England Patriots   1.972       0.879       14.6


Last week the system went 2-2, for a total record of 5-3. The system favors Arizona markedly, and Denver by an even larger margin. That said, the teams my system does not like have already won one game. There have been years when a team my system didn’t like much won anyway; that was the case in 2009, when my system favored the Colts over the Saints. The system isn’t perfect, and it is static: it does not take into account critical injuries, morale, better coaching, and so on.

Odds for the second week of the 2015 playoffs, presented from the home team’s point of view:

Second Round Playoff Odds

Home Team             Visiting Team        Score Diff   Win Prob   Est. Point Spread
Carolina Panthers     Seattle Seahawks     -1.713       0.153      -12.7
Arizona Cardinals     Green Bay Packers    -0.001       0.500        0.0
Denver Broncos        Pittsburgh Steelers   0.437       0.608        3.2
New England Patriots  Kansas City Chiefs   -0.563       0.363       -4.2


Last week the system went 3-1, and perhaps would have gone 4-0 if, after the Burfict interception, Cincinnati had just killed three plays and kicked a field goal.

The system currently gives Seattle a massive advantage in the playoffs. It says that Green Bay/Arizona is effectively an even match up, and that both the AFC games are pretty close. It favors Denver in their matchup, and the Chiefs in theirs.

One last comment about last week’s games. The Cincinnati–Pittsburgh game was the most depressing playoff game I’ve seen in a long time, both for the dirty play on both sides of the ball and for an ending decided by stupid play on Cincinnati’s part. It took away from the good parts of the game: the tough defense when people weren’t pushing the edges of the rules, and the gritty play of McCarron and Roethlisberger. There was some heroic play on both their parts, in pouring rain.

But for me, watching Ryan Shazier lead with the crown of his helmet, and then listening to officials explain away what is obvious on video, more or less took the cake. If, in any way, shape, or form, that kind of hit is legal, then the NFL rules system is busted.

This is going to be a mixed bag of a post, covering whatever has caught my eye over the past couple of weeks. The first thing I’ll note is that, on the recommendation of Tom Gower (you need his Twitter feed), I’ve read Josh Katzowitz’s book Sid Gillman: Father of the Passing Game.


I didn’t know much about Gillman as a young man, though the 1963 AFL Championship was part of a greatest-games collection I read through as a teen. The book isn’t a primer on Gillman’s ideas; instead, it is more a discussion of his life and the issues he faced growing up (it’s clear Sid felt his Judaism affected his marketability as a coach in the college ranks). Not everyone gets the same chances in life, but Sid was a pretty tough guy in his own right, and clearly the passion he felt for the sport drove him to a lot of personal success.

Worth the read. Be sure to read Tom Gower’s review as well, which is excellent.

ESPN is dealing with the football off season by slowly releasing a list of the “20 Greatest NFL Coaches” (NFL.com is doing its 100 best players, for much the same reason). I’m pretty sure neither Gillman nor Don Coryell will be on the list. The problem, of course, lies in the difference between the notions of “greatest” and “most influential”. The influence of both these men is undeniable, but the greatest success for both has come as part of their respective coaching (and player) trees: Al Davis and Ara Parseghian come to mind when thinking about Gillman, while Don directly influenced coaches such as Joe Gibbs and Ernie Zampese. John Madden was a product of both schools, and folks such as Norv Turner and Mike Martz are clear disciples of the Coryell way of doing things. It’s easy to go on and on here.

What’s harder to see is the separation (or fusion) of Gillman’s and Coryell’s respective coaching trees. Don never coached under or played for Gillman. And when I raised the question on Twitter, Josh Katzowitz responded with these tweets:

Josh Katzowitz : @smartfootball @FoodNSnellville From what I gathered, not much of a connection. Some of Don’s staff used to watch Gillman’s practices, tho.

Josh Katzowitz ‏: @FoodNSnellville @smartfootball Coryell was pretty adament that he didn’t take much from Gillman. Tom Bass, who coached for both, agreed.

Coaching clinics were popular then, and from Josh’s bio Sid Gillman appears to have been a popular clinic speaker. I’m sure these two mixed and heard each other speak. But Coryell had a powerful Southern California connection in Coach John McKay of USC, and I’m not sure how much Coryell and Gillman truly interacted.

Pro Football Weekly is going away, and Mike Tanier has a great article discussing the causes of the demise. In the middle of the discussion, a reader calling himself Richie took it upon himself to start trashing The Hidden Game of Football (which factors in because Bob Carroll, a coauthor of THGF, was also a contributor to PFW). Richie seems to think, among other things, that everything THGF discussed was “obvious”, and that Bill James invented all of football analytics wholesale by inventing baseball analytics. It’s these kinds of assertions I really want to discuss.

I think the claim that baseball analytics encompasses the whole of football analytics can easily be dismissed by pointing to the solitary nature of baseball and its stats, their lack of entanglement issues, and the absence of any notion of field position in the football sense of the term. Since baseball has no such thing, any stat featuring a relationship between field position and anything else, or derived from models of such relationships, cannot have been created in a baseball world.

Sad to say, that’s almost any football stat of merit.

On the notion of obvious: THGF was the granddaddy of the scoring model for the average fan. I’d suggest that scoring models are certainly not obvious, or else every article I have with that tag would have been written up and dismissed years ago. What is even less obvious is that scoring models have a dual nature, akin to that of quantum mechanical objects, and the kind of logic needed to understand them parallels what a chemistry major might encounter in a junior-year physical chemistry class (physicists run into these issues sooner).

Scoring models have a dual nature. They are both deterministic and statistical/probabilistic at the same time.

They are deterministic in that, for a given down and distance and a specific play-by-play data set, you can calculate the odds of scoring down to a hundredth of a point. They are statistical in that they represent the sum of dozens or hundreds of unique events, all compressed into a single measurement. When divorced from the parent data set, the logic you use to analyze the meaning of the models, and of formulas derived from them, must take the statistical nature of the model into account.
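A toy version of the "deterministic yet statistical" point: given a play-by-play sample (invented here), the expected points at each field position is an exact average over what are individually random drive outcomes.

```python
from collections import defaultdict

# Invented drive outcomes: (yards from the opponent's goal line at the
# start of the drive, points eventually scored on that possession).
drives = [
    (80, 0), (80, 3), (80, 7), (80, 0),
    (50, 3), (50, 7), (50, 0), (50, 7),
    (20, 7), (20, 3), (20, 7), (20, 7),
]

totals = defaultdict(lambda: [0, 0])  # field position -> [points, drives]
for pos, pts in drives:
    totals[pos][0] += pts
    totals[pos][1] += 1

# Each average is exact for this data set, yet summarizes random events.
for pos in sorted(totals):
    pts, n = totals[pos]
    print(pos, pts / n)  # 20 -> 6.0, 50 -> 4.25, 80 -> 2.5
```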

It’s not easy. Most analysts turn models and formulas into something more concrete than they really are.

And this is just one component of the THGF contribution. I haven’t even mentioned the algebraic breakdown of the NFL passer rating it introduced, which dominates discussion of the rating to this day. It’s so influential that, to a first approximation, no one can get past it.

Just tell me: how did you get from the formulas shown here to the THGF formula? And if you didn’t figure it out yourself, then how can you claim it is obvious?
