The competitors are Denver and Seattle, and as stated previously, my model favors Seattle substantially.

Super Bowl
NFC Champion       AFC Champion     Score Diff   Win Prob   Est. Point Spread
Seattle Seahawks   Denver Broncos   1.041        0.739      7.7

 

Of course, by this point my model has been reduced to a single factor, as there is no home field advantage in the Super Bowl and both teams are playoff experienced. Since eight of the eleven playoff games each season come before the conference championships and the Super Bowl, the model works best for those first eight games. Still, it’s always interesting to see what the model calculates.

At least as interesting is the Peyton Manning factor, a player having the second best season of his career (as measured by adjusted yards per attempt). I thought it would be interesting to figure out how much of the value above average of the potent Denver Broncos attack Peyton Manning was responsible for. We’ll start by looking at the simple ranking of the team, divided into its offensive and defensive components. Simple rankings help adjust for the quality of opposition, which for Denver was below league average.

Denver Broncos Simple Ranking Stats
Margin of Victory   Strength of Schedule   Simple Ranking   Defensive Simple Ranking   Offensive Simple Ranking
12.47               -1.12                  11.35            -3.31                      14.65

 

Narrowed down to the essentials, how much of the 14.65 points of Denver offense (above average) was Peyton Manning’s doing? With some pretty simple stats, we can come up with some decent estimates of the Manning contribution to Denver’s value above average.

We’ll start by calculating Peyton’s adjusted yards per attempt, and do so for the league as a whole, using the Pro Football Reference formula. Later, we’ll use the known conversion factors for AYA to turn that contribution into points, and then subtract the league average from that contribution.

Passing Stats, 2013
Player(s)         Completions   Attempts   Yards    Touchdowns   Interceptions   AYA
Peyton Manning    450           659        5477     55           10              9.3
All NFL passing   11102         18136      120626   804          502             6.3

 

The difference between Peyton Manning’s AYA and the league average is 3 points. Peyton Manning threw 659 times, averaging about 41.2 passes per game, compared to the average team’s roughly 35.4 passes per game. To convert an AYA into points per 40 passes, the conversion factor is 3.0. This is math people can do in their head: 3 times 3 equals 9 points. In a 2013 game in which Peyton Manning throws 40 passes, he’ll generate 9 points more offense than the average NFL quarterback. So, of the 14.65 points above average that the Denver Broncos generated, Peyton Manning is responsible for at least 61%.
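The whole estimate can be reproduced in a few lines. This is a sketch of the calculation as described above, using the Pro Football Reference AYA formula discussed below and the table values; the function name is mine.

```python
# PFR adjusted yards per attempt: (yards + 20*TD - 45*INT) / attempts
def aya(yards, tds, ints, attempts):
    return (yards + 20 * tds - 45 * ints) / attempts

manning = round(aya(5477, 55, 10, 659), 1)         # 9.3
league = round(aya(120626, 804, 502, 18136), 1)    # 6.3

# conversion factor: ~3.0 points per 40 passes per point of AYA difference
points_per_40 = round((manning - league) * 3.0, 1)  # 9.0 points
share = points_per_40 / 14.65                       # share of Denver's offensive
print(points_per_40, round(share * 100))            # value above average: 9.0 61
```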

Notes:

There is a 0.5 point difference between the AYA reported by Pro Football Reference and the one I calculated for all NFL teams. I suspect PFR came to theirs by averaging the AYA of all 32 teams, as opposed to calculating the number over all attempts league-wide. To be sure, we’ll grind the number out step by step.

The yards term: 120626
The TD term: 20 x 804 = 16080
The Int term: 45 x 502 = 22590

120626 + 16080 – 22590 = 114116

Numerator over denominator is:

114116 / 18136 = 6.29223… to two significant digits is 6.3.

There are two well known adjusted yards per attempt formulas, which easily reduce to simple scoring models. The first is the equation introduced by Carroll et al. in “The Hidden Game of Football”, which they called the New Passer Rating.

(1) AYA = (YDs + 10*TDs - 45*INTs) / ATTEMPTS

And the Pro Football Reference formula currently in use.

(2) AYA = (YDs + 20*TDs - 45*INTs) / ATTEMPTS

Scoring model corresponding to the THGF New Passer Rating, with opposition curve also plotted. Difference between curves is the turnover value, 4 points.


Formula (1) fits well to a scoring model with the following attributes:

  • The value at the 0 yard line is -2 points, corresponding to scoring a safety.
  • The slope of the line is 0.08 points per yard.
  • At 100 yards, the value of the curve is 6 points.
  • The value of a touchdown in this model is 6.8 points.

The difference, 0.8 points, translated through the slope of the line (i.e. 0.8/0.08), is equivalent to 10 yards. 4 points, the value of a turnover, is equal to 50 yards. 45 was presumably selected to approximate a 5 yard runback.

Pro Football Reference AYA formula translated into a scoring model. Difference in team and opposition curves, the turnover value, equals 3.5 points.


Formula (2) fits well to a scoring model with the following attributes:

  • The value at the 0 yard line is -2 points, corresponding to scoring a safety.
  • The slope of the line is 0.075 points per yard.
  • At 100 yards, the value of the curve is 5.5 points.
  • The value of a touchdown in this model is 7.0 points.

The difference, 1.5 points, translated through the slope of the line (i.e. 1.5/0.075), is equivalent to 20 yards. 3.5 points, the value of a turnover, is equal to 46.67 yards. 45 remains in the INT term for reasons of tradition, and the simple fact that this kind of interpretation of the formulas wasn’t available when Pro Football Reference introduced their new formula. Otherwise, they might have preferred 40.
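The yard-equivalents above follow mechanically from each model’s slope and touchdown value. A quick sketch of that arithmetic, using the values from the two bullet lists (the helper names are mine):

```python
def extra_td_yards(td_value, value_at_100, slope):
    # extra value of a TD beyond the curve's value at 100 yards, in yards
    return (td_value - value_at_100) / slope

def turnover_yards(turnover_points, slope):
    return turnover_points / slope

# Formula (1), THGF: slope 0.08, curve value 6.0 at 100 yards, TD worth 6.8
print(round(extra_td_yards(6.8, 6.0, 0.08), 1))   # 10.0 yards -> the 10*TDs term
print(round(turnover_yards(4.0, 0.08), 1))        # 50.0 yards

# Formula (2), PFR: slope 0.075, curve value 5.5 at 100 yards, TD worth 7.0
print(round(extra_td_yards(7.0, 5.5, 0.075), 1))  # 20.0 yards -> the 20*TDs term
print(round(turnover_yards(3.5, 0.075), 2))       # 46.67 yards
```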

Adjusted yards per attempt or adjusted expected points per attempt?

Because these models show a clearly evident relationship between yards and points, you can calculate expected points from these kinds of formulas. The conversion factor is the slope of the line. If, for example, I wanted to find out how many expected points Robert Griffin III would generate in 30 passes, that’s pretty easy using the Pro Football Reference values of AYA. RG3’s AYA is 8.6, and 0.075 x 30 = 2.25. So, if the Skins can get RG3 to pass 30 times against a league average defense, he should generate 19.35 points of offense. Matt Ryan, with his 7.7 AYA, would be expected to generate 17.33 points of offense in 30 passes. Tony Romo? His 7.6 AYA corresponds to 17.1 expected points per 30 passes.

Peyton Manning, in his best year, 2004, with a 10.2 AYA, could have been expected to generate 22.95 points per 30 passes.
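The conversions in the last two paragraphs reduce to a single multiplication. A sketch, assuming the PFR scoring-model slope of 0.075 points per yard quoted above:

```python
def expected_points(aya, attempts, slope=0.075):
    """Expected points generated in `attempts` passes at a given AYA,
    against a league-average defense."""
    return aya * slope * attempts

# the AYA values quoted above
for name, rating in [("RG3", 8.6), ("Ryan", 7.7), ("Romo", 7.6), ("Manning '04", 10.2)]:
    print(name, round(expected_points(rating, 30), 2))
```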

This simple relationship is one reason why, even if you’re happy with the correlation between the NFL passer rating and winning (which is real but isn’t all that great), you should sometimes consider thinking in terms of AYA.

A Probabilistic Rule of Thumb.

If you think about these scoring models in a simplified way, where there are only two results, either a TD or a non-scoring result, an interesting rule of thumb emerges. The TD term in equation (1) is equal to 10 yards, or 0.8 points. 0.8/6.8 x 100 = 11.76%, suggesting that the odds of *not* scoring, in formula (1), are about 10%. Likewise, for equation (2), whose TD term is 20 yards, 1.5/7 x 100 = 21.43%, suggesting the odds of *not* scoring, in formula (2), are about 20%.
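In code, the rule-of-thumb arithmetic is just the ratio of each TD term’s point value to the model’s touchdown value:

```python
# non-scoring fraction implied by each formula's TD term
thgf_pct = 0.8 / 6.8 * 100   # formula (1): TD term worth 0.8 points, TD worth 6.8
pfr_pct = 1.5 / 7.0 * 100    # formula (2): TD term worth 1.5 points, TD worth 7.0
print(round(thgf_pct, 2), round(pfr_pct, 2))   # 11.76 21.43
```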

This is going to be a mixed bag of a post, talking about anything that has caught my eye over the past couple weeks. The first thing I’ll note is that on the recommendation of Tom Gower (you need his Twitter feed), I’ve read Josh Katzowitz’s book: Sid Gillman: Father of the Passing Game.


I didn’t know much about Gillman as a young man, though the 1963 AFL Championship was part of a greatest-games collection I read through as a teen. The book isn’t a primer on Gillman’s ideas; instead, it’s more a discussion of his life and the issues he faced growing up (it’s clear Sid felt his Judaism affected his marketability as a coach in the college ranks). Not everyone gets the same chances in life, but Sid was a pretty tough guy in his own right, and clearly the passion he felt for the sport drove him to a lot of personal success.

Worth the read. Be sure to read Tom Gower’s review as well, which is excellent.

ESPN is dealing with the football off season by slowly releasing a list of the “20 Greatest NFL Coaches” (NFL.com does its 100 best players, for much the same reason). I’m pretty sure neither Gillman nor Don Coryell will be on the list. The problem, of course, lies in the difference between the notions of “greatest” and “most influential”. The influence of both these men is undeniable. However, the greatest success for both coaches has come as part of their respective coaching (and player) trees: Al Davis and Ara Parseghian come to mind when thinking about Gillman, with Don having a direct influence on coaches such as Joe Gibbs and Ernie Zampese. John Madden was a product of both schools, and folks such as Norv Turner and Mike Martz are clear disciples of the Coryell way of doing things. It’s easy to go on and on here.

What’s harder to see is the separation (or fusion) of Gillman’s and Coryell’s respective coaching trees. Don never coached under or played for Gillman. And when I raised the question on Twitter, Josh Katzowitz responded with these tweets:

Josh Katzowitz : @smartfootball @FoodNSnellville From what I gathered, not much of a connection. Some of Don’s staff used to watch Gillman’s practices, tho.

Josh Katzowitz ‏: @FoodNSnellville @smartfootball Coryell was pretty adament that he didn’t take much from Gillman. Tom Bass, who coached for both, agreed.

Coaching clinics were popular then, and Sid Gillman appeared from Josh’s bio to be a popular clinic speaker. I’m sure these two mixed and heard each other speak. But Coryell had a powerful Southern California connection in Coach John McKay of USC, and I’m not sure how much Coryell and Gillman truly interacted.

Pro Football Weekly is going away, and Mike Tanier has a great article discussing the causes of the demise. In the middle of the discussion, a reader who called himself Richie took it upon himself to start trashing “The Hidden Game of Football” (which factors in because Bob Carroll, a coauthor of THGF, was also a contributor to PFW). Richie seems to think, among other things, that everything THGF discussed was “obvious” and that Bill James invented all of football analytics wholesale by inventing baseball analytics. It’s these kinds of assertions I really want to discuss.

I think the issue of baseball analytics encompassing the whole of football analytics can easily be dismissed by pointing out the solitary nature of baseball and its stats, their lack of entanglement issues, and the lack of a notion of field position, in the football sense of the term. Since baseball doesn’t have any such thing, any stat featuring any kind of relationship of field position to anything, or any stat derived from models of relationships of field position to anything, cannot have been created in a baseball world.

Sad to say, that’s almost any football stat of merit.

On the notion of obvious, THGF was the granddaddy of the scoring model for the average fan. I’d suggest that scoring models are certainly not obvious, or else every article I have with that tag would have been written up and dismissed years ago. What is not so obvious is that scoring models have a dual nature, akin to that of quantum mechanical objects, and the kinds of logic one needs to best understand scoring models parallel the kinds of things a chemistry major might encounter in his junior year of university, in a physical chemistry class (physicists might run into these issues sooner).

Scoring models have a dual nature. They are both deterministic and statistical/probabilistic at the same time.

They are deterministic in that, for a typical down and distance to go, and with a specific play by play data set, you can calculate the odds of scoring down to a hundredth of a point. They are statistical in that they represent the sum of dozens or hundreds of unique events, all compressed into a single measurement. When divorced from the parent data set, the kinds of logic you must use to analyze the meanings of the models, and formulas derived from those models, must take into account the statistical nature of the model involved.

It’s not easy. Most analysts turn models and formulas into something more concrete than they really are.

And this is just one component of the THGF contribution. I haven’t even mentioned the algebraic breakdown of the NFL passer rating they introduced, which dominates discussion of the rating to this day. It’s so influential that to a first approximation, no one can get past it.

Just tell me: how did you get from the formulas shown here to the THGF formula? And if you didn’t figure it out yourself, then how can you claim it is obvious?

I’ve been looking at this model recently, and thinking.

Backstory references, for those who need them: here and here and here.

Pro Football Reference’s AYA statistic as a scoring potential model. The barrier potential represents the idea that scoring chances do not reach 100% as the opponent’s goal line is neared.

If the odds of scoring a touchdown approach 100% as you approach the goal line, then the barrier potential disappears, and the “yards to go” intercept is equal to the value of the touchdown. The values in the PFR model appear to always increase as they approach the goal line. They never go down, the way real values do. Therefore, the model as presented on their pages appears to be a fitted curve, not raw data.

The value they assign the touchdown is 7 points. The EP value of first and goal on the 1 is 6.97 points. 6.97 / 7.00 * 100 = 99.57%. How many of you out there think the chances of scoring a touchdown on the 1 yard line are better than 99%?

Moreover, the EP value of 1st and goal on the 2 yard line is 6.74. OK, if the fitting function is linear, or perhaps quadratic, then how do you go from 6.74, to 6.97, to 7.00? The difference between 6.74 and 6.97 is 0.23 points. Assuming linearity (not true, as first and 10 points on the other end of the curve typically differ by 0.03 points per yard), you get an extrapolated intercept of 7.20 points.
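The arithmetic in the last two paragraphs is easy to check mechanically, using the EP values quoted above:

```python
ep_goal_1, ep_goal_2, td_value = 6.97, 6.74, 7.00   # PFR EP values quoted above

print(round(ep_goal_1 / td_value * 100, 2))   # 99.57 -> implied TD odds at the 1
step = ep_goal_1 - ep_goal_2                  # ~0.23 points for the final yard
print(round(ep_goal_1 + step, 2))             # 7.2 -> linear extrapolation past the 1
```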

The PFR model has its issues. The first down intercept seems odd, and it lacks a barrier potential. To what extent this is an artifact of a polynomial (or other curve) fitted to real data remains to be seen.

Update: added a useful Keith Goldner reference, which has a chart giving probabilities of scoring a touchdown.

After watching one or another controversy break out during the 2011 season, I’ve become convinced that the average “analytics guy” needs a source of play-by-play data on a weekly basis. I’m at a loss at the moment to recommend a perfect solution. I can see the play-by-play data on NFL.com, but I can’t download it. Worst case, you would think you could save the page and get to the data, but that doesn’t work. I suspect the use of AJAX or equivalent server side technology to write the data to the page after the HTML has been presented. Good for business, I’m sure, but not good for Joe Analytics Guy.

One possible source is Pro Football Reference (PFR), which now has play by play data in its box scores and has tended to present its data in an AJAX-free, user-friendly fashion. Whether Joe Analytics Guy can do more than use those data personally, I doubt. PFR is purchasing its raw data from another source, and whatever restrictions the supplier puts on PFR’s data legally trickle down to us.

Further, along with the play by play, PFR is now calculating expected points (EP) for each play. Thing is, what expected points model is Pro Football Reference actually using? Unlike win probabilities, which have one interpretation per data set, EP models are a class of related models which can be quite different in value (discussed here, here, here). If you need independent verification, please note that Keith Goldner has now published 4 separate EP models (here and here): his old Markov Chain model, the new Markov Chain model, a response function model, and a model based on piecewise fits.

That’s question number one. Questions that have to be answered to answer question one include:

  • How is PFR scoring drives?
  • What is their value for a touchdown?
  • If PFR were to eliminate down and distance as variables, what curve do they end up with?

This last would define how well Pro Football Reference’s own EP model supports their own AYA formula. After all, that’s what an AYA formula is: a linearized approximation of an EP model where down and to-go distance are ignored and yards to score is the only independent variable.

Representative Pro Football Reference EP Values
Down   EP (1 yard to go)   EP (99 yards to go)
1      6.97                -0.38
2      5.91                -0.78
3      5.17                -1.42
4      3.55                -2.49

 

My recommendation is that PFR clearly delineate their assumptions in the same glossary where they define their version of AYA. Make it a single click lookup, so Joe Analytics Guy knows what the darned formula actually means. Barring that, I’ve suggested to Neil Paine that they publish their EP model data separately from their play by play data. A blog post with 1st and ten, 2nd and ten, 3rd and ten curves would give those of us in the wild a fighting chance to figure out how PFR actually came by their numbers.

Update: the chart that features 99 yards to go clearly isn’t 1st and 99, 2nd and 99. Those are 1st and 10 values, 2nd and 10, etc at the team’s 1 yard line. The only 4th down value of 2011, 99 yards away, is a 4th and 13 play, so that’s what is reported above.

This is something I’ve wanted to test ever since I got my hands on play-by-play data, and to be entirely honest, doing this test is the major reason I acquired play-by-play data in the first place. Linearized scoring models are at the heart of the stats revolution sparked by the book, The Hidden Game of Football, as their scoring model was a linearized model.

The simplicity of the model they presented, and the ability to derive it from pure reason (as opposed to hard core number crunching), make me want to name it in some way that denotes that fact: perhaps the Standard model, or Common model, or Logical model. Yes, scoring the ‘0’ yard line as -2 points and the 100 as 6, with everything in between linearly proportional between those two, has to be regarded as a starting point for all sane expected points analysis. Further, because it can be derived logically, it can be used at levels of play that don’t have 1 million fans analyzing everything: high school play, or even JV football.

From the scoring models people have come up with, we get a series of formulas that are called adjusted yards per attempt formulas. They have various specific forms, but most operate on an assumption that yards can be converted to a potential to score. Gaining yards, and plenty of them, increases scoring potential, and as Brian Burke has pointed out, AYA style stats are directly correlated with winning.

With play-by-play data, converted to expected points models, some questions can now be asked:

1. Over what ranges are expected points curves linear?

2. What assumptions are required to yield linearized curves?

3. Are they linear over the whole range of data, or over just portions of the data?

4. Under what circumstances does the linear assumption break down?

We’ll reintroduce data we described briefly before, but this time we’ll fit the data to curves.

Linear fit is to formula Scoring Potential = -1.79 + 0.0653*yards. Quadratic fit is to formula Scoring Potential = 0.499 + 0.0132*yards + 0.000350*yards^2. These data are "all downs, all distance" data. The only important variable in this context is yard line, because this is the kind of working assumption a linearized model makes.

Fits to curves above. Code used was Maggie Xiong's PDL::Stats.
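To make the caption concrete, here are the two fitted curves as functions, with a few sample evaluations. This is a re-expression of the fit coefficients quoted above (the function names are mine), not new data:

```python
def linear_fit(yards):
    # "response"-style fit: Scoring Potential = -1.79 + 0.0653*yards
    return -1.79 + 0.0653 * yards

def quadratic_fit(yards):
    # "raw"-style fit: 0.499 + 0.0132*yards + 0.000350*yards^2
    return 0.499 + 0.0132 * yards + 0.000350 * yards ** 2

# yards measured from a team's own goal line (0 = own goal, 100 = opponent's)
for y in (0, 25, 50, 75, 100):
    print(y, round(linear_fit(y), 2), round(quadratic_fit(y), 2))
```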

One simple question that can change the shape of an expected points curve is this:

How do you score a play using play-by-play data?

I’m not attempting, at this point, to come up with “one true answer” to this question; I’ll just note that the different answers to this question yield different shaped curves.

If the scoring of a play is associated only with the drive on which the play was made, then you yield curves like the purple one above. That would mean punting has no negative consequences for the scoring of a play. Curves like this I’ve been calling “raw” formulas, “raw” models. Examples of these kinds of models are Keith Goldner’s Markov Chain model and Bill Connelly’s equivalent points models.

If a punt can yield negative consequences for the scoring of a play, then you get into a class of models I call “response” models, because the whole of the curve of a response model can be thought of as

response = raw(yards) – fraction*raw(100 – yards)

The fraction would be a sum of things like fractional odds of punting, fractional odds of a turnover, fractional odds of a loss on 4th down, etc. And of course in a real model, the single fractional term above is a sum of terms, some of which might not be related to 100 – yards, because that’s not where the ball would end up – a punt fraction term would be more like fraction(punt)*raw(60 – yards).
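As a sketch of the construction, using the quadratic “raw” fit from the caption above and a purely illustrative fraction (not a fitted value):

```python
def raw(yards):
    # quadratic "raw" fit from the caption above
    return 0.499 + 0.0132 * yards + 0.000350 * yards ** 2

def response(yards, fraction=0.5):
    # fraction is illustrative; a real model sums several such terms
    return raw(yards) - fraction * raw(100 - yards)

for y in (10, 50, 90):
    print(y, round(response(y), 2))
```

At midfield the opponent’s term cancels half the raw value, and the curve is pulled down hardest deep in a team’s own territory, which is the qualitative behavior described above.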

Raw models tend to be quadratic in character.  I say this because Keith Goldner fitted first and 10 data to a quadratic here. Bill Connelly’s data appear quadratic to the eye. And the raw data set above fits mostly nicely to a quadratic throughout most of the range.

And I say mostly because the data above appear sharper than quadratic close to the goal line, as if there is “more than quadratic” curvature less than 10 yards to go. And at the risk of fitting to randomness, I think another justifiable question to look at is how scoring changes the closer to the goal line a team gets.

That sharp upward kink plays into how the shape of response models behaves. We’ll refactor the equation above to get at, qualitatively, what I’m talking about. We’re going to add a constant term to the last term in the response equation, because people will calculate the response differently:

response = raw(yards) – fraction*constant*raw(100 – yards)

Now, in this form, we can talk about the shape of curves as a function of the magnitude of “constant”. As constant grows larger, the back end of the curve takes on more of the character of the last 10 yards. A small constant yields a curve less than quadratic but more than linear. A mid sized constant yields a linearized curve. A potent response function yields curves more like those of David Romer or Brian Burke, with more than linear components within 10 yards of both ends of the field. Understand, this is a qualitative description. I have no clues as to the specifics of how they actually did their calculations.

I conclude, though, that linearized models are specific to response function depictions of equivalent point curves, because you can’t get a linearized model any other way.

So what is our best guess at the “most accurate” adjusted yards per attempt formula?

In my data above, fitting a response model to a line yields an equation. Turning the values of that fit into an equation of the form:

AYA = (yards + α*TDs – β*Ints)/Attempts

takes a little algebra. To begin, you have to make a decision on how valuable your touchdown is going to be. Some people use 7.0 points, others use 6.4 or 6.3 points. If TD = 6.4 points, then

delta points = 6.4 + 1.79 – 6.53 = 1.66 points

α = 1.66 points / 0.0653 = 25.4 yards

turnover value = (6.53 – 1.79) + (-1.79) = 6.53 – 2*1.79 = 2.95 points

β = 2.95 / 0.0653 = 45.2 yards

If TDs = 7.0 points, you end up with α = 34.6 yards instead.
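The algebra above can be done mechanically from the linear-fit coefficients. A sketch (the helper names are mine); note that the α you get depends entirely on the touchdown value you assume:

```python
intercept, slope = -1.79, 0.0653   # linear response fit quoted earlier

def alpha(td_value):
    # extra TD value beyond the curve's value at 100 yards, converted to yards
    return (td_value - (intercept + slope * 100)) / slope

def beta():
    # turnover value in points is slope*100 + 2*intercept, then converted to yards
    return (slope * 100 + 2 * intercept) / slope

print(round(alpha(6.4), 1))   # TD = 6.4
print(round(alpha(7.0), 1))   # TD = 7.0
print(round(beta(), 1))       # ~45.2 yards, near the traditional 45
```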

It’s interesting that this fit yields a value of an interception (in yards) almost identical to the original THGF formula. Touchdowns are closer in value to the NFL passer rating than to THGF’s new passer rating. And although I’m critical of Chase Stuart’s derivation of the value of 20 for PFR’s AYA formula, the adjustment they made does seem to be in the right direction.

So where does the model break down?

Inside the 10 yard line. It doesn’t accurately depict the game as it gets close to the goal line. It’s also not down and distance specific in the way a more sophisticated equivalent points model can be. A stat like expected points added gets much closer to the value of an individual play than does an AYA style stat. In terms of a play’s effect on winning, you need win stats, such as Brian’s WPA or ESPN’s QBR, to break things down (though I haven’t seen ESPN give us the QBR of a play just yet, which WPA can do).

Update: corrected turnover value.

Update 9/24/11: In the comments to this link, Brian Burke describes how he and David Romer score plays (states).

The formal phrase is “finite state automaton”, which is imposing and mathy and often too painful to contemplate, until you realize what kinds of things are actually state machines [1].

Tic-Tac-Toe is a state machine. The diagram above, from Wikimedia, shows the partial solution tree to the game.

Tic-tac-toe is a state machine. You have 9 positions on a board, each with a state of empty, X, or O; marks that can be placed on the board by a defined set of rules; and a defined outcome from those rules.

Checkers is also a state machine.

Checkers (draughts) is a state machine. You have 64 squares on a board (32 of them playable), pieces that move through the positions via a set of defined rules, with a defined outcome from those rules.

Chess is a state machine.

Chess is a state machine. You have 64 positions on a board, pieces that move through the positions via a set of defined rules, with a defined outcome from those rules.

If you can comprehend checkers, or even tic-tac-toe, then you can understand state machines.

To treat football as a state machine, start with the idea that football is a function of field position. There are 100 yards on the field, so 100 positions to begin with. Those positions have states (1st and 10, 2nd and 3, etc), there are plays that lead to a transition from position to position and state to state, there is a method of scoring, and there is a defined outcome that results from position, states, plays, scoring and the rules of the game of football.
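The state description above can be sketched directly in code. This is a toy illustration of football-as-state-machine, assuming a deliberately simplified transition rule (no scoring, punting, or penalties); all names are mine:

```python
from typing import NamedTuple

class State(NamedTuple):
    down: int
    to_go: int
    yard_line: int   # 0 = own goal line, 100 = opponent's

def advance(state, gain):
    """Apply a play that gains `gain` yards; returns the next state."""
    new_line = min(state.yard_line + gain, 100)
    if gain >= state.to_go:                      # moved the chains: first down
        return State(1, min(10, 100 - new_line), new_line)
    if state.down == 4:                          # failed 4th down: possession flips
        return State(1, 10, 100 - new_line)
    return State(state.down + 1, state.to_go - gain, new_line)

s = State(1, 10, 25)
s = advance(s, 4)    # 2nd and 6 from the 29
s = advance(s, 7)    # first down at the 36
print(s)             # State(down=1, to_go=10, yard_line=36)
```

A game is then a path through these states, which is exactly the structure a Markov chain or graph-theoretic analysis operates on.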

A lot of the analytical progress that has been made over the past several years comes from taking play by play data, breaking it down into things like games, drives, scoring, and so forth, compiling that info into a state (i.e. down and distance) database, and then asking questions of that database of interest to the analyst.

You can analyze data in a time dependent or a time independent manner. Time dependence is important if you want to analyze for things like win probability. If you’re just interested in expected points models (i.e. the odds of scoring from any particular point on the field), a time independent approach is probably good enough (that’s sometimes referred to as the “perpetual first quarter assumption”).

Net expected point models, all downs included. The purple curve does not account for response opposition drives, the yellow one does. The yellow curve was used to derive turnover values.

Take, for example, Keith Goldner’s Markov chain model. As explained here, a Markov chain is a kind of state machine. The same kinds of ideas that are embedded in simple state machines (such as tic-tac-toe) also power more sophisticated approaches such as this one.

Once a set of states is defined, a game becomes a path through all the states that occur during the course of the game, meaning an analyst can also bring graph theory (see here for an interesting tutorial) into the picture. Again, it’s another tool, one that brings its own set of insights into the analysis.

[1] More accurately, we’re going to be looking at the subset of finite state automata (related to cellular automata) that can be represented as 1 or 2 dimensional grids.  In this context, football can be mapped into a 1 dimensional geometry where the dimension of interest is position on the football field.

Notes: The checkers board is a screen capture of a game played here. The chess game above is Nigel Short-Jan Timman Tilburg 1991, and the game diagram (along with some nice game analysis) comes from the blog Chess Tales.
