After watching one or another controversy break out during the 2011 season, I’ve become convinced that the average “analytics guy” needs a source of play-by-play data on a weekly basis. I’m at a loss at the moment to recommend a perfect solution. I can see the play-by-play data on NFL.com, but I can’t download it. Worst case, you would think you could save the page and get to the data, but that doesn’t work. I suspect the use of AJAX or a similar client-side technique to write the data to the page after the initial HTML has been delivered. Good for business, I’m sure, but not good for Joe Analytics Guy.

One possible source is Pro Football Reference (PFR), which now includes play-by-play data in its box scores, and has tended to present its data in an AJAX-free, user-friendly fashion. Whether Joe Analytics Guy can do more than use those data personally, I doubt. PFR purchases its raw data from another source, and whatever restrictions the supplier puts on PFR’s data legally trickle down to us.

Further, PFR is now calculating expected points (EP) along with the play-by-play data. Thing is, what expected points model is Pro Football Reference actually using? Unlike win probability, which has one interpretation per data set, EP models are a class of related models whose values can differ considerably (discussed here, here, here). If you need independent verification, please note that Keith Goldner has now published 4 separate EP models (here and here): his old Markov chain model, the new Markov chain model, a response function model, and a model based on piecewise fits.

That’s question number one. Questions that have to be answered along the way to answering question one include:

  • How is PFR scoring drives?
  • What is their value for a touchdown?
  • If PFR were to eliminate down and distance as variables, what curve do they end up with?

This last would define how well Pro Football Reference’s own EP model supports their own AYA formula. After all, that’s what an AYA formula is: a linearized approximation of an EP model where down and to-go distance are ignored, and yards to score is the only independent variable.

Representative Pro Football Reference EP Values

         1 yard to go        99 yards to go
    Down      EP        Down      EP
    1         6.97      1        -0.38
    2         5.91      2        -0.78
    3         5.17      3        -1.42
    4         3.55      4        -2.49

My recommendation is that PFR clearly delineate their assumptions in the same glossary where they define their version of AYA. Make it a single click lookup, so Joe Analytics Guy knows what the darned formula actually means. Barring that, I’ve suggested to Neil Paine that they publish their EP model data separately from their play by play data. A blog post with 1st and ten, 2nd and ten, 3rd and ten curves would give those of us in the wild a fighting chance to figure out how PFR actually came by their numbers.

Update: the chart that features 99 yards to go clearly isn’t 1st and 99, 2nd and 99. Those are 1st and 10 values, 2nd and 10, etc at the team’s 1 yard line. The only 4th down value of 2011, 99 yards away, is a 4th and 13 play, so that’s what is reported above.

This is something I’ve wanted to test ever since I got my hands on play-by-play data, and to be entirely honest, doing this test is the major reason I acquired play-by-play data in the first place. Linearized scoring models are at the heart of the stats revolution sparked by the book The Hidden Game of Football, as its scoring model was a linearized model.

The simplicity of the model they presented, and the ability to derive it from pure reason (as opposed to hard-core number crunching), make me want to name it in some way that denotes that fact: perhaps the Standard model, or the Common model, or the Logical model. Yes, scoring the 0 yard line as -2 points and the 100 as 6, with everything in between in linear proportion between those two, has to be regarded as a starting point for all sane expected points analysis. Further, because it can be derived logically, it can be used at levels of play that don’t have 1 million fans analyzing everything: high school play, or even JV football.

From the scoring models people have come up with, we get a series of formulas that are called adjusted yards per attempt formulas. They have various specific forms, but most operate on an assumption that yards can be converted to a potential to score. Gaining yards, and plenty of them, increases scoring potential, and as Brian Burke has pointed out, AYA style stats are directly correlated with winning.

With play-by-play data, converted to expected points models, some questions can now be asked:

1. Over what ranges are expected points curves linear?

2. What assumptions are required to yield linearized curves?

3. Are they linear over the whole range of data, or over just portions of the data?

4. Under what circumstances does the linear assumption break down?

We’ll reintroduce data we described briefly before, but this time we’ll fit the data to curves.

Linear fit is to formula Scoring Potential = -1.79 + 0.0653*yards. Quadratic fit is to formula Scoring Potential = 0.499 + 0.0132*yards + 0.000350*yards^2. These data are "all downs, all distance" data. The only important variable in this context is yard line, because this is the kind of working assumption a linearized model makes.

Fits to curves above. Code used was Maggie Xiong's PDL::Stats.
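The two fitted curves can be written down directly as functions. The coefficients are the ones quoted above; the original fitting was done with PDL::Stats in Perl, so this Python transcription is just for checking values, not refitting.

```python
# Fitted scoring-potential curves from the text (all downs, all distances).
# "yards" is the distance from a team's own goal line, 0 through 100.

def linear_sp(yards):
    """Linear fit: Scoring Potential = -1.79 + 0.0653*yards."""
    return -1.79 + 0.0653 * yards

def quadratic_sp(yards):
    """Quadratic fit: SP = 0.499 + 0.0132*yards + 0.000350*yards**2."""
    return 0.499 + 0.0132 * yards + 0.000350 * yards ** 2

for y in (1, 50, 99):
    print(y, round(linear_sp(y), 2), round(quadratic_sp(y), 2))
```

Note how the two fits diverge near the goal lines: the quadratic sits below the line deep in a team's own territory and above it near the opponent's goal.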

One simple question that can change the shape of an expected points curve is this:

How do you score a play using play-by-play data?

I’m not attempting, at this point, to come up with “one true answer” to this question, I’ll just note that the different answers to this question yield different shaped curves.

If the scoring of a play is associated only with the drive on which the play was made, then you get curves like the purple one above. That would mean punting has no negative consequences for the scoring of a play. Curves like this I’ve been calling “raw” formulas, “raw” models. Examples of these kinds of models are Keith Goldner’s Markov chain model, and Bill Connelly’s equivalent points models.

If a punt can yield negative consequences for the scoring of a play, then you get into a class of models I call “response” models, because the whole of the curve of a response model can be thought of as

response = raw(yards) – fraction*raw(100 – yards)

The fraction would be a sum of things like the fractional odds of punting, the fractional odds of a turnover, the fractional odds of a loss on 4th down, etc. And of course in a real model, the single fractional term above is a sum of terms, some of which might not be related to 100 – yards, because that’s not where the ball would end up; a punt fraction term would be more like fraction(punt)*raw(60 – yards).
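The response construction can be sketched in a few lines of Python. The raw model here is the quadratic all-downs fit from my data, and the fraction (the odds the opponent ends up with the ball) is an illustrative placeholder, not a measured value.

```python
# Sketch of: response = raw(yards) - fraction*raw(100 - yards).
# raw() is the quadratic all-downs fit from the text; fraction is a
# made-up illustrative constant, not a measured probability.

def raw(yards):
    return 0.499 + 0.0132 * yards + 0.000350 * yards ** 2

def response(yards, fraction=0.5):
    """Raw scoring potential, discounted by the opponent's potential at the mirror spot."""
    return raw(yards) - fraction * raw(100 - yards)
```

At midfield the two raw terms are equal, so response(50) is just (1 - fraction)*raw(50); the correction matters most deep in a team's own territory, where it drives the curve negative.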

Raw models tend to be quadratic in character.  I say this because Keith Goldner fitted first and 10 data to a quadratic here. Bill Connelly’s data appear quadratic to the eye. And the raw data set above fits mostly nicely to a quadratic throughout most of the range.

And I say mostly because the data above appear sharper than quadratic close to the goal line, as if there is “more than quadratic” curvature less than 10 yards to go. And at the risk of fitting to randomness, I think another justifiable question to look at is how scoring changes the closer to the goal line a team gets.

That sharp upward kink plays into how the shape of response models behaves. We’ll refactor the equation above to get at, qualitatively, what I’m talking about. We’re going to add a constant term to the last term in the response equation, because people will calculate the response differently:

response = raw(yards) – fraction*constant*raw(100 – yards)

Now, in this form, we can talk about the shape of the curves as a function of the magnitude of “constant”. As constant grows larger, the back end of the curve takes on more of the character of the last 10 yards. A small constant yields a less-than-quadratic but more-than-linear curve. A mid-sized constant yields a linearized curve. A potent response function yields curves more like those of David Romer or Brian Burke, with more-than-linear components within 10 yards of both ends of the field. Understand, this is a qualitative description. I have no clue as to the specifics of how they actually did their calculations.
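This qualitative behavior can be demonstrated numerically. The sketch below uses my quadratic all-downs fit as the raw model and an arbitrary fraction of 0.5, then measures how line-like the response curve is for small, mid-sized, and large values of constant.

```python
# Qualitative sketch: response = raw(yards) - fraction*constant*raw(100 - yards).
# raw() is the all-downs quadratic fit from the text; fraction is fixed at an
# arbitrary 0.5, and we vary "constant" to see when the curve linearizes.

def raw(yards):
    return 0.499 + 0.0132 * yards + 0.000350 * yards ** 2

def response(yards, fraction, constant):
    return raw(yards) - fraction * constant * raw(100 - yards)

def max_line_residual(points):
    """Worst deviation of the points from their least-squares line."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return max(abs(y - (slope * x + intercept)) for x, y in points)

for constant in (0.5, 2.0, 4.0):
    pts = [(y, response(y, 0.5, constant)) for y in range(0, 101, 5)]
    print(constant, round(max_line_residual(pts), 3))
```

For this stand-in raw model the quadratic terms cancel exactly when fraction*constant = 1, so the mid-sized constant (2.0 here) yields a perfectly straight line, while smaller and larger constants bend the curve in opposite directions, which is the qualitative behavior described above.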

I conclude though, that linearized models are specific to response function depictions of equivalent point curves, because you can’t get a linearized model any other way.

So what is our best guess at the “most accurate” adjusted yards per attempt formula?

In my data above, fitting a response model to a line yields an equation. Turning the values of that fit into an equation of the form:

AYA = (yards + α*TDs – β*Ints)/Attempts

Takes a little algebra. To begin, you have to make a decision on  how valuable your touchdown  is going to be. Some people use 7.0 points, others use 6.4 or 6.3 points. If TD = 6.4 points, then

delta points = 6.4 + 1.79 – 6.53 = 6.4 – 4.74 = 1.66 points

α = 1.66 points / 0.0653 ≈ 25.4 yards

turnover value = (6.53 – 1.79) + (–1.79) = 6.53 – 2*1.79 = 2.95 points

β = 2.95 / 0.0653 ≈ 45.2 yards

If TDs = 7.0 points, you end up with α ≈ 34.6 yards instead.
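Assuming the linear fit quoted earlier (intercept -1.79, slope 0.0653), the algebra above can be packaged as a small Python sketch. The touchdown value is left as a parameter, since authors disagree on it.

```python
# AYA coefficients from a linear scoring-potential fit SP = INTERCEPT + SLOPE*yards.
# Fit parameters are the ones quoted in the text; the TD value is the user's choice.

INTERCEPT, SLOPE = -1.79, 0.0653

def alpha(td_points):
    """Surplus yards credited per TD: (TD value - line's value at the goal line) / slope."""
    value_at_goal = INTERCEPT + SLOPE * 100
    return (td_points - value_at_goal) / SLOPE

def beta():
    """Yards debited per turnover: (value lost + value gained) / slope."""
    # value(y) + value(100 - y) = 2*INTERCEPT + 100*SLOPE, the same at every yard line
    swing = (INTERCEPT + SLOPE * 100) + INTERCEPT
    return swing / SLOPE
```

alpha() and beta() encode the same algebra for any chosen touchdown value, so the 6.4-point and 7.0-point cases are just two calls to the same function.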

It’s interesting that this fit yields a value for an interception (in yards) almost identical to the original THGF formula. Touchdowns are closer in value to the NFL passer rating than to THGF’s new passer rating. And although I’m critical of Chase Stuart’s derivation of the value of 20 for PFR’s AYA formula, the adjustment they made does seem to be in the right direction.

So where does the model break down?

Inside the 10 yard line. It doesn’t accurately depict the game as it gets close to the goal line. It’s also not down-and-distance specific in the way a more sophisticated equivalent points model can be. A stat like expected points added gets much closer to the value of an individual play than does an AYA-style stat. In terms of a play’s effect on winning, you need win stats, such as Brian’s WPA or ESPN’s QBR, to break things down (though I haven’t seen ESPN give us the QBR of a play just yet, which WPA can do).

Update: corrected turnover value.

Update 9/24/11: In the comments to this link, Brian Burke describes how he and David Romer score plays (states).

The formal phrase is “finite state automaton”, which is imposing and mathy and often too painful to contemplate, until you realize what kinds of things actually are state machines [1].

Tic-Tac-Toe is a state machine. The diagram above, from Wikimedia, shows the partial solution tree to the game.

Tic-tac-toe is a state machine. You have 9 positions on a board, a state of empty, X, or O, marks that can be placed on the board by a defined set of rules, and you have a defined outcome from those sets of rules.

Checkers is also a state machine.

Checkers (draughts) is a state machine. You have 32 playable squares on a 64-square board, pieces that move through those squares via a set of defined rules, with a defined outcome from those rules.

Chess is a state machine.

Chess is a state machine. You have 64 positions on a board, pieces that move through the positions via a set of defined rules, with a defined outcome from those rules.

If you can comprehend checkers, or even tic-tac-toe, then you can understand state machines.

To treat football as a state machine, start with the idea that football is a function of field position. There are 100 yards on the field, so 100 positions to begin with. Those positions have states (1st and 10, 2nd and 3, etc), there are plays that lead to a transition from position to position and state to state, there is a method of scoring, and there is a defined outcome that results from position, states, plays, scoring and the rules of the game of football.
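The framing above can be sketched as a toy state machine. The transition rules here are an illustrative simplification (no punts, no scoring other than touchdowns, turnovers on downs only), not a full model of football.

```python
# A toy football state machine: a state is (down, to_go, yardline), where
# yardline is the distance from a team's own goal line. Plays are transitions.
# This is a deliberately stripped-down sketch, not a complete rule set.

def next_state(state, gain):
    """Advance one play by `gain` yards; return a new state or a terminal label."""
    down, to_go, yardline = state
    yardline += gain
    if yardline >= 100:
        return "TOUCHDOWN"
    if gain >= to_go:
        return (1, 10, yardline)                 # moved the chains: fresh 1st and 10
    if down == 4:
        return "TURNOVER ON DOWNS"
    return (down + 1, to_go - gain, yardline)    # same series, next down

# A short drive: 1st and 10 at the 20, then gains of 3, 4, and 5 yards.
state = (1, 10, 20)
for gain in (3, 4, 5):
    state = next_state(state, gain)
print(state)  # (1, 10, 32) -- the 5-yard gain on 3rd and 3 converts
```

Every drive is then just a walk through these states, which is exactly the structure a down-and-distance database captures.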

A lot of the analytical progress that has been made over the past several years comes from taking play by play data, breaking it down into things like games, drives, scoring, and so forth, compiling that info into a state (i.e. down and distance) database, and then asking questions of that database of interest to the analyst.

You can analyze data in a time dependent or a time independent manner. Time dependence is important if you want to analyze for things like win probability. If you’re just interested in expected points models (i.e. the odds of scoring from any particular point on the field), a time independent approach is probably good enough (that’s sometimes referred to as the “perpetual first quarter assumption”).

Net expected points models, all downs included. The purple curve does not account for the opposition’s response drives; the yellow one does. The yellow curve was used to derive turnover values.

Take, for example, Keith Goldner’s Markov chain model. As explained here, a Markov chain is a kind of state machine. The same kinds of ideas that are embedded in simple state machines (such as tic-tac-toe) also power more sophisticated approaches such as this one.

Once a set of states is defined, a game becomes a path through all the states that occur during the course of the game, meaning an analyst can also bring graph theory (see here for an interesting tutorial) into the picture. Again, it’s another tool, one that brings its own set of insights into the analysis.

[1] More accurately, we’re going to be looking at the subset of finite state automata (related to cellular automata) that can be represented as 1 or 2 dimensional grids.  In this context, football can be mapped into a 1 dimensional geometry where the dimension of interest is position on the football field.

Notes: The checkers board is a screen capture of a game played here. The chess game above is Nigel Short-Jan Timman Tilburg 1991, and the game diagram (along with some nice game analysis) comes from the blog Chess Tales.

The value of a turnover is a topic addressed in The Hidden Game of Football, noting that the turnover value consists of the loss of value by the team that lost the ball and the gain of value  by the team that recovered the ball. To think in these terms, a scoring model is necessary, one that gives a value to field position. With such a model then, the value is

Turnover = Value gained by team with the ball + Value lost by team without the ball

In  the case of the classic models of THGF, that value is 4 points, and it is 4 points no matter what part of the field the ball is recovered.

That invariance is a product of the invariant slope of the scoring model. The model in THGF is linear, the derivative of a line is a constant, and the slopes, because this model doesn’t take into account any differences between teams, cancel. That’s not true in models such as the Markov chain model of Keith Goldner, the cubic fit to a “nearly linear” model of Aaron Schatz in 2003, and the college expected points model of Bill Connelly on the site Football Study Hall (he calls his model equivalent points, but it’s clearly the same thing as an expected points model). Interestingly, Bill’s model and Keith’s model have a quadratic appearance, which guarantees a better-than-constant slope throughout their curves. Aaron’s cubic fit has a clear better-than-constant slope beyond the 50 yard line or so.

Formulas with slopes exceeding a constant result in turnover values that maximize at the end zones and minimize in the middle of the field, giving plots that Aaron calls the “Happy Turnover Smile Time Hour”. As an example, this is the value of a turnover on first and ten (ball lost at the LOS) for Keith Goldner’s model:

First and ten turnover value from Keith Goldner’s Markov chain model

And this is the piece of code you can use to calculate this curve yourself.
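The code from the original post isn’t reproduced here. As a stand-in, this Python sketch generates the same smile shape, using my quadratic all-downs fit in place of Keith’s actual Markov chain values, so the numbers are illustrative, not his.

```python
# Stand-in for the turnover-value curve: the raw model here is the quadratic
# all-downs fit from my own data, NOT Keith Goldner's Markov chain model, so
# this reproduces the shape of the curve rather than his exact values.

def raw(yards):
    return 0.499 + 0.0132 * yards + 0.000350 * yards ** 2

def turnover_value(yards):
    """Value lost by the offense plus value gained by the defense at the same spot."""
    return raw(yards) + raw(100 - yards)

for y in (1, 25, 50, 75, 99):
    print(y, round(turnover_value(y), 2))
```

Because the quadratic terms add rather than cancel, the curve bottoms out at midfield and peaks at both end zones: the smile.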

Note also that the models of Bill Connelly and Keith have no negative expected points values. This is unlike the David Romer model, and also unlike Brian Burke’s expected points model. I suspect this is a consequence of how drives are scored. Keith is pretty explicit about the extinction “events” for drives in his model, none of which inherit any subsequent scoring by the opposition. In contrast, Brian suggests that a drive that stalls inherits some “responsibility” for points subsequently scored.

A 1st down on an opponent’s 20 is worth 3.7 EP. But a 1st down on an offense’s own 5 yd line (95 yards to the end zone) is worth -0.5 EP. The team on defense is actually more likely to eventually score next.

This is interesting because this “inherited responsibility” tends to linearize the data set, except inside the 10 yard line on either end. A pretty good approximation to the first-and-ten data of the Brian Burke link above can be had with a line valued at 5 points at one end and -1 point at the other. The slope becomes 0.06 points per yard, and the value of the turnover becomes 4 points in this linearization of the Advanced NFL Stats model. The value of the touchdown is 7.0 points minus the value of the subsequent field position, which is often assumed to be the 27 yard line. That yields

27*0.06 – 1.0 = 1.62 – 1.0 = 0.62 points of ensuing field position, so 7.0 – 0.62 ≈ 6.4 points for a TD.

This would yield, for a “Brianized” new passer rating formula, a surplus yardage value for the touchdown of 1.4 points / 0.06 = 23.3 yards.
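That arithmetic, as a sketch. The endpoint values (-1 and 5) are the eyeball fit described above, so everything downstream is approximate.

```python
# Eyeball linearization of Brian Burke's first-and-ten EP data: a line worth
# -1 point at a team's own goal line and 5 at the opponent's. The -1/5
# endpoints are the eyeball fit from the text, not Brian's published numbers.

SLOPE = (5.0 - (-1.0)) / 100            # 0.06 points per yard

def ep(yards):
    return -1.0 + SLOPE * yards

td_value = round(7.0 - ep(27), 1)       # 7 points minus ensuing field position (~27 yd line)
alpha = (td_value - ep(100)) / SLOPE    # surplus TD yards for a "Brianized" AYA
turnover_pts = ep(50) + ep(50)          # value lost + value gained, constant everywhere
turnover_yds = turnover_pts / SLOPE
```

The turnover swing is constant on a line, which is why a single yardage number (about 67 yards here) can stand in for it.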

The plot is below:

Eyeball linearization of BB’s EP plots yield this simplified linear scoring model. The surplus value of a TD = 23.3 yards, and a turnover is valued 66.7 yards.

Update 9/29/2011: No matter how much I want to turn the turnover equation into a difference, it’s better represented as a sum. You add the value lost to the value gained.

In chemistry, people will speak of the chemical potential of a reaction. That a mix of chemicals has a potential doesn’t mean the reaction will happen. There is an activation energy that prevents it. To note, the reaction energy can’t exceed the chemical potential of a reaction. Energy is conserved, and can neither be created nor destroyed.

Likewise, common models of the value of yardage assign a scoring potential to yards. I know of 5 models offhand, of which the simplest is the linear model (the one discussed in The Hidden Game of Football), which we’re going to derive by argument from first principles. There is also Keith Goldner’s Markov chain model (see here and here), David Romer’s quadratic spline model (see here, or just search for “David Romer football” via a good Internet search engine), the linear model of Football Outsiders in 2003, and Brian Burke’s expected points analysis (see here, here, here, and here). And just as in thermodynamics, where energy is conserved, this scoring potential has to be a conserved quantity, else the logic of the model falls apart.

One of the points of talking about the linear model is that it applies to all levels of football, not just the pros. Second, since it doesn’t require people to break down years’ worth of play-by-play data to understand it, the logic is useful as a first approximation. Third, I suspect some clever math geek could derive all the other models as Taylor series expansions where the first term in the series is the linear model itself. At one level, it has to be regarded as the foundation of all the scoring potential models.

Deriving the linear model.

If I start at the one yard line and then retreat into my own end zone and get tackled, I’ve just lost 2 points. This is true regardless of the level of football being played. If instead I run 99 yards to my opponent’s end zone, I score 6 points. That means the scale of value in the common linear model is 8 points, and if we count each yard as equal in scoring potential, we start at -2 points in my own end zone and 6 in my opponent’s, and every 12.5 yards on the field, I gain 1 point of value. I do not have to crunch any numbers to assume this model as a first approximation.
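That first-principles argument fits in a couple of lines:

```python
# The common linear model: -2 points in your own end zone, 6 in your
# opponent's, linear in between. 8 points over 100 yards is 0.08 points
# per yard, or 1 point per 12.5 yards.

def common_model(yards):
    """Scoring potential at a given distance (in yards) from your own goal line."""
    return -2.0 + (8.0 / 100.0) * yards
```

No play-by-play data was needed to write it down, which is exactly the point.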

Other models derive from analyzing a large data set of games for down, distance to go, and time situations. They can follow all the consequences of being in those down/distance combinations and then derive real probabilities of scoring. We’re going to call those models EP, EPA or NEP models. The value of these models is that rather than assuming some probability of scoring, average scoring probabilities are built into the model itself.

What’s the value of a turnover?

In the classic linear model, as explained by The Hidden Game of Football, the cost of a turnover is 4 points. This is because the difference in value between both teams is 4 points everywhere on the field. The moment the model becomes nonlinear, that no longer applies. Both Keith Goldner’s model and the FO model predict that the value of a turnover at the line of scrimmage minimizes in the middle of the field and maximizes at the ends.

4 points is worth 50 yards. We’ll come  back to that in a bit.

What’s the value of a possession?

It’s the value of not turning  the ball over, and since we know the value of a turnover, in the linear model, possession is worth 4 points. In other models, this may change.

The value of the possession in the linear model is always 4 points, even at the end of the game. To explain, there are two kinds of models that predict two kinds of things:

scoring potential models predict scoring

win probability models predict winning

The scoring potential of  the possession does not change as the game is ending. The winning potential does change and should change markedly as the game begins to end.

How much is a down worth?

This is an important issue, and not readily studied without a data-heavy model. I’d suggest following a couple of the Brian Burke links above; they shed a terrific amount of light on the topic. Essentially, the value of a down at a particular field position and distance is the difference in expected points between downs at that position and distance.

How much is a touchdown worth?

We’ll start with the expected points models, because it is easy to see how they work. EPA or NEP style models have a total assigned value for the score (6.4 pts Romer, 6.3 Burke), so the value of scoring a touchdown is the value of the score minus the value of the position on the field. It has to be that way because the remaining value is a function of field position et al. If this weren’t true, you would violate conservation of scoring potential.

Likewise, in the linear model, the value of the touchdown is equivalent, due to linearity and scoring potential conservation, to the yards required to score the touchdown. This means that if the defense recovers the ball on the opponent’s 5 (i.e. the defense has just handed you 95 yards of value), and your team runs for 3 yards and then passes 2 yards for the score, the value of the touchdown is 2 yards, or 0.16 points, and the value of the entire drive is 5 yards.
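The worked example above, in code, using linear-model values only:

```python
# Worked example: the defense hands you the ball at the opponent's 5 yard
# line (95 yards of field value), then a 3-yard run and a 2-yard touchdown
# pass. In the linear model, only the yards a team gains count as value created.

POINTS_PER_YARD = 8.0 / 100.0          # slope of the common linear model

td_play_points = 2 * POINTS_PER_YARD   # the 2-yard scoring play itself
drive_value_yards = 3 + 2              # the whole drive's value, in yards

print(td_play_points, drive_value_yards)
```

The 95 yards of value were created by the defense, not the drive, which is why the drive is only worth 5 yards.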

In this context, the classic interpretation of what THGF calls the new rating system doesn’t make a lot of sense.

RANKING = (yards + 10*TDs – 45*Ints)/attempts

I say so because the yards already encompass the value of the touchdown(s). In this context, the second term could be regarded as an approximation of the value of the extra point (10 yards being 0.8 points of value in this model), and 45 instead of 50 as an estimate that the average INT changes field position by about 5 yards.

Finally, this analysis raises the question of what model Pro Football Reference’s adjusted yards per attempt actually describes. I’ll try, however. If you adjust the value of yards to create a “barrier potential” term to describe the touchdown, you get the following bit of algebra:

0.2(x + 2) + (x + 2) = value of true scoring difference = 6.4 + 2 = 8.4

1.2x + 2.4 = 8.4

1.2x = 6.0

x = 5

So, if you adjust the slope so the value of the line at 100 equals 5 instead of 6, then the average value of a yard becomes 0.07 points, and the cost of a turnover then becomes 3 points, or about 43 yards.
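The algebra above, checked numerically (x is the adjusted value of the line at the 100 yard line):

```python
# Checking the "barrier potential" algebra: solve 0.2*(x + 2) + (x + 2) = 8.4
# for x, the adjusted value of the line at the 100 yard line.

x = 8.4 / 1.2 - 2                       # 1.2*(x + 2) = 8.4  ->  x = 5
slope = (x + 2) / 100                   # line now runs -2 to 5: 0.07 points/yard
turnover_pts = 2 * (-2) + slope * 100   # value(y) + value(100 - y) = 3 points
turnover_yds = turnover_pts / slope     # about 43 yards
```

With the rescaled line, all three quoted numbers (0.07 points per yard, 3 points, about 43 yards) fall out of the same two fit parameters.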

How much is a field goal worth?

The same logic that applies for a touchdown also applies for a field goal: it’s the value of the score minus the value of the particular field position, down, etc. from which the goal is scored. Note that in the linear model, the net value is actually negative for a field goal scored from the 37.5 yard line in. And this actually makes sense, because the sum of the score values, as the number of scores grows large, should approach zero in a well balanced EPA/NEP model. In the linear model, I suspect it will approach some nonzero number, which would be an approximation of the average deviation from the best-fit EPA/NEP function itself.

Okay, so what if high scoring teams have this zero scoring value? What’s going on?

This is the numerator of a rate term, akin to that of a shooting percentage in the NBA. But since EP models are already averaged, the proper analogy is to the shooting percentage minus the league average shooting percentage. And to continue the analogy a bit further, to score in the NBA, you not only need to shoot (not necessarily a good percentage), but you also need to make your own shot. Teams that put themselves into position to score are the equivalent: they make their own shot. I’ll also note this +/- value is probably also a representation of the TD to FG ratio.

Conclusion

Scoring potential models are part of the new wave of football analysis and the granddaddy of all scoring potential models  is the linear model discussed extensively  in The Hidden Game of Football.  In these models, scoring potential is a conserved quantity and can neither be created nor destroyed. Some of the consequences of this conservation are discussed above.
