The three sites we noted last year (Cool Standings, Football Outsiders, and NFL Forecast) are at it again, providing predictions of who is going to be in the playoffs.


Cool Standings uses Pythagorean expectations to do their predictions (and for some reason in 2011, ignored home field advantage), FO uses their proprietary DVOA stats, and NFL Forecast uses Brian Burke’s predictive model.

Blogging the Beast has a terrific article on “the play”. If you watched any Dallas-Philadelphia games in 2011, you’ll know exactly what I mean: the way LeSean McCoy, with a simple counter trap, treated the Cowboys line as if it were Swiss cheese.

The most important new link, perhaps, is a new Grantland article by Chris Brown of Smart Football, this one on Chip Kelly. Not only is the writing good, but I love the photos:

Not my photo. This is from Chris Brown’s Chip Kelly article (see link in text).

as an example. Have you ever seen a better photo of the gap assignments of a defense?

Ed Bouchette has a good article, with Steelers defenders talking about Michael Vick. Neil Payne has two interesting pieces (here and here) on how winning early games is correlated with the final record for the season.

Brian Burke has made an interesting attempt to break down EP (expected points) data to the level of individual teams. I’ve contributed to the discussion there. There is a lot to the notion that the slope of the EP curve reflects the ease with which a team can score: the shallower the slope, the easier it is for the team to score.

Note that the defensive contribution to an EP curve will depend on how expected points are actually scored. In a Keith Goldner-type Markov chain model (a “raw” EP model), a defense cannot affect its own EP curve; it can only affect an opponent’s curve. In a Romer/Burke-type EP formulation, the defensive effect on a team’s EP curve and the opponent’s EP curve is complex. Scoring by the defense has an “equal and opposite” effect on team and opponent EP, the slope being affected by the frequency of that scoring as a function of yard line. Various kinds of stops could affect the slope as well. Since scoring opportunities increase for an offense the closer it gets to the goal line, an equal stop probability per yard line would end up yielding unequal changes in scoring chances, and thus changes in slope.

There are three interesting sites doing the dirty job of forecasting playoff probabilities. The first is Cool Standings, which uses Pythagorean expectations to calculate the odds of successive wins and losses, and thus the likelihood of a team making it to the playoffs. The second is a page on the Football Outsiders site named DVOA Playoff Odds Report, which uses their signature DVOA stat (a “success” stat) to generate the probability of a team making it to the playoffs. Then there is the site NFL Forecast, which has a page that predicts playoff winners using Brian Burke’s predictive model.

Of the three, Cool Standings is the most reliable in terms of updates. Which model is actually most accurate is something each reader will have to weigh. Pythagorean expectations, in my opinion, are an underrated predictive stat. DVOA tends to emphasize consistency and carries large turnover penalties. BB’s metrics have tended to emphasize explosiveness, and more recently, running consistency, as determined by Brian’s version of the run success stat.

I’ve found these sites to be more reliable than local media (in particular Atlanta sports radio) in analyzing playoff possibilities. For a couple weeks now it’s been clear, for example, that Dallas pretty much has to win its division to have any playoff chances at all, while the Atlanta airwaves have been talking about how Atlanta’s wild card chances run through (among other teams) Dallas. Uh, no they don’t. These sites, my radio friends, are more clued in than you.

The recent success of DeMarco Murray has energized the Dallas fan base. Felix Jones is being spoken of as if he’s some kind of leftover (I know, a 5.1 YPC over a career is such a drag), and people are taking Murray’s 6.7 YPC for granted. But that wasn’t what got to me in the fan circles. It’s that Julius Jones was becoming a whipping boy again, the source of every running back sin there is, and so I wanted to build some tools to help analyze Julius’s career, and at the same time look at Marion Barber III’s numbers, since these two are historically linked.

We’ll start with this database, and a bit of SQL, something to let us find running plays. The SQL is:

select down, togo, description from nfl_pbp where season = 2007 and gameid LIKE "%DAL%" and description like "%J.Jones%" and not description LIKE '%pass%' and not description LIKE '%PENALTY on DAL%' and not description like '%kick%' and not description LIKE '%sacked%'

It’s not perfect: I’m not picking up plays where a QB is sacked and the RB recovers the ball. A better bit of SQL might help, but it’s a place to start. We bury this SQL into a program that parses the description string for the phrase “for X yards”, or alternatively “for no gain”, and adds the results up. From this we could calculate yards per carry, but more importantly, we’ll calculate run success, and we’ll also calculate something I’m going to call a failure rate.

For our purposes, a failure rate is the number of plays that gained 2 yards or less, divided by the total number of running attempts, multiplied by 100. The purpose of the failure rate is to investigate whether Julius, in 2007, became the master of the 1 and 2 yard run. One common fan conception of his style of play in his last year in Dallas is that “he had plenty of long runs but had so many 1 and 2 yard runs as to be useless.” I wish to investigate that.
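
A Perl sketch of such a program might look like this; it assumes the pbp.db SQLite database built later in these posts, any play the two regexes don’t match is skipped, and run success (which needs down and distance) is left as an exercise:

#!/usr/bin/perl
# Sketch: count carries, yards, and the "failure rate" for J.Jones in 2007.
# Assumes the pbp.db database and nfl_pbp table described elsewhere in
# these posts; the regexes only handle "for X yards" and "for no gain".
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=pbp.db", "", "",
    { RaiseError => 1 } );

my $sql = qq{select down, togo, description from nfl_pbp
    where season = 2007 and gameid LIKE "%DAL%"
    and description like "%J.Jones%"
    and not description LIKE '%pass%'
    and not description LIKE '%PENALTY on DAL%'
    and not description like '%kick%'
    and not description LIKE '%sacked%'};

my ( $carries, $yards, $failures ) = ( 0, 0, 0 );
for my $row ( @{ $dbh->selectall_arrayref($sql) } ) {
    my $desc = $row->[2];
    my $gain;
    if    ( $desc =~ /for (-?\d+) yards?/ ) { $gain = $1; }
    elsif ( $desc =~ /for no gain/ )        { $gain = 0; }
    else                                    { next; }    # unparsed play
    $carries++;
    $yards += $gain;
    $failures++ if $gain <= 2;    # a 2-yards-or-less carry is a "failure"
}

die "no plays found\n" unless $carries;
printf "carries %d, yards %d, ypc %.1f, failure rate %.1f%%\n",
    $carries, $yards, $yards / $carries, 100 * $failures / $carries;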


This is something I’ve wanted to test ever since I got my hands on play-by-play data, and to be entirely honest, doing this test is the major reason I acquired play-by-play data in the first place. Linearized scoring models are at the heart of the stats revolution sparked by the book The Hidden Game of Football, as its scoring model was a linearized model.

The simplicity of the model they presented, and the ability to derive it from pure reason (as opposed to hard-core number crunching), make me want to name it in a way that denotes that fact: perhaps the Standard model, or the Common model, or the Logical model. Scoring the 0 yard line as -2 points and the 100 as 6, with everything in between a linear relationship between those two (i.e. EP = -2 + 0.08*yards), has to be regarded as a starting point for all sane expected points analysis. Further, because it can be derived logically, it can be used at levels of play that don’t have 1 million fans analyzing everything: high school play, or even JV football.

From the scoring models people have come up with, we get a series of formulas that are called adjusted yards per attempt formulas. They have various specific forms, but most operate on an assumption that yards can be converted to a potential to score. Gaining yards, and plenty of them, increases scoring potential, and as Brian Burke has pointed out, AYA style stats are directly correlated with winning.

With play-by-play data, converted to expected points models, some questions can now be asked:

1. Over what ranges are expected points curves linear?

2. What assumptions are required to yield linearized curves?

3. Are they linear over the whole range of data, or over just portions of the data?

4. Under what circumstances does the linear assumption break down?

We’ll reintroduce data we described briefly before, but this time we’ll fit the data to curves.

Linear fit is to formula Scoring Potential = -1.79 + 0.0653*yards. Quadratic fit is to formula Scoring Potential = 0.499 + 0.0132*yards + 0.000350*yards^2. These data are "all downs, all distance" data. The only important variable in this context is yard line, because this is the kind of working assumption a linearized model makes.

Fits to curves above. Code used was Maggie Xiong's PDL::Stats.
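
The fitting step itself is tiny. Here’s a sketch, assuming PDL::Stats’s ols() interface (it returns a hash whose b entry holds the fit coefficients, constant term included); the $ep values below are synthetic stand-ins for the binned expected-point averages pulled from the database:

#!/usr/bin/perl
# Sketch of the curve fitting with PDL::Stats. The $ep data here are
# generated from the published quadratic as a stand-in; in the real fit
# they come from the play-by-play database.
use strict;
use warnings;
use PDL;
use PDL::Stats::GLM;

my $yd = 10 * sequence(10) + 5;    # yard-line bin midpoints
my $ep = 0.499 + 0.0132 * $yd + 0.000350 * $yd**2;    # stand-in data

# linear fit: Scoring Potential = b0 + b1*yards
my %lin = $ep->ols($yd);
print "linear coefficients: $lin{b}\n";

# quadratic fit: assumes ols() accepts a 2-D piddle of IVs,
# one column per term (yards, yards^2)
my %quad = $ep->ols( cat( $yd, $yd**2 ) );
print "quadratic coefficients: $quad{b}\n";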

One simple question that can change the shape of an expected points curve is this:

How do you score a play using play-by-play data?

I’m not attempting, at this point, to come up with “one true answer” to this question; I’ll just note that different answers to this question yield different shaped curves.

If the scoring of a play is associated only with the drive on which the play was made, then you get curves like the purple one above. That would mean punting has no negative consequences for the scoring of a play. Curves like this I’ve been calling “raw” formulas, “raw” models. Examples of these kinds of models are Keith Goldner’s Markov chain model and Bill Connelly’s equivalent points models.

If a punt can yield negative consequences for the scoring of a play, then you get into a class of models I call “response” models, because the whole of the curve of a response model can be thought of as

response = raw(yards) – fraction*raw(100 – yards)

The fraction would be a sum of things like fractional odds of punting, fractional odds of a turnover, fractional odds of a loss on 4th down, etc. And of course in a real model, the single fractional term above is a sum of terms, some of which might not be related to 100 – yards, because that’s not where the ball would end up  – a punt fraction term would be more like fraction(punt)*raw(60 – yards).

Raw models tend to be quadratic in character.  I say this because Keith Goldner fitted first and 10 data to a quadratic here. Bill Connelly’s data appear quadratic to the eye. And the raw data set above fits mostly nicely to a quadratic throughout most of the range.

And I say mostly because the data above appear sharper than quadratic close to the goal line, as if there is “more than quadratic” curvature less than 10 yards to go. And at the risk of fitting to randomness, I think another justifiable question to look at is how scoring changes the closer to the goal line a team gets.

That sharp upward kink plays into how the shape of response models behaves. We’ll refactor the equation above to get at, qualitatively, what I’m talking about. We’re going to add a constant term to the last term in the response equation, because people will calculate the response differently:

response = raw(yards) – fraction*constant*raw(100 – yards)

Now, in this form, we can talk about the shape of curves as a function of the magnitude of “constant”. The larger the constant, the more the back end of the curve takes on the character of the last 10 yards. A small constant yields a curve more than linear but less than quadratic. A mid-sized constant yields a linearized curve. A potent response function yields curves more like those of David Romer or Brian Burke, with more than linear components within 10 yards of both ends of the field. Understand, this is a qualitative description; I have no clue as to the specifics of how they actually did their calculations.
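
To make the qualitative argument concrete, here’s a minimal sketch that builds a response curve from the raw quadratic fit above; the fraction and constant values are invented for illustration, not fitted:

#!/usr/bin/perl
# Sketch: turn the "raw" quadratic fit into a response curve.
# raw() uses the published quadratic coefficients; the fraction and
# constant passed to response() are made-up illustrative values.
use strict;
use warnings;

sub raw { my $y = shift; return 0.499 + 0.0132 * $y + 0.000350 * $y**2; }

sub response {
    my ( $y, $fraction, $constant ) = @_;
    return raw($y) - $fraction * $constant * raw( 100 - $y );
}

for my $y ( 0, 25, 50, 75, 100 ) {
    printf "%3d: raw %5.2f  response %5.2f\n",
        $y, raw($y), response( $y, 0.25, 1.0 );
}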

I conclude though, that linearized models are specific to response function depictions of equivalent point curves, because you can’t get a linearized model any other way.

So what is our best guess at the “most accurate” adjusted yards per attempt formula?

In my data above, fitting a response model to a line yields an equation. Turning the values of that fit into an equation of the form:

AYA = (yards + α*TDs – β*Ints)/Attempts

Takes a little algebra. To begin, you have to make a decision on  how valuable your touchdown  is going to be. Some people use 7.0 points, others use 6.4 or 6.3 points. If TD = 6.4 points, then

delta points = 6.4 + 1.79 – 6.53 = 1.79 – 0.13 = 1.66 points

α = 1.66 points / 0.0653 points per yard = 25.4 yards

turnover value = (6.53 – 1.79) + (-1.79) = 6.53 – 2*1.79 = 2.95 points

β = 2.95 / 0.0653 = 45.2 yards

If TDs = 7.0 points, you end up with α = 34.6 yards instead.
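
The arithmetic above packs into a few lines; a sketch, using the coefficients of the linear fit:

#!/usr/bin/perl
# Sketch: derive alpha and beta from the linear fit
# Scoring Potential = intercept + slope*yards.
use strict;
use warnings;

my ( $intercept, $slope ) = ( -1.79, 0.0653 );

sub alpha {    # surplus value of a touchdown, converted to yards
    my $td_points = shift;
    return ( $td_points - ( $intercept + 100 * $slope ) ) / $slope;
}

# turnover swing: EP(y) + EP(100 - y) is constant = 100*slope + 2*intercept
my $beta = ( 100 * $slope + 2 * $intercept ) / $slope;

printf "TD = 6.4: alpha = %.1f yards\n", alpha(6.4);
printf "TD = 7.0: alpha = %.1f yards\n", alpha(7.0);
printf "beta = %.1f yards\n", $beta;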

It’s interesting that this fit yields a value for an interception (in yards) almost identical to the original THGF formula’s. The touchdown value is closer to the one embedded in the NFL passer rating than to THGF’s new passer rating. And although I’m critical of Chase Stuart’s derivation of the value of 20 for PFR’s AYA formula, the adjustment they made does seem to be in the right direction.

So where does the model break down?

Inside the 10 yard line. It doesn’t accurately depict the game as it gets close to the goal line. It’s also not down-and-distance specific in the way a more sophisticated equivalent points model can be. A stat like expected points added gets much closer to the value of an individual play than an AYA-style stat does. In terms of a play’s effect on winning, you then need win stats, such as Brian’s WPA or ESPN’s QBR, to break things down (though I haven’t seen ESPN give us the QBR of a play just yet, which WPA can do).

Update: corrected turnover value.

Update 9/24/11: In the comments to this link, Brian Burke describes how he and David Romer score plays (states).

The formal phrase is “finite state automaton”, which is imposing and mathy and often too painful to contemplate, until you realize what kinds of things actually are state machines [1].

Tic-Tac-Toe is a state machine. The diagram above, from Wikimedia, shows the partial solution tree to the game.

Tic-tac-toe is a state machine. You have 9 positions on a board, each in a state of empty, X, or O; marks are placed on the board by a defined set of rules, and you have a defined outcome from those rules.

Checkers is also a state machine.

Checkers (draughts) is a state machine. You have 64 positions on a board, pieces that move through the positions via a set of defined rules, with a defined outcome from those rules.

Chess is a state machine.

Chess is a state machine. You have 64 positions on a board, pieces that move through the positions via a set of defined rules, with a defined outcome from those rules.

If you can comprehend checkers, or even tic-tac-toe, then you can understand state machines.

To treat football as a state machine, start with the idea that football is a function of field position. There are 100 yards on the field, so 100 positions to begin with. Those positions have states (1st and 10, 2nd and 3, etc.), there are plays that lead to transitions from position to position and state to state, there is a method of scoring, and there is a defined outcome that results from positions, states, plays, scoring, and the rules of the game of football.
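
A minimal sketch of that idea, with invented play outcomes and deliberately simplified rules:

#!/usr/bin/perl
# Sketch: football as a state machine. A state is (yard line, down,
# distance); a play is a transition to a new state. The play outcomes
# are invented, and the rules are simplified (no penalties, kicks,
# goal-to-go situations, or clock).
use strict;
use warnings;
use Data::Dumper;

my $state = { ydline => 20, down => 1, togo => 10 };

sub transition {
    my ( $s, $gain ) = @_;
    my %next = %$s;
    $next{ydline} += $gain;
    return { outcome => 'touchdown' } if $next{ydline} >= 100;
    if ( $gain >= $s->{togo} ) {    # moved the chains
        @next{ 'down', 'togo' } = ( 1, 10 );
    }
    elsif ( $s->{down} == 4 ) {     # turnover on downs: flip the field
        return { outcome => 'turnover', ydline => 100 - $next{ydline} };
    }
    else {
        $next{down}++;
        $next{togo} -= $gain;
    }
    return \%next;
}

# a made-up three-play drive
for my $gain ( 4, 7, 80 ) {
    $state = transition( $state, $gain );
    last if $state->{outcome};
}
print Dumper($state);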

A lot of the analytical progress that has been made over the past several years comes from taking play by play data, breaking it down into things like games, drives, scoring, and so forth, compiling that info into a state (i.e. down and distance) database, and then asking questions of that database of interest to the analyst.

You can analyze data in a time dependent or a time independent manner. Time dependence is important if you want to analyze for things like win probability. If you’re just interested in expected points models (i.e. the odds of scoring from any particular point on the field), a time independent approach is probably good enough (that’s sometimes referred to as the “perpetual first quarter assumption”).

Net expected points models, all downs included. The purple curve does not account for opposition response drives; the yellow one does. The yellow curve was used to derive turnover values.

Take, for example, Keith Goldner’s Markov chain model. As explained here, a Markov chain is a kind of state machine. The same kinds of ideas that are embedded in simple state machines (such as tic-tac-toe) also power more sophisticated approaches such as this one.

Once a set of states is defined, a game becomes a path through all the states that occur during the course of the game, meaning an analyst can also bring graph theory (see here for an interesting tutorial) into the picture. Again, it’s another tool, one that brings its own set of insights into the analysis.

[1] More accurately, we’re going to be looking at the subset of finite state automata (related to cellular automata) that can be represented as 1 or 2 dimensional grids.  In this context, football can be mapped into a 1 dimensional geometry where the dimension of interest is position on the football field.

Notes: The checkers board is a screen capture of a game played here. The chess game above is Nigel Short-Jan Timman Tilburg 1991, and the game diagram (along with some nice game analysis) comes from the blog Chess Tales.

Brian Burke has made play-by-play data from 2002 to 2010 available here as .CSV files. The files are actually pretty small, about 5 megs for a year’s worth of data. CSV is a convenient format, and the data themselves are well enough organized that an Excel or OpenOffice junkie can use the product, and so can those of us who work with SQL databases. The advantage of a SQL database is the query language you inherit. And what we’re going to show is how to embed Brian’s data into a small, simple SQLite database (see here for D. Richard Hipp’s site, and here for the Wikipedia article).

SQLite is a tiny SQL engine, about 250 kilobytes in size. That’s right, 250 kilobytes. It’s intended to be embedded in applications, and so it doesn’t have the overhead of a network service, the way MySQL and Postgres do. It is extensively used in things like browsers (Firefox), mail clients, and internet metrics applications (Unica’s NetTracker). The code is in the public domain, and there are commercially supported versions of this free product you can buy, if you’re into that kind of thing. Oracle, among others, sells a commercial derivative of this free product.

A SQLite database is a single file, so once you create it,  you could move the file onto a USB stick and carry it around with you (or keep it on your Android phone). The database that results is about 55 megabytes in size, not much different in size from the cumulative .CSVs themselves.

Brian’s data lack a primary key, which is fine for spreadsheets but creates issues in managing walks through sequential data in a database. We’ll create a schema file (we’ll call it schema.sql) that adds one, as so:
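
Something along these lines will do; treat it as a sketch, since the exact column list should mirror the header line of Brian’s CSV files (only gameid, season, down, togo, ydline, and description are actually used in these posts; the remaining columns are my guesses at that header). The autoincrementing id is the primary key we’re adding:

-- schema.sql: columns other than the six used in these posts are
-- guesses at Brian's CSV header; adjust to match your files.
CREATE TABLE nfl_pbp (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    gameid      TEXT,
    qtr         INTEGER,
    min         INTEGER,
    sec         INTEGER,
    off         TEXT,
    def         TEXT,
    down        INTEGER,
    togo        INTEGER,
    ydline      INTEGER,
    description TEXT,
    offscore    INTEGER,
    defscore    INTEGER,
    season      INTEGER
);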

Use a text editor to create it. With the sqlite3 binary, create a database by saying:


sqlite3 pbp.db
sqlite>.read schema.sql
sqlite>.tables
nfl_pbp
sqlite>.exit

Once that’s all done, we’ll use Perl and the DBI module to load these data into our SQLite table. Loading is fast so long as you handle the whole load as a single transaction, bracketed by the $dbh->begin_work and $dbh->commit statements.
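
A sketch of the loader; the column order is assumed to match the schema above, and Text::CSV is used because the description fields contain commas:

#!/usr/bin/perl
# Sketch of the loader: one transaction around all the inserts.
# usage: perl load.pl file1.csv file2.csv ...
# Assumes each file begins with a header line and that the column
# order matches the schema above.
use strict;
use warnings;
use DBI;
use Text::CSV;

my $dbh = DBI->connect( "dbi:SQLite:dbname=pbp.db", "", "",
    { RaiseError => 1 } );
my $csv = Text::CSV->new( { binary => 1 } );

my $sth = $dbh->prepare(
    qq{INSERT INTO nfl_pbp (gameid, qtr, min, sec, off, def, down,
       togo, ydline, description, offscore, defscore, season)
       VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)}
);

$dbh->begin_work;    # one transaction makes the load fast
for my $file (@ARGV) {
    open my $fh, '<', $file or die "$file: $!";
    $csv->getline($fh);    # skip the header line
    while ( my $row = $csv->getline($fh) ) {
        $sth->execute(@$row);
    }
    close $fh;
}
$dbh->commit;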

Once loaded, you can begin using the data almost immediately:

sqlite> select count(*) from nfl_pbp;
384809
sqlite> select distinct season from nfl_pbp;
2002
2003
2004
2005
2006
2007
2008
2009
2010
sqlite> select count( distinct gameid ) from nfl_pbp;
2381

As far as the data themselves go, I’ll warn you that the ydline field is a little “lazy”,  in that if you score a touchdown from the 20, the extra point play and the ensuing kick also “occur” on the 20. So you end up with interesting sql statements like this when you search the data:


sqlite> select count(*) from nfl_pbp where ydline = 1 and not description like "%extra point%" and not description like "%two-point%" and not description like "%kicks %";
3370
sqlite> select count(*) from nfl_pbp where ydline = 1 and description like "%touchdown%" and not description like "%extra point%" and not description like "%two-point%" and not description like "%kicks %";
1690

Using the DBI module, or whatever database interface your language supports, you can start crunching data towards game outcome probabilities in no time.

In Brian Burke’s recent roundup, he references a Fifth Down blog article on Rex Ryan’s philosophy of offense, one where running is heavily emphasized and the yardage? Not so much. He then says that as an offensive philosophy, it seems “ridiculous”, except in the metaphoric sense of a boxer’s jab: using the run to keep an opponent off balance so that he can land the “killing blow”.

I tend to think that Brian’s boxing metaphor is, at best, an incomplete picture. For one, he doesn’t see the jab as a knockout punch, but for Muhammad Ali it was. For another, the jab is fast, elusive, confusing. The run, by contrast, is a slow play, and there is nothing particularly elusive or confusing about it. Rex-like coaches often run when the run is most expected.

The way Rex is using the run, in my opinion, is closely tied to the way Bill Parcells used to use the run, especially in the context of Super Bowl 25. This New York Times article, about Super Bowl 25, details Parcells’ view of the philosophy neatly.

Parcells' starting running backs averaged about 3.7 ypc throughout his NFL coaching career.

To quote Bill:

“I don’t know what the time of possession was,” the Giants’ coach would say after the Giants’ 20-19 victory over the Buffalo Bills in Super Bowl XXV. “But the whole plan was try to shorten the game for them.”

The purpose, of course, is time control: optimizing time of possession and thus reducing the opposing offense’s opportunities for big plays. It’s a classic reaction to an opponent’s big-play offense, to their ability to create those terrific net yards per attempt stats [1].

Note also that Rex is primarily a defensive coach. If the game-changing, explosive component of a football team is the defense, doing everything to suppress the opponent’s offense only hands more tools to the defensive team. It forces the opponent’s offense to take risks to score at all. It makes them go down the field in the least amount of time possible. It takes the opponents out of their comfort zone, especially if they are used to large, early leads.

The value of time, though, is hard to quantify. Successful time control is folded into stats like WPA, and thus is highly situation dependent; the value of such a strategy is very hard to determine with our current set of analytic tools. Total time of possession captures the real value of time no better than total rushing yards captures the real value of the running game in an offense.

Chris, from Smart Football, says that the classic tactic for a less talented team (a “David”) facing a more talented team (a “Goliath”) is to use plenty of risky plays, to throw the outcome into a high risk, high reward, high  variance regime. The opposite approach, to minimize the scoring chances of the opposition, is a bit neglected in Chris’s original analysis, because he assumed huge differences in talent. However, he explicitly includes it here, as a potential high variance “David” strategy.

It’s ironic to think of running as the strategy of an underdog, but that’s what it is in this instance. New England is the 500 pound gorilla in the AFC East, ranked #1 on offense in 2 of the last 4 years, and that’s the team Rex has to beat. And think about it more, using a college analogy for now: what teams do you know, undersized and undermanned, that use a ground game to keep them in the mix? The military academies: teams like Army, Navy, and Air Force, using ground-based option football.

[1] The down side of a loose attitude towards first and second down yardage is that it places an emphasis on third down success rate, and thus execution in tough situations.

Summary: The NFL passer rating can be considered to be the sum of two adjusted yards per attempt formulas, one cast in units of yards and the other using catches as a measure of yards. We show, in this article, how to build such a model by construction.

My previous article has led to some very nice emails back and forth with the Pro Football Focus folks. In thinking about ways to explain the complexities of the original NFL formula,  it occurred to me that there are two yardage terms because the NFL passer rating can be regarded as the sum of two adjusted yards per attempt formulas. Once you begin thinking in those terms, it’s not all that hard to derive an NFL style formula.

Our basic formula will be

<1> AYA = (yards + α*TDs – β*Ints)/Attempts

The Hidden Game of Football’s new passer rating is a formula of this kind, with α = 10 and β = 45. Pro Football Reference’s AY/A has an α value of 20 and a β value of 45. On this blog, we’ve shown that these formulas are tightly associated with scoring models.

Using the relationship Yards = YPC*Catches, we then get

<2> AYA = (YPC*Catches + α*TDs – β*Ints)/Attempts

Since the point of the exercise is to end up with an NFL-esque formula, we’ll multiply both sides of equation <2> by 20/YPC.

<3> 20*AYA/YPC = (20*Catches + 20*α*TDs/YPC – 20*β*Ints/YPC)/Attempts

Adding equations <1> and <3>, we now have

<4> (20/YPC + 1)*AYA = (20*Catches + Yards + [20/YPC + 1]*α*TDs – [20/YPC + 1]*β*Ints)/Attempts

and if we now define RANKING as the left hand side of equation <4>, A as [20/YPC + 1]*α and B as [20/YPC + 1]*β, formula <4> becomes

RANKING = (20*Catches + Yards + A*TDs – B*Ints)/Attempts

Look familiar? This is the same form as the NFL passer  rating, when stripped of its multiplier and the additive coefficient. To complete the derivation, multiply both sides of the equation by 100/24 and then add 50/24 to both sides. You end up with

RANKING = 100/24*[(20*Catches + Yards + A*TDs - B*Ints)/Attempts] + 50/24

which is the THGF form of the NFL passer rating, when A = 80 and B = 100.

If YPC equals 11.4, then the conversion coefficient (20/YPC + 1) becomes 2.75. The relationship between the scoring model coefficients α and β and the NFL style passer model coefficients A and B become

A = 2.75*α
B = 2.75*β

Just for the sake of argument, we’re going to set alpha to 25, pretty close to the 23.3 that we get from a linearized Brian Burke model, and beta we’ll set to 60, 6.7 yards less than the 66.7 yards we calculated from the linearized Brian Burke scoring model. Using those values, we get 68.75 for A and 165 for B. Rounding the first value to the nearest 10 and rounding B down a little, our putative NFL-style model becomes:

RANKING = (20*Catches + Yards + 70*TDs – 160*Ints)/Attempts

Note that formulas <1> and <2> do not contribute equally to the final sum. Equation <2> is weighted by the factor (20/YPC)/(20/YPC + 1) and equation <1> by the factor 1/(20/YPC + 1). When YPC is about 11.4 yards, equation <2> contributes about 63.7% of the total and equation <1> the remaining 36.3%. Complaints that the NFL formula is heavily driven by completion percentage are correct.

Using the values α = 20 and β = 45, which are values found in Pro Football Reference’s version of adjusted yards per attempt, we then get values of A and B that are 55 and 123.75 respectively. Rounding down to the nearest 10, and plugging these values into the NFL style formula yields

RANKING = (20*Catches + Yards + 50*TDs – 120*Ints)/Attempts
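To make the comparison concrete, here’s a sketch of the constructed rating in its core form (no 100/24 multiplier, no 50/24 additive term) for the two coefficient sets derived above, plus the THGF form of the NFL rating; the stat line is invented:

#!/usr/bin/perl
# Sketch: the constructed NFL-style rating in its core form, for the
# two derived coefficient sets, and the THGF form with A=80, B=100.
use strict;
use warnings;

sub ranking {
    my ( $s, $A, $B ) = @_;
    return ( 20 * $s->{catches} + $s->{yards}
             + $A * $s->{tds} - $B * $s->{ints} ) / $s->{attempts};
}

# made-up stat line for illustration
my %qb = ( attempts => 500, catches => 320, yards => 3800,
           tds => 25, ints => 12 );

printf "A = 70, B = 160 (Burke-flavored): %.2f\n", ranking( \%qb, 70, 160 );
printf "A = 50, B = 120 (PFR-flavored):   %.2f\n", ranking( \%qb, 50, 120 );

# THGF form of the NFL passer rating: scale the A = 80, B = 100 core
printf "THGF form of the NFL rating:      %.2f\n",
    100 / 24 * ranking( \%qb, 80, 100 ) + 50 / 24;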

Note that the two models in question have smaller A values than the core of the traditional NFL model (80) and larger B values than the traditional NFL model (100). This probably reflects the times: the 1970s were a defensive era, and it was harder to score then. As it becomes harder to score, the magnitude of the TD term should increase. TD/interception ratios were also smaller in the 1950s, 1960s, and 1970s; as interceptions were more a part of the job, perhaps their effect wasn’t as valued when the original NFL formula was constructed.

Afterword: in many respects, this article is just the reverse of the arguments here. However, the proof by construction yields some useful formulas and, in my opinion, is easier to explain.

Update: more exhaustive derivation of the NFL passer rating.

When I was an undergrad at the University of Guam, all the science majors hung out in the Biology Department office. In part, this was because some of the biologists had licenses to fish and scuba outside the coral reef of Guam, and so you never knew what would be dragged into the building. Another reason was a small but efficient library of science books, one of which was by George Gamow. I wish I recalled the title, as one topic in this book had a powerful influence on me.

It discussed dimensional analysis, and showed an example of using dimensional analysis to derive a formula for some physical process. I’ve long forgotten the analysis and the page, but it left an indelible impression of  the power of accurately accounting for the  physical dimensions of the components of a formula.

On August 15th, Pro Football Focus introduced a new passer rating formula. It is:

Ranking = 4.66667*[20*Completions + 20*Drops + Yards in Air + 20*TDs – 45*Ints]/(Attempts – Spikes – Throw Aways)

There are some interesting ideas in this formula, but it seems seriously flawed from my point of view. Complaints in order are:

1. It is double counting yards.

2. It is trying to add two different kinds of yardage metrics in the same formula.

3. It doesn’t seem to understand the origin of the TD and interception terms it actually is using.

4. Items 1 and 3 interact in ways that I suspect the author never intended, yielding a scoring model that seriously undervalues turnovers.

We’ll address each of these issues in turn. As Brian Burke has pointed out, and as we’ve discussed in more detail here, completions and yardage are related through the equation yardage = completions*yards per completion. If we note that YPC in the modern NFL is actually 11.4 yards, within a relative error of 9%, the first two terms in the numerator can be rewritten:

20/11.4*[Yards + Extra Yards] = 20/11.4*Equivalent Yards = 1.75*U*Yards

Yards is equal to 11.4*Catches. Extra Yards would be defined as 11.4*Drops, and is equal to the yards a QB would have gotten if  those passes hadn’t been dropped. The sum 11.4*(Catches + Drops) can be defined as Equivalent Yards, the total yards a QB would have gotten without any dropped passes. U, a dimensionless parameter, is Equivalent Yards/Yards. U, pretty much by definition, is greater than or equal to 1.0.

The third term in the numerator, by contrast, is Yards in the Air: the yards a QB is responsible for, or Yards – Yards After the Catch. If V is YIA/Yards, then V is a dimensionless, positive-valued term less than 1. So not only are there two yardage terms, there are two different kinds of yardage terms. This touches on items 1 and 2; item 3 will be discussed in a footnote.

To get to item 4, the yardage components in this formula can be combined into a term like this:

20*Completions + 20*Drops + YIA = [1.75*U + V]*Yards

Leading to a numerator like this

4.6667*[(1.75*U + V)*Yards + 20*TDs – 45*Ints]

whose functional scoring model becomes this:

(Yards + 20/[1.75*U + V]*TDs – 45/[1.75*U + V]*Ints)/Equivalent Attempts

I don’t think that was the intended result of the author of this model.

I suspect that U is in the vicinity of 1.1, and V? Who knows; call it 0.5 for the sake of argument. The term 1.75*U + V = 2.425 (which might as well be 2.4), and the core formula then becomes

(Yards + 8*TDs – 19*Ints)/Equivalent Attempts
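
Since U and V are guesses, it’s worth scanning a few values to see how sensitive the effective TD and interception coefficients are; a short sketch:

#!/usr/bin/perl
# Sketch: the effective TD and interception coefficients of the PFF
# formula as functions of U and V. The U and V values are guesses,
# as in the text.
use strict;
use warnings;

for my $U ( 1.0, 1.1, 1.2 ) {
    for my $V ( 0.4, 0.5, 0.6 ) {
        my $scale = 1.75 * $U + $V;
        printf "U=%.1f V=%.1f: TDs worth %4.1f yards, Ints worth %4.1f yards\n",
            $U, $V, 20 / $scale, 45 / $scale;
    }
}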

So to ask the question that occurs to me, does the author think an interception is only worth about 2 points?

Solutions?

My gut feeling is that this is a formula trying to do too many things. You don’t want to add two different kinds of yardage metrics. So, initially, either dropping the completions + drops terms or getting rid of the YIA term would yield a formula logically and algebraically sound in its treatment of yardage. A formula like

[11.4*(Completions + Drops) + 20*TDs - 45*Ints]/Equivalent Attempts

or

[YIA + 20*TDs - 45*Ints]/Equivalent Attempts

or better yet, since Brian Burke’s expected points formulas linearize to a surplus value for TDs of 23.3 yards, and the value of a turnover in yards is about 67 yards, use this:

[YIA + 23.3*TDs - 60*Ints]/Equivalent Attempts [1]

An even better formula, since PFF must have excellent data on how many yards an interception is run back, would be:

(YIA + 23.3*TDs – [ 67 - average net field position relative to original LOS]*Ints)/Equivalent Attempts [2]

So there you have it. With a little work, PFF can have a self consistent formula encompassing many of the new ideas they wish to add to a modern passer rating.

Update 9/27/2011: just noted that the average YPC I previously calculated is actually 11.4 ± 0.96, not the originally published 14.7. Correcting the math (which I’ve done) doesn’t affect the argument.

~~~~~

[1] I say this because Chase Stuart’s “derivation” of 20 yards, while it turns out to be a fairly good number, goes through too many concepts that do not make sense in a world where football is treated as a Markov chain or, alternatively, a finite state machine. Seriously, does anyone believe yardage gained running and yardage gained passing differ in value? That would completely break the notion of path independence in a Markov chain. Further, as we explain here and here, the idea that the TD term is “the value of the touchdown” is broken. It’s not something you can measure on the field by calculating, say, the net value of a touchdown relative to the one yard line, as it’s related to total scoring (i.e. TDs plus field goals) of all kinds.

Likewise, the 45 yard term for the interception is based on the THGF model. It’s the THGF value of a turnover (4 points, or 50 yards) less the net value of field position after the runback (estimated at 5 yards beyond the original LOS).

[2] I’m hesitant to point this out, but yet another variation on these formulas would be to use the dimensionless parameter U or the dimensionless parameter V as a multiplier on the yardage term. Something like

U*YIA or V*11.4*(Catches + Drops)

comes to mind. It’s just that you’re not really measuring what was actually left on the field in these instances; you’re measuring what could have been. The use solely of YIA appeals to me, if the idea is to have a formula that measures the quarterback’s real contribution to scoring.

Update 9/29/2011: U simplifies to (Catches + Drops)/Catches, and as such, U*YIA = YIA*(Catches + Drops)/Catches has a particularly simple, appealing form.
