The three sites we noted last year (Cool Standings, Football Outsiders, and NFL Forecast) are at it again, providing predictions of who is going to make the playoffs.

 

Cool Standings uses Pythagorean expectations to generate its predictions (and for some reason in 2011 ignored home field advantage), FO uses its proprietary DVOA stats, and NFL Forecast uses Brian Burke’s predictive model.

Blogging the Beast has a terrific article on “the play”. If you watched any Dallas-Philadelphia games in 2011, you know exactly what I mean: the way LeSean McCoy, off a simple counter trap, treated the Cowboys’ line as if it were Swiss cheese.

The most important new link, perhaps, is a new Grantland article on Chip Kelly by Chris Brown of Smart Football, and it is really good. Not only is the writing good, but I love the photos:

Not my photo. This is from Chris Brown’s Chip Kelly article (see link in text).

Take the one above as an example. Have you ever seen a better photo of a defense’s gap assignments?

Ed Bouchette has a good article in which Steelers defenders talk about Michael Vick. Neil Payne has two interesting pieces (here and here) on how winning early games correlates with the final record for the season.

Brian Burke has made an interesting attempt to break down EP (expected points) data to the level of individual teams, and I’ve contributed to the discussion there. There is a lot to the notion that the slope of the EP curve reflects the ease with which a team can score: the shallower the slope, the easier it is for a team to score.

Note that the defensive contribution to an EP curve will depend on how expected points are actually scored. In a Keith Goldner type Markov chain model (a “raw” EP model), a defense cannot affect its own team’s EP curve; it can only affect the opponent’s curve. In a Romer/Burke type EP formulation, the defense’s effect on its own team’s EP curve and on the opponent’s curve is more complex. Scoring by the defense has an “equal and opposite” effect on team and opponent EP, with the slope affected by how often the defense scores as a function of yard line. Various kinds of stops can also affect the slope. Since scoring opportunities increase as an offense gets closer to the goal line, an equal stop probability per yard line ends up yielding unequal scoring chances, and thus changes in slope.

There are three interesting sites doing the dirty job of forecasting playoff probabilities. The first is Cool Standings, which uses Pythagorean expectations to calculate the odds of successive wins and losses, and thus the likelihood of a team making the playoffs. The second is a page on the Football Outsiders site, the DVOA Playoff Odds Report, which uses their signature DVOA stat (a “success” stat) to generate the probability of a team making the playoffs. Then there is NFL Forecast, which has a page that predicts playoff winners using Brian Burke’s predictive model.

Of the three, Cool Standings is the most reliable in terms of updates. Whose model is actually most accurate is something each reader will have to weigh for themselves. Pythagorean expectations, in my opinion, are an underrated predictive stat. DVOA tends to emphasize consistency and carries large turnover penalties. Brian Burke’s metrics have tended to emphasize explosiveness and, more recently, running consistency, as determined by his version of the run success stat.

I’ve found these sites to be more reliable than local media (in particular Atlanta sports radio) at analyzing playoff possibilities. For a couple of weeks now it’s been clear, for example, that Dallas pretty much has to win its division to have any playoff chance at all, while the Atlanta airwaves have been talking about how Atlanta’s wild card chances run through (among other teams) Dallas. Uh, no, they don’t. These sites, my radio friends, are more clued in than you are.

The recent success of DeMarco Murray has energized the Dallas fan base. Felix Jones is being spoken of as if he’s some kind of leftover (I know, a 5.1 YPC over a career is such a drag), and people are taking Murray’s 6.7 YPC for granted. But that wasn’t what got to me in the fan circles. It’s that Julius Jones was becoming a whipping boy again, the source of every running back sin there is, and so I wanted to build some tools to help analyze Julius’s career and, at the same time, look at Marion Barber III’s numbers, since the two are historically linked.

We’ll start with this database and a bit of SQL to find running plays. The SQL is:

select down, togo, description from nfl_pbp
    where season = 2007
    and gameid like "%DAL%"
    and description like "%J.Jones%"
    and not description like "%pass%"
    and not description like "%PENALTY on DAL%"
    and not description like "%kick%"
    and not description like "%sacked%"

It’s not perfect: I’m not picking up plays where the QB is sacked and the RB recovers the ball, for instance. A better bit of SQL might help, but it’s a place to start. We bury this SQL in a program that parses the description string for the phrase “for X yards” (or alternatively “for no gain”) and adds the results up. From this we could calculate yards per carry, but more importantly, we’ll calculate run success and also something I’m going to call a failure rate.

For our purposes, the failure rate is the number of plays that gained 2 yards or less, divided by the total number of rushing attempts, multiplied by 100. The point of the failure rate is to investigate whether Julius, in 2007, became the master of the 1 and 2 yard run. One common fan conception of his style of play in his last year in Dallas is that “he had plenty of long runs but had so many 1 and 2 yard runs as to be useless.” I want to test that.
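A minimal sketch of such a program, in Perl with DBI (run against the pbp.db SQLite database and nfl_pbp table described further down the page), looks like this:

use strict;
use warnings;
use DBI;

# connect to the play-by-play SQLite database built further down the page
my $dbh = DBI->connect("dbi:SQLite:dbname=pbp.db", "", "", { RaiseError => 1 });

my $sql = q{
    select down, togo, description from nfl_pbp
        where season = 2007
        and gameid like "%DAL%"
        and description like "%J.Jones%"
        and not description like "%pass%"
        and not description like "%PENALTY on DAL%"
        and not description like "%kick%"
        and not description like "%sacked%"
};

my ($attempts, $yards, $failures) = (0, 0, 0);
my $sth = $dbh->prepare($sql);
$sth->execute();

while (my ($down, $togo, $desc) = $sth->fetchrow_array) {
    my $gain;
    if    ($desc =~ /for (-?\d+) yards?/) { $gain = $1; }
    elsif ($desc =~ /for no gain/)        { $gain = 0; }
    else                                  { next; }   # no parsable gain; skip the play

    $attempts++;
    $yards += $gain;
    $failures++ if $gain <= 2;    # a "failure" is a gain of 2 yards or less
}

die "no plays found\n" unless $attempts;
printf "attempts %d, yards %d, ypc %.2f, failure rate %.1f%%\n",
    $attempts, $yards, $yards / $attempts, 100 * $failures / $attempts;

Run success proper needs the down and togo columns as well (hence their presence in the select), but yards per carry and the failure rate fall straight out of the gain parsing.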


This is something I’ve wanted to test ever since I got my hands on play-by-play data, and to be entirely honest, doing this test is the major reason I acquired play-by-play data in the first place. Linearized scoring models are at the heart of the stats revolution sparked by the book The Hidden Game of Football, whose scoring model was a linearized model.

The simplicity of the model they presented, and the ability to derive it from pure reason (as opposed to hard core number crunching), make me want to name it in some way that denotes that fact: perhaps the Standard model, the Common model, or the Logical model. Scoring the 0 yard line as -2 points and the 100 as +6, with everything in between varying linearly between those two endpoints, has to be regarded as a starting point for all sane expected points analysis. Further, because it can be derived logically, it can be used at levels of play that don’t have a million fans analyzing everything: high school play, or even JV football.
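Written out in the same notation as the fits below, that model is simply:

Scoring Potential = -2 + ((6 - (-2))/100)*yards = -2 + 0.08*yards

so every yard of field position is worth 0.08 points.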

From the scoring models people have come up with, we get a series of formulas called adjusted yards per attempt (AYA) formulas. They take various specific forms, but most operate on the assumption that yards can be converted into a potential to score. Gaining yards, and plenty of them, increases scoring potential, and as Brian Burke has pointed out, AYA style stats are directly correlated with winning.
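To make that concrete, the two best known versions (quoting from memory, so check the originals) are THGF’s new passer rating,

AYA = (yards + 10*TDs – 45*Ints)/Attempts

and Pro-Football-Reference’s AY/A,

AYA = (yards + 20*TDs – 45*Ints)/Attempts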

With play-by-play data, converted to expected points models, some questions can now be asked:

1. Over what ranges are expected points curves linear?

2. What assumptions are required to yield linearized curves?

3. Are they linear over the whole range of data, or over just portions of the data?

4. Under what circumstances does the linear assumption break down?

We’ll reintroduce data we described briefly before, but this time we’ll fit the data to curves.

The linear fit is to the formula Scoring Potential = -1.79 + 0.0653*yards. The quadratic fit is to the formula Scoring Potential = 0.499 + 0.0132*yards + 0.000350*yards^2. These are "all downs, all distance" data. The only important variable in this context is the yard line, because that is the kind of working assumption a linearized model makes.

Fits to curves above. Code used was Maggie Xiong's PDL::Stats.
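If you don’t have PDL::Stats handy, the linear fit, at least, is easy to roll by hand from the closed-form least squares formulas. A minimal sketch, with made-up (yards, points) pairs standing in for the real binned expected points data:

use strict;
use warnings;

# made-up sample points; substitute the binned expected points data
my @yards  = (10, 30, 50, 70, 90);
my @points = (-1.1, 0.2, 1.5, 2.8, 4.1);

my $n = scalar @yards;
my ($sx, $sy, $sxx, $sxy) = (0, 0, 0, 0);
for my $i (0 .. $n - 1) {
    $sx  += $yards[$i];
    $sy  += $points[$i];
    $sxx += $yards[$i] ** 2;
    $sxy += $yards[$i] * $points[$i];
}

# closed-form least squares slope and intercept
my $slope     = ($n * $sxy - $sx * $sy) / ($n * $sxx - $sx ** 2);
my $intercept = ($sy - $slope * $sx) / $n;

printf "Scoring Potential = %.3f + %.4f*yards\n", $intercept, $slope;

The quadratic fit takes a proper multiple regression, which is where PDL::Stats earns its keep.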

One simple question that can change the shape of an expected points curve is this:

How do you score a play using play-by-play data?

I’m not attempting, at this point, to come up with “one true answer” to this question; I’ll just note that different answers to it yield differently shaped curves.

If the scoring of a play is associated only with the drive on which the play was made, then you get curves like the purple one above. That would mean punting has no negative consequences for the scoring of a play. Curves like this I’ve been calling “raw” formulas, or “raw” models. Examples of these kinds of models are Keith Goldner’s Markov chain model and Bill Connelly’s equivalent points models.

If a punt can yield negative consequences for the scoring of a play, then you get into a class of models I call “response” models, because the whole of the curve of a response model can be thought of as

response = raw(yards) – fraction*raw(100 – yards)

The fraction would be a sum of things like the fractional odds of punting, the fractional odds of a turnover, the fractional odds of a loss on 4th down, and so on. And of course in a real model the single fractional term above is a sum of terms, some of which might not be tied to 100 – yards, because that’s not where the ball would end up: a punt term, for example, would look more like fraction(punt)*raw(60 – yards).
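As a toy illustration (not anyone’s actual model), here is the response construction applied to the quadratic fit above as the raw curve, with a single, hypothetical lumped fraction:

use strict;
use warnings;

# raw(yards): the quadratic "all downs" fit from the figure above
sub raw {
    my $y = shift;
    return 0.499 + 0.0132 * $y + 0.000350 * $y**2;
}

my $fraction = 0.4;    # hypothetical lumped odds of handing the ball back

for my $y (0, 20, 40, 60, 80, 99) {
    my $response = raw($y) - $fraction * raw(100 - $y);
    printf "yard %2d: raw %5.2f  response %5.2f\n", $y, raw($y), $response;
}

With the fraction set to zero you recover the raw curve; crank it up and the back end bends down, which is the knob the refactoring below makes explicit.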

Raw models tend to be quadratic in character. I say this because Keith Goldner fitted first and 10 data to a quadratic here, Bill Connelly’s data appear quadratic to the eye, and the raw data set above fits a quadratic nicely throughout most of its range.

I say most of its range because the data above appear sharper than quadratic close to the goal line, as if there is “more than quadratic” curvature inside the last 10 yards. And at the risk of fitting to randomness, I think another justifiable question is how scoring changes the closer a team gets to the goal line.

That sharp upward kink plays into how response models behave. We’ll refactor the equation above to get at, qualitatively, what I’m talking about. We’re going to add a constant factor to the last term in the response equation, because people will calculate the response differently:

response = raw(yards) – fraction*constant*raw(100 – yards)

Now, in this form, we can talk about the shape of the curves as a function of the magnitude of “constant”. The larger the constant, the more the back end of the curve takes on the character of the last 10 yards. A small constant yields a curve that is more than linear but less than quadratic. A mid sized constant yields a linearized curve. A potent response function yields curves more like those of David Romer or Brian Burke, with more than linear components within 10 yards of both ends of the field. Understand, this is a qualitative description; I have no clue as to the specifics of how they actually did their calculations.

I conclude, though, that linearized models are specific to response function depictions of equivalent point curves, because you can’t get a linearized model any other way.

So what is our best guess at the “most accurate” adjusted yards per attempt formula?

In my data above, fitting a response model to a line yields an equation. Turning the values of that fit into an equation of the form:

AYA = (yards + α*TDs – β*Ints)/Attempts

Takes a little algebra. To begin, you have to decide how valuable your touchdown is going to be. Some people use 7.0 points, others use 6.4 or 6.3 points. If TD = 6.4 points, then

delta points = 6.4 + 1.79 – 6.53 = 1.79 + 0.07 = 1.86 points

α = 1.86 points/ 0.0653 = 28.5 yards

turnover value = (6.53 – 1.79) + (-1.79) = 6.53 – 2*1.79 = 2.95 points

β = 2.95 / 0.0653 = 45.2 yards

If TDs = 7.0 points, you end up with α = 37.7 yards instead.
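Plugging those values back into the formula gives, for the 6.4 point touchdown:

AYA = (yards + 28.5*TDs – 45.2*Ints)/Attempts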

It’s interesting that this fit yields a value for an interception (in yards) almost identical to the original THGF formula. The touchdown value is closer to the one implied by the NFL passer rating than to the one in THGF’s new passer rating. And although I’m critical of Chase Stuart’s derivation of the value of 20 for PFR’s AYA formula, the adjustment they made does seem to be in the right direction.

So where does the model break down?

Inside the 10 yard line. It doesn’t accurately depict the game as the offense gets close to the goal line. It’s also not down and distance specific in the way a more sophisticated equivalent points model can be. A stat like expected points added gets much closer to the value of an individual play than an AYA style stat does. And in terms of a play’s effect on winning, you need win stats, such as Brian’s WPA or ESPN’s QBR, to break things down (though I haven’t seen ESPN give us the QBR of a single play just yet, which WPA can do).

Update: corrected turnover value.

Update 9/24/11: In the comments to this link, Brian Burke describes how he and David Romer score plays (states).

The formal phrase is “finite state automaton”, which is imposing and mathy and often too painful to contemplate, until you realize what kinds of things are actually state machines [1].

Tic-Tac-Toe is a state machine. The diagram above, from Wikimedia, shows the partial solution tree to the game.

Tic-tac-toe is a state machine. You have 9 positions on a board, each of which can be empty, X, or O, marks that are placed on the board by a defined set of rules, and a defined outcome that follows from those rules.

Checkers is also a state machine.

Checkers (draughts) is a state machine. You have 64 positions on a board, pieces that move through the positions via a set of defined rules, with a defined outcome from those rules.

Chess is a state machine.

Chess is a state machine. You have 64 positions on a board, pieces that move through the positions via a set of defined rules, with a defined outcome from those rules.

If you can comprehend checkers, or even tic-tac-toe, then you can understand state machines.

To treat football as a state machine, start with the idea that football is a function of field position. There are 100 yards on the field, so 100 positions to begin with. Those positions have states (1st and 10, 2nd and 3, etc), there are plays that lead to a transition from position to position and state to state, there is a method of scoring, and there is a defined outcome that results from position, states, plays, scoring and the rules of the game of football.

A lot of the analytical progress that has been made over the past several years comes from taking play by play data, breaking it down into things like games, drives, scoring, and so forth, compiling that info into a state (i.e. down and distance) database, and then asking questions of that database of interest to the analyst.
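As a minimal sketch of that last step, here is a query (Perl and DBI, against the nfl_pbp SQLite table built later in this post) that compiles plays into (down, distance, yard line) states and counts how often each state occurs; that tally is the raw material for a Markov-style model:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:SQLite:dbname=pbp.db", "", "", { RaiseError => 1 });

# count how often each (down, distance, yard line) state shows up
my %count;
my $sth = $dbh->prepare(
    "select down, togo, ydline from nfl_pbp where down between 1 and 4"
);
$sth->execute();

while (my ($down, $togo, $ydline) = $sth->fetchrow_array) {
    $count{"$down and $togo, ydline $ydline"}++;
}

# print the ten most common states
my @top = (sort { $count{$b} <=> $count{$a} } keys %count)[0 .. 9];
printf "%-30s %6d\n", $_, $count{$_} for @top;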

You can analyze data in a time dependent or a time independent manner. Time dependence is important if you want to analyze for things like win probability. If you’re just interested in expected points models (i.e. the odds of scoring from any particular point on the field), a time independent approach is probably good enough (that’s sometimes referred to as the “perpetual first quarter assumption”).

Net expected point models, all downs included. The purple curve does not account for the response of opposition drives; the yellow one does. The yellow curve was used to derive turnover values.

Take, for example, Keith Goldner’s Markov chain model. As explained here, a Markov chain is a kind of state machine. The same kinds of ideas that are embedded in simple state machines (such as tic-tac-toe) also power more sophisticated approaches such as this one.

Once a set of states is defined, a game becomes a path through all the states that occur during the course of the game, meaning an analyst can also bring graph theory (see here for an interesting tutorial) into the picture. Again, it’s another tool, one that brings its own set of insights into the analysis.

[1] More accurately, we’re going to be looking at the subset of finite state automata (related to cellular automata) that can be represented as 1 or 2 dimensional grids. In this context, football can be mapped onto a 1 dimensional geometry where the dimension of interest is position on the football field.

Notes: The checkers board is a screen capture of a game played here. The chess game above is Nigel Short-Jan Timman Tilburg 1991, and the game diagram (along with some nice game analysis) comes from the blog Chess Tales.

Brian Burke has made play-by-play data from 2002 to 2010 available here, as .CSV files. The files are actually pretty small, about 5 megs for a year’s worth of data. CSV is a convenient format, and the data themselves are well enough organized that an Excel or OpenOffice junkie can use them, as can those of us who work with SQL databases. The advantage of a SQL database is the query language you inherit. What we’re going to show is how to embed Brian’s data in a small, simple SQLite database (see here for D. Richard Hipp’s site, and here for the Wikipedia article).

SQLite is a tiny SQL engine, about 250 kilobytes in size. That’s right, 250 kilobytes. It’s intended to be embedded in applications, so it doesn’t have the overhead of a network server the way MySQL and Postgres do. It is extensively used in things like browsers (Firefox), mail clients, and internet metrics applications (Unica’s NetTracker). The code is open source and in the public domain, and there are commercial versions of this free product you can buy, if you’re into that kind of thing. Oracle, among others, sells a commercial derivative of this free product.

A SQLite database is a single file, so once you create it, you can move the file onto a USB stick and carry it around with you (or keep it on your Android phone). The database that results here is about 55 megabytes in size, not much different from the cumulative size of the .CSVs themselves.

Brian’s data lack a primary key, which is fine for spreadsheets but creates issues when walking through sequential data in a database. We’ll create a schema file (we’ll call it schema.sql) like so:
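A minimal sketch of that schema, covering just the fields this post actually queries (Brian’s files carry more columns, which you can add as needed), might look like:

-- schema.sql: a stripped-down sketch; the id column supplies the
-- primary key the raw data lack
create table nfl_pbp (
    id          integer primary key,
    gameid      text,
    season      integer,
    down        integer,
    togo        integer,
    ydline      integer,
    description text
);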

Use a text editor to create it. With the sqlite3 binary, create a database by saying:


sqlite3 pbp.db
sqlite>.read schema.sql
sqlite>.tables
nfl_pbp
sqlite>.exit

Once that’s all done, we’ll use Perl and the DBI module to load these data into our SQLite table. Loading is fast so long as you handle the whole load as a single transaction, bracketed by the $dbh->begin_work and $dbh->commit statements.
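A minimal sketch of such a loader follows; the CSV column positions are placeholders, so map them to the real file layout before trusting the result:

use strict;
use warnings;
use DBI;
use Text::CSV;

my $dbh = DBI->connect("dbi:SQLite:dbname=pbp.db", "", "", { RaiseError => 1 });
my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });

my $ins = $dbh->prepare(
    "insert into nfl_pbp (gameid, season, down, togo, ydline, description)
     values (?, ?, ?, ?, ?, ?)"
);

$dbh->begin_work;                          # one big transaction keeps the load fast
for my $file (@ARGV) {                     # usage: perl load.pl *.csv
    my ($season) = $file =~ /(20\d\d)/;    # season taken from the file name
    open my $fh, '<', $file or die "$file: $!";
    $csv->getline($fh);                    # discard the header row
    while (my $row = $csv->getline($fh)) {
        # column positions here are hypothetical; adjust to the real layout
        my ($gameid, $down, $togo, $ydline, $desc) = @{$row}[0, 5, 6, 7, 8];
        $ins->execute($gameid, $season, $down, $togo, $ydline, $desc);
    }
    close $fh;
}
$dbh->commit;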

Once loaded, you can begin using the data almost immediately:

sqlite> select count(*) from nfl_pbp;
384809
sqlite> select distinct season from nfl_pbp;
2002
2003
2004
2005
2006
2007
2008
2009
2010
sqlite> select count( distinct gameid ) from nfl_pbp;
2381

As far as the data themselves go, I’ll warn you that the ydline field is a little “lazy”,  in that if you score a touchdown from the 20, the extra point play and the ensuing kick also “occur” on the 20. So you end up with interesting sql statements like this when you search the data:


sqlite> select count(*) from nfl_pbp where ydline = 1 and not description like "%extra point%" and not description like "%two-point%" and not description like "%kicks %";
3370
sqlite> select count(*) from nfl_pbp where ydline = 1 and description like "%touchdown%" and not description like "%extra point%" and not description like "%two-point%" and not description like "%kicks %";
1690

Using the DBI module, or whatever database interface your language supports, you can start crunching data toward game outcome probabilities in no time.
