Thursday, March 24, 2011

Economic Logician Displaying Precisely No Logic

David Hendry has just published an excellent working paper on model discovery in economics. Economic Logician (EL) attempts to destroy it but misses the point so badly it's really quite astonishing. I can only conclude he hasn't actually read the paper. I should disclose at the start: Hendry was my PhD supervisor. That may make me a little more inclined to be sympathetic towards David's methods, but more importantly it means I've studied them for a long time and am well acquainted with them.

It's hard to know where to start really, but it's clear EL misses the entire point of what Hendry is doing, because it fits precisely within EL's own description of the scientific method:


  1. Observe regularities in the data.
  2. Formulate a theory.
  3. Generate predictions from the theory (hypotheses).
  4. Test your theory (is it consistent with the data?)


Hendry's method majors on the fourth part of this list, but is crucially reliant on the first three. We form the General Unrestricted Model (GUM) from all the economic theories relevant to our object of interest (the exchange rate, inflation, interest rates, etc.), and from there we use Hendry's general-to-specific method to uncover the best model of that object that the data will support.
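To give a flavour of the selection step, here is a minimal sketch in Python of one-path backwards elimination from a deliberately over-specified GUM. It is not Autometrics, which searches many reduction paths and applies batteries of diagnostic tests along the way; the data and variable names are invented purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# A deliberately over-specified GUM: only x1 and x2 actually matter.
X = rng.normal(size=(n, 6))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

names = ["x1", "x2", "x3", "x4", "x5", "x6"]
kept = list(range(6))

# Naive one-path general-to-specific: repeatedly drop the least significant
# regressor until everything left is significant at 5%.  (Autometrics instead
# explores many reduction paths and checks that each candidate reduction still
# passes mis-specification tests before accepting it.)
while True:
    design = sm.add_constant(X[:, kept])
    fit = sm.OLS(y, design).fit()
    pvals = fit.pvalues[1:]          # skip the constant
    worst = int(np.argmax(pvals))
    if pvals[worst] < 0.05:
        break
    kept.pop(worst)

print("Selected regressors:", [names[i] for i in kept])
```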

The new material in Hendry's paper, building on what is already standard in PcGets and Autometrics, is that non-linear functions of the variables included in the GUM are added to the mix, and a souped-up version of what used to be called dummy saturation is added. No new variables (exports of cabbage, number of sunny days, etc.) are added.

These developments respond to the fact that economic theory is necessarily simplified and will often get functional forms wrong (hence the non-linear functions), and to the problem of structural change: things change over time. The latter, dummy saturation, is the product of many years of research, from which the main conclusions are: if the dummies don't matter, they are generally omitted by the selection procedure, and if they are retained by mistake they won't harm inference on the coefficients of the variables that do matter; but if they do matter, then omitting them will distort inference, and policy conclusions will be flawed as a result.
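For the dummy-saturation idea itself, here is a crude two-block illustration in Python: saturate each half of the sample with one impulse dummy per observation, keep the significant ones, then re-test the survivors together. The real impulse-indicator saturation procedure is considerably more careful about block choice, significance levels and the combination step, and the break date and data below are made up.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100

# Simple DGP with a location shift (structural break) in the last 20 observations.
x = rng.normal(size=n)
y = 0.5 + 1.0 * x + rng.normal(scale=0.5, size=n)
y[80:] += 3.0                       # the break the dummies should pick up

def significant_impulses(cols, alpha=0.01):
    """Add one impulse dummy per observation in `cols`, keep the significant ones."""
    dummies = np.zeros((n, len(cols)))
    dummies[cols, np.arange(len(cols))] = 1.0
    design = sm.add_constant(np.column_stack([x, dummies]))
    fit = sm.OLS(y, design).fit()
    pvals = fit.pvalues[2:]          # constant and x come first
    return [c for c, p in zip(cols, pvals) if p < alpha]

# Crude two-block version of impulse saturation: saturate each half of the
# sample separately, then re-test the union of the survivors together.
first = significant_impulses(list(range(0, n // 2)))
second = significant_impulses(list(range(n // 2, n)))
retained = significant_impulses(sorted(first + second))
print("Retained impulse dummies at observations:", retained)
```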

Because it takes structural change so seriously, the Lucas Critique (another of EL's misfires) is moot. Other areas of Hendry's research (notably in Dynamic Econometrics and with Rob Engle in this paper) make clear that if models found in this manner are to be used for policy analysis, they must be checked for super exogeneity: the model's parameters must be shown to be stable across a number of well-known policy changes and other periods of structural change. The addition in Hendry's latest paper of being able to detect such structural changes automatically, and hence control for them directly, only adds to the usefulness of this method for policy analysis and for understanding more about the economy and economic theories.
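As a rough illustration of the kind of stability check involved, here is a basic Chow-type split-sample test in Python around a single assumed policy-change date. It is not the automatic, indicator-saturation-based super-exogeneity test that Hendry and co-authors develop; the data, break date and column positions are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, break_at = 120, 60               # a hypothetical known policy change at observation 60

x = rng.normal(size=n)
y = 0.3 + 0.8 * x + rng.normal(scale=0.4, size=n)

# Chow-type check: do the intercept and slope differ before and after the policy
# change?  Interact the constant and the regressor with a post-change indicator
# and test the two shift terms jointly.  Stable parameters across the change are
# evidence consistent with (though not proof of) super exogeneity.
post = (np.arange(n) >= break_at).astype(float)
design = sm.add_constant(np.column_stack([x, post, post * x]))
fit = sm.OLS(y, design).fit()

# Columns of `design`: const, x, post, post*x -> test columns 2 and 3 jointly.
R = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(fit.f_test(R))
```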

This is perhaps EL's biggest misfire: the claim that this Hendry method is somehow ill-suited to policy analysis. It could not be better suited to it, because it follows precisely the four steps above and takes the most difficult of them (the last) very seriously indeed. It is not wedded to one particular theory, which too much pre-Financial Crisis advice arguably was, and it takes economic data seriously. What exactly would be better for policy analysis?
