Today's Nobel Announcements
I've been thinking all day about why today's announcement that the 2011 Nobel Prize in economics has gone to Chris Sims and Tom Sargent bugs me.
I always think it a little odd when others get perplexed about a particular award, and I think it's because this award is ostensibly in my own area of research that I'm a little irked. I'm also a little biased towards my PhD supervisor, who in an ideal world would have his Nobel by now, but instead must sit back as two folk he gave ideas to continue to enjoy the acclaim from their Nobels (which they got in 2003). He does have a knighthood to placate him, though, which is good.
But back to the point: what is it that bothers me about these two fantastically intelligent and insightful scholars getting acclaim for their many works over the years? They pointed out the huge gaping problems in the traditional macroeconometric models of the 1970s, which generally had simplistic economic theory structures based on macroeconomic identities rather than any kind of optimising agents - and we all know the economy is made up of optimising agents. Sims promoted the use of VARs, while Sargent is best known for fearsome theoretical macro models and technical wizardry, which have helped bulk up macro theory in response to those old, simplistic models.
It's not this theory aspect that bothers me. Macroeconometrics should be informed by theory. It's impossible to do any macroeconometrics without some theory to motivate you - how would you know where to even start without it?
I think it's the kind of empirical macro they espouse that bothers me. And the belief, among those who practise it, that this kind of macro lets the data speak bothers me too.
The kind of empirical macro that these guys epitomise is US empirical macro, which still forces the data into straitjackets. The only difference is that the new straitjackets have structure - microfoundations, i.e. assumptions about behaviour based on the principles of microeconomics. Empirical macroeconomists, the ones Williams lauds in the linked article above, all use methods like GMM heavily, and for one reason - you force the data into particular forms motivated by theory without ever testing those forms.
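To make that concrete, here's a toy sketch of my own (simulated data, nothing to do with any actual paper by Sims or Sargent) of what estimating an assumed moment condition by GMM looks like. The "theory" hands you the moment condition; the data are only ever asked to pick the parameter, never whether the condition itself is an adequate description of them.

```python
# Toy GMM estimation: the moment condition E[z * (y - beta * x)] = 0 is
# imposed by assumption (the "theory"); the data only choose the beta that
# drives the sample moment towards zero. Illustrative single-parameter case
# with identity weighting and simulated data.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                  # instrument
x = 0.8 * z + rng.normal(size=n)        # regressor
y = 1.5 * x + rng.normal(size=n)        # outcome, true beta = 1.5

def gmm_objective(beta):
    # Sample analogue of the assumed moment condition E[z * (y - beta * x)] = 0
    gbar = np.mean(z * (y - beta * x))
    return gbar ** 2                    # quadratic form, identity weight

beta_hat = minimize_scalar(gmm_objective, bounds=(-10, 10), method="bounded").x
print(f"GMM estimate of beta: {beta_hat:.3f}")
# Note what is never asked: whether the imposed moment condition is itself an
# adequate description of the data - no residual diagnostics, no test of the
# functional form. That is exactly the complaint above.
```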
That Sims is credited with the move towards Bayesian VARs says it all. Bayesian econometrics is a classic case of deciding that traditional methods that let the data speak don't give us the results we want, so we tweak those results with our own particular "know-how", embodied in priors.
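As a toy illustration of that point (mine, not a description of how any particular BVAR is built): with a conjugate normal prior on a regression slope and known noise variance, the posterior mean is just a precision-weighted average of the prior mean and the least-squares estimate, so the tighter the prior, the more the "answer" reflects the researcher's beliefs rather than the data.

```python
# How a prior "tweaks" what the data alone would say: conjugate normal prior
# on a single regression slope with known noise variance. The posterior mean
# is a precision-weighted average of the prior mean and the OLS estimate.
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 50, 1.0
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(scale=np.sqrt(sigma2), size=n)   # data say beta is near 0.2

beta_ols = (x @ y) / (x @ x)                               # what the data say
data_precision = (x @ x) / sigma2

for prior_mean, prior_var in [(1.0, 10.0), (1.0, 0.01)]:   # loose prior vs tight prior
    prior_precision = 1.0 / prior_var
    post_mean = (prior_precision * prior_mean + data_precision * beta_ols) / (
        prior_precision + data_precision
    )
    print(f"prior N({prior_mean}, {prior_var}): posterior mean = {post_mean:.3f} "
          f"(OLS = {beta_ols:.3f})")
# The tight prior drags the posterior mean towards 1.0, the researcher's
# "know-how", regardless of what the sample says.
```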
The problem is, what if we are wrong? What if this new structure is completely and utterly wrong? Ideally, being wrong would lead to progress - we learn from where we went wrong, and we then find better ways to model the things we're struggling to model. My hope is that this happens, but while the estimation methods remain so heavily wedded to the economic theory, it's very hard to see it happening.
I don't want this post to be entirely negative and unconstructive. The essential point is this: I believe, as Sims and Sargent claim, that the data should be allowed to speak. I also believe macro theory has a huge role to play, but it should be a role subservient to the evidence from the data; the data should not continue to be subservient to macroeconomic theory and the curse of plausibility.
Data are allowed to speak if we ensure that all the statistical assumptions placed upon a model are actually satisfied: if the model we fit is checked for its adequacy, and we only proceed once we are convinced that fit is acceptable. I could be wrong, but I just don't think Sims and Sargent work this way.
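For what it's worth, here is the sort of workflow I have in mind, sketched with made-up data: fit a model, then actually test the assumptions behind it before trusting anything it says. The particular checks (residual autocorrelation and normality) and the 5% thresholds are my own choices, purely for illustration.

```python
# Sketch of the workflow argued for above: fit a simple model, then check the
# statistical assumptions (here: no residual autocorrelation via Ljung-Box,
# normality via Jarque-Bera) before interpreting the estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 0.5 + 1.2 * x + rng.normal(size=n)

# Fit a simple linear model by least squares
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

def ljung_box(e, lags=10):
    """Ljung-Box Q statistic and p-value for residual autocorrelation."""
    e = e - e.mean()
    acf = np.array([np.dot(e[:-k], e[k:]) / np.dot(e, e) for k in range(1, lags + 1)])
    q = len(e) * (len(e) + 2) * np.sum(acf ** 2 / (len(e) - np.arange(1, lags + 1)))
    return q, stats.chi2.sf(q, df=lags)

q_stat, q_pval = ljung_box(resid)
jb = stats.jarque_bera(resid)

# Only let the model speak once the assumptions survive the checks
if q_pval > 0.05 and jb.pvalue > 0.05:
    print(f"Diagnostics pass; estimates: intercept={beta[0]:.2f}, slope={beta[1]:.2f}")
else:
    print("Misspecification detected - respecify before interpreting anything.")
```

Nothing exotic, just the discipline of testing the model against the data before the estimates are allowed to mean anything.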