Knowing what is happening now, or even what has just happened, is vital but elusive when it comes to economics. While much of the forecasting literature focuses on applying elementary math to tell us what will happen in the future, much of the data it uses to do so is of questionable validity. On top of it all, most of the models that project past data into the future are essentially making physics assumptions about the economy: the larger the mass of a moving object, the harder it is for that object to change its motion abruptly. The problem is that this is simply not true: it takes only a tiny rumor or a sudden loss of confidence to bring a huge economic system to a screeching halt. But let’s go back to the data. Consider this statement:
The growth rate that the government announces roughly one month after the end of each quarter — news much anticipated in Washington and on Wall Street — has been off the mark over the period from 1983 to 2009 by an average of 1.3 percentage points, compared with more fully analyzed figures released years later, according to federal data.
The second and third estimates, announced at subsequent one-month intervals, are no more reliable. The first quarter this year offers a typical example. The government estimated the annual growth rate at 1.8 percent in May and 1.9 percent in June before issuing its most recent estimate of 0.4 percent.
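The size of these revisions is easy to make concrete. Using the first-quarter figures quoted above (early estimates of 1.8 and 1.9 percent against the most recent estimate of 0.4 percent), a few lines of Python show the gap in percentage points; the calculation is only an illustration of what "off the mark" means, not part of the federal methodology:

```python
# Early GDP growth estimates for the quarter quoted in the text,
# in percent (May and June releases), versus the latest estimate.
early_estimates = [1.8, 1.9]
latest_estimate = 0.4

# Absolute revision for each early vintage, in percentage points.
revisions = [round(abs(e - latest_estimate), 2) for e in early_estimates]
mean_revision = round(sum(revisions) / len(revisions), 2)

print(revisions)      # [1.4, 1.5]
print(mean_revision)  # 1.45
```

Both early vintages miss by well more than the 1.3 percentage-point average error cited for 1983 to 2009, which is what makes this quarter a typical rather than an extreme example.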
This is why we need to start looking at alternatives to the standard ways of doing things. The problem is not so much a theoretical one as it is a matter of not knowing what the hell is going on right now. This is what our work with Google Data or Toll Data is trying to do: propose alternative ways of better knowing what is going on.
There are plenty of things we have not exploited yet, and most of them have to do with the amazing growth of technology over the last couple of decades. In our Toll Index paper we look at heavy trucks on German highways. They use GPS to count the kilometers driven and mobile phone technology to broadcast the data to the computing center for billing. This data is economic telemetry, and that is exactly the kind of thing we need to exploit to get a good picture of the economy. In our housing paper we show how you can get a weekly picture of mortgage delinquencies in the US simply by looking at searches for “hardship letter”.
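The search-based approach can be sketched in a few lines. Assume we have a weekly series of search volumes for a distress-related query such as “hardship letter” (the numbers below are made up for illustration; real data would come from a source like Google Trends). Standardizing the series turns it into an index where zero means an average week and large positive values flag unusual search activity:

```python
def zscore_index(volumes):
    """Standardize weekly search volumes into an index:
    0 = an average week, positive = more distress searches than usual."""
    n = len(volumes)
    mean = sum(volumes) / n
    variance = sum((v - mean) ** 2 for v in volumes) / n
    std = variance ** 0.5
    return [(v - mean) / std for v in volumes]

# Hypothetical weekly volumes for the query "hardship letter"
weekly = [40, 42, 41, 55, 70, 68]
index = zscore_index(weekly)
# The last weeks stand out as unusually high -- the kind of early
# warning a quarterly statistic would only reveal months later.
```

This is deliberately minimal; the point is that a weekly index is available essentially in real time, whereas official delinquency figures arrive with a long lag.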
So, as the old computer science mantra goes, Garbage In, Garbage Out: it does not help to have elementary math models that you sell high. Even if they worked, the quality of what comes out is largely determined by the quality of what goes in, and what goes in right now is questionable.