Jesse is thinking about buying a house. But the decision is a tough one. Are house prices going to rise in the future? Where will interest rates be in a few years? Is Jesse likely to get that promotion? The answers to all these questions have a direct impact on Jesse's decision. Yet they all depend on subjective expectations of future events. Since most major economic decisions share this feature, expectations are central to economics.
The starting point in economics is typically that of rational expectations. The key tenet of rational expectations is that expectations held by people exactly match reality. If Jesse thinks that the odds of house prices rising next quarter are 3-to-1, then the actual odds must be precisely that. In practice, however, subjective expectations are not always well described by rational expectations. But how, exactly, do people form expectations? There is currently little agreement on how to answer that question.
That is where our new research comes in. In a recent working paper, Florian Peters and I develop a new method for measuring biases in expectation formation. The method is simple to use, captures various types of bias, and does not require precise knowledge of how the variable being forecast is actually generated.
The basic insight is that biases can be inferred from the predictability of forecast errors. Suppose that we observe Jesse's expectations of future house prices. For some reason, house prices in the current quarter are higher than Jesse expected. If Jesse reacts to new information in an unbiased way, the forecast error in the current quarter should not help predict future forecast errors. However, suppose that Jesse does not follow the housing market very carefully and tends to underreact to new information. Since house prices are persistent, a positive forecast error today implies that the forecast error next quarter is likely to be positive again. If, on the other hand, Jesse overreacts to the higher-than-expected prices, Jesse's house price expectations will increase by more than is warranted. On average, Jesse will overshoot.
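This logic is easy to check in a small simulation. The sketch below is purely illustrative and not the paper's estimator: it assumes house prices follow an AR(1) process with persistence 0.8, while a hypothetical forecaster believes persistence is only 0.6 and therefore underreacts to news. As the argument predicts, the resulting forecast errors are positively autocorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20_000
rho_true, rho_perceived = 0.8, 0.6  # hypothetical persistence values

# Simulate an AR(1) "house price" series.
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho_true * x[t - 1] + rng.normal()

# A forecaster who believes prices are less persistent than they are
# underreacts: the one-step-ahead forecast made at t-1 is too timid.
forecast = rho_perceived * x[:-1]
errors = x[1:] - forecast

# Under unbiased expectations this autocorrelation would be near zero;
# underreaction makes consecutive forecast errors positively correlated.
autocorr = np.corrcoef(errors[:-1], errors[1:])[0, 1]
print(round(autocorr, 2))  # clearly positive
```

With these particular parameter values the autocorrelation is around 0.25; the exact number is not the point, only that unbiased reaction would drive it to zero.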
We use this logic to represent biases as an impulse response function. Suppose that there is a positive shock to house prices today—say, due to inflows of money from abroad. As time passes, the effect of the shock will dissipate. If we plot the effect of this shock over time, we get what is known as an impulse response function, shown here in the dashed blue line:
Now the solid red line shows the impulse response function as it is perceived by Jesse. In this case, Jesse thinks that the short-run effect is larger than it in fact is. However, in the long run Jesse believes that the shock will dissipate faster than actually is the case. Put differently, Jesse overreacts to news in the short run and underreacts to news in the long run.
The difference between the true and perceived impulse response functions gives a natural measure of biased reaction to news, which we term a bias coefficient. These bias coefficients can be recovered by simply estimating the impulse response function of forecast errors—actual house prices minus expected house prices—as that directly gives the difference between the two impulse responses shown above.
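As a rough sketch of this idea (a hypothetical simplification, not the exact estimator in the paper), the bias coefficient at each horizon can be read off a local-projection-style regression of future forecast errors on the current one:

```python
import numpy as np

def bias_coefficients(errors, horizons=4):
    """Slope of a regression of e_{t+h} on e_t for h = 1..horizons.

    A stylized sketch: values near zero are consistent with unbiased
    forecasts, positive values with underreaction, and negative values
    with overreaction.
    """
    e = np.asarray(errors, dtype=float)
    coeffs = []
    for h in range(1, horizons + 1):
        slope, _ = np.polyfit(e[:-h], e[h:], 1)  # OLS slope and intercept
        coeffs.append(slope)
    return coeffs
```

Feeding in serially uncorrelated errors yields coefficients close to zero at every horizon, while the persistent errors of an underreacting forecaster yield positive ones.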
We apply the methodology to data on quarterly inflation forecasts made by professional forecasters in the US. Here is what we find:
Professional forecasters underreact to information for at least four quarters (in a way that is statistically distinguishable from an unbiased reaction to news). That is a significant amount of underreaction.
The estimated bias coefficients can be used to learn more about how people form expectations. For instance, we can ask which existing model best matches the estimated bias coefficients. We perform this exercise for a number of the most widely used models of expectations:
The model that best fits the data is a simple misperception model. In that model, forecasters think that inflation is less persistent than it actually is. (The estimated persistence of quarterly inflation is roughly 0.80, whereas the model implies that professional forecasters think it is around 0.60.) The second-best-fitting model is the sticky-information model proposed by N. Gregory Mankiw and Ricardo Reis. We find a level of information stickiness of around 0.50, suggesting that people update their information roughly twice a year. This level of information stickiness is very close to that estimated in recent research by Olivier Coibion and Yuriy Gorodnichenko using a completely different technique. Rational expectations perform significantly worse than these two models.
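To make these numbers concrete, here is a back-of-the-envelope calculation under the misperception story, using the persistence values quoted above (treating the gap between the true and perceived AR(1) impulse responses as a stylized bias coefficient is an assumption for illustration):

```python
# Perceived vs. true persistence of quarterly inflation, as quoted above.
rho_true, rho_perceived = 0.80, 0.60

# Under an AR(1) view, the true impulse response at horizon h is
# rho_true**h and the perceived one rho_perceived**h; their gap is a
# stylized bias coefficient at that horizon.
for h in range(1, 5):
    bias = rho_true**h - rho_perceived**h
    print(h, round(bias, 3))  # positive at every horizon: underreaction

# Sticky information: if half of forecasters update each quarter, the
# expected time between updates is two quarters, i.e. roughly twice a year.
update_probability = 0.50
quarters_between_updates = 1 / update_probability
print(quarters_between_updates)
```

The gap stays positive at all four horizons, which lines up with the finding that underreaction persists for at least four quarters.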
Recent years have seen an explosion in the availability of data on subjective expectations, thanks to numerous data collection efforts (see here and here for two examples). While many questions remain open, new data will no doubt shed light on how people form expectations. Our method may prove useful in this endeavor.
And let's hope Jesse does not under- or overreact to news when buying that house.
* * * * *
The complete paper can be found here, along with the replication code in Python and Stata.