by Patrice Ollivaud, Economist, OECD Economics Department, Pierre-Alain Pionnier, Head of Unit, OECD Statistics Directorate and Cyrille Schwellnus, Senior Economist, OECD Economics Department
How was it possible not to see the Great Recession of 2008-09 coming? How could economic forecasters blindly ignore financial developments? These are typical questions asked by the media in the wake of the Great Recession.
The OECD has drawn a number of lessons from the failure to forecast the Great Recession for the monitoring and statistical modelling of near-term economic developments. Crucially, a broader range of information, including financial developments, is now accounted for in OECD forecasts (Lewis and Pain, 2014). In an attempt to systematise this approach, OECD economists have recently estimated state-of-the-art statistical models that extract meaningful signals from a large set of economic indicators, including equity and credit market indicators, real estate and consumer prices, disaggregated industrial production, and business and consumer surveys. They compare these models’ ability to forecast quarterly GDP growth during and after the Great Recession with that of the smaller-scale traditional OECD forecasting models (Ollivaud et al., 2016).
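To fix ideas, the signal-extraction logic can be sketched on made-up data. The snippet below is only a stylised illustration, not the models actually estimated in Ollivaud et al. (2016): it simulates a panel of indicators driven by one common "business cycle" factor (all coefficients invented), summarises the panel with its first principal component, and bridges that component into a GDP growth forecast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic panel: 80 quarters x 120 indicators, all driven
# by one persistent common factor plus idiosyncratic noise (made-up values).
T, N = 80, 120
factor = rng.standard_normal(T).cumsum() * 0.1
loadings = rng.standard_normal(N)
indicators = np.outer(factor, loadings) + rng.standard_normal((T, N))

# Quarterly GDP growth depends on the same factor (coefficients invented).
gdp_growth = 0.5 + 0.8 * factor + 0.2 * rng.standard_normal(T)

# Step 1: standardise the indicators and extract the first principal
# component -- one common way to compress a large dataset into a signal.
X = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
estimated_factor = X @ vt[0]

# Step 2: regress GDP growth on the estimated factor, using every
# quarter except the last one, which we pretend is still unknown.
A = np.column_stack([np.ones(T - 1), estimated_factor[:-1]])
beta, *_ = np.linalg.lstsq(A, gdp_growth[:-1], rcond=None)

# Step 3: forecast the "current" quarter from the latest factor reading.
forecast = beta[0] + beta[1] * estimated_factor[-1]
print(f"forecast: {forecast:.2f}, actual: {gdp_growth[-1]:.2f}")
```

Even in this idealised setup, the forecast is only as good as the extracted factor, which is the nub of the real-time problem discussed next.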
The main lesson of this research is that relying on more data does not mechanically improve forecast performance. This is because some economic indicators are redundant and because more data also means more noise to filter out. Identifying which variables are most relevant for the GDP forecast is tremendously difficult in real time, even though with hindsight the relationship appears obvious, as it does for financial market developments during the Great Recession.
Ollivaud et al. (2016) show that traditional OECD forecasting models based on a reduced set of 5-6 economic indicators perform similarly to the state-of-the-art models that exploit up to 150 indicators. While forecasts become more precise as more up-to-date indicators become available, forecast errors during the Great Recession are large for both types of models even around the publication date of GDP (Figure 1).
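A toy exercise on synthetic data can illustrate why piling on indicators need not help. The sketch below is not the comparison run in Ollivaud et al. (2016); it simply fits two ordinary least squares forecasting equations on invented data, one using a handful of genuinely informative indicators and one adding 50 pure-noise series, and compares their out-of-sample errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synthetic sample: 60 "training" quarters, 40 held-out quarters.
T_train, T_test = 60, 40
T = T_train + T_test

# Three genuinely informative indicators plus 50 pure-noise series
# (all coefficients are made up for illustration).
informative = rng.standard_normal((T, 3))
noise_only = rng.standard_normal((T, 50))
gdp = informative @ np.array([0.6, -0.4, 0.3]) + 0.3 * rng.standard_normal(T)

def oos_rmse(X):
    """OLS fit on the training sample, RMSE on the held-out sample."""
    A = np.column_stack([np.ones(T), X])
    beta, *_ = np.linalg.lstsq(A[:T_train], gdp[:T_train], rcond=None)
    resid = gdp[T_train:] - A[T_train:] @ beta
    return np.sqrt(np.mean(resid ** 2))

small = oos_rmse(informative)                           # few-indicator model
large = oos_rmse(np.hstack([informative, noise_only]))  # "more data" model
print(f"small model RMSE: {small:.3f}, large model RMSE: {large:.3f}")
```

With 53 regressors fitted on 60 observations, the large model chases noise in-sample and forecasts worse out of sample, which is the overfitting half of the "more data means more noise" argument; the redundancy half would require correlated indicators, omitted here for brevity.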
As was emphasised at a workshop on complexity and policy recently organised at the OECD, big data and models that include non-linear features can certainly help to better understand economic phenomena and are worth pursuing further. However, the results in Ollivaud et al. (2016) suggest that implementing this approach in practice will be a long endeavour. In the meantime, smaller and simpler models can play an important role in tracking short-term economic developments and also have the advantage that it is easier to understand what is behind any forecast revision.
Lewis, C. and N. Pain (2014), “Lessons from OECD forecasts during and after the financial crisis”, OECD Journal: Economic Studies.
Ollivaud, P., P.-A. Pionnier, E. Rusticelli, C. Schwellnus and S.-H. Koh (2016), “Forecasting GDP during and after the Great Recession”, OECD Economics Department Working Papers, No. 1313.