Because economists build models on unrealistic assumptions, the public routinely loses faith in economists. Or at least that’s the sense I get from the media.
However, I think as long as model predictions are accurate, people will tolerate some unrealistic assumptions.
“Physics Envy” (a phrase so Freudian, Sigmund himself would be impressed) aside, Newton’s law of gravitation is, strictly speaking, wrong: massless objects such as photons are deflected by gravity, something a force law proportional to mass cannot accommodate. Nevertheless, Newton’s law is a work rarely equaled in genius or utility.
Economists could get away with being Newton. But they still have a way to go. To paraphrase Fukuyama, they should solicit criticism in order to “get to Newton.”
But let’s be real. Non-economists’ critiques are often incoherent. I often hear things like “economists don’t account for the fact that the market is broken in their forecasts.” Broken? In what way? According to whom? The answer, of course, is people losing money.
Luckily, the adoption of machine learning by economists is on the rise as they seek to make better forecasts. And yet, not everyone is happy.
As one person I respect told me, the embrace of machine learning is a tacit admission that economics “doesn’t work.”
But if economists are to consider themselves some form of scientist, isn’t this exactly what they should be doing? Working towards better models?
Fortunately, pivots in econometric methodology have historically led to better economic models. The demise of large Keynesian simultaneous-equation models ultimately yielded a vibrant literature producing VARs, DSGEs, dynamic factor models, and more. Models ought to always be improving.
Now, it’s machine learning’s turn to save economics.
My enthusiasm for the Macro ML revolution is stoked above all by Philippe Goulet Coulombe’s (PGC) work on the Macro Random Forest (MRF) and now the Hemisphere Neural Network (HNN).
The new HNN paper opens with an informative example at the heart of economic theory: the Phillips Curve (PC).
Walking through the Covid era recursively (to ensure the model does not overfit the Covid data ex post), PGC finds, among other things, that today’s output gap is not near zero, contrary to what existing approaches imply. He also finds that the PC coefficient on the output gap has risen steadily since 1990, contradicting the popular narrative that the PC is dead.
More generally, HNN is a new way to estimate latent economic variables that we don’t actually observe (e.g., the output gap, neutral interest rates, term premia). HNN improves upon existing methods in a number of ways.
- It dispenses with restrictive law of motion assumptions on model parameters and latent states. State-space approaches often impose that factors or coefficients evolve according to a random walk or an arbitrary AR process.
- Its output layer is linear. So, despite all the non-linear ML underneath, the final end-product is fully interpretable.
- Important predictors of the latent states can be identified, allowing for more interpretability over and above both traditional machine learning and factor-based econometric methods.
- It allows for a novel sense of volatility that addresses known weaknesses in the widely-used GARCH and SV approaches.
- As an extension of the previous point, the model can predict its own demise. It will explicitly tell you in real time if its predictions are uncertain.
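To make the “non-linear inside, linear at the output” idea concrete, here is a minimal sketch of a hemisphere-style decomposition. It is not PGC’s implementation: the two predictor groups, the synthetic target, and the use of fixed random tanh features (in place of trained networks) are all my assumptions for illustration. The point is only that when each group of predictors passes through its own non-linear transformation and the pieces are combined *linearly*, each group’s fitted contribution can be read off additively.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two hypothetical "hemispheres" of predictors driving the target,
# e.g. output-gap-related variables and expectations-related variables.
n = 500
X_gap = rng.normal(size=(n, 3))
X_exp = rng.normal(size=(n, 3))
y = np.sin(X_gap[:, 0]) + 0.5 * X_exp[:, 1] + 0.1 * rng.normal(size=n)

def random_features(X, k, rng):
    """Non-linear basis: tanh of random projections (a stand-in for a trained net)."""
    W = rng.normal(size=(X.shape[1], k))
    b = rng.normal(size=k)
    return np.tanh(X @ W + b)

# Each hemisphere maps its own inputs through a non-linear transformation...
H_gap = random_features(X_gap, 50, rng)
H_exp = random_features(X_exp, 50, rng)

# ...but the output layer is linear, so the fitted contribution of each
# hemisphere is an additive, interpretable component of the prediction.
H = np.hstack([H_gap, H_exp])
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

contrib_gap = H_gap @ beta[:50]   # hemisphere-level component 1
contrib_exp = H_exp @ beta[50:]   # hemisphere-level component 2
y_hat = contrib_gap + contrib_exp

mse = np.mean((y - y_hat) ** 2)
```

The linearity of the last step is what buys interpretability: `contrib_gap` and `contrib_exp` sum exactly to the prediction, so each hemisphere’s role can be inspected over time, which is how latent objects like an output gap can be extracted without imposing a law of motion on them.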
I believe PGC’s recent work is an important step forward in the ML-meets-macroeconometrics literature. MRF and HNN directly, intuitively, and effectively address many concerns about macro models head-on using clever modifications of widely-known ML approaches. I am looking forward to future developments from PGC and many others. And so should you. For the sake of economics.