Forecasting: An Essential Introduction

Concise, engaging, and highly intuitive—this accessible guide equips you with an understanding of all the basic principles of forecasting

Making accurate predictions about the economy has always been difficult, as F. A. Hayek noted when accepting his Nobel Prize in economics, but today forecasters have to contend with increasing complexity and unpredictable feedback loops. In this accessible and engaging guide, David Hendry, Michael Clements, and Jennifer Castle provide a concise and highly intuitive overview of the process and problems of forecasting. They explain forecasting concepts including how to evaluate forecasts, how to respond to forecast failures, and the challenges of forecasting accurately in a rapidly changing world.
 
Topics covered: What is a forecast? How are forecasts judged? And how can forecast failure be avoided? Concepts are illustrated with real-world examples, including financial crises, the uncertainty of Brexit, and the Federal Reserve's record on forecasting. This is an ideal introduction for university students studying forecasting, for practitioners new to the field, and for general readers interested in how economists forecast.

240 pages, Paperback

Published June 11, 2019

About the author

David Hendry

Community Reviews

5 stars: 3 (21%)
4 stars: 1 (7%)
3 stars: 9 (64%)
2 stars: 1 (7%)
1 star: 0 (0%)
118 reviews · 36 followers
January 23, 2023
This is a well-written and clearly explained guide to the basics of economic forecasting from a team of some of the most respected practitioners. The discussion is kept nontechnical enough to be accessible to novices, but with enough detail to clearly describe what is meant, including a few, fairly simple, equations and quite a lot of graphs and charts. I think several chapters of it, especially in the first half, would make good supplements for an undergraduate class in forecasting, like, let's say, the one that I teach.

That said, it does have several limitations, representing a particular perspective on forecasting within the broader field.

As a minor point, the recurring metaphor of predicting driving times, while deliberately simple and stylized as a way to illustrate concepts, is strangely disconnected from the actual practice of traffic and travel-time forecasting, an established area important to logistics, ridesharing, and transportation, with its own practices and difficulties that need not line up with the simplifications used here merely for framing. Presumably the authors could have talked to people who work at Waze or Lyft or wherever to see whether the stories hold up, or at least cited some of the secondary literature on the topic.

More substantively, the last half or so of the book, which focuses on sources of nonstationarity and forecast failure and brings in the authors' own research contributions, makes quite a few claims whose generality is at least contestable. Essentially, after introducing the quite classical probabilistic and decision-theoretic approaches to forecasting in the first part of the book, it begins discussing breaks or shifts as a deviation from that paradigm. Because the discussion has centered on linear and Gaussian models, such shifts are described as changes in mean (or sometimes in growth rate). This does characterize what would happen if you re-estimated a linear or Gaussian model on data from a new distribution, but in the book all changes in distribution are described this way, and all changes in mean are described as shifts of this sort, downplaying several other plausible approaches to this kind of behavior. These include the possibility that such shifts could be folded into a larger probability model (regime-switching models are mentioned but then dismissed as performing poorly), that use of a misspecified linear model could systematically create the appearance of mean shifts when applied over time, or that a change to a new probability distribution could take quite arbitrary forms. Overfitting is barely mentioned, leading the authors to dismiss the poor performance of more complicated models that might generate this break behavior as also reflecting misspecification, neglecting the alternative that something can be closer to right even quantitatively[1] yet worsen forecast performance, a phenomenon which is absolutely first order in forecasting. (A toy simulation of the mean-shift framing follows.)
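
To make that framing concrete, here's a toy simulation of my own (not from the book; all parameter values are made up): an AR(1) whose long-run mean jumps mid-sample, forecast by someone who keeps using the pre-break mean. The post-break errors become systematically biased rather than mean-zero, which is exactly the repeated-failure pattern the book treats as the central problem.

```python
# Toy sketch (mine, not the book's): an AR(1) whose unconditional mean
# shifts mid-sample. Forecasts that keep using the pre-break mean show
# persistent post-break bias. phi, mu_pre, mu_post, and the sample sizes
# are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T, T_break = 200, 100
phi, mu_pre, mu_post = 0.5, 0.0, 2.0

y = np.zeros(T)
for t in range(1, T):
    mu = mu_pre if t < T_break else mu_post
    y[t] = mu + phi * (y[t - 1] - mu) + rng.normal()

# One-step forecasts that never learn about the shift.
fc = mu_pre + phi * (y[:-1] - mu_pre)
err = y[1:] - fc

print("mean error before break:", err[:T_break - 1].mean().round(2))
print("mean error after break: ", err[T_break - 1:].mean().round(2))
# Post-break errors center near (1 - phi) * (mu_post - mu_pre):
# a persistent bias rather than mean-zero noise.
```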

This artificially limited scope then creates a presumption in favor of the class of fixes they have on offer. The chapter on forecasting breaks is entirely qualitative: they don't describe, even in words, any quantitative procedures. Instead they offer fixes for dealing with breaks quickly ex post, based on the idea that mean shifts are the main important form of nonstationarity and cannot be forecast probabilistically. The first is working with differences, which turns a persistent mean change into a one-period rather than a repeated forecast error. This is helpful if the mean changes quickly and then re-stabilizes, which is merely one possible form of nonstationarity, and it could perform poorly under others. Essentially, the advocacy for this method must rest on an empirical claim that economic time series are frequently well described by a model that is something like integrated of order 2 (i.e., stationary after differencing twice), typically with a very specific pattern of innovations. Based on the authors' experience, I'm sure it works well for the series they've been asked to forecast. But this is a fact about economic data, not a general law about when probability models do and do not work. The other approach described, impulse indicator saturation, which adds a post-break indicator function at each point and uses a model selection procedure to retain a small subset of them, is designed for detecting jumps of this form ex post and for forecasting over the indeterminate, but possibly short, time until the next break. This is again well suited to models of the form they describe, but again not universally applicable. For what it's worth, the method is awfully close to a sparse Haar wavelet selection approach with a particular thresholding rule, and so should do a good job describing any time series well described by such a model, which is quite a lot of them given that this is a general adaptive nonparametric regression estimator: see the work of Donoho and Johnstone. Toy versions of both devices are sketched below.
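
Here is the differencing device in toy form, again my own sketch under deliberately simple assumptions (a single level shift in white noise; the sizes are made up). A levels forecast anchored to the pre-break mean misses at every post-break date; the no-change forecast y_hat_t = y_{t-1}, which is what forecasting in differences amounts to here, takes one large hit at the break and then recovers.

```python
# Sketch of the differencing fix for a single level shift. Sizes are
# hypothetical; the point is only the pattern of errors described above.
import numpy as np

rng = np.random.default_rng(1)
T, T_break, shift = 200, 100, 3.0
y = rng.normal(size=T)
y[T_break:] += shift                   # one persistent level shift

# Levels forecast: stick with the pre-break sample mean forever.
err_levels = y[1:] - y[:T_break].mean()

# Difference-style forecast: predict no change, so the forecast error
# is just the first difference of the series.
err_diff = np.diff(y)

post = slice(T_break - 1, None)        # forecast targets from the break onward
print("levels, mean |error| after break:", np.abs(err_levels[post]).mean().round(2))
print("diffs,  mean |error| after break:", np.abs(err_diff[post]).mean().round(2))
# Levels keep missing by roughly the shift; the differenced forecast
# misses big once, at the break, and is back on track thereafter.
```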
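
And a rough analogue of the saturation idea: saturate the regression with one step indicator per date and sparsely select among them. The authors' actual procedure uses general-to-specific selection (Autometrics); I substitute a lasso purely to keep the sketch self-contained, and the penalty value is arbitrary.

```python
# Rough analogue of (step-)indicator saturation: one step dummy per date,
# then sparse selection. The lasso stands in for the authors'
# general-to-specific selection; alpha = 0.5 is an arbitrary penalty.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
T, T_break, shift = 200, 100, 3.0
y = rng.normal(size=T)
y[T_break:] += shift

# Column j is a step indicator: 0 before date j, 1 from date j onward.
steps = np.tril(np.ones((T, T)))

sel = Lasso(alpha=0.5, max_iter=10_000).fit(steps, y)
kept = np.flatnonzero(np.abs(sel.coef_) > 1e-8)
print("retained step indicators switch on at dates:", kept)
# With one clean break, the retained dates should cluster near T_break = 100.
```

Framed this way, the procedure is exactly the sparse selection over a Haar-like step basis I mention above, which is why the Donoho–Johnstone literature seems like the natural reference point.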

Overall, I think there's a lot of practical experience in what they are doing. But by couching it in language arguing against a full probability approach, on vaguely Knightian grounds, they both claim too little (they have a pretty good model of economic data, if only they were willing to call it a model!) and too much (that their model is somehow the solution to or strong salve against the general problem of induction, as opposed to a contingent procedure adapted to a particular set of circumstances). I think economics would benefit from more serious work on what kinds of models can generate patterns like those they describe, along with forecasting procedures which take that full process into account, at minimum because it could probe the limitations more seriously. One might still have a role for non-probability approaches, but beyond the linear setting they will require newer and stranger forms.

[1]: They do discuss why causally correct models may not make good forecasting procedures, but this is a distinct phenomenon, relating to the population rather than the sample distribution of procedures.
