Time Series Forecasting Using Foundation Models is a timely and practical book that focuses on how modern foundation models—primarily transformer-based architectures—are being applied to time-series forecasting in real-world settings.
The author does a solid job explaining why naïve or “vanilla” transformers historically performed poorly on forecasting benchmarks, and then methodically walks through the architectural adaptations that make large-scale models like TimeGPT, Chronos, Moirai, and TimesFM viable in practice. Concepts such as patching, positional encoding for time series, zero-shot forecasting, fine-tuning, and handling exogenous variables are explained clearly, with enough depth to be useful without becoming overly theoretical.
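To make the patching idea concrete, here is a minimal sketch (mine, not from the book) of how a univariate series might be split into fixed-length patches before being fed to a transformer encoder. The patch length and stride are illustrative defaults; real patch-based forecasters (PatchTST-style models, and the patch embeddings in models such as TimesFM) add normalization and learned projections on top of this step.

```python
import numpy as np

def make_patches(series: np.ndarray, patch_len: int = 16, stride: int = 8) -> np.ndarray:
    """Split a 1-D series into (possibly overlapping) fixed-length patches.

    Each patch becomes one "token" for a transformer encoder, which is the
    core idea behind patch-based time-series forecasters.
    """
    n_patches = 1 + (len(series) - patch_len) // stride
    return np.stack(
        [series[i * stride : i * stride + patch_len] for i in range(n_patches)]
    )

# Toy example: 128 hourly observations -> 15 patches of length 16 (stride 8).
y = np.sin(np.linspace(0, 12 * np.pi, 128)) + 0.1 * np.random.randn(128)
patches = make_patches(y, patch_len=16, stride=8)
print(patches.shape)  # (15, 16)
```

Treating whole patches as tokens shortens the effective sequence length and lets attention operate over local windows rather than individual timesteps, which is one of the adaptations the book credits for making transformers competitive on forecasting tasks.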
A clear strength of the book is its practitioner lens. The examples reflect real operational constraints: heterogeneous datasets, multiple frequencies, long horizons, and the need for generalization across domains. The discussion of when foundation models fail to outperform simpler statistical or linear baselines is also refreshingly honest.
That said, readers should be aware of the author’s industry context: the book is written by a practitioner from Nixtla, the creators of TimeGPT. While this does introduce a natural emphasis on transformer-based foundation models, the technical content remains rigorous and the limitations are acknowledged rather than glossed over.
This is not a beginner’s book on forecasting, nor a replacement for classical time-series texts. It is best suited for practitioners who already understand forecasting fundamentals and want to evaluate modern foundation-model approaches critically and pragmatically.
Recommended for: applied ML practitioners, data scientists, and architects interested in modern time-series forecasting at scale.