Learn how to put Large Language Model-based applications into production safely and efficiently.
This practical book offers clear, example-rich explanations of how LLMs work, how you can interact with them, and how to integrate them into your own applications. Find out what makes LLMs so different from traditional software and ML, discover best practices for taking them out of the lab, and dodge common pitfalls with expert advice.
In LLMs in Production you will:
• Grasp the fundamentals of LLMs and the technology behind them
• Evaluate when to use a premade LLM and when to build your own
• Efficiently scale up an ML platform to handle the needs of LLMs
• Train LLM foundation models and finetune an existing LLM
• Deploy LLMs to the cloud and edge devices using techniques like PEFT and LoRA (sketched below)
• Build applications leveraging the strengths of LLMs while mitigating their weaknesses
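Since PEFT and LoRA recur throughout the book, here is a minimal sketch of what adapter-based finetuning looks like with Hugging Face's peft library; the base model and hyperparameters are our illustrative choices, not the book's.

```python
# Minimal LoRA finetuning setup with Hugging Face's peft library.
# "gpt2" and the hyperparameters below are illustrative choices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the small adapter is trainable
```

Because only the adapter weights train, the same base model can be finetuned for many tasks at a fraction of the cost of full finetuning.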
LLMs in Production delivers vital insights into LLMOps so you can smoothly guide an LLM to production use. Inside, you’ll find practical guidance on everything from acquiring an LLM-suitable training dataset to building a platform and compensating for an LLM’s immense size, plus tips and tricks for prompt engineering, retraining and load testing, handling costs, and ensuring security.
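As a flavor of the prompt-engineering material, a few-shot prompt is often just careful string assembly. A minimal sketch; the task, examples, and template wording are ours, not the book's:

```python
# A minimal few-shot prompt template; examples are illustrative.
EXAMPLES = [
    ("The refund took three weeks.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def build_prompt(text: str) -> str:
    # Each demonstration shows the model the expected output format.
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in EXAMPLES)
    return (
        "Classify the sentiment of each review.\n\n"
        f"{shots}\nReview: {text}\nSentiment:"
    )

print(build_prompt("The model card was missing."))
```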
Foreword by Joe Reis.
About the technology
Most business software is developed and improved iteratively, and can change significantly even after deployment. By contrast, because LLMs are expensive to create and difficult to modify, they require meticulous upfront planning, exacting data standards, and carefully executed technical implementation. Integrating LLMs into production products impacts every aspect of your operations plan, including the application lifecycle, data pipeline, compute cost, security, and more. Get it wrong, and you may have a costly failure on your hands.
About the book
LLMs in Production teaches you how to develop an LLMOps plan that can take an AI app smoothly from design to delivery. You’ll learn techniques for preparing an LLM dataset, cost-efficient training hacks like LoRA and RLHF, and industry benchmarks for model evaluation. Along the way, you’ll put your new skills to use in three exciting example projects: creating and training a custom LLM, building a VSCode AI coding extension, and deploying a small model to a Raspberry Pi.
What's inside
• Balancing cost and performance
• Retraining and load testing
• Optimizing models for commodity hardware (see the quantization sketch below)
• Deploying on a Kubernetes cluster
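"Optimizing models for commodity hardware" usually means quantization. One common route, and an assumption on our part rather than the book's exact recipe, is loading in 4-bit via transformers and bitsandbytes:

```python
# Loading a model in 4-bit to fit commodity GPUs; illustrative only.
# Requires the bitsandbytes and accelerate packages plus a CUDA device.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",      # illustrative model choice
    quantization_config=bnb,  # weights stored in 4-bit precision
    device_map="auto",        # place layers across available devices
)
```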
About the reader
For data scientists and ML engineers who know Python and the basics of cloud deployment.
About the author
Christopher Brousseau and Matt Sharp are experienced engineers who have led numerous successful large-scale LLM deployments.
The authors have put real effort into choosing topics that build foundational knowledge and then stretching that knowledge into a useful product. I liked how they explain simple concepts and expand them into designing and building an AI product. The book offers sound knowledge of LLMs: comparisons of different LLMs, when to use an existing LLM, how to train and fine-tune existing LLMs, how to scale up an ML platform, and finally how to deploy LLMs to the cloud. Given the expansive nature of LLMs, a thorough understanding of the path from AI product design to deployment helps reduce iterations on an AI product, and the book covers the complete lifecycle of LLM products. It can be very useful for AI product designers and developers, MLOps and business development teams, and newcomers to the field who want to understand integrating LLMs into production.
The book discusses when to use LLMs and when not to, along with buy-vs.-build considerations. It explores the challenges of deploying LLMs to production, including long deployment times, costs, and security concerns. Topics include evaluating LLMs and preparing training data, data annotation, and common training-data challenges.
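For a concrete sense of what "evaluating LLMs" can mean at its simplest, perplexity on held-out text is a common baseline. A minimal sketch; the model and sample text are our placeholders, not the book's benchmark suite:

```python
# Perplexity of a causal LM on a text sample; illustrative only.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Production LLMs must be evaluated before deployment."
ids = tok(text, return_tensors="pt").input_ids
with torch.no_grad():
    loss = model(ids, labels=ids).loss  # mean cross-entropy per token
print(f"perplexity = {math.exp(loss.item()):.2f}")  # lower is better
```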
The book compares different strategies for storing LLMs.
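One concrete decision in that comparison is serialization format, for instance saving weights as safetensors rather than pickle. A minimal sketch, assuming the Hugging Face stack; the directory path is illustrative:

```python
# Saving and reloading a model with safetensors serialization;
# the storage path is an illustrative placeholder.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.save_pretrained("./model-store/gpt2", safe_serialization=True)
reloaded = AutoModelForCausalLM.from_pretrained("./model-store/gpt2")
```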
It covers the implementation of rate limiting, access keys, and K8s setup, including Seldon for ML model management.
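As a taste of that serving-infrastructure material, a token bucket is one standard rate-limiting approach. This is our minimal sketch, not the book's Seldon or Kubernetes configuration, and all parameters are illustrative:

```python
# A minimal token-bucket rate limiter; parameters are illustrative.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1  # spend one token per request
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 req/s, bursts of 10
print(bucket.allow())
```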
It discusses prompting, application development with chat history, and challenges of running LLMs on the edge.
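Carrying chat history between turns largely comes down to accumulating a message list. A minimal sketch assuming an OpenAI-compatible endpoint; the model name and system prompt are our placeholders:

```python
# Carrying chat history across turns via an OpenAI-compatible API.
# The model name is illustrative; OPENAI_API_KEY must be set.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a concise assistant."}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=history
    )
    answer = reply.choices[0].message.content
    # Append the reply so the next turn sees the full conversation.
    history.append({"role": "assistant", "content": answer})
    return answer
```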
Additionally, the book walks through implementing and productionizing Llama 3, creating a coding copilot, and deploying a model on a Raspberry Pi. It also touches on legal implications.
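Running a small model on a Raspberry Pi typically means a quantized GGUF file served through llama.cpp. A minimal sketch using the llama-cpp-python bindings; the model file and parameters are assumptions, not the book's exact artifact:

```python
# Running a small quantized model on CPU-only hardware such as a
# Raspberry Pi; the GGUF file path is an illustrative placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./tinyllama-q4.gguf", n_ctx=512, n_threads=4)
out = llm("Q: What is an LLM? A:", max_tokens=48, stop=["Q:"])
print(out["choices"][0]["text"])
```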
It’s clear that the writers are very knowledgeable. But to me the book reads more like a “jazz jam” on topics they enjoy, like the history of linguistics or deploying an LLM on a Raspberry Pi. There is a whole chapter on the Raspberry Pi and several subsections in other chapters, but only six pages on hallucinations. And in many cases, instead of explaining a concept or algorithm, they just provide a code listing with a few comments.
This is an excellent book for understanding the fundamentals of how LLMs work and how to go about using LLMs to develop and deploy production-grade applications. The hands-on exercises make it very easy and a lot of fun to grasp the concepts discussed in the book.