
How Large Language Models Work

Learn how large language models like GPT and Gemini work under the hood in plain English.

How Large Language Models Work translates years of expert research on Large Language Models into a readable, focused introduction to working with these amazing systems. It explains clearly how LLMs function, introduces the optimization techniques to fine-tune them, and shows how to create pipelines and processes to ensure your AI applications are efficient and error-free.

In How Large Language Models Work you will learn how to:

• Test and evaluate LLMs
• Use human feedback, supervised fine-tuning, and Retrieval Augmented Generation (RAG)
• Reduce the risk of bad outputs, high-stakes errors, and automation bias
• Design human-computer interaction systems
• Combine LLMs with traditional ML

How Large Language Models Work is authored by top machine learning researchers at Booz Allen Hamilton, including researcher Stella Biderman, Director of AI/ML Research Drew Farris, and Director of Emerging AI Edward Raff. They lay out how LLM and GPT technology works in plain language that’s accessible and engaging for all.

About the Technology

Large Language Models put the “I” in “AI.” By connecting words, concepts, and patterns from billions of documents, LLMs are able to generate the human-like responses we’ve come to expect from tools like ChatGPT, Claude, and DeepSeek. In this informative and entertaining book, the world’s best machine learning researchers from Booz Allen Hamilton explore foundational concepts of LLMs, their opportunities and limitations, and best practices for incorporating AI into your organization and applications.

About the Book

How Large Language Models Work takes you inside an LLM, showing step-by-step how a natural language prompt becomes a clear, readable text completion. Written in plain language, it explains how LLMs are created, why they make errors, and how you can design reliable AI solutions. Along the way, you’ll learn how LLMs “think,” how to design LLM-powered applications like agents and Q&A systems, and how to navigate the ethical, legal, and security issues they raise.

What’s Inside

• Customize LLMs for specific applications
• Reduce the risk of bad outputs and bias
• Dispel myths about LLMs
• Go beyond language processing

About the Readers

No knowledge of ML or AI systems is required.

About the Author

Edward Raff, Drew Farris, and Stella Biderman are, respectively, the Director of Emerging AI, the Director of AI/ML Research, and a machine learning researcher at Booz Allen Hamilton.

Table of Contents

1 Big picture: What are LLMs?
2 How large language models see the world
3 How inputs become outputs
4 How LLMs learn
5 How do we constrain the behavior of LLMs?
6 Beyond natural language processing
7 Misconceptions, limits, and emergent abilities of LLMs
8 Designing solutions with large language models
9 Ethics of building and using LLMs

324 pages, Kindle Edition

Published July 22, 2025




Community Reviews

5 stars: 4 (28%)
4 stars: 5 (35%)
3 stars: 4 (28%)
2 stars: 1 (7%)
1 star: 0 (0%)
December 11, 2025
High on fluff and short on deep insights. This is apparently not intended for a technical audience. For one thing, it consistently conflates the core LLM framework (based on next-token prediction) with so-called "agentic" models (which may well be built on top of LLMs) that have access to novel inputs, can take actions in the world, and learn via reinforcement; these are very different beasts.

There are also issues such as a hyper-focus on tokenization as a representation problem that prevents LLMs from adapting to new symbols, while ignoring the fact that the embedding space allows base tokens to carry additional semantics. In the worst case of new (emoji) symbols appearing in a text at inference time, the raw bytes will simply appear concatenated (i.e., this just looks like a new "word," moreover with surrounding context that strongly suggests it is an emoji and may even carry hints as to its semantics due to the character range it falls in). This is no different from having the LLM train on "word parts" in the first place and then guess at the meaning when it sees multiple parts smashed together into a word it hasn't previously seen.
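
To illustrate that byte-fallback point, here is a minimal sketch using the GPT-2 byte-level BPE vocabulary via the tiktoken library; the tokenizer and library are my own choices for illustration, not anything the book presents.

import tiktoken

# GPT-2's byte-level BPE vocabulary, chosen only as a convenient, well-known example.
enc = tiktoken.get_encoding("gpt2")

for text in ["llama", "🦙", "a new 🦙 appears"]:
    ids = enc.encode(text)
    pieces = [enc.decode_single_token_bytes(i) for i in ids]
    # The unseen emoji is not dropped: its UTF-8 bytes are spread across a few
    # sub-word tokens, much as an unfamiliar word is split into known word parts.
    print(f"{text!r} -> {len(ids)} tokens: {pieces}")

Running this shows the emoji surviving as a short run of byte-level tokens rather than disappearing or breaking the input.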

The lack of nuance with which the authors treat various AI risks (including existential risk) is also disappointing. This book is concurrently optimistic about the risks of this technology _and_ pessimistic about its upsides (and I'm no LLM booster).

AI/ML practitioners won't find much of interest here, nor will philosophers or AI ethicists interested in some of the thornier questions.
August 5, 2025
This book really surprised me. I started reading it with a bit of skepticism about AI, but I had to change my mind — the authors did a great job explaining how an LLM works and how I can actually use it in my daily activities. It’s not the easiest read, especially for someone like me who doesn’t have much experience with the topic, but it’s definitely a great resource for anyone who wants to start exploring the world of AI.
August 21, 2025
An excellent summary of how LLMs work
