Large language models (LLMs) are revolutionizing the world, promising to automate tasks and solve complex problems. A new generation of software applications is using these models as building blocks to unlock new potential in almost every domain, but reliably accessing these capabilities requires new skills. This book will teach you the art and science of prompt engineering: the key to unlocking the true potential of LLMs.
Industry experts John Berryman and Albert Ziegler share how to communicate effectively with AI, transforming your ideas into a language model-friendly format. By learning both the philosophical foundation and practical techniques, you'll be equipped with the knowledge and confidence to build the next generation of LLM-powered applications.
- Understand LLM architecture and learn how to best interact with it
- Design a complete prompt-crafting strategy for an application
- Gather, triage, and present context elements to make an efficient prompt
- Master specific prompt-crafting techniques like few-shot learning, chain-of-thought prompting, and RAG (a minimal few-shot sketch follows this list)
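To give a feel for what one of the listed techniques looks like in practice, here is a minimal few-shot prompting sketch against the OpenAI chat API. The model name, classification task, and example reviews are placeholders for illustration, not material taken from the book.

```python
# Minimal few-shot prompting sketch using the OpenAI Python client.
# Model name, task, and examples are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot_examples = [
    {"role": "user", "content": "Review: 'Loved it, clear and practical.'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: 'Too abstract, not worth the time.'"},
    {"role": "assistant", "content": "negative"},
]

messages = (
    [{"role": "system",
      "content": "Classify the sentiment of each review as 'positive' or 'negative'."}]
    + few_shot_examples
    + [{"role": "user", "content": "Review: 'Uneven but packed with useful advice.'"}]
)

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```

The examples are supplied as prior user/assistant turns, so the model imitates the demonstrated pattern rather than being told the rules explicitly.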
John Allyn Berryman (originally John Allyn Smith) was an American poet, born in McAlester, Oklahoma. He was a major figure in American poetry in the second half of the 20th century and often considered one of the founders of the Confessional school of poetry. He was the author of The Dream Songs, which are playful, witty, and morbid. Berryman committed suicide in 1972.
A pamphlet entitled Poems was published in 1942 and his first proper book, The Dispossessed, appeared six years later. Of his youthful self he said, 'I didn't want to be like Yeats; I wanted to be Yeats.' His first major work, in which he began to develop his own unique style of writing, was Homage to Mistress Bradstreet, which appeared in Partisan Review in 1953 and was published as a book in 1956. Another pamphlet, His Thought Made Pockets & the Plane Buckt, followed.
It was the collection called Dream Songs that earned him the most admiration. The first volume, entitled 77 Dream Songs, was published in 1964 and won the Pulitzer Prize for poetry. The second volume, entitled His Toy, His Dream, His Rest, appeared in 1968.
The two volumes were combined as The Dream Songs in 1969. By that time Berryman, though not a "popular" poet, was well established as an important force in the literary world, and he was widely read among his contemporaries. In 1970 he published the drastically different Love & Fame. It received many negative reviews, along with a little praise, most notably from Saul Bellow and John Bailey. Despite its negative reception, its colloquial style and sexual forthrightness have influenced many younger poets, especially from Britain and Ireland. Delusions Etc., his bleak final collection, which he prepared for printing but did not live to see appear, continues in a similar vein. Another book of poems, Henry's Fate, culled from Berryman's manuscripts, appeared posthumously, as did a book of essays, The Freedom of the Poet, and some drafts of a novel, Recovery.
The poems that form Dream Songs involve a character who is by turns the narrator and the person addressed by a narrator. Because readers assumed that these voices were the poet speaking directly of himself, Berryman's poetry was considered part of the Confessional poetry movement. Berryman, however, scorned the idea that he was a Confessional poet.
A good book on prompt engineering. When a field evolves quickly, writing a good book is hard, because you never know what will be out of date by the time the book is published. The authors concentrate on the fundamentals, i.e., that an LLM is a completion engine, and provide useful advice on how to navigate prompt engineering. As I was part of a project building an LLM app over the last year or so, I can vouch for the usefulness of the advice. Naturally, things will evolve, but I think the usefulness of this book will stay. It is one of those rare books that teach you how to fish instead of just giving you the fish.
“Prompt Engineering” is a book that makes it clear from the very beginning who it is written for. And that’s good — because this is not a handbook for a casual chat user who just wants to “talk better with AI.” It’s primarily a book for developers and prompt engineers who actually build on top of models and need to understand how they work, what their limitations are, and what good practices look like so they can consistently produce better prompts. A typical user will quickly feel overwhelmed — and understandably so, because a big part of the material requires solid familiarity with LLMs, including API-level concepts.
Let’s start with the less pleasant aspects. There are sections that feel overcomplicated or unnecessarily lengthy. And occasionally you stumble upon strange phrases like “the common sense of the LLM,” which don’t quite fit.
But once you get past these weaker parts, you’ll notice that the authors have a thoughtful and transparent approach. They discuss different techniques, presenting both their strengths and weaknesses. This helps the reader understand exactly where certain recommendations come from. It builds real awareness and skill — not just “apply this rule,” but why you should apply it.
A big plus goes to the visual examples — many of the discussed concepts are illustrated, and it works extremely well. Ideas that initially sound abstract suddenly become intuitive. There are also exercises that force you to think and help you organize the knowledge.
I also appreciate the authors’ approach: before you start crafting prompts, you must first understand how the LLM works. Why it behaves the way it does. Where incorrect answers come from. How tokenizers influence prompt precision. It sounds academic, but in practice it makes prompt work significantly easier. At first, it may feel overwhelming — especially since the introduction is quite long — but I know this pain well; the same thing happens during the training sessions I run.
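As a concrete illustration of the tokenizer point, the short sketch below uses the tiktoken library to show how a prompt actually splits into tokens; the encoding name is an assumption and the snippet is not from the book.

```python
# Sketch: inspecting how a prompt is tokenized with tiktoken.
# The encoding name ("cl100k_base") is an assumption; models differ.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
prompt = "Summarize the review in one sentence."
tokens = enc.encode(prompt)

print(len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])  # see where the token boundaries fall
```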
Some elements are genuinely fresh, for example, the dynamic construction of system prompts. Honestly, I haven’t seen this discussed in other publications yet.
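To make that idea concrete, here is a rough sketch of what dynamically constructing a system prompt per request can look like; the product name, fields, and template text are hypothetical, not taken from the book.

```python
# Sketch of assembling a system prompt at request time from context.
# All names and wording here are made-up illustrations.
from datetime import date

def build_system_prompt(user_name: str, plan: str, docs: list[str]) -> str:
    parts = [
        "You are a support assistant for Acme Cloud.",
        f"Today's date is {date.today().isoformat()}.",
        f"The user is {user_name}, on the '{plan}' plan.",
    ]
    if docs:
        parts.append("Relevant documentation snippets:")
        parts.extend(f"- {snippet}" for snippet in docs)
    parts.append("Answer only from the information above; say so if it is missing.")
    return "\n".join(parts)

print(build_system_prompt("Ada", "pro",
                          ["Pro plan includes SSO.", "Quota resets monthly."]))
```

The point is that the "static" system prompt becomes a function of the current user, time, and retrieved context rather than a fixed string.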
A big plus for the references to research papers. They make it easy to trace the sources and explore topics more deeply, which is especially valuable in such a rapidly evolving field.
Unfortunately, the book is partially outdated. You can see a strong focus on GPT-3, which makes some details obsolete today. There are suggestions that simply no longer apply (e.g., regarding “echo”). On the other hand, this also highlights which concepts turned out to be timeless. The section on tool calling, for example, doesn’t mention MCP, but it essentially describes the mechanism that MCP implements. So even though the book discusses older technologies, that doesn’t diminish its practical value.
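For context, the plain tool-calling loop being referred to looks roughly like the sketch below when done directly against the OpenAI chat API, with no MCP involved; the get_weather tool, its result, and the model name are made-up placeholders, and the sketch assumes the model does decide to call the tool.

```python
# Sketch of a bare function/tool-calling round trip with the OpenAI chat API.
# The weather tool and its "result" are stand-ins for a real implementation.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)

# Assume the model chose to call the tool; read its requested arguments.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = f"Sunny, 18 C in {args['city']}"  # stand-in for a real weather API call

# Feed the tool result back so the model can produce the final answer.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": result})
final = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)
print(final.choices[0].message.content)
```

Protocols like MCP standardize how tools are discovered and invoked, but the underlying request/response loop is the same as above.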
Summary: This is an uneven book — at times too broad, sometimes overly verbose — but still packed with concrete, practical, and well-explained guidance. If you build prompts professionally, it will give you a solid foundation. If you use LLMs casually, this isn’t the book for you. But for an engineer: absolutely worth it.
Good book on prompt engineering. Well written and accessible to people who aren’t deep software developers; however, the book is targeted towards writing applications that leverage LLMs. That said, there are takeaways for people who manage the development of these applications as well as those who are playing around with LLMs for fun.
Great explanation of how LLMs are built and how to prompt the models. It is clear that prompt engineering is the future of both analysis and development in software engineering and this book nicely explains the process on all possible levels.
Delivers a practical overview of building LLM applications with a special focus on OpenAI systems. Has useful information on how to best get language models to act in more aligned ways. Very realistic about the capacities and limitations of models.
Very good, extremely useful. I have been working with LLMs for a while now, but I had no idea how predictions work with ChatML, among many other things.
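For anyone similarly curious, the sketch below shows roughly how chat messages get flattened into a ChatML-style string that the model then completes token by token; the exact special tokens vary by model, so treat this as an illustration rather than any particular model's format.

```python
# Rough illustration of flattening chat messages into a ChatML-style prompt.
# Special tokens follow OpenAI's published ChatML; real models may differ.
def to_chatml(messages: list[dict]) -> str:
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # The prompt ends with an open assistant turn; the completion engine
    # predicts tokens from here until it emits <|im_end|>.
    return text + "<|im_start|>assistant\n"

print(to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Name one prompt-engineering technique."},
]))
```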
A book about prompt engineering that contains nothing about prompt engineering. Instead, the second part of the title is the clue: it is about building LLM-based applications. Unfortunately, it does this in such an abstract and general way that it also lacks practicality.