
Systemantics: How Systems Work and Especially How They Fail

Hardcover published by Quadrangle/The New York Times Book Co., third printing, August 1977; copyright 1975.

192 pages, Hardcover

First published January 1, 1977

656 people are currently reading
6631 people want to read

About the author

John Gall

52 books · 37 followers
John Gall (September 18, 1925 - December 15, 2014) was an American author and retired pediatrician. Gall is known for his 1975 book General systemantics: an essay on how systems work, and especially how they fail..., a critique of systems theory. One of the statements from this book has become known as Gall's law.

Gall began his studies at St. John's College in Annapolis, Maryland. He received further medical training at George Washington University Medical School in Washington, D.C., and at Yale. In the early 1960s he completed his pediatric training at the Mayo Clinic in Rochester, Minnesota.

In the 1960s Gall began practicing as a pediatrician in Ann Arbor, Michigan, and joined the faculty of the University of Michigan. He retired in 2001 after more than forty years of private practice. In the first decades of his practice he also "conducted weekly seminars in Parenting Strategies for parents, prospective parents, medical students, nursing students, and other health care practitioners." Until 2001 he held the position of Clinical Associate Professor of Pediatrics at the University of Michigan, and he had been a Fellow of the American Academy of Pediatrics since 1958.

After he retired, Gall and his wife Carol A. Gall moved to Walker, Minnesota, where he continued writing and published seven more titles. He died in December 2014.

Ratings & Reviews



Community Reviews

5 stars: 399 (38%)
4 stars: 341 (33%)
3 stars: 185 (18%)
2 stars: 71 (6%)
1 star: 30 (2%)
John
327 reviews · 33 followers
December 25, 2016
I think what this book demonstrates is that a certain kind of common sense isn't sense at all, but rather a cynical tyranny of half-truths. It is disingenuous, in that it attempts to borrow the prestige of technical language while also writing in a register of humor, so that any attempt to see past it provokes the guard reflex of not being in on the joke. Another frequently used convention is capitalizing words to make conceptual entities seem justified, well-known, and cohesive, while offering no definition or justification of those entities as common phenomena. For example, I could call it Sophistical Trickery to make such a move, but I really don't mean anything more than sophistical trickery, no matter how much the emphasis seems to imply. Beyond the matter of style, however, there is what this little book of systems ethics forgot: what it is good to do, given the state of living within systems.

Such is the dogmatism of this volume that I fear this negative review might cause partisans to label me as some kind of "systems thinker" or "change agent", for which the book spares no tar. What I am an apologist for is straightforward speaking, careful scholarship, and thoughtful analysis. The great tragedy of this book is that in spreading the tar around it covers good intuitions with bad argument.

There are good intuitions; I might say that there is a lot of half-truth to be found here, and in particular the last sections, which suggest guidelines, are useful. Here is what is useful to know about this book: don't build systems, but instead solve the problems you encounter directly. I think this is right, but let me say something further: don't be afraid to build frameworks, to build guidelines, to build processes, to nonetheless undertake design and undertake it creatively. You should resolve the matters in front of you as straightforwardly and conclusively as possible, but with all of the preparation, care, and design you think is truly appropriate to the task.

The author notes that everything is a system, and all systems attempt to preserve themselves: therefore, we are systems and our obligation is our own care. We are within a profusion of systems, so it is for us to preserve the practices that demonstrate their capacity to care for us, and don't just do things because they are there to do, but because they genuinely promote flourishing as we see it.
Kaspar
8 reviews
March 25, 2020
A simple and brilliant work that you'll probably misunderstand.

Gall, with wit and concision, advocates an attitude and mindset of deep humility and skepticism when dealing with systems. The problem is that this book really is the Tao of Systems Thinking. To receive its wisdom and recognize its profound depth, to grasp even the need for systems-skepticism, the reader should expect to meditate on these aphorisms for days, months or even years. As an intro to systems thinking, it's not very good or useful at all. It might infuriate the reader or leave them completely indifferent. But if the reader already has substantial experience with systems thinking, Gall will elevate and organize their understanding by validating their intuitions, offering useful heuristics and checking their ego.
Sai
97 reviews · 12 followers
January 21, 2018
Some parts systems theory, some parts psychology. The author has a quirky writing style and a consistently dry sense of humor, which I enjoyed, but I can't see it being to everyone's liking.

This book reads as if a shaman were educating you about complex systems. Very pithy, but it can also come across as not rigorous enough.

I wish the author had tackled more post-failure scenarios and how to deal with messes; I would have been happy to add that 5th star. The author briefly touches on system resiliency, but a full-blown treatment would have made the book more complete, IMO.

Worth picking up after finishing Thinking in Systems by Donella Meadows and before reading Antifragile by Taleb.
Otto Lehto
475 reviews · 233 followers
May 24, 2019
Do not take this book very seriously. It is a quirky little comedy essay about General Systems Theory. There is really nothing to compare it to, so I really have no idea how to rate it...

Although it lacks any kind of scientific rigour or empirical accuracy, it does a pretty good job of explaining the basics of how (complex) systems work. It does so in a surreptitious way by simplifying the science behind systems theory and complexity theory into pithy slogans and anecdotes. (This is bad practice that usually leads to terrible results, but somehow not here.)

The book is pleasantly short to read (it can be finished in an hour or two) and contains enough ideas to give the reader an idea of how to think skeptically and cynically about the powers of prediction and control that systems-builders and systems-reformers always claim to possess. (Spoiler alert: they don't.)

I would not rely on the book if I wanted to find out the best science of systems. It is mostly factually dubious, rhetorically outrageous, and severely biased. But I liked it quite a bit for what it is.
Ushan
801 reviews · 77 followers
December 27, 2010
A cross between Dilbert, Dao De Jing and Charles Perrow's Normal Accidents. Large technological and social systems lose track of their original purpose and turn self-serving; they do not function as designed because their creators forgot Le Chatelier's principle and were unaware of various feedback loops. The process of observing the systems changes them. Passive safety is better than active safety; when used mindlessly, safety devices and procedures themselves become safety hazards.

The examples of systems gone bad are great. An enormous hangar designed to house space rockets and protect them from the elements generates its own weather, so it can rain inside it upon the rockets. When the Fermi I experimental breeder reactor experienced partial meltdown, radioactive sodium was drained and a complicated periscope and pincers were lowered into it; it was found that a foreign object blocked the flow of sodium; the object was later identified as a safety device installed at the very last moment and not documented (Perrow also tells this story; Gall is mistaken in calling it an anti-meltdown device: this would've been too cute). A Peruvian railroad replaced its steam locomotives with diesel ones; they discovered after the fact that diesel locomotives lose most of their power at Andean altitudes, unlike steam ones, but instead of going back to steam, the Peruvians used two 3000hp diesel locomotives where one 1300hp steam locomotive sufficed before. The Nile used to flood annually and fertilize the Egyptian fields; Nasser built the Aswan dam, which stopped the flooding; the dam produces electricity, which is used to make artificial fertilizer (J. R. McNeill also tells this story in his environmental history of the twentieth century). The examples of ignored feedback are also nice. The Green Revolution caused third-worlders to go hungry as before - but at much higher population densities. Widespread application of antibiotics caused antibiotic-resistant germs to emerge. On the other hand, Washington D.C.'s international airport has a better-than-average safety record despite its hazardous features: "It was safe because it was bad. It kept pilots alert".

Every engineer could cite many examples of systems gone bad. So could everybody interested in politics. I wonder if politically Gall is a Reaganite; certainly his book made me think of Reagan's famous remark, "My friends, some years ago, the Federal Government declared war on poverty, and poverty won." My favorite political example is the agricultural policy of the USA and the EU. The United States Federal Government on the one hand, subsidizes farmers and tries to keep food prices and demand for food high, on the other hand, issues food stamps to poor people because food prices are too high, and on the third hand, combats obesity through the National Institute of Health. The EU countries' governments give out large amounts of aid to poor countries, yet impose high tariffs on agricultural imports from them. Like Stanislaw Lem's King Murdas, they are examples of systems so large that their various parts have minds of their own, sometimes contradicting the minds of other parts.
Phil Eaton
122 reviews · 307 followers
March 19, 2019
Douglas Adams writes a book on complexity and failure.

In the top two books I've read in the last five years.
John Fultz
28 reviews · 5 followers
August 2, 2014
What a very odd book. The voice is incredibly serious, yet often with tongue planted firmly in cheek. The style reminds me a bit of The Dilbert Principle, but with less overt humor and more "wink-wink-nudge-nudge, but no, this is really serious".

Also, it's an old book, and it shows through the examples and footnotes. Many of them date to the 60s and the 70s. Although this printing was released in 2012, there's a lot of the previous decades leaking through.

All of that having been said, the principles seem fairly solid to me. The book warns firmly against optimism when dealing with systems, and advises pragmatic approaches which involve compromise. Several chapters go into some detail discussing software as a system, and I can certainly personally verify most of what was written there. And the many anecdotes are entertaining, if humbling.
Eric Franklin
79 reviews · 84 followers
February 6, 2019
Absurd, hilarious, and useful, this is a complete and creative toolkit for understanding and interacting with systems. Replete with humorous examples and rife with overt cynicism, it is a timeless portrait of the futility of engaging with our own creations.
Justus
182 reviews · 4 followers
November 21, 2010
Tries unsuccessfully to be flip and isn't very insightful, but it's a quick read with an interesting mix of mind-tickling maxims.
43 reviews · 1 follower
June 6, 2016
Funny at times, but I'm not sure there was actually much I could take away from it. I did like the use of very short chapters.
Дмитрий Филоненко
88 reviews · 3 followers
November 18, 2025
Quite a weird book. I don't know how to rate it, so let it be a 3.
In the beginning I even felt outraged. Ridiculing everything is not a writing style I usually appreciate: far-fetched conclusions, dirty play with logic, overgeneralization on the edge of lying, etc. But some people here on Goodreads whom I respect had rated it quite highly, so I wanted to understand what was in it.
Then I looked through some comments (yes, not a very fair way to form an impression of a book) and found a hint there: don't take it too seriously. OK, I lowered my expectation bar and continued.

Basically, you are going to find a collection of so-called axioms. They cover the functioning of systems, their goals, people working in systems, errors in systems, their sensitivity (or insensitivity) to the world's signals, etc. By the way, "systems" here means any kind of system, essentially a solution to any kind of problem: technical systems, social ones, governments, constructions, defense, whatever else. The axioms look like these (they really are all in capitals): "NEW SYSTEMS GENERATE NEW PROBLEMS", "Systems Expand, and as they expand, they Encroach.", "COMPLEX SYSTEMS EXHIBIT UNEXPECTED BEHAVIOR", "SYSTEMS TEND TO OPPOSE THEIR OWN PROPER FUNCTIONS", "PEOPLE IN SYSTEMS DO NOT DO WHAT THE SYSTEM SAYS THEY ARE DOING.", "THE SYSTEM ITSELF DOES NOT DO WHAT IT SAYS IT IS DOING", etc.
The second part of the book is devoted to recommendations for how to deal with systems. You're not going to find the same kind of thinking here as in, for example, Thinking in Systems by Meadows; here it is at a somewhat higher, more strategic level of abstraction. I found the recommendations quite interesting, at least worth thinking about. Some of them read more like Confucian wisdom, though: sit by the river and wait. That might sometimes be the only right solution, but somehow it felt like cheating (as an axiom, I mean). Some of these recommendations I have encountered recently in other books or contexts, e.g. reframing, an interesting practice. A new gem was the recommendation to see several systems as connected via shared elements: when a certain element is difficult to approach directly in the target system, it might be approachable through a connected system.
Prasanna
241 reviews · 17 followers
October 28, 2017
I read the book in one sitting on a Friday when I was taking a break from working on some annoying distributed-systems issues. It speaks to the timelessness of systems problems that a book first published in 1975 can have such an impact even today. There have been many systems theory books since this one, and I had just read John Miller's "A Crude Look at the Whole" and expected more of the same treatment. Boy, was I surprised! The book is broken into very small chapters, each essentially presenting an axiom about systems thinking along with anecdotes. The book feels friendly while also sounding serious and authoritative at times. It validated a lot of my own thoughts about heterogeneity lending stability to a system and about how complex systems with problems should be tackled. This should be required reading for anyone who wants to develop a systems thinking mindset.
66 reviews
January 14, 2023
Good nuggets about how systems work, or don't, at a macro level. Lots of food for thought on systems I encounter at work (the company as a system, the industry as a system) or in our daily lives. My simplified summary: complex systems fail in unexpected ways (in fact, they mostly exhibit failures, and you should just accept that); systems tend to preserve their own existence even when they're bad or no longer needed; and you need an outsider's view to evaluate them properly, since systems create their own internal rewards to keep you hooked.

The material is quite dry and at some point in the latter half it becomes self-referential. I was relieved when the Appendix section started about 60% into the book and I could move on to the next book.

I was slightly put off by the self-aggrandizing writing style and the unexpected and frequent capitalization of many words as if they’re axioms I’m supposed to memorize. That gives the book a vibe of a cultish/religious book I’m supposed to believe and not scrutinize.
Alexander Yakushev
49 reviews · 38 followers
March 21, 2018
However satirical, this book presents hard questions and no easy answers. It is a very humbling experience that makes you rethink your approach to solving problems (and whether what you do solves them at all).
Barack Liu
596 reviews · 20 followers
October 13, 2025

590-The Systems Bible-John Gall



The Systems Bible, first published in 1977, is a critique of systems theory. It contains a statement known as Gall's Law.

John Gall was born in 1925 in the United States. He was an American author, scholar, and pediatrician, best known for his 1975 book General Systemantics: An Essay on How Systems Work, and Especially How They Fail…

Table of Contents
Part One: Basic Theory
A. The Mysterious Ways of Systems
B. Inside and Outside
C. Function and Failure
D. Communication Theory
Part Two: Applied Systemantics
A. Systems and Self-Defense (Meta-Strategies)
B. Practical Systems-Design
C. Management and Other Myths
D. Intervention

When I was working at Microsoft, I had a mentor. We didn't have a direct reporting relationship, and we weren't even in the same city: I was in Shanghai, he was in Beijing. We chatted at an event and hit it off unexpectedly. I later offered to learn from him, and he became my mentor. Later, when he was in Shanghai on business, he'd let me know, and we'd meet for lunch or dinner. A while ago, he was in Seattle for a meeting, and we talked again. He recommended three books on systems thinking, one of which was "The Systems Bible." The book begins with the idea that whenever humans design a system to "solve a problem," it inevitably changes the existing environment and creates new problems. For example, building a garbage collection system may ostensibly reduce waste, but it also requires money, manpower, and energy for the collection, transportation, and incineration of the waste. We may have solved an old problem, but in turn, created a new one. To improve a process, we introduce new tools; to make the system more stable, we add more detection mechanisms. The author reminded me that from the moment a system is introduced, it ceases to be an "external solution" and becomes part of the problem itself. This led me to ask myself: When I design a system, am I humble enough to acknowledge that I'm not creating perfect order and am inevitably introducing new problems? Perhaps true wisdom lies not in creating a flawless system, but in constantly recognizing its side effects and coping with them at limited cost. Human development may seem unstoppable, but in reality, we're constantly indebted to nature. Energy, minerals, water: for now, no one is demanding payment, leading us to mistakenly believe they're "free." But when nature strikes back, will we remember the original system design? The terrifying thing about systems thinking is that it reminds us that every solution is a rose with thorns. It's beautiful, but it must be handled with care.

The author proposes a law: "People in systems do not do what the system says they are doing." Take shipbuilding, for example: the more complex the system, the less the participants feel they are actually building the ship itself, and the more they feel they are doing things "related to shipbuilding but not equivalent to shipbuilding." If you were to build a small boat from scratch in your own backyard, you would measure the wood, saw the boards, polish, assemble, and launch it yourself. You would be in control of the entire process. Although the boat is small, you can truly say that you are "building a ship." In a massive shipyard, the tonnage of the ship can be enormous and the collaboration precise, but most positions involve only drawing a design, installing a component, or performing a single test. The discrepancy between the system's stated goals and people's daily behavior is the inevitable price of a detailed division of labor. When individuals mechanically submit forms or follow scripts within a vast system, the causal connection between the final outcome and the work itself is weak. This can easily lead us to find our work boring; we might even mistake "completing tasks in compliance" for "accomplishing our mission." However, when you can walk through a complete chain from start to finish, from requirements clarification to prototype verification, from material selection to final assembly and launch, your sense of meaning is completely different, because you can see the impact each step has on the whole. Without division of labor, how can we build behemoths that transcend individual limitations? If only division of labor exists, how can we avoid reducing people to mere tools, only able to tighten the same screw? Perhaps the answer lies not in denying division of labor, but in repairing the chain of meaning: giving roles context and reconnecting actions with goals.
Even if responsibilities are narrow, through rotation, task splicing, or minimal closed loops, people can periodically complete a "small but complete" shipbuilding project. To use a more homely analogy: if we were simply chopping vegetables in the kitchen, over time, we might grow tired of the tedium. But when you manage a meal from prep to cooking, from plating to serving, you naturally develop a sense of unity between flavor, heat, and rhythm. So I asked myself again: Is what I call "boredom" really the boredom of the work itself, or is it that the causal chain between me and the outcome has been severed? Can I polish even a small process into a closed-loop system, so that "what I do within the system" is closer to "what the system claims I'm doing"?

"A big system either works on its own or it doesn't; if it doesn't, you can't make it." This means that a complex system either operates on its own or it doesn't; you can't force it to run. I'm currently working on a visual novel about Chinese emperors. When we look at emperors, we may find that while they appear to possess supreme power, in reality it's a vast bureaucracy that truly keeps the empire running on a daily basis. The emperor can certainly issue decrees and use his authority to enforce reforms, but if the entire system—civil servants, military commanders, and local institutions—is unwilling to cooperate or unable to operate efficiently, then even if the orders are carried out, they often fall short of the intended results. Therefore, the "self-operation" of a system is crucial. The same is true of our everyday phones, computers, and cars. While they appear to be working for you, they are actually composed of thousands of components, software, and algorithms working in concert. You can certainly tweak your computer and reboot the system, but if a core module is completely broken, you can never "restore" its overall operation. A system that can't sustain itself is like a tree without roots: you can water it endlessly, but it will never bear fruit. Conversely, when a system operates smoothly, it can continue to create value even without supervision. You see, a truly healthy team doesn't rely on a boss's constant watchful eye; a good software architecture doesn't rely on programmers constantly putting out fires. This led me to wonder: isn't the ultimate goal of designing a system to enable it to "run" on its own? Great systems aren't "pushed" but "cultivated." Like a tree, a system doesn't grow by being pulled; it is slowly formed by soil, water, and sunlight.

When designing a system, another issue needs to be considered: communication. Humans seem to be born with a certain arrogance. We often overestimate our ability to understand the world, especially our confidence that others understand us. I used to be a typical example. I always thought I was eloquent and could chat with anyone, as if "being able to speak" equaled "being able to communicate." It wasn't until later that I realized this confidence was actually a blind spot. True communication isn't about the words being spoken, but about whether they "reach" the other person and have a behavioral impact. In other words, if you say a lot but nothing changes their thinking, emotions, or actions, then the communication is essentially a failure. After realizing this, I asked myself: What is the purpose of communication? Is it to make the other person understand me, or to move them to action? Without behavioral feedback, our words are just vibrations in the air. Communication between people is essentially like the transmission of signals between two systems. You input a command, but the system may output something completely unexpected—because you thought you expressed it clearly, but in fact the signal was distorted along the way. This feeling has been particularly pronounced recently when I've been programming with AI. Interacting with AI is far more like "communication" than "operation", much more so than I'd imagined. Traditional tools, like phones and cars, are like stones: you tell them what to do, and they do it mechanically, without any issues of understanding or misunderstanding. AI, however, is different. It's like a clever but temperamental cat. You can train it, coax it, scold it, and guide it, but you can never fully control it. Different models are like cats' personalities: some are brilliant and insightful, while others are clumsy and dull. Can you blame them for being disobedient?
Perhaps it's because you haven't clearly articulated your ideas, or perhaps you haven't thought them through clearly yourself. For example, sometimes when I write prompts, I don't even understand the underlying logic, yet I expect the AI to give a perfect answer. This is actually ridiculous. If it gets it right, it's a testament to the model's strength; if it gets it wrong, it's more in line with common sense. This reminds me of human communication—how many families and couples never truly communicate. Parents assume their children understand their efforts, children assume their parents will never understand them, and lovers believe silence is a form of tacit understanding, only to find themselves each in a separate echo chamber. Without genuine information exchange between systems, there's no way for them to collaborate. You can fully grasp the properties of a stone, but when you're dealing with an intelligent being—whether a cat, dog, human, or AI—you can no longer treat it the same way you would a stone. It's no longer a question of control, but of understanding. So I ask myself: Am I truly communicating? Or am I just venting? Am I willing to admit that I don't actually know how to be understood? Perhaps true communication isn't about teaching someone to understand, but about learning to speak anew with humility—like someone learning a language for the first time, cautiously, clumsily, yet sincerely approaching another system.

"No problem." The first step in solving a problem is recognizing its existence. People often say that recognizing a problem is half the solution, but I don't think that's accurate enough. The real key lies in whether you can clearly see where the problem lies. I've experienced this myself: sometimes when I'm with someone, the relationship suddenly becomes cold, and I'm confused and bewildered. Later, I realize that it was something I unintentionally said or did that hurt them. If I can't even notice this, how can I possibly repair the relationship? Therefore, true awareness isn't knowing there's a problem, but understanding its root. And the root of the problem often lies not in external circumstances, but within myself. I used to complain about unfair circumstances, poor opportunities, and people not understanding me. But gradually, I began to question myself: Is it that I don't know how to love others? Is it that I don't spend enough time researching and thinking before doing something? Is it simply because I'm not truly passionate about what I'm doing that I can't give my all? When I reflected on this, I suddenly felt a sense of relief—the problem wasn't in the outside world, but in the hidden areas within myself. The second principle that got me thinking was "Don't try to get rid of"—don't rush to get rid of problems. We often subconsciously want to "get rid of" frustrations, insecurities, and flaws. However, the author reminded me that a true system isn't one without problems, but one that can function within them. The same is true for people. Some people create admirable works through extreme obsession and near-insane perfectionism, but they are, after all, a rare breed of genius. For the average person, perhaps it's more important to first learn to coexist with imperfection. Get the system running first, then slowly make corrections, rather than getting stuck in a "black-or-white" dilemma.
I began to think that perhaps "balance" is even rarer than "perfection." Once you have sufficient understanding and experience, then pursue perfection. Perhaps then you'll understand the price of perfection. The author's third insight is "Information, please"—information is crucial in decision-making. Faulty systems often stem from "assumptions." I thought back to filling out my college entrance exam application: from knowing my scores to completing the application process, there was only a week. How could we choose our life's direction back then? Did we truly understand the university? Had we even been there? Do you even know what that major entails? We hadn't even flipped through the course materials, yet we were expected to decide the next four years of our lives in just seven days. Looking back, it seems almost absurd. Why do people make decisions without sufficient information? Perhaps it's laziness, impatience, or a false sense of "I know." This kind of presumptuous confidence is the root cause of failure. Making decisions without information is like building a machine with your eyes closed: problems are inevitable. However, there's another pitfall to be wary of: information overload. When we become obsessed with collecting information and analyzing data, hesitating to make decisions, the system can become stagnant. So, when should we stop collecting and start taking action? Perhaps this is another manifestation of systems intelligence: finding that fine line between ample information and decisive action. The book doesn't provide an answer, but it leaves me with a question: In facing a complex world, what rhythm should I learn—when to stop thinking and when to start acting?

"When you want to solve a problem, first consider whether it can be solved with an existing system." This sentence reminds me of the first step I took in graduate school: literature research. Why research? Because before you begin, you should first see if others have already solved the same problem. Perhaps the "innovative solution" you've racked your brains to come up with has already been done by others, and even better. The common saying "don't reinvent the wheel" actually holds this true. Creating new systems isn't necessarily a sign of wisdom; often, it's just an excuse for not investing the time to understand existing systems. Systems aren't easily replaceable. They're like trees: to plant a new one, you have to uproot the old one, and those roots are often deeply embedded in the soil. "Do it with a small system if you can." Don't design a large system when a small system can solve the problem. The wisdom of Occam's razor shines again here: shave off the unnecessary. While this statement sounds simple, it's often difficult to implement. Especially in the age of AI, where the cost of adding features is decreasing, we seem to have a constant urge: if we can, why not? Thus, a once simple system grew increasingly complex, until one day it collapsed, and we realized the problem wasn't "not enough features" but "too many features." I encountered this pitfall while working on my own visual novel generator project. Initially, I ran the program in the terminal, but later, for convenience, I moved to the web to reduce user input. While seemingly progressive, it actually left behind many remnants of the old system. Some code remnants, like ghosts, would occasionally pop up and interfere with the external version. I thought I had completely deleted it, only to have new bugs pop up. "Taking down is often more tedious than setting up"—uninstalling is more difficult than installing. Installation is like sowing a seed: a single idea can spawn countless branches. 
Expanding from one to many is easy; uninstalling is like pruning: going from many back to one, or even to zero, is the real challenge. This is true in coding, and in life. Subtraction is much harder than addition. When we're young, we always want to learn more, meet more people, and do more things, as if more meant fulfillment. But as I've grown older, I've gradually come to appreciate a different kind of wisdom: learn to reduce. Delete unnecessary commitments, drop draining relationships, and cut tasks that seem important but only cause anxiety. This applies to system design, and to life as well. Adding brings excitement; removing brings maturity.

The "Potemkin Village Effect" has a rather ironic historical story behind it. In the 18th century, the Russian Empress Catherine II toured the newly conquered Crimea. To win her favor, the regional governor Grigory Potemkin, her favorite, ordered rows of freshly painted "prosperous villages" built along her route: neatly dressed farmers and markets overflowing with food, a picture of plenty. In reality these "villages" were temporary sets; once the Empress's ship departed, they were quickly dismantled and moved to the next location for a repeat performance. Similar "superficial prosperity" easily arises in system design: a seemingly perfect but lifeless fake system is constructed for inspection, demonstration, and reporting. This kind of facade engineering is often extremely costly, because it blocks the generation of real feedback: we mistakenly believe a system is performing well when in reality it is one step away from collapse. The author mentions the "Face-of-the-Future Theorem": "When dealing with the shape of things to come, it pays to be good at recognizing shapes." What does this mean? It means that before a system even takes form, you must learn to recognize "good shapes": a reasonable structure, a realistic path, a direction that accords with long-term principles. Otherwise, you'll easily be misled by temporary "beautiful curves" and false prosperity. My professor has repeatedly emphasized a similar point: before starting any research project, first define "what constitutes a good outcome" and write it down, rather than relying on intuition. At the time I didn't quite understand, thinking it too formal. But later I realized that if you don't even have a clear idea of "success," then success becomes an illusion. Just like in relationships: if you don't have a clear definition of friends
Ganesh Babu
3 reviews · 4 followers
October 21, 2023
Super insightful. I think I’ll come back to this again and again.
Ben
179 reviews
October 30, 2025
This should be mandatory reading for anyone running an organization. I wouldn't say this is for beginners, as some of the concepts take quite a bit of brainpower to wrap your head around, but overall it's a really useful book.
Leo
341 reviews · 26 followers
January 18, 2023
Even though the book has some interesting (albeit not super-original) insights, its ironic tone sometimes harms the message, as I feel the author relies on anecdotes to prove system failures rather than providing better evidence. Also, it sometimes feels like the author deliberately misrepresents the goals of some systems in order to show their failures.
To those interested in the general "systemantics" topic I'd much more recommend "Thinking in Systems" by Donella H. Meadows.
Bob
45 reviews
January 21, 2020
Meh...

As a systems thinker and fan, I didn't find much to recommend it.

My main takeaway: complex systems do what they're gonna do, and it's not what you want.

Abandoned two-thirds of the way through, but definitely feeling done with it.
39 reviews · 1 follower
October 8, 2018
Things are not working, that much we agree about. For me, these 'things' are mostly software systems which drive me closer to the edge of insanity the more I work with them, and for the author of this book it's mostly human organizations. The arguments of the book are supposed to apply to all systems, so we can ignore this minor difference. Who or what is to blame for this state of affairs? Or even better, how can we navigate it? The culprit, according to the author, is the systems that we see everywhere, enjoined and entangled, permeating our lives, and pulling us in all directions. The solution proposed by the author is to be suspicious of systems, and know their tricky ways ("We must learn to live with systems, to control them lest they control us", p. xiii). An extensive definition of a system is not attempted by the author, rather wisely, because this gives the reader the drive to look for systems in their own life. There is one roundabout way of defining systems towards the end of the book which I found rather enlightening, however: "The system represents someone's solution to a problem. It does not solve the problem" (p. 74). More than systems per se, the author targets what he calls systemism, the vacuous belief in the fundamental usefulness of systems, and the book aims to cast it out of the reader by succinct statements of the flaws of systems. The book is written in a tongue-in-cheek style. The author makes claims of systematicity, and orders his argument in axioms that should somehow lead up to a consistent theory, but that's not really the case. This doesn't mean that the book is inconsistent, however. The axioms serve to deconstruct systemism, and show, in the process, how systems not only fail to function as advertised, but also blind us to their failure.

The argument starts off with a statement of the singularity of systems: All systems exhibit system behavior, and when we have a new one, we will have to deal with its problems, just like with all the others. Systems also have a will to survive and grow. Based on work on the size of administrative systems, the author (rather informally) judges the annual growth of systems to be in the order of 5 to 6 percent per year. This despite the fact that systems do not produce the results we expect of them. In fact, we cannot even project what kind of behavior complex systems will exhibit. This is due to what the author calls the Generalized Uncertainty Principle (GUP): "Complex systems exhibit unexpected behavior" (p. 19). The simple and straightforward explanation for the GUP is that no one can understand the real world well enough to predict the outcome of complex processes. This is a fact that has been internalized in parts of the startup community with the principle of experimenting and failing fast to find out what is working, instead of just guessing. One obvious trick to beat the GUP is taking a simple system that is well understood, and making it bigger. This will not work, however, which I have had the chance to observe personally a number of times, and the author agrees: "A large system, produced by expanding the dimensions of a smaller system, does not behave like the smaller system".

The next axiom, called Le Chatelier's principle, is also something I have had the misfortune of observing: "Systems tend to oppose their own proper functions" (p. 23). That is, systems, through ill-defined proposals for improvement, will install procedures that nominally aim at improvement but bind people and resources in what the author calls Administrative Encirclement. The example the author gives is from an academic setting, but every software developer knows the dread of having to attend meetings that are supposed to improve efficiency, but accomplish nothing other than holding people back from working.

The result of the GUP and Le Chatelier's principle is that systems do not perform the function a similar system of smaller size would perform. The relationship is in the opposite direction: The system is not defined by the function, but the function by the system: "The function or product is defined by the systems-operations that occur in its performance or manufacture" (p.36). But if systems are inefficient, and producing the wrong thing at the same time, how come they don't self-correct? To do so, a system should have the capacity to perceive the situation, which is not the case. For systems, "The real world is what is reported to the system" (p. 39). I think this is one of the most striking lessons of this book. It is astonishing how distorted the views of people working within a company (especially in higher positions) can become, and this precludes making any significant changes in the company. This is also one of the most significant parallels between human and software systems. A software system is also as good as its fidelity to the real world. Some systems keep on running for a long time as their records of reality and the real world drift apart. The author also has a really nice name for the ratio of reality that reaches administration to the reality impinging on the system: Coefficient of Friction.

What about intervening into systems as an outsider? Can one judge their behavior from the outside, without the tinted glasses with which the system sees the world, and improve the system? According to the author, this is possible only if the system already worked at some point. One cannot build a complex system and expect it to work; this is not possible. A working complex system can be achieved only by starting with a small system, and growing it. This is another parallel to software development: Large systems that are designed without one line of code being written, and are then built by separate teams, face huge problems when the time for integration comes. This insight has led to the agile movement, which aims for always working software that is integrated as frequently as possible. What's more, software teams also face a similar issue. Gathering a large number of developers together, and telling them to build a specific thing does not work either. The best approach is to start with a relatively small team, see what works and what doesn't, establish a culture, and grow around them.

How systems deal with errors (or fail to do so) is one of the most relevant parts of the book for modern technological systems. Due to the fact that systems tend to grow and encroach, they will have an infinite number of ways in which they can fail. These ways, and the crucial variables which control failure, can be discovered only once the system is in operation, since the functionality of a complex system cannot be deduced from its parts, but only observed during actual functioning. These points lead to the conclusion that "Any large system is going to be operating most of the time in failure mode" (p. 61). It is therefore crucial to know, and not dismiss as merely exceptional, what a system does when it fails. This is not so easy, however, since as per the coefficient of friction, it is difficult for a system to perceive that it is working in error mode, which leads to the principle that "In complex systems, malfunction and even total nonfunction may not be detectable for long periods, if ever" (p. 55).

If it is so difficult to design systems that work, and keep them working as they grow, how are we supposed to live with them? The obvious step is to avoid introducing new ones, that is, "Do it without a system if you can" (p. 68), because any new system will bring its own problems. If you definitely have to use a system, though, the trick is to design the system with human tendencies rather than against them. In technological systems, this is understood as usability. Another principle is to design systems that are not too tightly coupled in the name of efficiency or correctness. This is stated as "Loose systems last longer and function better" (p. 71).

After reading the book, and writing this review, I have only one question in my mind: Why does this book exist? How can it exist? How can it be that so many mind-blowing insights about technological systems were derived by an MD, and recorded in an obscure 80-page book sometime in the seventies? And which other books exist out there that are as good as this one, and are not yet discovered?
Tony
103 reviews
August 1, 2015
I read this book on a Saturday afternoon. Small book, amusing writing, easy to follow.

This book was published in 1975. Don't be surprised if some of the examples, and some of the language, are somewhat dated.

The author attempts to be both amusing and academic in his approach. I find most academic writing to be dry and overly intellectual. While the intellectual aspects of this book annoyed me to some degree (otherwise it would have 5 stars) the humor does shine through.

What are the common characteristics of systems? Keep in mind: machines are systems, as are electronic devices, as are organizations of people, as are computer programs. What things can we say about ALL of these things?

First off: have you ever noticed that systems tend to fail? So much, in fact, that we consider that normal? So much so that we have a common acronym for when it does (SNAFU; Situation Normal All F***ed Up)? Systems tend to spend more time in "failure mode" than in proper working order. And attempts to remedy the situation, usually by making the system more sophisticated, only increase the probability that it will be in failure mode at any given time.

As someone who creates systems for a living (I'm a programmer), I tend to think that just a few more tweaks to my shiny new system is all that is needed to get it working properly, reliably. Instead, I need to be designing systems to be as easy as possible to clean up after when they do fail and the consequences of said failure need to cause as little pain as possible. Because the only thing you can count on is that it WILL fail, at least part of the time.

Right. I needed a book to tell me this? It's full of things which should be "duh!" And, in retrospect, are. But going in, you will probably find yourself with a lot of "AHA!" moments.

Spend an hour or two with this one. You won't regret it. You'll probably have a couple laughs. And sometimes, we just need to have those things we've been feeling, down in our bones, publicly stated.

Never forget. The hot air balloon was invented by a couple paper makers (Montgolfier brothers). And the airplane was invented by a couple bicycle makers. Not by some organization.
212 reviews · 10 followers
November 1, 2014
Not what I expected, but still very relevant. I expected something very academic and mathematical. The author claimed many times that his principles were "axioms", and that they were pristinely mathematical in nature and all self-evident. This was a rather annoying claim, since the book was not mathematical at all, nor were the axioms necessarily self-evident (though good supporting examples were provided). Despite this, it all still rings perfectly true. A system can be a blessing or a curse, but it is guaranteed to have unexpected behavior. When it does something bad, you'd better hope that your system is flexible, changeable, and somehow objectively monitorable, and that it doesn't completely dominate everything and allow only positive feedback.
The style is kind of a mix of Taoist philosophy, design, a tiny bit of math (really, barely any), common sense, self-improvement, endearing Latin textbook, and more. Lately I'd been thinking about all of the generalizable things I'd learned from programming (and especially my strengthened dislike of large bureaucracies): that things need to be flexible and interchangeable, testable and constantly tested, designed with user experience in mind, tested before deployed, etc. All of that is abstracted out of software design and into the real world with this book, which is really quite phenomenal. Software happens to be just one type of system.
Taylor Pearson
Author · 4 books · 755 followers
January 28, 2019
Complex systems are one of my favorite subjects and The Systems Bible is a great entry in the genre.

Simple systems are a sum of their parts: a bike is just a bunch of parts. If you take a wheel off and replace it with another, no big deal.

In a complex system, the whole is greater than the sum of its parts. If you take the heart out of a horse and then replace it a few hours later, it doesn’t start working again like a bike. This does not mean we can’t understand complex systems, only that they play by a different rulebook which The Systems Bible attempts (and largely succeeds) at capturing with quotes like:

“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.”
and

“SYSTEMS TEND TO MALFUNCTION CONSPICUOUSLY JUST AFTER THEIR GREATEST TRIUMPH:
Toynbee explains this effect by pointing out the strong tendency to apply a previously-successful strategy to the new challenge:
THE ARMY IS NOW FULLY PREPARED TO FIGHT THE PREVIOUS WAR
For brevity, we shall in future refer to this Axiom as Fully Prepared for the Past (F.P.F.P.)”
Raziel
Author · 1 book · 1 follower
November 9, 2016
"we humans tend to forget inconvenient facts, and if special notice is not taken of them, they simply fade away out of awareness", p. xx.

"The reader is hereby warned that any such optimism is the reader's own responsibility", p. xxi

"Error is our existential situation and that our successes are destined to be temporary and partial", p. xxv.

Efficiency Expert: Someone who thinks s/he knows what a given System is, or what it should be doing, and who therefore feels s/he is in a position to pass judgment on how well the System is doing it. At best a nuisance, at worst a menace, on certain rare occasions a godsend. (p. 237)

Expert: A person who knows all the facts except how the System fits into the larger scheme of things. (p. 237)

Specialist: One who never makes small mistakes while moving toward the grand fallacy (McLuhan). (p. 240)

Systems-person: For purposes of recognition, a Systems-person is someone who wants you to really believe or (worse) really participate in their System. (p. 241)

System: A set of parts coordinated to accomplish a set of goals. Now we need only to define "parts", "coordinated", and "goals", not to mention "set". (p. 240)
89 reviews
June 7, 2016
This book was first published in 1975 and has gone through several printings.
It is a serious book that sometimes masks its points with humor. The general theory supported in the book is that "Systems in general work poorly or not at all". Two representative corollaries of this theory are "Large systems usually operate in failure mode" and "The system tends to oppose its own proper function." The strength of the book is its examples of real-world system behaviour, ranging from the administrative to the technical. Human failure, while a part of many systems, is not claimed as the underlying cause; instead it is suggested that the observed difficulties are intrinsic to the system's operation. This theory is hard to validate given our current focus on human error in design and operation. Difficulties notwithstanding, it appears others have attempted to carry this work forward. I look forward to reading further in Systemantics in case hope is found for human intervention.
1 review
May 27, 2008
I would like this book to be required reading for all high school or college students. It would help dispel the unhealthy, now-widespread blind faith in "systems." To paraphrase the author: A large system (Congress, for example) never does what it says it does. Large systems have their own goals.

"The Systems Bible" is written for the layperson. It is very witty and full of usable wisdom.
Lou Cordero
129 reviews · 1 follower
September 11, 2014
The copy I read is subtitled "How systems work and especially how they fail". A wonderful, easy read that sheds light and humor on the development of complex systems and the impossibility of solving a problem correctly and completely. I recommend this book to anyone involved in the design of complex systems.
John
84 reviews · 10 followers
June 18, 2017
A sardonic overview of the systems theory. Gives plenty of advice on recognizing failure modes of systems. Sadly, I recognize many of them from experience. Also gives advice on building systems, e.g., don't, or modify an existing system (and be prepared for unintended consequences), or at the very least build a small & loose system of modest ambition. Did I mention the book is sardonic?
Julissa Dantes-castillo
391 reviews · 26 followers
September 27, 2023
I appreciate the author's enjoyable approach to the book, as it serves as an exemplar of maintaining conciseness and precision in its discussion of the subject matter. It refrains from including unnecessary supplementary examples and remains focused on only those that are truly necessary. Overall, it is a really good book.
Displaying 1 - 30 of 122 reviews
