590 - The Systems Bible - John Gall
Barack
October 12, 2025
The Systems Bible, the later, expanded edition of Systemantics (first published in the 1970s), is a critique of systems theory. It contains the statement known as Gall's Law.
John Gall was born in 1925 in the United States. He was an American author, scholar, and pediatrician, best known for his 1975 book, General Systemantics: An Essay on How Systems Work, and Especially How They Fail…
Table of Contents
Part One: Basic Theory
A. The Mysterious Ways of Systems
B. Inside and Outside
C. Function and Failure
D. Communication Theory
Part Two: Applied Systemantics
A. Systems and Self-Defense (Meta-Strategies)
B. Practical Systems-Design
C. Management and Other Myths
D. Intervention
When I was working at Microsoft, I had a mentor. We didn't have a direct reporting relationship, and we weren't even in the same city: I was in Shanghai, he was in Beijing. We chatted at an event and hit it off unexpectedly. I later offered to learn from him, and I became his mentee. Afterwards, whenever he was in Shanghai on business, he'd let me know, and we'd meet for lunch or dinner. A while ago, he was in Seattle for a meeting, and we talked again. He recommended three books on systems thinking, one of which was "The Systems Bible." The book begins with the idea that whenever humans design a system to "solve a problem," it inevitably changes the existing environment and creates new problems. For example, building a garbage collection system may ostensibly reduce waste, but it also requires money, manpower, and energy, along with infrastructure for collecting, transporting, and incinerating that waste. We may have solved an old problem, but in doing so we have created new ones. To improve a process, we introduce new tools; to make the system more stable, we add more detection mechanisms. The author reminded me that from the moment a system is introduced, it ceases to be an "external solution" and becomes part of the problem itself. This led me to ask myself: When I design a system, am I humble enough to acknowledge that I'm not creating perfect order and am inevitably introducing new problems? Perhaps true wisdom lies not in creating a flawless system, but in constantly recognizing its side effects and coping with them at limited cost. Human development may seem unstoppable, but in reality we're constantly indebted to nature. Energy, minerals, water: for now, no one is demanding payment, leading us to mistakenly believe they're "free." But when nature strikes back, will we remember the original system design? The terrifying thing about systems thinking is that it reminds us that every solution is a rose with thorns: beautiful, but to be handled with care.
The author proposes a law: "People in systems do not do what the system says they are doing." Take shipbuilding, for example: the more complex the system, the less the participants feel they are actually building the ship itself, and the more they feel they are doing things "related to shipbuilding but not equivalent to shipbuilding." If you were to build a small boat from scratch in your own backyard, you would measure the wood, saw the boards, sand, assemble, and launch it yourself. You would be in control of the entire process. Although the boat is small, you could truly say that you are "building a ship." In a massive shipyard, the tonnage of the ship can be enormous and the collaboration precise, but most positions involve only drawing a design, installing a component, or performing a single test. The discrepancy between the system's stated goals and people's daily behavior is the inevitable price of a detailed division of labor. When individuals mechanically submit forms or follow scripts within a vast system, the causal connection between the final outcome and their own work is weak. This can easily lead us to find our work boring, and we might even mistake "completing tasks in compliance" for "accomplishing our mission." However, when you can walk through a complete chain from start to finish, from requirements clarification to prototype verification, from material selection to final assembly and launch, your sense of meaning is completely different, because you can see the impact each step has on the whole. Without division of labor, how could we build behemoths that transcend individual limitations? But with division of labor alone, how can we avoid reducing people to mere tools, able only to tighten the same screw? Perhaps the answer lies not in denying division of labor, but in repairing the chain of meaning: giving roles context and reconnecting actions with goals. Even if responsibilities are narrow, through rotation, combining tasks, or minimal closed loops, people can periodically complete a "small but complete" shipbuilding project. To use a more everyday analogy: if we were simply chopping vegetables in a kitchen, over time we might grow tired of the tedium. But when you manage a meal from prep to cooking, from plating to serving, you naturally develop a sense of how flavor, heat, and rhythm fit together. So I asked myself again: Is what I call "boredom" really the boredom of the work itself, or is it that the causal chain between me and the outcome has been severed? Can I polish even a small process into a closed loop, so that "what I do within the system" is closer to "what the system claims I'm doing"?
" A big system either walks on its own, or if they don't, you ca n't make them." This means that a complex system either operates on its own or it doesn't; you can't force it to run . I'm currently working on a visual novel about Chinese emperors. When we look at emperors, we may find that while they appear to possess supreme power, in reality, it's a vast bureaucracy that truly keeps the empire running on a daily basis. The emperor can certainly issue decrees and use his authority to enforce reforms, but if the entire system—civil servants, military commanders, and local institutions—is unwilling to cooperate or unable to operate efficiently, then even if the orders are carried out, they often fall short of the intended results. Therefore, the "self-operation" of a system is crucial. The same is true of our everyday phones, computers, and cars. While they appear to be working for you, they are actually composed of thousands of components, software, and algorithms working in concert. You can certainly tweak your computer and reboot the system, but if a core module is completely broken, you can never "restore" its overall operation. A system that can't sustain itself is like a tree without roots: you can water it endlessly, but it will never bear fruit. Conversely, when a system operates smoothly, it can continue to create value even without supervision. You see, a truly healthy team doesn't rely on a boss's constant watchful eye; a good software architecture doesn't rely on programmers constantly putting out fires. This led me to wonder, isn't the ultimate goal of designing a system to enable it to "run" on its own? Great systems aren't "pushed" but "cultivated." Like a tree, it doesn't grow by being pulled, but slowly formed by soil, water, and sunlight.
When designing a system, another issue needs to be considered: communication. Humans seem to be born with a certain arrogance. We often overestimate our ability to understand the world, and especially our confidence that others understand us. I used to be a typical example. I always thought I was eloquent and could chat with anyone, as if "being able to speak" equaled "being able to communicate." It wasn't until later that I realized this confidence was actually a blind spot. True communication isn't about the words being spoken, but about whether they "reach" the other person and have a behavioral impact. In other words, if you say a lot but nothing changes their thinking, emotions, or actions, then the communication has essentially failed. After realizing this, I asked myself: What is the purpose of communication? Is it to make the other person understand me, or to move them to action? Without behavioral feedback, our words are just vibrations in the air. Communication between people is essentially like the transmission of signals between two systems. You input a command, but the system may output something completely unexpected—because you thought you expressed it clearly, but in fact the signal was distorted along the way. This feeling has been particularly pronounced recently as I've been programming with AI. Interacting with AI is far more like "communication" than "operation," much more so than I'd imagined. Traditional tools, like phones and cars, are like stones: you tell them what to do, and they do it mechanically, without any issues of understanding or misunderstanding. AI, however, is different. It's like a clever but temperamental cat. You can train it, coax it, scold it, and guide it, but you can never fully control it. Different models are like cats' personalities: some are brilliant and insightful, while others are clumsy and dull. Can you blame them for being disobedient? Perhaps it's because you haven't clearly articulated your ideas, or perhaps you haven't thought them through yourself. For example, sometimes when I write prompts, I don't even understand the underlying logic myself, yet I expect the AI to give a perfect answer. This is actually ridiculous. If it gets it right, that's a testament to the model's strength; if it gets it wrong, that's only to be expected. This reminds me of human communication—how many families and couples never truly communicate. Parents assume their children understand their efforts, children assume their parents will never understand them, and lovers believe silence is a form of tacit understanding, only to find themselves each in their own echo chamber. Without genuine information exchange between systems, there's no way for them to collaborate. You can fully grasp the properties of a stone, but when you're dealing with an intelligent being—whether a cat, dog, human, or AI—you can no longer treat it the same way you would a stone. It's no longer a question of control, but of understanding. So I ask myself: Am I truly communicating? Or am I just venting? Am I willing to admit that I don't actually know how to be understood? Perhaps true communication isn't about teaching someone to understand, but about learning to speak anew with humility: like someone learning a language for the first time, cautiously, clumsily, yet sincerely approaching another system.
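To make the idea of "communication judged by behavior, not by words" concrete, here is a minimal, hypothetical sketch of how I might talk to a model: state up front what an understood answer should look like, then check the behavior of the reply rather than assuming it got through. Every name here is made up for illustration; call_model is a stub, not a real API, and the JSON format is just an agreed-upon convention for the example.

```python
# A hedged sketch: "communication" measured by behavioral feedback.
# call_model is a placeholder for whatever model client you actually use.
import json


def call_model(prompt: str) -> str:
    # Stubbed reply so the sketch runs on its own; a real client would go here.
    return '{"title": "The Last Edict", "scenes": 3}'


def ask_with_feedback(request: str) -> dict:
    # Say explicitly what "understood" should look like...
    prompt = request + '\nReply with JSON only, exactly: {"title": str, "scenes": int}'
    reply = call_model(prompt)
    try:
        data = json.loads(reply)
        if isinstance(data.get("title"), str) and isinstance(data.get("scenes"), int):
            return data  # the signal arrived intact and can change behavior
    except ValueError:
        pass
    # Words were exchanged, but nothing usable came back: the communication failed.
    raise RuntimeError("Reply did not match the agreed format; rephrase and try again.")


if __name__ == "__main__":
    print(ask_with_feedback("Outline one chapter of a visual novel about a Chinese emperor."))
```

The point of the sketch is not the JSON itself but the loop: communication only counts when the reply can be acted on, and when it can't, the failure is visible instead of assumed away.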
"No problem . " The first step in solving a problem is recognizing its existence . People often say that recognizing a problem is half the solution, but I don't think that's accurate enough. The real key lies in whether you can clearly see where the problem lies. I've experienced this myself: sometimes when I'm with someone, the relationship suddenly becomes cold, and I'm confused and bewildered. Later, I realize that it was something I unintentionally said or did that hurt them. If I can't even notice this, how can I possibly repair the relationship? Therefore, true awareness isn't knowing there's a problem, but understanding its root. And the root of the problem often lies not in external circumstances, but within myself. I used to complain about unfair circumstances, poor opportunities, and people not understanding me. But gradually, I began to question myself: Is it that I don't know how to love others? Is it that I don't spend enough time researching and thinking before doing something? Is it simply because I'm not truly passionate about what I'm doing that I can't give my all? When I reflected on this, I suddenly felt a sense of relief—the problem wasn't in the outside world, but in the hidden areas within myself. The second principle that got me thinking was "Don't try to get rid of"—don't rush to get rid of problems. We often subconsciously want to "get rid of" frustrations, insecurities, and flaws. However, the author reminded me that a true system isn't one without problems, but one that can function within them. The same is true for people. Some people create admirable works through extreme obsession and near-insane perfectionism, but they are, after all, a rare breed of genius. For the average person, perhaps it's more important to first learn to coexist with imperfection. Get the system running first, then slowly make corrections, rather than getting stuck in a "black-or-white" dilemma. I began to think that perhaps "balance" is even rarer than "perfection." Once you have sufficient understanding and experience, then pursue perfection. Perhaps then you'll understand the price of perfection. The author's third insight is "Information, please"—information is crucial in decision-making. Faulty systems often stem from "assumptions." I thought back to filling out my college entrance exam application: from knowing my scores to completing the application process, there was only a week. How could we choose our life's direction back then? Did we truly understand the university? Had we even been there? Do you even know what that major entails? We hadn't even flipped through the course materials, yet we were expected to decide the next four years of our lives in just seven days. Looking back, it seems almost absurd. Why do people make decisions without sufficient information? Perhaps it's laziness, impatience, or a false sense of "I know." This kind of presumptuous confidence is the root cause of failure. Making decisions without information is like building a machine with your eyes closed: problems are inevitable. However, there's another pitfall to be wary of: information overload. When we become obsessed with collecting information and analyzing data, hesitating to make decisions, the system can become stagnant. So, when should we stop collecting and start taking action? Perhaps this is another manifestation of systems intelligence: finding that fine line between ample information and decisive action. 
The book doesn't provide an answer, but it leaves me with a question: In facing a complex world, what rhythm should I learn—when to stop thinking and when to start acting?
"When you want to solve a problem, first consider whether it can be solved with an existing system." This sentence reminds me of the first step I took in graduate school: literature research. Why research? Because before you begin, you should first see if others have already solved the same problem. Perhaps the "innovative solution" you've racked your brains to come up with has already been done by others, and even better. The common saying "don't reinvent the wheel" actually holds this true. Creating new systems isn't necessarily a sign of wisdom; often, it's just an excuse for not investing the time to understand existing systems. Systems aren't easily replaceable. They're like trees: to plant a new one, you have to uproot the old one, and those roots are often deeply embedded in the soil. "Do it with a small system if you can." Don't design a large system when a small system can solve the problem. The wisdom of Occam's razor shines again here: shave off the unnecessary. While this statement sounds simple, it's often difficult to implement. Especially in the age of AI, where the cost of adding features is decreasing, we seem to have a constant urge: if we can, why not? Thus, a once simple system grew increasingly complex, until one day it collapsed, and we realized the problem wasn't "not enough features" but "too many features." I encountered this pitfall while working on my own visual novel generator project. Initially, I ran the program in the terminal, but later, for convenience, I moved to the web to reduce user input. While seemingly progressive, it actually left behind many remnants of the old system. Some code remnants, like ghosts, would occasionally pop up and interfere with the external version. I thought I had completely deleted it, only to have new bugs pop up. "Taking down is often more tedious than setting up"—uninstalling is more difficult than installing. Installation is like sowing a seed: a single idea can spawn countless branches. Expanding from one to many is easy; uninstalling is like pruning: going from many to one, or even back to zero, is the real challenge. This is true in coding, and in life. Subtraction is much more difficult than addition. When we're young, we always want to learn more, meet more people, and do more things, as if more means fulfillment. But as I've grown older, I've gradually come to appreciate a different kind of wisdom: learn to reduce. Delete unnecessary commitments, delete draining relationships, and delete tasks that seem important but cause you anxiety. This applies to system design, and to life as well. Adding brings excitement, while removing brings maturity.
The "Potemkin Village Effect," often translated into Chinese as "Potemkin Village Effect," has a rather ironic historical story behind it. In the 18th century, Russian Empress Catherine II toured the newly conquered Crimea. To curry her favor, her favorite, the local governor, Grigory Potemkin, ordered the construction of rows of freshly painted "wealthy villages" along the route. The images featured neatly-knit farmers and markets overflowing with food, creating a scene of prosperity. However, these "villages" were actually temporary sets. Once the Empress's ship departed, they were quickly dismantled and moved to the next location for a repeat performance. Similar "superficial prosperity" can easily occur in system design: a seemingly perfect but lifeless fake system is constructed for inspection, demonstration, and reporting purposes. However, this kind of "facade engineering" is often extremely costly because it blocks the generation of real feedback. We mistakenly believe that a system is performing well when, in reality, it is only one step away from collapse. The author mentions the "Face-of-the-Future Theorem"—"When dealing with the shape of things to come, it pays to be good at recognizing shapes." What does this mean? It means that before a system is even formed, you must first learn to recognize "good shapes"—a reasonable structure, a realistic path, a direction that conforms to long-term principles. Otherwise, you'll easily be misled by temporary "beautiful curves" and "false prosperity." My professor has repeatedly emphasized a similar point: before starting any research project, you must first define "what constitutes a good outcome" and write it down, not just relying on intuition. At the time, I didn't quite understand it, thinking it was too formal. But later I realized that if you don't even have a clear idea of "success," then success will become an illusion. Just like in relationships—if you don't have a clear definition of friends