Human intelligence is among the most powerful forces on earth. It builds sprawling cities, vast cornfields, coffee plantations, and complex microchips; it takes us from the atom to the limits of the universe. Understanding how brains build intelligence is among the most fascinating challenges of modern science. How does the biological brain, a collection of billions of cells, enable us to do things no other species can do? In this book John Duncan, a scientist who has spent thirty years studying the human brain, offers an adventure story—the story of the hunt for basic principles of human intelligence, behavior, and thought. Using results drawn from classical studies of intelligence testing; from attempts to build computers that think; from studies of how minds change after brain damage; from modern discoveries of brain imaging; and from groundbreaking recent research, Duncan synthesizes often difficult-to-understand information into a book that will delight scientific and popular readers alike. He explains how brains break down problems into useful, solvable parts and then assemble these parts into the complex mental programs of human thought and action. Moving from the foundations of psychology, artificial intelligence, and neuroscience to the most current scientific thinking, How Intelligence Happens is for all those curious to understand how their own mind works.
A good, well-written, intelligent read, at the tougher end of popular science. Although Duncan explains everything well, I found the book demands close attention. Rather like a mathematics puzzle, you need to follow the thinking all the way through to understand the later stages of the book. There is nothing too difficult, but it is quite dense at times, and reading it needs active engagement.
There is a lot that is tentative in this book, as should be expected from what really is still an emerging science (and how much has changed since this was published in 2010 is interesting in itself). Duncan is both confident and humble. He confidently states his own and other scientists' observations and findings, but he has the humility to be clear about the limits of the field and the work still to be done.
If you are interested in brain, mind, or intelligence, this is a worthy read.
A surprisingly clearly written book, given the author's line of research and publication history. A nice guided, but not over-simplified, tour of what Duncan believes qualifies as intelligence. Well grounded, although a quick look over the references doesn't show many past the year 2000, and much of the evidence discussed comes from tightly controlled clinical studies (a weakness freely acknowledged). In all, though, an excellent, entertaining, yet quite informative and technical piece.
The word "intelligence" comes from the Latin terms intelligentia and intelligere, meaning "to comprehend" or "to perceive". In the Middle Ages, the word intellectus was used to translate the Greek philosophical term nous and was linked with metaphysical theories in scholasticism, such as the immortality of the soul. But early modern philosophers like Bacon, Hobbes, Locke, and Hume rejected these views, preferring "understanding" over "intellectus" or "intelligence". The term intelligence is now used more in psychology than in philosophy. Conceptually, intelligence is often identified with the effective and practical application of knowledge, drawing on a combination of cognitive skills that enable individuals (and, by extension, animals and machines) to navigate and make sense of the complexities of their worlds. It is generally understood as the capacity that enables learning from experience, applying reasoning, solving problems, thinking in abstract terms, and adapting effectively to new and changing situations. This capacity is naturally endowed in a living being or can be imparted to an automaton by some mechanistic process.
Intelligence is not a "hard" problem like consciousness, but part of its mystery lies in the fact that it extends beyond the human mind and can be artificially instantiated, as in an AI. It is still a difficult concept to understand, as it has many sides to it. The first is the difference in intelligence from one human to another. We tend to call anyone who is successful, effective, and resourceful intelligent, while anyone we dislike is generally given an antonym of intelligent, such as stupid or dull. Many psychologists have spent their lifetimes trying to explain the essence of intelligence; foremost among them was Charles Spearman, who in the early part of the twentieth century used correlation to try to explain it.
Spearman's theory suggests that any mental ability or achievement is influenced by two types of factors: a general intelligence factor, known as "g", which affects overall performance across tasks, and specific factors, called "s", which affect particular skills or talents such as music or painting. Each person's success in an activity depends on their level of both "g" and "s". While people with high "g" tend to perform well broadly, those with strong "s" excel in specific areas and become great painters or musicians. Spearman ran many experiments correlating performance across different kinds of activity, and thousands of similar tests have been performed by later psychologists using every possible variety of task, including vocabulary, logical skills, and route finding; the results have always been the same. The theory explains why people generally do well across different tests, due to "g", but also show distinct strengths and weaknesses, because of "s". In recent years Spearman's theory has been refined by later psychologists, who now treat "s" as a group factor covering a set of related activities: there might be a broad ability to do well on verbal tasks, another for spatial tasks, and yet another for memory tasks.
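Spearman's claim is essentially statistical, and it is easy to see in a few lines of simulation. The sketch below (my illustration, not from the book; the task names and the 0.7 loadings are arbitrary choices) generates each test score as a shared "g" plus an independent test-specific "s", and every pair of tests then correlates positively:

```python
# A minimal sketch of Spearman's two-factor model: each test score
# mixes a shared general factor "g" with test-specific noise "s".
import numpy as np

rng = np.random.default_rng(0)
n_people = 10_000
tests = ["vocabulary", "logic", "route_finding"]

g = rng.normal(size=n_people)  # the shared general factor
scores = {t: 0.7 * g + 0.7 * rng.normal(size=n_people)  # g plus independent s
          for t in tests}

for i, a in enumerate(tests):
    for b in tests[i + 1:]:
        r = np.corrcoef(scores[a], scores[b])[0, 1]
        print(f"r({a}, {b}) = {r:.2f}")  # all positive, around 0.5
```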
While Spearman was researching intelligence, practical measurement methods were also developing in schools, notably after Alfred Binet's work. Various intelligence tests emerged, measuring children's performance on different tasks, which led to the concept of the Intelligence Quotient, or IQ. These tests lacked a solid theoretical foundation, and psychologists debated which abilities—such as memory, reasoning, or speed—should be included and in what proportions for an accurate measure of intelligence; nonetheless, such tests remain a popular way to measure it.
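For reference (a standard historical detail, not spelled out above): the classical "ratio IQ" compared a child's mental age (MA), estimated from test performance, with chronological age (CA):

```latex
\mathrm{IQ} = 100 \times \frac{\mathrm{MA}}{\mathrm{CA}},
\qquad \text{e.g.}\quad 100 \times \frac{12}{10} = 120
```

for a ten-year-old performing at the level of a typical twelve-year-old. Modern tests instead use a "deviation IQ", scoring people against the distribution for their own age group.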
Our common understanding of intelligence is vague—it's broad, flexible, and not tied to a single definition. Spearman's idea of "g" comes closest to defining intelligence, as he offered exact methods to measure it. When these methods are used, intelligence can be measured with a certain degree of accuracy. In his seminal book "The Abilities of Man", Spearman suggested that the mind consists of multiple specialized "engines", each module serving a distinct function, mirroring the known specializations of brain regions. He proposed that each module within our own brains represents a different "s", while "g" acts as a shared source of power—possibly akin to the amount of attention a person can distribute across various mental tasks.
A slightly different explanation was suggested by another great psychologist, Sir Godfrey Thomson; it is equally consistent with a modular mind but rejects the idea of "g" as a shared resource. On this model, there is still an overall or average ability to do things well, but it reflects just the average efficiency of all of the mind's modules. There is no true "g" factor, only a statistical abstraction: the average of many independent "s" factors.
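Thomson's alternative can be illustrated the same way (again my sketch, not the book's). Give each person many independent "bonds" of ability, let each test sample a random subset of them, and the overlap between samples alone produces the same pattern of positive correlations, with no shared factor anywhere in the model:

```python
# A minimal sketch of Thomson's sampling ("bonds") model: no shared g,
# only many independent abilities; each test draws on a random subset,
# and overlapping subsets alone create positive correlations.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_bonds, n_tests = 10_000, 100, 3

bonds = rng.normal(size=(n_people, n_bonds))  # independent abilities
masks = rng.random((n_tests, n_bonds)) < 0.5  # each test samples ~half of them
scores = bonds @ masks.T                      # test score = sum of sampled bonds

r = np.corrcoef(scores.T)
print(np.round(r, 2))  # off-diagonal correlations positive (~0.5), yet no true g
```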
Spearman argued that individuals possess innate general as well as specific intelligence, and Thomson provided a different, if not contrarian, view of intelligence distributed across many modules of the mind. However, it remains a question whether education and personal effort can further enhance this intelligence. In the 1960s, Raymond Cattell introduced a distinction between "fluid" and "crystallized" intelligence. Cattell suggested that individuals with higher fluid intelligence are likely to gain more from their education. Once knowledge is acquired, it becomes crystallized intelligence, which tends to stay consistent and accessible throughout a person's lifetime. Fluid intelligence, which reflects current ability, declines from the mid-teens onward, with older adults solving fewer problems on tasks like Raven's Matrices than younger people. In contrast, vocabulary remains stable with age, even if recall slows. Thus, tests of fluid and crystallized intelligence show little correlation across age groups, as their trajectories diverge over time.
The latest advances in medical science have allowed neuroscientists to map brain functions, and numerous experiments have been carried out, primarily on patients with partial brain damage, to establish whether the brain contains an actual "g" factor, an innate intelligence, or whether "g" is simply the average efficiency of all the brain's separate functions. Neuroscientists now have evidence that a specific set of frontal lobe regions in the human brain is responsible for behavioural control functions and, by extension, connects to Spearman's "g" factor. Using MRI scans we can see that three distinct regions in the frontal lobe of the brain form a circuit that comes online for almost any kind of demanding cognitive activity, in conjunction with other brain areas specific to the task. For example, if the task is visual object recognition, this general circuit will be joined by the brain regions responsible for vision. The general circuit, however, is a constant across demands. We call it the multiple-demand circuit.
At the heart of "g" there is the multiple-demand system and its role in the assembly of a mental program. In any task, no matter what its content, there is a sequence of cognitive enclosures corresponding to the different steps of task performance. For any task, the sequence can be composed well or poorly. In a good program, important steps are cleanly defined and separated, and false moves are avoided. If the program is poor, the successive steps may blur, become confused or mixed… we see that the brain needs constant vigilance to keep thought and behaviour on track. A system organizing behaviour in this way will certainly contribute to all kinds of tasks, and if its efficiency varies across people, it will produce universal positive correlations. By the systematic solution of focused subproblems, we achieve effective, goal-directed thought and behaviour.
But how do we explain the differences in intelligence between individuals? The roots of the general intelligence factor, or "g", have long been the subject of debate, with researchers questioning whether it arises predominantly from genetic inheritance or environmental factors. It is now widely accepted that both genes and environment play significant roles in shaping intelligence. Evidence supporting the environmental contribution to "g" comes from studies showing that performance on cognitive tasks such as Raven's Matrices can be enhanced through targeted training. For example, individuals may improve their scores after engaging in intensive short-term memory exercises, such as practising the backwards recall of telephone numbers. In parallel with environmental research, genetic investigations are ongoing to determine the hereditary aspects of intelligence. Although this line of inquiry is still in its early stages, initial findings suggest that "g" is likely influenced by a multitude of genes, each exerting a small effect, rather than by one or a few genes with major impacts. It appears improbable that these genes act solely on specific neural systems, such as the multiple-demand system. Instead, the genetic impact on intelligence seems to extend broadly, affecting various regions within the nervous system and possibly having general effects throughout the body.
Despite the advances in the study of neuropsychology, human thought remained mysterious, unanalysable, and unique. But towards the end of the 1950s there came a grand moment for the scientific understanding of the human mind: the invention of the General Problem Solver, or GPS, by Allen Newell, Cliff Shaw, and Herbert Simon to solve problems in symbolic logic. They wrote in their influential 1958 paper on GPS:
It shows specifically and in detail how the processes that occur in human problem solving can be compounded out of elementary information processes, and hence how they can be carried out by mechanisms…. It shows that a program incorporating such processes, with appropriate organization, can in fact solve problems. This aspect of problem solving has been thought to be “mysterious” and unexplained because it was not understood how sequences of simple processes could account for the successful solution of complex problems. The theory dissolves the mystery by showing that nothing more need be added to the constitution of a successful problem solver.
In the decades following the development of the General Problem Solver (GPS), scientists used this line of thinking to create AI systems designed to simulate the processes underlying human reasoning and problem-solving, offering coherent frameworks that could account for a wide range of cognitive activities. But there was a shift towards the end of the last century, amidst a growing recognition of the fundamental differences between how brains and conventional digital computers operate. Brains address problems using vast networks of millions of interconnected neurons, all functioning in parallel. These neurons simultaneously influence and are influenced by one another, creating a highly dynamic and interconnected system. The remarkable success of the brain in handling tasks such as visual perception and language comprehension highlights the power of this massively parallel mode of operation—a capability that remains beyond the reach of current AI systems. In contrast, traditional digital computers tackle problems by executing a sequence of simple computational steps, one at a time. This ordered series of actions is what constitutes a "program." As scientific research delved deeper into understanding the parallel mechanisms of the brain, the limitations of serial programs became increasingly apparent. Serial processing, while effective for certain types of logical reasoning, appeared inadequate as a model for the mind's complex and simultaneous operations. Consequently, conventional computer programs were increasingly regarded as insufficient representations of human cognition, and the focus shifted towards understanding and modelling the brain's parallel processing capabilities.
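The serial/parallel contrast is easy to make concrete. In the toy sketch below (my illustration, not the book's), a thousand interconnected units can be updated one at a time, as a conventional program would, or all at once, as a brain-like network does; the result is the same, but the organization of the computation is entirely different:

```python
# A toy contrast between serial and parallel computation over the same
# network: an all-to-all weight matrix connecting 1,000 simple units.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
weights = rng.normal(scale=1 / np.sqrt(n), size=(n, n))  # all-to-all connections
activity = rng.normal(size=n)

# Serial style: visit each unit in turn, one step at a time.
serial = np.empty(n)
for i in range(n):
    serial[i] = np.tanh(weights[i] @ activity)

# Parallel style: every unit updated in one simultaneous sweep.
parallel = np.tanh(weights @ activity)

print(np.allclose(serial, parallel))  # True: same outcome, different organization
```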
GPS was designed for symbolic logic challenges that are quite abstract and involve a limited, predetermined set of moves within a narrow field of symbols. In contrast, real-world problems tend to be far more unpredictable, presenting countless choices and requiring the achievement of specific goals. Successfully tackling such issues hinges on breaking down the overall challenge—the gap between the present situation and the desired outcome—into manageable steps or components. By solving each part individually, you ultimately resolve the entire problem once all segments are addressed.
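The published core of GPS was "means-ends analysis": find a difference between the current state and the goal, apply an operator that reduces it, and, if that operator's preconditions are not yet met, make them a subgoal first. The following sketch illustrates the idea; the tiny "errands" domain is my invented example, not one from the book:

```python
# A minimal sketch of means-ends analysis, the strategy at the heart of
# GPS. States and goals are sets of facts; each operator removes part of
# the current difference. Assumes the problem is solvable as given.
def means_ends(state, goal, operators):
    plan = []
    while not goal <= state:                       # some subgoals still unmet
        diff = goal - state
        # prefer an operator that reduces the difference and is applicable now
        op = next((o for o in operators
                   if o["adds"] & diff and o["needs"] <= state), None)
        if op is None:
            # otherwise pick a relevant operator and pursue its preconditions
            op = next(o for o in operators if o["adds"] & diff)
            plan += means_ends(state, state | op["needs"], operators)
            state = state | op["needs"]            # preconditions now achieved
        plan.append(op["name"])
        state = state | op["adds"]
    return plan

ops = [
    {"name": "grab_keys",     "needs": set(),         "adds": {"have_keys"}},
    {"name": "drive_to_shop", "needs": {"have_keys"}, "adds": {"at_shop"}},
    {"name": "buy_milk",      "needs": {"at_shop"},   "adds": {"have_milk"}},
]
print(means_ends(set(), {"have_milk"}, ops))
# -> ['grab_keys', 'drive_to_shop', 'buy_milk']
```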
In each part of a problem’s solution, a small amount of knowledge is assembled for solution of just a restricted subproblem. We might call this assembly a cognitive enclosure—a mental epoch in which, for as long as it takes, just a small subproblem is addressed, and just those facts bearing on this subproblem are allowed into consideration. Effective thought and action require that problems be broken down into useful cognitive enclosures, discovered and executed in turn. As each enclosure is completed, it must deliver important results to the next stage, then relinquish its control of the system and disappear. Equipped with this general view of thought, we can address a range of intriguing questions. In each case, apparently mysterious issues are illuminated by the idea of decomposing problems and assembling successive cognitive enclosures toward a final complete solution.
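As a programmer's caricature (mine, not the book's), a cognitive enclosure can be pictured as a stage that is shown only the facts relevant to its subproblem, delivers its result to the shared state, and then disappears:

```python
# A toy rendering of "cognitive enclosures": each enclosure admits only
# the facts relevant to its subproblem, delivers its result to the next
# stage, then relinquishes control of the system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Enclosure:
    name: str
    relevant: set[str]                 # the only facts this stage may see
    solve: Callable[[dict], dict]      # produces results for the next stage

def run_program(enclosures, knowledge):
    for enc in enclosures:
        view = {k: v for k, v in knowledge.items() if k in enc.relevant}
        result = enc.solve(view)       # work on just the restricted subproblem
        knowledge.update(result)       # deliver results, then disappear
    return knowledge

# Example: a trivial two-step "plan a trip" program; note that the
# irrelevant "weather" fact never enters either enclosure.
program = [
    Enclosure("pick_route", {"origin", "destination"},
              lambda v: {"route": f"{v['origin']}->{v['destination']}"}),
    Enclosure("estimate_time", {"route"},
              lambda v: {"eta_hours": len(v["route"]) / 10}),
]
print(run_program(program, {"origin": "A", "destination": "B", "weather": "rain"}))
```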
To summarize this general view of thought: it emphasises the significance of breaking down complex challenges into manageable components. Rather than approaching a problem as a single, overwhelming whole, this perspective advocates its decomposition into smaller, focused subproblems. Each subproblem is addressed within a distinct cognitive enclosure—a mental space where only the knowledge and strategies relevant to that aspect are considered. Once a subproblem is resolved, the solution contributes to the next stage, and a new cognitive enclosure is formed to tackle subsequent subproblems. By systematically assembling these successive cognitive enclosures, the mind can navigate step by step toward a comprehensive solution. This approach sheds light on the mechanics of effective thought and action: the clarity and organisation of the mental programmes that direct behaviour. When cognitive enclosures are well defined and executed in sequence, they enable goal-directed reasoning and facilitate the resolution of even the most intricate tasks. Thus, this general view of intelligence reveals that the mysterious aspects of problem-solving can be understood through the process of decomposing problems and methodically assembling solutions, with each cognitive enclosure playing a critical role in the path to a final, complete resolution.
Human intelligence, while representing some of our greatest strengths, is also inherently limited by the concept of enclosed thinking. When we are confronted with a problem, various ideas and perspectives vie for our attention. However, despite the availability of crucial knowledge, we often fail to consider all relevant information; important insights may remain unexamined and neglected. This phenomenon can escalate, resulting in reason devolving into mere rationalisation, where a narrow, seemingly coherent set of ideas dominates our thinking. In this state, alternative viewpoints that might lead to different and potentially more accurate conclusions are actively suppressed. This tendency is a fundamental human weakness, as it blinds us to the truth and inhibits our capacity for objective understanding. Although the power of reason has enabled humanity to achieve remarkable intellectual advances and construct the foundations of civilisation, its vulnerability is also profound. The fragility of reason has contributed to some of history’s most severe challenges—including destructive wars, environmental crises, and the suffering inflicted upon animals. Thus, while intelligence is our greatest asset, its limitations have also led to significant and enduring problems.
Also, our minds are likely limited in their capacity for understanding, much as animals can only grasp what their nervous systems allow—caterpillars perceive simple things, living their whole lives on a single leaf, and dogs can't understand calculus. Humans have broader reasoning, but our thoughts are shaped by our biology; we may never know whether we are fundamentally different or whether human intelligence is simply restricted by our own neural boundaries, like a caterpillar's or a dog's.
I really enjoyed this. It's highly readable and very engaging and, more out of luck than design, covers areas that I mostly hadn't read about in pop-sci before - intelligence obviously, but also the frontal lobes. Chances are you've heard about people who've suffered brain damage in part of their visual areas and subsequently been unable to recognise faces, etc, but are you aware of what happens when the frontal lobes are damaged? I wasn't, and finding out was very interesting.
To make science readable you generally have to shy away from the most hardcore level of detail, so there may be people who'll find this a bit too shallow, but I wasn't one of them. I've also just started reading another science book that's much more textbook-like in its approach, and HIH was a refreshing change of gear from that, believe me. I did feel that Duncan overstretched a bit at times; one of the later chapters (on the dangers of thinking inside 'the box') felt a bit like a rushed cobbling together of received wisdom, and I was pretty disappointed that there was no real attempt to explain inter-person differences in intelligence in terms of the operation of neurons at a network level, especially given the amount of set-up devoted to this issue. But overall this was a generally informative, highly enjoyable introduction to what was for me a fairly new topic. Most welcome.
John Duncan's "How Intelligence Happens" offers a thorough look at the evolution of our understanding of intelligence. The book blends history with Duncan's own significant contributions to the field. Focused mainly on cognitive psychology and neuroscience, it traces how views on intelligence have changed from the 20th to the early 21st century. What sets the book apart is Duncan's personal passion for the subject, which comes across clearly.
The insights presented are the result of Duncan's lifetime commitment to studying intelligence. He builds his theories on extensive observations and experiments, showing an inspiring capacity to integrate disparate findings into a novel theory of intelligence. However, the book is a bit dated and doesn't cover recent advances in cognitive AI. It would be interesting to see Duncan's take on these new developments, no doubt delivered with his usual thoughtful analysis.
The book remains a superb introduction to the cognitive neuroscience of intelligence. Many budding researchers in the area will also find the autobiographical aspects inspiring.
Written by a British research neuroscientist, this book is both too elementary for a lot of American readers who have read a bit about neuroscience, and too technical in spots, enough to become boring. For those interested in this field, Oliver Sacks is a much better bet.