This may be the only book I’ve read on the topic of consciousness that introduces a theory about (subjective experience, or the phenomenology of) consciousness from first principles, develops an operational definition for quantifying degrees of consciousness, and outlines years of experiments by the authors themselves. I really enjoyed this book.
While I appreciate the symmetrical structure of the book, there are problems with the writing (or translation): numerous typographical errors, an inconsistent delivery in which delightfully accessible passages are intermixed with passages burdened by unnecessary jargon (particularly the neuroanatomical descriptions), and significant repetition. For example, I felt that Chapter 4, while it raises interesting questions, could easily have been incorporated into relevant sections of other chapters. Some readers may find the transitions from the brain to theory (Chapter 5) and back somewhat jarring, but I thought the authors provide an appropriate philosophical, social, and clinical context to motivate the development of a meaningful and reliable measure of consciousness that can be generally applied, before revisiting specific brain-related questions.
Chapter 2 introduces the philosophical concept of zombies from David Chalmers’s work, a few variants of “digital zombies”, and “zombies within our skulls”. The cerebellum and basal ganglia are brought up as examples of zombies in our skulls in Chapter 2, but are revisited in further depth in Chapter 6 (see below). Chapter 3 talks about anesthesia, comas, and minimally conscious states between locked-in syndrome and unresponsive wakefulness syndrome, as well as unsatisfactory attempts to assess consciousness by (1) sensory inputs and motor outputs, and (2) sensory inputs and neural outputs (fMRI, EEG). Counters to the former include examples of consciousness without sensory inputs or motor outputs. Counters to the latter include false negatives that can arise from many sources, including aphasia, poor attention, damage, mood, confusion, and motion artifacts, as well as awareness with no subject or content (existence without boundary, consciousness as pure presence,...).
Chapter 4 examines unsuccessful attempts at measuring consciousness: (1) global activity levels (counters: sleep, seizures), (2) activity in specific regions of the brain (counters: self-reporting, frontal lesions -- see quote below), and (3) synchronous activity (counters: NREM sleep, anesthesia, generalized seizures, individual exceptions in patients with minimally conscious states).
Chapter 5 introduces the principle of Integrated Information Theory (IIT). The authors contrast the responses of a brain and a photodiode to light and darkness, by highlighting the potential of the brain to differentiate between vastly more states. Later, they contrast the brain’s integration of information as underlying the unitary experience vs. the independence of individual photodiodes in a camera. While this characterization is useful for later chapters when contrasting different neural structures, it struck me as a bit of a straw person argument. What if the array of photodiodes were connected to an integrated circuit? And what if that system were not only integrated, but could differentiate between stimuli, for example if it were to have object/face recognition built in?
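The photodiode contrast can be made concrete with a toy calculation (my own illustration, not from the book): the information generated by a system that settles into one of N equally likely, distinguishable states is log2(N) bits, so a photodiode distinguishing only light from dark yields 1 bit, while a system of many binary elements whose joint states are all distinguishable can in principle yield far more.

```python
import math

def differentiation_bits(n_states: int) -> float:
    """Information generated by selecting one of n equally likely states."""
    return math.log2(n_states)

# A photodiode distinguishes only light vs. dark: 1 bit.
photodiode = differentiation_bits(2)

# A toy system of 20 binary elements whose joint states are all
# distinguishable can select among 2**20 states: 20 bits.
toy_system = differentiation_bits(2 ** 20)

print(photodiode, toy_system)  # 1.0 20.0
```

Of course, this only quantifies differentiation; the authors' point is that the camera's photodiodes generate their bits independently, whereas the brain's repertoire belongs to a single integrated entity.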
The principle of IIT above is broken down into two principles with their corresponding postulates:
“Conscious experience is rich in information. Thus, the physical substrate of consciousness must be highly differentiated -- that is, it must be able to generate a vast repertoire of states.” (page 66)
“Conscious experience is integrated thus the physical substrate of consciousness must constitute a single, integrated entity.” (page 71)
The authors introduce the quantity phi to define what an “entity” is, which they contend does not already have a satisfactory definition in physics (I believe there have been reasonable attempts in social network analysis to quantify something analogous: clusters of related vertices). It is expressed in bits, but is different from the information in Shannon’s classic theory of communication.
In Chapter 6, the authors discuss sensory and motor systems, and brainstem activating systems. As mentioned above, they also revisit the cerebellum and the basal ganglia as examples of “zombies in the skull” (in contrast to the thalamocortical system). They contrast IIT with Global Workspace Theory.
Chapter 7 summarizes years of the authors’ experiments that test integrated information theory on an impressive array of conditions. They clarify that one has to be careful not to mistake high information measures for randomness, and not to mistake high-synchrony situations for integration when parallel drivers underlie that synchrony.
They list practical rules for assessing integrated information:
----
Rule 1: Observing is not enough; one needs to perturb and detect cause-effect relationships.
Rule 2: Detecting widespread responses is not enough; one needs to assess their differentiation.
Rule 3: Responses must be recorded on an adequate time scale.
Rule 4: The measurement must bypass sensory inputs and motor outputs.
"A summary of these four guidelines can be formulated as:
Evaluating a brain’s capacity to integrate information requires direct perturbations of cortical neurons (bypass input and output chains) to assess the spatial extent of the evoked response (integration) and its differentiation (information content) on a sub-second time scale (time constant of consciousness).” (pages 100-101)
“Of course, characterizing this echo is still a far cry from calculating phi, which would require perturbing the brain in all possible ways across all possible bipartitions [147, 149].” (page 102)
----
So they introduce their “perturbational complexity index” (PCI) [200]: “The procedure involved ‘zapping’ the cortex and ‘zipping’ its responses.” (page 120)
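The “zip” half of this idea can be sketched in a few lines. To be clear, this is not the authors’ published PCI algorithm (which applies Lempel-Ziv complexity to source-modeled, statistically thresholded TMS/EEG responses); it is only a toy illustration of measuring the compressibility of a binarized response matrix, with the function name, the zlib compressor, and the normalization all being my own assumptions:

```python
import zlib
import random

def zap_zip_complexity(binary_matrix):
    """Compressibility of a binarized channels-x-time response matrix.

    Flattens the matrix into a byte string, compresses it, and
    normalizes between an all-zeros floor and a random-noise ceiling.
    Values near 0 mean the response is highly regular; values near 1
    mean it is hard to compress (differentiated)."""
    flat = bytes(bit for row in binary_matrix for bit in row)
    compressed = len(zlib.compress(flat, 9))
    floor = len(zlib.compress(bytes([0]) * len(flat), 9))
    ceiling = len(zlib.compress(
        bytes(random.getrandbits(1) for _ in range(len(flat))), 9))
    return (compressed - floor) / (ceiling - floor)

random.seed(0)
# A stereotyped response: every channel repeats the same pattern.
stereotyped = [[t % 2 for t in range(256)] for _ in range(16)]
# A differentiated response: each channel has its own pattern.
differentiated = [[random.getrandbits(1) for _ in range(256)]
                  for _ in range(16)]

print(zap_zip_complexity(stereotyped) < zap_zip_complexity(differentiated))
```

The intuition the index captures: a stereotyped, repetitive response compresses almost as well as silence, while a spatially differentiated response resists compression, and PCI rewards evoked activity that is both widespread and differentiated.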
The authors even boldly suggest that in the future we will have input and output prostheses to bypass sensory and motor pathways (pages 134-135). They also extend the discussion beyond structural connections to physiological changes, and go on to describe experiments in which they replace transcranial magnetic stimulation with single-pulse intracortical electrical stimulation, and EEG with local field potential recordings.
In Chapter 8, the authors raise the question of which other animate and inanimate entities have subjective experiences. They reference the attitudes of different philosophers (Thomas Nagel, Descartes, Montaigne, and Peter Singer) toward the capacities of non-human animals, and go into some detail about two current approaches: studying the complexity of behavior and the size of the brain. Like other authors, they believe in graded levels of consciousness: “According to IIT, phi can be graded.” (page 161)
Given their view on animal consciousness and graded levels of consciousness, I was surprised by their stance on embodied vs. simulated consciousness. While they do not believe that a brain needs a body for subjective experience (contrary to Alva Noë’s book Out of Our Heads, although later they do discuss the importance of experience during development), they are very much against the possibility of software simulation-based consciousness:
----
“Consciousness is not produced through interaction with the external world here and now, it exists in the brain! If that brain were to be disconnected from its nerves, extracted from the cranium and kept alive in a bath of oxygen and sugar, the dream would continue, rich and bizarre as before, unpredictable as always. Just as if nothing untoward had happened.” (page 59)
“What about a software that simulates in detail not just our behavior, but even the biophysics of neurons, synapses, and so on of the relevant portion of the human brain, such as the one imagined in Chapter 3? Functionalism would hold that it would be absolutely conscious, as in this case all the relevant functional roles within our brain, not just input-output relationships would have been replicated faithfully. According to IIT, however, this would not be justified, for the simple reason that the brain is real, but a simulation of the brain is virtual. Simulating a black hole, will not bend time and space. For the theory, consciousness is a fundamental property of physical systems, one that requires having real cause-effect power intrinsically.” (pages 158-159)
----
There are beautiful passages in this book, particularly the existential portions at the very beginning, reflecting upon seeing the Earth from the moon, and at the very end:
“Time will tell if another scientific revolution will return us to the center of our universe, naturally and on our merits. For now, like the astronaut who saw the earth lost in an icy space, the student is caught by a sense of wonder and deep affection. He would like to shield the weakest flames, the ones that struggle to awake and those that are fading, and decides that the best thing he can do is to explore, integrate and share, to understand the world, and let it exist a bit more.” (page 175)