Can octopuses feel pain and pleasure? What about crabs, shrimps, insects or spiders? How do we tell whether a person unresponsive after severe brain injury might be suffering? When does a fetus in the womb start to have conscious experiences? Could there even be rudimentary feelings in miniature models of the human brain, grown from human stem cells? And what about AI?
These are questions about the edge of sentience, and they are subject to enormous, disorienting uncertainty. We desperately want certainty, but it is out of reach. The stakes are immense, and neglecting the risks can have terrible costs. We need to err on the side of caution, yet it's often far from clear what 'erring on the side of caution' should mean in practice. When are we going too far? When are we not doing enough?
The Edge of Sentience presents a comprehensive precautionary framework designed to help us reach ethically sound, evidence-based decisions despite our uncertainty. The book is packed with specific, detailed proposals intended to generate discussion and debate. At no point, however, does it offer any magic tricks to make our uncertainty go away. Uncertainty is with us for the long term. We must manage our uncertainty by taking precautions that are proportionate to the risks. It's time to start debating what those steps should be.
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations.
I'm a Professor of Philosophy at the London School of Economics, where I direct The Jeremy Coller Centre for Animal Sentience. I study how science is transforming our understanding of the minds of other animals, and the implications of the science for ethics, policy and law.
This book contains instructions on how to properly kill crabs and lobsters (14.4): "The procedure should not be 'stun electrically then boil' but rather 'stun electrically, kill quickly with a mechanical method, then boil... a crab can be killed reasonably quickly by a method called double spiking, and lobsters can be killed reasonably quickly by cutting along the chain of ganglia with a sharp knife, starting at the head. Even these methods still take up to ten seconds, highlighting the value of prior stunning, and suggesting there is a need for quicker methods to be developed and made widely available."
Such is the ironic fate of all overly utilitarian moral frameworks with their excessive focus on a calculus of pain and pleasure: to become killing manuals in the end. It's all fine (or, to be fair to the author, probably fine) to kill animals if we don't cause gratuitous pain to them during their lifetime and if we kill them quickly and painlessly enough, because we don't know if they have a right to live in the first place (10.2) (as if we determine this mystical property through means completely disconnected from considerations of sentience, potential, and agency; how about we adopt a precautionary attitude to right to life as well?), or to quote Peter Singer here in a similar vein: "Somewhat to my chagrin, I admit, I am unable to provide any decisive refutation of the conscientious omnivore." (Animal Liberation Now, Ch. 4)
This somewhat cavalier, non-precautionary "oh, we don't know if they have a right to life in the first place, so it's morally permissible to kill them, even though they may plausibly be sentient" attitude is also on display in the chapter on fetuses and embryos (10), where the author's ideological prejudices unfortunately seem to have clouded his better judgment on issues surrounding abortion. To give a few examples:
"The point at which a human fetus becomes sentient is not the point at which abortion becomes morally impermissible. We should separate these issues. The ethics of abortion depends primarily on questions of personhood and bodily autonomy, not on questions of sentience." (10.2) Not necessarily denying the relevance of bodily autonomy or personhood, but surely, questions of sentience have a major bearing on the ethics of abortion, do they not? To be fair to the author, he later writes (10.5): "the issues are not wholly unrelated", betraying a sense of genuine tension between his ideological commitments and where his arguments are leading him.
Or similarly: "Although recognizing second-trimester fetuses as sentience candidates does not give us a reason to change the legal time limit on abortions, it does require honest communication of uncertainty with patients." (10.8) Why not? Again, surely it clearly does give us a reason to consider changing the legal time limits on abortion, does it not? You may argue other considerations outweigh this particular consideration, but it surely does give us some reason to rethink our stance.
This is all the more astonishing given that in the very next chapter, for instance, the author will write: "If organoid research leads to the creation of organoids that are sentience candidates, a moratorium (time-limited ban) or indefinite ban on the creation of this particular type of organoid may be appropriate." So, a complete ban on the creation of brain organoids (which cannot live on their own without outside support, by the way) may be appropriate when they become sentience candidates, but "recognizing second-trimester fetuses as sentience candidates does not give us a reason to change the legal time limit on abortions"? That sounds a bit inconsistent to me, to say the least.
Also, on the viability-based limits on legal abortion (10.2): "If future technology allows neonates to be viable outside the womb at ever earlier time points, then viability-based legal thresholds may end up pushed to ever earlier time points. This would be hard to justify from an ethical point of view, since it is difficult to explain why the moral status of a fetus inside the womb should depend on the medical technology that happens to exist outside the womb." This is actually not nearly as difficult as the author imagines. Consider a hypothetical future scenario where we have the technology to bring back a patient from a vegetative state after some time. The existence of such technology would presumably render it ethically wrong to prematurely end the life of such a patient.
While reading this book, at the back of my mind, I also could not really stop thinking about the absurdity of the gap between the overall laudable concern for beings "at the edge of sentience" that this book articulates and all the grotesque evil we're already doing (all legally!) to billions of animals around the world that are, by all but the most fringe accounts, well inside of that edge, in factory farms, in slaughterhouses, in animal labs, etc. Shouldn't it be our utmost priority to put an end to the latter as soon as we can, before we worry too much about sentient organoids or sentient AI?
Very compelling argument that all arthropods and fish are at least plausible candidates for creatures that can suffer and that the precautionary principle demands we take their interests seriously.
I think this book is geared at more technically-interested audiences as well as policymakers, so it was a bit of a struggle at times. The policymaking chapters were a slog.
That said, some parts were fascinating. I expected the "...and AI" in the subtitle to take up a substantial portion of the book given that AI is a hot topic, but it ended up being maybe 10% of the content at the end.
The "meat" of the book was the concept of sentience as it applies to humans and all other living creatures. Certainly I came away with a deeper appreciation of sentience in animals, though to be honest I am not sure I am about to change any shopping/eating habits (the author avoided guilt trips as a persuasion tactic). More interesting still were the topics of sentience in human fetuses, people in "persistent vegetative states" and some of the challenges with end of life care in humans (withdrawing nutrition and hydration is a special torture humans sadly reserve for themselves - we would never impose this on other creatures, sentient or not).
I am not about to stop killing mosquitos, sorry.
I listened to the audiobook and I found the narration a bit subpar. Transitions between recording sessions were noticeable with shifts in tone and pacing.
This is a momentous book—not only in the context of sentience, but in applied philosophy as a whole. Only the very best writers can take on a topic this broad, complex, and multi-faceted, and manage to write something this clear and engaging to experts and casuals alike. But more importantly, I have never read something that simultaneously manages to be at the very forefront of the discipline while also remaining entirely pragmatic in its focus.
Every section of this book puts practicality first, and while it directly engages with the expert-level details—which is very impressive, because it is remarkably expansive—Birch repeatedly emphasizes that nothing about this is an abstract thought exercise. Even if there is considerable uncertainty, we exist in a world in which sentience may not be exclusive to humans, we generally hold moral beliefs that limit how we may act towards sentient beings, and we need to make decisions right now about what actions might be appropriate given the available evidence. I think Birch strikes the perfect balance here between 'informational' and 'activist', because he is not particularly prescriptive regarding what actions are appropriate—he lays out the plausible theories and current evidence, provides us with a framework for making those decisions given what we know, and calls upon us to have those difficult but essential conversations.
I hope this book serves as an impetus for decision makers, experts, and regular folks alike to engage with this topic and try to forge an informed, humane path forward. I also hope it will serve as an inspiration for people working in other disciplines that involve decision making despite uncertainty. I had a really great time reading this book. I think others will too.
The Edge of Sentience is an important guide to acting properly in the face of uncertainty. In this book Birch expertly weaves together relevant work from philosophical and scientific literatures, explaining key concepts succinctly and lucidly. I learned a great deal by reading it.
Birch’s analysis is most interesting on the philosophy of mind. I loved thinking about animal minds across the whole animal kingdom as well as non-animal minds in the form of AI. The material on policy wasn’t easy to get through. But, even here, the framework Birch builds is strong and elegant.
One major issue I have with the work, however, is that Birch seems to hold an idealistic view of humans, human values, and human decision-making—he is a staunch proponent of the democratisation of public policy. The answers in The Edge of Sentience are too often cast in terms of what’s good for us, not what’s good. After all, there are pressing issues that demand more from us, given the unfathomable scale of animal suffering that human beings cause yet are indifferent to.
Moreover, Birch fails to provide justifications for certain practices, such as animal testing—seemingly assuming scientific discovery is a goal in itself—yet makes ungrounded claims about the unique status of human beings: ‘human issues have a special urgency’ (p. 195). I found the chapter on foetuses to be disproportionate in relation to the other chapters.
Nonetheless, I respect Birch’s experience, his strategy, his delicacy when handling big questions, and the importance of this work in helping us achieve results. As I see it, this is a brilliant piece of philosophy that should be taken seriously and read at large.
An excellent, in-depth, very careful treatment of how to determine whether a being is sentient when the evidence is highly uncertain.
There’s far too much detailed information in this book to cover here, so my brief review will simply touch on general themes and some highlights. Helpfully, Birch begins by exploring various definitions of sentience before settling on the one he uses throughout the book: sentience is valenced phenomenal consciousness, and thus a sentient being is one with the capacity to have such valenced experiences. The phenomenal consciousness part refers to having some form of subjective experience, and the valenced part refers to whether that experience can be positive or negative. Birch distinguishes this definition of sentience from the concepts of sapience (human-level intelligence and reflective thought) and selfhood (a unified persisting self, i.e., psychological continuity).
Birch then explains how three quite different meta-ethical approaches—classical utilitarianism, neo-Kantianism, and Martha Nussbaum’s capabilities approach—all converge on a similar conclusion about the importance of sentience through somewhat different paths. This reassures the reader that his approach to sentience does not require you to assume a specific meta-ethical stance.
An overarching theme of the book is how to apply the precautionary principle based on a bottom-up approach (rather than top-down) to take into account all the uncertainty surrounding scientific issues related to sentience. Birch suggests this precautionary approach because of the asymmetries between false positives and false negatives with respect to identifying a being as sentient or not. If we treat a nonsentient being as sentient (false positive), the harm will come from the restrictions we impose on how we treat such beings—this likely means undermining innovation and economic output. If we treat a sentient being as nonsentient (false negative) or simply ignore sentience, we are likely causing untold amounts of gratuitous suffering. Thus, Birch’s first proposed framework principle is: “A duty to avoid causing gratuitous suffering. We ought, at minimum, to avoid causing gratuitous suffering to sentient beings either intentionally or through recklessness/negligence. Suffering is not gratuitous if it occurs in the course of a defensible activity despite proportionate attempts to prevent it. Suffering is gratuitous if the activity is indefensible or the precautions taken fall short of what is proportionate” (p. 131).
Birch then spends a lot of time discussing where the “edge of sentience” is in human brains, other animals (with particular focus on invertebrates like cephalopod molluscs and decapod crustaceans), as well as organoids and, later in the book, future AIs. Birch is less concerned with demonstrating sentience with high certainty, and instead aims to establish the less demanding “sentience candidature”: that there is enough evidence to take seriously the realistic possibility of sentience. He does so with the aforementioned invertebrates, as well as insects.
This material caused me to update my beliefs on a few things: First, I now place a higher probability on insects being sentient than I did previously (though I still think it’s much more likely they’re not sentient). Birch adduces evidence that insects have a behavioural core control unit, and that some insects, like bees, have working memory and forms of associative learning that, in humans, seem to be directed by conscious experience. But Birch concedes there is a paucity of evidence demonstrating even sentience candidature in other invertebrates like spiders, and as such we should be careful not to treat this absence of evidence as evidence of absence.
Second, Birch discusses the contentious issue of fetal sentience and whether the legality of abortion should track scientific evidence of when sentience emerges. Ultimately, Birch concludes that a rights-based approach to the permissibility of abortion is preferable to tying abortion permissibility to sentience. This is because there is now evidence of sentience candidature in fetuses at a much earlier stage (beginning in the second trimester) than previously thought and still popularly assumed (24 weeks). I think he’s probably right that we want to tie abortion legality to a woman’s right to choose—using the principle that a woman should be the primary decision-maker over who can use her body. The danger in tying abortion legality to evidence of sentience is that we may discover evidence that pushes the likely emergence of sentience earlier and earlier, nullifying the right to abort such fetuses.
Instead, Birch proposes using the concept of proportionality to inform realistic preventative measures we can take. For example, even if insects are sentient, it would be unreasonably burdensome to try to avoid stepping on insects outside, or to refrain from driving so as not to kill the many insects that die when we drive.
Interestingly, Birch barely mentions factory farmed animals and the untold amounts of gratuitous suffering we impose on such animals. I don’t know if this is because such animals are not on the “edge of sentience”, or if Birch doesn’t want to tackle the inevitable conclusion (following from Birch’s aforementioned first proposed framework principle) that a proportionate response would surely be for us all to stop using animals in this way, or some other concern. Regardless, I found this to be one of the few—but not insignificant—weaknesses of the book.
For a challenging yet extremely important, thoughtful, careful read, I highly recommend this book.
Interesting read! Quite a complete and scientific overview of a relatively niche topic: sentience.
Based on ethics and reasonable frameworks it explores how to look at uncertainty regarding sentience in disabled humans, fetuses, animals, plants, and artificial intelligence. It gives quite a nuanced and scientific view towards abortion as well.
Introduction to very useful concepts and theory, including sentience candidates, investigation priorities, precaution, risk, and proportionality. Practical and useful. My only criticism is that there is a lot of material that is not easy to grasp if you are a general reader (especially the chapters on theories of sentience).
Thoroughly enjoyed. Not for everyone, but if the topics are of interest I'd recommend reading. I want to give it 3 stars, but I know I'm a strict rater compared to the norm, and this book is underread, so it gets 4.
Fascinating exploration of the concept of sentience as we know it, examining it through lots of valences (medical/AI/etc.) — a little denser than I expected but interesting nonetheless!
A careful philosophical look at sentience across the natural and artificial worlds. Birch emphasises our uncertainty but still provides a comprehensive framework to identify potential sentience and give guidelines around ethical treatment. Reads a bit like a long but accessible academic paper. Useful and thought-provoking.
The important thing about this book is that the proposals it contains drove an actual public policy success. The UK now has a framework for preventing gratuitous suffering in octopuses and decapod crustaceans.
What I love about this book is that it is pragmatist through and through. The key move is the replacement of the concept of "sentient being" with the bridging concept, "sentience candidate".
S is a sentience candidate if "there is an evidence base that implies a realistic possibility of sentience in S that it would be irresponsible to ignore when making policy decisions that will affect S, and is rich enough to allow the identification of welfare risks and the design and assessment of precautions."
"... we will be asking: is this system a sentience candidate, an investigation priority, or neither of these? This is a useful shift from the question: is the system sentient? The first question, unlike the second, is one we can answer using current evidence, and our answers can command widespread support and confidence from people with many different theoretical sympathies."
The kind of moral consideration that sentience candidates command from us is management of sentience risk. In particular, we need to manage the risk of causing gratuitous suffering. Note that this moves the problem into a domain that can be addressed by bodies of risk management knowledge.
Experts are the ones equipped to say, based on knowledge of metaphysical theories, ethical systems, legal frameworks, and neuroscience, what the level of sentience risk is for given public policy options regarding a sentience candidate. However, in a democracy, only ordinary citizens are capable of deciding what public policy responses are proportional to the level of risk involved; such responses will come with their own offsetting risks and costs. Hence, Birch recommends that public policy be informed by citizens' panels (which themselves would be informed by expert testimony).
This commitment of Birch's to democracy and pluralism can be frustrating. In particular, he remains agnostic to public policy responses within a rather large "zone of reasonable disagreement". Here we get the interesting notion of the reasonable interlocutor:
"A reasonable person is one who displays certain characteristic virtues to at least a minimal degree: they care about, and respond to, evidence and reasons; they respect scientific evidence, on the whole, even if they are sceptical of some scientific results; they want to reach agreement with those they disagree with; they are not wantonly reckless in the face of risk; and their values are not completely abhorrent."
Such people won't be able to come to a consensus about sentience, but they "may agree about the structure of the option space" and thus form a "meta-consensus" about what is reasonable. Keeping this idea in mind, Birch leaves on the table some minority-held theories of consciousness that one might rather he dismiss. Part of the point of doing this is epistemic humility, and part of the point is building consensus. If experts leave popular options off the table, then the whole process of citizens' panels isn't going to have buy-in, which is the whole point.
For me, I don't like that Birch leaves AI sentience on the table. I think it's absurd to ignore that current popular discussions of AI sentience are heavily influenced by proven liars who stand to benefit financially from convincing investors that their technology is much more advanced than it is. I think that leaving AI sentience on the table at this point is mistaking the appearance of neutrality for actual neutrality.
The kind of AI that actually exists now is LLMs. LLMs do not possess any of Birch's R1-R5 brain structures constituting the zone of reasonable disagreement about consciousness.
"Ironically, transformers, for their remarkable linguistic abilities, appear to have moved the AI industry away from architectures more closely inspired by the brain."
Worse, there is a prohibitively serious methodological problem with the case of LLMs.
"Our working party in 2022–2023 agreed that there is simply no way to assess sentience in an LLM on the basis of its linguistic behaviour, given the gaming problem."
There are less popular, more experimental candidates for AI that would represent more serious challenges. These include (the incredibly cool) OpenWorm, the complete digitized connectome of the C. elegans nervous system. But even this has serious problems preventing it from sentience candidacy (never mind that C. elegans is not itself a sentience candidate).
"But another, deeper reason is that the connectome does not give us the whole story about the functioning of the nervous system. Perhaps most obviously, it does not tell us the synaptic weights: the degree of influence of one neuron’s firing over that of another. It also does not tell us how these weights can be modified by experience—how the system can learn. More fundamentally, there is a lot that neurons do beyond simply firing, and indeed the neuroscience of C. elegans is a rich source of information about what else a biological neuron can do. For example, C. elegans is able to steer towards the source of an attractive odour or away from an aversive odour. This behaviour relies on processing within a single interneuron. Part of the neuron’s axon keeps track of where the head is located as it sweeps from side to side, while another part of the axon keeps track of the intensity of the odour, and these two pieces of information are integrated inside the neuron to regulate steering. The internal spatial organization of the axon, plus its spatial relationships to sensory and motor neurons, are all part of the story of how it can do this job. A full emulation of C. elegans would have to go below the neuronal level to emulate the dynamics within neurons, which often seem to depend on the finer details of how neurons are arranged in space."
Indeed, it's not clear how we could argue at this point that any existing inorganic system has the kind of brain structures that our best theories associate with sentience (out of those theories that have a coherent way of talking about physical implementation). And yet, Birch argues:
"At the same time, we should expect more and more people to develop strong feelings about the individual AI systems in their own lives. If these AI companions are sentient, these feelings might be reciprocated. But if they are not, human lives could become increasingly absurd, as people become ever more devoted to non-sentient companions at the expense of their relationships with real sentient beings. All of this means we cannot simply ignore the question of what it takes for an AI system to be a sentience candidate. That debate is coming, whether we are ready or not."
To me this is conceding ground to an unreasonable disrespect for scientific evidence just because a position is popular. This move appears to be driven by a (well-motivated) desire to drive for consensus. But what would I propose we do instead? Again, these proposals drove an actual public policy success. It's also worthwhile thinking about what it would mean for LLMs to be treated as investigation priorities.
"Proposal 26. Codes of good practice and licensing (II). There should be a licensing scheme for companies attempting to create artificial sentience candidates, or whose work creates even a small risk of doing so, even if this is not an explicit aim. Obtaining a license should be dependent on signing up to (and, where necessary, funding the creation of ) a code of good practice for this type of work that includes norms of transparency."
It's fun to imagine Sam Altman blowing smoke about AI sentience risk and having to backpedal when someone suggests that we should take reasonable steps to do something about it. Notably, none of these con men actually make their products transparent so that the supposed sentience risk can be evaluated, which is something you'd do if you were making your claims in good faith.
All of this is very cool to think about and it's very nice to see progress in public policy.
This is a strikingly consensus-building piece of public philosophy. I didn't get much of a sense of Birch's own hot takes — the book is more like an attempt to inform public policy in an extremely legitimate and reasonable-sounding way. So there are attempts to describe broad, agreeable-sounding principles of caution, and to draw big circles around zones of reasonable disagreement. It's less coming from a ‘clear and coherent theory of ethics’ and more from ‘broadly appealing political theory’, which sounds like more of a dig than I mean it to be.
Two minor gripes: if you think the ‘edge’ of the capacity to feel pain is drawn (say) somewhere below birds and mammals, but above most invertebrates, then industrial animal farming is surely the worst thing humans are doing in the world, in terms of direct and ongoing harm. And presumably policy in rich countries is at least partly amenable to good philosophical arguments about sentience. But there wasn't much discussion of factory farming at all, compared to topics like animal experimentation or ‘neural organoids’. I worry about zones of policymaking occupying more attention than factory farming (perhaps because they are newer and so less settled), despite being several orders of magnitude smaller in scale.
There was also a chapter on insects, arguing that (many) insects should be considered ‘sentience candidates’; that is, capable of feeling pain. That feels to me like it could be a staggeringly important fact about the world; but I felt a bit like Birch passed over (or even downplayed) the practical upshots. It's true that humans don't directly and deliberately harm insects in the way we do other vertebrates (e.g. through farming), but I worry about an implicit maxim which says we humans only have obligations toward creatures we own or directly expropriate.
So I found those parts a bit frustrating. But they only stood out because the substance of the book — especially the more sciencey parts — are very, very good. And the topic matters a huge amount. It's quite dizzying to appreciate how ignorant we are about minds different from our own.
A crucial determinant of how we should treat others is whether they can be judged as sentient: meaning that they experience conscious states that have a valenced character (feel good or bad). The most important (but not sole) case is judging whether they feel pain and can be said to suffer.
I recommend this book highly for those interested in the science, ethics, and practical policy implications of sentience in animals, AI systems, and humans with consciousness disorders. Birch surveys all of the relevant scientific research as well as the philosophical theories that bear on the topic. He is well positioned to do so as a philosopher who also conducts empirical research on animal consciousness (at the London School of Economics).
Birch takes extreme care to be even-handed in his treatment (he is not pushing one particular theory of consciousness, for example), and his eye is always on the pragmatic implications and on the shaping of government policy (he has had experience as a policy advisor in the UK).
I am very interested in the question of consciousness in other living things, and Birch describes how research (along with more enlightened thinking) is pushing the boundaries of plausible sentience into many of the important invertebrate taxa. The section on human consciousness disorders is also fascinating and things have been changing fast there as well. Other topics concern lab-developed neural organoids (interesting and weird), and, of course, AI systems (here the discussion is understandably sketchier). Just impressively comprehensive.
The book is free (!) to download from Oxford Academic.
What makes this book so resonant for me is how it grounds a seemingly science-fiction dilemma in practical, immediate terms. As we develop increasingly complex Large Language Models and other forms of AI, we are creating systems whose internal workings and emergent properties are not fully understood. We're building beyond our own comprehension. Birch's framework provides a much-needed guide for navigating this uncharted territory. It forces us, as developers and ethicists, to confront uncomfortable questions:
• What kinds of computational architectures or behaviors should be considered indicators of potential sentience?
• If we apply the precautionary principle, what does that mean for the way we train, test, and ultimately "retire" advanced AI models?
• How do we build systems ethically when the very nature of their subjective experience—if any—remains a "black box"?
The Edge of Sentience is not just an academic exercise; it's an essential and timely guide for the AI community. It urges us to be proactive in our ethical reasoning, ensuring that by the time we have to seriously ask "Is it sentient?", we have already established a compassionate and cautious way to answer. It's a foundational text for the next decade of AI development and governance.
A thorough review of philosophical considerations on sentience. When does an embryo become sentient? Do worms, spiders or octopuses have sentience? When do patients in coma have sentience? What are the criteria under which we can consider an animal as sentient? Sadly the book is 95% philosophy and 5% science otherwise it would have made it a 5 stars for me.