Geoffrey A. Moore's Blog

July 5, 2022

Starting the Dialogue: Discussing the Infinite Staircase via Book Review

In his review of The Infinite Staircase, Bill Bartlett has done me the honor every author most cherishes—he has read my book thoughtfully and has engaged directly with its claims. He and I don’t see eye to eye on many of these claims, but we both have deep respect for Western philosophy and religion, so I welcome the opportunity to do a kind of Point/Counterpoint with his review.  In this context, I am reproducing what he says first and then interspersing my commentary in a different font color.  Here goes.

Book Review: The Infinite Staircase

When I first came across The Infinite Staircase, I was intrigued for two reasons: The title suggested that the book was not in the same vein as the author’s previous work and that it would tackle morality from a secular viewpoint. As such the title was extremely well chosen.

The Infinite Staircase is the latest from Geoffrey Moore, who is famous for books such as Crossing the Chasm and Zone to Win, which I quote quite frequently when talking about product management. It is a departure from those books as it deals with the strategy for something larger than products and companies: life itself. It attempts to give a description of the universe and human life in it, and then derive from that a framework for grounding ethics and morality. Moore remarks that the world is becoming less and less religious, which is eroding the foundations that fostered ethical behavior. Moore sees the need for a secular foundation in order to regain the stability that society needs and that ethics provides.

I don’t want to quarrel with this representation, but I do think it is a bit one-sided when it comes to my commitment to a secular metaphysics.  I am blown away by the magnitude and wonder that the secular story of creation tells.  The fact that humanity has been able to develop such a complete and verifiable explanation of how we got from a Big Bang to the present day astounds me.  So, I do not want to set it aside when it comes to establishing the grounds for ethical behavior.  It is not, in other words, that I am disappointed with religion, although I do take it to task from time to time, but rather that I am unwilling to sideline what I consider to be some of humanity’s best work when it comes to tackling issues of spiritual and moral importance.

Moore’s proposed foundation for ethics is the various sciences which he places on consecutive steps of a staircase. Each stair emerges out of the previous one: it is constrained by the previous stair but not wholly predicted by it. For example, chemistry emerges from physics because physics applies a constraint on what chemical entities can do without being able to predict every behavior that they have. It is a shame that Moore does not explain the concept of emergence and assumes the reader is familiar with it. I myself was skeptical that chemistry emerges from physics and was glad to be able to find a well-researched article on the subject.

I confess to being guilty as charged here. I am in awe of emergence and how it generates complexity, and it certainly deserves a strong foundation.  In the bibliography of The Infinite Staircase, I do reference some works on the topic, of which I think John Holland’s Complexity: A Very Short Introduction is the best one to start with.

Because of emergence, there is no single science that completely describes and predicts everything in the universe. Therefore any unifying theory must contain all of them. In fact, it must contain many other sciences, some of which have not been explored yet. Hence the staircase is assumed to be infinite in both directions. In spite of this, Moore feels that we have enough of a grasp on the “middle” of the infinite staircase to ground ethics.

In fact, Moore focuses on a smaller subset of the staircase he describes by stating that goodness begins with desire. This is surprising given that many points of contention in today’s society have a biological component.

Unfortunately, Moore’s proposed framework for ethics is flawed for several reasons that I would like to discuss.

Deriving an “ought” from an “is”

From the beginning, Moore had his work cut out for him. Many have tried and failed to do what Moore attempts, that is, to ground a theory of what ought to be in a theory of what can be. Many philosophers have weighed in on this problem. David Hume is famously credited with the observation that one cannot derive an “ought” from an “is”. Jean-Paul Sartre, quoting Dostoevsky, affirmed that since God does not exist, anything is justifiable.

the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

— David Hume. A Treatise of Human Nature (1739)

Bill is correct that I do believe you can derive an ought from an is, that this is an important objective for the book, and that it does put me at odds with Messrs. Hume, Sartre, and Dostoevsky.  My claim is that consciousness emerges from desire (this aligns with Hume’s famous comment that reason is in service to, and not the master of, the passions).  I take this relationship between desire and consciousness to establish the is.  Desire is driving behavior, and we have no choice in that matter.  We are compelled to desire.  The vehicle we are riding on is irresistibly in motion—all we can do is seek to steer it.

That’s where values come in.  They emerge from conscious beings interacting socially with one another, specifically within the context of raising families and interacting with neighbors, something we can see in higher order mammals who nurture their young, discipline their peers, court their mates, and defend their group.  All four of these behavioral domains entail oughts, even among pre-linguistic animals, and certainly within human society. These oughts emerge from the prior is.  We may see this as mysterious, perhaps, but it is not complicated.  We are all acting out strategies for living that seek out what I have termed a Darwinian Mean between desire and values. 

Some of the most recent attempts are from Christopher Hitchens, Richard Dawkins, Sam Harris and Daniel Dennett, collectively known as the Four Horsemen of New Atheism. In particular, Sam Harris wrote The Moral Landscape, a Ph.D. thesis that was turned into a popular book, where he attempted to ground morality in neuroscience and an evaluation of the mental states of human beings. He posited that moral action is anything that promotes human flourishing as defined by the allegedly factual self-reporting of each person’s well-being. Harris’ work was criticised by religious and atheist readers alike. Harris called for a competition of essays critiquing the book and the winning essay was posted on his blog. Many critical responses state that Harris never escapes the problem of deriving an “ought” from an “is”.

My claim is that Sam and others fail because they seek to ground values in language when in fact they emerge prior to language.  That said, to explore the nature of morality via self-reporting of well-being is an interesting idea.  Psychological well-being could plausibly be a signal of what one might call social homeostasis, in the same way as good health is a signal of biological homeostasis.  I believe that the “value of values” is to support social homeostasis, the well-being of the group, and that is why they emerged among social animals (and not among asocial ones).

It appears to me that Moore’s work falls into the same chasm. The “turn” in Chapter 6 is jarring. After having extolled the benefits of Transcendental Meditation as providing easily accessible spiritual support, Moore goes on to ground goodness in a series of archetypes found in society: maternal love, paternal love, sibling love and communal friendship. Moore also defines a way of measuring goodness, a sort of set of KPIs: Is Good, Feels Good, Works Good.

It is a bit humbling to have one’s ideas summarized so baldly, but Bill has me exactly right. 

These proposals have several problems. First of all, they are not universal; rather, they are very culturally specific. Moore may provide evidence that many animal species exhibit these archetypes and a certain level of ethical behavior that is beneficial to the species, but there is nothing that shows that all human cultures across the world do.

Here Bill and I part company.  Yes, any given set of values is culturally specific—indeed, it has to be if it is to serve the community that embraces it.  But values per se are universal, at least among mammals.  Thus, when it comes to maternal values, for example, all mammals nurture their young.  That is a universal value.  It is not negotiable.  There is no viable human culture that does not commit to it.  Sadly, there was a famously horrific societal experiment conducted in Romania under the Ceaușescu regime, in which a generation of orphanage infants was not nurtured, and the results were predictably catastrophic.  The same holds for the other values I cite.  They all have different manifestations, but since they are mammalian in origin, and since all humans are mammals, they are universal.

Secondly, Moore’s measure of goodness never manages to distinguish itself from moral relativism.

That is because Moore does not want to distinguish goodness from moral relativism!  Moral absolutism, to my mind, causes much more societal damage than moral relativism.  This is my biggest beef with religion.  I love the values, but I do not support the absolutism.  It is a human invention that is designed to confer power onto selected individuals, too many of whom have shown a propensity to abuse that power.  

Thirdly, while all of this may be nice, and I would love for more people to aspire to many of the ideals that Moore provides, there is little to no argument as to why anyone must buy into it. There is nothing compelling enough to stop a genocidal maniac and his followers from truly believing that their race is better than others.

I agree with Bill.  Nothing I say is compelling enough to stop a genocidal maniac.  But there is more to it than that.  Implicit in his critique is the idea that a proper morality would supply such an argument, and this is where we really do part company.

The conventional view of morality is that it can be captured in a moral code, and that where that code is authorized, proper moral judgment is based on applying it to the situation at hand.  This implies that morality comes from above, a product of applying analytics to metaphysics to determine the right way to act.  My position is the exact opposite.  I claim that morality comes from below, from our mammalian heritage, and our moral sense is initially non-verbal, only later to be rationalized.  In other words, we know the difference between right and wrong not through the application of reason but rather through our intuitive assessment of behavior based on the social norms we were raised with.  Bill wants something more definitive than that, and I don’t blame him.  I just don’t think it exists.  Worse, when people insist on asserting such claims as absolute, they have the capacity to generate genocides of their own, genocides in the name of all that is holy.  It is a travesty, to be sure, but it is not an unfamiliar one.

Mortality driving behavior

In fact, Moore seems to undermine his own premise when bringing mortality into the equation.

Mortality, like immortality, sets the ultimate context for ethics. Whereas belief in immortality typically implies an ethic of ultimate obedience, belief in mortality typically aligns with a journey of self-realization.

Mortality is the ultimate statute of limitations on behavior. While this isn’t entirely true, since many people care about the future of the loved ones they will leave behind, I agree with the statement in general. Mortality reduces the consequences of one’s actions and therefore provides less urgency to fully comply with any ethical framework. Mortality does however seem to push people to think about what they will leave behind and to seek self-realization, a potentially selfish pursuit.

Bill and I do not see mortality through the same lens.  I do not see it as a statute of limitations on behavior.  I see it as the fundamental enabler of our existence.  Evolution is based on natural selection, which in turn is powered by mortality.  Without death, there is no evolution, hence no you, no me, nor anyone or anything else we love.

But set all that aside, and I still will argue that mortality is foundational to our identity.  Death makes life precious.  It also positions us in relation to an enterprise far greater than ourselves, one that precedes our arrival and will continue on long after we are gone.  Our identity is tied to our participation in this enterprise, enabled by the narratives which our culture has transmitted to us.  The only question is how we will enact our participation, and to what end.  That is where ethics comes in.

Self-realization is not selfish.  Ego-realization is selfish.  This is where mindfulness and meditation come in.  These practices allow consciousness to experience self as connected to a source of spiritual connection and refreshment, thereby giving the ego the energy and centeredness it needs to act ethically under challenging conditions. 

On the other hand, immortality doesn’t necessarily carry with it a moral imperative. For that to be the case, one must believe in some force that will ensure that all immoral behavior will eventually and inescapably lead to unwanted consequences.

Logically, this makes sense, but I have never heard of an immortality narrative that did not incorporate some form of divine judgment. 

Genes and memes

There is not much to say about Moore’s reference to Richard Dawkins’ concept of memes. Meme theory and memetics have been heavily criticised by experts from many different fields. For instance, Dr Luis Benitez-Bribiesca pointed to various differences between genes and memes including the high rate of mutation, the lack of a code script, and instability. For this reason memes cannot account for the observed emergence of common narratives and culture.

Sorry, but I am not willing to cede the field here.  My claim is that memes are like genes in the following ways:

- Both encode strategies for living that are communicated across generations, the one biologically, the other socially.
- Both are subject to natural selection, leading to the spread of increasingly successful strategies for living while weeding out the unsuccessful ones.
- Both are also subject to sexual selection, leading to the spread of increasingly appealing strategies for living while weeding out the unattractive ones.

Now, as to the objections raised by Dr. Benitez-Bribiesca, here are my replies:

- There is no clear-cut definition of a meme.  I claim that memes are properly defined as strategies for living that inform behaviors and are spread through imitation—something we can see emerging in social animals but made fully manifest with the arrival of language.
- Memes cannot be subjected to rigorous scientific investigation because they are too heterogeneous to study in a systematic way.  I claim that there are a number of widely disseminated academic disciplines devoted entirely to the study of memes, of which history, philosophy, and literary studies are three.
- The mutation of memes is unconstrained in ways that are so different from genes that the metaphor is not useful.  I agree that the mechanisms for transmission are quite different.  But both are ultimately selected for or against based on the behavior of the organism enacting the strategy.  To be sure, each individual embracing a given strategy will put their individual spin on it—that’s the source of mutation.  Some of these mutations will have more success either in propagating (winning the war of sexual selection) or succeeding (winning the war of natural selection)—that’s what leads to the evolution of increasingly complex and inclusive strategies for living, the sciences being a particularly impressive example thereof.
- Memetics is nothing more than pseudoscientific dogma encased in itself.  Frankly, this is just name-calling.  Still, to be fair, I agree that memetics is not a science, certainly not in the way that genetics is.  I believe, however, it can be useful to describe forces that act on human beings, to develop an understanding of the opportunities and threats they pose, and to propose tactics for capitalizing on the former and defending oneself against the latter.

Given the above, I think it is wrong to say that memes cannot account for the observed emergence of common narratives and culture.
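As an aside, the selection dynamics I describe in my third reply above can be made concrete with a small toy simulation—my own illustrative sketch, not anything from the book or the review.  Agents imitate peers in proportion to how well those peers’ strategies work out, and occasionally put their own “spin” on what they copy.  The strategy names and success weights below are entirely hypothetical.

```python
import random

# Hypothetical "strategies for living," each with an illustrative success weight.
# Higher weight means the behavior it produces tends to work out better for its carrier.
STRATEGIES = {"share_food": 1.3, "hoard_food": 1.0, "steal_food": 0.7}
MUTATION_RATE = 0.02  # small chance that imitation adds an individual "spin"

def one_generation(population: list[str], rng: random.Random) -> list[str]:
    """Each agent imitates a peer chosen in proportion to that peer's success."""
    weights = [STRATEGIES[s] for s in population]
    new_population = []
    for _ in population:
        model = rng.choices(population, weights=weights, k=1)[0]  # success-biased imitation
        if rng.random() < MUTATION_RATE:                          # occasional individual spin
            model = rng.choice(list(STRATEGIES))
        new_population.append(model)
    return new_population

def simulate(generations: int = 50, size: int = 200, seed: int = 1) -> dict[str, int]:
    """Run the imitation process and report how many agents carry each strategy."""
    rng = random.Random(seed)
    population = [rng.choice(list(STRATEGIES)) for _ in range(size)]
    for _ in range(generations):
        population = one_generation(population, rng)
    return {s: population.count(s) for s in STRATEGIES}

print(simulate())  # the higher-weight strategy typically comes to dominate
```

Run with different seeds, the higher-weight strategy usually takes over the population—which is the sense in which a strategy for living can be selected for even though nothing genetic is being transmitted.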

Epigenetics provides a more coherent description of how culture is inherited. Linguistics, complex systems theory and the works of Gilles Deleuze and Félix Guattari provide better explanations for how narratives propagate and influence culture.

I am not familiar with this body of work, but I expect I would find it congenial.  I am not seeking to proselytize memetics per se.  I do think memes are a useful vehicle for understanding the impact of narratives and analytics on human experience, and I take that to be the reason they generate so much analysis.

Religion fails as grounds for moral behavior

Peppered throughout the book there are jabs at religion’s alleged shortcomings in providing suitable grounds for ethics.

This is the only comment in this entire dialog that I truly take exception to.  I make the point several times throughout The Infinite Staircase that religion is indeed very well suited to authorize ethics and that committing to a religious tradition is a time-tested way to live an ethical life.  My issue is that I do not find the metaphysics embedded in religious narratives credible.  The story is compelling, but the evidence is missing.  On the other hand, I find secular metaphysics to be very credible.  Here there is a surprising amount of consensus across a broad range of evidence.  The challenge is that secular metaphysics do not authorize ethics anywhere near as clearly or effectively as religion does.  That’s why I wrote the book.  I hold that the function of ethics is to align human behavior with metaphysics, and I seek to build that connection as best I can.

Most of these arguments use The Great Chain of Being as a strawman.  Others are a weak form of the problem of evil that many have countered, including William Lane Craig and Tim Keller. A few are easily pushed aside by exploiting gaping holes in the Staircase.  For instance, Moore states that the Staircase removes the need for a God to explain the universe, yet he never proves that God could never be encountered at either end of the Infinite Staircase, nor does he answer the question of why the laws of physics, chemistry, biology etc. are the way they are and not something else. This question seems to be swept under the rug of emergentism, which is arguably insufficient. The theory of evolution had this problem for many years, because many traits seemed to have evolved too quickly. The concept of exaptation, the process by which a trait that evolved for one purpose is radically repurposed for another, was the key to solving this problem. Still, this does not solve the problem of why the constants that appear in the laws of physics are so fine-tuned. Small variations in these constants would not have permitted any order to arise in the universe. Many scientists have attempted to solve this, some by positing that our universe is one of many in a multiverse, each with a different set of physical laws. Also, while physics describes much of what happened since the Big Bang, it says nothing of what happened before. This was Stephen Hawking’s nemesis. After having shown that the universe started as a singularity, he spent a good part of the rest of his life coming up with various theories about what caused the Big Bang without relying on the supernatural.

The paragraph above, to my mind, is unnecessarily belligerent.  Removing the need for a God to explain the universe need not be taken as an attack on the validity of religious faith.  I would position it instead as opening up the possibility of exploring life from a radically different perspective.  I do not think it is possible to prove there is no God—how do you prove a negative?  We are better served if we seek to advance the case for whatever positive captures our allegiance.

As for why all the laws are as they are, and why key constants are tuned so precisely as to allow order to arise in the universe, I support the anthropic argument that says things have to be as they are or we would not be having this conversation.  As for the claim that evolution is flawed because many traits seemed to have evolved too quickly, that implies all innovation must unfold linearly, that course-correcting through exaptation is improbable, and that an exponential rate of change is not possible—all assertions that advocates of punctuated equilibrium would take exception to.  As for multiverses, I find one universe far more than I can comprehend—I have no interest in taking on more than one.  Ditto for what happened before the Big Bang—I don’t even have a placeholder for what caused that, nor do I believe we need one.

In fact, the existence of God and the veracity of the Bible form a very simple and coherent grounding for ethics. Goodness is based on God’s nature revealed by his actions and commands. Every human has fallen short, yet Jesus’ sacrifice and resurrection assure the salvation of all who believe. The only thing lacking is incontrovertible proof of the premises.

Despite all the disagreement Bill and I have about metaphysics, I support him in the above paragraph.  At the end of the day, the point is not who is right or wrong about the nature of the universe, the point is to live an ethical life.  This is no easy task, and we all fall short in one way or another.  We need spiritual support to keep on going.  Religious faith may lack incontrovertible proof of its premises, but it sure has a heck of a track record, and it would be foolish to repudiate it. 

Conclusion

All in all, the first two-thirds of Moore’s book are very interesting but the final third falls short. The construction of the Infinite Staircase is compelling. Apart from the reference to meme theory, there is a trove of information to be dug into. The inclusion of values, culture and narrative in the staircase is an important one. Narrative is a powerful, misunderstood and underused tool for corporate and societal change. John Seely Brown said that “we have moved from the age of enlightenment to the age of entanglement.” The turn towards deriving ethics falls flat. Many of the proposed ideals are commendable but remain wishful thinking and are culturally specific.

As a former academic, I am deeply committed to the marketplace of ideas and the kinds of dialog that make it work.  I am honored that Bill has taken my book seriously, and I hope this exchange will be of value to both of our readerships.

Geoffrey Moore

May 8, 2022

Testing for Truth

It is more than commonplace these days to lament that truth has become a casualty of our digital era.  It is also baloney.  There are clear tests for truth that have served humankind for centuries, and there is no reason to abandon them now.  We just need to bring them back into focus.

Among philosophers there are three broad schools of truth, each foregrounding a different attribute, as follows:

- The Correspondence theory of truth, which says that if a statement can be verified by a large number of observations undertaken by a diverse population of observers, then it is true.  Water boils at 100 degrees centigrade at sea level.
- The Coherence theory of truth, which says that if a statement is consistent with one’s time-tested system of beliefs, then it is true.  Pigs cannot fly.
- The Pragmatic theory of truth, which says that if a statement enables successful action in the world, then it is true.  Wikipedia is a trustworthy source of information.

Unfortunately, any one of these theories of truth can be co-opted by the forces of disinformation.  Thus, by selecting a small number of observations that have been pre-selected to back up your claim, you can assert correspondence truth because these facts do indeed correspond to the claim.  Similarly, by creating a conspiracy theory and recruiting people who are predisposed to want to believe it, you can assert coherence truth because your claims are indeed consistent with the theory.  And finally, if you are able to use whatever claims you make to get elected to public office, then you can assert pragmatic truth because you were indeed successful in winning the election.

It is much harder, on the other hand, to subvert an integrated understanding of truth that combines all three schools into one battery of tests:

[Figure: The-Test-for-Truth — Venn diagram of the Correspondence, Coherence, and Pragmatic theories of truth]

As represented by this Venn diagram, the area where all three circles overlap represents our best testing ground for truth.  That is, to be reliably true, a claim must be correspondent, coherent, and pragmatic—and not, which two do you want?
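To make the battery of tests concrete, here is a minimal sketch in Python—my own illustration, not anything from the post itself—that treats each school as a separate check and accepts a claim only when all three pass.  The Claim fields, the confirmation threshold, and the example values are hypothetical placeholders for the real-world judgments described above.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    statement: str
    # Hypothetical evidence fields standing in for real-world assessment.
    independent_confirmations: int = 0              # correspondence: diverse observations
    consistent_with_settled_beliefs: bool = False   # coherence: fits time-tested beliefs
    enabled_successful_action: bool = False         # pragmatic: acting on it works out

def is_correspondent(claim: Claim, threshold: int = 10) -> bool:
    """Correspondence test: enough independent, diverse observations back the claim."""
    return claim.independent_confirmations >= threshold

def is_coherent(claim: Claim) -> bool:
    """Coherence test: the claim fits one's time-tested system of beliefs."""
    return claim.consistent_with_settled_beliefs

def is_pragmatic(claim: Claim) -> bool:
    """Pragmatic test: acting on the claim has led to successful outcomes."""
    return claim.enabled_successful_action

def is_reliably_true(claim: Claim) -> bool:
    """The integrated test: all three schools must agree, not just any two."""
    return is_correspondent(claim) and is_coherent(claim) and is_pragmatic(claim)

# Example: a well-verified physical claim passes all three tests.
boiling = Claim(
    statement="Water boils at 100 degrees centigrade at sea level.",
    independent_confirmations=1000,
    consistent_with_settled_beliefs=True,
    enabled_successful_action=True,
)
print(is_reliably_true(boiling))  # True
```

A claim that passes only one check—coherent with a favored narrative, say, but neither correspondent nor pragmatic—fails the integrated test, which is exactly the pattern in the two examples that follow.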

Let us apply this test to the claim that the 2020 election was stolen from Donald Trump.  For people who believe in the relevant conspiracy theories, this claim is completely coherent.  However, it is neither correspondent nor pragmatic.  That is, there has been little if any credible evidence produced in support of the claim, and over fifty state and federal judges have dismissed lawsuits filed in support of it.  We can say with confidence therefore that the claim is untrue.

A similar approach can be taken to claims that the Covid vaccine is too dangerous to be used.  Again, this is based on a coherence theory of truth anchored in political or metaphysical narratives that are deeply compelling to its proponents.  They believe this to be true, and they are acting accordingly.  However, from a correspondence theory perspective, after two and a half years, there is overwhelming evidence that the vaccine is safe to administer.  And from a pragmatic perspective, there is tragic mortality data testifying to the fates of those who were not vaccinated.  Again, we can say with confidence that the claim that the vaccine is too dangerous to use is untrue.

Truth, however, can be decidedly unpopular, so there will always be strong social pressures to acquiesce to alternative claims, which, while untrue, are more palatable.  Such pressures play into the hands of the autocratic and the righteous across the entirety of the political spectrum.  This is not a new challenge.  Demagogues and dictators have played this game throughout history.  And history teaches us that allowing such people to exploit their narratives without contradiction undermines the rule of law and the foundations of liberal democracy.  Today, as Americans, we are privileged to live under rule of law, but much as we would like it to be, that does not make it an entitlement.  Rather it is a freedom we must commit to preserving.  As citizens, therefore, regardless of party or persuasion, we must make ourselves competent in the tests for truth and ensure that our children are well trained in them as well.  That’s what I think.  What do you think?

April 27, 2022

Some Inflection Points in the Philosophy of Mind

This post, like those that precede it, is based on reacting to an article that one way or another has captured my imagination.  In this case, the article was actually about Artificial Intelligence and whether it could really be called intelligence or not.  What interested me, however, was the way it organized itself around a set of philosophical positions that evolved historically.  What I have done, therefore, is to cut and paste those bits in order to create a context for discussing my own philosophy of mind.  This is, of course, totally unfair to the authors of the article, hence I post a link at the end where you can go read their work for its own sake.  For the time being, however, please bear with my approach, at least long enough to see if you think it bears any fruit.

The 17th-century French philosopher René Descartes was combating materialism, which explains the world, and everything in it, as entirely made up of matter.[2] Descartes separated the mind and body to create a neutral space to discuss nonmaterial substances like consciousness, the soul, and even God. This philosophy of the mind was named cartesian dualism.[3]

Dualism argues that the body and mind are not one thing but separate and opposite things made of different matter that inexplicably interact.[4] Descartes’s methodology was to doubt everything, even his own body, in favor of his thoughts, in order to find something “indubitable,” which he encapsulated in his famous dictum Cogito, ergo sum: I think, therefore I am.

Where Descartes gets off track is in believing his mind to be independent of his social history.  We know from the examples of feral children that without socialization, there is no language, there are no narratives, there are no analytics, and hence there is no mind.  Because he was using words to communicate, words that he did not invent but were taught to him by his mother and others, what Descartes should have said was, I think, therefore we are. 

Mind, in other words, emerges from brain under conditions of socialization.  As described in The Infinite Staircase, consciousness comes into being with brain, whereas mind comes into being with language and narrative.  Even though our mental experience is private, our mind is not.  This touches on our very identity, which can never be completely independent of our social situation.  There simply can be no mind, and no self, without social underpinnings.  Thus it is that mind can never be reduced to brain, any more than life can be reduced to a handful of elements from the Periodic Table.

It wasn’t until the early 20th century that dualism was legitimately challenged.[6][7] So-called behaviorism argued that mental states could be reduced to physical states, which were nothing more than behavior.[8] Aside from the reductionism that results from treating humans as behaviors, the issue with behaviorism is that it ignores mental phenomena and explains the brain’s activity as producing a collection of behaviors that can only be observed. Concepts like thought, intelligence, feelings, beliefs, desires, and even hereditary genetics are eliminated in favor of environmental stimuli and behavioral responses.

Consequently, one can never use behaviorism to explain mental phenomena since the focus is on external observable behavior. Philosophers like to joke about two behaviorists evaluating their performance after sex: “It was great for you, how was it for me?” says one to the other.[9][10] By concentrating on the observable behavior of the body and not the origin of the behavior in the brain, behaviorism became less and less a source of knowledge about intelligence.

It is easy to patronize behaviorism, but we learn nothing by so doing.  Instead, we should note first that behaviorism arose from a desire in the social sciences to become more like the physical sciences.  It was specifically interested in coopting the latter’s ability to apply mathematics to measurable data as a means for expanding the domain of verifiable knowledge.  That desire itself was prompted by a prior era’s unacknowledged reliance on narratives and anecdotes, especially in psychology, ethics, and metaphysics, all of which could be compelling, none of which could be conclusively tested.  In such situations, research cannot build reliably upon prior findings, and so knowledge can only develop through dialectical disagreement as opposed to linear extrapolation.  Thus, Newton could build upon Copernicus in ways that Aristotle could never build upon Plato.

The fatal flaw in behaviorism is that it is impossible to explain human behavior without reference to narrative.  All societies convert their competence in language into storytelling, and the stories they tell are core to everyone’s self-understanding.  These stories are simply too central to human affairs to  eliminate as relevant data.  To be fair, they may not be reliable sources of data about the topics they address, so instead, we need to treat them as data in and of themselves.  That is, they may not accurately represent the phenomena they purport to describe, but they are nonetheless a force in the world they occupy, and for that reason, we need to include them if we are to gain a full understanding of what is going on around us.

Behaviorism’s decline in influence resulted directly from its inability to explain intelligence. It was displaced by a reorientation of psychology toward the brain dubbed the cognitive revolution. The revolution produced modern cognitive science, and functionalism became the new dominant theory of the mind. Functionalism views intelligence (i.e., mental phenomena) as the brain’s functional organization, in which individuated functions like language and vision are understood by their causal roles.

Unlike behaviorism, functionalism focuses on what the brain does and where brain function happens.[16] However, functionalism is not interested in how something works or whether it is made of the same material. It doesn’t care if the thing that thinks is a brain or if that brain has a body. If it functions like intelligence, it is intelligent, just as anything that tells time is a clock. It doesn’t matter what the clock is made of as long as it keeps time.

Functional organization is an important step forward because it aligns both brain and mind with strategies for living.  All living beings manifest strategies for living—that’s what unites us all.  One of the functions of language is that it enables us humans, through the mechanisms of narratives and analytics, to explore strategic possibilities, both in fictional and historical terms, and to critique them through analytics, using natural language and/or mathematics.  It also allows us to engage and enlist others in our undertakings by communicating, both intellectually and emotionally, the strategic rewards of so doing.  Functional organization, in this context, bridges the gap between private mental experience and public social interactions, both of which are required for intelligence to be operational in the world. 
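For readers who like to see an idea in code, here is a minimal sketch of the clock analogy, what philosophers call multiple realizability.  The class and function names are mine and purely illustrative, not drawn from the article.

```python
# Multiple realizability in miniature: anything that fills the causal role
# "tells time" counts as a clock, regardless of what it is made of.
from typing import Protocol
import time


class Clock(Protocol):
    def current_hour(self) -> int:
        """Return the current hour, 0-23."""
        ...


class QuartzClock:
    """Realizes the role electronically, via the system oscillator."""
    def current_hour(self) -> int:
        return time.localtime().tm_hour


class SundialClock:
    """Realizes the same role with shadows; crude, but still a clock."""
    def __init__(self, shadow_angle_degrees: float) -> None:
        self.shadow_angle_degrees = shadow_angle_degrees

    def current_hour(self) -> int:
        # Roughly 15 degrees of sun movement per hour, noon at 0 degrees.
        return int(12 + self.shadow_angle_degrees / 15) % 24


def report(clock: Clock) -> str:
    # The caller cares only about the functional role, not the substrate.
    return f"The hour is {clock.current_hour()}"


print(report(QuartzClock()))
print(report(SundialClock(shadow_angle_degrees=45.0)))
```

The functionalist move is to treat mental states the same way: whatever plays the causal role counts, no matter what material realizes it.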

The American philosopher and computer scientist Hilary Putnam, in Psychological Predicates, extended functionalism with computational concepts to form computational functionalism.[17][18] Computationalism, for short, views the mental world as grounded in a physical system (i.e., computer) using concepts such as information, computation (i.e., thinking), memory (i.e., storage), and feedback.[19][20][21] Today, artificial intelligence research relies heavily on computational functionalism, where intelligence is organized by functions such as computer vision and natural language processing and explained in computational terms.

Unfortunately, functions do not think. They are aspects of thought. The issue with functionalism — aside from the reductionism that results from treating thinking as a collection of functions (and humans as brains) — is that it ignores thinking. While the brain has localized functions with input–output pairs (e.g., perception) that can be represented as a physical system inside a computer, thinking is not a loose collection of localized functions.

The meaning of the word think in the paragraph above is unclear, and as a result, the rest of the paragraph does not lead to any useful conclusion.  But there is another dimension of computational functionalism that is important to note: it is entirely analytic.  Indeed, the whole of both AI and machine learning is entirely analytic.  We need to ask ourselves, what might that be leaving out?

The short answer is, biochemistry.  The analytic model of mental activity, with its embedded metaphor of computers and computing, is based entirely on electronic signaling.  There are no hormones at work.  But among living things, long before there were any nerves to carry sensory information, there were chemical signals that served the same end.  Such signals circulate from cell to cell, activating receptors that, in turn, generate some response out of which behavior emerges, something we see every day in the way flowers turn to the sun or the way sleepiness takes over our minds. 

Now, to be fair, you won’t survive very long as a mobile animal if you don’t have an electric nervous system, so we are talking about an and here, not an or.  But it is an incredibly important and because biochemical signals are the mechanism by which homeostasis is maintained.  Homeostasis is what the AI/ML models leave out.  It is a big miss.

Homeostasis is a condition of equilibrium in which an organism is best fit to operate.  All living things seek homeostasis all the time.  Thus, both emotionally and intellectually, we humans are wired to seek it, and it shapes everything we do.  Strategies for living, in this context, are simply mechanisms for achieving a more desirable condition of equilibrium, one in which we are better fit to operate.  We can experience this phenomenon as desire or need or fear or hope, but by way of one emotion or another, we will be motivated to act.  That is what separates human intelligence from all forms of computational functionalism.
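Since I am arguing that homeostasis is the big miss, let me sketch what I mean in a few lines of toy code.  This is not a model of any real organism (the set point, gains, and threshold are made-up numbers); it is just an illustration of a feedback loop that is always working to restore equilibrium, with a slow chemical channel supplemented by a fast electrical one.

```python
# A toy homeostat: one internal variable nudged back toward a set point.
# The "chemical" channel is slow and graded; the "electrical" channel is
# fast and kicks in only when the deviation is large. Numbers are made up.

SET_POINT = 37.0            # desired equilibrium (think body temperature)
CHEMICAL_GAIN = 0.1         # slow, hormone-like correction per tick
ELECTRICAL_GAIN = 0.6       # fast, nerve-like correction per tick
ELECTRICAL_THRESHOLD = 2.0  # only large deviations trigger the fast channel


def step(state: float, disturbance: float) -> float:
    """Advance the organism one tick: absorb the disturbance, then correct."""
    state += disturbance
    error = SET_POINT - state
    correction = CHEMICAL_GAIN * error      # always-on chemical signaling
    if abs(error) > ELECTRICAL_THRESHOLD:   # fast path for emergencies
        correction += ELECTRICAL_GAIN * error
    return state + correction


state = SET_POINT
for tick, shock in enumerate([0.0, 5.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0]):
    state = step(state, shock)
    print(f"tick {tick}: state = {state:.2f}")
```

The loop never computes for its own sake; every signal is in service to getting back to the set point.  That motivational dimension is exactly what a purely analytic model leaves out.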

John Searle’s famous Chinese Room thought experiment is one of the strongest attacks on computational functionalism. The philosopher and former professor at the University of California, Berkeley, thought it impossible to build an intelligent computer because intelligence is a biological phenomenon that presupposes a thinker who has consciousness. This argument runs counter to functionalism, which treats intelligence as realizable by anything that can mimic the causal roles of specific mental states with computational processes.

If you are not familiar with this thought experiment, click on the URL in the paragraph above before reading further.  While Searle’s argument is correct as far as it goes, it is misleading because it does not honor the idea that a competence achieved without understanding could convert to true understanding at some later date.  That dismissal is contradicted by many commonplace experiences.  Think of the way you learn multiplication tables.  You don’t need to understand why 8 x 6 = 48.  You just have to memorize that it does.  But sometime later on you do realize that a rectangle that is 8 inches on one side and 6 inches on the other actually does contain 48 square inches, and the world becomes a little less mysterious.  Now, to be fair to Searle, this may be exactly his point, that the two are not the same, and I, of course, would agree.  But I would argue that the two are connected more closely than he implies.  And that leads me to believe that a computationally functional system might be able to evolve future understanding from current competence by hypothesizing strategies and testing to see if they can be fulfilled by using the mechanisms it has already mastered.  At minimum, I think AI and ML will eventually want to develop a philosophy of mind in order to take their discipline to the next level.
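To make the multiplication-table point concrete, here is a toy contrast, in illustrative code of my own, between rote competence (a lookup table) and understanding that can generalize (a model of why the answer is what it is).

```python
# Competence without understanding: a memorized multiplication table.
# Understanding: a model (counting unit squares in a rectangle) that also
# explains the answer and generalizes beyond what was memorized.

memorized = {(8, 6): 48, (7, 7): 49, (9, 6): 54}   # rote competence


def recall(a: int, b: int) -> int:
    """Chinese Room style: look the answer up without knowing why."""
    return memorized[(a, b)]


def area(a: int, b: int) -> int:
    """Count the unit squares in an a-by-b rectangle: why 8 x 6 is 48."""
    return sum(1 for _ in range(a) for _ in range(b))


print(recall(8, 6))   # 48, but only for pairs we happened to memorize
print(area(8, 6))     # 48, derived from a model of the answer
print(area(13, 17))   # 221, which rote recall could never produce
```

My hunch, stated above, is that a system could bootstrap from the first kind of knowing toward the second by hypothesizing and testing, rather than remaining forever stuck in the lookup table.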

That’s what I think.  What do you think?  P.S. You can read the original article here.

April 6, 2022

The Hard Problem—It’s Not That Hard

We human beings like to believe we are special—and we are, but not as special as we might like to think.  One manifestation of our need to be exceptional is the way we privilege our experience of consciousness.  This has led to a raft of philosophizing which can be organized around David Chalmers’ formulation of “the hard problem.”

In case this is a new phrase for you, here is some context from our friends at Wikipedia:


. . .even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?


— David Chalmers, Facing Up to the Problem of Consciousness


The problem of consciousness, Chalmers argues, is two problems: the easy problems and the hard problem. The easy problems may include how sensory systems work, how such data is processed in the brain, how that data influences behavior or verbal reports, the neural basis of thought and emotion, and so on. The hard problem is the problem of why and how those processes are accompanied by experience. It may further include the question of why these processes are accompanied by that particular experience rather than another experience.[20]

The key word here is experience.  It emerges out of cognitive processes, but it is not completely reducible to them.  For anyone who has read much in the field of complexity, this should not come as a surprise.  All complex systems share the phenomenon of higher orders of organization emerging out of lower orders, as seen in the frequently used example of how cells, tissues, organs, and organisms all interrelate.  Experience is just the next level.

The notion that explaining experience is a hard problem comes from locating it at the wrong level of emergence.  Materialists place it too low—they argue it is reducible to physical phenomena, which is simply another way of denying that emergence is a meaningful construct.  Shakespeare is reducible to quantum effects?  Good luck with that. 

Most people’s problem with explaining experience, on the other hand, is that they place it too high.  They want to use their own personal experience as a grounding point.  The problem is that our personal experience of consciousness is deeply inflected by our immersion in language, but it is clear that experience precedes language acquisition, as we see in our infants as well as our pets.  Philosophers call such experiences qualia, and they attribute all sorts of ineluctable and mysterious qualities to them.  But there is a much better way to understand what qualia really are—namely, the prelinguistic mind’s predecessor to ideas.  That is, they are representations of reality that confer strategic advantage on the organism that can host and act upon them.  Experience in this context is the ability to detect, attend to, learn from, and respond to signals from our environment, whether they be externally or internally generated.  Experiences are what we remember.  That is why they are so important to us.

Now, as language-enabled humans, we verbalize these experiences constantly, which is what leads us to locate them higher up in the order of emergence, after language itself has emerged.  Of course, we do have experiences with language directly—lots of them.  But we need to acknowledge that our identity as experiencers is not dependent upon, indeed precedes our acquisition of, language capability. 

With this framework in mind, let’s revisit some of the formulations of the hard problem to see if we can’t nip them in the bud.

The hard problem of consciousness is the problem of explaining why and how we have qualia or phenomenal experiences.  Our explanation is that qualia are mental abstractions of phenomenal experiences that, when remembered and acted upon, confer strategic advantage on organisms under conditions of natural and sexual selection.  Prior to the emergence of brains, “remembering and acting upon” is a function of chemical signals activating organisms to alter their behavior and, over time, to privilege tendencies that reinforce survival.  Once brain emerges, chemical signaling is supplemented by electrical signaling to the same ends.  There is no magic here, only a change of medium.

Annaka Harris poses the hard problem as the question of “how experience arise[s] out of non-sentient matter.”  The answer to this question is, “level by level.”  First sentience has to emerge from non-sentience.  That happens with the emergence of life at the cellular level.  Then sentience has to spread beyond the cell.  That happens when chemical signaling enables cellular communication.  Then sentience has to speed up to enable mobile life.  That happens when electrical signaling enabled by nerves supplements chemical signaling enabled by circulatory systems.  Then signaling has to complexify into meta-signaling, the aggregation of signals into qualia, remembered as experiences.  Again, no miracles required.

Others, such as Daniel Dennett and Patricia Churchland, believe that the hard problem is really more of a collection of easy problems that will be solved through further analysis of the brain and behavior.  If so, it will be through the lens of emergence, not through the mechanics of reductive materialism.

Consciousness is an ambiguous term. It can be used to mean self-consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel’s definition of consciousness: the feeling of what it is like to be something. Consciousness, in this sense, is synonymous with experience.  Now we are in the language-inflected zone where we are going to get consciousness wrong because we are entangling it in levels of emergence that come later.  Specifically, to experience anything as like anything else is not possible without the intervention of language.  That is, likeness is not a quale; it is a language-enabled idea.  Thus, when Thomas Nagel famously asked, “What is it like to be a bat?” he is posing a question that has meaning only for humans, never for bats.

Going back to the first sentence above, self-consciousness is another concept that has been language-inflected, in that only human beings have selves.  Selves, in other words, are creations of language.  More specifically, our selves are characters embedded in narratives, and we use both the narratives and the character profiles to organize our lives.  This is a completely language-dependent undertaking and thus not available to pets or infants.  Our infants are self-sentient, but it is not until the little darlings learn language, hear stories, and then hear stories about themselves that they become conscious of their own selves as separate and distinct from other selves.

On the other hand, if we use the definitions of consciousness as synonymous with awareness or being awake, then we are exactly at the right level because both those capabilities are the symptoms of, and thus synonymous with, the emergence of consciousness. 

Chalmers argues that experience is more than the sum of its parts.  In other words, experience is irreducible.  Yes, but let’s not be mysterious here.  Experience emerges from the sum of its parts, just like any other layer of reality emerges from its component elements.  To say something is irreducible does not mean that it is unexplainable.

Wolfgang Fasching argues that the hard problem is not about qualia, but about the pure what-it-is-like-ness of experience in Nagel’s sense, about the very givenness of any phenomenal contents itself:

Today there is a strong tendency to simply equate consciousness with qualia. Yet there is clearly something not quite right about this. The “itchiness of itches” and the “hurtfulness of pain” are qualities we are conscious of. So, philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore, the problem of consciousness does not pertain so much to some alleged “mysterious, nonpublic objects”, i.e. objects that seem to be only “visible” to the respective subject, but rather to the nature of “seeing” itself (and in today’s philosophy of mind astonishingly little is said about the latter).

Once again, we are melding consciousness and language together when, to be accurate, we must continue to keep them separate.  In this case, the dangerous phrase is “the nature of seeing.”  There is nothing mysterious about seeing in the non-metaphorical sense, but that is not how the word is being used here.  Instead, “seeing” is standing for “understanding” or “getting” or “grokking” (if you are nerdy enough to know Robert Heinlein’s Stranger in a Strange Land).  Now, I think it is reasonable to assert that animals “grok” if by that we mean that they can reliably respond to environmental signals with strategic behaviors.  But anything more than that requires the intervention of language, and that ends up locating consciousness per se at the wrong level of emergence.

OK, that’s enough from me.  I don’t think I’ve exhausted the topic, so let me close by saying: That’s what I think—what do you think?

March 14, 2022

Free Will

Free will means just what you think it means, despite all the poppycock you may sometimes hear to the contrary.  That poppycock comes in a number of forms.  The simplest is based on materialist reductionism.  In this worldview, we inhabit a clockwork universe in which all interactions obey the laws of physics and are thus deterministic, every effect being the result of a prior cause, all present reality being therefore predictable as the outcome of an unbroken chain of cause and effect, all the way back to the Big Bang. 

This is simply wrong.  No effect is the result of a single cause.  None.  Not one.  Nada.  There is no “chain” of cause and effect.  There is no “clockwork universe.”  Every outcome emerges from a synthesis of multiple causes interacting in non-deterministic ways, following the laws of chaos.  These laws do support certain kinds of predictability, notably those associated with entropy and statistical probability, but Heraclitus had it right—you cannot step into the same river twice.  (In fact, if you want to get really Zen, you cannot step into the same river once.)
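For those who want a concrete taste of what the laws of chaos do to predictability, here is the textbook logistic map in a few lines of illustrative code (the starting values are arbitrary, and I am not claiming this toy captures the richness of real causation).  The rule is simple and fixed, yet two trajectories that differ only in the sixth decimal place soon bear no resemblance to one another.

```python
# The textbook logistic map, x -> r * x * (1 - x): a simple, fixed rule
# whose trajectories nonetheless diverge from almost-identical starts.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 30) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs


a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # differs in the sixth decimal place

for t in (0, 10, 20, 30):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}")
```

Whatever one concludes about determinism itself, long-range prediction of systems like this is a lost cause, which is the spirit of the Heraclitus point above.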

To put this in its proper context, the universe is better understood as a hierarchy of systems, each emerging out of the system layers below, each supporting the system layers above.  This is the worldview described in The Infinite Staircase, which segments that hierarchy into three major zones, each governed by a different forcing function.  The bottom layers represent the physical basis of all life as described by the disciplines of physics, chemistry, and biology.  Here indeed materialism reigns, the governing force being entropy.  Atop this zone emerges a second set of system layers organized around the emergence of consciousness, born in service to desire, and enabling the development of values and cultures, all prior to the arrival of humanity.  In this middle zone evolution reigns, the governing forces being natural and sexual selection.  Finally, the highest layers of the staircase are organized into a third zone around the emergence of language, which in turn enables narrative, analytics, and theory.  Here the forcing function is an idealist version of natural and sexual selection applied to the evolution of stories and ideas, or what we call memes.

Human life unfolds across all levels of this staircase.  So, yes, we are subject to the laws of physics, chemistry, and biology.  But these are not deterministic because they interoperate with the laws of desire, consciousness, values, and culture, which in turn interoperate with the laws of language, narrative, analytics, and theory.  Daily living synthesizes operations across all these levels.  It is inherently both complex and dynamic, a world apart from anything clockwork.  In other words, we are river-rafting our way through life, not playing a game of billiards.

When we apply the framework of emergent complexity to the domain of free will, we can begin by positioning our brain at the levels of physics, chemistry, biology, our will at the levels of desire, consciousness, values, and culture, and our mind at the levels of language, narrative, analytics, and theory.  Will emanates from our brain under the biological influence of all kinds of hormonal signaling, manifesting itself as desire, stimulating consciousness to take action.  This is not something we can control, so we are not responsible for our desires per se.  Free will emerges in conjunction with mind.  What we can control, in other words, is how we process our desires through all the higher levels in the staircase and how we act in relation to them.  That is what it means to exercise free will.

But we need to be careful with this word free.  It should not be taken to mean unconditional or unconstrained.  Each of us is born into a life situation we did not choose and could not influence, but one that has influenced us deeply in so many ways.  If we were born into a happy and safe home and raised to love and respect others, how normal can we take our virtues to be?  Or, if we were born into dark circumstances, and those circumstances subsequently led us to perform dark deeds, how much responsibility should we bear for our actions?  These questions are at the core of a dilemma Western liberal democracies are all struggling to address.  How do we balance empathy and accountability?

Historically in the US, Democrats have been the party of empathy, and Republicans the party of accountability, and the interplay between the two has allowed us to get on with things reasonably well.  But in the present era Trumpism has thrown a spanner into the works.  Trumpism advocates repudiating both empathy and accountability.  That is its greatest appeal.  It liberates its proponents from the personal and social challenges of being adult. 

Now, we need to note we are no longer talking about free will.  We are talking about the human condition.  Our options in life are both empowered by and curtailed by our individual circumstances.  Nonetheless, as long as we are conscious and not under duress, we have the option to choose from within the available alternatives the course of action we will pursue.  When we act on that choice, we know there will be consequences, although we can never be sure what they will be.  We hope for the best, and sometimes that’s what we get, and all is good.  At other times, however, things do not turn out well, sometimes through no fault of our own.  In such a condition, rather than repudiating empathy and accountability, it is critical that we embrace them both and cope with the internal conflicts that inevitably arise.  Coping is the ultimate in adult behavior.  Often it is the best that free will can do, and frankly, it’s not all that bad.  We don’t need perfect outcomes.  We just need to keep on keeping on.

That still leaves us, however, needing a better understanding of the experience of free will and the degrees of freedom it entails.  Free will is experienced through language, primarily at the level of narrative, secondarily at the level of analytics.  In this context, we channel our will by imagining narratives about ourselves and our circumstances and then analyzing our options within that context. 

For example, suppose we are thinking about buying a car.  First of all, how would we use it?  Is there a narrative about commuting, or carrying around kids, or going on a trip, or becoming an Uber driver?  There may be more than one narrative, of course, but there will never be no narrative.  Narratives are the vehicle by which we encounter the present moving into the future and seek to shape the latter to our ends.

Continuing with this process, we might imagine what kind of car it might be.  What color?  What style?  This will have a lot to do with the image we want the car to represent.  Can this car really be us?  How would we feel about driving it around?  What would our friends think?  Our spouse?  Our children?  Part of living out narratives, it turns out, is needing to stay in character.  We all have a lot of history invested in being ourselves.  We have identities and values to maintain, both for our own sake and to keep faith with others.  Will this car reinforce our identity and values, or will it depart from them?  And which do we want?

All these questions represent the imagination exploring the possibilities of free will through the mechanism of narrative as it interacts with our identity and values.  At some point, we will likely pause to engage the analytics side of our house.  This will be the voice of reason speaking.  Can we afford this car?  Is it the right size for our lifestyle?  Will it be comfortable for others riding in it?  What kind of mileage will it get?  Where will we park it?  Should it be an electric vehicle for the betterment of the planet?  If so, how would we recharge it?  Does it have to be a new car?  Should we lease it or buy it?

Reason represents the reality principle engaging with imagination’s fantasizing in order to converge on a choice that is both satisfying and realistic.  As we do so, we will likely add some more dimensions to this scenario.  Is this our decision to make, or does it need to be negotiated with significant others?  What might they want in a car?  Do they support the identity themes and values implied by our choice?  Are there features they feel this car simply must have, or alternatively, ones they could not possibly abide? 

In sum, free will involves an ongoing interplay of narrative and analytics in both the psychological and social realms.  It is neither pure fantasy, nor is it pure reasoning.  It is more like our adult ego mediating between the impulses of our childlike id and the cautioning of our parental superego. 

This same dynamic plays out in darker situations as well, but here the relationships are distorted by fear, anger, hatred, and the like.  This empowers the id, disempowers the superego, and puts the ego in a bind.  Imagined scenarios highlight moral wrongs and spawn fantasies of revenge, while reason, far from providing a counterbalance, gets co-opted into planning tactics.  We are still in the domain of free will—nothing is predetermined—but formidable forces are operating both within and around us, and they are not all under our control.  How accountable can we be in such circumstances?  How much empathy do we warrant?

These are questions that are not resolvable by any fixed formula.  Legal justice depends upon accountability.  Social justice depends upon empathy.  Both are necessary.  Without legal justice, there can be no social justice.  Without social justice, the fabric of society will tear itself apart.  This has to be an and.  It cannot be an or.  We cannot afford to polarize the conflict between accountability and empathy.  We must step up to it.  That is the ultimate exercise of free will. That’s what I think.  What do you think?

 •  0 comments  •  flag
Share on Twitter
Published on March 14, 2022 18:19

Free Will

Free will means just what you think it means, despite all the poppycock you may sometimes hear to the contrary.  That poppycock comes in a number of forms.  The simplest is based on materialist reductionism.  In this worldview, we inhabit a clockwork universe in which all interactions obey the laws of physics and are thus deterministic, every effect being the result of a prior cause, all present reality being therefore predictable as the outcome of an unbroken chain of cause and effect, all the way back to the Big Bang. 

This is simply wrong.  No effect is the result of a single cause.  None.  Not one.  Nada.  There is no “chain” of cause and effect.  There is no “clockwork universe.”  Every outcome emerges from a synthesis of multiple causes interacting in non-deterministic ways, following the laws of chaos.  These laws do support certain kinds of predictability, notably those associated with entropy and statistical probability, but Heraclitus had it right—you cannot step into the same river twice.  (In fact, if you want to get really Zen, you cannot step into the same river once.)

To put this in its proper context, the universe is better understood as a hierarchy of systems, each emerging out of the system layers below, each supporting the system layers above.  This is the worldview described in The Infinite Staircase, which segments that hierarchy into three major zones, each governed by a different forcing function.  The bottom layers represent the physical basis of all life as described by the disciplines of physics, chemistry, and biology.  Here indeed materialism reigns, the governing force being entropy.  Atop this zone emerges a second set of system layers organized around the emergence of consciousness, born in service to desire, and enabling the development of values and cultures, all prior to the arrival of humanity.  In this middle zone evolution reigns, the governing forces being natural and sexual selection.  Finally, the highest layers of the staircase are organized into a third zone around the emergence of language, which in turn enables narrative, analytics, and theory.  Here the forcing function is an idealist version of natural and sexual selection applied to the evolution of stories and ideas, or what we called memes.

Human life unfolds across all levels of this staircase.  So, yes, we are subject to the laws of physics, chemistry, and biology.  But these are not deterministic because they interoperate with the laws of desire, consciousness, values, and culture, which in turn interoperate with the laws of language, narrative, analytics, and theory.  Daily living synthesizes operations across all these levels.  It is inherently both complex and dynamic, a world apart from anything clockwork.  In other words, we are river-rafting our way through life, not playing a game of billiards.

When we apply the framework of emergent complexity to the domain of free will, we can begin by positioning our brain at the levels of physics, chemistry, biology, our will at the levels of desire, consciousness, values, and culture, and our mind at the levels of language, narrative, analytics, and theory.  Will emanates from our brain under the biological influence of all kinds of hormonal signaling, manifesting itself as desire, stimulating consciousness to take action.  This is not something we can control, so we are not responsible for our desires per se.  Free will emerges in conjunction with mind.  What we can control, in other words, is how we process our desires through all the higher levels in the staircase and how we act in relation to them.  That is what it means to exercise free will.

But we need to be careful with this word free.  It should not be taken to mean unconditional or unconstrained.  Each of us is born into a life situation we did not choose and could not influence, but one that has influenced us deeply in so many ways.  If we were born into a happy and safe home and raised to love and respect others, how normal can we take our virtues to be?  Or, if we were born into dark circumstances, and those circumstances subsequently led us to perform dark deeds, how much responsibility should we bear for our actions?  These questions are at the core of a dilemma Western liberal democracies are all struggling to address.  How do we balance empathy and accountability?

Historically in the US, Democrats have been the party of empathy, and Republicans the party of accountability, and the interplay between the two has allowed us to get on with things reasonably well.  But in the present era Trumpism has thrown a spanner into the works.  Trumpism advocates repudiating both empathy and accountability.  That is its greatest appeal.  It liberates its proponents from the personal and social challenges of being adult. 

Now, we need to note we are no longer talking about free will.  We are talking about the human condition.  Our options in life are both empowered by and curtailed by our individual circumstances.  Nonetheless, as long as we are conscious and not under duress, we have the option to choose from within the available alternatives the course of action we will pursue.  When we act on that choice, we know there will be consequences although we can never be sure what they will be.  We hope for the best, and sometimes that’s what we get, and all is good.  At other times, however, things do not turn out well, sometimes where it is not even our own fault.  In such a condition, rather than repudiating empathy and accountability, it is critical that we embrace them both and cope with the internal conflicts that inevitably arise.  Coping is the ultimate in adult behavior. Often it is the best that free will can do, and frankly, it’s not all that bad.  We don’t need perfect outcomes.  We just need to keep on keeping on. 

That still leaves us, however, needing a better understanding of the experience of free will and the degrees of freedom it entails.  Free will is experienced through language, primarily at the level of narrative, secondarily at the level of analytics.  We channel our will by imagining narratives about ourselves and our circumstances and then analyzing our options within those narratives.

For example, suppose we are thinking about buying a car.  First of all, how would we use it?  Is there a narrative about commuting, or carrying around kids, or going on a trip, or becoming an Uber driver?  There may be more than one narrative, of course, but there will never be no narrative.  Narratives are the vehicle by which we encounter the present as it moves into the future, and by which we seek to shape that future to our ends.

Continuing with this process, we might go on to imagine what kind of car it might be.  What color?  What style?  This will have a lot to do with the image we want the car to project.  Can this car really be us?  How would we feel about driving it around?  What would our friends think?  Our spouse?  Our children?  Part of living out narratives, it turns out, is needing to stay in character.  We all have a lot of history invested in being ourselves.  We have identities and values to maintain, both for our own sake and to keep faith with others.  Will this car reinforce our identity and values, or will it depart from them?  And which do we want?

All these questions represent the imagination exploring the possibilities of free will through the mechanism of narrative as it interacts with our identity and values.  At some point, we will likely pause to engage the analytics side of our house.  This will be the voice of reason speaking.  Can we afford this car?  Is it the right size for our lifestyle?  Will it be comfortable for others riding in it?  What kind of mileage will it get?  Where will we park it?  Should it be an electric vehicle for the betterment of the planet?  If so, how would we recharge it?  Does it have to be a new car?  Should we lease it or buy it?

Reason represents the reality principle engaging with imagination’s fantasizing in order to converge on a choice that is both satisfying and realistic.  As we do so, we will likely add some more dimensions to this scenario.  Is this our decision to make, or does it need to be negotiated with significant others?  What might they want in a car?  Do they support the identity themes and values implied by our choice?  Are there features they feel this car simply must have, or alternatively, ones they could not possibly abide? 

In sum, free will involves an ongoing interplay of narrative and analytics in both the psychological and social realms.  It is neither pure fantasy, nor is it pure reasoning.  It is more like our adult ego mediating between the impulses of our childlike id and the cautioning of our parental superego. 

This same dynamic plays out in darker situations as well, but here the relationships are distorted by fear, anger, hatred, and the like.  This empowers the id, disempowers the superego, and puts the ego in a bind.  Imagined scenarios highlight moral wrongs and spawn fantasies of revenge, while reason, far from providing a counterbalance, gets co-opted into planning tactics.  We are still in the domain of free will—nothing is predetermined—but formidable forces are operating both within and around us, and they are not all under our control.  How accountable can we be in such circumstances?  How much empathy do we warrant?

These are questions that are not resolvable by any fixed formula.  Legal justice depends upon accountability.  Social justice depends upon empathy.  Both are necessary.  Without legal justice, there can be no social justice.  Without social justice, the fabric of society will tear itself apart.  This has to be an and.  It cannot be an or.  We cannot afford to polarize the conflict between accountability and empathy.  We must step up to it.  That is the ultimate exercise of free will. That’s what I think.  What do you think?
