The Mind-Body Problem

Isn't that the whole significance of duality/trinity concepts?
Yin/yang illustrates the two parts into which we have deconstructed the whole. Yet something is missing. The synergy of the parts generates something greater than their sum.

The body-mind problem arises only when we make it. In our imagination we abstract from the concrete living unity, create an image of a purely objective material world, and then wonder why there is no place for consciousness in it. Of course there is not: we have abstracted from it. As if this abstraction existed anywhere but in the imagination of the conscious human being. Now we stand in front of the puzzle and wonder where consciousness is and how it is connected with this world. And here comes another idea: what if we add consciousness to the picture and make up some laws connecting consciousness with other things? But it won't work, for consciousness cannot be just part of the picture. It cannot be a part, for it must always comprise everything else (see Kant's unity of apperception), and it cannot be in the picture, for the picture is in consciousness.
Being a thing among other things and at the same time being conscious of this being, being at the same time the inside and the outside of the same human being: this makes the connection inaccessible in principle. I imagine psychophysical unity as a coin with two sides. We can observe the connection between the two sides only if we look at the coin from the outside. But imagine you are the coin itself: then you cannot step out of yourself and observe the connection between the sides; at most you can BE both sides. In my view, this is not a hindrance that could be removed by scientific research. For consciousness is either the inside or it is not consciousness at all. It is immediately accessible only to the one being conscious, and we can never look at it from the outside. What room is left for finding a natural law describing this strange living knot, which is at once identical and discrete, and which cannot be observed?
Maybe the reason philosophers of the past did not deal specifically with this problem is not that they lacked neuroscience, but that they were not trapped in this imaginative construction, which on the one hand has proved extremely useful in controlling nature, but on the other has obscured the primary reality in which we live. The same holds true of modern philosophy after phenomenology, which strives to grasp the world as we encounter it before applying our derailed imaginative constructs, and thus never enters the level where this problem exists. If we grasp the whole united conscious body from the beginning, there is no need to invent artificial connections afterwards. But I believe the imaginative character of the problem has a great advantage: since it is a problem we have thought up, we can also think it away. We can find this striking intuition already in Spinoza, writing at the very time when the great divide between thought and extension was being established. Spinoza says thought and extension correspond to each other because they are just different attributes of the same substance. And despite the confusing details of his explanation, the main insight is clear and simple: consciousness and body correspond to each other because I am my body (although Spinoza assigns the attributes to God).

Pavel, I have not come across Chalmers' The Conscious Mind, and thank you for the reference. I am, though, familiar with Chalmers' The Character of Consciousness. That is to say, I've borrowed it from the library, read sections, but not the whole of this really quite big and difficult book. (The two titles are so similar, I thought we might be talking about the same book here, but a search shows they are distinct.)
Actually, I quite agree with what you say about the mind-body problem, and I don't think the way we see things is so very far apart. For example, with Chalmers: The Character of Consciousness begins (I paraphrase somewhat) by telling us that Consciousness is the little bit of the mind that can't be explained by cognitive science. The search to explain it is therefore important. I was dismayed to read this. Mind may be scientifically studied by Psychology, a perfectly respectable science, and Matter by Physics. There is no more reason to expect a scientist in the former field to provide an explanation of why Mind exists, than for a physicist to explain why Matter exists. Chalmers makes an initial assumption that demands proof, or at least comment. On the other hand, when you get into the book it is full of well thought out arguments, and you can get a lot of pleasure and benefit from reading it. Perhaps you feel the same about Chalmers.
Notice my post did not use this well-worn word "consciousness". I think it has become such a pet word because it avoids the spiritual associations of "soul" and the philosophical associations of "mind". But it captures the same idea. And yes, there is a move to explain it by creating a material world and then finding a way to slot "consciousness" into place within that world. The explainers, the atheist-materialist crowd of the Dawkins, Dennett type, create an endless stream of propaganda for this point of view. (I counted about 300 book titles of the science-explains-everything type in my modestly sized local bookshop this morning.) In fact it was as a reaction to all this stuff that I got interested in philosophy.
But the puzzle I tried to explain in my first post: I feel that it doesn't matter how you view the mind-body problem, the puzzle itself does not go away.
I suppose one could say, how memories are stored in the brain is not a problem of philosophy, but of neuroscience/psychology,
"4.1121 Psychology is no nearer related to philosophy, than is any other natural science." (Tractatus).
For your last paragraph, Pavel, I am handicapped by not knowing anything about phenomenology, but don't explain, I'll try to figure it out. (I looked at your blog, and for the first time ever regretted my inability to read Czech!)

I share your feeling that the correspondence will never be understood, but I am not sure the feeling is separable from our view of the body-mind problem. For what is the puzzle? Is there some difference between encoding the sound of speech in a sentence like “I feel ashamed” and “encoding” the feeling itself? Is not the feeling encoded in the sound just as the sound is encoded in the letters?
Maybe in the first case we know which letter corresponds to which sound: both letters and sounds are external phenomena, can be observed and put in mutual correspondence, and there may even be structural coherence between them; while in the second case there is no thinkable coherence between the external manifestation and the internal feeling. We have no idea what similarity there is between the feeling and the speech, or what they could have in common. The relation can hardly be called encoding, and yet the speech is nothing but an expression or manifestation of the feeling. The correspondence is just a brute fact. In the same way, whatever can be found in the brain is quite different from what we experience. Properly speaking we can find no memories in the brain, and yet it seems the brain is the “organ” of memories. Here we come back to your question: perhaps memories are not stored in the brain like data in computers, but the brain relates to them rather as speech relates to its meaning.
Martin, I am still not sure I have correctly grasped the puzzle you are trying to solve. If not, please correct me. But I think we cannot understand how memories are stored in the brain because they are not actually “stored” there; the relation is of another kind (the brute-fact body-mind unity which I tried to outline in the previous post).
Anyway, I am no expert in this field, but I did not want to miss the opportunity to express my sympathy for your search, for it is not so long ago that I was troubled for long weeks by similar body-mind puzzles.
P.S. I see that recommending a book to which I gave one star is strange. The rating expresses my disappointed expectations when I realized Chalmers uses quite awkward apparatus to defend what initially seemed to be a sound intuition. But although it is not my cup of tea, I suppose it could be interesting for someone with a different focus and a different background. Moreover, I have found that sometimes we can really benefit from reading even books which are plainly wrong, like Dawkins or Marx, for they can make us sensitive to what the real problem is. And finally, it is the only book I have read which deals with this specific question of consciousness and information.

I think that is a very good remark.
I have found (or I am finding) that it is very easy to make "philosophical" remarks -- you read some philosophy, perhaps superficially, and imitate it. But if you are deeply disturbed or bewildered by some philosophical idea of your own that you have had, it can be very difficult to find a way of expressing exactly what it is. I think that is where I am now.
I think I'll go to Chalmers' Conscious Mind and report back later. I hope I find it instructive.
("Give instruction to a wise man, and he will be yet wiser" -- not that I pretend to be wise.)

I know very well what you mean.

Are you doubting our ability to work out the physiological underpinnings of memory, or are you hoping that we won't ruin the mystery?
If you resolve the paradox by deciding that the external view is exclusively correct, then the internal view is relegated to being a mere illusion. It can be quite disconcerting to think of the experience of your own mind as an illusion.

Note that this is independent of any particular theory of mind-body interaction. That brings us to J's point: one may think the mind is an illusion (we are all zombies), but the question remains. My brain has stored in it information relating to the history of the late Roman Republic, say, so that my body, through the vocal tract, gives spoken answers to other people's questions (whether they have real minds or illusory minds) such as "when did Caesar die?", "how were Tribunes elected?", "can we ever escape from seeing those events through the speeches of Cicero?" and so on. It's not that I doubt the abilities of physiologists; it's that I cannot imagine any scheme of encoding by which all this information could be held in a brain.

If I read your post correctly, you are saying that the mind is an intangible thing (soul?) for which the brain acts as a connection to the body.
While I cannot disprove that point beyond questioning how an intangible can interact with a tangible brain, I can point out that an intangible mind does not solve the problems of continuity of self. Specifically, I would point out Nietzsche's response to "cogito ergo sum."
In a masterpiece of skepticism, Nietzsche pointed out that your mind may only be a collection of memories, ideas, beliefs and experiences. So you think that you have a body because those experiences are in the collection, and you think your name is Martin because there is a memory of that name. However, there is no reality to this mind. If I were to replace the memory of Martin with a memory of Peter, what happened to Martin?


'Even physics is only a way of interpreting the world ... and not of explaining the world. But in so far as it relies on our belief in our senses, physics is taken for more than that ... to an era with essentially plebeian tastes this is enchanting, persuasive, convincing, for it instinctively follows the canonised truth of ever-popular sensualism [basic empiricism] ... including those Darwinists and anti-teleologists among the physiological workers with their principle of the "least possible energy" and the greatest possible stupidity.'
– Nietzsche (Beyond Good & Evil, §14)

Yes I like your quote from Nietzsche.
I realise I didn't quite understand your first post. I thought you were accusing me of not being a materialist, whereas I think you were accusing me of being a materialist. Perhaps you could say a bit more.
Anyway, I now am into Chalmers' The Conscious Mind, and am finding it unhelpful, but will post something further soon.

I make no accusations, and I apologize if I came on too strongly. I am trying to define positions before proceeding. I admit to a certain lack of finesse.
Please correct any mistakes in my reading.
You are questioning the nature of a mind. Specifically the methods by which memories are formed and retained. I use the article "a" here, in reference to mind, instead of "the", so as to restrict my thoughts to a general theory of mind. Hopefully this will avoid the larger questions of self.
It seems that the materialists have the inside track on answering the question. Since the mid 20th century, the external view has advanced far faster than the internal view. For example, the internal view is still busily arguing over Freud and Jung, while the external view has gone from trepanation (to relieve pressure) to repairing aneurysms and formulating drugs that actually work on chronic depression.
I do not believe that arguing in favor of the brain being too subtle and/or complex to fully understand will stand up to scrutiny. Predictions about future knowledge are invariably based on current knowledge and are therefore rendered irrelevant by each and every new discovery. Consider how Columbus might react if queried on the possibility of future generations flying across the Atlantic in less than a day.
This might not be the end of the argument. Humans demonstrate a unique ability to externalize memory. That is, to record memories in such a way as to be able to pass them to others or even themselves at a future time. This is more than just language. Consider the act of doing long division with pencil and paper. By passing data back and forth between the paper and your brain you are setting up a feedback loop. The loop allows you to simplify a complex process by retaining individual steps outside of your brain without losing/forgetting them. Does the paper become part of your mind? The answer to that question may shape the future of humanity.

More generally, I'm not trying to get into a big body-soul debate.
Let me come back with some particular examples to illustrate what I mean.

No one has; that is the problem.

Chalmers is not a slave to current materialist thinking, but he adopts many of its inherent assumptions. For example,
"Consciousness is the biggest mystery. It may be the largest outstanding obstacle in our quest for a scientific understanding of the universe."
This is how the book opens. Similarly he never doubts that every mental event must have a corresponding brain event, or that everything we know must be stored in some material form in the brain. These empirical assumptions tend to be treated as necessary truths. He is also fully optimistic that consciousness will get explained one day,
"I am an optimist about consciousness. I think we will eventually have a theory of it ..."

You often hear it said that memory in humans is strongly connected with language, in the sense that it is our ability to use language that enables us to have long memories. This gives rise to the idea that memories in the brain have some sort of "linguistic" structure. I doubt this. In the past, deaf people who never learnt language were a common group. (Perhaps they still are in poorer nations.) They were of course disadvantaged, but with fully human understanding and memory retention. Other mammals exhibit long-term memory, or at least the memory associated with habit forming. In our brains, language is localised in certain areas, while memory is everywhere (I believe this is the current thinking; it used to be thought that memory was localised, too). We remember smells, tastes and sounds that frequently have no language correlate. We remember people's faces, and that seems to be almost an instinctive ability, shared with other primates, where the face is often quite distinctive in its animal group.
A toddler learns a word, "sheep", from having a children's book read to it. If it sees a picture of a sheep or a real sheep elsewhere, it may say "sheep", and delight its parents. Now the concept, sheep, is a universal. It is easy to make the mistake of supposing there are many sheep, and only one word, "sheep", used to indicate them, just as you might pin a single label in turn onto a series of objects. But the word "sheep" is also a universal. The toddler hears the word "sheep", and learns to recognise it, from the many instances of it being said. All instances will differ slightly. It learns what a sheep is from similarly differing instances of sheep being seen. The two things are analogous, and there is as great a mystery as to how the word "sheep" is stored in the brain, so that it can be heard and spoken appropriately, as there is about how the idea of a sheep can be stored in the brain so that an actual sheep in the visual field can be recognised as one.
(Here computer modelling exerts its malign influence. We might store "sheep" in a computer as five letters, each represented by its 8-bit ASCII code (or 16-bit Unicode code). So storing the word is easy, but modelling the beast that the word stands for is hard. But sheep are woolly, so we can at least connect the word "sheep" with the word "woolly". And so on ... knowledge is built up through words. It is an easy step to imagine the same scheme within the brain.)
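(To make that word-level scheme concrete, here is a toy sketch in Python, purely illustrative and obviously nothing the brain actually does; the word list and associations are ones I have just invented for the example:

    # The word itself stored as 8-bit ASCII codes.
    word = "sheep"
    ascii_codes = [ord(c) for c in word]        # [115, 104, 101, 101, 112]

    # "Knowledge" built up as links between words.
    associations = {
        "sheep": ["woolly", "animal", "grass"],  # invented for the example
        "woolly": ["soft", "warm"],
    }

    def related(w, depth=1):
        # Collect words reachable from w by following association links.
        found, frontier = set(), {w}
        for _ in range(depth):
            frontier = {a for f in frontier for a in associations.get(f, [])}
            found |= frontier
        return found

    print(ascii_codes)                 # the stored word
    print(related("sheep", depth=2))   # woolly, animal, grass, soft, warm

The ease with which this can be written down is exactly the seduction: everything here is already a symbol.)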
But let us suppose the toddler learns the two things, the word "sheep" and the animal that the word refers to, and these give rise to corresponding structures in the brain that represent that knowledge. A second toddler learns the same thing: is there any similarity between the two brain structures representing this knowledge? The fact that there almost certainly isn't often comes up in philosophy, and is given a name, though I can't recall it at the moment. The point being, if A knows X and B knows X, how can you establish a materialist view of the identity of their knowledge if X is represented in quite different ways in the brain structures of A and B? The way "sheep" gets stored in the brain of a toddler will depend on the details of the new idea being learnt, and the past history of brain states. This will be as true of the word sheep as of sheep the animal. Furthermore it will no doubt change over time, as the brain continuously reforms itself with new synapse connections between neurones.
Suppose therefore you could look into a brain (as in Leibniz's thought experiment) and see all its workings, like looking at clockwork inside a glass case. The structures you would see might correspond to mental knowledge, but you could never work out what was known from the structures seen. You would need a complete history of all past brain states, and exactly what had happened to the whole organism (such as seeing a picture of a sheep in a book) to cause a change of brain state. Of course you might get some limited understanding by asking a question ("are sheep woolly?") and observing what parts of the brain are accessed in answering it, but quite apart from the difficulty of trying to interpret the immense complexity of the resulting movements, you couldn't work out all the knowledge that the brain stores that way, because there are too many questions to ask, and the lack of total knowledge would restrict the understanding that could be gained of local knowledge.
In fact the infinite diversity of things in the real world has led me to doubt that the brain can be said to hold information about the facts in the world at all, at least at the level of what we think of as an encoding scheme, where a fact is reduced to binary signal structure, 01100101010... and there is some defined mapping between the fact and the 0 and 1 bits. I find this especially true when I see language, not as the key by which facts are stored, but as just another large collection of facts which the brain has to store.

I agree that memory is not binary. There are more than two neurotransmitters, therefore more information than yes or no can be sent across the synaptic gaps.
As for your statement that the brain doesn't contain information about the outside world: no, it does not. The brain has no sensory apparatus of its own. It interprets the data which it receives from outside sources. Therefore when you say, "I see a sheep," what you mean is that light reflected off that animal triggered cones and rods on my retina to send electrochemical impulses to a part of my brain, and my brain interpreted an image of a sheep based on those impulses, memories and other mental states.
When you remember something, you are interpreting previous interpretations.


Initial disclaimer: I'm not a philosopher, although I've been dabbling in reading some in the past few years. Also, I haven't read Chalmers' book(s), so can't speak to his evidence or claims.
You ask if the mind-body problem will ever be resolved, and I say yes, but not today. Maybe in 100 years. Maybe in 500 years.
You said you can't imagine memory being stored in the brain. I'm not trying to be snarky, here, but just because you can't imagine it doesn't mean it may not be factual. And vice versa, of course. There are things we can imagine which are not true. I bring this up because our perceptions of how the mind works may be different from what the mechanisms actually are, as well as what objective evidence shows.
For example, we perceive a fluidity to our memory, and that seems to lead you (and probably most of us) to think there's a file with "Brutus killed Caesar", but that doesn't seem to be the case. What's key in memory is that it appears to be reconstructed every time we use it. And that is done by association and pathways. When pathways fail, memory fails, even if the "files" still exist.
A second point I wanted to make, which J. also brought up, is that a computer isn't really like a brain any more than a camera is like an eye. And a brain isn't like a computer. We use our technological metaphors, but metaphors aren't reality. Another example I'll add to J.'s is that synapses can be modulated. While neurons do fire all-or-nothing, which is reminiscent of the 0-1 of computers, they can be modulated to fire with more or with less stimulus. They can be inhibited from firing at all. If one part of the brain says, "fire", but another part with inputs to that same neuron says, "no, not right now, there's something more important to do", it will be prevented from firing, or maybe simply delayed in firing. No computer can do this. Yes, there are algorithms which can tell a computer how to prioritize certain things, but not on any level with the brain. Or any of the rest of the body, for that matter. I'm just saying, hold all computer/AI analogies loosely. Analogies can be helpful, but they aren't identity.
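Just to make the firing/modulation point concrete, here is a toy sketch (my own invention, not a neuroscience model, and exactly the kind of analogy to hold loosely): a unit that gives an all-or-nothing output, but whose effective threshold is raised by inhibitory inputs.

    def fires(excitatory, inhibitory, base_threshold=1.0):
        # All-or-nothing output, but inhibition raises the bar before firing.
        drive = sum(excitatory)
        threshold = base_threshold + sum(inhibitory)
        return drive >= threshold

    print(fires([0.6, 0.5], []))      # True: enough excitation, no inhibition
    print(fires([0.6, 0.5], [0.5]))   # False: same excitation, but inhibited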
On the sheep issue...if you've ever noticed children learning about the world, the first 4-legged animal's name they learn will get applied to the next 4-legged animal they see. So, if the first is a sheep, the next time they see a 4-legged animal, that will also be called a sheep until corrected. In this way, children develop criteria for differentiating a dog from a sheep from a cow, etc. It's a process, not a polaroid stored in the sheep file.
So, I'm trying to say a couple of things. First, the only way to know the truth of the matter is to accept objective evidence. HOWEVER, we have only scratched the surface of understanding how the material brain works. People will naturally have their viewpoints about what it all means. But it's too early to know if the mind is only an epiphenomenon of the brain, or even an emergent property. All it takes is a new study (one peer reviewed and replicated) showing something different from what was thought before, to pull the rug out from under pet interpretations.
Secondly, science can only deal with the material world. It doesn't deal with metaphysics because it can only deal with what's testable, replicable, and falsifiable. Therefore, many scientists will end up seeing the mind as material. In other words, they have a priori assumptions, which material experiment will validate.
Third, you say you don't want to get into the issue of consciousness, but that's really at the root of the question, IMO. For further discussion, we'd have to define mind and define consciousness. But, if as you suggest, you doubt the brain capable of storing memory, then there has to be somewhere else it is stored. (And storage is a poor term because it gets back to suggesting file cabinets.) In other words, there is either something material or something non-material (or both) for being a repository of memory. There really isn't a lot of difference between memory and consciousness, in terms of this argument.
In both we have a perception of ourselves as something which is not just meat. Consciousness takes in more than memory, of course. It is also conjectured to be an epiphenomenon of the brain or an emergent property.
Keep in mind that much study on learning and memory comes from working with non-vertebrate and even one-celled animals, in addition to primates and humans. Memory exists on the material level and is demonstrable as such. Does all memory do so? There's no evidence that it doesn't, but nothing can be ruled out, IMO.
I guess my real point is, don't ignore the objective evidence -- in fact, learn more about it -- but also don't consider the more global viewpoints about the evidence to be engraved in stone. Science marches on, new things are learned, new interpretations have to be found. 500 years ago it was just common sense that the earth was the center of the universe. Every sunrise and sunset told us so. The sun, the moon, the stars all traveled across the sky while we stood still. 500 years from now we'll know a whole lot more about mind, brain, consciousness. Enjoy the quest, but know there's no final answer.
Hope this wasn't too rambling and hope it addressed some of your question.

Duffy,
Fair enough.
Therefore when you say, "I see a sheep", what you mean is, "I believe I see a sheep." You believe this because of the sequence delineated in my previous post.
This seems a fair edit. If they don't believe that they see a sheep, then they are lying.

"I'm not trying to be snarky, here, but just because you can't imagine it doesn't mean it may not be factual."
You are right. And I don't mind at all being judged as (perhaps) a bit unimaginative. But there is the question, what if no one else can imagine it either? To be a little more exact, I wasn't quite saying that I couldn't imagine memories being stored in the brain: rather that I can't imagine any scheme for their storage.
I'm not at all hung up on the brain-computer analogy. It has rather crept in accidentally. But before that I wanted to mention two lines of argument that I think one should be a little wary of --
Searle drew attention to these arguments (among a list of others), though I don't now recall where in Searle I read it.
I'll call them the MoreTime argument and the Galileo argument. They are used by materialists in an attempt to reinforce the foundations of their thinking.
The MoreTime argument is to promise a time after which new empirical evidence will prove the truth of their theoretical assertions. "Just give us more time, and we'll get to a complete explanation of consciousness in terms of particle physics" etc. Typically used on the promise of machines that will one day think.
The Galileo argument asserts the superiority of their unlikely assertions over plain common sense. Galileo himself frequently, though not necessarily, gets a mention. "You don't believe your mind is an illusion? That's because you can't see beyond your common sense view. In the past it was common sense to see the earth as fixed and the sun going round it. Galileo said the opposite and was ridiculed and reviled for his ideas. But Galileo was proved right." Whenever I come across this argument now I hear in my mind Queen singing "Galileo! Galileo!" from Bohemian Rhapsody.
So, Libyrinths, we get the Galileo argument at the end of your post, "500 years ago it was just common sense that the earth was the center of the universe." and so on ... In J's "Consider how Columbus might react ...", or your "yes, but not today. Maybe in 100 years. Maybe in 500 years." we see the MoreTime argument coming in. Similarly "it's too early to know if the mind is only an epiphenomenon of the brain, or even an emergent property." In other words, give us more time.
An underlying assumption of the MoreTime argument is that any outstanding problems will be solved by further empirical investigation, which in practice means Government funded scientific research. It is therefore an argument of the materialist, used to deflect doubts of the truth of the materialist position.
And the question arises, how much more time? A year? Ten years? Five hundred? No one can say. Personally, I'm getting impatient of waiting in my old age ...
But about the brain-computer analogy. I was rather following Chalmers here, and my actual view of the analogy is about the same as Searle's. In other words, it is a bad analogy. I used it for its familiarity. But if we assume that memories are stored in the brain, they must have some form of material representation, since the brain is material. Perhaps a musical score, or a knitting pattern, is a better analogy. Beethoven's 9th can be represented in many ways: the printed music, Beethoven's holograph manuscript, the shape of the furrows in the spiral groove of an LP, the digital signal on a CD, a silent film showing the depression of every stop on every instrument of an orchestra playing it, and so on. We can find a mapping from one representation to another. But it all depends on music being recordable in a coded form in the first place, for example as treble and bass staves with dots for the notes and other notations. But how can we encode any memory? If we can find out later (the MoreTime argument), we should be able to imagine it now, and if we can't imagine it, I think it possible to doubt that this is what the brain does.
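Here is what I mean by a mapping between representations, in a trivial hypothetical sketch of my own (note names into MIDI note numbers); the point is that it only works because music has already been reduced to discrete symbols:

    SEMITONE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

    def to_midi(note):
        # Convert a name like "A4" or "C#5" into a MIDI note number (A4 = 69).
        name, octave = note[:-1], int(note[-1])
        return 12 * (octave + 1) + SEMITONE[name]

    ode_to_joy_opening = ["E4", "E4", "F4", "G4"]     # first notes, from the printed score
    print([to_midi(n) for n in ode_to_joy_opening])   # [64, 64, 65, 67]

With a memory, there is no agreed alphabet of elements to start from, so there is nothing analogous to map.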
What you say of a child learning words is quite correct, but to me it illustrates the problem rather than going any way towards solving it.


I never said that More Time will validate anyone's current pet theory of the mind. In fact, I said just the opposite. I said that time will give us more knowledge about things which are testable. With that knowledge, new interpretations are possible if not probable. Time and time again things which seem like "common sense" are found to be incorrect when put to scientific tests. Scientists start with a hypothesis and set out to test it. In testing, surprises happen all the time. Current interpretations of data change when contradictory data is found and proven to be correct. In testing, the unexpected, the unimagined is discovered. Prior interpretations and models have to be revised or even thrown out in favor of new ones.
You can somehow distort my example of how our views of the physical world change over time with new discoveries, by using some off-the-wall accusation about Galileo -- whom I never referenced nor used as any example -- but it won't change the facts -- FACTS -- of history, nor of the history of science, nor the history of ideas, no matter how many times you invoke Bohemian Rhapsody.
You say you want the truth. By what process do you seek the truth? What are your criteria for truth? As you totally dismiss one or more accepted methods of attempting to attain at least one aspect of the truth, what do you consider valid evidence for truth? Your own ability to imagine things? The general public's woeful ignorance of the facts, methods and history of science? Does that lead to actual truth, or only an individual one for yourself?
Precisely what is it you think the brain actually does, if you deny any role in memory? Perhaps you also can't imagine that it controls muscle movement or regulates body temperature or enables you to see and hear and feel. What do you think the liver does? Kidneys? Lungs?
How is it that other members of the animal kingdom have been demonstrated to exhibit learning and memory and that this can be shown to be "encoded" physically -- biochemically and genetically -- but somehow humans, with the same biological, biochemical and genetic mechanisms suddenly leap beyond any physical "encoding" to some abstraction for memory storage?
Please tell us the nature of this non-material storehouse of memory you envision. I assume it is a metaphysical construct of some sort, that which is not testable, not replicable, not even falsifiable. How do memories get created and stored there? How are they retrieved from this place? Why are memories so often unreliable? Is it because they're stored offsite, as it were? How can you know the memories retrieved are truly your own and not someone else's?
There's nothing wrong with a belief system, but if you want an honest discussion in pursuit of the truth, you have to 1) be willing to offer a clear and logical alternative to the beginnings of understanding gained by scientific endeavors; 2) be willing and able to show why your system is not only logically and evidentially superior to, but actually supplants, any factual evidence in the real world. Maybe tell us why a world of Forms/Ideas, if that's what it is, is more real than any concrete factual data. And why the world of the demonstrable and testable can be summarily dismissed as having any validity or evidence in pursuit of the truth.
Just curious, here, but do you accept the geological/geophysical/paleontological record which shows the earth to be ~4.5 billion years old? Perhaps your answer to that question will tell us all we need to know, and we can skip the rest.

To be honest, Duffy, I had not really thought about that. It depends what is implied by "genuine AI", perhaps.
There was a philosopher W E Johnson, who in 1920 or thereabouts distinguished what he called substantive and epistemic conditions of understanding (or knowledge). Rather long words, aren't they? I'll call them inner and outer for short. The outer knowledge is what you present to others; the inner knowledge is your own understanding. So imagine two schoolboys A and B who have to prove in a test that they know Euclid's proof of Pythagoras' Theorem. Both take the test and get equal marks. A has written out the proof (outer knowledge) and understands what it's about (inner knowledge). B understands nothing of geometry, has learnt the proof by heart, and copied it out parrot fashion. B has the outer but not the inner knowledge.
Searle's Chinese Room argument is told in a picturesque way, but it boils down to saying that the Room has outer knowledge of Chinese, but not inner knowledge. He challenges the AI community with the accusation that the systems they build are similarly deficient.
Now suppose you built an AI system that passed the Turing test, while only possessing outer knowledge. Would that be classed as "genuine AI"? Perhaps it would, in which case I'd say the way it represented knowledge would not explain how we humans represent it, since we (I believe) differ in having inner knowledge. Here one might doubt that an AI system without inner knowledge could ever pass the Turing test.
Or suppose you build an AI system that passes the Turing test, while demonstrably possessing inner knowledge. In that case the way knowledge was stored in such a system would, I think, give a plausible model of how knowledge is stored in the brain. But here one might doubt that such an AI system will ever be built. Turing's expectations were ambitious here: he thought a "child" brain might be built, which would learn information and grow to an "adult" brain. This is in the last section of the 1950 paper, "Computing Machinery and Intelligence". Such a procedure might lead to an AI "brain" whose functioning could not be understood, or not understood fully. Turing seems to expect this possibility. In that case we might understand the artificial brain no better than a real one.
(Turing's paper has a MoreTime plea, of course: the time asked for is 50 years, and the paper does actually succeed in mentioning Galileo!)

Would an A.I. give any real insight into the functioning of human brains?
Probably not. Answering the question in the affirmative requires an assertion that similar problems will, invariably, be solved the same way. However nature is replete with examples of different organisms evolving different strategies for dealing with the same problem. For example the problem of scarcity has driven some animals to migrate, some animals to hibernate and still others to become generalists.
Asserting that the previous examples demonstrate evolved systems, which have no relevance to an artificial system, is also useless. If a system we designed to mimic a human mind mimics a human mind, then we have demonstrated an ability to utilize information which we already had.


I have no opinion of you one way or the other, but you have validated my suspicion that you have an agenda, and I don't intend to play.
Bye, Martin.

BTW, I have a sneaking suspicion that the "inner knowledge" you are talking about is very much akin to Wittgenstein's beetle.

Hi Libyrinths --
I've read the post in question and don't quite see Martin's response as setting up a straw man argument. You also, in your last post, impute motive to your correspondent. This is easy and tempting, but it falls more in the category of rhetoric than logic and argumentation. Your post prior to that gives the impression, rightly or no, that you're aiming your remarks at the other poster rather than what he said.
I've found that the discussion of the mind-body problem isn't as straightforward as I once thought, and that the juxtaposition of materialist and idealist explanations implies points of view that may not be there. This fallacy of the excluded middle gives it more ideological flavor than it deserves. You're right to ask for clarity and correct misunderstandings of your position, but in the future please phrase what you say a little more charitably.

(One can think of amusing exceptions. I once knew a guy who had to report the deliberations of a series of technical meetings to a series of management meetings. He confessed to me that he did not understand what he was reporting, but none of the managers ever worked that out.)
But it's not so obvious we can do this with computers. I think Searle's point would be that the failure of psychological tests to discern the absence of epistemic understanding in an AI system is not sufficient proof that the AI system has it. So the Chinese Room passes all tests for understanding Chinese, but does not, in fact, understand Chinese.
But of course, there is something we can do with an AI system that we can't do with humans, pull it apart and see how it works. This is like forcing entry to the Chinese Room. And then we may find that the knowledge is represented in such a way that we seriously doubt that the system has epistemic understanding. Another of Searle's points being that the AI system is just juggling symbols, it doesn't know what the symbols stand for, and it has no means of finding out.
I think Searle's philosophy makes the point here, though perhaps my "inner" and "outer" distinction does not express it properly. Incidentally, I'm not familiar with Wittgenstein's beetle ...

As for the beetle, from section 293 of the Philosophical Investigations (on the subject of pains, and not inner understanding):
"Suppose everyone had a box with something in it: we call it a "beetle". No one can look into anyone else's box, and everyone says he knows what a beetle is only by looking at his beetle. --Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing. --But suppose the word "beetle" had a use in these people's language? --If so it would not be used as the name of a thing. The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. --No, one can 'divide through' by the thing in the box; it cancels out, whatever it is.
That is to say: if we construe the grammar of the expression of sensation on the model of 'object and designation' the object drops out of consideration as irrelevant."

John Searle, Minds, Brains and Science, 1984
John Searle, Rediscovery of the Mind, 1994
My suggestion was that you can test a human for the epistemic conditions of knowledge, but perhaps not an AI system. But the definition, I realise, is circular, since to me "genuine AI" would be defined as one that passed objective tests for subjective (inner) knowledge.
For me, the time to think about the Turing test would be when there is something to test it out on. Incidentally I think the failure to create genuine AI systems (in 1950 Turing predicted 50 years before we had them) supports my idea about the difficulty, or perhaps impossibility, of encoding our mental knowledge.
Thanks for the beetle reference!

I'm not sure that your assertion that AI has failed because of a lack of human-style memory is valid.
One could argue that our engram formation method is simply a variation on a system that first evolved in fish and has been elaborated on ever since. If memory like ours were the deciding factor in whether or not an organism is aware, wouldn't we be surrounded by species with intellects very near our own? Or is it more likely that such intelligences arise from a confluence of factors?
I suspect that there are many possible routes to a self-aware intelligence and we are simply biased towards the most familiar route. This could be a trap. Suppose we encountered an intellect which had arisen by a different route and failed to recognize it for what it was.


Duffy, I thought you'd know all about Searle. His influence in Britain may be on the wane, but about 15 years ago he was very well known, mainly for his lecturing and broadcasting work with the BBC. But to summarise, the Chinese Room is a closed building into which you push questions in Chinese, and out come answers in Chinese. So it understands Chinese, you might think. But if you go inside there is a man with no knowledge of Chinese simulating an AI process from written instructions together with some gigantic register or card index system. Watch the linked video, with Searle's comments -- it only takes 9 minutes. (The Chinese girls are very charming!)
It's not about souls. In fact there is a soul in the Room, the man pushing the symbols about, but he doesn't understand Chinese, just as B has a soul but doesn't understand geometry.

My first reaction to the argument is this: Given the context of the experiment, I would say the following. The man in the room does not understand Chinese. The instructions he is following are just instructions, and they do not understand Chinese. The Chinese Room, however, understands Chinese well enough, given its input/output limitations.
The argument seems to posit that there is some THING called understanding, and that a native Chinese speaker is in possession of this thing, but the Chinese room is not. Searle compounds this mistake by insisting that the man in the room doesn't have any understanding. But it's not the man in the room who is being tested. It's the room itself. It would be simpler to state that the Room understands, but that there is no such thing, or mental state, called "understanding."
Similarly, I might want to argue that if the Room doesn't have an understanding of Chinese, then we have no way of knowing whether any native Chinese speaker has the understanding we are looking for. This type of "understanding" is a useless idea, one we could eliminate by dividing through, as Wittgenstein says in the beetle in the box passage.
I also have doubts about the thought experiment and what it assumes about the nature of language. It presents the idea of understanding Chinese as some sort of unified thing.
What if we made a different sort of room? Let's call it a Tic-Tac-Toe room. Everything is the same, except instead of Chinese, the room plays Tic-Tac-Toe against other people. It recognizes illegal moves, and plays according to the rules. Moreover, it never does worse than coming to a draw. Does the room understand the game? Does a person who plays as well as the room understand the game? Here, I am very tempted to say that there is nothing more to an understanding of Tic-Tac-Toe than the ability to play by the rules, and to play it well. The answers should be the same for Tic-Tac-Toe, and for Chinese. But for some reason, the Chinese thought experiment creates a temptation to say that there truly is something to the idea of "understanding," while Tic-Tac-Toe gives considerably less temptation to make such a mistake. At least that's how I see it.
And before, when I was talking about "souls," I was extrapolating from the idea of the existence of "mental states" like "understanding" which can't be definitely located anywhere, but instead arise from a series or system of behavior.

You say the man doesn't understand, but the whole system does. This was the response to Searle. Actually it was one of several responses to Searle, who was pleased that his opponents had many divided answers to his thought experiment. I think he anticipated the response and had his answer ready: it doesn't matter if you extend the scope of the understanding system from the man to the man plus everything in the room; the stuff in the room doesn't know what the symbols stand for, and actually has no way of finding out. There is a Chinese symbol for 'tree'. I know what a tree is, I've climbed up one. To the Chinese Room it's a symbol with manipulative associations with other symbols. It can't find out what a tree really is.
A computer can play Tic-Tac-Toe, but the program driving it might, under some circumstances, solve an associated but quite different problem. Tic-Tac-Toe is a game, and this is what the computer doesn't know. The programmer knows it: in fact in computer science the data represented in some model is understood by the humans who set it up. One doesn't expect the computer to know it.
I'm not just an armchair philosopher here. I have written a program to solve sudokus. I'm pretty poor at solving sudokus, but my program will solve 40,000 of them per second. But I have no illusions that the computer understands sudoku as I do. Turing, incidentally, cites chess in a very similar way in his 1950 paper to illustrate the Turing test,
here it is,
Q: Please write me a sonnet on the subject of the Forth Bridge.
A : Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
(The old "descriptive" notation.) But the AI system is just calling up a dumb chess solver like my sudoku solver.
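For what it's worth, here is the sort of thing such a dumb solver does (not my actual program, just a minimal backtracking sketch): it shuffles digits mechanically, checking constraints and undoing dead ends, and at no point does it know it is playing a puzzle.

    def solve(grid):
        # grid: 9x9 list of lists, 0 = empty. Fills it in place; returns True if solved.
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for d in range(1, 10):
                        if allowed(grid, r, c, d):
                            grid[r][c] = d
                            if solve(grid):
                                return True
                            grid[r][c] = 0   # undo and try the next digit
                    return False             # dead end: backtrack
        return True                          # no empty cells left

    def allowed(grid, r, c, d):
        # Row, column and 3x3 box constraints for digit d at (r, c).
        if d in grid[r]:
            return False
        if any(grid[i][c] == d for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))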
Also I knew the AI group at Cambridge when I was there in the 80s, which got me thinking about these things. But the issue of knowledge representation in the brain, which of course is connected with all this: I only really started seriously thinking about it last year. It does trouble me: I think it's making my thinking increasingly Cartesian, and perhaps increasingly religious. Strange.

The idea behind the Chinese Room not only casts doubt on the Turing test as a test of strong AI, it also, in a more fundamental way, calls into question the Wittgensteinian idea that the meaning of a word is its use. I don't necessarily cling to that notion, but I'm reluctant to give it up for one that seems to me to rely on the assumption that there is some mental state which constitutes understanding or knowledge.
That's why I raised a couple of other points. First, there is the limitation of inputs and outputs. Part of how we demonstrate knowledge results from our interaction with the world. The Chinese Room is hampered in this respect, and so it's less likely to appear intelligent. Second was my point about treating "understanding Chinese" as a unitary thing. And third, was the idea that we are just as in the dark about whether a native Chinese speaker understands Chinese (I think Searle called this the Other Minds response). Searle dismissed this objection by saying that in cognitive sciences we presume the existence of mental states. I don't make any such presupposition, but I think that's exactly where our difference lies. Once again, I go back to the beetle in the box.



I think a lot of words of wisdom were mentioned earlier in this topic, to the effect that it's a lot more complicated, or I might say "layered", when it comes to brain-mind problems.
“Free will is an illusion. Free will is just a miscast problem in my opinion and that is what I try to do in the book is not just assert that but to tell the story about how the brain is built what we know comes from the factory and the brain how its organized in terms of all these modules to ultimately paint a picture that our brain works in an automatic way just like a wristwatch. And we have this belief that we’re acting as if we’re in charge and I say that it is an illusion.
The mechanism is a special module that we discovered in the left brain, your left brain my left brain, it’s called “‘the interpreter’. And what it does is it looks at our own behaviour, our own thinking, our own feelings, and it builds a theory, a narrative about, "Why am I feeling? Why did I just do that? Why am I having this hypothesis?" And it’s a storytelling mechanism of all our actions of all our feelings and it begins to become your idea of yourself. What you believe you to be. So this big strong thing we have, ‘the interpreter’ no wonder we think, ‘well that must be me moving my arm’ ‘I must be in charge’. So we build up this convenient theory to explain a vastly complex but automatic machine that is the human brain.”
Michael Gazzaniga (on Charlie Rose ... I transcribed what he said, so it's a little scattered)


https://www.goodreads.com/topic/show/...
But anyway, thanks for posting.
The links at the end of #48 have been truncated, and it would be nice if you could correct that ...
So I'm boldly starting a new thread ...
This is something I've been thinking about for months now. I should say, I don't (or believe I don't) hold any extreme or doctrinaire view of the mind-body problem. The common wisdom of so many contemporary philosophers, at least American ones, is that "brains cause minds". That is a quote from Searle, whose position is quite anti-materialist. Maybe, but I'm willing to entertain the belief that they don't, and even that there may be mental aspects or events that have no physical correlate. I certainly believe brains and minds are connected. I'm currently witnessing the decay of my mother-in-law's mind with the advance of her Alzheimer's disease of the brain. It is pitiful to witness, but also a terrible proof of mind-brain connection.
The problem is this: long-term memories, researchers tell us, are stored in the brain. Whether in the 10^11 (10 to the power 11) neurons, or the 10^14 synapses that form and reform to connect them, I don't know, but in a sense it doesn't matter. In detail, we are told that the memories are "encoded", perhaps even down to the molecular level.
But how are they encoded? Are there any theories on this? In an AI (artificial intelligence) simulation of thinking the basis of the encoding is, ultimately, linguistic. "Brutus killed Caesar" might be analysed into some RDBMS as (agent, action, victim, time, place), giving (brutus, kill, caesar, ides of march 44BC, rome), and then rules developed to make deductions from this and similar data. This is a crude simplification, but it gives the idea: the point is that words, or word-representatives, go down to the level of storage. But the brain can't encode like that; words, prayers, poems are just other things we remember, like strong feelings, smells, tunes, pictures, faces, sounds of nature and so on.
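A minimal sketch of the sort of encoding I mean (the tuple layout and the "rule" are invented purely to illustrate the point):

    facts = [
        # (agent, action, victim, time, place)
        ("brutus", "kill", "caesar", "ides of march 44BC", "rome"),
    ]

    def who_killed(victim):
        # A trivial "deduction": look up the agent of a killing among the stored tuples.
        return [agent for (agent, action, v, when, place) in facts
                if action == "kill" and v == victim]

    print(who_killed("caesar"))   # ['brutus']

Everything here bottoms out in words, which is precisely what the brain cannot be doing.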
An encoding works when there is something that can be analysed down to elements that one can encode. Speech can be encoded in the letters of an alphabet plus some punctuation symbols; a symphony can be encoded as a musical score; chemicals can be encoded by their molecular formulae; also dance steps and knitting patterns ...
But you can't encode everything in the world, or so it seems to me. I'm left with the feeling that the correspondence between brain and mind will never be understood, because we will never be able to recognise how a mental memory -- the smell of the food hall in our primary school for example -- gets stored in the brain.
It would be nice to find something written on this. Any ideas? I find traditional philosophers are silent because they predate modern neuroscience; neuroscience hasn't really got that far; and contemporary thinkers simplify the problem away ("the brain is just a computer running a program", etc.).