I enjoyed this book and found it useful. A few points:
I'm not deep enough into A.I. to know if this book has a specific argument in relation to that culture. The general structure seems to be that old A.I. researchers, from the 50's onward, tended to believe very deeply in the idea of a pure constructed mind, separate from its physical expression and tested mainly in a simulated 'ideal' world, and that more recently, from the 90's onwards, the tide has turned towards the idea of intelligence being much more closely interrelated with bodily expression in the real world.
If this is the case then I find this change largely fits my moral and intellectual intuitions and prejudices. I am pleased to see it.
I believe I agree with Stevan Harnad about the symbol grounding problem: 'Meaning can enter the system only when part of the system is grounded in the world, rather than being part of a closed self-referential system of symbols.'
If we take this as true: "... the cognitivistic paradigm's neglect of the fact that intelligent agents live in a real physical world leads to significant shortcomings in explaining intelligence" (Rolf Pfeifer and Christian Scheier), then it does lead to some very interesting thoughts about just what the fuck A.I. researchers were doing for 30 to 40 years. Because the ideas of 'high-level functions might be connected to bodies', 'when making super-complex things, we should start from the bottom up' and 'elaborate, perfectly interiorly-coherent systems of symbols often suck for solving real-world problems' are not deep, complex or especially hard to parse conceits. There is no mystery box that needs to open before you can think like this. They are reasonable common sense.
And if this is the case then perhaps instead we should be asking exactly why A.I. researchers spent a very long time fucking about with things that either didn't work, or barely worked at all. This reminds me of the radical surgical treatments for cancer described in Siddhartha Mukherjee's 'The Emperor of All Maladies', which excised huge amounts of muscle and flesh from, generally, female or low-status male patients, and which were carried on far, far too long with little provable positive effect to show; or of recent descriptions of themes and fashions in medical and scientific research which suggest that for genuinely new concepts to be accepted, in some cases you simply have to wait for a generation of old scientists to die. And very largely it reminds me (as a lot of things seem to) of 'The Master and His Emissary' by Iain McGilchrist, which describes the obsessional need of a left-brained culture and mind-state to re-create the world as an internally-coherent abstract simulation of itself, and also describes the compulsive overconfidence, myopic obsession with coherent, internally-consistent data, and incurious indifference to 'dirty', 'messy' or incoherent data from outside the simulation that such cultures and mind-states tend to have.
It is still theoretically possible that I might be wrong. A.I. researchers do have a long tradition of being deeply, massively wrong in their predictions, but just because they have been wrong at every point up till now doesn't necessarily mean they will be wrong _this_ time. (Though they probably are.) The very fact that consciousness is, as the book puts it several times, 'a mystery', means that it could actually be just around the corner and some random researcher could just pop out with a pure-math disembodied computer brain that says 'hello'.
But this probably won't happen. What it will probably be is a few centuries of dicking around with robots, getting gradually better and better until we either hit a hard limit, are prevented from research by mindcrime concerns or finally do it.
There is a lot of Alan Turing in this book. Apparently he is just one of those guys who had all the good ideas before anyone else, so if you are an A.I. researcher and you are wandering through what you think is an unexplored mansion of thought and you open a mysterious ancient chest of ideas, then it's 50/50 that Alan fucking Turing will just tumble outta there.
I did think the art was really astoundingly ugly. This did not impede the utility of the book or the transmission of its ideas: I understood everything it wished me to understand, and if I didn't then the fault is mine rather than the book's.
More precisely, the art is grainy, with awkward, strange-looking people and a general aura of alienation and physical discomfort. The A.I. robot character who acts as a presenter is really fucking sinister and strange-looking.
It's possible that this is all a deliberate aesthetic choice; the odd, ganky feel of the presentation does fit quite neatly with the strange feel of simulated minds, the slightly odd men who are into them and the freaky robots they make to test them.
And, it had no effect on comprehension.
If you don't know anything about A.I. and want to find out, if you prefer books to just reading stuff online, and if you want something light (physically) and light (cognitively) that you can carry around in a pocket or something and deal with using your spare everyday cognitive energy, then I would recommend this book.