Computation and Intelligence brings together 29 readings in Artificial Intelligence that are particularly relevant to today's student and practitioner. With its helpful critique of the selections, extensive bibliography, and clear presentation of the material, Computation and Intelligence will be a useful adjunct to any course in AI as well as a handy reference for professionals in the field. The book is divided into five parts, each reflecting a stage in the development of AI. The first part, Foundations, contains readings that present or discuss foundational ideas linking computation and intelligence, typified by A. M. Turing's "Computing Machinery and Intelligence." The second part, Knowledge Representation, presents a sampling of numerous representational schemes by Newell, Minsky, Collins & Quillian, Winograd, Schank, Hayes, Holland, McClelland, Rumelhart, Hinton, and Brooks. The third part, Weak Method Problem Solving, focuses on the research and design of syntax-based problem solvers, including the most famous of these, the Logic Theorist and GPS. The fourth part, Reasoning in Complex and Dynamic Environments, presents a broad spectrum of the AI community's research in knowledge-intensive problem solving, from McCarthy's early design of systems with common sense to model-based reasoning. The two concluding selections, by Marvin Minsky and Herbert Simon respectively, present the recent thoughts of two of AI's pioneers, who revisit the concepts and controversies that have developed during the evolution of the tools and techniques that make up the current practice of Artificial Intelligence.
Big compilation of seminal papers in AI. Emphasizes the '70s, which makes sense: the field grew by leaps and bounds in that decade. Newell, Winograd, McCarthy, Minsky, Rumelhart and McClelland, Turing, Simon, Hayes, and Schank are here.
I really wish these papers, the primary sources, had been on my reading lists when I studied cognitive science in college. Later writers, especially ones not intimately involved in the field, have written some fanciful interpretations of the problems and issues introduced in these papers. The secondary literature in the area presents a lot of Manichean oppositions between different concepts of cognition, whereas the research itself has become extremely eclectic in its methodology and theory. I blame philosophers for many of the reified oppositions: GOFAI versus connectionism, connectionism versus embodied mind theory, etc. Sometimes the relevant philosophical differences are not even attended to, like the fact that some embodied mind theory is of a radically constructivist, idealist bent. But the frame problem, especially, has split into a handful of different philosophical problems, some more philosophical than others.

---- Below this line: my own musings on this book ----

The collection includes Minsky's '74 paper on the "frame," his suggestion for a basic knowledge-representation structure. The term "frame" comes up again and again in the AI literature, in several different senses. McCarthy and Hayes identified what they called the "frame problem" in '69. This is the difficulty of getting a model of a world in first-order logic (or other logics) to produce the obvious ~default inferences~ when the state of the world changes in ways that we might consider minor and unremarkable. My hair-style does not change when I pour milk into my coffee, but it probably does change when the wind speed rises rapidly wherever I am standing. This problem is surprisingly hard to solve in formal systems for modeling an environment, but it has nothing to do with people and their behavior or experience, per se. Murray Shanahan wrote a book, _Solving the Frame Problem_, detailing his and his colleagues' attempts to solve it.
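To make the point concrete, here is a toy sketch in situation-calculus notation (my own illustrative predicates, not drawn from the book): to license the "obvious" inference that hair style survives the coffee-pouring action, a naive logical axiomatization must state that non-effect explicitly, for every action/fluent pair that does not interact.

```latex
% Effect axiom (toy): pouring milk makes the coffee white.
\forall s.\; \mathit{White}(\mathit{coffee},\, \mathit{do}(\mathit{pour\_milk}, s))

% Frame axiom (toy): pouring milk leaves hair style untouched.
% Nothing in the logic licenses this default inference unless it is written down:
\forall x,\, s.\; \mathit{HairStyle}(x, s) \rightarrow
    \mathit{HairStyle}(x,\, \mathit{do}(\mathit{pour\_milk}, s))
```

With $n$ actions and $m$ fluents, an axiomatization of this kind needs on the order of $n \times m$ such frame axioms; taming that blow-up (and the defeasibility of the defaults) is what made the problem hard.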
Minsky's concept of "frame," on the other hand, is inherently about how people "organize experience," as Erving Goffman put it. I wonder, intensely, whether Minsky read Goffman (1974) or Goffman read Minsky (1975). As used in these two works, the notion of "frame" is quite similar, almost the same, and in both cases psychological and sociological. This notion of frame, as Hubert Dreyfus would point out (I don't know if he actually has), is essentially the same thing as the phenomenological notion of horizon. Dreyfus, like Heidegger, thinks that these are intractably complex beasts, to the extent that even McCarthy's original frame problem (default reasoning) could never be solved in a logic-based system. I think it's just a hard problem, not insoluble in principle -- I agree with Husserl and cognitivist AI on this issue. Glymour, Ford and Hayes open the collection with a paper that calls AI "Android Epistemology." That's right on the money, I think, except that even "epistemology" makes it sound too philosophical. The part of epistemology that's amenable to empirical and formal research morphs into other disciplines around its edges, most notably the "behavioral sciences." Obviously, there is a huge problem in moving from a symbolic, functional, "3rd person" mode to something that takes first-person experience into account. Take the notions of "scripts" and "semantic networks" in '70s cognitive psychology, for example. (Theorists have since shown that these structures are formally equivalent to first-order logic in their representational capabilities.) You can go pretty far in a theory of the cognitive mind, however, while avoiding talk about the wholly irreducible part of what philosophers call "qualia" entirely. So far, in fact, that I think AI might be the link between phenomenology and formal logic.
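The formal-equivalence point can be illustrated with a toy example of my own (echoing the Collins & Quillian canary/bird hierarchy, not an example from the book): a semantic-network fragment and the first-order sentences carrying the same content.

```latex
% Semantic-network links (toy example):
%   canary --isa--> bird --isa--> animal ;   bird --has--> wings
% The same content rendered as first-order sentences:
\forall x.\; \mathit{Canary}(x) \rightarrow \mathit{Bird}(x) \\
\forall x.\; \mathit{Bird}(x) \rightarrow \mathit{Animal}(x) \\
\forall x.\; \mathit{Bird}(x) \rightarrow \exists y.\, \mathit{Wings}(y) \wedge \mathit{Has}(x, y)
```

The network notation adds nothing to first-order expressive power; what it adds is an indexing scheme, which is a psychological and engineering claim rather than a logical one.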
That is to say: if most of phenomenology (including the social variety) consists in the unpacking of experienced meaning, and if that meaning is basically symbolic, then a large chunk of the research would consist of an account of how what Fodor calls "the language of thought" works; how thought happens by manipulating expressions in the L.O.T. The huge gap between the phenomenological tradition on the one side and a computational theory of the cognitive mind on the other is that the latter project has been carried out in a functionalist way, eschewing the first-person perspective and the notion that the meaning-content of mental states is given in experience, in however unclear a form. In other words, the gap between the two comes down to how much of Descartes's methodology they can stomach.