Howard Gardner is the John H. and Elisabeth A. Hobbs Professor of Cognition and Education at the Harvard Graduate School of Education. He also holds positions as Adjunct Professor of Psychology at Harvard University and Senior Director of Harvard Project Zero. Among numerous honors, Gardner received a MacArthur Prize Fellowship in 1981. He has received honorary degrees from 26 colleges and universities, including institutions in Bulgaria, Chile, Greece, Ireland, Israel, Italy, and South Korea. In 2005 and again in 2008, he was selected by Foreign Policy and Prospect magazines as one of the 100 most influential public intellectuals in the world. The author of 25 books translated into 28 languages, and several hundred articles, Gardner is best known in educational circles for his theory of multiple intelligences, a critique of the notion that there exists but a single human intelligence that can be adequately assessed by standard psychometric instruments.
During the past two decades, Gardner and colleagues at Project Zero have been involved in the design of performance-based assessments; education for understanding; the use of multiple intelligences to achieve more personalized curriculum, instruction, and pedagogy; and the quality of interdisciplinary efforts in education. Since the mid-1990s, in collaboration with psychologist Mihaly Csikszentmihalyi and William Damon, Gardner has directed the GoodWork Project, a study of work that is excellent, engaging, and ethical. More recently, with longtime Project Zero colleagues Lynn Barendsen and Wendy Fischman, he has conducted reflection sessions designed to enhance the understanding and incidence of good work among young people. With Carrie James and other colleagues at Project Zero, he is also investigating the nature of trust in contemporary society and the ethical dimensions entailed in the use of the new digital media. Among new research undertakings are a study of effective collaboration among non-profit institutions in education and a study of conceptions of quality, nationally and internationally, in the contemporary era. In 2008 he delivered a set of three lectures at New York's Museum of Modern Art on the topic "The True, The Beautiful, and The Good: Reconsiderations in a post-modern, digital era."
Probably the best and most extensive history of cognitive science and its evolution out of mid-20th century developments in computer science, engineering, linguistics, philosophy, social science, neuroscience, artificial intelligence, and psychology. Gardner follows the story in detailed fashion, showing how each of these disciplines contributed to the development of the cognitive science/philosophy of mind project, each being influenced by all the others.
Unfortunately, the book was written (1985) prior to the developments in embodied mind and dynamical systems research and prior to a lot of new studies in neuroscience and neuroplasticity. Things have come a LONG way since Gardner wrote, so his topics seem rather dated, focusing largely on first-generation cognitivism and the representational-computational-symbolic-processing-disembodied view of the mind. As Gardner was writing, the second-generation connectionist and parallel-processing approach was just beginning to emerge, later to be extended into third-generation embodied mind/dynamical systems cognitive science.
Although now (in its 1987 edition) a quarter of a century old, this book remains valuable not so much for its argument in favour of the development of the then relatively new discipline of cognitive science as for its insights into how science actually works.
Regardless of that, the book remains a useful history of six loosely related disciplines - the humanistic social 'science' of anthropology and the hard science of neuroscience at the edges of the proposed (in 1984) new science and its core cognitive disciplines of philosophy, psychology, linguistics and artificial intelligence.
Gardner's argument is that these disciplines are the basis for a science of cognition covering such problems as how we perceive the world (contributing to epistemology), how we imagine the world, how we categorise and classify the world, and not only how we reason but whether we can be said to be rational at all.
These are questions that do not replace philosophy - certainly not the ontological basis for existentialism or any viable philosophy of meaning - but they usefully limit the claims of philosophy to what cannot be known by evidence-based scientific method (so that philosophy still includes core questions not only of meaning but of value).
Gardner takes his story no further than 1987 (in the paperback edition). Much has happened since yet this is still an excellent guide to the relevant sciences up to the mid-1980s.
However, the general reader should be warned that, while he writes clearly, he is telling a complex story for the benefit of his peers. You should expect to be stretched and perhaps to find it a difficult if rewarding book.
Gardner's judgments strike me as generally sound and especially useful in helping us to understand why the mid-twentieth century behaviourist model crumbled so quickly and how the simpler models applying computing analogies to the human condition were already becoming unsustainable as he was writing his book.
The dialectic between computing and brain studies has been fruitful, but a basic awareness of continental philosophy would have cast doubt on any project that would make simple analogies between the evolved brain and the technology of calculation and analysis.
'Being human' is a highly complex business that owes a great deal to our inheritance as an evolved biological entity with predispositions for survival in a hostile world.
The 'social' or cultural is simply an extension of the peculiarities of our condition, so research showing that our rationality is suspect should not be confused with any value judgment that our lack of rationality is necessarily a bad thing.
The serious follower of the relevant sciences may find this book merely a reference point for a subsequent 25 years of discovery and theory but the book remains valuable and cautionary, regardless, as a description of how scientific paradigms (in the Kuhnian sense) rise and fall.
Gardner is assiduous in arguing for his thesis without being polemical. There is no case (it would seem) where he is not prepared both to put an argument and to put all the criticisms of that argument. He is wholly fair-minded and generous - and scientific.
The result is that we get a strong picture both of progress in science and theory (not the same thing) and of the very contingent nature of all theory, and even of much experimentation, at any particular moment in time.
This is important because a belief in science and scepticism about claims made by scientists are not incompatible. This book helps us to understand why - it is something to be borne in mind when evaluating claims about any application of science as technology or public policy.
What might be true now (as behaviourism once affected public policy in the 1950s) might not be true tomorrow. Caution is the appropriate response to all applications of science that are directed ultimately at society or the individual - whether they be claims about 'nudge', climate change, 'peak oil', GMOs or whatever.
What science can do is tell us what is true to all intents and purposes when dealing with matter (rather than with consciousness working on and in relation to matter, as in the social) and what is shown to be impossible or unlikely.
However, in dealing with mind and societies, let alone meaning and values, its paradigms are going to be unreliable when it comes to telling us much about what we are or what we should do once our complexity and reflexiveness are taken into account.
The cognitive science project is an important project to pursue but will be dangerous if it moves from the descriptive to the normative or the prescriptive.
While there is no sense that Gardner wishes to pursue anything but responsible science, one cannot be so sure of policymakers or vested interests that stand between us and the top end of the scientific community.
We certainly cannot be sure of those in the twenty-first century who want to get in on the bandwagon of state funding of cognitive science for purposes that are political - the manipulation of the population into a state of order or compliance.
And even amongst scientists, there are those whose self-interest deludes them about value and meaning, inventing problems, diseases and conditions for which cognitive manipulation is assumed to be the solution.
In other words, while we may fear that cognitive science may assist in creating some monster of the Singularity, a cognitively advanced AI, we would do better to be frightened of the use of cognitive science in the hands of authority to force us into compliance with its values rather than our own.
Fortunately, no matter how much funding is thrown at the new neuroscience or at the militarisation of anthropology or at the core investigations of cognition, there is reason to be optimistic.
Human resistance and creativity, the nature of man in time, his desires and wilfulness, and the sheer complexity of the social - more complex than the weather, which can never be predicted beyond a short period - guarantee the utter failure of such projects in the long run.
With luck the new investment in these sets of 'sciences' (actually rational evidence-based investigations that fall between the hard science of matter and the non-science of the so-called social sciences) will find some cures for genuine suffering.
They may also provide people with improved choices in life and give us insights into the destructive modelling of such idiocies as religion and ideology. We should not be Luddite - just cautious.
As with so much in the twenty-first century, the intellectual class sits between the people and states made up of coalitions of special interests.
The quasi-hard sciences, while expressing a 'truth' of sorts, are tools and weapons that might be made available to either side in what amounts to a large-scale but covert social war.
NOT JUST A ‘HISTORY,’ BUT ALSO A DISCUSSION OF A GREAT MANY ISSUES
Howard Gardner is professor of Cognition and Education at Harvard University.
He wrote in the Preface to this 1985 book, “In the mid-1970s, I began to hear the term ‘cognitive science.’… I naturally became curious about the methods and scope of this new science… I decided that it would be useful and rewarding to undertake a study in which I would rely heavily on the testimony of those scholars who had founded the field as well as those who were at present its most active workers… I decided to make a comprehensive investigation of cognitive science in which I could include the long view---the philosophical origins, the histories of each of the respective fields, and current work that appears most central, and my own assessment of the prospects for this ambitious field.”
He explains in the Introduction, “I define cognitive science as a contemporary, empirically based effort to answer long-standing epistemological questions---particularly those concerned with the nature of knowledge, its components, its sources, its development, and its deployment. Though the term ‘cognitive science’ is sometimes extended to include all forms of knowledge… I apply the term chiefly to all efforts to explain human knowledge. I am interested in whether questions that intrigued our philosophical ancestors can be decisively answered, instructively reformulated, or permanently scuttled. Today cognitive science holds the key to whether they can be.” (Pg. 6)
He reports about Richard Rorty: “He arrives at the following conclusion: There is no way to account for the validity of our beliefs by examining the relation between ideas and their objects: rather, justification is a social process, an extended conversation, whereby we try to convince others of what we believe. We understand the nature of knowledge when we understand that knowledge amounts to justification of our belief, and not to an increasingly accurate representation of reality.” (Pg. 73)
He notes that researchers in the mainstream of cognitive science argue that, “in artificial intelligence… once one has provided computational accounts of knowledge, understanding, representation, and the like, the need for philosophical analyses will evaporate. After all, philosophy had once helped set the agenda for physics; but now that physics has made such tremendous strides, few physicists any longer care about the musings of philosophers.” (Pg. 87)
He explains, “How does the computer program ‘Logical Theorist’ actually work? The program discovers proofs for theorems in symbolic logic, of the kind originally presented by Whitehead and Russell’s ‘Principia Mathematica.’ … The demonstration that Logical Theorist could prove theorems was itself remarkable. It actually succeeded in proving thirty-eight of the first fifty-two theorems in Chapter 2. About half of the proofs were accomplished in less than a minute each… [Its programmers] stressed that they were demonstrating … thinking of the kind in which humans engage. After all, Logical Theorist could in principle have worked by brute force (like the proverbial monkey at the typewriter); but in that case, it would have taken hundreds or thousands of years to carry out what it actually achieved in a few minutes. Instead, however, LT worked by procedures that… were analogous to those used by human problem solvers.” (Pg. 147)
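To make the brute-force-versus-heuristic contrast concrete, here is a minimal Python sketch. It is my own toy, not Logical Theorist itself (whose substitution and detachment methods are far richer); the rule table, the "noise" facts, and both function names are invented purely for illustration. It contrasts blind forward generation of every derivable fact with goal-directed search that, in the spirit of LT's backward-working methods, starts from the theorem to be proved.

from collections import deque

# Toy illustration only -- not Logical Theorist itself. Facts are strings;
# RULES maps a premise to the facts it yields in one modus ponens step.
RULES = {"axiom": ["lemma1"] + [f"noise{i}" for i in range(1000)],
         "lemma1": ["lemma2"],
         "lemma2": ["theorem"]}
for i in range(1000):
    RULES[f"noise{i}"] = [f"noise{i}x"]  # derivable but irrelevant dead ends

def forward_brute_force(goal):
    """Blindly generate every derivable fact until the goal turns up."""
    known, frontier, expanded = {"axiom"}, deque(["axiom"]), 0
    while frontier:
        fact = frontier.popleft()
        expanded += 1
        for new in RULES.get(fact, []):
            if new == goal:
                return expanded  # rule applications spent
            if new not in known:
                known.add(new)
                frontier.append(new)
    return None

def backward_chain(goal):
    """Work backward from the goal, touching only relevant premises."""
    parents = {c: p for p, cs in RULES.items() for c in cs}
    steps = 0
    while goal != "axiom":
        if goal not in parents:
            return None  # underivable
        goal, steps = parents[goal], steps + 1
    return steps

print(forward_brute_force("theorem"))  # 1003 expansions, most wasted on noise
print(backward_chain("theorem"))       # 3 steps straight back to the axiom

On this toy rule set the blind generator churns through roughly a thousand irrelevant facts before reaching the theorem, while the goal-directed version takes three steps; scale the noise up and the "monkey at the typewriter" comparison becomes vivid.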
He says of an early computer in Marvin Minsky’s lab, “The computer’s difficulty is that it cannot look through the way in which it has been programmed in order to pick up the actual reference of a word or number. Having no insight about the subject matter of a problem, the computer is consigned to make blunders that, in human beings, would never happen or would be considered extremely stupid.” (Pg. 153)
He states, “a finite-state grammar cannot generate sentences in which one clause is embedded in or dependent upon another, while simultaneously excluding strings that contradict these dependencies. As an example, consider the sentence ‘The man who said he would help us is arriving today.’ Finite-state grammars cannot capture the structural link between ‘man’ and ‘is arriving’ which spans the intervening clause. Moreover… a finite-state grammar cannot handle linguistic structures that can recur indefinitely, such as the embedding of a clause within another clause (‘the dog that the girl that the dog…’ and so on). Even though such sentences soon become unwieldy for the perceiver, they are strictly speaking grammatical; and a grammar must be able to account for (or generate) them.” (Pg. 185)
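Gardner's point can be made concrete with a toy generator - a sketch of my own, not from the book; the word lists and the function name are invented. Each "the N that …" opens a clause whose verb arrives only after all deeper clauses close, so n noun phrases must be matched by n verbs, innermost first: the classic a^n b^n pattern that a pushdown stack can track but a finite automaton, with its fixed number of states, cannot for unbounded n.

import random

# Toy sketch of unbounded center-embedding. Noun phrases stack up left to
# right; their verbs are discharged in reverse (innermost-first) order.
NOUNS = ["dog", "girl", "man"]
TRANS_VERBS = ["saw", "chased", "helped"]

def center_embedded(n):
    """Build a sentence with n noun phrases, each owed a matching verb."""
    nps = [f"the {random.choice(NOUNS)}" for _ in range(n)]
    inner_verbs = [random.choice(TRANS_VERBS) for _ in range(n - 1)]
    verbs = list(reversed(inner_verbs)) + ["is arriving today"]
    return " that ".join(nps) + " " + " ".join(verbs)

for n in range(1, 4):
    print(center_embedded(n))
# e.g. "the dog that the girl that the man saw chased is arriving today"

Every added "that the N" leaves one more verb owed; once the nesting exceeds an automaton's state count it loses track of how many are outstanding, whereas a grammar with recursive rules generates all of these strings directly.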
He points out, “Although [Noam] Chomsky himself describes linguistics as part of psychology, his ideas and definitions clash with established truth in psychology. He has had to contend not only with the strong residue of behaviorism and empiricist sentiment but also with suspicion about his formal methods, opposition to his ideas about language as a separate realm, and outright skepticism with respect to his belief in innate ideas. While Chomsky has rarely been defeated in argument on his own ground (for a recent dramatic example, see his debate with Piaget), his particular notions and biases have thus far had only modest impact in mainstream psychology.” (Pg. 214)
He observes, “From one vantage point, the dispute between the Establishment and the ecological school can be depressing. Here we are, two thousand years after the first discussions about perception, several hundred years after the philosophical debates between the empiricists and the rationalists first raged, and leading scientists are still disagreeing about fundamentals. Though the current debate cannot be mapped directly onto other debates---nominalist versus realist, empiricist versus rationalist, unconscious inference versus ‘pickup’ of relevant information---the themes are familiar enough, and the arguments frequent enough, as to make one question whether there has been progress.” (Pg. 317)
He summarizes, “My own doubts about the computer as the guiding model of human thought stem from two principal considerations… The computer is simply executing what it has been programmed to execute, and standards of right and wrong do not enter into its performance. Only those entities that exist within, interact with, and are considered part of a community can be so judged. My other reservation … centers on the deep difference between biological and mechanical systems. I find it distorted to conceive of human beings apart from their membership in a species that has evolved over the millennia, and as other than organisms who themselves develop according to a complex interaction between genetic proclivities and environmental processes over a lifetime. To the extent that thought processes reflect these bio-developmental factors and are suffused with regressions, anticipations, frustrations, and ambivalent feelings, they will differ in fundamental ways from those exhibited by a nonorganic system.” (Pg. 388)
This book will be of keen interest to those studying such ‘cognitive’ matters.
The book is an excellent introduction to the so-called cognitive sciences, although its publication date is a little old for a field in expansion and constant renewal as the cognitive sciences are. Perhaps one of its most interesting features is the fact that the author, H. Gardner, is a scientist well versed in other areas, including philosophy - this gives the work real intellectual breadth. In short, the author seeks to show the emergence of the cognitive sciences in the 1950s and 1960s, closely tied to the advent of computers and of developments such as Information Theory; second, he tries to show the main areas that feed the cognitive sciences and their interrelationship (the areas are Philosophy, Psychology, Artificial Intelligence, Linguistics, Anthropology, and Neuroscience); third, Gardner addresses traditional problems of Western thought from the perspective of the new science (for example, perception, imagination, rationality, and categorization).
A nice overview, but rather dated, especially in the chapters regarding AI and neuroscience. The reviews of the more 'fringe' disciplines of anthropology and linguistics are interesting.
This book is a really interesting snapshot in time of cognitive science. First published in 1985, there are a number of anachronisms (neuroscience is a “fringe” discipline in 1985 but is critical now, and the description of AI is exceedingly out of date), but there’s also a lot of prescient work. Much of the research discussed in the psychology chapter is taught in textbooks today, and the innovative research areas discussed in the final section of the book (on imagery, categorization, and rationality) continue to be foundational today. A worthwhile history of the field - I’d read it after some years in the community.
Probably Gardner's best book and a welcome reminder of how recently most of our concepts for understanding the mind were developed. The chapter on categorisation is particularly thought-provoking.
An engaging read with a lot of accurate, fact-based research about the important developments in six parallel fields that led to the cognitive revolution.