In this groundbreaking new book, Manuel DeLanda analyzes all the different genres of simulation (from cellular automata and genetic algorithms to neural nets and multi-agent systems) as a means to conceptualize the possibility spaces associated with causal (and other) capacities. Simulations allow us to stage actual interactions among a population of agents and to observe the emergent wholes that result from those interactions. Simulations have become as important as mathematical models in theoretical science. As computer power and memory have become cheaper, simulations have migrated to the desktop, where they now play the role that small-scale experiments used to play. A philosophical examination of the epistemology of simulations is needed to cement this new role, underlining the consequences that simulations may have for materialist philosophy itself. This remarkably clear philosophical discussion of a rapidly growing field, from a thinker at the forefront of research at the interface of science and the humanities, is a must-read for anyone interested in the philosophy of technology and the philosophy of science at all levels.
Manuel DeLanda (b. Mexico City, 1952), based in New York since 1975, is a philosopher, media artist, programmer and software designer. After studying art in the 1970s, he became known as an independent filmmaker making underground 8mm and 16mm films inspired by critical theory and philosophy. In the 1980s, DeLanda focused on programming, writing computer software, and computer art. After being introduced to the work of Gilles Deleuze, he saw new creative potential in philosophical texts, becoming one of the representatives of the 'new materialism'.
How do things come to be the sorts of things they are? What sort of processes engender, sustain, and decompose the rich furniture of the world amongst which we find ourselves? These are the questions that, from the very beginning, have animated Manuel DeLanda’s quest to understand the nature of things. In Philosophy and Simulation, DeLanda extends his inquiries into the fascinating and unsung world of scientific modelling, documenting the myriad creative ways in which science has attempted to capture the dynamism of the universe in order to teach us more about it. From the micro to the macro, vortexes in the laboratory to the formation of nation-states, DeLanda’s polymathic intelligence is relentlessly probing in both scope and scale, making for an adventure of reading quite unlike any other.
That all said, this isn’t quite your typical work of popular science, if indeed it could be called that at all. While pretty much avoiding any reference to the math behind it all, DeLanda writes at a level barely one step removed from invoking it, which is to say that the writing here is both concise and complex, if nonetheless ultimately accessible to the motivated layman. Indeed, as the title gives away, and as anyone who has followed DeLanda’s intellectual trajectory to date would know, Philosophy and Simulation is unquestionably a book of - what else? - philosophy. As such, while the work remains entirely able to stand on its own merits, its true brilliance lies in the way it serves to flesh out the Deleuzian-inspired ontology that DeLanda has been developing across his works for a while now.
What then, of this ontology? Well, following the contours of DeLanda’s naturalised, scientifically informed Deleuze (who here barely rates a mention save for the appendix), things come to be as they are through a dual process involving both (1) ‘mechanisms of emergence’ on the one hand, and (2) ‘mechanism-independent’ components on the other. While each chapter here more or less explains and demonstrates how a particular ‘mechanism of emergence’ functions (be it that of the learning capacity of animals, or the formation of life out of the ‘prebiotic soup’), the true virtue of simulation, for DeLanda, lies in its ability to illuminate the functioning of the second prong of individuation: mechanism-independent singularities which structure the space of possibilities by which things come to be.
That last sentence will no doubt be a bit of a garble to anyone unversed in the systems science lingo employed by DeLanda, but the basic idea is that these mechanism-independent components serve to explain how it is that different systems, composed of vastly different material components (in this case, ‘natural’ systems and artificially simulated ones), can nonetheless manifest very similar, if not identical, behaviour. It’s only by admitting the reality of these mechanism-independent structures that we can explain, in other words, the ‘unreasonable effectiveness of mathematics’. Thus it is that the real protagonists of Philosophy and Simulation are modelling systems like genetic algorithms, neural nets and artificial chemistries, all of which are detailed by DeLanda in exquisite fashion.
Like much of DeLanda’s works, detail adorns the pages here in abundance, with DeLanda parsing the nitty gritty of his subject matter in a manner both exhausting and captivating. So much so, in fact, that it’s easy to lose sight of the larger picture while grappling with the flood of information discharged herein. DeLanda also has a tendency to ‘argue by illustration’, as it were, letting his scientific vignettes do the philosophical work for him, while leaving some of the finer points of argument unilluminated. For instance, despite all the work of exemplification, the central concept of the book, emergence, felt surprisingly under-developed at the conceptual level. Still, whatever one makes of DeLanda’s carefully constructed worldview, Philosophy and Simulation remains a profoundly impressive work of scientifically literate philosophy.
Mixed feelings on this book. It is a goldmine for its quick, readable introductions to various simulations and how they can generate "emergent behavior". This is fascinating stuff.
That said, I was a bit disappointed at how little philosophy there seemed to be. I was looking for a bit more exploration of the kinds of far-reaching implications these sorts of experiments might have for how we view things, whether it be the mind or social structures or the development of language or whatever.
Still, I'm very glad I read it, and view it as a good introduction that will smooth the way for me to explore further.
“…immediate assurance that the service provided will be repaid. This is an example of a social dilemma: a social interaction that can have collective benefits but that is endangered by the possibility of greater individual gains. In the simplest social dilemmas interactions between pairs of agents are structured by two opportunities and two risks: the tempting opportunity to cheat a cooperator gaining at its expense (called "temptation" or simply "T"); the opportunity of mutual gain by cooperators (a "reward" or "R"); the risk of mutual cheating (a "punishment" or "P"); and the risk for a cooperator of being cheated (being made a "sucker" or "S"). The field of mathematics that studies strategic interactions models them as games so these opportunities and risks are referred to as the "payoffs" of the game. What makes a strategic interaction a social dilemma is the ordering of the payoffs. In the situation referred to as the Prisoner's Dilemma the payoffs are arranged like this: T > R > P > S. If R > T then there is no conflict between individual and collective gains so all dilemmas must have T > R. But this particular arrangement also implies that the worst thing that can happen is to be a cooperator that is cheated (P > S). A different dilemma can be generated by the arrangement T > R > S > P. This is called "Chicken"-after the game in which two drivers on a collision course play to see who swerves out of the way first-because unlike the previous case the worst outcome is when neither driver swerves and both crash (S > P). Like all opportunities and risks the payoffs of strategic interactions structure a space of possibilities, an objective possibility space shared by the interacting agents.”
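The payoff orderings in this passage can be checked mechanically; here is a minimal sketch (the function name and example payoff values are illustrative, not from the book):

```python
def classify_dilemma(T, R, P, S):
    """Classify a symmetric two-player game by the ordering of its payoffs."""
    if T > R > P > S:
        return "Prisoner's Dilemma"  # being the cheated cooperator (S) is worst
    if T > R > S > P:
        return "Chicken"             # mutual defection (P) is worst
    if R > T:
        return "no dilemma"          # cooperation beats individual temptation
    return "other"

print(classify_dilemma(T=5, R=3, P=1, S=0))  # → Prisoner's Dilemma
print(classify_dilemma(T=5, R=3, P=0, S=1))  # → Chicken
```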
“A finite state automaton represents a capacity minimum while a so-called Turing machine represents a capacity maximum. The main factor contributing to this greater capacity is access to memory, a degree of access that varies from zero in the case of finite state automata to absolute in the case of a Turing machine and its infinite memory "tape." The gap between the minimum and maximum of computing capacity can be bridged because the different automata stand in a relation of part-to-whole to one another. A Turing machine, for example, needs a finite state.”
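The capacity minimum can be made concrete in a few lines of Python (the automaton below, which accepts binary strings containing an even number of 1s, is an illustrative example, not one from the book):

```python
# A finite state automaton has no memory tape: its only "memory" is which
# of a fixed, finite set of states it is currently in.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(string, start="even", accepting=("even",)):
    state = start
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state in accepting

print(accepts("11"))    # → True (two 1s)
print(accepts("1011"))  # → False (three 1s)
```

A Turing machine adds an unbounded read/write tape to such a finite control, which is what bridges the gap between the two capacity extremes.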
“It also assumes the existence of an arrow of time pointing from past to future, that is, it assumes that the temporal order in which a series of events occurs influences its outcome. But if we want to explain the emergence of this dependence on temporal order we must start with component parts in which the arrow of time does not yet exist.”
“Symbiotic species, for example, may come to depend on each other so closely that their evolution becomes tightly coupled, as illustrated by the way in which many contemporary plants have developed an obligatory relation with the insects that pollinate them. Evolutionary interactions are different from ecological ones not only because the latter take place in relatively short time scales—the periods of the oscillations in density, for example, are measured in years—while the former take place in much longer time scales, but also because they are associated with different possibility spaces.”
“The evolutionary histories of microorganisms with and without a nucleus (eukaryotes and prokaryotes) testify to the importance of this distinction: while the former went on to form all the different plants and animals that exist today, the latter followed not a divergent but a convergent path. That is, instead of yielding a plurality of gene pools more or less isolated from each other they generated what is basically a giant gene pool spanning the entire planet.”
“First of all, an emergent representation is not explicitly stored as such, the product of the learning process being a configuration of connection weights that can recreate it when presented with the right input. In other words, what is stored is not a static representation but the means to dynamically reproduce it. Second, unlike a photograph these representations are dispersed or distributed in all the hidden units and are thus closer to a hologram. This means that they can be superimposed on one another so that the same configuration of weights can serve to reproduce several representations depending on its input, simulating the ability of insects to associate several colors or odors with the presence of food. The dispersed way in which extracted prototypes are represented is so important for the capacity of a neural net to generalize that these emergent representations are usually referred to as distributed representations. Finally, unlike the conventional link between a symbol and what the symbol stands for, distributed representations are connected to the world in a non-arbitrary way because the process through which they emerge is a direct accommodation or adaptation to the demands of an external reality.”
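The superposition of several representations in a single weight configuration can be sketched with a Hebbian outer-product associator (a standard textbook construction, not the book's own model; the orthonormal cue patterns are a simplifying assumption):

```python
import numpy as np

# Four cue patterns and the representation each should recreate.
inputs = np.eye(4)             # orthonormal "input" patterns
outputs = np.array([[1., 0.],
                    [0., 1.],
                    [1., 1.],
                    [0., 0.]])

# Hebbian outer-product learning: all four associations are superimposed
# in ONE weight matrix, like the distributed representations described.
W = sum(np.outer(o, i) for i, o in zip(inputs, outputs))

# The same weights dynamically reproduce different representations
# depending on the input presented.
print(W @ inputs[0])  # → [1. 0.]
print(W @ inputs[2])  # → [1. 1.]
```

Nothing in `W` is a static copy of any one output pattern; each representation exists only as the matrix's disposition to recreate it from the right cue.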
“When neural nets are studied in a disembodied way, that is, when their input is preselected and prestructured by the experimenter and their output is simply an arbitrary pattern of activation that has no effect on an external environment, this emergent intentionality is not displayed. But the moment we embody a neural net and situate the simulated body in a space that can affect it and be affected by it, the creatures behave in a way that one feels compelled to characterize as intentional, that is, as oriented toward external opportunities and risks.”
“The memories that birds and mammals form of actually lived episodes, for example, are more or less vivid re-experiences of events or situations in which objects play specific roles (such as agents or patients) and in which their interactions make sense to the animal. The content of autobiographical memories in animals must be thought of as endowed with significance not with signification, which is a linguistic notion. The significance of a scene or event is related to its capacity to make a difference in an animal's life, to its capacity to affect and be affected by the animal's actions, while signification is a semantic notion referring to the meaning of words or sentences. Birds and non-human mammals may be incapable of dealing with signification but they surely can attribute significance to the opportunities and risks that their environment affords them.”
“The restaurant-going script was created by first subdividing the possibility space into types of restaurant (fancy restaurant, cafeteria, fast-food joint) since the action opportunities in each of these types are quite different. Then the different restaurant types (called "tracks") were subdivided into scenes: "Entering," "Ordering," "Eating," "Exiting."”
“The two novel architectures are recurrent neural nets and self-organizing maps. Recurrent neural nets are like multilayer perceptrons augmented with feedback connections. In regular multilayer perceptrons there is backward movement of information about the degree of discrepancy between current and desired output patterns but this feedback operates only during training. After the weights of the connections have converged on their final values the neural net ceases to use any feedback. Recurrent neural nets, on the other hand, use feedback during actual operation. The simplest version adds to a multilayer perceptron an extra set of units, called "context units," linked to the hidden layer by connections with unchangeable weights and operating at full strength.”
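The context-unit mechanism described here can be sketched in a few lines (an Elman-style network; the sizes, random weights, and inputs are illustrative, and the trainable weights are left untrained):

```python
import numpy as np

# Sketch of the recurrent step described above: context units hold a copy
# of the previous hidden activation (a fixed, full-strength 1:1 connection)
# and feed it back into the hidden layer on the next time step.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W_in = rng.normal(size=(n_hidden, n_in))       # trainable input-to-hidden weights
W_ctx = rng.normal(size=(n_hidden, n_hidden))  # trainable context-to-hidden weights

def step(x, context):
    """One time step: the hidden state depends on the input AND the prior state."""
    hidden = np.tanh(W_in @ x + W_ctx @ context)
    new_context = hidden.copy()  # the unchangeable, full-strength copy connection
    return hidden, new_context

# The same input produces different hidden states once context differs.
context = np.zeros(n_hidden)
x = np.array([1.0, 0.0, 0.0])
h1, context = step(x, context)
h2, context = step(x, context)
```

Feedback here operates during actual use, not just training, which is what distinguishes the recurrent net from a plain multilayer perceptron.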
“The training of a self-organizing map is unsupervised. The way in which the training proceeds is as follows. Every unit in the input layer is connected to every unit in the map layer, the weights of the connections set to random values at the beginning of the process. An activation pattern from a set of training examples is then presented to the input layer. Because an activation pattern is simply a list of intensities that can be expressed as numbers, and a configuration of weights in each set of connections is a list of strengths that can also be numerically expressed, the activation pattern of the input units can be compared for its degree of similarity to the weight configuration of each set of connections to the map layer. After stimulating the input layer a comparison is performed for each set of connections and the weight configuration that happens to be more similar to the input pattern is selected. That is, at the start of the training whatever similarity there may be is completely by chance but it nevertheless allows us to pick a particular unit in the map layer as the "winner."”
“The changes necessary to break the linguistic dependence of scripts should be easier to implement in Discern than in the original symbolic version because the task of making sense of situations is carried out in the former with emergent distributed representations used as building blocks to compose equally emergent hierarchical taxonomies. Those distributed representations could, for example, be supplied by a video camera using the face recognition neural net.”
“Groups of neurons in real brains, each of which extracts different features from the same object, tend to fire in synchrony with one another when perceiving that object. In other words, temporal simultaneity in the firing behavior of neurons can act as a dynamic binder. This insight has already been successfully exploited in some modular neural net designs that perform script-like inferences.”
“Since we will now be concerned with larger wholes with their own emergent properties the details of how individual agents perceive and memorize can be taken for granted. This means that simulations can use linguistically expressed rules and procedures to generate agent behavior without begging the question.”
“In one simulation, for example, agents were matched into groups using the norm "do not choose cheaters as members" and the metanorm "do not choose those that choose cheaters." Both degree of cooperativeness and degree of vengefulness were coded into a genetic algorithm and a population was allowed to evolve different strategies as the agents made decisions about group formation and choices to cooperate or cheat. The outcome depended on three conditions affecting group formation. The first condition related to the way in which membership was decided, that is, whether a unilateral choice by an agent who wanted to join was enough or whether acceptance or rejection of membership depended also on the group (mutual choice). With unilateral choice there was no room for metanorms so the outcome was predictably that low cooperators cheated on high cooperators. The second condition involved the possibility for vengeful members to split from a group to express their dissatisfaction with those who refused to punish. Without this possibility genes for high vengefulness could not get established and low cooperators continued to cheat. Finally, the third condition involved the possibility that agents who may have departed from a previous group could reenter another group allowing resilient groups of cooperators to recruit new members and grow at the expense of other groups. The strategy combining mutual choice, splitting, and rejoining performed best as a solution to the multi-person Prisoner's Dilemma.”
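A minimal sketch of the mutual-choice condition and the two norms described here (all names are illustrative; in the simulation described, cooperativeness and vengefulness are evolved by a genetic algorithm rather than hard-coded):

```python
# Under unilateral choice a willing agent simply joins; under mutual choice
# the group can veto the candidate using the norm and the metanorm.
def mutual_choice(candidate, cheaters, cheater_choosers):
    if candidate in cheaters:          # norm: "do not choose cheaters as members"
        return False
    if candidate in cheater_choosers:  # metanorm: "do not choose those that choose cheaters"
        return False
    return True

print(mutual_choice("a", cheaters={"b"}, cheater_choosers={"c"}))  # a is admitted
print(mutual_choice("b", cheaters={"b"}, cheater_choosers={"c"}))  # b is vetoed
```

It is exactly this veto power, absent under unilateral choice, that gives metanorms room to operate in the simulation.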
“Since the activities of the agents had to take place in a simulated space with the same characteristics as the original island, a standard format for the storage of spatial data, called a Geographic Information System (or GIS), was used. A GIS is basically a map on which multiple layers are superimposed each containing information about a particular landscape property (soil composition, distributions of plant species, water availability). In Magical these maps were given an additional use: to represent the knowledge that the agents acquired about the landscape. In other words, the cognitive maps that the agents formed of resource distributions, maps that would have been created by real hunter-gatherers through a combination of episodic and semantic memory, were each represented by a GIS map. That way when the agents gathered at night to report their daily findings the linguistic exchange among them could be modeled by merging all the individual GIS maps to form a composite one that was then redistributed among all the members of the group.”
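The nightly merging of cognitive maps can be sketched like this (the grid representation and the cell-wise max rule are illustrative assumptions; an actual GIS layer carries far richer data):

```python
# Each agent's cognitive map is a small grid of resource estimates; the
# evening "report" merges them cell by cell, keeping the best information
# any agent has, and the composite is then redistributed to everyone.
def merge_maps(cognitive_maps):
    rows, cols = len(cognitive_maps[0]), len(cognitive_maps[0][0])
    return [[max(m[r][c] for m in cognitive_maps) for c in range(cols)]
            for r in range(rows)]

agent_a = [[0, 3], [0, 0]]  # agent A found resources at cell (0, 1)
agent_b = [[0, 0], [5, 0]]  # agent B found resources at cell (1, 0)
composite = merge_maps([agent_a, agent_b])
print(composite)  # → [[0, 3], [5, 0]]
```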
This book is an introduction (I assume) to DeLanda's philosophy, which focuses on emergent properties and how they can emerge from combinations of simpler components with simple properties. A central idea is the ability to use computer simulation to test and verify philosophical hypotheses. To that end, DeLanda describes simulations of a wide variety of topics - everything from the "prebiotic soup" to multicellular life, insects and human societies with economics and hierarchies. Even though this book is densely packed with information, it is easy to read and always avoids getting lost in technical detail, a feat in itself.
i am generally a foreigner to philosophy and quite skeptical
however this book was quite amazing - it works through case studies of cell behavior, thunderstorms, and other emergent systems.
really great read from a scientific / systems / engineering / modeling perspective... anyone doing any sort of systems design should read it. very clear and enlightening concept of emergence.
neat to see how mankind struggles with these questions - neat to compare this book to, say, alexander's 'notes on the synthesis of form' and his 'generative grammar' - and note how far we've come in 50 years with 'systems design'
It's more useful to read the appendix together with the individual chapters. DeLanda has a gift for taking abstract Deleuzian concepts and explaining them in a very palatable, empirical way. Two key ideas discussed in this work are the gradient, through which different entities affect one another (deterritorialisation), and its dissipation, which corresponds to the formation of boundaries (i.e. territorialisation). These boundaries, it should be noted, are partial and not immutable, since they are constituted by the capacities and tendencies within the assemblage. Hence, even as a new substance emerges from the interaction between two different entities, it remains constrained by the presence of visible and non-visible properties.