A prominent part of everyday thought is thought about mental states. We ascribe states like desire, belief, intention, hope, thirst, fear, and disgust both to ourselves and to others. We also use these ascribed mental states to predict how others will behave. Ability to use the language of mental states is normally acquired early in childhood, without special training.
This naïve use of mental state concepts is variously called folk psychology, theory of mind, mentalizing, or mindreading and is studied in both philosophy and the cognitive sciences, including developmental psychology, social psychology, and cognitive neuroscience.
One approach to mindreading holds that mental-state attributors use a naïve psychological “theory” to infer mental states in others from their behavior, the environment, or their other mental states, and to predict their behavior from their mental states.
This is called the theory theory (TT). A different approach holds that people commonly execute mindreading by trying to simulate, replicate or reproduce in their own minds the same state, or sequence of states, as the target. This is the simulation theory (ST).
Another possible label for simulation is empathy. In one sense of the term, empathy refers to the basic maneuver of feeling one’s way into the state of another, by “identifying” with the other, or imaginatively putting oneself in the other’s shoes. One does not simply try to depict or represent another’s state but actually tries to experience or share it.
Of course, mental life may feature empathic acts or events that are not deployed for mindreading. But the term simulation theory primarily refers to an account of mindreading that accords to empathy, or simulation, a core role in how we understand, or mindread, the states of others.
Historical Antecedents of the Debate
A historical precursor of the ST/TT debate was the debate between positivists and hermeneutic theorists about the proper methodology for the human sciences. Whereas positivists argued for a single, uniform methodology for the human and natural sciences, early-twentieth-century philosophers like Wilhelm Dilthey and R.G. Collingwood advocated an autonomous method for the social sciences, called Verstehen, in which the scientist or historian projects herself into the subjective perspective or viewpoint of the actors being studied.
Contemporary simulation theory, however, makes no pronouncements about the proper methodology of social science; it only concerns the prescientific practice of understanding others. The kernel of this idea has additional historical antecedents. Adam Smith, Immanuel Kant, Arthur Schopenhauer, Friedrich Nietzsche, and W. V. Quine all wrote of the mind’s empathic or projective propensities.
Quine (1960) briefly endorsed an empathy account of indirect discourse and propositional attitude ascription. He described attitude ascriptions as an “essentially dramatic idiom” rather than a scientific procedure, and this encouraged him to see the attitudes as disreputable posits that deserve to be eliminated from our ontology.
The Beginning of the Debate
It was in the 1980s that three philosophers—Robert Gordon, Jane Heal, and Alvin Goldman—first offered sustained defenses of simulation theory as an account of the method of mindreading. They were reacting partly to functionalist ideas in philosophy of mind and partly to emerging research in psychology.
According to analytic functionalism, our understanding of mental states is based on commonsense causal principles that link states of the external world with mental states and mental states with one another. For example, if a person is looking attentively at a round object in ordinary light, he is caused to have a visual experience as of something round.
If he is very thirsty and believes there is something potable in a nearby refrigerator, he will decide to walk toward that refrigerator. By using causal platitudes of this sort, attributors can infer mental states from the conditions of an agent’s environment or from his previous mental states.
One might start with beliefs about a target’s initial mental states plus beliefs in certain causal psychological principles, feed this information into one’s theoretical reasoning system, and let the system infer the “final” states that the target went into or will go into. This TT approach assumes that attribution relies on information about causal principles, so TT is said to be a “knowledge-rich” approach.
Simulationists typically doubt that ordinary adults and children have as much information, or the kinds of information, that TT posits, even at a tacit or unconscious level. Simulation theory offers a different possibility, in which attributors are “knowledge-poor” but engage a special mental skill: the construction of pretend states.
To predict an upcoming decision of yours, I can pretend to have your goals and beliefs, feed these pretend goals and beliefs into my own decision-making system, let the system make a pretend decision, and finally predict that you will make this decision. This procedure differs in three respects from the theorizing procedure. First, it involves no reliance on any belief by the attributor in a folk-psychological causal principle.
Second, it involves the creation and deployment of pretend, or make-believe, states. Third, it utilizes a mental system, here a decision-making system, for a non-standard purpose, for the purpose of mindreading rather than action. It takes the decision-making system “off-line.”
Daniel Dennett (1987) challenged ST by claiming that simulation collapses into a form of theorizing. If I make believe I am a suspension bridge and wonder what I will do when the wind blows, what comes to mind depends on the sophistication of my knowledge of the physics of suspension bridges. Why shouldn’t make-believe mindreading equally depend on theoretical knowledge?
Alvin Goldman (1989) parried this challenge by distinguishing two kinds of simulation: theory-driven and process-driven simulation. A successful simulation need not be theory-driven. If both the initial states of the simulating system and the process driving the simulation are the same as, or relevantly similar to, those of the target system, the simulating system’s output should resemble the target’s output, enabling the prediction to be accurate.
Jane Heal (1994) also worried about a threat of simulation theory collapsing into TT. If ST holds that one mechanism is used to simulate another mechanism of the same kind, she claimed, then the first mechanism embodies tacit knowledge of theoretical principles of how that type of mechanism operates.
Since defenders of TT usually say that folk-psychological theory is known only tacitly, this cognitive-science brand of simulation would collapse into a form of TT. This led Heal to reject such empirical claims about sub-personal processes. Instead, she proposed (1998) that ST is in some sense an a priori truth.
When we think about another’s thoughts, we “co-cognize” with our target; that is, we use contentful states whose contents match those of the target. Heal claims that such co-cognition is simulation, and that this is an a priori truth about how we mindread.
Martin Davies and Tony Stone (2001) criticize Heal’s proposed criterion of tacit-knowledge possession. Yet another way to rebut the threat of collapse is to question the assumption that the integrity or robustness of simulation can be sustained only if it is not underpinned by theorizing.
The assumption is that simulation is a sham if it is implemented by theorizing; ST implies that no theorizing is used. Against this, Goldman (2006) argues that theorizing at an implementation level need not conflict with higher-level simulation, and the latter is what simulation theory insists upon.
According to the standard account, simulational mindreading proceeds by running a simulation that produces an output state (e.g., a decision) and “transferring” that output state to the target. “Transference” consists of two steps: classifying the output state as falling under a certain concept and inferring that the target’s state also falls under that concept.
Gordon (1995) worries about these putative steps. Classifying one’s output state under a mental concept ostensibly requires introspection, a process of which Gordon is leery. Inferring a similarity between one’s own state and a target’s state sounds like an analogical argument concerning other minds, which Ludwig Wittgenstein and others have criticized.
Also, if the analogy rests on theorizing, this undercuts the autonomy of simulation. Given these worrisome features of the standard account, Gordon proposes a construal of simulation without introspection or inference “from me to you.”
Gordon replaces transference with “transformation.” When I simulate a target, I “recenter” my egocentric map on the target. In my imagination, the target becomes the referent of the first-person pronoun “I,” and his time of action, or decision, becomes the referent of “now.” The transformation Gordon discusses is modeled on the transformation of an actor into a character he is playing.
Once a personal transformation is accomplished, there is no need to “transfer” my state to him or to infer that his state is similar to mine. But there are many puzzling features of Gordon’s proposal. He describes the content of what is imagined, but not what literally takes place.
Mindreaders are not literally transformed into their targets (in the way princes are transformed into frogs) and do not literally lose their identity. We still need an account of a mindreader’s psychological activities. Unless he identifies the type of his output state and imputes it to the target, how does the activity qualify as mindreading, that is, as believing of the target that she is in state M?
Merely being oneself in state M, in imagination, does not constitute the mindreading of another person. One must impute a state to the target, and the state selected for imputation is the output state of the simulation, which must be detected and classified.
First-person mental-state detection thereby becomes an important item on the simulation theory agenda, and one on which simulationists differ: some, such as Harris (1992) and Goldman (2006), favor introspection, while others, such as Gordon (1995), resist it.
Different theorists favor stronger or weaker versions of ST, in which “information” plays no role versus a moderate role. Gordon favors a very pure version of ST, whereas Goldman favors more of a hybrid approach, in which some acts of mindreading may proceed wholly by theorizing, and some acts may have elements of both simulation and theorizing.
For example, a decision predictor might use a step of simulation to determine what he himself would do, but then correct that preliminary prediction by adding background information about differences between the target and himself. Some theory theorists have also moved toward a hybrid approach by acknowledging that certain types of mindreading tasks are most naturally executed by a simulation-like procedure (Nichols and Stich 2003).
What exactly does ST mean by the pivotal notion of a “pretend state”? Mental pretense may not be essential for simulational mindreading, for example, for the reading of people’s emotional states as discussed at the end of this article. But most formulations of simulation theory appeal to mental pretense.
Mental pretense is often linked to imagining, but imagining comes in different varieties. One can imagine that something is the case, for example, that Mars is twice as large as it actually is, without putting oneself in another person’s shoes. Goldman (2006) proposes a distinction between two types of imagining: suppositional imagining and enactive imagining.
Suppositional imagining is what one does when one supposes, assumes, or hypothesizes something to be the case. It is a purely intellectual posture, though its precise connection to other intellectual attitudes, like belief, is a delicate matter.
Enactive imagining is not purely intellectual or doxastic. It is an attempt to produce in oneself a mental state normally produced by other means, where the state in question might be perceptual, emotional, or purely attitudinal. You can enactively imagine seeing something—you can visualize it—or you can enactively imagine wanting or dreading something.
For purposes of ST, the relevant notion of imagination is enactive imagination. To pretend to be in mental state M is to enactively imagine being in M. If the pretense is undertaken for mindreading, one would imagine being in M and “mark” the imaginative state as belonging to the target of the mindreading exercise.
Can a state produced by enactive imagining really resemble its counterpart state, the state it is meant to enact? And what are the respects of resemblance? Gregory Currie (1995) advanced the thesis that visual imagery is the simulation of vision, and Currie and Ian Ravenscroft extended this proposal to motor imagery.
They present evidence from cognitive science and cognitive neuroscience to support these ideas, highlighting evidence of behavioral and neural similarity (Currie and Ravenscroft 2002). Successful simulational mindreading would seem to depend on significant similarity between imagination-produced states and their counterparts. However, perfect similarity, including phenomenological similarity, is not required (Goldman 2006).
Gordon’s first paper on simulation theory (1986) appealed to research in developmental psychology to support it. Psychologists Heinz Wimmer and Josef Perner (1983) studied children who watched a puppet show in which a character is outside playing while his chocolate gets moved from the place he put it to another place in the kitchen.
Older children, like adults, attribute to the character a false belief about the chocolate’s location; three-year-olds, by contrast, do not ascribe a false belief. Another experiment showed that older autistic children resemble three-year-olds in making mistakes on this false-belief task (Baron-Cohen, Leslie, and Frith 1985).
This was interesting because autistic children are known for a striking deficit in their capacity for pretend play. Gordon suggested that the capacity for pretense must be critical for adequate mindreading, just as ST proposes.
Most developmental psychologists offered a different account of the phenomena, postulating a theorizing deficit as the source of the poor performances by both three-year-olds and autistic children. It was argued that three-year-olds simply do not possess the full adult concept of belief as a state that can be false, and this conceptual “deficit” is responsible for their poor false-belief task performance.
The conceptual-deficit account, however, appears to have been premature. First, when experimental tasks were simplified, three-year-olds and even younger children sometimes passed false-belief tests. Second, researchers found plausible alternative explanations of poor performance by three-year-olds, explanations in terms of memory or executive control deficiencies rather than conceptual deficiencies.
Thus, the idea of conceptual change (assumed to be theoretical change) was undercut. This had been a principal form of evidence for TT and, implicitly, against ST. It has proved difficult to design more direct tests between TT and ST.
Shaun Nichols, Stephen Stich, and Alan Leslie (1995) cite empirical tests that allegedly disconfirm ST. One such test involves the “endowment effect”: the finding that when people are given an item, for example, a coffee mug, they come to value it more highly than people who do not possess one. Owners hold out for significantly more money to sell the mug back than do non-owners who are offered a choice between receiving a mug and receiving a sum of money.
When asked, before being in such a situation, to predict what they would do, subjects underpredict the price that they themselves subsequently set. Nichols, Stich, and Leslie argue that TT readily explains this underprediction: people simply have a false theory about their own valuations. But ST, they argue, cannot explain it.
If simulation is used to predict a choice, there are only two ways it could go wrong. The predictor’s decision making system might operate differently from that of the target, or the wrong inputs might be fed into the decision-making system.
The first explanation does not work here, because it is the very same system. The second explanation also seems implausible because the situation is so transparent. This last point, however, runs contrary to the evidence.
Research by George Loewenstein and other investigators reveals countless cases in which self- and other-predictions go wrong because people are unable to project themselves accurately into the shoes of others, or into their own future shoes.
The actual current situation constrains their imaginative construction of future or hypothetical states, which can obviously derail a simulation routine (Van Boven, Dunning, and Loewenstein 2000). So simulation theory has clear resources for explaining underpredictions in endowment-effect cases.
One of the best empirical cases for simulation is found in a domain little studied in the first two decades of empirical research on mindreading. This is the domain of detecting emotions by facial expressions.
Goldman and Sripada (2005; see also Goldman 2006) survey findings pertaining to three types of emotions: fear, disgust, and anger. For each of these emotions, brain-damaged patients who are deficient in experiencing a given emotion are also selectively impaired in recognizing the same emotion in others’ faces. Their mindreading deficit is specific to the emotion they are impaired in experiencing.
Simulation theory provides a natural explanation of these “paired deficits”: normal recognition proceeds by using the same neural substrate that subserves a tokening of that emotion, but if the substrate is damaged, mindreading should be impaired. TT, by contrast, has no explanation that is not ad hoc. TT is particularly unpromising because the impaired subjects retain conceptual (“theoretical”) understanding of the relevant emotions.
By what simulational process could normal face-based emotion recognition take place? One possibility involves facial mimicry followed by feedback that leads to (subthreshold) experience of the observed emotion. In other words, normal people undergo traces of the same emotion as the person they observe.
This resembles Nietzsche’s idea, now supported by research showing that even unconscious perception of faces produces covert, automatic imitation of facial musculature in the observer, and these mimicked expressions can produce the same emotions in the self.
Another possible explanation of emotion recognition is unmediated mirroring, or resonance, in which the observer undergoes the same emotion experience as the observed person without activation of facial musculature.
Such “mirror matching” has been identified for a variety of mental phenomena, in which the same experience that occurs in one person is also produced in someone who merely observes the first. It occurs for events ranging from action with the hands (Rizzolatti et al. 2001), to somatosensory experiences (Keysers et al. 2004), to pain (Singer et al. 2004).
For example, if one observes somebody else acting, the same area of the premotor cortex is activated that controls that kind of action; if one observes somebody being touched on the leg, the same area of somatosensory cortex is activated that is activated in the normal experience of being touched on the leg; the same sort of matching applies to pain. This leads Vittorio Gallese (2003) to speak of a “shared manifold” of intersubjectivity, a possible basis for empathy and social cognition more generally.
It is unclear whether mirror matching always yields recognition, or attribution, of the experience in question, so perhaps mindreading is not always implicated. But the basic occurrence of mental simulation, or mental mimicry, is strikingly instantiated.