
Moral Brains
The Neuroscience of Morality
Edited by S. Matthew Liao
 
Sentimentalism and the Moral Brain
Jesse Prinz
Over the last dozen years, there has been enormous interest in studying the neural 
basis of moral judgment. A growing number of researchers believe that the moral brain 
will lead to insights about the nature of morality. There is an emerging conviction that 
long- standing debates in psychology and philosophy can be settled, or at least propelled 
forward, by neuroscience. Much of this conviction centers around the more specific 
belief that we can make progress on questions about the relationship between moral 
judgment and emotion. That confidence, however, rests on undue faith in what brain 
scans can reveal, independent of other sources of evidence, including both behavioral 
studies and theoretical considerations. When taken on their own, extant neuroimag-
ing studies leave classic debates unsettled, and require other evidence for interpretation. 
This suggests that, at present, the idea that neuroscience can settle psychological and 
philosophical debates about moral judgments may have things backward. Instead, psy-
chological and philosophical debates about moral judgment may be needed to settle the 
meaning of brain scans. This reversal of directionality does not render brain scans unin-
teresting. The surprising range of hotspots seen in studies of moral judgment need to be 
decoded. Once decoded in light of other evidence, neuroimaging results can be helpful 
and informative.
My goal here will be, first, to establish that neuroimaging studies leave much uncer-
tainty about moral judgment and, in particular, about the relationship between moral-
ity and emotion. Fortunately, I will argue, behavioral evidence and philosophical argu-
mentation can help settle the questions that scans leave unanswered. This, then, points 
toward an account of what different brain structures are contributing to moral cogni-
tion. Such a mapping can be useful in making progress in this domain.
To spoil the surprise, I will say at the outset that I interpret the preponderance of em-
pirical evidence as supporting a fairly traditional kind of sentimentalist theory of moral 
judgment. According to this theory, occurrent moral judgments are constituted by emo-
tional states. I  will contrast this theory with a range of alternatives and argue for its 
explanatory superiority. For those who are unconvinced by my arguments, the chapter 
can be read as a plea for an integrative methodology, and many claims can be accepted 
without joining my sentimentalist bandwagon. For those less interested in methodol-
ogy, the chapter can be read as a defense of sentimentalism, which happens to engage 
neuroscientific research.
1.1. Blinded by Head Lights: The Ambiguity of Imaging
1.1.1. The Anatomy of Morality
Since Greene et al.’s seminal (2001) study of moral dilemmas, there have been numerous 
efforts to identify brain structures involved in moral judgments. Though there is a fair 
degree of convergence between these studies, the results are often somewhat bewildering. 
Even within a single study, a variety of brain structures are usually implicated, and it is 
often far from obvious how to interpret the results. I will not attempt a complete review 
here. A survey of some of the main findings will suffice to make the point. I will focus 
on studies that compare moral judgments to nonmoral judgments, though I should men-
tion at the outset that many studies also compare different kinds of moral judgments, 
and many published reports include both kinds of comparisons. There will be occasion 
to discuss differing kinds of moral judgments as we move on in the discussion.
Let’s begin with Greene et al. (2001). Though their emphasis lies elsewhere, they do 
compare moral judgments (i.e., choices about the right thing to do in a moral dilemma) 
with nonmoral judgments (e.g., dilemmas about whether to replace an old TV set or 
whether to take a bus or a train). The findings suggest that moral dilemmas recruit the 
following brain structures to a greater degree than nonmoral dilemmas: medial frontal 
gyrus (including parts of Brodmann areas 9 and 10; the latter is also known as ventrome-
dial prefrontal cortex, or VMPFC), posterior cingulate (BA 31), and the angular gyrus 
(BA 39) bilaterally. Greene et al. also note increased activation in the superior parietal 
lobule (BA 7/ 40), but say little about that in their discussion.
Another seminal study, by Moll et al. (2001), compared judgments of moral wrong-
ness (e.g., “They hanged an innocent person”) to judgments of factual wrongness (e.g., 
“Stones are made of water”). Moral judgments were associated with activity in medial 
frontal gyrus (BA 9/ 10), as Greene et al. found, as well as the right angular gyrus (con-
sistent with Greene, but more lateralized). They also report activity in the left precuneus 
(BA 7, just above the posterior cingulate), the right temporal pole (BA 38), and the right 
posterior superior temporal sulcus (STS). One year later, Moll, Oliveira- Souza, Bramati, 
et al. (2002) published a study using a similar design and reported slightly different, but 
overlapping results. As compared to neutral sentences, moral sentences were associated 
with increases in parts of the left medial frontal gyrus (BA 10), as well as adjacent
medial orbital frontal cortex (OFC, or BA 11).
 
 
In another study, Moll, Oliveira- Souza, Eslinger, et al. (2002) presented participants 
with photographs depicting morally bad behavior. As compared to neutral images, the 
moral photos were associated with medial frontal and orbital frontal areas again (BA 9/ 
10/ 11), precuneus, and the STS (including parts of BA 21 and 38), all in the right hemi-
sphere. There was also bilateral activity in middle temporal gyrus (BA 19/ 22), as well as 
increases in the amygdala and the midbrain. BA 22 is adjacent to the angular gyrus (BA 
39) and portions of the superior parietal lobule (BA 40). The area encompassing all three 
is sometimes called the temporal parietal junction.
Other pioneering results include Heekeren et  al.’s (2001) study, which compares 
morally anomalous to semantically anomalous sentences. The moral condition was as-
sociated with activity in the left angular gyrus (BA 39), the left middle temporal gyrus 
(BA 22), and the temporal pole (BA 38), consistent with other studies, and beyond these 
structures, bilateral inferior frontal gyrus (BA 45/ 47), which may reflect the linguistic 
nature of their task. In a subsequent study with a similar design, Heekeren et al. (2003) 
found that moral sentences were associated with area 47 again (perhaps a language area), 
as well as a cast of areas familiar from the other moral judgment studies: medial frontal
gyrus, STS, and temporal pole.
I will mention work by just one other research group; other findings follow a similar 
pattern. In one study, Harenski and Hamann (2006) presented participants with morally 
charged pictures (as in Moll, Oliveira- Souza, Eslinger, et al. 2002) and compared these to 
either nonmoral emotional pictures or a neutral baseline (deciding whether numbers are odd 
or even). When compared to the neutral condition, moral images produced greater activa-
tion in right medial frontal gyrus (BA 10), left amygdala, and left superior frontal gyrus. The 
latter area— not a big player in other studies— is associated with executive working memory
and spatial cognition; this may simply reflect that the neutral condition (classifying odd-even
numbers) is highly automatic and nonspatial compared to picture viewing. As compared to 
nonmoral pictures, moral pictures were associated with greater STS activity as well as activ-
ity in the posterior cingulate cortex. Harenski et al. (2010) also conducted a follow- up study, 
presenting participants with moral and nonmoral images once again. When moral images 
were compared to nonmoral, some of the usual suspects appeared: medial frontal gyrus and 
OFC (BA 10/ 11, right lateralized in this study), angular gyrus (bilaterally), and right poste-
rior cingulate. The OFC and posterior cingulate also increased with severity ratings when 
participants were asked to rate how bad the depicted violations were.
Though there is considerable overlap between these studies, there is no easy interpre-
tation. Numerous brain areas are implicated in moral judgment, and the functions of 
these areas are often complex, varied, or poorly understood. Early efforts (e.g., Moll et al. 
2001) sought a moral module in the brain, but it is now widely believed that moral cogni-
tion supervenes on neural mechanisms that also participate in other capacities (Greene 
and Haidt 2002). But what are these other capacities, and what can we learn from brain 
scans about how moral decisions are made? These are the questions with which I will be 
occupied throughout this chapter, but let me make a preliminary observation here.
Every single neuroimaging study of moral cognition that I  know concurs on one 
point:  moral judgments regularly engage brain structures that are associated with 
emotional processing. I  say “regularly” because, while some studies imply that this is 
always the case, others imply that it is often the case, but not always, as we will see below. 
Even with those exceptions in mind, we can say that the moral brain looks a lot like the 
emotional brain: a number of brain areas mentioned in the survey above are frequently 
implicated in studies that investigate the neural correlates of emotion. These include 
frontal areas 9, 10, and 11 (Damasio 1994; Vytal and Hamann 2010), posterior cingulate
(Maddock et al. 2003; Nielen et al. 2009), and the temporal pole (Olson et al. 2007).
The conclusion that emotions are regularly involved in moral judgment is important, 
but also extremely vague. Involved in what way? The question is exacerbated by the fact 
that some of the brain structures that come up in these studies are not thought to be 
correlates of emotion. The STS, for example, is a frequent player, as are portions of the
temporal parietal junction (TPJ). Other studies have found associations between some 
moral judgments and activity in the dorsolateral prefrontal cortex (DLPFC) (Greene 
et al. 2001; Borg et al. 2006; Cushman et al. 2011). This is a classic working memory area, 
which Greene et al. refer to as a cognitive area, in contrast to what they label emotion 
areas. Thus, the cognitive neuroscience of moral judgment has actually yielded a mixed 
picture, with emotion areas and nonemotion areas both playing a role. This makes the 
findings somewhat difficult to interpret.
The challenge of interpreting imaging results becomes more vivid if we ask whether 
extant findings can be used to show that one prevailing theory of moral judgment is 
correct and others are false. As we will now see, there are numerous theories in the litera-
ture, and it is not clear whether any of the aforementioned findings can adjudicate between them.
1.1.2. Moral Models
There are many debates about the nature of moral judgment. Some of these have deep 
philosophical roots and remain hotly contested within contemporary moral psychol-
ogy. Perhaps the most enduring and fundamental debate concerns the role of emotions. 
It is universally recognized that emotions often arise when we are thinking about moral 
issues. For example, when we read about injustice, we find it upsetting. What exactly is 
the role of these emotions? Are they causes of judgments or effects? Are they involved 
essentially or eliminably?
Divergent answers to questions like these have animated philosophers for generations. 
In recent years, psychologists have also weighed in on these debates, and a menu of com-
peting theories has emerged. I will summarize these here (see figure 1.1) and ask whether 
neuroscientific results can settle which is most plausible.
According to one theory, or, more accurately, one class of theories, when emotions 
arise in moral decision- making, they are the outputs of moral judgment. The idea is that 
we arrive at moral conclusions dispassionately, and then, at least sometimes, emotions
follow. Such “emotions as outputs” theories come in a variety of forms. For philosophers,
the most familiar is a kind of moral rationalism, according to which moral judgments 
arise at the end of a conscious and deliberate reasoning process. This view is associated 
with Kant, though this is a mistake, since he actually held that moral judgments often 
arise as the result of emotions and inclinations. For Kant, rational derivation of moral
judgments is possible and normatively preferable to emotional induction of moral judg-
ments, but emotional induction does occur.
Within contemporary cognitive science, moral rationalism is not a very popular posi-
tion. Much more popular is the view that moral judgments are based on the application 
of unconscious rules (e.g., Cushman et al. 2006; Mikhail 2007; Huebner et al. 2008). 
For these authors, we determine that something is morally right or wrong by uncon-
sciously analyzing the structure of an event (who did what to whom with what inten-
tions and what outcome) and then assigning a value in accordance with a “moral gram-
mar.” Defenders of this approach happily grant that emotions may arise once a moral 
verdict is reached, but emotions do not play a role in getting to that verdict.
“Emotions as outputs” views contrast sharply with “emotions as inputs” views, which state
that moral judgments arise as the result of emotional states. For example, consider 
Haidt’s (2001) theory, which he calls intuitionism. An intuition, for Haidt, is “a conclu-
sion [that] appears suddenly and effortlessly in consciousness, without any awareness by 
the person of the mental processes that led to the outcome” (181). Haidt considers intu-
ition a form of cognition, but he stresses that intuitions contrast with reasoning. Haidt 
also suggests that intuitions generally take the form of emotions. His paper is called 
“The Emotional Dog and Its Rational Tail,” implying that intuitions are emotions, and 
these emotions, rather than reasoning, lead us to our moral judgments. His empirical re-
search on intuitionism has focused on measuring and inducing emotions in the context 
of moral judgment. Thus, for Haidt, certain forms of conduct cause emotional responses 
in us, and we use those emotions to arrive at the conclusion that the conduct in question 
is good or bad. For example, when we think about incest it causes disgust and we infer 
from this that incest must be bad. Haidt recognizes that we sometimes provide reasons 
for our moral judgments, but he thinks such reasoning is post hoc: emotions lead us to
draw moral verdicts and reasoning is used to rationalize those verdicts once they have
been drawn.

[Figure 1.1 Competing models of moral judgment. Emotions as outputs: reasoning or unconscious rules produce the judgment, which then produces emotion. Emotions as inputs: emotion produces the judgment, with reasoning coming afterward. Dual-process model: reasoning and emotion are alternative routes to judgment. Constitution model: emotion is a constituent part of the judgment itself.]
Emotions as outputs views and emotions as inputs views are sometimes presented as 
diametrically opposed. There are, however, compromise positions that say emotions can 
serve as either causes or effects. Leading among these are dual- process theories, which 
say that moral judgments sometimes result from emotions and sometimes result from
dispassionate reasoning (Greene et al. 2001; Greene 2008). Greene et al. (2001) make 
a further claim, which is that emotions are especially likely to be engaged when con-
duct under consideration involves a direct act of physical aggression against a person, 
as opposed to a crime where violence is indirect or a mere side- effect of some other 
action. Greene (2008) also speculates that categorical rules against killing (“deonto-
logical” rules) stem from emotional squeamishness about this kind of violence. In con-
trast, moral decisions based on estimating comparative outcomes (“utilitarian” deci-
sion procedures) are driven by reason, rather than emotion, on Greene’s story. Other 
mixed models are also imaginable. For example, some authors suggest that emotions are 
heuristics for quick moral decision- making, whereas reason can be used when we have 
time to make decisions more carefully (Sunstein 2005). Greene’s dual- process theory 
can be interpreted as a version of a heuristic view, and I will focus on his account in the 
discussion below.
The contemporary cognitive science literature often gives the impression that these are 
the only theoretical options. Indeed, one might think they are exhaustive: emotions are 
causes of moral judgments, or not causes, or both, depending on the case. But this misses
out on a further alternative, which has had many defenders in the history of philosophy. 
Rather than seeing emotions as causes or effects of moral judgments, one might propose 
that they are constituent parts. To many ears, this sounds bizarre. Contemporary readers 
have difficulty imagining what it would mean for an emotion to be part of a judgment. 
Cognitive science has trained us to think about judgments as something like sentences 
in a mental language. Judgments are made up of something like words, on this view, so 
they cannot contain emotions. The linguistic view of judgments is not the only possibil-
ity, however. Many prominent figures in the history of philosophy, including rationalists 
such as Descartes and British empiricists such as Locke and Hume, claim that judgments
are made up of mental images. For Hume, imagery includes sensory states, such as visual 
and auditory images, as well as emotions. A judgment can be thought of as a simulation 
of what it would be like to experience that which the judgment is about. For example, 
to judge that snow is white might be to form a visual image of white snow. Such simulations can
also include emotions: to judge that sledding is fun might be to form a visual-bodily simula-
tion of sledding together with delight. Kant ([1787] 1997) famously argues that such
empiricist theories of judgment cannot suffice, since the same image (say white snow) 
could correspond to many different attitudes other than judgments (e.g., a desire to see 
white snow). But, in solving this problem, Kant does not conclude that judgments are 
sentences in the head. Rather, he realizes that what makes a mental state qualify as a 
judgment has to do with how it is used in thought (cf. Kitcher 1990). If we use an image 
of white snow in order to draw inferences about snow’s color, then we are, in effect, using 
that image as a judgment. On this view, judgments need not be sentences. They can com-
prise images, emotions, or anything else, provided those things are used in such a way 
that they constitute how an agent takes the world to be.
For empiricists like Hume, every word expresses a sensory or emotional idea. “Snow” 
corresponds to imagery of snow’s appearance and feel. “White” corresponds to a visual
image of whiteness. And so on. In his Treatise of Human Nature, Hume raises the ques-
tion, what ideas in the mind do words like “good” and “bad” express? He didn’t think 
good things or bad things have a characteristic visual appearance. Instead he suggested 
that “good” and “bad” express emotions (or, in 18th- century vocabulary, sentiments). 
Indeed, there are many words that seem to express sentiments. “Fun” is one example, and 
others include: amusing, disgusting, fascinating, confusing, delicious, sexy, and upset-
ting. When we use such terms, we are expressing how we feel about something. On an 
empiricist psychology, these words convey an occurrent emotional state. If I bite into a 
nectarine and say, “This is delicious!” the predicate gives verbal expression to the gusta-
tory pleasure that I am experiencing. By analogy, Hume thought that sentences such as, 
“that action is bad” express a negative emotion. He didn’t say much about what these 
emotions are (more on that below), but one can imagine various feelings of disapproval 
or condemnation filling this role: anger, contempt, guilt, and so on.
Hume’s view is known as “sentimentalism,” and it has had many adherents over the 
centuries. Modernizing the term, one might describe sentimentalism as a “constitution” 
view about the relationship between moral judgments and emotions. Emotions are part 
of what constitutes moral judgments. If one asserts that factory farming is bad, for exam-
ple, this assertion expresses a judgment that literally contains a negative feeling toward 
factory farming (factory farming itself might be mentally represented using a complex 
store of associated imagery, but I will leave that issue to one side). In recent cognitive 
science, there has been an effort to bring back the empiricist conjecture that all thought 
uses sensory and affective states as building blocks (e.g., Barsalou 1999; Prinz 2002). That 
view remains controversial, but here I  am only concerned with a more restricted hy-
pothesis:  it is both coherent and plausible to suppose that certain concepts, including 
the examples listed above, are grounded in emotions. The claim that moral concepts are 
constituted by emotions remains a live possibility, which contemporary cognitive sci-
ence should include in any menu of competing theories.
We are left, then, with four broad classes of theories:  emotions may be outputs of 
moral judgments or inputs of moral judgments, emotions may play both of these roles 
depending on the case, or emotions might be constituent parts of moral judgments. The 
differences between these views are substantive and important. Indeed, they have radi-
cally different implications about the nature of morality. Output views tell us there is a 
nonemotional source of moral judgments. Dual- process theories tell us we can arrive at 
moral judgments in different ways, and they often imply that the rational route is superior 
to the emotional route. Input theories describe emotions as intuitions, which are used 
to support moral judgments, rather than component parts of judgments. Constitution 
theories are by comparison stronger: if emotions are parts of moral judgments, then one 
cannot make a moral judgment without having an emotional state. The debates between 
these competing views are among the most important in moral psychology. They are 
debates about the very nature of morality, debates about how moral decisions are made, 
and what it means to be morally competent. The outcome of these debates would have 
philosophical, scientific, and societal implications.
One might think that extant neuroimaging results can be used to settle which of these 
very different views is most plausible. Indeed, one might assume this is the main goal of 
such studies. Why invest in expensive neuroimaging research on morality if not to settle 
the nature of moral judgments? It is disheartening, therefore, to realize that extant stud-
ies make little progress adjudicating between the theories outlined here. Notice that 
every theory supposes that emotions regularly arise in the context of making moral judg-
ments. Every theory also supposes that non- emotionalaspects of cognition are involved 
(e.g., we can’t morally evaluate a bit of conduct without first representing that conduct). 
Disagreements concern the role and ordering of these components. The problem is that 
extant studies shed too little light on those questions. They show that “emotion areas” of 
the brain are active during moral cognition, and they also regularly implicate brain struc-
tures that are not presumed to be emotion areas. But they tell us little about how these relate.
To put it bluntly, every model presented here is consistent with every study cited in the 
previous subsection.
1.2. Beyond the Brain: Finding the Place of Emotions in Moral Judgment
The fact that extant neuroimaging cannot decide between competing models of moral 
judgment should not be a cause for despair. Few authors of these studies claim to have 
provided decisive evidence in favor of any of the theories just surveyed. (For a notable 
exception, see Greene et al. 2001, whom I will come to in a moment.) Authors of these 
studies clearly, and rightly, take themselves to be establishing other things, such as the 
generalization that emotions are involved in moral judgment or, more often these days, 
that different kinds of moral judgments recruit different resources. At present, the best 
way to adjudicate between competing theories of moral judgment is to use a combina-
tion of behavioral research, some evidence from pathological populations, and attendant 
philosophical argumentation. I will argue that there are good reasons for rejecting most 
of the models under consideration. The constitution model, I will argue, enjoys the most 
support. I will then describe how this model can be used to make sense of imaging
results. In other words, I will suggest that a theory of morality can be used to decipher 
the moral brain, rather than conversely.
 
 
1.2.1. Emotions as Outputs
Let me begin with the view that emotions are outputs of moral judgments, with either 
reasoning or unconscious dispassionate rules serving as the primary input. Strictly un-
derstood, such models predict that moral emotions should not influence our moral judg-
ments. To assume otherwise would be to concede that emotions can be inputs as well as 
outputs.
Here the evidence is quite clear. Numerous studies have shown that induced emo-
tions can influence our moral judgments. For example, in one early study, Wheatley and 
Haidt (2005) found that hypnotically induced disgust makes moral judgments more 
severe. The effect was small, but subsequent work has robustly confirmed the basic effect. 
Severity of moral judgments increases when disgust is induced by filth, film clips, and 
memories (Schnall et al. 2008) as well as by bitter beverages (Eskine et al. 2011). It has
also been shown that individual differences in disgust sensitivity correlate with more 
stringent moral attitudes in certain domains (Inbar et al. 2009).
Defenders of output models might counter that emotions can impact all kinds of 
mental operations even if those operations are not themselves emotional. Perhaps dis-
gust would impair people’s performance on math problems, for example. Such a reply 
misses the point, however. It’s not just that disgust impacts moral judgment. It does 
so in a very particular way. Based on predictions by Rozin et al. (1999), we show that 
induced disgust increases severity of judgments about crimes against nature, but not 
crimes against persons (Seidel and Prinz 2013a). We also show that anger has the oppo-
site pattern. In other work, we show that happiness increases positive moral judgments 
and anger brings them down (Seidel and Prinz 2013b). The pattern of emotional impact 
is highly specific. Different emotions have distinctive and predictable contributions. 
They are not noise in the system, but rather a core source of information that people use 
when expressing their moral attitudes.
There is also a more concessive response available to those who are inclined toward 
output models. They can grant that emotions increase the intensity of moral judgments, 
while denying that emotions are the basis of moral judgments (e.g., Pizarro et al. 2011; 
Decety and Cacioppo 2012; May 2014). I find arguments for this modest view uncon-
vincing. Pizarro et al. suggest that emotions are merely moral amplifiers on the grounds 
that emotions such as disgust have domain- general effects, making judgments of many 
kinds more negative. This generality is precisely what I just disputed; new evidence sug-
gests that each moral emotion has highly specific effects. Decety and Cacioppo use high- 
speed neuroimaging to argue that emotions serve a gain- function, modulating intensity, 
but not causing moral judgments. Their evidence stems from the fact that brain areas 
associated with intention attribution come online before areas associated with emotion 
when making a moral judgment. But this would be predicted by any model: an action 
must be classified before it is assessed as morally good or bad. Intention attribution is 
part of that classification process, and it is certainly not sufficient on its own to qualify as
a moral judgment (we regularly recognize intentions outside of the moral domain). May 
is primarily bothered by the fact that emotion-induction studies rarely cause judgments
to flip from one side of the moral spectrum (e.g., morally permissible) to the other (e.g., 
morally impermissible). This is an unfortunate artifact of design, since researchers have 
used valenced vignettes rather than neutral vignettes, so baselines tend to fall on one side 
of the scale. That said, we have reported results where emotions cause movement across 
the midpoint of a scale. In Seidel and Prinz (2013b), we show that, on a nine- point scale 
anchored at “not good” and “extremely good,” the mean response to vignettes about 
helping was 6.9 when we induced happiness and 4.3 when we induced anger. This is 
stronger than making a neutral vignette negative, which is what May requests; it is a 
case in which a positive vignette becomes negative! In any case, once defenders of output 
models grant that emotions can amplify moral judgments, they have rendered their posi-
tion unstable. Why should emotions have any effect? For output models, this is an ad 
hoc concession to save the theory, not a principled prediction.
The case against output models can be strengthened by considering the proposed 
inputs that they envision. Moral rationalists, for instance, say that moral judgments arise 
through a process of reasoning. This can be challenged both empirically and philosophi-
cally. Empirically, it has been shown that people are often very bad at articulating rea-
sons for their moral judgments (Haidt 2001; Hauser et al. 2007). Philosophically, Hume 
([1740] 1978) and others have argued that no amount of reasoning can suffice for a moral 
attitude. By analogy, reasoning alone cannot tell us whether something is funny, deli-
cious, sexy, boring, or annoying. Such affect- laden concepts require prior emotional dis-
positions. Something is funny only against the background of a certain sense of humor. 
Importantly, we can use reasoning to ascertain whether something is funny if a sense of 
humor is presupposed. If I know you like language play, I can give you reason for think-
ing you will be amused by Lewis Carroll. If I know you find philosophy boring, I can 
give you reasons for thinking you will be bored by Alfred North Whitehead. Likewise, if 
I know you don’t like cruelty, I can use reasoning to alter your moral opinion about fac-
tory farming. What can’t be done, Hume argues, is to arrive at a moral opinion by reason 
alone (elsewhere I argue against Kant’s effort to prove that this is possible; Prinz 2007).
It is not easy to find moral rationalistsin the psychology literature, but there is at least 
one prominent group of researchers who might be classified this way: Turiel (2003) and 
his collaborators. For Turiel, reasoning can tell us, for example, that an action is harmful 
or unjust, and, once we see that we will judge that it is wrong. But this conjecture gains 
plausibility only when we recognize that “harm” and “injustice” are already morally 
loaded words. Consider “harm.” If a teacher intentionally makes a student work hard, or 
a gym instructor intentionally causes her class to endure exhaustion and muscle pain, we 
don’t call this harm. We are reluctant to use the word in contexts of self- defense, sports 
(think of boxing), surgery, and other cases where pain is knowingly inflicted by one 
person on another. Likewise for “injustice.” Most distributions in life are not equal. For 
example, earnings depend on where one lives, what one does, how much one works, how 
much one produces, and so on. To decide whether unequal division in any of these cases 
is unjust, one must have moral views about wealth distribution. I don’t mean to deny 
that reasoning is important for making judgments about harm and justice. For example, 
if we learned that someone harms others for fun, we may become confident that this is a 
case of moral wrongness. But that itself is not an entailment of reason. Those who do not 
regard hurting someone for fun as wrong will not come to the same conclusion. Think 
of societies that have blood sports, for example. To go from “That’s a case of hurting for 
fun” to “That’s morally wrong,” we need a bridge principle, which says that “hurting for 
amusement is morally wrong.” Turiel has an analysis of what it is to construe something 
as morally wrong. Morally wrong actions are those that are regarded as serious and inde-
pendent of authority. He also says that such judgments of moral wrongness are justified 
by appeals to empathy. But this analysis does little to help the rationalist. Empathy is 
clearly an emotional construct, and, less obviously, judgments of seriousness may cor-
relate with emotional intensity. There is also evidence that emotions are involved in 
judging that something is true independent of authority. Blair (1997) has found that judg-
ments of authority independence are diminished in psychopaths, who have emotional 
deficits, and Nichols (2014) found that induced emotions increase perceived authority 
independence. This suggests that Turiel’s account of moral wrongness is implicitly an 
emotional account: his operationalization is symptomatic of underlying emotions.
As noted, many psychologists who defend output models are not rationalists. The
theory that has usurped rationalism is sometimes called “moral grammar” (Dwyer 
1999; Mikhail 2007). The basic idea is that we arrive at moral judgments by applying 
unconscious rules, like the rules of syntax. These rules are used to assess the underlying 
structure of an event. For example, we unconsciously assess things like intentions and 
outcomes: Was this a case of battery? Was the outcome foreseen? Was it a side effect 
of something else? Evidence for such subtle action assessments comes from research on 
trolley dilemmas. It turns out that wrongness judgments in trolley cases are graded. 
Intentionally and directly killing someone (pushing a man in front of a trolley to save 
five others) is regarded as impermissible, while killing someone as a side effect (diverting 
a trolley to a side track where it will kill one instead of five) is regarded as permissible, 
and various cases fall between these options. For example, permissibility judgments fall 
between the two extremes when people imagine a case where a trolley that was heading 
for five people is diverted onto a looping track, where it will be stopped by hitting a large 
man, but would have otherwise continued back onto the main track, killing the five. 
Moral grammarians think this is evidence of subtle rules, and they think these rules can 
be applied dispassionately. I think both claims can be challenged.
Intuitions about trolley cases may derive from a simple mental computation, known 
to be pervasive in human categorization: the use of prototypes. Suppose we have a proto-
type for “murder”; it may be something like physically aggressing against another person 
with the explicit intention of taking his or her life. The case where a person pushes some-
one into the path of a trolley would qualify. But as the elements of the prototype weaken 
(e.g., if it is not intentional), the category becomes less applicable. Thus, in the case of 
killing someone as a mere side effect, there is no direct assault on a person and no in-
tention to take a life, merely foreknowledge that a life will be lost. In the case with the 
looping track, one must intend for a person to die, because that death is a crucial step in 
stopping the advance of the train. So it is a borderline case of murder. One simple rule is 
all we need, not a complex moral grammar.
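
To make the prototype idea concrete, here is a minimal sketch of how graded trolley intuitions could fall out of similarity to a single prototype. The features and weights are illustrative assumptions of mine, not values drawn from the chapter or the empirical literature.

# A toy prototype for "murder": direct physical aggression plus an
# explicit intention to take a life. Weights are illustrative assumptions.
MURDER_PROTOTYPE = {"direct_assault": 0.5, "intends_death": 0.5}

CASES = {
    "push":   {"direct_assault": True,  "intends_death": True},   # push a man onto the track
    "loop":   {"direct_assault": False, "intends_death": True},   # the death is a needed means
    "divert": {"direct_assault": False, "intends_death": False},  # the death is merely foreseen
}

def murder_similarity(case):
    """Weighted share of prototype features that the case instantiates."""
    return sum(w for feature, w in MURDER_PROTOTYPE.items() if case[feature])

for name, case in CASES.items():
    print(name, murder_similarity(case))
# push -> 1.0 (clear case), loop -> 0.5 (borderline), divert -> 0.0 (poor fit)

A single weighted prototype reproduces the ordering of intuitions that moral grammarians attribute to a system of subtle unconscious rules.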
Moreover, even if there were a complex set of action representations, that wouldn’t 
necessarily suffice for making moral judgments. Everyone must agree that we are ca-
pable of action perception, and that requires attributions of intentions, outcomes, and 
so on. So, at one level, the postulation of unconscious rules for parsing actions is un-
controversial. What becomes controversial is the label “moral” and the assumption that 
certain action representations suffice for judgments of wrongness independent of any 
emotional attitudes. Indeed, the belief that action representations suffice for moral at-
titudes is precisely the same as what rationalists claim, minus the supposition that such 
representations are conscious. Moral grammar is just unconscious rationalism. As such, 
it has the same weaknesses. There are cases where the same action representation leads 
to different moral verdicts in different individuals. For example, there is variation in 
intuitions about trolley cases. These differ depending on gender (Mikhail 2000)  and 
personality (Holtzman 2014), and intuitions also shift with some frontal brain injuries 
(Koenigs et al. 2007). Presumably actions are parsed the same way but attitudes differ. 
There is also evidence linking such differences to emotions. Holtzman (2014) found that 
neuroticism— an emotional construct— correlated with less tolerance in track- diverting 
cases, and Valdesolo and DeSteno (2006) found more tolerance for pushing cases with
positive mood induction. Once people have represented the action, they must decide 
how bad it is, and such findings suggest they do so by consulting their emotions.
I conclude that there is no strong evidence for the view that emotions are merely out-
puts of moral judgment. People use emotions in arriving at conclusions about moral 
significance.
1.2.2. The Dual- Process Model
Given the undeniable evidence that people use emotions in making moral judgments, 
there is no hope for the view that such judgments always arise from pure reason. There 
are, however, researchers who think that moral judgments sometimes arise from pure 
reason. These are dual- process views. They claim that we have two ways of arriving at 
the verdict that something is morally good or bad: an emotional way and a rational way 
(Greene et  al. 2001; Greene 2008). Dual- process views are popular in other domains 
of psychology. For example, Greenwald and Banaji (1995) explain implicit racism by 
saying that people who are rationally committed to the view that all ethnicities are 
equal also harbor emotional biases against members of minority groups. There is a ratio-
nal high road, on this view, and an emotional low road. The vision suggests a crown of
dispassionate and deliberative human intellect perched on a lower animal brain, which 
works by automatic urges and instincts. Perhaps morality travels both of these pathways.
Or perhaps not. One problem for dual- process theories is already on the table. It is not 
clear how reasoning can ever suffice for a moral judgment. But there are also problems 
specific to Greene’s proposal. Greene’s rational pathway is utilitarian. It basically com-
putes the amount of good or bad that would result from an action and compares this to 
the good or bad produced by alternative actions. The story is incomplete, of course. We 
are not told what the good and the bad consist in or how these comparative computa-
tions take place. This is an important oversight, because the good and the bad may them-
selves be affective. The good may be the set of outcomes that the moral deliberator views 
positively and the bad may be outcomes that are viewed negatively, where the positive 
and negative are reducible to emotional states. Comparing good and bad could involve 
weighing conflicting emotions.
Greene is impressed by the fact that utilitarian calculations depend on math, so
there is something rational, he thinks, about preferring less bad to more bad. But this 
is a fragile argument for rationality, given that one could have a weighting scheme 
where deontological violations were simply weighted as extremely bad. Granted, if 
one were comparing two outcomes that were equal in every way other than number 
of lives lost, math alone might settle what to do. But this does not mean that the de-
cision is purely mathematical. The initial assignment of badness value and the belief 
that it is good to minimize badness are required as well. The credo, “Maximize the 
good and minimize the bad!” is itself a norm. It is not merely a statement of fact or a 
deliverance of reason. One could reject the norm (as some Kantians do). Those who 
embrace it, like Greene, Peter Singer, Jeremy Bentham, and John Stuart Mill, are 
quite passionate about it. They think it’s bad to do otherwise, and they would criti-
cize those who do not follow utilitarianism. It seems then that utilitarian delibera-
tion is not just math, or any other form of pure reason, but is rather a commitment 
to the moral significance of mathematical outcomes. We need an account of what it 
is to form such a commitment. Whatever it is, it is not merely a matter of reasoning.
A plausible suggestion is that forming a moral commitment to utilitarianism is an 
emotional attitude, a positive feeling toward this normative theory. Or it might turn 
out that utilitarian convictions are just a consequence of first- order passions. One 
might feel strongly that one should help people in need. When confronted with op-
portunities to help one of two groups, where one group has more people in need, a 
person with the conviction about helping people in need will feel that sense of obliga-
tion more intensely when thinking about the larger group. In summary, we could not 
choose between numerical outcomes if we didn’t assign them different values, and 
the assignment of values may be an emotional matter, either at the first order, or at 
the level of general principle, or both.
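
A small sketch can make the point vivid. The numbers below are illustrative stand-ins for emotionally assigned values, not measured quantities; the arithmetic is identical for both agents, and only the value assignments differ.

def choose(options, value):
    # The purely "mathematical" step: sum each option's outcome values
    # and pick the maximum.
    return max(options, key=lambda name: sum(value[e] for e in options[name]))

options = {
    "push":    ["kill_one", "save_five"],  # push one person to save five
    "refrain": ["let_five_die"],           # do nothing
}

# Two hypothetical weighting schemes, standing in for emotional attitudes.
moderate_aversion = {"kill_one": -2,   "save_five": 5, "let_five_die": -5}
strong_aversion   = {"kill_one": -100, "save_five": 5, "let_five_die": -5}

print(choose(options, moderate_aversion))  # "push": 3 beats -5
print(choose(options, strong_aversion))    # "refrain": the violation is weighted as extremely bad

The comparison itself is trivial arithmetic; everything of moral interest is packed into the value assignments, which is exactly where the emotions may reside.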
I suspect Greene would agree with this, if pressed. He sometimes concedes that there 
are emotions behind utilitarianism. They can arise as outputs of utilitarian calculations, 
of course, and Greene (2007) even admits that they can be inputs. He clarifies, however, 
that they are not “alarm bells” that cry out “Don’t do this!” or “Do this!” but rather 
calmer passions that can be taken up for consideration and bypassed if reason demands 
it. On this version of Greene’s theory, it is hard to tell whether he thinks moral judg-
ments ever occur without emotional inputs, but he does seem to uphold the view that 
reason is in the driver’s seat in some cases. Greene’s early defenses of the dual- process
model suggest a firm stance on this question. There Greene implies that moral judgments 
are sometimes reached in a purely rational way, which is to say dispassionately. I don’t 
find the evidence he offers convincing. For example, Greene et al. (2001) report that the 
emotions are highly active when people reflect on moral dilemmas in which they must 
consider causing direct intentional harm, as in the case of pushing someone in front of 
a trolley; and emotions are less active in the track- diverting case, where the dorsolateral 
prefrontal cortex seems to play a significant role. Does this show that some dilemmas 
travel the emotion path while others use pure reason? No. DLPFC activity is predictable in
any case where people begin taking numbers into account, but it doesn’t follow that such 
decisions are dispassionate. As compared to nonmoral dilemmas, Greene et  al.’s data 
suggest that these cases also engage the emotions. Moreover, the subtraction methodol-
ogy may underestimate the amount of emotionality here. Compare the pushing case to 
the diverting case. In both a trolley is speeding toward five people, whom the participant 
in the study would like to help. That desire to help— the conviction that it would be mor-
ally good to help five people in need— may be grounded in an emotional state. But the 
neural correlates of this emotional state are rendered invisible, because saving five people 
is held constant across the two comparison conditions; it is subtracted away. Instead, 
we are left with a comparison of pushing someone to his death versus switching a trol-
ley track with fatal consequences, and, unsurprisingly, the former is more emotionally 
intense than the latter.
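
The worry about subtraction can be stated with toy numbers. The values below are made-up signal units, assumed only for exposition:

# Toy illustration of how subtraction hides emotion that is common to
# both conditions. All values are invented units.
shared_desire_to_help = 4.0  # assumed response to "five in peril", present in both conditions
push_specific   = 3.0        # assumed extra response to pushing a person
divert_specific = 0.5        # assumed extra response to flipping a switch

push_signal   = shared_desire_to_help + push_specific    # 7.0
divert_signal = shared_desire_to_help + divert_specific  # 4.5

print(push_signal - divert_signal)  # 2.5: only the difference survives the contrast
# The shared component (4.0) cancels out, so an emotional contribution
# common to both conditions is invisible in the subtraction.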
Subtraction methodology may also be a factor in other cases where people report 
moral decision- making in the absence of emotions. In all the fMRI studies that have 
been published on moral judgment, I have only come across one condition in one study 
where moral judgment was said to show less emotional response than a control condi-
tion. This is a condition in a study by Borg et al. (2006). But their control condition is a 
scenario in which participants have to imagine a fire encroaching on a precious flower 
garden— hardly a neutral vignette. The fact that this scenario elicits more emotion than 
one of the moral scenarios they used is not very surprising. It is also an outlier. The rest 
of their moral scenarios were more emotional on average than control scenarios, and this 
has been the pattern in study after study.
With respect to utilitarian intuitions, I would propose the following emotion- based 
account. I think people tend to make utilitarian judgments in cases where a perceived 
good (saving five) outweighs a perceived bad (letting one die). Far from being dispassion-
ate, this would be a situation where a strong positive emotion outweighs a weaker negative 
emotion. This interpretation fits with Greene’s data. Positive emotions associated with 
saving are subtracted from his analysis, but there is evidence for residual, albeit weak, 
emotionality, which may reflect the wrongness of letting die. The proposal that positive 
emotions underlie the desire to help enjoys considerable empirical support. There is a 
big literature linking positive emotion and prosocial behavior (e.g., Isen and Levin 1972; 
Weyant 1978), and we have done work showing that induced positive emotions lead to 
increased sense of moral obligation (Seidel and Prinz 2013b). There is also work show-
ing that induced positive emotions greatly increase the likelihood that people will hurt 
someone in order to save five people in trolley dilemmas (Valdesolo and DeSteno 2006).
Proponents of the dual- process model need evidence that people can make some 
moral decisions in the absence of emotions, on the basis of reason alone. This interpre-
tationhas been suggested in the literature. Koenigs et al. (2007) and Ciaramelli et al. 
(2007) have shown that people with injuries in the VMPFC are more likely than a con-
trol population to make utilitarian decisions in trolley dilemmas. The VMPFC is a hub 
for emotional coordination, so this might seem to suggest that these patients moralize without emotions
and their moral values align with utilitarianism. But, as the authors of these studies note, 
VMPFC patients are not lacking in emotions. They are actually highly emotional. The 
deficit most characteristic of such injuries is a problem with using one emotion to miti-
gate another. In particular, VMPFC patients do badly on gambling tasks in which the 
pursuit of valuable playing cards (a positive emotional drive) should be stopped in light 
of the discovery that those high- value cards are interspersed with even higher losses (a 
negative emotional cost) (Bechara, Damasio et al. 1994). Thus, the joy of winning cannot 
be adequately dampened by the sting of losing. This is basically the structure of the push-
ing case in the set of trolley dilemmas: the joy of saving five is normally arrested by the 
sting of killing one. The fact that VMPFC patients tend to opt for saving five in such 
scenarios exactly replicates their gambling performance. This is not an absence of emo-
tions (they are eager to win and take delight in victory), but rather a failure of emotional 
regulation.
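
On this reading, the deficit can be sketched as a single regulation parameter that scales how much a negative emotion dampens a positive one. This is a toy model under my own assumptions, not an analysis from the gambling or lesion studies:

def approach(positive_pull, negative_sting, regulation):
    """Act iff the positive pull survives the regulated negative sting."""
    return positive_pull - regulation * negative_sting > 0

# Gambling task: the lure of high-value cards vs. the sting of larger losses.
print(approach(5.0, 8.0, regulation=1.0))  # False: intact regulation stops the draws
print(approach(5.0, 8.0, regulation=0.3))  # True: a VMPFC-like deficit keeps drawing

# The pushing dilemma has the same structure: the joy of saving five
# (positive pull) against the sting of killing one (negative sting), so the
# same low regulation parameter yields the "utilitarian" choice without
# any absence of emotion.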
A stronger piece of evidence for dispassionate moralizing comes from a study by 
Koven (2011). Koven looked at moral reasoning in people who score high in alexithymia, 
a condition characterized by diminished awareness of emotional states. Like VMPFC
patients, people with alexithymia make more utilitarian decisions than controls, and, 
in this case, that looks like a consequence of diminished emotionality. A closer look at 
the data, however, tells a different story. For one thing, Koven did not find that utili-
tarian decisions decreased with attention to negative emotions, which is what a dispas-
sionate reasoning account would predict. She did find a negative correlation between 
utilitarianism and clarity of emotions, but there is a simple explanation for this, which 
doesn’t posit dispassionate reasoning: a person who lacks emotional clarity may have a 
hard time deciding whether the sting of taking a life is greater than the joy of saving five. 
The most damning result in the study is that utilitarianism is negatively correlated with 
three measures of verbal intelligence, suggesting that such decisions do not rely on pure 
reason. I think the best interpretations of these results are as follows. People with high 
alexithymia scores have emotions, but their awareness of these emotions is somewhat 
limited. They have enough awareness to make moral decisions, but can get confused in 
dilemmas. This is especially likely in cases where, for people without alexithymia, one 
strong emotion trumps another (the pushing case). In other cases (the diverting case), 
the negative emotion is weak enough to have little impact. Those with lower verbal intel-
ligence probably have lower working memory capacities as well (verbal encoding helps 
keep multiple items in mind), so they get more flustered in the high- emotional- conflict 
cases and choose more or less randomly. Means are not reported in the study, but it is 
reasonable to assume that any increase in utilitarianism would be driven by fluctuation 
in scenarios like the pushing case, since other scenarios tend toward utilitarian responses 
already. A shift in pushing intuitions toward utilitarianism (barring massive reversals, 
which are not reflected in the correlational data) would be a shift toward chance.
In summary, I think there is no strong evidence for dual- process theories. On both 
empirical and theoretical grounds, there is no reason to think that we ever make moral 
judgments without the involvement of emotions.
1.2.3. Emotions as Inputs
So far, I have been making a case for the thesis that emotions are not merely outputs of 
moral judgments, but play a more important role. Emotions are somehow in the driver’s 
seat. This conclusion leaves open various possibilities. In psychology, the most widely 
discussed view of this kind is Jonathan Haidt’s (2001) “intuitionist” model. According 
to Haidt, moral judgments generally arise as the consequence of “intuitions,” which are 
automatic affective responses. In other work, Haidt makes it clear that these intuitions 
are emotions, such as anger, guilt, and contempt (e.g., Rozin et al. 1999). The idea is that 
we experience an emotional response and then, on that basis, settle whether an action 
under consideration is right or wrong. This is an “emotion as input” model.
Haidt also grants that people offer reasons for their moral conclusions, but he insists 
that reasoning is normally post hoc: we use reasoning after we have already arrived at a 
moral verdict. Such reasoning serves to rationalize verdicts that were arrived at nonra-
tionally, which is to say, by way of emotional intuitions. For Haidt, emotions are inputs 
to moral judgments and reasoning is an output. Haidt’s evidence for this model includes
his study on moral dumbfounding in which participants are presented with examples of 
cannibalism and incest that are designed to disqualify the reasons that people usually 
give when condemning these behaviors. The incest scenario involves consenting adult 
siblings who use birth control, and the cannibalism scenario involves a doctor who eats 
part of an unclaimed cadaver. Most participants continue to insist that incest and can-
nibalism are wrong even in these cases, but they admit that they cannot provide good 
reasons. This dogmatic insistence suggests that, in more typical cases, the reasons that 
people give are actually inert. We stick to these norms whether or not our favorite justi-
fications apply. Haidt also cites his own research on emotion induction to suggest that
emotional intuitions can be sufficient for drawing moral conclusions even in the absence 
of prior reasons. For example, in one study he found that people who were hypnotically 
induced to feel disgust expressed moral misgivings about a person who was described as 
exceptionally good (Wheatley and Haidt 2005).
The intuitionist model is very appealing and fits well with the evidence that I have 
reviewed thus far. On closer examination, however, it may not turn out to be the best 
way to characterize the relationship between emotions and moral judgments. Ironically, 
I think Haidt underestimates the role of both emotions and reasoning. Let me take up 
both of these points, beginning with reasoning.
Haidt says that reasoning in the moral domain is post hoc, and this is a plausible 
claim in many cases. But why think it is true? There do seem to be cases where reasoning 
is used to change moral opinions. An oft- cited example is animal rights. Peter Singer’s 
book Animal Liberation seems to have persuaded many people to become vegetarians or 
oppose factory farming. There are also many debates about public policy that seem to 
play a role in shaping public opinion. Arguments have been given for or against social-
ized healthcare, various military actions, and gay marriage. Some of these arguments 
seem to persuade. To see this, we can conjure up a policy decision that has not yet gotten 
much public discussion. Suppose we ask about plural marriages and point out that those 
who support gay marriage should, on pain of inconsistency, also favor the legalization of 
polygamy. Someone might be persuaded by this or might identify a legitimate difference 
between the two cases. It seems we must reflect, and perhaps toil intellectually, to decide, and it 
is plausible that reasoning would play an important role in arriving at a verdict. Haidt 
overstates the case when he describes reasoning as post hoc. Indeed, in his dumbfound-
ing study, 20 percent of participants actually change their view and say incest and 
cannibalism should be permitted in the special cases he describes.
In response, Haidt seems to concede that reasoning can in fact play an active role in 
leading to a moral verdict. But he says this happens only rarely. Most of the time, reason-
ing is inert. This concession and restatement of the position is puzzling. Notice, first, 
that it is tantamount to endorsing a dual- process model. In some sense, this is no great 
revelation; Haidt is explicitly committed to dual- process theories, arguing that emotion 
and reasoning are both core cognitive systems. But Haidt’s main narrative focuses on 
the claim that the reasoning system is usually inert in arriving at moral judgments. Yet 
a closer reading reveals that he can (and must) admit that reasoning can influence moral 
judgment. This is a bit like the moral rationalists who admit that sometimes emotions 
drive morality. That, too, is an admission that the dual- process model is right. Thus, the 
three views— inputs, outputs, and dual process— collapse into one. To avoid this embar-
rassing conclusion and establish that intuitionism is a distinctive view that competes 
with these alternatives, Haidt insists that the reasoning route is rarely traveled. But what 
evidence does he have for this statistical claim? It is not even clear what such evidence 
would look like. Haidt would need some way to keep tabs on how people draw moral 
conclusions in their daily lives. He would need to have a measure of when reasoning is 
efficacious as opposed to inert, and a massive reservoir of field data looking at the moral 
judgments that people actually make. He would also need a varied sample. Perhaps some 
people (judges? children? political independents?) are very open to reasoned persuasion, 
while others are not. Haidt offers no evidence to support his conclusion that reason-
ing is usually inert, and it is unlikely that such evidence is forthcoming (for a similar 
argument, see Mallon and Nichols 2011, on what they call “the counting problem”). 
Ironically, Haidt may have done proponents of emotionally based ethics a disservice by 
implying that their position must somehow oppose reasoning, when, as we will see, this 
is not the case.
Let’s turn now to the charge that Haidt underestimates the role of emotions. As a 
preliminary, it is important to see that any emotion- as- input model must draw a distinc-
tion between emotions and moral judgments. In Haidt’s diagram of the model, emo-
tions (or intuitions, which include emotions as the primary example) are one box and 
judgments are another. These are presented as two stages in a sequence of processing. 
Emotional intuitions lead to judgments. If emotions are merely inputs, it follows that 
moral judgments must be something other than emotional states. But what are they? 
Haidt does not say. Notice that we can’t just define moral judgments as sentences, or 
even sentences in the head. Speakers of different languages may draw the same moral 
judgment, and the very words that we use to express moral judgments may express non-
moral judgments. “That’s wrong” can be used to express judgments in many different 
domains. Instead, moral judgments must be the mental states that such words are used 
to express when considering moral scenarios. We need an account of what these mental 
states are. Astonishingly, this fundamental question gets very little discussion in the 
moral psychology literature, and I have not seen any answer in Haidt.
There is also a deeper worry in the vicinity. If moral judgments are some as- yet- 
undefined output of emotions, then they should be the kind of thing that could occur 
without emotion. When there is a causal relationship between two things, it should be 
possible to get the effect without the cause. If emotions cause moral judgments, then 
there should also be moral judgments caused some other way. Haidt implies as much 
when he concedes that, on rare occasions, moral judgments are caused by reasoning. But 
this possibility does not have empirical support. Strikingly, I am not aware of a single 
demonstration of a moral judgment being made in the absence of emotions. When emo-
tions are measured during moral cognition, they are found. Moreover, there is evidence 
that emotional deficiencies lead to corresponding deficits in moral judgment. Blair (1995) 
has argued that psychopaths do not comprehend moral judgments, and he attributes 
this to the fact that they have flattened affective states, particularly anger, sadness, fear, 
and guilt. Some have argued that psychopaths give normal responses on some tests of 
moral competence, but this has only been established where the moral vignettes 
deal with familiar kinds of cases, which psychopaths could come to recognize by memo-
rizing attitudes of their healthy peers. The claim that psychopaths do not understand 
morality hinges on two key findings, which are both noted by Blair: their tendency to 
treat moral judgments as akin to conventions, and their abnormal patterns of justifica-
tion, as measured in the moral/ conventional test, police reports, and moral development 
scales. There is also evidence that people with Huntington’s disease, which decreases 
disgust, show patterns of paraphilic behavior, suggesting an acquired indifference to 
sexual norms (Schmidt and Bonelli 2008). Such evidence is not decisive because it is 
hard to find complete emotional deficits or uncontroversial measures of moral compre-
hension, but the findings are suggestive. Absent evidence for moral judgments without 
emotions, the dissociation predicted by input models like Haidt’s remains resoundingly 
unconfirmed.
In summary, I think Haidt’s model underestimates the impact of reasoning and un-
wittingly implies that moral judgments can occur without emotions. I now want to sug-
gest a model that directly denies the latter entailment.
1.2.4. The Constitution Model
As noted above, philosophers of the sentimentalist stripe have traditionally postulated 
an intimate relationship between emotions and moral judgments. They do not say that 
emotions are inputs to moral judgments. Rather, they say that emotions are constituent 
parts. To form the judgment that something is morally bad is, at least in part, to have a 
negative feeling toward that thing. It could be, for example, a state of outrage, disgust, or 
guilt. These emotions literally constitute the moral attitude. Being outraged at an action 
literally is a judgment that the action was morally bad.
The constitution model differs from input models in key respects. It does not need 
to give a separate story of what moral judgments consist in, since it equates moral judg-
ments with emotional attitudes. It owes some details about these attitudes, which I will 
come to presently, but it doesn’t imply that moral judgments are something over and 
above emotional states. Consequently, the constitution model doesn’t predict dissocia-
tions between emotions and moral judgments. It predicts that when people make genu-
ine moral judgments, they will be in emotional states. It might of course be that people 
can mouth the words, “Joy killing is wrong” or some other moral platitude without get-
ting riled up, as when we mention this judgment rather than using it. But even that is 
empirically dubious. Emotion- priming studies suggest that morally charged words are 
sufficient for evoking an emotional response even when presented in a word list without 
any context (see, e.g., Arnell et al. 2007; Chen and Bargh 1999). There seems to be some-
thing quite automatic about the emotionalresponses that arise when we contemplate 
actions that we regard as morally wrong.
The constitution model faces two prima facie problems, which have led, I think, to 
its comparative neglect in recent moral psychology. First, one might wonder how an 
emotional state could qualify as a judgment. As indicated above, I think this concern 
stems from an overly sentential metaphor for thinking. Since the dawn of cognitive 
science, computer metaphors have led people to think of thoughts as sentences in 
the head. But thoughts can also comprise sensations. If you touch a stove and say, 
“That’s hot!” the sentence expresses a feeling of heat joined with a perception of the 
stove. Likewise, if we taste some wine and say, “That’s delicious!” the sentence will 
express our gustatory pleasure. “That’s wrong” can equally be an expression of an 
emotional state.
A second worry concerns the differentiation of moral judgments and nonmoral emo-
tions. Disgust and anger can arise in nonmoral contexts. So it seems implausible to claim 
that we are making moral judgments every time we experience these emotions. Here 
I concur. I think a moral judgment is not just an emotional state, but an emotion that 
derives from what I call a moral sentiment (Prinz 2007). A sentiment is a standing emo-
tional disposition toward something (e.g., an action type, a trait, a motivation, a person, 
etc.). A moral sentiment is a disposition that causes us to feel emotions of other- blame 
(i.e., anger or disgust) when another person performs an action of a certain type (or has 
a certain trait, etc.), and emotions of self- blame (i.e., guilt or shame) when we ourselves 
perform that action. A  moral judgment is an emotional state that issues from such a 
bidirectional disposition. Thus, to tell whether a state of disgust is a moral judgment we 
must know whether it comes from a moral sentiment. That, in turn, depends on whether 
the object of our disgust would have caused shame if we ourselves had been responsible 
for it.
Consider some examples. Suppose I  experience disgust when I  see someone eat in-
sects. Is that a moral judgment? That depends on whether eating insects myself would 
cause shame. In all likelihood it would not. If I just find insects disgusting, but not im-
moral to eat, I would feel disgust on eating one, not shame. But suppose I  think it is 
immoral to harm insects. Then I would feel ashamed if I ate one. This disposition to 
have self- directed emotions of blame can distinguish moral judgments from nonmoral 
instantiations of disgust and anger. Conversely, the disposition to experience other- 
directed blame can be used to determine when guilt and shame are moral. If some-
one feels ashamed of her grades, or guilty about having survived a tragedy that killed 
others, these are not necessarily moral responses. To qualify as moral, the person who is 
ashamed of her grades would also have to feel disgusted by others who get bad grades, 
and the person who feels survivor guilt would also have to feel angry at other survi-
vors. Without such bidirectional feelings of blame, we do not tend to regard a person’s 
emotions as moral in nature. There is some empirical evidence suggesting that guilt and 
shame complement anger and disgust in the moral domain, but further evidence would 
be welcome (Giner- Sorolla and Espinosa 2011). One prediction is that moral vegetarians, 
as opposed to health vegetarians, would be more likely to report shame if they ate meat.
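The bidirectional criterion lends itself to a toy sketch. The following Python fragment is purely illustrative, assuming invented names throughout (the function, the emotion labels, and the little table of dispositions are expository placeholders, not part of the theory’s formal apparatus); it simply makes explicit how the self-blame test separates moral disgust from mere distaste.

    # A toy rendering of the bidirectional-disposition test described above.
    # All names and the sample dispositions are hypothetical placeholders.

    OTHER_BLAME = {"anger", "disgust"}   # emotions of other-blame
    SELF_BLAME = {"guilt", "shame"}      # emotions of self-blame

    def is_moral_judgment(emotion, action_type, dispositions):
        """An occurrent emotion counts as a moral judgment only if it issues
        from a bidirectional sentiment: the action type elicits other-blame
        when another performs it and self-blame when we perform it."""
        toward_others = dispositions.get((action_type, "other"), set())
        toward_self = dispositions.get((action_type, "self"), set())
        bidirectional = bool(toward_others & OTHER_BLAME) and \
                        bool(toward_self & SELF_BLAME)
        return emotion in (toward_others | toward_self) and bidirectional

    # Disgust at eating insects, absent any disposition to shame, is nonmoral;
    # disgust at harming insects, paired with shame, is a moral judgment.
    dispositions = {
        ("eating insects", "other"): {"disgust"},
        ("eating insects", "self"): {"disgust"},
        ("harming insects", "other"): {"disgust"},
        ("harming insects", "self"): {"shame"},
    }
    print(is_moral_judgment("disgust", "eating insects", dispositions))   # False
    print(is_moral_judgment("disgust", "harming insects", dispositions))  # True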
By postulating that emotions are components of moral judgments rather than mere 
causes, the constitution model assigns a more central role to emotions than input models. 
This may be taken to suggest that the constitution model is less capable than input 
models of accommodating the role of reason in moral judgment. The opposite is the 
case. Input models insist that emotions, as opposed to reasoning, are the primary source 
of moral judgment. The constitution model makes no such claim. It is a claim about 
what emotions are, not how they come about. There are, admittedly, good philosophical 
arguments for doubting that dispassionate reasoning can bring about a moral judgment 
if the constitution model is true. Dispassionate reasoning can tell us how things are in 
the world, but merely factual information can always be regarded with indifference. It is 
always possible to imagine someone who is completely unmoved by anything that rea-
soning can reveal— a person who is, for example, indifferent to mass famine. Fortunately, 
normally developing human beings are not indifferent to such facts. That is because we 
form (through learning or evolution) strong emotional associations. We come to experi-
ence despair when we think about mass famine. Reasoning does not entail despair, but 
our cultivated concern for human life does. So if reasoning uncovers that a government’s 
policies will lead to mass famine, that will lead us to despair. It will also lead us to out-
rage, insofar as we blame the government for bringing about a negative consequence. 
On this picture, reasoning can be an important precursor to a moral conclusion, but not 
reasoning alone. Reasoning can uncover facts toward which we have prior sentiments.
This picture suggests that reasoning and emotion work together in the moral domain. 
We have a set of moral sentiments, which are emotional dispositions toward action 
types, traits, and so on. Then reasoning can be used to tell us whether one of these action 
types has been instantiated. We are outraged at those who cause intentional harm. If we 
realize that a government has intentionally caused harm, outrage will ensue. But it may 
take a lot of reasoned reflection to make this discovery. Likewise, reasoning can tell us 
that factory farms are cruel, and we have negative sentiments toward cruelty. Reasoning 
can tell us that the criminal justice system unfairly targets African Americans, and we 
are outraged by unfairness and disgusted by racism. Such rational derivations allow us to 
make moral judgments about governments, factory farms, and criminal justice systems. 
Any theory of moral judgment must allow this. By insisting that reasoning is normally 
post hoc, Haidt’s intuitionist model neglects such important cases and dichotomizes the 
debate between rationalist and sentimentalist accounts. A sentimentalist who defends 
the constitution model can reply that reasoning can lead to moral judgments provided 
reasoning leads us to discover facts about which we have emotional dispositions.
A moral rationalist might want to push a bit further, arguing that reasoning can 
provide a basis for our emotions. For example, Matthew Liao has suggested (in con-
versation) that the reason- based recognition of human equality can warrant feelings of 
disgust when we encounter racial discrimination. Such a view about reasons entailing 
emotions would be especially plausible on a cognitive view of emotions, according to 
which emotions are cognitive appraisal judgments. I have argued against such views of 
emotion elsewhere (Prinz 2004). Here let me just say that, on the noncognitive view 
I favor, emotions are perceptions of patterned changes in the body, which correspond to 
behavioral dispositions. Disgust is a perception of the body preparing to reject or expel a 
noxious substance. Reasons cannot entail emotions, on this view, because no facts about 
the world entail any practical response. If you notice that your home is on fire, it doesn’t 
follow by entailment that you should try to escape. One could be indifferent to one’s own 
death or even welcome it. Rather, we react to a fire precisely because we care about our 
well- being, and that care is implemented through the emotions that make us runfrom 
flames. Likewise, one could believe that all people are equal but be indifferent to racism; 
for example, one could be indifferent to the fact that people have false beliefs (i.e., beliefs 
in racial inequality)— indeed, we are not in general disgusted by falsehood. When we 
are disgusted by racism, we are not drawing a logical inference; rather we are expressing 
the fact that certain forms of falsehood repel us. Reasoning alone cannot bring us to this 
state. That requires a good deal of social conditioning, unfortunately. But reasoning can 
help us identify subtle cases of discrimination, and, thus, reason is a powerful instru-
ment in determining when we will experience our moral passions.
My defense of the constitution model is an argument to the best explanation. The 
model fits with the psychological evidence and philosophical arguments that I  have 
surveyed here better than competing models. I  think it is the most plausible account 
of moral judgments currently under consideration. Future work may tip the balance in 
another direction, of course. For now I want to assume the model is right and return to 
the neuroimaging data. We will see that, with a model in hand, we can begin to make 
concrete proposals about what different brain structures contribute to moral cognition.
1.3. Labeling Lobes: How Sentimentalism Maps onto the Moral Brain
The constitution model posits the following components in moral cognition. To make 
a moral judgment, we must first categorize an action. That may involve reasoning, and 
will normally involve ascertaining certain facts that are important to our moral values, 
such as whether the action was carried out intentionally. Once we have analyzed the 
action, our moral sentiments will be accessed. Moral sentiments, recall, are emotional 
dispositions. If the action is morally significant, these sentiments will generate an actual 
emotional state. That emotion, bound with the action representation, constitutes the 
judgments that the action is morally good or bad. Different emotional systems will be re-
cruited depending on what kind of action we are considering and whether it is regarded 
positively or negatively. If the task requires that we report these judgments, then we must 
also engage in emotional introspection. We must focus on what we are experiencing. 
Language centers might also be required to verbally express the verdict, and motor sys-
tems might be needed to select a verdict on a computer keyboard or other input device.
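Taken only as a heuristic, these components can be laid out as a simple processing sketch. The Python fragment below is an expository toy under that reading; every name in it is invented for illustration, and nothing here is offered as an implementation of the underlying neural processes. It merely makes the ordering of the stages explicit.

    # A toy rendering of the constitution model's stages, for exposition only.
    # Every name below is a hypothetical placeholder, not an empirical claim.

    def categorize_action(scenario):
        # Stage 1: classify the action and ascertain morally relevant facts,
        # such as whether it was carried out intentionally.
        return {"action_type": scenario["action_type"],
                "intentional": scenario.get("intentional", True)}

    def access_sentiment(action, sentiments):
        # Stage 2: consult standing emotional dispositions (sentiments) keyed
        # to the categorized action type; None marks moral insignificance.
        return sentiments.get(action["action_type"])

    def generate_judgment(action, sentiment):
        # Stage 3: a matching sentiment issues in an occurrent emotion bound
        # to the action representation; on the constitution model, that bound
        # state just is the moral judgment. Reporting it (introspection,
        # language, motor response) would be further, optional stages.
        if sentiment is None:
            return None
        return {"emotion": sentiment, "object": action}

    sentiments = {"intentional harm": "outrage", "cruelty": "disgust"}
    action = categorize_action({"action_type": "intentional harm"})
    print(generate_judgment(action, access_sentiment(action, sentiments)))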
All of these components can be mapped onto the moral brain. The other models 
I  reviewed might end up with similar interpretations of imaging results, but there is 
at least one crucial difference. Only the constitution model posits an identity between 
moral judgments and emotional states. This raises a somewhat embarrassing question 
for other models. If moral judgments are not emotional states, what brain structure is 
their neural correlate? There is no obvious candidate suggested in the literature. We are 
left wondering where moral judgments reside, with no clear proposal for how to find the 
answer. The constitution model provides an answer: moral judgments reside in emotion 
pathways, or, more accurately, in the joint activation of those pathways and brain struc-
tures that represent actions.
Let’s now return to the neuroanatomy and assign functional significance in light of 
what we have learned. First, let’s consider action representations. Here different brain 
structures will serve different roles. The TPJ, including the superior temporal sulcus, is a 
likely key player. This part of the brain has been implicated in theory of mind tasks, and 
would be especially useful in representing intentional actions. It doesn’t follow that we 
should always expect activity in the TPJ. There may be situations where an action type is 
so familiar and so obviously wrong that we don’t need to reflect much on the intentions 
of the agents involved. There is evidence that the TPJ is actually deactivated in such easy cases 
(FeldmanHall et al. 2013). Similarly, when moral decisions require close attention to nu-
merical outcomes, we may see activation in lateral frontal areas associated with working 
memory, such as the dorsolateral prefrontal cortex. But this will not be observed when 
the action in question is so obviously bad to us that we don’t bother to do any math.
Actions about which our moral attitudes are well established may be represented in 
the part of the brain that forms associations between emotions and events. Two key 
structures are the ventromedial prefrontal cortex and the OFC, which can be thought of 
as emotion association areas because they form links between emotions and cognitively 
represented inputs. Another emotion association area is the temporal pole, which is es-
pecially associated with affect- laden imagery (Olson et al. 2007). This structure links 
visually represented information with emotions. In the language of the constitution 
model, these structures are the correlates of our sentiments. They are not the cor-
relates of actual emotional states, but rather play a role in initiating emotions in response 
to information that has been cognitively, perceptually, or imaginatively presented. As 
noted in discussing dual- process models, the ventromedial prefrontal cortex may also 
play a role in adjudicating conflicts between sentiments.
What about the emotions themselves? I have argued elsewhere that emotional states 
are most likely associated with brain structures that are implicated in bodily perception, 
such as the insula and parts of the cingulate cortex (Prinz 2004). This is borne out by 
the evidence, and in some cases, different emotions lead to greater involvement of dif-
ferent areas. For example, Lewis et al. (2012) contrasted brain volumes associated with 
individual differences in emphasis on “individualizing” norms (harm and fairness) as 
compared to “binding” norms (authority, purity, and in- group loyalty). They found ana-
tomical correlates in the subgenual cingulate and insula, respectively. The relationship 
between insula and purity norms fits with prior literature associating this structure with 
the experience of disgust and other emotions that involve a visceral phenomenology 
(Wicker et al. 2003; Critchley et al. 2004). The anterior cingulate has been associated 
with heart- rate regulation (Critchley et al. 2000) and pain (Rainville 1997), making it a 
good candidate for high- arousal emotions. The posterior cingulate has been associated 
with negative emotions as well (e.g., Maddock et al. 2003). It is believed to be a corre-
late of guilt (Basile et al. 2011). All three of these areas, insula, anterior cingulate, and 
posterior cingulate, have been associated with anger (Denson et al. 2008). This should 
remind us that discrete emotions can engage multiple regions, and that different emo-
tions use overlapping brain mechanisms. Such findings are predictable on theories that 
equate emotions with perceived changes in bodily patterns (Prinz 2004). Many distinct 
emotions involve cardiovascular changes, changes in posture, and changes in respira-
tion. One should therefore expect similarities at the level of gross anatomy, but subtle 
differences within regions. Disgust may be the biggest outlier, since its phenomenology 
engages the digestive system more than other emotions.
It is sometimes said that positive emotions have different correlates than negative emo-
tions. In particular, positive emotional responses are associated with the ventral striatum. 
I suspect that striatum structures, such as the nucleus accumbens, are actually the seat 
of positive sentiments (dispositions to feel positive emotions), rather than the positive 
emotions themselves. The OFC