
Deciding Versus Reacting: Conceptions of Moral Judgment and the Reason-Affect Debate

Published: 2017-11-30


By: Benoît Monin
Department of Psychology, Stanford University
David A. Pizarro
Department of Psychology, Cornell University
Jennifer S. Beer
Department of Psychology and Center for Mind and Brain, University of California, Davis

Acknowledgement: We thank Kathleen Vohs, Roy Baumeister, and Alex Jordan for their insightful comments on a previous version of this paper.

Correspondence concerning this article should be addressed to: Benoît Monin, Department of Psychology, Jordan Hall, Stanford University, Stanford, CA 94305. Electronic mail may be sent to:

Recent advances in the study of judgment and decision making have highlighted the role of emotion as an understudied yet powerful contributor to the decision-making process (e.g., Loewenstein, Weber, Hsee, & Welch, 2001; Vohs, Baumeister, & Loewenstein, 2007). The question of the precedence of emotions over reason in affecting judgment has been of particular interest to those studying moral psychology. Indeed, recent debates in the social psychological literature on morality have focused on this very question, with some arguing that when it comes to morality, emotions play the primary causal role (e.g., Haidt, 2001), and others defending the role of higher-order reasoning as an important causal determinant of moral judgment (e.g., Pizarro & Bloom, 2003).

In the first part of this article, we provide a brief summary of the debate over the role of emotion in moral judgment, focusing on the major theoretical approaches in the field. We then suggest that because morality encompasses a broad range of situations, conflicting views regarding the contribution of emotions to morality may reflect, more than anything, an emphasis on different sets of moral encounters. Specifically, if one thinks of the typical moral situation as involving the resolution of a moral dilemma, one is likely to arrive at a model of moral judgment that heavily emphasizes the role of rational deliberation. If, on the other hand, one conceives of the typical moral situation as one in which we must judge others' moral infractions, one may conclude that morality involves quick judgments that have a strong affective component and are not necessarily justifiable by reasoning. In the second part of this article, we use the distinction between these “prototypical” moral situations to provide a preliminary resolution of this debate, as well as discuss, in both situations, how emotions and reason might interact. Finally, we point to some potential directions for future research that have thus far received limited attention, at least in part because of the emotion-reason dichotomy.

Reason Versus Emotion in Moral Psychology

The debate about the competing roles of reason and emotion in moral judgment has a long history, dating back at least two centuries to Hume (1777/1969), who suggested that reasoning was (and should be) guided by emotions, and Kant (1785), who argued (largely in reaction to Hume) for the supremacy of reason in making moral judgments. The debate between reason and emotion was of particular importance in the moral domain because much seemed to hinge on the answer. Specifically, at stake in this debate was the issue of the overall validity of moral judgments (e.g., Ayer, 1952). If one could not ground moral judgments with reasoning, how could one claim that moral judgments were “true” and not simply a matter of preference? Thus, one goal of the rationalist enterprise in moral philosophy was to ground moral laws by deducing them in quasimathematical fashion from a set of basic principles. Conversely, any acknowledgment of the contribution of affective responses to moral judgments seemed to open the door to rampant subjectivity.

Given this background, it may not be surprising that when Lawrence Kohlberg proposed his doctoral dissertation, just 10 years after the end of World War II, he subscribed to a rationalist model of moral judgment. Such an approach was consistent with a belief in the existence of moral absolutes—a concern likely to be particularly relevant for a generation that had witnessed the evils of fascism. In addition, because Kohlberg's approach relied heavily on Piaget's theory of cognitive development (1932) and was trying to distance itself from both the psychodynamic and social learning theories fashionable at the time (see Kohlberg, 1963), it emphasized reasoning as the key to the moral experience. This view was buttressed by his findings that respondents seemed able to articulate sophisticated moral reasoning, that the process of moral reasoning could therefore be observed and recorded by questioning participants resolving hypothetical dilemmas, and that a consistent structure emerged in responses to these hypothetical dilemmas (Kohlberg, 1969; Colby & Kohlberg, 1987). Following Kohlberg's lead, moral psychologists for much of the second half of the 20th century conceived of moral judgment essentially as a form of reasoning (see Krebs & Denton, 2005).

Recently, however, the role of reasoning has come increasingly into question as investigators have documented a number of limits to the reasoning process. For instance, there are a variety of cognitive shortcuts that individuals rely upon when their cognitive capacities are insufficient for the task at hand (e.g., Simon, 1967; Kahneman & Frederick, 2002). This sort of “heuristic” processing enables people to function effectively in a world where information is often incomplete and where time pressures or other demands on their attention prevent individuals from applying the full power of their reasoning capacity. In addition, other researchers have demonstrated that some judgments and decisions appear to sidestep conscious deliberation entirely (e.g., Bargh, 1994; Greenwald & Banaji, 1995; Dijksterhuis, 2004). Finally, in a last assault on the supremacy of reason, investigators have cast doubt on even those cases where people seem able to articulate the causes of their behavior, arguing that such accounts simply reveal people's ability to generate narratives for their own behavior in a post hoc fashion, rather than demonstrating any direct access to the real causes of their choices (Nisbett & Wilson, 1977).

Capitalizing on this assault on rationalism, and on the rebirth of interest in the impact of emotions on decisions (e.g., Schwarz & Clore, 1983; Bodenhausen, Sheppard, & Kramer, 1994; Lerner & Keltner, 2001; Loewenstein et al., 2001), a new view of morality emerged, one that was consistent with Hume's emotionalist vision, not the Kantian rationalism of Kohlberg and his followers. This shift toward emotion has been evident in such diverse fields as philosophy (Prinz, 2006), developmental psychology (Kagan, 1984), social psychology (Haidt, 2001), and neuroscience (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001). Specifically, these emotionalist approaches to moral judgment posit that emotions are the primary causes of moral judgment and behavior, that moral judgments often arrive in the form of quick, affect-laden moral intuitions, 1 and that the rational accounts given by actors often amount to little more than post hoc rationalizations (Haidt, 2001, 2002). As a counterpoint to the sophisticated interview transcripts put forth by the rationalist perspective (e.g., Colby & Kohlberg, 1987), researchers in the emotionalist tradition provide support for their view by showing that moral judgment is often characterized by strong emotional reactions (see Haidt, Koller, & Dias, 1993), and in some instances, respondents acknowledge that they have no rational defense for their position, and yet hold on to it—what Haidt (2001) refers to as “moral dumbfounding.”

These diverging views about the nature of moral judgment seem disheartening for someone new to the study of moral psychology. Ask one researcher and you are presented with a picture of the moral actor as a rational thinker who is able to weigh the pros and cons of moral decisions, reason her way to an answer, and, if necessary, regulate her emotions to support her conclusion. Ask another and you see an actor who is tugged around by strong emotions that dictate her judgments and behavior, who makes snap moral decisions with little to no thought, and who uses affect-driven intuitions to quickly condemn the actions of others. It would not be unreasonable to conclude that researchers in the two theoretical traditions are describing entirely different psychological entities. How can researchers purportedly studying the same topic arrive at such diverging conclusions about human morality?

Two Distinct Prototypical Moral Situations

We suggest that these diverging conclusions have arisen because investigators have started with differing understandings of what constitutes moral judgment and, as a result, have designed methods that capture very different phenomena. We propose that two types of moral situations (or moral “encounters,” see Hoffman, 2000) have received the bulk of the attention in the literature on moral judgment: moral dilemmas and moral reactions. We describe both in greater detail, and argue that understanding the differences between these two situations reveals that both the rationalist and the emotionalist approach provide important insights in the study of moral judgment, albeit as it applies to different settings.

Moral Dilemmas

A tension between conflicting moral claims

The first type of moral situation that has dominated the psychological literature (especially in the developmental literature, as influenced by Kohlberg's approach) is the moral dilemma. Dilemmas, defined here as scenarios constructed to highlight a conflict between two moral principles, seem especially appropriate if one is interested in understanding how individuals engage in moral reasoning. Yet a focus on moral dilemmas as the prototypical moral situation is likely to encourage a model that views moral judgments as primarily caused by complex reasoning. Indeed, Kohlberg's rationalist model is based on a vast amount of evidence gathered from structured interviews using these hypothetical moral dilemmas. Many readers may be familiar with the Heinz dilemma, in which a man must decide between stealing a drug to save his wife or upholding the law and letting her die (Colby & Kohlberg, 1987, also used in Rest's Defining Issues Test, 1986), but it may be useful to provide examples of other dilemmas used by Kohlberg and his colleagues to get a better sense of this approach (Colby & Kohlberg, 1987):

  • A boy must decide between obeying his father and keeping money he has rightfully gained and that his father unjustly requests.
  • A girl must decide whether to tell on a sister who used her savings to go to a rock concert instead of buying clothes for school and lied to her mother about it.
  • A doctor must decide whether to kill a dying patient who is asking for an end to her suffering.
  • A Marine captain must decide between ordering a man to go on a fatal mission, enabling him to lead the rest of his men to safety, and sacrificing himself, leaving his men to their own devices.
  • A man must decide whether to report a prison escapee who used to steal food and medicine for his family and has now become a major benefactor to the community.

It is important to notice two constants in all of these examples that reflect a particular model of morality and are likely to influence the conclusions reached by investigators. First, participants are invited to advise the actor (e.g., “Should Heinz steal the drug?”, “Should Dr. Jefferson give her the drug that would make her die?”, “Should the captain order a man to go on the mission or should he go himself?”), effectively taking a first-person perspective and comparing possible outcomes rather than reacting to a fait accompli. Because the actor has yet to make a choice, the focus is squarely on the decision-making process, making evident the importance of reasoning in determining what should be done. The second important feature of this methodology is that these dilemmas are constructed to highlight a clash of moral duties: Actors in the vignettes often have to decide between two morally right but incompatible courses of action, such as upholding the law versus saving one's wife, obeying a parent versus retaining rightfully gained property, or directly sparing a man's life versus saving the life of the rest of the company. This again seems to tip the scales toward deliberative reasoning because one's immediate reaction tends to be inconclusive. Importantly, the verbal probing that accompanies the Kohlbergian interview method is not aimed at justifying the judgment itself as much as exploring the cognitive buttressing that accompanies it (e.g., “Does it make a difference whether or not [Heinz] loves his wife?”, “Is it important for people to do everything they can to save another's life?”, “Suppose it's a pet animal he loves. Should Heinz steal to save the pet animal?”). So not only are these dilemmas constructed to trigger deliberative reasoning, but the standard interview questions prompt respondents to generate sophisticated accounts.

To be sure, this approach has proved extremely helpful in exploring the cognitions involved in moral decision-making in particularly complex situations, and it has helped identify reliable individual differences in the type of arguments that respondents use across dilemmas. Yet it is important to realize that this focus on studying situations that require heavy reasoning was bound to yield a view of moral judgment based on reason, or what Blasi (2004) called Kohlberg's “momentous decision to consider understanding as the core of morality” (p. 338). It is certainly possible that the dilemma approach is the most effective way to study the moral reasoning process. At the same time, a full-blown theory of moral psychology that focuses on only one aspect of moral judgment may paint a picture of morality that is biased, in this case in favor of rationality.

The interplay of emotion and reason in moral dilemma situations

It is important to highlight that even if a focus on moral dilemmas might lead to a bias toward cognitive models of morality, the dilemma approach can also contribute to an understanding of how emotion and reason interact by incorporating recent advances in the study of emotion in decision-making. As noted above, the decision-making literature, using similarly hypothetical (though not necessarily moral) choice situations, has started recognizing the importance of anticipated emotions (e.g., Mellers, Schwarz, & Ritov, 1999) as inputs in the decision process. Despite the emphasis moral dilemmas place on explicit reasoning and analytical consideration, they also come with their share of emotional content, including the anticipated guilt that one associates with each of the options (Hoffman, 2000; Tangney & Dearing, 2002). Though emotions may seem less legitimate than reason as explanations for behavioral choice (Haidt, 2001), the recent decision-making literature suggests that they may be guiding decisions when people choose the option that they see as least likely to yield guilt (anticipated emotion), or that generates the least discomfort at the time of the dilemma (anticipatory emotion, see Loewenstein et al., 2001).

Emotions can also play a role in moral dilemma situations without arising from the options themselves. Sometimes strong emotions arise from the experience of being torn between the options (Luce, Bettman, & Payne, 1997; Tetlock, Kristel, Elson, & Green, 2000), leading some to avoid the decision altogether in order to avoid this aversive state (Anderson, 2003). Other times, emotions that are unrelated to the options at hand (incidental emotions) end up influencing the decision: Lerner and Keltner (2001) showed that fearful individuals favor safer options, whereas anger leads people to take more risks. Based on Rozin et al.'s CAD hypothesis of a correspondence between moral emotions and violations of distinct domains of morality (Rozin, Lowery, Imada, & Haidt, 1999), we can imagine that when resolving a dilemma pitting diverging moral claims against one another, angry individuals might favor claims related to autonomy, whereas disgusted individuals might favor those related to divinity. These predictions are speculative, but future research could use the dilemma encounter to study the role of emotions. Introducing emotions in the study of moral dilemmas might be more convincing to investigators focusing on this type of moral encounter than showing the role of emotions in vastly different moral situations—such as the ones we now turn to.

Moral Reactions

Witnessing shocking transgressions

As described above, some recent work in moral psychology (e.g., Haidt, 2001; Greene et al., 2001) stands in sharp contrast with the Kohlbergian view that morality results from complex reasoning. Instead, this view proposes that moral judgments are quick and affect-laden, and the post hoc rationalizations readily articulated by respondents may have little to do with their original impetus. Another (less explicit) difference between these and previous approaches is the type of situation considered to be prototypically moral. In this approach, the moral judgments studied usually take the form of reactions to the moral infractions of others (“Person A performed behavior X. Is this wrong?”). Whereas the moral dilemma tradition described above used a first-person perspective on possible future behaviors, studies in the emotionalist tradition typically focus on reactions to the behaviors of others. And when a dilemma is considered, instead of the excruciating tensions arising from conflicting codes as in the Kohlbergian tradition, in this approach one option is typically shocking, leading to an immediate (affective) reaction. A participant might be presented with the following story: “A family's dog was killed by a car in front of their house. They had heard that dog meat was delicious, so they cut up the dog's body and cooked it and ate it for dinner,” and then asked “What do you think about this? Is it very wrong, a little wrong, or is it perfectly okay?” (Haidt, Koller, & Dias, 1993, p. 617). This approach centers on moral intuitions, defined as “the sudden appearance in consciousness of a moral judgment, including an affective valence (good-bad, like-dislike), without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion” (Haidt, 2001, p. 818).
This quote reveals how this approach puts moral judgment squarely within the research on affective reactions (Zajonc, 1980) and unconscious processes (Greenwald & Banaji, 1995), and a consideration of the situations used in this research (such as the dog situation above) reveals how such a view of morality could emerge. Other typical questions in this tradition include whether it is wrong to masturbate inside a dead chicken before you eat it or to have sex with a sibling (Haidt, 2001; Haidt, Koller, & Dias, 1993), whether it is okay to sell your daughter to child pornographers, to hire a stranger to rape your own wife, or to kill a man for money (Greene et al., 2001). We list these somewhat arresting examples on purpose to contrast their immediate emotional impact with the much more complex and cerebral examples of the rationalist tradition that we illustrated in the previous section. 2 The theory of morality resulting from using these examples emphasizes quick emotional reactions like disgust or contempt when deciding whether something is moral or not, and these social emotions should undoubtedly be part of the moral picture. In contrast to the complexity of the moral dilemma approach, the simplicity of the moral reaction approach has made it easier to study moral judgment in a variety of settings, from the suburbs of Porto Alegre (Haidt, Koller, & Dias, 1993) to the brain scanner (Greene et al., 2001; Greene & Haidt, 2002; Greene, Nystrom, Engell, Darley, & Cohen, 2004; Moll et al., 2001, 2002a, 2002b). One limitation of this approach, however, is that it is not clear whether these moral reactions constitute the entirety of moral life, and a focus on them may overlook a wide variety of moral situations, some of which may lead to very different models of moral thought and behavior.

The interplay of reason and emotion in moral reaction situations

In an effort to assert the underinvestigated importance of emotions in moral life, emotionalists have focused on moral reactions to blatant transgressions, which, we have argued, are especially likely to reveal emotions as central to the process of moral judgment. Now that this point has been effectively made by proponents of this approach, a subtler understanding would follow from the study of cognitive reasoning as it pertains to these prototypical moral situations. Haidt, although giving first billing to emotions, also allows for reasoning in his model (2001), in the form of “reasoned judgment” (overriding initial intuitions through sheer force of logic) and “private reflection” (spontaneously activating a new intuition that contradicts the initial one). One of the rare cases where he admits that these processes might be at work is “during a formal moral judgment interview” (p. 819), squarely identifying the type of moral encounter as a determinant of the type of process that gains prominence in a theoretical account (see also Haidt, 2003). Pizarro and Bloom (2003) stressed how much moral intuitions are shaped and informed by prior reasoning, noting for example the role of cognitive appraisal in the experience of emotions (Lazarus, 1991). Actors also seem sufficiently aware of the role of emotions in their reactions that they can manage their exposure to emotion-eliciting stimuli, effectively letting reason override a possible affective response and its motivational consequences—for example, choosing not to listen to an account that they know might trigger empathy and an urge to help the speaker (Shaw, Batson, & Todd, 1994). And when they do indulge in the emotion, it can be because they have recruited this emotion to buttress prior resolutions attained through pure reasoning: moral vegetarians, for example, appear to recruit disgust effectively to serve their moral beliefs (Rozin, Markwith, & Stoess, 1997).
We have argued that the focus on emotion, like the focus on reason, has led to (and, in tautological fashion, results from) a preference for a particular type of moral situation, taken as the prototype. As the dust settles, we advocate embracing the findings of this emotionalist approach as it pertains to this particular situation, but also bringing in some of the findings making it clear that emotions often result from (proximal or distal) cognitive processes.

The Value of Multiple Approaches

We have laid out two prototypical moral encounters that we argue have received the greatest amount of attention in the literature, defining contrasting theoretical accounts and setting the stage for the most salient debate in the literature to this day. However, the debate surrounding the causes of moral judgment may be unnecessary if we acknowledge that each of the prototypical moral situations involves different processes. Reasoning is primary when confronted with first-person dilemmas, and emotions are primary when judging the shocking infractions of others. There is no doubt that psychology's understanding of moral judgment greatly benefits from these multiple approaches, and our grasp of the moral domain is broadened by considering differing prototypical moral situations. It is also worth noting that different prescriptions for morality emerge for students of each approach, even if authors typically do not mean to make normative claims. In the moral dilemmas view, one becomes more moral by reasoning better; in the moral reactions view, one is as moral as one's intuitive reactions. Elsewhere we have suggested that this leads to very different ideals (or “paragons”) of virtue: the philosopher and the sheriff (see Monin, Pizarro, & Beer, in press).

We described how further advances could be made by studying the role of emotions in moral dilemmas and the role of reason in moral reactions. Another way to structure this distinction is in terms of the types of emotions that occur at different stages of emotional-cognitive processing in a sequential model. Scherer (1984), for example, argues that an emotion like disgust (a big contributor in moral reaction research) arises early from initial reactions to a stimulus, whereas complex self-conscious emotions like shame (which, at least in its anticipated form, should carry more weight in the moral dilemma tradition) come later and require a deeper elaboration of the eliciting event.

Going Beyond the Reason-Emotion Dichotomy

Beyond creating unnecessary conflicts, another and possibly more damaging consequence of focusing on the two prototypical situations described above is that it limits moral psychology's scope of investigation. Many of the situations not considered by either the emotionalist or rationalist camps reflect a greater interplay between cognition and emotion. We now discuss three alternative moral encounters that have received relatively limited attention in the moral psychology literature and that we believe might prove fruitful in increasing our understanding of everyday morality: moral temptation, moral self-image, and lay theories of morality. Because none of these situations is likely to tip the scales toward a more emotional or rational understanding of morality, they also provide more evidence of the complex and rich interplay between reason and emotion in moral judgment.

Moral Temptation

Self-regulation failures in the moral domain

On the sidelines of the reason versus emotion debate, another type of moral situation provides yet another model of moral judgment: situations where individuals are initially committed to a given moral course of action, but fail to follow through with this resolution, and often experience guilt and shame as a result. This case does not seem to fit either the moral dilemma framework (actors are clear on what should be done) or the moral reaction template (actors are not reacting to anyone when they fail to comply with previous engagements). Indeed, reviews of the correspondence between moral judgment and moral behavior have found disappointing relationships between the two (e.g., Blasi, 1980), suggesting that there might be more to moral behavior than moral judgment. Introspection suggests that many everyday failures are not the result of flawed judgment but rather of an inability to transform good intentions into good deeds. People cheat on their taxes, lie to their customers, or deceive their spouse knowing full well that what they are doing is wrong, and maybe even knowing that they will feel remorse. Rather than reflecting a poorly calibrated moral compass, many moral failings seem to be the result of weakness of the will (what philosophers traditionally called incontinence or akrasia): succumbing to temptation, with the appeal of the forbidden fruit overshadowing one's good intentions.

Kohlberg and his colleagues seem to have had a change of heart regarding the moral importance of willpower. In their review of the factors increasing the correspondence between moral reasoning stage and behavior, Kohlberg and Candee (1984) initially identified “follow-through factors” such as intelligence, the ability to allocate attention effectively, and the ability to delay gratification as requirements for transforming moral choices into moral behavior. The evidence that they presented for the role of “ego controls” focused on IQ and attention allocation, presumably because these abilities were ones most related to higher cognitive functioning. Delay of gratification (what might be considered most similar to the concept of willpower), on the other hand, was given shorter shrift because its relationship to reasoning was less clear. Nonetheless, these “ego control” components were eventually dropped from the model (Candee & Kohlberg, 1987), presumably because they were general psychological abilities that had little to do (in this view) with morality itself. Kohlberg and his colleagues were attempting to identify the processes uniquely involved in moral action, and the recognition that willpower, like IQ, served a number of other functions in everyday behavior (indeed they call them “nonmoral factors”) may be what led them to discard this part of the model. In addition, Kohlberg was suspicious of the concepts of superego, self-blame, and guilt that had dominated the study of morality within the psychodynamic tradition, and had, in an earlier review of the literature, found them to be poor predictors of moral behavior (1963). This skepticism of psychodynamic concepts, coupled with the desire to advance a cognitive framework for understanding morality, may have been an additional reason for him to abandon any talk of “ego control.”

Moral weakness and willpower received more attention from other cognitive developmentalists. For instance, they come into play in the fourth component in Rest's (1986) four-component model 3 of morality, following through with one's intentions. However, Rest acknowledged that the moral judgment approach used by Kohlberg and others in this tradition (including his own Defining Issues Test) really focuses only on his second component (formulating the moral course of action) and is “ill-suited for providing information about the other components” (p. 9).

By focusing on responses to moral dilemmas, proponents of the rationalist view left little room for the study of willpower in morality. On the other side of the spectrum, defenders of the emotionalist view, by studying reactions to other people's infractions, made willpower irrelevant. In both cases, willpower takes a theoretical back seat. We believe that a different image of morality emerges if one focuses on those situations where willpower seems most involved. When one thus redirects one's attention to this alternative prototypical moral situation, the work on guilty pleasures (Giner-Sorolla, 2001), delay of gratification (Mischel & Ebbesen, 1970), and ego depletion (Vohs & Heatherton, 2000; Baumeister, Bratslavsky, Muraven, & Tice, 1998), not typically considered within the realm of moral psychology, becomes eminently relevant. Giner-Sorolla (2001), for example, described “guilty pleasures,” which yield immediate rewards at a long-term cost (e.g., sexual promiscuity), and “grim necessities,” where an initial discomfort holds the promise of a later prize (e.g., volunteer work). Of particular interest to a reflection on emotions and decision making is Giner-Sorolla's finding that although the appeal of positive hedonic emotions (e.g., pleasure) may contribute to moral failing, the deterrence of negative self-conscious emotions (e.g., anticipated guilt) plays an important role in holding on to one's resolutions. Instead of pitting controlled reason against impulsive emotions, self-control situations reveal that some emotions can support reasoned choice.

Walter Mischel's work on “delay of gratification” (Mischel & Ebbesen, 1970) provides a thorough investigation of the ability to forego an immediate reward in the hope of a larger one. The typical procedure seats children in front of an attractive snack (e.g., two pretzels) that they will be allowed to eat if they sit still for a few minutes. They are told, however, that if waiting is too hard, they can alert the experimenter with a bell and receive half the snack immediately, foregoing the other half in favor of instant gratification. This ability to delay gratification predicts real-life outcomes more than a decade later (r = .57 with SAT Quantitative scores in one study, see Mischel, Shoda, & Rodriguez, 1989). The techniques that children used to delay gratification effectively did not involve direct attempts at suppressing “hot” emotions but were instead more metacognitive, involving allocating attention away from the immediate reward (e.g., looking away, singing a song, etc.). The processes observed in this line of work should also be at work when individuals attempt to resist immediate temptations in the service of their long-term moral goals. Our point once again is that the choice of situations has important theoretical consequences: one would get just as incomplete a picture of moral life if one were to exclude situations of moral temptation as if one were to focus exclusively on them.

Both of the models just presented propose that moral strength relies on the ability to direct one's attention—away from the reward for Mischel and toward potential negative self-conscious emotions for Giner-Sorolla. But going beyond individual differences in the proclivity to use these cognitive techniques, what explains why the same person can resist temptation one day and fall into sin the next? A recent approach (Baumeister et al., 1998) suggests that self-control is a resource that fluctuates depending on the demands on it. If one task requires a high dose of restraint, then self-control, like a muscle, becomes depleted, and later tasks are likely to be met with more abandon. Interestingly, following up on the muscle metaphor, Baumeister and Exline (1999) suggest that willpower can be increased through exercising it—they posit that a repeated cycle of depletion and replenishment results in a greater amount of initial self-control in later trials. Although authors in this limited-resource approach discuss ego depletion as a general phenomenon in self-regulatory contexts, it is worth noting that recent findings in this tradition have looked directly at domains of everyday morality, using the model to understand, for example, chronic dieters (Vohs & Heatherton, 2000) and excessive spenders (Vohs & Faber, 2007), thereby demonstrating the value of this approach for understanding moral behavior.

The Interplay of Reason and Emotion in Moral Temptation Situations

Unlike moral dilemmas and moral reactions, situations of moral temptation illustrate the necessity of bridging “hot” and “cold” models and of recognizing that much of moral life results from the interplay between cognitive and emotional processes. There is now a large literature on self-regulation processes (see Baumeister & Vohs, 2004), which might shed more light on (im-)moral behavior in temptation situations than rationalist or emotionalist theories of morality. One illustration of the interplay between cognition and emotion in this domain is the role played by intelligence and attention allocation in overcoming what may appear to be the pull of strong emotions. As discussed above, Kohlberg and Candee (1984) identified both as important follow-through factors, and Blasi (1980) noted that intelligence seemed to increase the link between moral cognition and moral action. We mentioned that Mischel and colleagues (1989) observed significant correlations between the ability to delay gratification and later scores on measures often used as markers of intelligence, like academic standardized tests. Finally, researchers in the limited-resource approach have also tied self-regulation to IQ (e.g., Schmeichel, Vohs, & Baumeister, 2003).

Moral Self-Image

One seemingly strong source of moral motivation is the desire to maintain a positive moral self-image, a source of motivation possibly underestimated by most work within moral psychology. The motivation to maintain a positive, worthy self-image is typically a concern of students of cognitive dissonance, self-enhancement, or other related areas. (One notable exception is Blasi's approach to morality, which stresses the role of moral identity; Blasi, 1980, 1983, 2004). Yet there has been a renewal of interest in moral self-image (e.g., Aquino & Reed, 2002), and studies in the last five years have looked, for example, at how people think they compare morally with others (Epley & Dunning, 2000), how people's moral confidence can license them to act in morally questionable ways (Monin & Miller, 2001), or how other people's moral superiority can be threatening and lead to rejection (Monin, Sawyer, & Marquez, 2007). The extent to which a given decision or behavior is coded as morally relevant seems to depend on individuals' idiosyncratic mapping of the moral domain, their tendency to see the world in moral terms (Lapsley & Narvaez, 2004), and the importance they attribute to preserving or enhancing a moral self-image (Aquino & Reed, 2002). This also brings into the domain of morality advances in the study of motivated self-perception and self-enhancement made over the last decades (e.g., research on self-verification, Swann, Pelham, & Krull, 1989, or symbolic self-completion, Wicklund & Gollwitzer, 1982). In an effort to defend its specificity, moral psychology sometimes shuns processes that extend beyond the domain of morality—but by doing so it runs the risk of ignoring important determinants of moral behavior.

Lay Understanding of Morality

What do individuals mean when they use the term “moral”? Surprisingly little is known about this question, because researchers have typically supplied their own definitions of morality and used those definitions to guide research. This tacit disregard of lay definitions of morality may be problematic: if an individual does not believe that a situation is moral, there may be little motivation to act according to moral standards in that particular situation. A few researchers have touched on this issue. For instance, Nucci (2004) argues that whether an issue is seen by the actor as falling within the moral domain is a critical variable in predicting the impact of moral judgments on moral actions. Although Rest's (1986, p. 5) first component of moral action, interpreting the situation, does not require that the actor explicitly think “This is a moral problem,” it minimally requires that she realize she could do something that would affect the interests, welfare, or expectations of other people. Candee and Kohlberg (1987), in their study of the 1964 Berkeley sit-in protests, established a factor they referred to as “moral relevance” by reporting that both commentators and student protestors believed the situation to be morally important, as well as by noting that “all subjects were able to respond to questions dealing with the moral aspects of the event” (p. 563). Although this explicit approach—asking an individual whether a situation is morally relevant—is valuable, simply being asked about the moral aspect of an event may itself communicate to participants that the event is morally relevant. A more implicit approach to the question of moral relevance may provide a better source of information about the role of this factor.

Nucci and Turiel's work (e.g., Nucci & Turiel, 1978; Turiel, 1990) usefully explores the boundaries of moralization between norms that are truly moral and ones that merely reflect social conventions, but what subjects consider moral is still determined by criteria imposed by investigators (e.g., the prescription should be universal and not restricted to a given group or culture) rather than by direct accounts from naïve respondents (for other stabs at the moral domain, see also Haidt, Koller, & Dias, 1993; Rest, Narvaez, Bebeau, & Thoma, 1999, Chap. 7). Future work should explore lay definitions of the moral domain and evaluate the importance of moral relevance for actors: Do individuals bring to bear different psychological mechanisms once they have determined that an issue is moral, or is this categorization by the actor actually of little importance? Maps of what constitutes the moral domain in everyday life will of course vary greatly by individual and subculture, both quantitatively in the proportion of daily matters imbued with moral significance, and qualitatively in the dimensions along which morality is lived. Researchers have tried to skirt this difficulty by restricting their inquiry to the most egregious moral infractions likely to generate consensus (e.g., theft, murder, incest), but a study of moral labeling in less obvious situations promises to yield important insights for everyday morality. Rozin's work on moralization provides a useful inroad (Rozin, 1999; Rozin & Singh, 1999; Rozin, Markwith, & Stoess, 1997), and there is also much to learn about this issue from the sociological literature (e.g., Lamont, 1992; Wolfe, 2001; Baker, 2005), where the societal level of analysis avoids the difficulty of interindividual differences.

Another important aspect of lay morality pertains to perceptions of other people's moral life, and this question is most relevant to the overarching theme of this review: Where do lay perceivers stand on the reason versus emotion debate? When making judgments of blame, how do people understand the interplay of reason and emotion? For example, Pizarro, Uhlmann, and Salovey (2003) have demonstrated that people seem to believe that strong emotional reactions can cause individuals to “lose control” and that actors should be held less responsible for actions motivated by these emotions (at least for negative emotional impulses like anger; positive emotional impulses, such as sympathy, do not seem to similarly reduce the credit actors receive for positive actions). Future research should investigate these issues in greater depth.

Concluding Thoughts

Recent theorizing on the psychology of moral decision making has pitted deliberative reasoning against quick affect-laden intuitions. In this article, we propose a resolution to this tension by arguing that it results from a choice of different prototypical situations: advocates of the reasoning approach have focused on sophisticated dilemmas, whereas advocates of the intuition/emotion approach have focused on reactions to other people's moral infractions. Arbitrarily choosing one or the other as the typical moral situation has a significant impact on one's characterization of moral judgment. At this point, we believe that the most productive approach for the student of morality is to embrace both models (and the wealth of empirical findings that they have each generated), keeping in mind the setting in which each has greatest applicability. Additionally, we have suggested that some of the questions left on the sidelines of the reason-emotion debate (moral temptation, moral self-image, and lay moral understanding) deserve greater attention in future research. They illustrate a more complex interplay between reason and emotion, and may provide a richer understanding of the process of moral judgment across the wealth of situations encountered in everyday life.


1 We use the term “emotionalists” for clarity; Haidt typically calls his approach “social intuitionist” (e.g., Haidt, 2001, 2003), but the intuitions in question are without fail described as affect-laden. The difference between intuitions and emotions in this model seems to be that intuitions are behavioral guides or evaluations that directly follow from an emotional experience.

2 Interestingly, Kohlberg did include one dilemma where respondents were to react to the past behavior of others instead of making recommendations about the future: In Dilemma VII (Colby & Kohlberg, 1987), two brothers each get $1,000, one by breaking into a store and one by lying to a kind old man. Respondents are asked, “Which is worse, stealing like Karl or cheating like Bob?” So in effect, although the dilemma is presented as a moral reaction, readers are left with having to prescribe which course of action should have been taken, putting the situation squarely back into the context of conflicting moral claims.

3 The four components of Rest's model are (1) interpreting the situation, (2) identifying the morally ideal course of action, (3) deciding whether to try to fulfill one's moral ideal, and (4) implementing what one intends to do (Rest, 1984, 1986; Rest et al., 1999).


Anderson, C. J. (2003). The psychology of doing nothing: Forms of decision avoidance result from reason and emotion. Psychological Bulletin, 129, 139–167.

Aquino, K., & Reed, A., II. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83, 1423–1440.

Ayer, A. J. (1952). Language, Truth and Logic (1st Dover ed.). New York: Dover Publications.

Baker, W. E. (2005). America's crisis of values: Reality and perception. Princeton, NJ: Princeton University Press.

Bargh, J. A. (1994). The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition. In R. S. Wyer, Jr., & T. K. Srull (Eds.), Handbook of social cognition (Vol. 1, pp. 1–40). Hillsdale, NJ: Erlbaum.

Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology, 74, 1252–1265.

Baumeister, R. F., & Exline, J. J. (1999). Virtue, personality, and social relations: Self-control as the moral muscle. Journal of Personality, 67, 1165–1194.

Baumeister, R. F., & Vohs, K. D. (2004). Handbook of self-regulation: Research, theory and applications. New York: Guilford Press.

Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88, 1–45.

Blasi, A. (1983). Moral development and moral action: A theoretical perspective. Developmental Review, 3, 178–210.

Blasi, A. (2004). Moral functioning: Moral understanding and personality. In D. K. Lapsley & D. Narvaez (Eds.), Moral development, self and identity: Essays in honor of Augusto Blasi. Mahwah, NJ: Erlbaum & Associates.

Bodenhausen, G. V., Sheppard, L., & Kramer, G. P. (1994). Negative affect and social perception: The differential impact of anger and sadness. European Journal of Social Psychology, 24, 45–62.

Candee, D., & Kohlberg, L. (1987). Moral judgment and moral action: A reanalysis of Haan, Smith, and Block's (1968) Free Speech Movement data. Journal of Personality and Social Psychology, 52, 554–564.

Colby, A., & Kohlberg, L. (1987). The measurement of moral judgment, Vol. 1. Cambridge, UK: Cambridge University Press.

Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in preference development and decision making. Journal of Personality and Social Psychology, 87(5), 586–598.

Epley, N., & Dunning, D. (2000). Feeling “holier than thou”: Are self-serving assessments produced by errors in self- or social prediction? Journal of Personality and Social Psychology, 79, 861–875.

Giner-Sorolla, R. (2001). Guilty pleasures and grim necessities: Affective attitudes in dilemmas of self-control. Journal of Personality and Social Psychology, 80, 206–221.

Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6, 517–523.

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44, 389–400.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.

Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102, 4–27.

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.

Haidt, J. (2002). The moral emotions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 852–870). Oxford, UK: Oxford University Press.

Haidt, J. (2003). The emotional dog does learn new tricks: A reply to Pizarro and Bloom (2003). Psychological Review, 110, 197–198.

Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613–628.

Hoffman, M. L. (2000). Empathy and moral development: Implications for caring and justice. New York: Cambridge University Press.

Hume, D. (1969). An enquiry concerning the principles of morals. La Salle, IL: Open Court. (Original work published 1777)

Kagan, J. (1984). The nature of the child. New York: Basic Books.

Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). New York: Cambridge University Press.

Kant, I. (1959). Foundation of the metaphysics of morals. (L. W.Beck, Trans.). Indianapolis, IN: Bobbs-Merrill. (Original work published 1785)

Kohlberg, L. (1963). Moral development and identification. In H. Stevenson (Ed.), Child psychology: 62nd yearbook of the National Society for the Study of Education. Chicago: University of Chicago Press.

Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Chicago: Rand McNally.

Kohlberg, L., & Candee, D. (1984). The relationship of moral judgment to moral action. In W. M. Kurtines & J. L. Gewirtz (Eds.), Morality, moral behavior, and moral development. New York: Wiley.

Krebs, D. L., & Denton, K. (2005). Toward a more pragmatic approach to morality: A critical evaluation of Kohlberg's model. Psychological Review, 112, 629–649.

Lamont, M. (1992). Money, morals and manners: The culture of the French and American upper-middle class. Chicago, IL: University of Chicago Press.

Lapsley, D. K., & Narvaez, D. (2004). A social-cognitive approach to the moral personality. In D. K. Lapsley & D. Narvaez (Eds.), Moral development, self and identity (pp. 189–212). Mahwah, NJ: Erlbaum.

Lazarus, R. S. (1991). Emotion and adaptation. New York: Oxford University Press.

Lerner, J., & Keltner, D. (2001). Fear, anger, and risk. Journal of Personality and Social Psychology, 81, 146–159.

Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127, 267–286.

Luce, M. F., Bettman, J. R., & Payne, J. W. (1997). Choice processing in emotionally difficult decisions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 384–405.

Mellers, B., Schwartz, A., & Ritov, I. (1999). Emotion-based choice. Journal of Experimental Psychology: General, 128, 332–345.

Mischel, W., & Ebbesen, E. B. (1970). Attention in delay of gratification. Journal of Personality and Social Psychology, 16, 329–337.

Mischel, W., Shoda, Y., & Rodriguez, M. L. (1989). Delay of gratification in children. Science, 244, 933–938.

Moll, J., Eslinger, P. J., & Oliveira-Souza, R. (2001). Frontopolar and anterior temporal cortex activation in a moral judgment task: Preliminary functional MRI results in normal subjects. Arquivos De Neuro-Psiquiatria, 59, 657–664.

Moll, J., Oliveira-Souza, R., Bramati, I. E., & Grafman, J. (2002a). Functional networks in emotional moral and nonmoral social judgments. Neuroimage, 16, 696–703.

Moll, J., Oliveira-Souza, R., Eslinger, P. J., Bramati, I. E., Mourão-Miranda, J., Andreiuolo, P. A., et al. (2002b). The neural correlates of moral sensitivity: A functional magnetic resonance imaging investigation of basic and moral emotions. The Journal of Neuroscience, 22, 2730–2736.

Monin, B., & Miller, D. T. (2001). Moral credentials and the expression of prejudice. Journal of Personality and Social Psychology, 81, 33–43.

Monin, B., Pizarro, D., & Beer, J. (in press). Emotion and reason in moral judgment: Different prototypes lead to different theories. In K. D. Vohs, R. F. Baumeister, & G. Loewenstein (Eds.), Do emotions help or hurt decision making? New York: Russell Sage Foundation Press.

Monin, B., Sawyer, P., & Marquez, M. (2007). Resenting those who do the right thing: Why do people dislike moral rebels? Unpublished manuscript.

Nisbett, R., & Wilson, T. D. (1977). Telling more than we know: Verbal reports on mental processes. Psychological Review, 84, 231–295.

Nucci, L. (2004). Reflections on the moral self construct. In D. K. Lapsley & D. Narvaez (Eds.), Moral development, self and identity: Essays in honor of Augusto Blasi. Mahwah, NJ: Erlbaum & Associates.

Nucci, L. P., & Turiel, E. (1978). Social interactions and the development of social concepts in preschool children. Child Development, 49, 400–407.

Piaget, J. (1932). The moral judgment of the child. New York: Harcourt, Brace Jovanovich.

Pizarro, D. A., & Bloom, P. (2003). The intelligence of the moral intuitions: A comment on Haidt (2001). Psychological Review, 110, 193–196.

Pizarro, D. A., Uhlmann, E., & Salovey, P. (2003). Asymmetry in judgments of moral blame and praise: The role of perceived metadesires. Psychological Science, 14, 267–272.

Prinz, J. (2006). The emotional construction of morals. Oxford: Oxford University Press.

Rest, J., Narvaez, D., Bebeau, M. J., & Thoma, S. J. (1999). Postconventional moral thinking: A neo-Kohlbergian approach. Mahwah, NJ: Erlbaum.

Rest, J. (1984). The major components of morality. In W. Kurtines & J. Gewirtz (Eds.), Morality, moral development and moral behavior (pp. 24–38). New York: Wiley.

Rest, J. R. (1986). Moral development: Advances in research and theory. New York: Praeger.

Rozin, P. (1999). The process of moralization. Psychological Science, 10, 218–221.

Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity). Journal of Personality and Social Psychology, 76, 574–586.

Rozin, P., Markwith, M., & Stoess, C. (1997). Moralization and becoming a vegetarian: The transformation of preferences into values and the recruitment of disgust. Psychological Science, 8, 67–73.

Rozin, P., & Singh, L. (1999). The moralization of cigarette smoking in the United States. Journal of Consumer Psychology, 8, 339–342.

Scherer, K. (1984). Emotion as a multicomponent process: A model and some cross-cultural data. In P. Shaver (Ed.), Review of personality and social psychology: Emotions, relationships, and health (pp. 37–63). Beverly Hills, CA: Sage.

Schmeichel, B. J., Vohs, K. D., & Baumeister, R. F. (2003). Intellectual performance and ego depletion: Role of the self in logical reasoning and other information processing. Journal of Personality and Social Psychology, 85, 33–46.

Schwarz, N., & Clore, G. L. (1983). Mood, misattribution, and judgments of well-being: Informative and directive functions of affective states. Journal of Personality and Social Psychology, 45, 513–523.

Shaw, L. L., Batson, C. D., & Todd, R. M. (1994). Empathy avoidance: Forestalling feeling for another in order to escape the motivational consequences. Journal of Personality and Social Psychology, 67, 879–887.

Simon, H. (1967). Motivational and emotional controls of cognition. In H. Simon, Models of thought (pp. 29–38). New Haven, CT: Yale University Press.

Swann, W. B., Jr., Pelham, B. W., & Krull, D. S. (1989). Agreeable fancy or disagreeable truth: Reconciling self-enhancement and self-verification. Journal of Personality and Social Psychology, 57, 782–791.

Tangney, J. P., & Dearing, R. L. (2002). Shame and guilt. New York: Guilford Press.

Tetlock, P. E., Kristel, O. V., Elson, S. B., & Green, M. C. (2000). The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78, 853–870.

Turiel, E. (1990). The development of social knowledge: Morality and convention. Cambridge, UK: Cambridge University Press.

Vohs, K. D., Baumeister, R. F., & Loewenstein, G. (in preparation). Do emotions help or hurt decision making? New York: Russell Sage Foundation Press.

Vohs, K. D., & Faber, R. J. (2007). Spent resources: Self-regulatory resource availability affects impulse buying. Journal of Consumer Research, 33, 537–547.

Vohs, K. D., & Heatherton, T. F. (2000). Self-regulation failure: A resource-depletion approach. Psychological Science, 11, 249–254.

Wicklund, R. A., & Gollwitzer, P. M. (1982). Symbolic self-completion. Hillsdale, NJ: Erlbaum.

Wolfe, A. (2001). Moral freedom: The search for virtue in a world of choice. New York: Norton.

Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35, 151–175.

Submitted: March 27, 2006 Revised: April 27, 2006 Accepted: May 27, 2006

Copyright © 2017 Institute of Moral Education NanJing Normal University
