Understanding the adaptive functions of morality
from a cognitive psychological perspective
James Dungan & Liane Young
Boston College Psychology Department
Abstract
What are the possible functions of moral cognition? Addressing this question has proven
difficult, leading to disagreement among moral psychologists. Researchers claiming that morality
is composed of many distinct domains have posited multiple functions, whereas researchers
focusing on the features that are unique to and common across all moral judgments have
suggested a unified evolutionary function. In this review, we suggest that the limitations of these
accounts can be overcome by systematically investigating the cognitive mechanisms that support
moral judgments across descriptively distinct domains. As a case study, we focus on the contrast
between harm and purity morals, and we argue for a novel functional difference on the basis of
differences in the underlying psychological processes. Understanding the psychology behind
distinct morals will pave the way for understanding the distinct functions of moral cognition.
Keywords
morality; function; harm; purity; cognitive processes
Disciplines
Moral/Social Psychology; Cognitive Neuroscience; Evolutionary Biology; Philosophy
Introduction
Is it not reasonable to anticipate that our understanding of the human mind would be greatly
aided by knowing the purpose for which it was designed?
- George C. Williams (1966, p. 16)
What is the purpose of moral cognition? Moral psychologists have become increasingly
interested in exploring the function of moral cognition (that is, how we think about moral right
and wrong) using the tools of evolutionary and developmental psychology. Given the complexity
of moral cognition, this has proven to be a difficult task. On the one hand, “morality” could mean
many things at once – an umbrella term referring to diverse judgments of diverse behaviors, from
assault to incest. Accordingly, researchers supporting this view have posited multiple distinct
functions for distinct “domains” of morality. On the other hand, morality may be unified and
defined by features that are unique to morality (versus other domains of cognition) or at least
features that are common across descriptively different kinds of moral norms. Researchers
proposing such a unified account of morality also typically propose a unified adaptive function
for morality.
In this review, we discuss the strengths and limitations of these two accounts. We then
highlight new research on the cognitive mechanisms supporting moral judgment and how this
research constrains and informs inferences about the adaptive functions of morality. Using the
contrast between harm and purity morals as a case study, we illustrate how understanding the
cognitive mechanisms for moral judgments allows researchers to build novel functional accounts
that capture real psychological differences.
Foundational Research
Morality is many things
Historically, morality has been thought to be a unified ethic of justice and fairness
(Kohlberg, 1969, 1981; cf. Baumard, André, & Sperber, in press) or a related ethic of concern for
people’s welfare and happiness (Harris, 2010). However, views of morality as a single ethic or
value have attracted many critics. These critics have argued that our moral concerns are many –
providing care and prohibiting harm (Gilligan, 1982), showing respect and loyalty (Shweder,
Much, Mahapatra, & Park, 1997), preserving one’s purity (Appiah, 2006; Haidt & Joseph, 2007),
to name a few. In order to account for these diverse descriptions of common moral concerns,
Moral Foundations Theory (MFT) has posited that morality is composed of five distinct moral
domains (Harm, Fairness, Loyalty, Authority, and Purity), each of which evolved in response to
a specific adaptive need (Graham et al., 2013; Graham et al., 2011; Graham, Haidt, & Nosek,
2009). On this view, each moral domain is a functionally specialized mechanism, or module
(Graham et al., 2013; Haidt & Joseph, 2007). For example, the Harm domain addresses the
challenge of caring for vulnerable offspring, the Authority domain helps people navigate social
dominance hierarchies, and the Purity domain prevents exposure to pathogens and parasites.
While MFT has broadened the scope of research in moral psychology and uncovered
meaningful differences in the moral concerns endorsed within and across cultures (e.g.,
liberals/conservatives; Graham, Haidt, & Nosek, 2009), MFT may be limited in its empirical
focus on the descriptive content of moral norms. The ways in which moral content can be carved
up into different categories are numerous and possibly arbitrary. Indeed, MFT originally featured
four domains, such that loyalty was covered by a combination of reciprocity and hierarchy
(Haidt & Joseph, 2004). MFT in its present form accommodates five domains, featuring loyalty
as its own distinct domain (Haidt & Joseph, 2007; Graham, Nosek, Haidt, Iyer, Koleva, & Ditto,
2011). More recently, theorists have tentatively proposed six domains, splitting Fairness into two
separate domains: Equality and Liberty/Oppression (Graham et al., 2013). Given the sheer
breadth of possible moral concerns, the number of domains could be ratcheted up to include
domains concerning industriousness, modesty, and wastefulness (Graham et al., 2013; Suhler &
Churchland, 2011). The number of moral domains that researchers could potentially identify
seems limitless: morality may in fact be composed of hundreds or even thousands of distinct
functional modules, each addressing a unique adaptive problem (Cosmides & Tooby, 1994).
The concern here is not one of parsimony – an acceptable taxonomy of morality should
strive for explanatory adequacy without limiting itself for the sake of neatness. Instead, the
concern is with the general approach of carving up moral cognition based on descriptive
differences, that is, differences in the content of moral actions, rather than differences in the
psychological mechanisms that support the processing of different actions. This concern deserves
more consideration as more domains are identified. Notably, the addition of Liberty as a sixth
domain was motivated by the observation of moral attitudes among Libertarians that were not
easily described by the current moral domains (Iyer, Koleva, Graham, Ditto, & Haidt, 2012),
rather than by novel evolutionary theorizing or novel observations of cognitive mechanism. If
moral domains represent functionally distinct modules, then judgments associated with distinct
domains might be expected to behave differently – they might follow different cognitive rules
due to the contribution of different cognitive processes. To the extent that evidence for such
differences is lacking among the five domains posited by MFT (see Key Issues for Future
Research), amendments to current moral taxonomies may be needed.
Morality is one thing
Despite its diverse content, morality may nonetheless be unified by features that are
unique to moral judgments and also common to moral judgments across all domains (cf. Young
& Dungan, 2012). Developmental psychologists made an early attempt at distinguishing moral
norms from norms of social convention (Turiel, 1983). Work on the moral-conventional
distinction was aimed at identifying the features that separate conventional judgments (e.g.,
wearing pajamas to class is wrong) from uniquely moral judgments (e.g., murder is wrong). For
instance, whereas conventional judgments are culture-specific, moral judgments might apply
universally across all cultures as well as across time (e.g., murder is always wrong, no matter the
place or time).
While the moral-conventional distinction has faced criticism (Kelly, Stich, Haley, Eng, &
Fessler, 2007), researchers have proposed other features that may be unique to, and common
across, moral judgments (DeScioli, Asao, & Kurzban, 2012; DeScioli & Kurzban, 2009; Gray,
Young, & Waytz, 2012). For example, moral violations evoke stronger emotional and behavioral
reactions than conventional violations (Rozin, 1997, 1999). Furthermore, when previously non-
moral norms become moralized (as in the case of an omnivore who converts to vegetarianism for
moral reasons), these newly moralized norms elicit a common suite of reactions such as
prohibition, internalization, over-justification, and increased parent-to-child transmission (Rozin,
Markwith, & Stoess, 1997).
More importantly, whereas descriptive theories of diverse moral domains offer few
specific predictions about the psychological mechanisms underlying distinct domains,
researchers focusing on morality as a unified concept suggest common cognitive components
that cut across all moral judgments – in spite of their apparent differences. On one such account,
“moral cognition has an ‘insert here’ parameter, processing diverse moral rules with the same
computational architecture” (DeScioli & Kurzban, 2009, p. 3). Researchers have proposed a
number of candidate components of uniquely moral cognition, such as the perception of an agent
who inflicts harm on a patient (Gray, Young, & Waytz, 2012), or the integration of causal and
intentional attributions (Mikhail, 2007).
If cognitive mechanisms operate similarly across descriptively different moral domains,
and if these domains share a common cognitive structure that is distinct from non-moral
judgments, then morality may be better understood as serving a single purpose. One possibility is
that morality functions to limit selfishness and foster cooperation (Tomasello & Vaish, 2013).
Similarly, morality could function as a means of navigating social alliances (Atran & Henrich,
2010; Graham, Haidt, & Nosek, 2009; Sosis & Bressler, 2003). In competitive terms, morality
could function as a dynamic coordination strategy for choosing sides in interpersonal and
intergroup conflict (DeScioli & Kurzban, 2009; DeScioli & Kurzban, 2013). In all cases, the
functional explanations are not tied down to the specific content of moral actions and instead
focus on a broader adaptive problem that might be unique to morality and common across all
kinds of moral judgments.
The strength of these unified accounts is their attention to the psychological mechanisms
supporting moral judgment. However, unified accounts run the risk of overgeneralization
(Parkinson et al., 2011). By focusing on common features across moral domains, researchers
may fail to detect meaningful differences in the psychological mechanisms underlying different
moral judgments.
A new approach for defining morality
We have reviewed the strengths and limitations of current taxonomies of morality. How
then should moral psychologists approach the task of carving up (or not carving up) moral
cognition? We suggest that moral psychologists might first identify cognitive rules that apply to
moral judgment in one context and then test whether those rules apply equivalently across
descriptively distinct domains. In the next section, we use the well-studied domains of harm and
purity to illustrate how the cognitive processes underlying different norms dictate where
psychologically meaningful boundaries exist within moral cognition.
Cutting-Edge Research
The case of harm and purity
Descriptive accounts of morality provide clear evidence that people moralize not simply
concerns of harm (e.g., murder is wrong) but also concerns of purity – e.g., avoiding impure
objects or acts that could lead to defilement or contamination (Shweder, Much, Mahapatra, &
Park, 1997). Purity morals are typically characterized as part of an adaptive disease-avoidance
mechanism that has been co-opted to signal socially and morally offensive behavior as well (e.g.,
drug abuse, sexual deviance; Chapman, Kim, Susskind, & Anderson, 2009). Purity norms may
thus function to protect the body from desecration or defilement. By contrast, moral norms
against harm are thought to address the evolutionary need to care for vulnerable offspring,
leading us to express compassion and empathy for the suffering (Haidt & Joseph, 2007).
While this functional distinction may be intuitively appealing, empirical evidence for
distinct psychological signatures of harm and purity morals has been mixed. On the one hand,
researchers supporting the view that “morality is many things” suggest that harm and purity are
associated with distinct emotions: typically, harm violations elicit anger, whereas purity
violations elicit disgust (Rozin, Lowery, Imada, & Haidt, 1999; Russell & Giner-Sorolla, 2013).
However, researchers supporting the view that “morality is one thing” point out that feelings of
anger and disgust are highly correlated and frequently co-occur (Gray, Young, & Waytz, 2012).
Furthermore, observations of whether specific emotions are linked to specific moral judgments
have been variable across studies. Disgust, for instance, has been shown to increase the severity
of purity-related judgments specifically (Seidel & Prinz, 2013), moral judgment more generally
(Zhong & Liljenquist, 2006), and even judgments of non-moral actions (e.g., giving a class
presentation; Wheatley & Haidt, 2005).
Some researchers have argued that purity violations do not belong to a separate domain;
instead, they can be construed as a form of harm (Gray, Young, & Waytz, 2012). If harm and
purity morals do not form distinct domains, then we might expect similar psychological
processing underlying harm and purity morals. Yet, new studies reveal important differences.
For example, anger elicited by harmful actions is modulated by contextual factors, including
whether the violation was committed intentionally or accidentally, whereas disgust elicited by
purity violations is insensitive to intent: disgust is elicited whenever purity rules have been
broken, regardless of the circumstances (Russell & Giner-Sorolla, 2011;
Russell & Giner-Sorolla, 2010). Furthermore, participants perceive a greater moral difference
between intentional and accidental harms, compared to the difference between intentional and
accidental purity violations (Young & Saxe, 2011).
The behavioral difference in people’s moral judgments of harm and purity violations is
supported by recent neural evidence. Brain regions involved in reasoning about mental states
(e.g., beliefs, intentions) are more active when participants judge harmful compared to impure
actions (Young, Chakroff, Dungan, Koster-Hale, & Saxe, in prep). Additionally, information
about intent is encoded in the spatial pattern of neural activity in brain regions for mental state
reasoning when participants consider harm violations but not purity violations. The evidence
thus reveals important differences in the cognitive mechanisms for harm and purity judgments.
Distinct functions for self versus other
How can the pattern of cognitive differences observed above inform our functional
accounts of harm versus purity norms? One explanation is that harm norms function to limit our
negative impact on others. Harm norms appear to operate primarily in interpersonal contexts,
where one person’s harmful actions affect another. Recent work suggests that the presence of at
least two parties – a violator who acts on a victim – is necessary to establish an act as harmful in
the first place (DeScioli & Kurzban, 2009; Gray & Wegner, 2011). If harm norms dictate how
we ought (and ought not) to treat each other, information about intent would be expected to play
a significant role: we need to know what others are thinking to evaluate their actions and to
understand their intentions towards us.
Conversely, purity norms may function to limit our negative impact on ourselves. We
may pay particular attention to preserving the purity of our own bodies (Russell & Giner-Sorolla,
2013). Indeed, we may be concerned with the impurities of other people only to the extent that
they are perceived as a possible threat to our own purity (Rozin et al., 2000). Impure actions are
therefore prohibited even when no one, except for possibly one’s own self, is rendered a “victim”
(Haidt, Koller, & Dias, 1993). If purity norms function to protect our own selves from possible
contamination, we may care less about our own intent, i.e., whether we acted accidentally or
intentionally in defiling ourselves. As Appiah (2006) states in an account of Akan society in
Ghana: “With taboo breaking… it doesn’t matter what you meant to do. You’re polluted. You
need to get clean” (p. 51). In other words, since purity violations affect the self, we care mostly
about avoiding the outcome or else making sure we “get clean” afterwards.
Thus, our key prediction is that purity norms function to protect one’s self, whereas harm
norms function to protect others. Two recent lines of investigation in our lab support this
prediction. First, regardless of whether the content of actions was harmful or impure, participants
judged self-directed actions (e.g., cutting or splashing urine on yourself) as impure and other-
directed actions (e.g., cutting or splashing urine on someone else) as harmful. Importantly, the
perceived moral difference between intentional versus accidental violations was also smaller for
self-directed actions, compared to other-directed actions, consistent with the differential role of
intent across moral domains (Chakroff, Dungan, & Young, submitted).
Second, although moral judgments of negative actions directed at others versus one’s self
show the cognitive signatures of moral judgments of harm versus purity violations, respectively,
an outstanding question is whether participants are especially averse to impure outcomes that
affect themselves and harmful outcomes that affect others. Consistent with this prediction,
participants judged harmful actions as morally worse (and less preferred) than impure actions
when directed at others; however, the opposite pattern emerged when participants judged actions
directed at themselves: impure actions were judged as morally worse (and less preferred;
Dungan, Chakroff, & Young, submitted). Thus, purity norms are tied specifically to concerns
about defiling one’s self, whereas harm norms are tied to concerns about harming others.
Beyond self versus other
A potential limitation of the proposed functional distinction between norms governing
how we treat ourselves and norms governing how we treat others is that we have focused only
on judgments of interactions between individuals when much of morality occurs at the level of
groups. Purity morals may not only govern how people treat their own bodies but also more
generally define proper behavior within a group, as well as boundaries between groups (cf. Atran
& Henrich, 2010; Sosis & Bressler, 2003). As such, close adherence to purity norms may be a
strong signal of group membership. We argue, though, that purity morals operating at the group
level may nonetheless function to protect the self for two reasons: 1) purity violations undermine
the cohesion of the social group, making each individual potentially vulnerable to threat, and 2)
purity violations committed by other group members may contaminate one’s self either by
association with the group, or directly through physical contagion (cf. van Leeuwen, Park,
Koenig, & Graham, 2012).
A unique prediction follows from this account. While studies in social psychology
routinely demonstrate a robust ingroup bias whereby people judge their own group’s violations
as less wrong than the same violations committed by an outgroup member (Valdesolo &
DeSteno, 2007), this account predicts that people should be harsher on purity violations
occurring within versus outside of their own group – again, because of the potential threat of
impurity for the self. To test this prediction, we presented participants with vignettes (adapted
from Leidner & Castano, 2012) depicting ingroup and outgroup members committing harm and
purity violations (Dungan, Chakroff, & Young, submitted). Consistent with our prediction,
participants displayed the typical ingroup bias for harm violations, judging outgroup harms as
worse than ingroup harms; however, the opposite pattern emerged for purity violations – purity
violations committed by ingroup members were judged as morally worse than the same purity
violations committed by outgroup members. In an extension of our previous findings, these
results suggest that some morals function for protecting the group and the self, whereas other
morals may be focused more on how our actions affect others, across group boundaries.
Key Issues for Future Research
A new taxonomy for all moral judgment
While we have focused on the distinction between harm and purity, a primary question
for future research is whether the functional distinction between morals that protect the self and
morals that protect others applies to other moral norms described by MFT and other theories,
including loyalty, hierarchy, and fairness. Although empirical work on the cognitive components
of loyalty, hierarchy, and fairness judgments is sparse, several studies indicate a possible
cognitive boundary between “individualizing” norms such as harm and fairness and “binding”
norms such as purity, loyalty, and hierarchy (Graham, Haidt, & Nosek, 2009). For example,
while harmful omissions are generally judged less harshly than harmful commissions (e.g., it is
worse to kill than to fail to rescue), this omission bias is reduced when the violator and victim are
bound by a close relationship that demands loyalty, or by a hierarchical relationship (e.g., the
violator and victim are boss and employee; Haidt & Baron, 1996). Similarly, omission bias is
enhanced for individualizing violations compared to purity violations (DeScioli, Asao, &
Kurzban, 2012; DeScioli, Christner, & Kurzban, 2011). These findings fit nicely with the
observations above on the differential role of intent – revealing an overall emphasis on outcomes
when considering binding norms compared to individualizing norms.
Other research on the cognitive and neural processes supporting different moral
judgments corroborates the difference between individualizing and binding norms. Judgments of
individualizing and binding norms are differentially affected by abstract versus concrete thinking
(Napier & Luguri, 2013) as well as cognitive load (Wright & Baril, 2011). Endorsement of these
norms is also associated with volumetric differences in specific brain regions (Lewis et al., 2012).
Finally, while psychopathy is marked by a willingness to violate individualizing norms of harm
and fairness, endorsement of binding norms is relatively preserved (Glenn, Iyer, Graham, Koleva,
& Haidt, 2009). Notably, these findings do not reveal differences among all five domains of
morality – suggesting that a two-factor model may better capture psychological differences in
moral cognition (Dungan & Young, 2012; cf. the exploratory factor analysis in Graham et al.,
2011). These lines of research reveal empirical approaches for testing whether descriptively
different moral domains are psychologically distinct and important avenues for building
taxonomies of moral psychology.
Motivation and behavior
Uncovering the functional differences within moral cognition might also illuminate the
situational factors that influence judgments across domains. In a recent example, we described
two people – one loyal person and one fair person – and asked participants which person they
would rather befriend and which person they deemed more moral (Dungan, Waytz, & Young, in
prep). Overall, participants reported a preference for loyal (versus fair) friends but endorsed
fairness (over loyalty) as an abstract moral virtue. Future research should investigate how
different motivations, such as the need for social inclusion versus the need to feel moral (moral
self-concept), influence judgments across moral domains.
Another key aim for moral psychologists will be to connect theories describing the
adaptive functions of morality to actual behavior, both within and beyond the laboratory. For
instance, if some moral norms function to protect the self, while other moral norms function to
protect others, we might expect a tension between competing moral demands (cf. Cohen,
Montoya, & Insko, 2006). This tension has behavioral implications, as in the case of
whistleblowing. One hypothesis is that the valuation of fairness over loyalty predicts decisions to
report unethical deeds; emphasizing one value over another may consequently shift attitudes
toward whistleblowing (Waytz, Dungan, & Young, 2013).
Normative implications
Understanding moral norms from a cognitive psychological perspective may even, in
some sense, constrain claims as to what morality is or ought to be. If moral norms ought to apply
equally to everyone, regardless of culture or standing in society, then norms such as purity,
which govern self-interest or ingroup interest, may be less legitimately moral than norms such
as harm, which govern interpersonal interactions across group boundaries (Bloom, 2011).
Furthermore, if moral judgments of an agent should be based at least in part on the agent’s
mental states (e.g., whether the agent intended to do wrong or not), then our reactions to purity
violations might be less legitimately moral, to the extent that they are largely insensitive to intent
information and driven instead by the inflexible emotion of disgust (Russell & Giner-Sorolla,
2013). Illuminating the cognitive mechanisms underlying different moral norms may indeed
constrain key normative and meta-ethical claims.
Conclusion
Morality is complex. Researchers have disagreed over the function or functions of
morality. Moving forward, moral psychologists would do well to focus their empirical efforts on
the underlying cognitive processes for moral judgment to identify which descriptive differences
are psychologically meaningful. Understanding these psychological differences will pave the
way for understanding the key functions of moral cognition.
References
Appiah, K.A. (2006). Cosmopolitanism: Ethics in a World of Strangers. Gates, H.L. (Ed.), New
York, NY: W.W. Norton & Company, Inc.
Atran, S., & Henrich, J. (2010). The Evolution of Religion: How Cognitive By-Products,
Adaptive Learning Heuristics, Ritual Displays, and Group Competition Generate Deep
Commitments to Prosocial Religions. Biological Theory, 5(1), 18–30.
Baumard, N., André, J. B., & Sperber, D. (in press). A mutualistic approach to morality.
Behavioral and Brain Sciences.
Bloom, P. (2011). Family, community, trolley problems, and the crisis in moral psychology. The
Yale Review, 99(2), 26-43.
Chakroff, A., Dungan, J., & Young, L. (submitted). Harming ourselves and defiling others: what
determines a moral domain?
Chapman, H. A., Kim, D. A., Susskind, J. M., & Anderson, A. K. (2009). In bad taste: Evidence
for the oral origins of moral disgust. Science, 323(5918), 1222-1226.
Cohen, T. R., Montoya, R. M., & Insko, C.A. (2006). Group morality and intergroup relations:
cross-cultural and experimental evidence. Personality & social psychology bulletin,
32(11), 1559–72. doi:10.1177/0146167206291673
Cosmides, L., & Tooby, J. (1994). Origins of domain specificity: the evolution of functional
organization. In L. A. Hirschfeld & S. A. Gelman (Eds.), Mapping the Mind: Domain-
specificity in Cognition and Culture (pp. 85–116). Cambridge: Cambridge University
Press.
DeScioli, P., Asao, K., & Kurzban, R. (2012). Omissions and byproducts across moral domains.
PloS one, 7(10), e46963. doi:10.1371/journal.pone.0046963
DeScioli, P., Christner, J., & Kurzban, R. (2011). The omission strategy. Psychological science,
22(4), 442–6. doi:10.1177/0956797611400616
DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112(2), 281–99.
doi:10.1016/j.cognition.2009.05.008
DeScioli, P., & Kurzban, R. (2013). A solution to the mysteries of morality. Psychological
bulletin, 139(2), 477–96. doi:10.1037/a0029065
Dungan, J., Chakroff, A., & Young, L. (submitted). Purity Versus Pain: Distinct moral concerns
for self versus other.
Dungan, J., Waytz, A., & Young, L. (in prep). Distinct motivations drive prioritization of loyalty
versus fairness.
Dungan, J., & Young, L. (2012). Moral Psychology. A Companion to Moral Anthropology, 578-
594.
Fincher, C. L., Thornhill, R., Murray, D. R., & Schaller, M. (2008). Pathogen prevalence predicts
human cross-cultural variability in individualism/collectivism. Proceedings of the Royal
Society B: Biological Sciences, 275(1640), 1279-1285.
Gilligan, C. (1982). In a Different Voice: Psychological Theory and Women’s Development.
Cambridge: Harvard University Press.
Glenn, A. L., Iyer, R., Graham, J., Koleva, S., & Haidt, J. (2009). Are all types of morality
compromised in psychopathy? Journal of personality disorders, 23(4), 384–98.
doi:10.1521/pedi.2009.23.4.384
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., & Ditto, P.H. (2013). Moral
Foundations Theory: The pragmatic validity of moral pluralism. Advances in
Experimental Social Psychology, 47, 55-130.
Graham, J., Haidt, J., & Nosek, B.A. (2009). Liberals and conservatives rely on different sets of
moral foundations. Journal of personality and social psychology, 96(5), 1029–46.
doi:10.1037/a0015141
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral
domain. Journal of personality and social psychology, 101(2), 366–85.
doi:10.1037/a0021847
Gray, K. & Wegner, D.M. (2011). Dimensions of Moral Emotions. Emotion Review, 3(3), 258–
260. doi:10.1177/1754073911402388
Gray, K., Young, L., & Waytz, A. (2012). Mind Perception Is the Essence of Morality.
Psychological inquiry, 23(2), 101–124. doi:10.1080/1047840X.2012.651387
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate
culturally variable virtues. Daedalus, 133(4), 55-66.
Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat
your dog? Journal of Personality and Social Psychology, 65(4), 613–628.
Haidt, J., & Baron, J. (1996). Social roles and the moral judgement of acts and omissions.
European Journal of Social Psychology, 26, 201–218.
Haidt, J., & Joseph, C. (2007). The moral mind: How five sets of innate intuitions guide the
development of many culture-specific virtues, and perhaps even modules. In P. Carruthers,
S. Laurence, & S. Stich (Eds.), The Innate Mind (Vol. 3). New York: Oxford University Press.
Harris, S. (2010). The moral landscape: How science can determine human values. New York:
Free Press.
Iyer, R., Koleva, S., Graham, J., Ditto, P., & Haidt, J. (2012). Understanding libertarian morality:
the psychological dispositions of self-identified libertarians. PloS one, 7(8), e42366.
doi:10.1371/journal.pone.0042366
Kelly, D., Stich, S., Haley, K. J., Eng, S. J., & Fessler, D.M.T. (2007). Harm, Affect, and the
Moral/Conventional Distinction. Mind & Language, 22(2), 117–131. doi:10.1111/j.1468-
0017.2007.00302.x
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization.
In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 151–235). New
York: Academic Press.
Kohlberg, L. (1981). Essays on Moral Development, Volume 1: The Philosophy of Moral
Development. New York: Harper & Row.
Leidner, B., & Castano, E. (2012). Morality shifting in the context of intergroup violence.
European Journal of Social Psychology, 42(1), 82–91. doi:10.1002/ejsp.846
Lewis, G.J., Kanai, R., Bates, T.C., & Rees, G. (2012). Moral values are associated with
individual differences in regional brain volume. Journal of Cognitive Neuroscience, 24(8),
1657-1663. doi: 10.1162/jocn_a_00239
Mikhail, J.M. (2007). Universal moral grammar: theory, evidence and the future. Trends in
Cognitive Sciences, 11(4), 143–152.
Napier, J.L., & Luguri, J.B. (2013). Moral Mind-Sets: Abstract Thinking Increases a Preference
for “Individualizing” Over “Binding” Moral Foundations. Social Psychological and
Personality Science. doi:10.1177/1948550612473783
Parkinson, C., Sinnott-Armstrong, W., Koralus, P.E., Mendelovici, A., McGeer, V., & Wheatley,
T. (2011). Is morality unified? Evidence that distinct neural systems underlie moral
judgments of harm, dishonesty, and disgust. Journal of cognitive neuroscience, 23(10),
3162–80. doi:10.1162/jocn_a_00017
Rozin, P. (1997). Moralization. In A. Brandt & P. Rozin (Eds.), Morality and Health. (pp. 379–
401). New York: Routledge.
Rozin, P. (1999). The Process of Moralization. Psychological Science, 10(3), 218–221.
doi:10.1111/1467-9280.00139
Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: A mapping
between three moral emotions (contempt, anger, disgust) and three moral codes
(community, autonomy, divinity). Journal of Personality and Social Psychology, 76(4),
574–586.
Rozin, P., Markwith, M., & Stoess, C. (1997). Moralization and becoming a vegetarian: The
transformation of preferences into values and the recruitment of disgust. Psychological
Science, 8(2), 67-73.
Russell, P.S., & Giner-Sorolla, R. (2010). Moral Anger Is More Flexible Than Moral Disgust.
Social Psychological and Personality Science, 2(4), 360–364.
doi:10.1177/1948550610391678
Russell, P.S., & Giner-Sorolla, R. (2011). Moral anger, but not moral disgust, responds to
intentionality. Emotion (Washington, D.C.), 11(2), 233–40. doi:10.1037/a0022598
Russell, P.S. & Giner-Sorolla, R. (2013). Bodily moral disgust: What it is, how it is different
from anger, and why it is an unreasoned emotion. Psychological bulletin, 139(2), 328–51.
doi:10.1037/a0029319
Seidel, A., & Prinz, J. (2013). Sound morality: Irritating and icky noises amplify judgments in
divergent moral domains. Cognition, 127(1), 1–5. doi:10.1016/j.cognition.2012.11.004
Shweder, R.A., Much, N.C., Mahapatra, M., & Park, L. (1997). The "big three" of morality
(autonomy, community, and divinity), and the "big three" explanations of suffering. In A.
Brandt & P. Rozin (Eds.), Morality and health (pp. 119-169). New York: Routledge.
Sosis, R., & Bressler, E. (2003). Cooperation and Commune Longevity: A Test of the Costly
Signaling Theory of Religion. Cross-Cultural Research, 37, 211–239.
Suhler, C.L., & Churchland, P. (2011). Can Innate, modular “foundations” explain morality?
Challenges for Haidt’s Moral Foundations Theory. Journal of cognitive neuroscience,
23(9), 2103–16; discussion 2117–22. doi:10.1162/jocn.2011.21637
Tomasello, M., & Vaish, A. (2013). Origins of human cooperation and morality. Annual review
of psychology, 64, 231–55. doi:10.1146/annurev-psych-113011-143812
Turiel, E. (1983). The Development of Social Knowledge: Morality and Convention. Cambridge:
Cambridge University Press.
Valdesolo, P., & DeSteno, D. (2007). Moral hypocrisy: Social groups and the flexibility of
virtue. Psychological Science, 18(8), 689–690.
van Leeuwen, F., Park, J. H., Koenig, B. L., & Graham, J. (2012). Regional variation in pathogen
prevalence predicts endorsement of group-focused moral concerns. Evolution and Human
Behavior, 33(5), 429-437.
Waytz, A., Dungan, J., & Young, L. (2013). The Whistleblower’s Dilemma and the Fairness-
Loyalty Tradeoff. Journal of Experimental Social Psychology, 49, 1027-1033.
Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe.
Psychological Science, 16(10), 780–784.
Wright, J., & Baril, G. (2011). The role of cognitive resources in determining our moral
intuitions: Are we all liberals at heart? Journal of Experimental Social Psychology.
Young, L., Chakroff, A., Dungan, J., Koster-Hale, J., & Saxe, R. (in prep). Theory of mind for
evaluating assault not incest: Selective neural encoding of acts as intentional versus
accidental.
Young, L., & Dungan, J. (2012). Where in the brain is morality? Everywhere and maybe
nowhere. Social neuroscience, 7(1), 1-10.
Young, L., & Saxe, R. (2011). When ignorance is no excuse: Different roles for intent across
moral domains. Cognition, 120(2), 202–214. doi:10.1016/j.cognition.2011.04.005
Zhong, C. B., & Liljenquist, K. (2006). Washing away your sins: Threatened morality and
physical cleansing. Science, 313(5792), 1451–1452. doi:10.1126/science.1130726
Further Readings
Amit, E., & Greene, J. D. (2012). You see, the ends don't justify the means: Visual imagery and
moral judgment. Psychological Science, 23(8), 861-868.
Cushman, F., Murray, D., Gordon-McKeon, S., Wharton, S., & Greene, J. D. (2012). Judgment
before principle: engagement of the frontoparietal control network in condemning harms
of omission. Social cognitive and affective neuroscience, 7(8), 888-895.
Janoff-Bulman, R., & Carnes, N. C. (in press). Surveying the moral landscape: Moral motives
and group-based moralities. Personality and Social Psychology Review.
Shaw, A., DeScioli, P., & Olson, K. R. (2012). Fairness versus favoritism in children. Evolution
and Human Behavior, 33(6), 736–745.
Young, L., & Tsoi, L. (in press). When mental states matter, when they don’t, and what that
means for morality. Social and Personality Psychology Compass, 7(8), 585–604.
Biographies
James Dungan received his undergraduate degree in Brain and Cognitive Sciences from
the Massachusetts Institute of Technology and is now a graduate student in the Boston College
Psychology Department. He is a member of the Morality Lab, under the direction of Liane
Young. His research investigates the cognitive and neural processes underlying moral judgment
using methods from social psychology and cognitive neuroscience. He is supported by the
National Science Foundation Graduate Research Fellowship.
Liane Young is an assistant professor in the Department of Psychology at Boston College,
where she directs the Morality Lab. Her research focuses on the cognitive and neural bases of
human moral judgment, including the roles of mental state reasoning and emotional processing.
Her research relies on the tools of social psychology and cognitive neuroscience, including
functional magnetic resonance imaging, transcranial magnetic stimulation, and the study of
patients with cognitive and neural deficits. Young received her BA in philosophy (2004) and her
PhD in cognitive psychology (2008) from Harvard University, after which she did post-doctoral
work in Cognitive Neuroscience at MIT’s Brain and Cognitive Sciences Department.
Personal webpage: https://www2.bc.edu/liane-young/
Curriculum vitae: https://www2.bc.edu/liane-young/CV.html
Morality Lab at Boston College: http://moralitylab.bc.edu/