As I said at the beginning of the last post on moral psychology, things have been changing in the study of moral psychology. Part of the motivation for the change has been the work of neuroscientists like Antonio Damasio that was described in the first post, which has shown that primarily rationalist theories like those of Kohlberg and his descendants simply aren't sufficient to account for human moral judgments and behavior. For the rationalists, moral behavior is largely the product of deliberative, consciously-available reasoning. For Damasio, you may recall, feelings or emotions are by and large guiding our reasoning, rather than the other way around. Furthermore, they are doing so below the level of what Damasio calls extended consciousness, which is what most of us would simply call consciousness1. There have been other motivations for the change, though. For some, those motivations have to do with the differences in learning and processing involved in the symbolic architectures of old-school cognitive science and connectionist models. For others, the motivations have come from the increasingly popular belief among social psychologists that most social cognition is automatic and unconscious. Whatever the motivations, the results are pretty much the same: shedding the emphasis on conscious moral reasoning, and instead focusing on automatic and unconscious moral judgments. Since I covered the work of Damasio and other neuroscientists in the first post, I'll focus on the connectionists and the social psychologists in this one. In particular, I'll describe the theories of Paul Churchland and his colleagues (most notably, Andy Clark), and the social intuitionist model of Jonathan Haidt, along with related work.
Prototypes and Exemplars vs. Rules
"Stateable rules are not the basis of one's moral character. They are merely its pale and partial reflection at the comparatively impotent level of language." - P.M. Churchland2

That quote pretty much sums up the recent connectionist-inspired position on moral reasoning. There hasn't been a great deal of empirical research or modeling of moral reasoning, to date, but the theoretical position has been supported by the work that I'll discuss in the next section. The basic idea is this: we know, the argument goes, from research on categorization and concepts, that categories are not represented as statements and definitions, as once thought, but as similarity-based structures such as prototypes and exemplars3. On this view, categorization is, in essence, pattern recognition, and the use of categories is largely associative (here is a short primer on connectionism to help explain how these processes work). Because of this, using moral principles that are stated in rule form will not work, because the representational material needed to do so is absent.
Instead, these theorists argue, moral reasoning uses a moral state space, within which a wide range of situation types are represented4. Within this state space, those situations will be represented as prototypes, with prototypical features of those situations encoded in the representation. This has the implication that we will recognize, and react to, moral situations by relying on a few prototypical features. Obviously, this is sub-optimal, in that situations that are highly similar to certain prototypes, i.e., situations that have many of the prototypical features of a particular type of moral situation, will activate moral emotions or intuitions associated with those situations, even if those emotions and intuitions are not relevant for the particular situation.
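The mechanism being claimed here can be made concrete with a toy sketch. Everything below (the feature names, the two situation types, the numbers) is a made-up illustration, not anything from Churchland or Clark; the point is only the mechanism: a new situation activates whichever stored prototype it most resembles, with no stateable rule consulted anywhere.

```python
import math

# Toy "moral state space": each situation is a vector over hypothetical
# features (all names and numbers here are illustrative assumptions).
FEATURES = ["harm_caused", "consent_given", "intent_to_harm", "property_taken"]

# Prototypes: the typical feature pattern for two situation types.
PROTOTYPES = {
    "assault": [0.9, 0.1, 0.9, 0.1],
    "theft":   [0.2, 0.1, 0.6, 0.9],
}

def cosine(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def categorize(situation):
    """Return the best-matching prototype, plus all similarity scores."""
    scores = {name: cosine(situation, p) for name, p in PROTOTYPES.items()}
    return max(scores, key=scores.get), scores

# A situation sharing most of assault's prototypical features activates
# that category, even where individual features don't quite fit.
label, scores = categorize([0.8, 0.2, 0.7, 0.2])
```

Note that this also illustrates the sub-optimality mentioned above: a situation merely *similar* to a prototype will trigger the associated category (and, on the connectionist story, its associated emotions and intuitions) whether or not they are actually appropriate to the case at hand.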
This means that the study of moral psychology should focus not on rule-based reasoning, which insufficiently captures the ways in which we make moral judgments and may in fact be irrelevant to them (at least as a description of the judgment processes themselves, though it may be relevant to the ways in which we communicate our moral intuitions), but on moral intuitions and their associated emotions. This is, in fact, what social psychologists have begun to do, led in particular by the work of Jonathan Haidt.
Haidt describes his social intuitionist model (all page numbers will refer to the manuscript in the link) as a dual process model5. In dual process models6 there are, as the name suggests, two types of processes, Process I and Process II, which may run in parallel to each other. Process I is the intuitive process. It is automatic, requires little effort, and often occurs below the level of awareness. Process II is effortful, and usually occurs within consciousness. The social intuitionist model posits that in moral judgments, Process I is privileged, while Process II's purpose is largely to supplement and/or justify Process I.
What does it mean for moral judgments to be intuitive? It might help to think in terms of heuristics. This is, in fact, how Cass Sunstein describes them in a recent Behavioral and Brain Sciences target article7. He uses as an example the Asian disease problem, which goes like this (from the article, p. 9 of the linked manuscript):
Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences are as follows:

- If Program A is adopted, 200 people will be saved.
- If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Which of the two programs would you favor?

When given these two choices, people almost always pick Program A. However, when given the following two choices (p. 9-10):

- If Program C is adopted, 400 people will die.
- If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.

People usually choose Program D. Look closely at the four Programs. You should notice that the difference between Program A and Program C is only in the wording, as is the difference between Program B and Program D. Why is it that when the exact same situation is worded one way (Program A), people prefer it to an alternative (Program B), while when it is worded in another way (Program C), people don't prefer it to the same alternative worded differently (Program D)? Those of you familiar with the work of Kahneman and Tversky will immediately recognize this as an issue of framing. Changing the way you frame the options changes the heuristics we use to make decisions. In this case, choosing A but not C, and D but not B, is an instance of Kahneman and Tversky's "risk aversion" heuristic8: people avoid the gamble when the options are framed as gains (lives saved), but seek it out when the same options are framed as losses (deaths).
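That the two framings describe identical outcomes is easy to verify arithmetically. Here is a quick check using only the numbers from the problem itself (`Fraction` just keeps the thirds exact):

```python
from fractions import Fraction

TOTAL = 600
third = Fraction(1, 3)

saved_A = 200                             # "200 people will be saved"
dead_C = 400                              # "400 people will die"
saved_B = third * 600 + (1 - third) * 0   # gamble framed as lives saved
dead_D = third * 0 + (1 - third) * 600    # the same gamble framed as deaths

# A and C are the same outcome, as are B and D; in expectation,
# every program saves exactly 200 of the 600 people.
assert saved_A == TOTAL - dead_C == 200
assert saved_B == TOTAL - dead_D == 200
```

Since every program is equivalent in expectation, the preference reversal can't be driven by the outcomes themselves, only by how they are described.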
Another way of thinking about moral intuitions is in terms of an innate "moral grammar," analogous to the innate universal grammar of Chomsky's generative linguistics. This is how John Mikhail describes them in an unpublished paper on research he has conducted. Using famous moral dilemmas such as the trolley problem, he shows in several experiments that children as young as eight years old appear to be using two intuitive moral principles: the prohibition of intentional battery and the principle of double effect. Here are his descriptions of the two (p. 11):
The former is a familiar principle of both common morality and the common law proscribing acts of unpermitted, unprivileged bodily contact, that is, of touching without consent. The latter is a complex principle of justification, narrower in scope than the traditional necessity or "choice of evils" defense, which in its standard formulation holds that an otherwise prohibited action may be permissible if the act itself is not wrong, the good but not the bad effects are intended, the good effects outweigh the bad effects, and no morally preferable alternative is available.

He shows, as others have, that as in the Asian disease problem, people's responses to these sorts of dilemmas are not predictable from traditional moral rules, and that their justifications for their choices are not consistent with their actual choices over several problems. Instead, he believes, the responses demonstrate the existence of the two moral principles above. He argues that this implies the principles are intuitive, and furthermore, because they are not taught (since we're not actually aware of them) and appear at a young age, they may be innate. This is, in essence, a poverty of the stimulus argument for moral judgment.
Haidt himself focuses more on the role of emotion. Drawing on the work of Damasio described in the first post (linked above), Haidt argues that moral judgments consist, largely, of pattern recognition of the sort described by connectionist models, combined with the automatic associative activation of affective states. As evidence of the role of emotions in moral judgment, Haidt describes two experiments. In the first, conducted by Batson et al.9, participants were hooked up to machines that they were told would measure their bodily reactions, and listened to stories in which core values (freedom or equality) were violated. As they listened to the stories, they were given false feedback about their physical reactions. The participants were then asked which of the values (freedom or equality) should be used as the theme for a series of events at their university. Most participants chose the value that they had been told elicited the strongest physical reaction.
In the second experiment Haidt describes, Wheatley and Haidt10 hypnotized "highly hypnotizable" participants and suggested that when they heard one of two target words ("take" or "often") that would ordinarily elicit feelings of disgust, they would feel disgusted. Participants then read six stories, each of which contained one of the two target words, and were asked to rate their disgust level and moral condemnation of the actors in the stories. They found that participants gave higher disgust and moral condemnation ratings for stories containing the word suggested to cause disgust.
These pieces of evidence, along with Damasio's research, lead Haidt to believe that affective reactions are the driving force behind moral evaluations. In his discussion of these reactions, he focuses on what he calls the "moral emotions", though he recognizes that any emotion can affect moral judgment. He describes two features of moral emotions11:
- Disinterested elicitors: Moral emotions can be, and frequently are, triggered "even when the self has no stake in the triggering event" (p. 853).
- Prosocial action tendencies: Moral emotions elicit behaviors that "benefit others or else uphold or benefit the social order" (p. 854).
So, there are three ways of thinking about moral intuitions: heuristics, moral grammar, and affect-driven responses to patterns in the environment. All three are automatic, in the sense that we don't purposefully activate them; they occur below the level of awareness, meaning that we may not even be aware of their existence; and they require little effort.
The second type of process, Process II, in Haidt's dual process model is the conscious reasoning that we usually associate with moral judgment. However, unlike traditional moral theories in psychology and philosophy, Haidt believes that moral reasoning is largely post hoc, and meant to justify our moral beliefs and actions that are caused by intuitive moral processes. This is where Haidt's theory might become counterintuitive (pardon the pun) for some of us. While I imagine we've all experienced immediate, gut reactions to moral situations that did not take any deliberation, I doubt that many of us have seen our moral reasoning as strictly post hoc. But Haidt argues this is what it in fact is. As he puts it, we are more like "moral lawyers" than "moral scientists."
Haidt provides a few different lines of evidence for the post hoc nature of moral reasoning. First he discusses motivations that influence the reasoning process, but which serve more to justify our beliefs than to provide arguments for them. He discusses relatedness motives, which involve the desire for our beliefs and actions to be consistent with our social goals (e.g., agreeing with people we like), and coherence motives, which involve the desire for our beliefs and actions to be consistent with our self-image and with each other (think of cognitive dissonance). These types of motivations influence the sort of information or evidence we will consider when reasoning about our moral judgments. He also points to the literature on the "my-side" bias, which shows that people generally only consider evidence on one side of an issue -- their own side12. The my-side bias is so deeply entrenched that, even when we're evaluating the arguments of people who disagree with us, we'll rate them as better arguments if they only discuss evidence on one side of the issue. In other words, the my-side bias appears to be an integral part of how we think reasoning should be13. It is so persistent, in fact, that when we are searching for evidence for our position, we will often stop after finding a single piece of evidence14.
Haidt also discusses evidence from nonmoral domains that shows people using post hoc reasoning when the processes involved in causing a behavior or belief are automatic and unconscious, as Haidt believes they are in moral judgment. He cites a famous experiment by Nisbett and Schachter15 in which they gave participants in two conditions electric shocks as an illustration. Participants in the first condition received a pill, a placebo, and were told that the pill caused the same symptoms as electric shock. Participants in the second condition received no such pill. Participants in the first condition, amazingly, were able to stand four times as much electric shock as the participants in the second condition. When asked to explain why they could take more shock, the majority of the participants in the pill condition never mentioned the pill, but instead came up with post hoc explanations, such as the idea that they had had experience with electric shock at some point earlier in their lives and had thus built up a tolerance. In other words, they weren't aware of the reason for their increased tolerance, so they made reasons up.
When we combine the evidence for the role of intuition, particularly intuitive affective reactions, in moral judgment, and the evidence for the frequently post hoc functioning of conscious reasoning, Haidt believes we have strong evidence for his dual process theory of moral judgment. He then goes on to discuss the possible origins of moral intuitions. He cites de Waal's research16 on what Haidt calls "primate proto-morality." de Waal argues that nonhuman primates are the only nonhuman species that show evidence of "prescriptive" social rules, and Haidt believes that the affective and cognitive mechanisms that underlie these rules, combined with our superior communication abilities, also underlie humans' more sophisticated moral behavior. This is not to say that human and nonhuman primate moral systems are at all equivalent. Haidt writes (p. 18):
The above considerations are not meant to imply that chimpanzees have morality, nor that humans are just chimps with post-hoc reasoning skills. There is indeed a moral Rubicon that only Homo Sapiens appears to have crossed: widespread third party norm enforcement. Chimpanzee norms generally work at the level of private relationships, where the individual that has been harmed is the one that takes punitive action. Yet human societies are marked by a constant and vigorous discussion of norms and norm violators, and by a willingness to expend individual or community resources to inflict punishment, even by those who were not harmed by the violator.

Thus, the difference between human and nonhuman primate prescriptive social rules and behaviors appears to lie in the social and communicative aspects of our moral beliefs and behaviors. This leads Haidt to the "social" aspects of the social intuitionist model. In his model, cultural influences pick and choose from our innate moral repertoire. He writes (p. 19):
Shweder's theory of the "big three" moral ethics17 proposes that moral "goods" (i.e., culturally shared beliefs about what is morally good and valuable) generally cluster into three complexes, or ethics, which cultures embrace to varying degrees: the ethic of autonomy (focusing on goods that protect the autonomous individual, such as rights, freedom of choice, and personal welfare); the ethic of community (focusing on goods that protect families, nations, and other collectivities, such as loyalty, duty, honor, respectfulness, modesty, and self-control); and the ethic of divinity (focusing on goods that protect the spiritual self, such as piety, and physical and mental purity). A child is born prepared to develop moral intuitions in all three ethics, but her local cultural environment generally stresses only one or two of the ethics.

In addition to large-scale cultural influences on moral development, through socialization, Haidt notes that it is the social aspect of moral belief and behavior that often motivates the post hoc reasoning processes in moral judgment. In many instances, the only time we will feel the need to provide arguments and evidence for our moral beliefs is when others have challenged them. Consciously reasoning about them provides us not only with justifications for ourselves, but also with the material with which to communicate those justifications. Finally, as Pizarro and Bloom have noted in their commentary18 on Haidt's social intuitionist theory, taking the perspective of others, an ability that is likely uniquely human, can also influence our moral beliefs. Perspective-taking essentially provides us with new schemas and categories on which our intuitive judgment processes can operate. So, there are multiple avenues through which social interactions can influence our moral beliefs.
That, then, in a pretty long nutshell, is Haidt's social intuitionist model. If you've read the post on the rationalist theories in moral psychology, you'll quickly see how starkly this model contrasts with them. And the implications of this sort of intuitionist moral theory are quite different from those of rationalist theories. For example, while Haidt emphasizes the social aspect and third-person focus of intuitive moral beliefs, it is likely that automatic, unconscious moral processes will tend to be more egocentric than someone like Kohlberg believed higher-stage moral reasoning would be. Epley and Caruso19, for example, have argued that since taking a first-person perspective requires far fewer resources than taking a third-person perspective, most of our automatic judgment processes will be egocentric. It is only when we have the time and cognitive resources to reevaluate our egocentric reactions to moral situations that we can think in more other-focused terms. Another implication is that moral communication will be designed not so much to convince people through reasoning, or even to provide them with arguments for their positions, but to pass to others representations that will induce certain kinds of moral reactions. If this is the case, then people like George Lakoff, in his political writings, are moving in the right direction in analyzing discourse not in order to be able to argue better, but in order to be able to utilize and create certain representations in listeners and readers.
There are, of course, many more implications of this sort of perspective on moral psychology, but since this post is already reaching book length, I think I'll save a discussion of those for future posts. If you find these issues interesting, though, feel free to check out the references below (if I haven't provided a link to a paper, write me and I will try to get you a copy), or ask me about some of the other people doing theoretical and empirical work along these lines. Moral psychology has become a hot topic in cognitive science and social cognition, so there is no shortage of good reading material.
1In The Feeling of What Happens, Damasio describes two kinds of consciousness. The first, basic consciousness, is more primitive, and we share it with many nonhuman animals. It is awareness of our bodies and surroundings, and is free of all of the trappings of complex concepts of self, history, and effortful reasoning. The second is extended consciousness, which is where we experience a sense of self, autobiographical memories, language, and so on.
2Churchland, P. M. (1996). The neural representation of the social world. In L. May, M. Friedman, & A. Clark (Eds.), Mind and Morals. Cambridge, MA: MIT Press, 91-108.
3Recent research has, of course, challenged the strict similarity-based views of concepts, but connectionists still treat concepts as prototypes or exemplars almost exclusively.
4Clark, A. (1996). Connectionism, moral cognition, and collaborative problem solving. In L. May, M. Friedman, & A. Clark (Eds.), Mind and Morals. Cambridge, MA: MIT Press, 109-127.
5Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 108, 814-834.
6Kahneman, D., & Frederick, S. (2002) Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press.
7Sunstein, C.R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28(4), 531-542.
8Tversky, A. and Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. Quarterly Journal of Economics, 106(4), 1039-61.
9Batson, C. D., Engel, C. L., & Fridell, S. R. (1999). Value judgments: Testing the somatic-marker hypothesis using false physiological feedback. Personality and Social Psychology Bulletin, 25, 1021-1032.
10Wheatley, T., & Haidt, J. (2005). Hypnotically induced disgust makes moral judgments more severe. Psychological Science, 16, 780-784.
11Haidt, J. (2003). The moral emotions. In R.J. Davidson, K.R. Scherer, & H.H. Goldsmith (Eds.), Handbook of Affective Sciences. Oxford: Oxford University Press, 852-870.
12E.g., Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221-235.
14Perkins, D. N., Allen, R., & Hafner, J. (1983). Difficulties in everyday reasoning. In W. Maxwell (Ed.), Thinking: The Frontier Expands. Hillsdale, NJ: Erlbaum, 177-189.
15Nisbett, R. E., & Schachter, S. (1966). Cognitive manipulation of pain. Journal of Experimental Social Psychology, 2, 227-236.
16de Waal, F. (1996). Good Natured: The origins of right and wrong in humans and other animals, Cambridge, Mass: Harvard University Press.
17Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The "big three" of morality (autonomy, community, and divinity), and the "big three" explanations of suffering. In A. Brandt & P. Rozin (Eds.), Morality and Health, New York: Routledge, 119-169.
18Pizarro, D.A., & Bloom, P. (2003). The intelligence of moral intuitions: Comment on Haidt. Psychological Review, 110(1), 193-196.
19Epley, N., & Caruso, E. M. (2004). Egocentric ethics. Social Justice Research, 17(2), 171-187.