Tuesday, August 02, 2005

Moral Psychology II: The Life and Death of Moral Rationalism



WARNING: This post is long and, compared to the last one at least, quite boring. Read at your own risk.

The neuroscientific evidence in the previous post clearly suggests a moral psychology built around affect, theory of mind, and social schemas. The picture it paints is of a complex of unconscious, automatic processes combining to produce moral judgments. If you read the post carefully, though, you might have noticed that none of the works I cited were published prior to 1994. This is because within the empirical study of moral psychology, a change has been taking place for about a decade, inspired in part by the work of people like Antonio Damasio in neuroscience, as well as behavioral research that I'll discuss in a subsequent post. The change is from a Kantian or rationalist paradigm to a Humean, moral intuitionist one. The change is not total, in that there are still plenty of rationalists, but the intuitionist view is now the dominant one, at least outside of developmental psychology. But in order to really understand the change, it's important to understand what is being changed. That means understanding the rationalism that dominated the empirical study of moral psychology for decades, and the philosophical treatment of moral judgment at least since the late 18th century, if not since Plato. So in this post, I'm going to describe the more prominent reason-based theories of moral psychology.

A responsible blogger would start with a fairly detailed summary of Kantian moral psychology, both as it's found in Kant and in its contemporary formulations, because as Kant's picture hangs over this post, his philosophical legacy has hovered over moral psychology for decades. I, however, am not a responsible blogger, and am simply not qualified to provide such a summary anyway. If you're really interested, you might check out Patrick Smith's posts on the topic at Philosophy, et cetera (here, here, here, and here). Otherwise, you can make do with a couple quotes that illustrate quite well the aspects of the Kantian perspective that are present in the rationalist view of moral psychology today:
The Kantian approach to moral philosophy is to try to show that ethics is based on practical reason: that is, that our ethical judgments can be explained in terms of rational standards that apply directly to conduct or to deliberation. -Christine Korsgaard1

If the requirements of ethics are rational requirements, it follows that the motive for submitting to them must be one which it would be contrary to reason to ignore. -Thomas Nagel2
In other words, within the Kantian approach, moral psychology consists of the deliberate adherence to rationally conceived rules, or standards, of behavior. Central to this account is moral reasoning, in which we attempt to retrieve and use the appropriate rule in a given moral context. There is also another key element of Kantian moral philosophy that has been central to the empirical study of moral psychology, but which has extended into the new intuitionist approaches. It concerns the very definition of "moral." For moral psychologists, as for Kant, what distinguishes the "moral" from other social or interpersonal conventions and behaviors is that moral rules and behaviors are not strictly self-serving. They are in some way other-directed.

Both the emphasis on moral reasoning and the other-based definition of mature moral reasoning are reflected in the early but influential moral psychology of Jean Piaget. For Piaget, there were two stages in the course of moral development, characterized by the type of reasoning that children use while in them. The first was characterized by what Piaget called heteronomous reasoning, or self-centered reasoning. During this stage, children consider the consequences of their behavior for themselves, and act on that reasoning. The second stage, which occurs sometime between 7 and 9 years of age, involves what Piaget called autonomous reasoning, which is cooperative or altruistic. But Piaget's theory was never itself widely accepted. Instead, it was a student of his thought (though not an actual student of Piaget's), Lawrence Kohlberg, one of the members of the cognitive revolution, whose work gave shape to the empirical study of moral psychology until the intuitionists began to take over. So for the rest of the post, I'll discuss Kohlberg and those who came after him.

Lawrence Kohlberg

Like Piaget, Kohlberg theorized that moral development took place in stages. Also like Piaget, he believed that this development was strictly progressive (i.e., once a child had transitioned to a higher stage, he or she could not go back to the kind of reasoning used in an earlier stage), and that children always transitioned from their current stage to the next stage (i.e., they never skipped stages). Kohlberg also believed that these stages were universal, and conducted studies in a variety of cultures to demonstrate this. His theory also involved a transition from heteronomous to autonomous moral reasoning. However, his theory included six stages, divided into three levels. Here is a short description of each stage3:
  • First Level: Preconventional (Ages 2-8)
    • Stage 1: Obedience and Punishment Orientation. As the title of this stage suggests, children's explanations for following rules in this stage largely concern the consequences of breaking the rules for themselves. During this stage, they see rules as unquestionable and immutable.
    • Stage 2: Instrumental Exchange Orientation. During this stage, children's reasoning considers what's in it for them. During this stage, moral rules are not immutable and unquestionable, but subjective. Different self-interests will yield different rules. Punishment is still important, but in a different way. Here is how one author put it:
      At stage 1 punishment is tied up in the child's mind with wrongness; punishment "proves" that disobedience is wrong. At stage 2, in contrast, punishment is simply a risk that one naturally wants to avoid.
  • Second Level: Conventional (Ages 9-11)
    • Stage 3: Interpersonal Conformity Orientation. This stage contains elements of the more mature stages to follow, such as the belief that morality involves a sense of community, and duty, but also contains elements of the previous stages. In particular, it involves conformity to family or community standards in order to gain approval.
    • Stage 4: Law-and-Order Orientation. During this stage, moral reasoning involves considering what's best for the community. Laws are designed to maintain order, and keep society together. This is where you get comments like, "What would happen if everybody did it?" In other words, we're getting more Kantian by the year, now.
  • Third Level: Postconventional (Ages 12 and Up)
    • Stage 5: Prior Rights and Social Contract Orientation. From Crain (1985):
      Stage 5 respondents basically believe that a good society is best conceived as a social contract into which people freely enter to work toward the benefit of all. They recognize that different social groups within a society will have different values, but they believe that all rational people would agree on two points. First, they would all want certain basic rights, such as liberty and life, to be protected. Second, they would want some democratic procedures for changing unfair laws and for improving society.
    • Stage 6: Universal Ethical Principles Orientation. During this stage, people reason about ethical rules from an individualist, democratic perspective. Ethical rules are a product of individual reasoning, rather than being handed down from an authority. Justice and fairness are the guiding principles.
Interestingly, according to Crain (1985), Kohlberg at least temporarily stopped using the sixth stage. Stage six reasoning might be limited to the writings of certain philosophers or Lisa Simpson.

To develop and refine his theory, Kohlberg relied almost exclusively on one research instrument, the Moral Judgment Interview (MJI). This involves presenting participants with moral dilemmas in which two different principles are in conflict, and recording their resolution as well as their justifications for their position. The following is a typical dilemma (known as the Heinz dilemma) from the actual MJI5:
In Europe a woman was near death from cancer. There was one drug that the doctors thought might save her. It was a form of radium that a druggist was charging ten times what the drug cost to make. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman’s husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000. He told the druggist his wife was dying, and asked him to sell it cheaper or to let him pay later. But the druggist said, “No, I discovered the drug and I’m going to make money from it.” So Heinz got desperate and began to think about breaking into the man’s store to steal the drug for his wife.
For each answer to each dilemma (there are three dilemmas on each form of the test), researchers code the participant's reasoning into one of the six stages, using a standard list of answers. While Kohlberg required extensive training for researchers administering the MJI, and interrater reliability has tended to be high, you can probably imagine that the subjective coding scheme has led to some skepticism about the measure itself. As a result, James Rest developed the Defining Issues Test (DIT)6. This test includes six moral dilemmas, all taken from the MJI, along with twelve questions for each dilemma. The questions contain examples of reasoning from each of the six stages, and participants are asked to rate on a scale of 1 to 5 how much they would take the issues in each statement into consideration. Thus, the scores are more quantitative and less subjective. Rest has developed a developmental stage model using the DIT that is similar to Kohlberg's, but which allows for reasoning at multiple stages at one time. So, for example, according to Rest, people may reason at stage 4 for some problems and at stage 5 for others.
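Since the DIT's selling point here is that it yields quantitative scores, here is a minimal sketch of how importance ratings like the DIT's might be tallied into a per-stage profile. To be clear, this is not Rest's actual scoring procedure: the item-to-stage mapping, the example ratings, and the aggregation rule are all invented for illustration (the real test uses standardized items and summary indices).

```python
# Illustrative sketch only: NOT Rest's actual DIT scoring procedure.
# The item-to-stage mapping and the aggregation rule are invented for the example.
from collections import defaultdict

# Hypothetical mapping: for one dilemma, which Kohlbergian stage each of the
# twelve issue statements is meant to exemplify.
ITEM_TO_STAGE = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1, 6: 4,
                 7: 5, 8: 3, 9: 6, 10: 2, 11: 6, 12: 4}

def stage_profile(ratings):
    """Sum a participant's 1-5 importance ratings by stage.

    `ratings` maps item number -> rating. Returns the share of the total
    rated importance attributed to each stage, so higher-stage reasoning
    shows up as a larger proportion of the profile.
    """
    totals = defaultdict(float)
    for item, rating in ratings.items():
        totals[ITEM_TO_STAGE[item]] += rating
    grand_total = sum(totals.values()) or 1.0
    return {stage: round(score / grand_total, 3)
            for stage, score in sorted(totals.items())}

# One hypothetical participant's ratings for the twelve items of a single dilemma.
example = {1: 2, 2: 4, 3: 5, 4: 5, 5: 1, 6: 4, 7: 3, 8: 2, 9: 1, 10: 1, 11: 2, 12: 5}
print(stage_profile(example))  # e.g., {1: ..., 2: ..., 3: ..., 4: ..., 5: ..., 6: ...}
```

The point is only that, once each item is tied to a stage, scoring becomes a matter of arithmetic rather than a researcher's judgment about free-form interview answers.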

While neither Kohlberg's nor Rest's theories are widely accepted today, at least outside of developmental psychology, they serve as good illustrations of the important points I want to make about rationalist views of moral psychology. First, moral reasoning is central. The stages themselves are defined by the types of reasoning that people use to justify moral decisions. People are consciously aware of this reasoning, and can articulate it. In fact, the tests of moral reasoning (the MJI and the DIT) rely entirely on participants' articulations of their reasoning, and thus on their conscious awareness of that reasoning. Another important point, which I haven't yet mentioned, is the Kohlbergian view of the mechanisms underlying moral judgment. These mechanisms are not innate, but they are not learned through socialization either. Instead, they are offshoots of the cognitive abilities that people have at various stages of development. In other words, moral reasoning is a subtype of more general reasoning mechanisms, and comes about through the interaction of those mechanisms with social contexts and rules. Moral reasoning is part of a larger practical reasoning system.

Turiel & Gilligan

As with any dominant paradigm, it didn't take long until the Kohlbergian view of moral development began to be challenged, but as is also the case with most dominant paradigms, the first challenges primarily came from within the paradigm itself. One of the most interesting and controversial challenges came from a student of Kohlberg's named Carol Gilligan. In her 1982 book In a Different Voice7, she argued that the fact that the vast majority of Kohlberg's research subjects were male had biased his theory toward an ethic of justice. Females, she claimed, were more inclined to reason from an "ethic of care." As you might imagine, her claim that there were sex differences in moral reasoning was highly controversial, and the evidence hasn't borne it out. As one author put it in a review of the literature, the huge sex differences in moral reasoning that Gilligan claimed are "mythic"8. But she was right about one thing: the emphasis on justice and fairness, to the exclusion of things like empathy and care, limited Kohlberg's theory.

A separate critique of Kohlbergian theories came from another school of rationalists, headed by Elliot Turiel. Over the years, researchers like Rest began to notice that some children, even at a very young age (e.g., ages at which they should be squarely within the first two Kohlbergian stages), used justifications that spanned multiple stages. Some theorists, particularly Turiel, argued that this meant that a graduated stage model, even a loose one like Rest's, was insufficient. Instead, he and others developed what they call the domain theory of moral reasoning, or what some are calling the social interactionist view. Central to this theory is the distinction between moral rules and mere social conventions. Turiel and his colleagues have argued that children are able to recognize the distinction between the two at a much younger age (as young as 3 years) than Kohlberg's or Rest's theories allow. In a typical experiment9, Turiel would present children with different situations (not dilemmas), some of which involved violating moral rules or laws (such as "thou shalt not steal"), and others that involved violating social conventions (such as "do not talk loudly in the library"). Children were then asked whether it would be OK to commit the acts that break the rule (steal, or talk loudly in the library) even if there were no such rule. Children who recognize the moral-conventional distinction answer no for moral rules and yes for mere social conventions (no, it wouldn't be OK to steal if there were no rule against it, but yes, it would be OK to talk in the library if there were no rule prohibiting it). Turiel argued that this indicates that from a young age, moral reasoning and nonmoral social reasoning are distinct, and thus utilize different mechanisms. Moral reasoning concerns justice and fairness beginning early in development (recall that justice and fairness don't appear in Kohlberg's model until the last level), and children recognize that violations of moral rules have negative consequences even when the rules are not explicitly stated. Also in contrast with Kohlberg, Turiel believes that the sense of justice and fairness that underlies children's moral reasoning is learned, primarily through social interactions and observations. It is through social interaction that children learn the consequences of certain actions, and therefore come to recognize that those actions are wrong even when there are no rules against them.
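To make the logic of these probes concrete, here is a minimal sketch of how a child's answers might be coded as treating a rule as moral or conventional. This is not Turiel's actual interview or coding scheme; the rule examples, field names, and criteria (rule-independence here, plus the authority-independence probe that comes up again with Nichols below) are simplified stand-ins for illustration.

```python
# Illustrative sketch only: not Turiel's actual interview or coding scheme.
# The rule set and the classification criterion below are simplified for the example.
from dataclasses import dataclass

@dataclass
class Response:
    rule: str                        # the rule the scenario violates
    wrong_without_rule: bool         # child says the act would still be wrong with no such rule
    wrong_if_authority_allows: bool  # child says it stays wrong even if an authority permits it

def classify(resp: Response) -> str:
    """Code a rule as 'moral' or 'conventional' from a child's criterion judgments.

    Simplified criterion: violations treated as wrong independently of the
    explicit rule and of authority are coded as moral; otherwise conventional.
    """
    if resp.wrong_without_rule and resp.wrong_if_authority_allows:
        return "moral"
    return "conventional"

responses = [
    Response("do not steal", wrong_without_rule=True, wrong_if_authority_allows=True),
    Response("do not talk loudly in the library", wrong_without_rule=False,
             wrong_if_authority_allows=False),
]

for r in responses:
    print(f"{r.rule!r} -> {classify(r)}")
# 'do not steal' -> moral
# 'do not talk loudly in the library' -> conventional
```

The sketch just isolates the criterion: wrongness that survives the removal of the rule (and of the authority behind it) is what marks a violation as moral in this framework.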

While both Gilligan and Turiel diverge from Kohlberg in some ways, the central themes remain the same. Their concepts of what morality is, and of how moral judgment takes place, are still firmly rationalist. But as I said at the beginning of this post, and as the previous post indicates, things have been changing over the last decade or so. One might even say that outside of developmental psychology, rationalism is no longer the majority view. However, there are hybrid theories that retain some elements of rationalism, but with affect and intuition playing large roles as well. I'll quickly describe two, as a transition into the next post.

Blair & Nichols

James Blair, a neuroscientist, has taken an interesting approach to moral psychology: he studies psychopaths. From this research, he's come to view moral reasoning as insufficient to account for the differences between psychopaths and normal individuals in moral judgment. In particular, psychopaths are unable to distinguish the moral from the conventional (they have other relevant deficits, but this is the most illustrative one), which most children can do by around age three. Their deficits appear to lie in affect, rather than reasoning. Their emotional responses to moral violations are no different from their emotional responses to conventional ones. Blair argues that to account for this, we need an affective mechanism that directly influences moral judgment. He calls this mechanism the Violence Inhibition Mechanism10, which is designed to cause an emotional reaction to human suffering that inhibits aggression and, perhaps, promotes empathy. However, the Violence Inhibition Mechanism doesn't do it alone. There's still something like reasoning going on in what Blair calls "meaning analysis." Now, it's not clear from Blair's writing what, exactly, meaning analysis is, and it could be something more like the automatic, schematic processes involved in intuitionist theories, but from his discussions of it, I get the impression that it is more conscious and deliberative. It involves the interpretation of the situations in which the Violence Inhibition Mechanism is activated, and from what I can tell, Blair believes that this interpretation can be done consciously and deliberatively, which means that his theory contains at least some elements of the rationalist school.

The second hybrid account comes from the experimental philosopher Shaun Nichols. He has argued that Blair's Violence Inhibition Mechanism can't explain Blair's own data, and has produced data of his own to support an account in which disgust and other emotions play a key role. For example, in one experiment, he presented participants with three types of violations: moral (e.g., a person hitting another person), conventional (e.g., someone drinking soup from the bowl at a dinner party), and conventional but disgusting (e.g., someone spitting in his drink and then drinking it)11. He argued that if affect plays a key role in the moral-conventional distinction, then disgust-inducing conventional violations should resemble moral violations more than merely conventional ones. This is, in fact, what he found. As with moral violations, the disgust-inducing violations were rated as worse than the merely conventional ones, and participants said they would have been wrong even if an authority figure said they were OK.

According to Nichols, then, affect plays a central role in moral judgment. (By the way, this is the end of my second long post on the topic, and I still put two e's in judgment every time I type it!!!!) But affect doesn't do it alone. Moral reasoning is still central. At least I think that's the case. As with Blair, and pretty much every "cognitive" account of moral judgment (i.e., any account that's not by a neuroscientist or a social psychologist), things get pretty vague after the talk about affect. Nichols has explicitly argued against what he calls "empirical rationalism"12, which encompasses the views of Kohlberg, Turiel, Gilligan, and the like, but his account also requires the "understanding" of normative rules governing moral behavior (and distinguishing the moral from the merely disgust-inducing), and he even calls this understanding a "normative theory." That sounds like it involves reasoning to me. In fact, since it is precisely the theory-like knowledge view of moral reasoning that the intuitionists are rebelling against, calling it a "normative theory" seems like a pretty straightforwardly rationalist move. He also uses, in his experiments, the sorts of evidence that rationalists, but not intuitionists, would use, asking people why the violations were bad. If moral reasoning isn't important, then moral reasoning data isn't important. Still, this is clearly different from strict rationalism. If affect is involved, and is even guiding reasoning, then moral reasoning is no longer the arbiter of moral decisions. That makes Nichols a hybrid theorist.

It's no longer controversial to claim that affect is involved in moral judgment, and that it must be central in any theory of moral psychology. The neuroscientific work, along with that of Blair, Nichols, and many others, has ensured this. It's safe to say that strict rationalism is dead as a viable view in the field. Yet, the view that I will talk about in the next post, Jonathan Haidt's social intuitionism, is a radical departure from rationalism. You won't read anything about "theories" or "understanding" in Haidt's work. If you're a moral rationalist at heart, Haidt's work is going to be a wild ride, and you may get nauseated. I just want to warn you up front.

1 Korsgaard, Christine (1986). Skepticism about practical reason. Journal of Philosophy, 83(1), 5-25. I found this quote, and the next, in this paper: Nichols, S. (2002a). How psychopaths threaten moral rationalism, or is it irrational to be amoral? The Monist, 85, 285-304.
2 Nagel, T. (1970). The Possibility of Altruism, Princeton, NJ: Princeton University Press.
3 Crain, W.C. (1985). Theories of Development: Concepts and Applications, 2nd Edition. Englewood Cliffs: Prentice-Hall.
4 Kohlberg, L., & Turiel, E. (1971). Moral development and moral education. In Gerald Lesser (ed.), Psychology and Educational Practice, Glenview, IL: Foresman, pp. 410-465.
5 White, R.D. (1997). Ethics and hierarchy: The influence of a rigidly hierarchical organizational design on moral reasoning. Pennsylvania: Pennsylvania State University.
6 Rest, J. (1979). Development in Judging Moral Issues, Minneapolis: University of Minnesota Press.
7 Gilligan, C. (1982). In a Different Voice: Psychological Theory and Women's Development. Cambridge: Harvard University Press.
8 Brabeck, M. (1983). Moral judgment: Theory and research on differences between males and females. Developmental Review, 3, 274-291.
9 Turiel, E. (1983). The Development of Social Knowledge: Morality & Convention, New York: Cambridge University Press.
10 Blair, R. (1995). A cognitive developmental approach to morality: Investigating the psychopath. Cognition, 57, 1-29.
11 Nichols, S. (2002b). Norms with feeling: Towards a psychological account of moral judgment. Cognition, 84, 221-236.
12 Nichols, S. (2002a). See footnote 1.

4 comments:

Clark Goble said...

Very interesting.

Could you perhaps expand a little on the meaning of "affect" in cog sci? It was thrown around a lot in that article but I was a little fuzzy on its exact meaning. It sounds like some mental emotional consequence of actions. But that sounds a bit vague and perhaps too broad. For instance if I have a bad experience with chinese food in the past and get nauseous at the thought or smell of chinese food for years afterwards, is that "affective" in the sense you are using it?

Chris said...

haha... no Fowler.

Clark, I tend to use affect and emotion interchangeably, though it might be better to call emotion the phenomenal experience of affect. A wide range of things can fall under the heading of affect, from Damasio's "somatic markers" to, perhaps, Gilligan's empathy (at least the emotional content of empathy; there are also cognitive components).

Anonymous said...

Hi Chris,

As you develop your post on Haidt's social intuitionist model, keep in mind that the empirical support is, so far, pretty slim (e.g., Wheatley & Haidt, 2005). Carefully read Wheatley and Haidt 2005. In my humble opinion, the data are weak and support counterarguments.

I just wanted to keep your readers aware of the current state of his theory (at least in my mind).

Thanks for the series of posts.

Scar Symetry said...

Great site