An entrée of Cognitive Science with an occasional side of whatever the hell else I want to talk about.
Thursday, February 16, 2006
Koufax
So I discovered a little while ago that Mixing Memory has been nominated for a Koufax Award for Best Expert Blog. I'm pretty sure I have Coturnix to thank for that, so thank you. Anyway, despite my huge ego, I don't feel like I could compete with, well, any of those other blogs. I know I, for one, will be voting for RealClimate, because those guys are seriously smart, and their posts are always informative. It's nice to know that someone (i.e. Coturnix) has a high opinion of this blog, though, and it's nice to be mentioned along with some of those other blogs.
Monday, February 13, 2006
What Framing Analysis Is
I've reached a point at which I cringe every time I see the words "framing" or "Lakoff" in the blogosphere or mainstream media, and when I see them together, I damn near have a seizure. Two things are clear to me now:
- Most people don't really feel the need to actually read something about frame analysis, even if it's only Lakoff, before they develop opinions about its worth.
- Lakoff himself hasn't done a very good job of explaining what frame analysis is, because even the people who do appear to have read him usually don't get it.
Several misconceptions reappear, again and again. For example, "Lakoff is a postmodernist." If we put aside for a moment the fact that I haven't the slightest idea what that means, I can't imagine that if we compared Lakoff to many of the thinkers who are labeled "postmodernists" in more serious circles, we'd find a whole hell of a lot in common. I mean, sure, Lakoff readily admits to being a "relativist," but the sort of relativism he's talking about is probably not all that common among analytic philosophers, and I know it's not uncommon among cognitive scientists, and I don't think we can count many in either of those crowds as members of "postmodernist" schools of thought. The "relativism" of Lakoff, and many other cognitive scientists, simply says that our background knowledge influences our understanding and interpretation of facts and language. And that, in a nutshell, is what frame analysis is about.
Another common misconception is that framing is just a marketing tool. To be fair, I don't think that Lakoff has done a particularly good job of dispelling that notion, as when he talks about who should be the spokesperson for evolutionary science. And it's certainly true that framing is often used for marketing purposes, both in advertising and in political rhetoric. But that's not all it is. Frame analysis is a tool for interpreting discourse, and a tool for more effective communication. The goal of framing doesn't have to be persuasion; it can simply be understanding.
A related misconception is that the purpose of framing is to manipulate or trick people into taking your side. Again, it can be used for that purpose, but I've never gotten the impression that Lakoff is advocating that type of use, and that's certainly not what frame analysis is about.
So, since my frustration has peaked, I thought I'd attempt to provide a very brief explanation of frame analysis, with the hope that it will clear up those misconceptions and others. This is all ground I've covered before, but what the hell? I'll cover it again.
First, what is a frame? Frames are essentially schemas in the head, or words and phrases used to elicit schemas in the head (follow the last link for more technical definitions). When referring to frames that are in people's heads, we're talking about knowledge structures in long-term memory that are used to interpret incoming information, and to reason. An example of a mental frame might be the FANCY RESTAURANT frame. You have specific knowledge of the order of events to expect in a fancy restaurant (the order of the courses, when the check comes, and even how often the server should check up on you), how to behave in a fancy restaurant (you don't eat with your hands, for example), and so on. When you're actually in a fancy restaurant, the frame serves to highlight certain information and create expectations (which is why you get pissed off if the server leaves your drink near empty or watered down for too long), while it causes you to ignore other information (like, say, the color of the plates, unless you're just really into plates). Frames in language are designed to take advantage of those representations or change them, by highlighting certain parts of them or additional facts or associations, and perhaps ignoring others.
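To make the schema idea concrete, here's a toy sketch of a frame as a knowledge structure: slots with default expectations that highlight some incoming information and filter out the rest. This is purely my illustration, not anything from the cognitive science literature, and every feature name in it is invented:

```python
# A toy frame: expectations that highlight some observed features
# and a filter that ignores others. All names are illustrative only.

FANCY_RESTAURANT = {
    "event_order": ["seated", "drinks", "appetizer", "entree", "dessert", "check"],
    "expectations": {"drink_kept_full": True, "eat_with_hands": False},
    "ignored": {"plate_color"},  # present in the scene, but not encoded
}

def interpret(frame, observation):
    """Return the observed features the frame highlights, plus any that
    violate its expectations. Unlisted features are simply dropped."""
    highlighted, violations = {}, []
    for feature, value in observation.items():
        if feature in frame["ignored"]:
            continue  # the frame filters this out entirely
        if feature in frame["expectations"]:
            highlighted[feature] = value
            if frame["expectations"][feature] != value:
                violations.append(feature)
    return highlighted, violations

obs = {"drink_kept_full": False, "plate_color": "blue", "eat_with_hands": False}
highlighted, violations = interpret(FANCY_RESTAURANT, obs)
# violations -> ["drink_kept_full"]  (why you get annoyed at the empty glass)
```

The point of the sketch is just the highlight/ignore asymmetry: the plate color never makes it into the interpretation, while the neglected drink jumps out as an expectation violation.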
So frame analysis will consist of three stages (maybe only two, if you're a linguist... sorry, I couldn't resist that little jab):
1.) Discovering the mental frames that people already have. If you don't do this, you won't know what information you should highlight or add, or what information you should de-emphasize.
2.) Developing an understanding of what it is you want to communicate. What do you want to make more salient in people's mental frames, and what do you want to add to their knowledge?
3.) Framing your speech and writing in such a way that it accomplishes the goals from 2) given 1).
That's it, really. You're just developing an understanding of how people are representing something, deciding what you want to communicate, and choosing your wording based on the combination of those two things. There's nothing inherently postmodern or manipulative (I think that some people mean "manipulative" when they say "postmodern") about any of that, and the potential uses of it extend well beyond simple marketing. In fact, it's a good idea to do those three things anytime you want to communicate effectively. Of course, in practice, all three of those things can be very difficult, particularly when your audience is diverse and/or your message is complicated. And it's important that you go into the process with a good understanding of how people represent information in general, and how they reason. And that's a large part of Lakoff's point, even if he really has no clue how people actually represent information or reason. But at least he's trying.
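The three stages could even be caricatured as a tiny pipeline. This is entirely my own sketch, with invented concepts and a crude overlap score — not anything from Lakoff:

```python
# Toy frame-analysis pipeline. Stage 1 (the audience's existing frame) and
# stage 2 (the communication goals) are given as inputs; stage 3 picks the
# wording whose evoked concepts best serve the goals while still anchoring
# to what the audience already represents. All data here is made up.

def frame_analysis(audience_frame, message_goals, wordings):
    def score(wording):
        evoked = wordings[wording]
        goal_hits = len(evoked & message_goals)          # stage 2: what to make salient
        anchor_hits = len(evoked & set(audience_frame))  # stage 1: connect to existing frames
        return (goal_hits, anchor_hits)
    return max(wordings, key=score)  # stage 3: choose the wording

audience = {"taxes": "burden", "services": "benefit"}
goals = {"shared investment", "services"}
candidates = {
    "tax relief": {"burden", "taxes"},
    "membership fees": {"shared investment", "services", "taxes"},
}
print(frame_analysis(audience, goals, candidates))  # -> membership fees
```

Nothing about the scoring is manipulative in itself; it's just the observation that wording which connects new content to existing representations communicates better than wording which doesn't.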
Sunday, February 12, 2006
Happy Darwin Day, Sort Of
So now we're celebrating Darwin's birthday as Darwin Day, partly because his ideas have been so important, and partly to help us to educate people about evolution (though holding it on a Sunday probably doesn't serve to facilitate that purpose), but mostly, I think, to stick it to creationists (and holding it on Sunday definitely facilitates that purpose). And believe me, I'm all for sticking it to the creationists. I can even show you where on creationists I want to stick it, if you'd like.
But is calling it Darwin Day really the ideal way to stick it to them, or to facilitate evolution education? I mean, haven't biologists and others who are pro-science and know anything whatsoever about contemporary biology been trying to shed the silly label "Darwinists?" Because let's face it, modern biologists aren't really Darwinists, in the same sense that modern physicists aren't Keplerians. Biology has actually produced some advancements in the last 150 years. Wouldn't it be better to call it something like Evolution Day, then? That way, we don't look like we're Darwinists worshipping at the altar of Darwin. We could still hold it on Darwin's birthday, or maybe the closest weekday (the Friday before or the Monday after, giving science teachers a chance to reiterate the importance of evolution to their students). But the name itself would help to convey the fact that we're not celebrating a 150-year-old theory; we're celebrating a very contemporary one, one that continues to, for lack of a better word, evolve, just as any good scientific theory does.
So Happy Evolution Day, everyone, and may you have many more.
Wednesday, February 08, 2006
Moral Psychology III: Social Intuitionism, or The Rise of the Intuitive Lawyers
Last summer, I started a series of posts on moral psychology, but never got to finish, for various reasons. This post represents the third installment in that series, and presents what are to me the most interesting developments in moral psychology. These developments have wide-ranging implications for philosophy, politics, and communication in general, and I hope to get to those in a later post.
As I said at the beginning of the last post on moral psychology, things have been changing in the study of moral psychology. Part of the motivation for the change has been the work of neuroscientists like Antonio Damasio that was described in the first post, which has shown that primarily rationalist theories like those of Kohlberg and his descendants simply aren't sufficient to account for human moral judgments and behavior. For the rationalists, moral behavior is largely the product of deliberative, consciously-available reasoning. For Damasio, you may recall, feelings or emotions are by and large guiding our reasoning, rather than the other way around. Furthermore, they are doing so below the level of what Damasio calls extended consciousness, which is what most of us would simply call consciousness1. There have been other motivations for the change, though. For some, those motivations have to do with the differences in learning and processing involved in the symbolic architectures of old-school cognitive science and connectionist models. For others, the motivations have come from the increasingly popular belief among social psychologists that most social cognition is automatic and unconscious. Whatever the motivations, the results are pretty much the same: shedding the emphasis on conscious moral reasoning, and instead focusing on automatic and unconscious moral judgments. Since I covered the work of Damasio and other neuroscientists in the first post, I'll focus on the connectionists and the social psychologists in this one. In particular, I'll describe the theories of Paul Churchland and his colleagues (most notably, Andy Clark), and the social intuitionist model of Jonathan Haidt, along with related work.
Prototypes and Exemplars vs. Rules
"Stateable rules are not the basis of one's moral character. They are merely its pale and partial reflection at the comparatively impotent level of language." - P.M. Churchland2
That quote pretty much sums up the recent connectionist-inspired position on moral reasoning. There hasn't been a great deal of empirical research or modeling of moral reasoning to date, but the theoretical position has been supported by the work that I'll discuss in the next section. The basic idea is this: we know, the argument goes, from research on categorization and concepts, that categories are not represented as statements and definitions, as once thought, but as similarity-based structures such as prototypes and exemplars3. On this view, categorization is, in essence, pattern recognition, and the use of categories is largely associative (here is a short primer on connectionism to help explain how these processes work). Because of this, using moral principles that are stated in rule form will not work, because the representational material needed to do so is absent.
Instead, these theorists argue, moral reasoning uses a moral state space, within which a wide range of situation types are represented4. Within this state space, those situations will be represented as prototypes, with prototypical features of those situations encoded in the representation. This has the implication that we will recognize, and react to, moral situations by relying on a few prototypical features. Obviously, this is suboptimal: situations that are highly similar to certain prototypes, i.e., situations that have many of the prototypical features of a particular type of moral situation, will activate the moral emotions or intuitions associated with those prototypes, even if those emotions and intuitions are not relevant to the particular situation.
This means that the study of moral psychology should focus not on rule-based reasoning, which insufficiently captures the ways in which we make moral judgments, and may in fact be irrelevant (at least in a description of those judgment processes, though it may be relevant to the ways in which we communicate our moral intuitions). Instead, we should focus on moral intuitions and their associated emotions. This is, in fact, what social psychologists have begun to do, led in particular by the work of Jonathan Haidt.
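As a minimal illustration of the prototype story, here's a toy sketch of my own. The prototypes, features, and emotions are all invented, and real connectionist models use distributed representations rather than feature sets, but the suboptimality the theorists point to comes through: the reaction tracks feature overlap, not the actual situation:

```python
# Toy prototype-based recognition of moral situations: a situation
# activates the intuition tied to whichever prototype it shares the
# most features with. All prototypes and features are invented.

PROTOTYPES = {
    "betrayal": ({"trust", "promise", "broken_promise", "harm"}, "anger"),
    "charity": ({"giver", "recipient", "need", "voluntary"}, "approval"),
}

def react(situation_features):
    """Return the emotion of the most similar prototype (raw feature overlap)."""
    best = max(PROTOTYPES, key=lambda p: len(PROTOTYPES[p][0] & situation_features))
    return PROTOTYPES[best][1]

# Many 'betrayal' features trigger anger by similarity alone, even though
# the 'forgiven' feature suggests anger may not fit this particular case.
print(react({"trust", "promise", "broken_promise", "forgiven"}))  # -> anger
```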
Social Intuitionism
Haidt describes his social intuitionist model (all page numbers will refer to the manuscript in the link) as a dual process model5. In dual process models6 there are, as the name suggests, two types of processes, Process I and Process II, which may run in parallel to each other. Process I is the intuitive process. It is automatic, and requires little effort. Often it occurs below the level of awareness. Process II is effortful, and usually occurs within consciousness. The social intuitionist model posits that in moral judgments, Process I is privileged, while Process II's purpose is largely to supplement and/or justify Process I.
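The division of labor might be caricatured like this. This is my own sketch, not Haidt's formalism, and the disgust trigger is invented; the point is only the ordering of the two processes:

```python
# Toy rendering of the social intuitionist claim: Process I delivers a
# fast verdict; Process II then constructs a justification for whatever
# Process I produced, rather than computing the verdict itself.

def process_one(situation):
    # Fast, automatic, affect-laden: an immediate verdict.
    return "wrong" if "disgust" in situation else "permissible"

def process_two(verdict):
    # Slow, effortful, and here post hoc: it takes the verdict as given
    # and recruits reasons for it after the fact.
    return "It is " + verdict + " because ..."

def moral_judgment(situation):
    verdict = process_one(situation)      # the judgment comes first
    justification = process_two(verdict)  # the reasoning follows
    return verdict, justification

verdict, justification = moral_judgment({"disgust", "no_harm"})
# verdict -> "wrong", even with no harm present; the justification is post hoc
```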
What does it mean for moral judgments to be intuitive? It might help to think in terms of heuristics. This is, in fact, how Cass Sunstein describes them in a recent Behavioral and Brain Sciences target article7. He uses as an example the Asian disease problem, which goes like this (from the article, p. 9 of the linked manuscript):
Another way of thinking about moral intuitions is in terms of an innate "moral grammar," analogous to the innate universal grammar of Chomsky's generative linguistics. This is how John Mikhail describes them in an unpublished paper on research he has conducted. Using famous moral dilemmas such as the trolley problem, he shows in several experiments that children as young as eight years old appear to be using two intuitive moral principles: the prohibition of intentional battery and the principle of double effect. Here are his descriptions of the two (p. 11):
Haidt himself focuses more on the role of emotion. Drawing on the work of Damasio described in the first post (linked above), Haidt argues that moral judgments consist, largely, of pattern recognition of the sort described by connectionist models, combined with the automatic associative activation of affective states. As evidence of the role of emotions in moral judgment, Haidt describes two experiments. In the first, conducted by Batson et al.9, participants were hooked up to machines that they were told would measure their bodily reactions, and listened to stories in which core values (freedom or equality) were violated. As they listened to the stories, they were given false feedback about their physical reactions to the stories. The participants were then asked which of the values (freedom or equality) should be used as the theme for a series of events at their university. Most participants chose the value that they had been told elicited the strongest physical reaction.
In the second experiment Haidt describes, Wheatley and Haidt10 hypnotized "highly hypnotizable" participants and suggested that when they heard one of two target words ("take" or "often") that would ordinarily elicit feelings of disgust, they would feel disgusted. Participants then read six stories, each of which contained one of the two target words, and were asked to rate their disgust level and moral condemnation of the actors in the stories. They found that participants gave higher disgust and moral condemnation ratings for stories containing the word suggested to cause disgust.
These pieces of evidence, along with Damasio's research, lead Haidt to believe that affective reactions are the driving force behind moral evaluations. In his discussion of these reactions, he focuses on what he calls the "moral emotions", though he recognizes that any emotion can affect moral judgment. He describes two features of moral emotions11:
So, there are three ways of thinking about moral intuitions: heuristics, moral grammar, and affect-driven responses to patterns in the environment. All three are automatic, in the sense that we don't purposefully activate them; they occur below the level of awareness, meaning that we may not even be aware of their existence; and they require little effort.
The second type of process, Process II, in Haidt's dual process model is the conscious reasoning that we usually associate with moral judgment. However, unlike traditional moral theories in psychology and philosophy, Haidt believes that moral reasoning is largely post hoc, and meant to justify our moral beliefs and actions that are caused by intuitive moral processes. This is where Haidt's theory might become counterintuitive (pardon the pun) for some of us. While I imagine we've all experienced immediate, gut reactions to moral situations that did not take any deliberation, I doubt that many of us have seen our moral reasoning as strictly post hoc. But Haidt argues this is what it in fact is. As he puts it, we are more like "moral lawyers" than "moral scientists."
Haidt provides a few different lines of evidence for the post hoc nature of moral reasoning. First, he discusses motivations that influence the reasoning process, but which serve more to justify our beliefs than to provide arguments for them. He discusses relatedness motives, which involve the desire for our beliefs and actions to be consistent with our social goals (e.g., agreeing with people we like), and coherence motives, which involve the desire for our beliefs and actions to be consistent with our self-image and with each other (think of cognitive dissonance). These types of motivations influence the sort of information or evidence we will consider when reasoning about our moral judgments. He also points to the literature on the "my-side" bias, which shows that people generally only consider evidence on one side of an issue -- their own side12. The my-side bias is so deeply entrenched that, even when we're evaluating the arguments of people who disagree with us, we'll rate their arguments as better if they only discuss evidence on one side of the issue. In other words, the my-side bias appears to be an integral part of how we think reasoning should work13. It is so persistent, in fact, that when we are searching for evidence for our position, we will often stop after finding a single piece of evidence14.
Haidt also discusses evidence from nonmoral domains showing that people use post hoc reasoning when the processes causing a behavior or belief are automatic and unconscious, as Haidt believes they are in moral judgment. As an illustration, he cites a famous experiment by Nisbett and Schachter15 in which participants in two conditions were given electric shocks. Participants in the first condition received a pill, a placebo, and were told that the pill caused the same symptoms as electric shock. Participants in the second condition received no such pill. Participants in the first condition, amazingly, were able to stand four times as much electric shock as participants in the second. When asked to explain why they could take more shock, the majority of the participants in the pill condition never mentioned the pill, but instead came up with post hoc explanations, such as the idea that they had had experience with electric shock earlier in their lives and had thus built up a tolerance. In other words, they weren't aware of the reason for their increased tolerance, so they made reasons up.
When we combine the evidence for the role of intuition, particularly intuitive affective reactions, in moral judgment, and the evidence for the frequently post hoc functioning of conscious reasoning, Haidt believes we have strong evidence for his dual process theory of moral judgment. He then goes on to discuss the possible origins of moral intuitions. He cites de Waal's research16 on what Haidt calls "primate proto-morality." de Waal argues that nonhuman primates are the only nonhuman species who show evidence of "prescriptive" social rules, and Haidt believes that the affective and cognitive mechanisms that underlie these rules, combined with our superior communication abilities, also underlie humans' more sophisticated moral behavior. This is not to say that human and nonhuman primate moral systems are at all equivalent. Haidt writes (p. 18):
That, then, in a pretty long nutshell, is Haidt's social intuitionist model. If you've read the post on the rationalist theories in moral psychology, you'll quickly see how starkly this model contrasts with them. And the implications of this sort of intuitionist moral theory are quite different from those of rationalist theories. For example, while Haidt emphasizes the social aspect and third-person focus of intuitive moral beliefs, it is likely that automatic, unconscious moral processes will tend to be more egocentric than someone like Kohlberg believed higher-stage moral reasoning would be. Epley and Caruso19, for example, have argued that since taking a first-person perspective requires far fewer resources than taking a third-person perspective, most of our automatic judgment processes will be egocentric. It is only when we have the time and cognitive resources to reevaluate our egocentric reactions to moral situations that we can think in more other-focused terms. Another implication is that moral communication will be designed not so much to convince people through reasoning, or even to provide them with arguments for their positions, but to pass to others representations that will induce certain kinds of moral reactions. If this is the case, then people like George Lakoff, in his political writings, are moving in the right direction in analyzing discourse not in order to be able to argue better, but in order to be able to utilize and create certain representations in listeners and readers.
There are, of course, many more implications of this sort of perspective on moral psychology, but since this post is already reaching book length, I think I'll save a discussion of those for future posts. If you find these issues interesting, though, feel free to check out the references below (if I haven't provided a link to a paper, write me and I will try to get you a copy), or ask me about some of the other people doing theoretical and empirical work along these lines. Moral psychology has become a hot topic in cognitive science and social cognition, so there is no shortage of good reading material.
1In The Feeling of What Happens, Damasio describes two kinds of consciousness. The first, core consciousness, is more primitive, and we share it with many nonhuman animals. It is awareness of our bodies and surroundings, and is free of all of the trappings of complex concepts of self, history, and effortful reasoning. The second is extended consciousness, which is where we experience a sense of self, autobiographical memories, language, and so on.
2Churchland, P. M. (1996). The neural representation of the social world. In L. May, M. Friedman, & A. Clark (Eds.), Mind and Morals. MIT Press, 91-108.
3Recent research has, of course, challenged the strict similarity-based views of concepts, but connectionists still treat concepts as prototypes or exemplars almost exclusively.
4Clark, A. (1996). Connectionism, moral cognition, and collaborative problem solving. In L. May, M. Friedman, & A. Clark (Eds.), Mind and Morals. MIT Press, 109-127.
5Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 108, 814-834.
6Kahneman, D., & Frederick, S. (2002) Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press.
7Sunstein, C.R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28(4), 531-542.
8Tversky, A. and Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. Quarterly Journal of Economics, 106(4), 1039-61.
9Batson, C. D., Engel, C. L., & Fridell, S. R. (1999). Value judgments: Testing the somatic-marker hypothesis using false physiological feedback. Personality and Social Psychology Bulletin, 25, 1021-1032.
10Wheatley, T., & Haidt, J. (2005). Hypnotically induced disgust makes moral judgments more severe. Psychological Science, 16, 780-784.
11Haidt, J. (2003). The moral emotions. In R.J. Davidson, K.R. Scherer, & H.H. Goldsmith (Eds.), Handbook of Affective Sciences. Oxford: Oxford University Press, 852-870.
12E.g., Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221-235.
13Ibid.
14Perkins, D. N., Allen, R., & Hafner, J. (1983). Difficulties in everyday reasoning. In W. Maxwell (Ed.), Thinking: The Frontier Expands. Hillsdale, NJ: Erlbaum, 177-189.
15Nisbett, R. E., & Schachter, S. (1966). Cognitive manipulation of pain. Journal of Experimental Social Psychology, 2, 227-236.
16de Waal, F. (1996). Good Natured: The origins of right and wrong in humans and other animals, Cambridge, Mass: Harvard University Press.
17Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The "big three" of morality (autonomy, community, and divinity), and the "big three" explanations of suffering. In A. Brandt & P. Rozin (Eds.), Morality and Health, New York: Routledge, 119-169.
18Pizarro, D.A., & Bloom, P. (2003). The intelligence of moral intuitions: Comment on Haidt. Psychological Review, 110(1), 193-196.
19Epley, N., & Caruso, E. M. (2004). Egocentric ethics. Social Justice Research, 17(2), 171-187.
"Stateable rules are not the basis of one's moral character. They are merely its pale and partial reflection at the comparatively impotent level of language." - P.M. Churchland2

That quote pretty much sums up the recent connectionist-inspired position on moral reasoning. There hasn't been a great deal of empirical research or modeling of moral reasoning to date, but the theoretical position has been supported by the work that I'll discuss in the next section. The basic idea is this: we know, the argument goes, from research on categorization and concepts, that categories are not represented as statements and definitions, as once thought, but as similarity-based structures such as prototypes and exemplars3. On this view, categorization is, in essence, pattern recognition, and the use of categories is largely associative (here is a short primer on connectionism to help explain how these processes work). Because of this, using moral principles that are stated in rule form will not work, because the representational material needed to do so is absent.
Instead, these theorists argue, moral reasoning uses a moral state space, within which a wide range of situation types are represented4. Within this state space, those situations will be represented as prototypes, with prototypical features of those situations encoded in the representation. This has the implication that we will recognize, and react to, moral situations by relying on a few prototypical features. Obviously, this is sub-optimal, in that situations that are highly similar to certain prototypes, i.e., situations that have many of the prototypical features of a particular type of moral situation, will activate moral emotions or intuitions associated with those situations, even if those emotions and intuitions are not relevant for the particular situation.
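To make the prototype idea concrete, here's a minimal sketch of categorization as nearest-prototype pattern matching in a toy moral state space. This is my own illustration, not Churchland's or Clark's actual model, and the features and situation types are invented for the example:

```python
# Toy sketch: moral recognition as nearest-prototype matching, not rule
# application. Situations are points in a feature space; the closest stored
# prototype determines the reaction. Features and prototypes are invented
# for illustration: [harm, intent, vulnerability_of_victim].
PROTOTYPES = {
    "cruelty":   ([1.0, 1.0, 1.0], "outrage"),
    "accident":  ([1.0, 0.0, 0.5], "sympathy"),
    "fair_play": ([0.0, 1.0, 0.0], "approval"),
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def react(situation):
    """Return the situation type and emotion tied to the nearest prototype."""
    label, (_, emotion) = min(
        PROTOTYPES.items(),
        key=lambda kv: distance(situation, kv[1][0]),
    )
    return label, emotion

# A situation sharing cruelty's prototypical features (intentional harm to a
# vulnerable victim) triggers outrage, whether or not that reaction is
# actually appropriate here -- e.g., a surgeon operating on a patient:
print(react([1.0, 1.0, 1.0]))
```

The point of the sketch is that nothing rule-like is consulted anywhere: the reaction falls out of similarity to stored patterns, which is exactly why near-matches to a prototype can trigger emotions that don't fit the particular situation.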
This means that the study of moral psychology should focus not on rule-based reasoning, which insufficiently captures the ways in which we make moral judgments, and may in fact be irrelevant (at least in a description of those judgment processes, though it may be relevant to the ways in which we communicate our moral intuitions). Instead, we should focus on moral intuitions and their associated emotions. This is, in fact, what social psychologists have begun to do, led in particular by the work of Jonathan Haidt.
Social Intuitionism
Haidt describes his social intuitionist model (all page numbers will refer to the manuscript in the link) as a dual process model5. In dual process models6 there are, as the name suggests, two types of processes, Process I and Process II, which may run in parallel to each other. Process I is the intuitive process. It is automatic, and requires little effort. Often it occurs below the level of awareness. Process II is effortful, and usually occurs within consciousness. The social intuitionist model posits that in moral judgments, Process I is privileged, while Process II's purpose is largely to supplement and/or justify Process I.
What does it mean for moral judgments to be intuitive? It might help to think in terms of heuristics. This is, in fact, how Cass Sunstein describes them in a recent Behavioral and Brain Sciences target article7. He uses as an example the Asian disease problem, which goes like this (from the article, p. 9 of the linked manuscript):
Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences are as follows:
- If Program A is adopted, 200 people will be saved.
- If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
Which of the two programs would you favor?

When given these two choices, people almost always pick Program A. However, when given the following two choices (p. 9-10):
- If Program C is adopted, 400 people will die.
- If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.
People usually choose Program D. Look closely at the four Programs. You should notice that the difference between Program A and Program C is only in the wording, as is the difference between Program B and Program D. Why is it that when the exact same situation is worded one way (Program A), people prefer it to an alternative (Program B), while when it is worded another way (Program C), they don't prefer it to the same alternative worded differently (Program D)? Those of you familiar with the work of Kahneman and Tversky will immediately recognize this as an issue of framing. Changing the way you frame the options changes the heuristics we use to make decisions. In this case, choosing A but not C, and D but not B, reflects Kahneman and Tversky's finding that people are risk averse when outcomes are framed as gains and risk seeking when they are framed as losses8.
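It's worth checking the arithmetic: in terms of expected outcomes, the four programs collapse into two. A quick calculation, using just the numbers from the problem, makes this explicit:

```python
# Expected number of deaths (out of 600) under each program. A/C are the
# same certain outcome; B/D are the same gamble; all four have the same
# expected value. The frames differ, not the options.
programs = {
    "A": 600 - 200,                 # 200 saved for certain -> 400 die
    "B": (1/3) * 0 + (2/3) * 600,   # 1/3 chance all saved, 2/3 chance none
    "C": 400,                       # 400 die for certain
    "D": (1/3) * 0 + (2/3) * 600,   # 1/3 chance nobody dies
}
for name, expected_deaths in programs.items():
    print(name, expected_deaths)
```

So any preference between the pairs can only come from how the identical outcomes are described.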
Another way of thinking about moral intuitions is in terms of an innate "moral grammar," analogous to the innate universal grammar of Chomsky's generative linguistics. This is how John Mikhail describes them in an unpublished paper on research he has conducted. Using famous moral dilemmas such as the trolley problem, he shows in several experiments that children as young as eight years old appear to be using two intuitive moral principles: the prohibition of intentional battery and the principle of double effect. Here are his descriptions of the two (p. 11):
The former is a familiar principle of both common morality and the common law proscribing acts of unpermitted, unprivileged bodily contact, that is, of touching without consent. The latter is a complex principle of justification, narrower in scope than the traditional necessity or "choice of evils" defense, which in its standard formulation holds that an otherwise prohibited action may be permissible if the act itself is not wrong, the good but not the bad effects are intended, the good effects outweigh the bad effects, and no morally preferable alternative is available.

He shows, as others have, that, as in the Asian disease problem, people's responses to these sorts of dilemmas are not predictable from traditional moral rules, and that their justifications for their choices are not consistent with their actual choices across several problems. Instead, he believes, the responses demonstrate the existence of the two moral principles above. He argues that this implies the principles are intuitive, and furthermore, because they are not taught (since we're not actually aware of them) and appear at a young age, they may be innate. This is, in essence, a poverty of the stimulus argument for moral judgment.
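Since the standard formulation of the principle of double effect has four explicit conditions, it can be written down as a simple predicate. This is my own toy encoding, not Mikhail's formalization, and the condition names are my labels:

```python
# Toy encoding (mine, not Mikhail's) of the standard formulation of the
# principle of double effect quoted above: an otherwise prohibited action
# may be permissible if all four conditions hold.
def permissible_by_double_effect(act_itself_wrong: bool,
                                 bad_effect_intended: bool,
                                 good_outweighs_bad: bool,
                                 better_alternative_exists: bool) -> bool:
    return (not act_itself_wrong
            and not bad_effect_intended
            and good_outweighs_bad
            and not better_alternative_exists)

# Bystander trolley case: diverting the trolley is not wrong in itself, the
# one death is foreseen but not intended, five lives outweigh one, and there
# is no better option, so the principle permits it:
print(permissible_by_double_effect(False, False, True, False))

# Footbridge case: pushing the man uses his death as the means, so the bad
# effect is intended and the principle forbids it:
print(permissible_by_double_effect(False, True, True, False))
```

Of course, on the intuitionist view people compute something like this unconsciously; the stateable, conscious version is only the "pale and partial reflection" of the underlying process.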
Haidt himself focuses more on the role of emotion. Drawing on the work of Damasio described in the first post (linked above), Haidt argues that moral judgments consist, largely, of pattern recognition of the sort described by connectionist models, combined with the automatic associative activation of affective states. As evidence of the role of emotions in moral judgment, Haidt describes two experiments. In the first, conducted by Batson et al.9, participants were hooked up to machines that they were told would measure their bodily reactions, and listened to stories in which core values (freedom or equality) were violated. As they listened to the stories, they were given false feedback about their physical reactions to the stories. The participants were then asked which of the values (freedom or equality) should be used as the theme for a series of events at their university. Most participants chose the value that they had been told elicited the strongest physical reaction.
In the second experiment Haidt describes, Wheatley and Haidt10 hypnotized "highly hypnotizable" participants and suggested that they would feel a pang of disgust whenever they heard one of two otherwise neutral target words ("take" or "often"). Participants then read six stories, each of which contained one of the two target words, and were asked to rate their disgust level and moral condemnation of the actors in the stories. They found that participants gave higher disgust and moral condemnation ratings for stories containing the word suggested to cause disgust.
These pieces of evidence, along with Damasio's research, lead Haidt to believe that affective reactions are the driving force behind moral evaluations. In his discussion of these reactions, he focuses on what he calls the "moral emotions," though he recognizes that any emotion can affect moral judgment. He describes two features of moral emotions11:
- Disinterested elicitors: Moral emotions can be and frequently are triggered "even when the self has no stake in the triggering event" (p. 853).
- Prosocial action tendencies: Moral emotions elicit behaviors that "benefit others or else uphold or benefit the social order" (p. 854).
So, there are three ways of thinking about moral intuitions: heuristics, moral grammar, and affect-driven responses to patterns in the environment. All three are automatic, in the sense that we don't purposefully activate them; they occur below the level of awareness, meaning that we may not even be aware of their existence; and they require little effort.
The second type of process, Process II, in Haidt's dual process model is the conscious reasoning that we usually associate with moral judgment. However, unlike traditional moral theories in psychology and philosophy, Haidt believes that moral reasoning is largely post hoc, and meant to justify our moral beliefs and actions that are caused by intuitive moral processes. This is where Haidt's theory might become counterintuitive (pardon the pun) for some of us. While I imagine we've all experienced immediate, gut reactions to moral situations that did not take any deliberation, I doubt that many of us have seen our moral reasoning as strictly post hoc. But Haidt argues this is what it in fact is. As he puts it, we are more like "moral lawyers" than "moral scientists."
Haidt provides a few different lines of evidence for the post hoc nature of moral reasoning. First he discusses motivations that influence the reasoning process, but which serve more to justify our beliefs than to provide arguments for them. He discusses relatedness motives, which involve the desire for our beliefs and actions to be consistent with our social goals (e.g., agreeing with people we like), and coherence motives, which involve the desire for our beliefs and actions to be consistent with our self-image and with each other (think of cognitive dissonance). These types of motivations influence the sort of information or evidence we will consider when reasoning about our moral judgments. He also points to the literature on the "my-side" bias, which shows that people generally only consider evidence on one side of an issue -- their own side12. The my-side bias is so deeply entrenched that, even when we're evaluating the arguments of people who disagree with us, we'll rate them as better arguments if they only discuss evidence on one side of the issue. In other words, the my-side bias appears to be an integral part of how we think reasoning should be13. It is so persistent, in fact, that when we are searching for evidence for our position, we will often stop after finding a single piece of evidence14.
Haidt also discusses evidence from nonmoral domains showing that people use post hoc reasoning when the processes causing a behavior or belief are automatic and unconscious, as Haidt believes they are in moral judgment. He cites as an illustration a famous experiment by Nisbett and Schachter15 in which participants in two conditions were given electric shocks. Participants in the first condition received a pill, a placebo, and were told that the pill caused the same symptoms as electric shock. Participants in the second condition received no such pill. Participants in the first condition, amazingly, were able to stand four times as much electric shock as the participants in the second condition. When asked to explain why they could take more shock, the majority of the participants in the pill condition never mentioned the pill, but instead came up with post hoc explanations, such as the idea that they had had experience with electric shock at some point earlier in their lives and had thus built up a tolerance. In other words, they weren't aware of the reason for their increased tolerance, so they made reasons up.
When we combine the evidence for the role of intuition, particularly intuitive affective reactions, in moral judgment with the evidence for the frequently post hoc functioning of conscious reasoning, Haidt believes we have strong evidence for his dual process theory of moral judgment. He then goes on to discuss the possible origins of moral intuitions. He cites de Waal's research16 on what Haidt calls "primate proto-morality." de Waal argues that nonhuman primates are the only nonhuman species that show evidence of "prescriptive" social rules, and Haidt believes that the affective and cognitive mechanisms that underlie these rules, combined with our superior communication abilities, also underlie humans' more sophisticated moral behavior. This is not to say that human and nonhuman primate moral systems are at all equivalent. Haidt writes (p. 18):
The above considerations are not meant to imply that chimpanzees have morality, nor that humans are just chimps with post-hoc reasoning skills. There is indeed a moral Rubicon that only Homo Sapiens appears to have crossed: widespread third party norm enforcement. Chimpanzee norms generally work at the level of private relationships, where the individual that has been harmed is the one that takes punitive action. Yet human societies are marked by a constant and vigorous discussion of norms and norm violators, and by a willingness to expend individual or community resources to inflict punishment, even by those who were not harmed by the violator.

Thus, the difference between human and nonhuman primate prescriptive social rules and behaviors appears to lie in the social and communicative aspects of our moral beliefs and behaviors. This leads Haidt to the "social" aspects of the social intuitionist model. In his model, cultural influences pick and choose from our innate moral repertoire. He writes (p. 19):
Shweder's theory of the "big three" moral ethics17 proposes that moral "goods" (i.e., culturally shared beliefs about what is morally good and valuable) generally cluster into three complexes, or ethics, which cultures embrace to varying degrees: the ethic of autonomy (focusing on goods that protect the autonomous individual, such as rights, freedom of choice, and personal welfare); the ethic of community (focusing on goods that protect families, nations, and other collectivities, such as loyalty, duty, honor, respectfulness, modesty, and self-control); and the ethic of divinity (focusing on goods that protect the spiritual self, such as piety, and physical and mental purity). A child is born prepared to develop moral intuitions in all three ethics, but her local cultural environment generally stresses only one or two of the ethics.

In addition to large-scale cultural influences on moral development through socialization, Haidt notes that it is the social aspect of moral belief and behavior that often motivates the post hoc reasoning processes in moral judgment. In many instances, the only time we will feel the need to provide arguments and evidence for our moral beliefs is when others have challenged them. Consciously reasoning about them not only provides us with justifications for ourselves, but also with the material with which to communicate those justifications to others. Finally, as Pizarro and Bloom have noted in their commentary18 on Haidt's social intuitionist theory, taking the perspective of others, an ability that is likely uniquely human, can also influence our moral beliefs. Perspective-taking essentially provides us with new schemas and categories on which our intuitive judgment processes can operate. So, there are multiple avenues through which social interactions can influence our moral beliefs.
That, then, in a pretty long nutshell, is Haidt's social intuitionist model. If you've read the post on the rationalist theories in moral psychology, you'll quickly see how starkly this model contrasts with them. And the implications of this sort of intuitionist moral theory are quite different from those of rationalist theories. For example, while Haidt emphasizes the social aspect and third-person focus of intuitive moral beliefs, it is likely that automatic, unconscious moral processes will tend to be more egocentric than someone like Kohlberg believed higher-stage moral reasoning would be. Epley and Caruso19, for example, have argued that since taking a first-person perspective requires far fewer resources than taking a third-person perspective, most of our automatic judgment processes will be egocentric. It is only when we have the time and cognitive resources to reevaluate our egocentric reactions to moral situations that we can think in more other-focused terms. Another implication is that moral communication will be designed not so much to convince people through reasoning, or even to provide them with arguments for their positions, but to pass to others representations that will induce certain kinds of moral reactions. If this is the case, then people like George Lakoff, in his political writings, are moving in the right direction in analyzing discourse not in order to be able to argue better, but in order to be able to utilize and create certain representations in listeners and readers.
There are, of course, many more implications of this sort of perspective on moral psychology, but since this post is already reaching book length, I think I'll save a discussion of those for future posts. If you find these issues interesting, though, feel free to check out the references below (if I haven't provided a link to a paper, write me and I will try to get you a copy), or ask me about some of the other people doing theoretical and empirical work along these lines. Moral psychology has become a hot topic in cognitive science and social cognition, so there is no shortage of good reading material.
1In The Feeling of What Happens, Damasio describes two kinds of consciousness. The first, core consciousness, is more primitive, and we share it with many nonhuman animals. It is awareness of our bodies and surroundings, and is free of all of the trappings of complex concepts of self, history, and effortful reasoning. The second is extended consciousness, which is where we experience a sense of self, autobiographical memories, language, and so on.
2Churchland, P. M. (1996). The neural representation of the social world. In L. May, M. Friedman, & A. Clark (Eds.), Mind and Morals. MIT Press, 91-108.
3Recent research has, of course, challenged the strict similarity-based views of concepts, but connectionists still treat concepts as prototypes or exemplars almost exclusively.
4Clark, A. (1996). Connectionism, moral cognition, and collaborative problem solving. In L. May, M. Friedman, & A. Clark (Eds.), Mind and Morals. MIT Press, 109-127.
5Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814-834.
6Kahneman, D., & Frederick, S. (2002) Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press.
7Sunstein, C.R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28(4), 531-542.
8Tversky, A. and Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. Quarterly Journal of Economics, 106(4), 1039-61.
9Batson, C. D., Engel, C. L., & Fridell, S. R. (1999). Value judgments: Testing the somatic-marker hypothesis using false physiological feedback. Personality and Social Psychology Bulletin, 25, 1021-1032.
10Wheatley, T., & Haidt, J. (2005). Hypnotically induced disgust makes moral judgments more severe. Psychological Science, 16, 780-784.
11Haidt, J. (2003). The moral emotions. In R.J. Davidson, K.R. Scherer, & H.H. Goldsmith (Eds.), Handbook of Affective Sciences. Oxford: Oxford University Press, 852-870.
12E.g., Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221-235.
13Ibid.
14Perkins, D. N., Allen, R., & Hafner, J. (1983). Difficulties in everyday reasoning. In W. Maxwell (Ed.), Thinking: The Frontier Expands. Hillsdale, NJ: Erlbaum, 177-189.
15Nisbett, R. E., & Schachter, S. (1966). Cognitive manipulation of pain. Journal of Experimental Social Psychology, 2, 227-236.
16de Waal, F. (1996). Good Natured: The origins of right and wrong in humans and other animals, Cambridge, Mass: Harvard University Press.
17Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The "big three" of morality (autonomy, community, and divinity), and the "big three" explanations of suffering. In A. Brandt & P. Rozin (Eds.), Morality and Health, New York: Routledge, 119-169.
18Pizarro, D.A., & Bloom, P. (2003). The intelligence of moral intuitions: Comment on Haidt. Psychological Review, 110(1), 193-196.
19Epley, N., & Caruso, E. M. (2004). Egocentric ethics. Social Justice Research, 17(2), 171-187.
Tuesday, February 07, 2006
Innate Grammatical Categories Evident in Home Sign?
A paper from a late-2005 issue of PNAS argues that the presence of the grammatical category of subject in the "home sign" of deaf Nicaraguans (sign systems used in the home, not based on any official sign language, and generally developed with no other linguistic input) is evidence that the grammatical subject category is innate. Here is the paper's abstract:
Language ordinarily emerges in young children as a consequence of both linguistic experience (for example, exposure to a spoken or signed language) and innate abilities (for example, the ability to acquire certain types of language patterns). One way to discern which aspects of language acquisition are controlled by experience and which arise from innate factors is to remove or manipulate linguistic input. However, experimental manipulations that involve depriving a child of language input are impossible. The present work examines the communication systems resulting from natural situations of language deprivation and thus explores the inherent tendency of humans to build communication systems of particular kinds, without any conventional linguistic input. We examined the gesture systems that three isolated deaf Nicaraguans (ages 14–23 years) have developed for use with their hearing families. These deaf individuals have had no contact with any conventional language, spoken or signed. To communicate with their families, they have each developed a gestural communication system within the home called "home sign." Our analysis focused on whether these systems show evidence of the grammatical category of Subject. Subjects are widely considered to be universal to human languages. Using specially designed elicitation tasks, we show that home signers also demonstrate the universal characteristics of Subjects in their gesture productions, despite the fact that their communicative systems have developed without exposure to a conventional language. These findings indicate that abstract linguistic structure, particularly the grammatical category of Subject, can emerge in the gestural modality without linguistic input.

After reading the paper, I couldn't help but feel that while the data is interesting, it doesn't speak to the issue of innateness.
Because the home sign systems were developed in collaboration with family members, it is entirely possible that those family members naturally include at least some of their grammatical categories in their signing, and thus that their deaf family members picked it up from them. Maybe I'm missing some aspect of the data, however, not being a linguist. If you're interested in language acquisition, check out the paper, and if you see something that argues against my interpretation, let me know.
Friday, February 03, 2006
Speaking of Numbers
Chris Chatham of Developing Intelligence reminded me, in comments to the post on categorizing fish, of this paper published a little over a year ago in Science. It's an interesting description of what can best be described as preliminary or pilot research on the relationship between number words and number concepts in a culture with only three number words: one, two, and many. Here's a description of the population from the paper (p. 469):
The author of the paper, Peter Gordon, visited the Pirahã three times, and after becoming interested in the question of how their verbal number system might affect their numerical concepts, conducted several pseudo-experiments (no, that's not a derogatory term, it just means that they weren't proper experiments from which we can infer causation) designed to explore Pirahã counting abilities. Several of the studies involved presenting the Pirahã participants with an array or cluster of familiar objects, and asking them to produce an array with the same number of objects. In another study, the participants were presented with line drawings and asked to draw the same number of lines (a task that was apparently very difficult for the Pirahã participants, because drawing is something they never do). In a third type of study, participants watched as the experimenter put nuts into a can, and then removed them, and were asked to report when they thought the can was empty. For each type of task, the Pirahã participants' performance dropped significantly for numerosities greater than 2 or 3 (up to about 10).
Gordon argues that these studies provide at least preliminary evidence for a strong version of the Sapir-Whorfy hypothesis. This strong version is generally called linguistic determinism (as opposed to the weaker version, linguistic relativity). He believes that the Pirahã are able to count small numbers (less than 3), perhaps by subitizing, and that they can also use analogue representations (e.g., lines whose length represent a quantity) for larger numbers (techniques which are inherently less accurate). These are both skills that appear to be present in all cultures, and even in some nonhuman animals. However, he believes that the evidence from his studies indicate that the Pirahã are not capable of more sophisticated counting techniques for numbers above three, and that this is due to the lack of number terms in their language.
There are several problems with these studies. The most obvious is that Gordon makes a causal inference from pseudo-experiments, which is a methodological no-no. There are no control conditions, no comparisons with similar populations, and no testing of different causal explanations. As Daniel Casasanto points out in a letter to Science in response to Gordon's paper, one could predict Gordon's data with either the linguistic determinism hypothesis or with its exact opposite. It could be that the environment and cultural practices of the Pirahã make the learning of number concepts over and above numerosities of 2 or 3 unnecessary, and that this in turn has led to the absence of terms related to these concepts in the Pirahã language. Gordon replies (in the same pdf file, just read to the end of Casasanto's letter) that he's not actually making a firm statement of cause, but when someone argues for "linguistic determinism," it's hard to interpret it any other way.
Speaking directly to these issues is another paper that appeared in the same issue of Science as Gordon's paper. That paper presents experiments (real experiments, this time, though they’re very limited) conducted with speakers of the Mundurukú language, who also live in the rainforests of Brazil. Here's a description of the speakers and their language from the paper (p. 500):
In several tasks requiring Mundurukú speakers to compare the numerosity of large arrays of dots (20 to 80), they performed well but slightly worse than a French-speaking control group. They also performed as well as the French-speaking participants on an arithmetic task that required only approximating numerosities. However, when they were required to perform an arithmetic task that required giving exact answers (a subtraction task), they performed significantly worse than the French controls, particularly as numerosities increased.
The authors of this paper argue that their results provide evidence against the strong determinism view advocated by Gordon. Instead, they write:
The picture I'm trying to paint with these two sets of studies is just how messy research on language and thought really is. Even the studies with the Mundurukú speakers, though they are proper experiments involving experimental and control groups, are at best preliminary explorations into the numerical knowledge of Mundurukú speakers, and the causes of differences between their numerical abilities and those of speakers of languages with larger sets of number terms. Figuring out whether language, as opposed to other cultural and/or environmental variables are responsible for differences in cognition is damn near impossible to do with any certainty. But the research is fun anyway.
The Pirahã... live along the banks of the Maici River in the Lowland Amazonia region of Brazil. They maintain a hunter-gatherer existence and reject assimilation into mainstream Brazilian culture. Almost completely monolingual in their own language, they have a population of less than 200 living in small villages of 10 to 20 people. They have only limited exchanges with outsiders, using primitive pidgin systems for communicating in trading goods without monetary exchange. The Pirahã counting system consists of the words hói (falling tone = ‘one’) and hoí (rising tone = ‘two’). Larger quantities are designated as baagi or aibai (= ‘many’).Perhaps more interesting than the lack of number terms for amounts greater than 2, the Pirahã use their word for one, "hói," to refer to any small quantity, sort of like English speakers use "a couple" to mean something in the neighborhood of two.
The author of the paper, Peter Gordon, visited the Pirahã three times, and after becoming interested in the question of how their verbal number system might affect their numerical concepts, conducted several pseudo-experiments (no, that's not a derogatory term, it just means that they weren't proper experiments from which we can infer causation) designed to explore Pirahã counting abilities. Several of the studies involved presenting the Pirahã participants with an array or cluster of familiar objects, and asking them to produce an array with the same number of objects. In another study, the participants were presented with line drawings and asked to draw the same number of lines (a task that was apparently very difficult for the Pirahã participants, because drawing is something they never do). In a third type of study, participants watched as the experimenter put nuts into a can, and then removed them, and were asked to report when they thought the can was empty. For each type of task, the Pirahã participants' performance dropped significantly for numerosities greater than 2 or 3 (up to about 10).
Gordon argues that these studies provide at least preliminary evidence for a strong version of the Sapir-Whorfy hypothesis. This strong version is generally called linguistic determinism (as opposed to the weaker version, linguistic relativity). He believes that the Pirahã are able to count small numbers (less than 3), perhaps by subitizing, and that they can also use analogue representations (e.g., lines whose length represent a quantity) for larger numbers (techniques which are inherently less accurate). These are both skills that appear to be present in all cultures, and even in some nonhuman animals. However, he believes that the evidence from his studies indicate that the Pirahã are not capable of more sophisticated counting techniques for numbers above three, and that this is due to the lack of number terms in their language.
There are several problems with these studies. The most obvious is that Gordon makes a causal inference from pseudo-experiments, which is a methodological no-no. There are no control conditions, no comparisons with similar populations, and no testing of different causal explanations. As Daniel Casasanto points out in a letter to Science in response to Gordon's paper, one could predict Gordon's data with either the linguistic determinism hypothesis or with its exact opposite. It could be that the environment and cultural practices of the Pirahã make the learning of number concepts over and above numerosities of 2 or 3 unnecessary, and that this in turn has led to the absence of terms related to these concepts in the Pirahã language. Gordon replies (in the same pdf file, just read to the end of Casasanto's letter) that he's not actually making a firm statement of cause, but when someone argues for "linguistic determinism," it's hard to interpret it any other way.
Speaking directly to these issues is another paper that appeared in the same issue of Science as Gordon's paper. That paper presents experiments (real experiments, this time, though they’re very limited) conducted with speakers of the Mundurukú language, who also live in the rainforests of Brazil. Here's a description of the speakers and their language from the paper (p. 500):
Here, we studied numerical cognition in native speakers of Mundurukú, a language that has number words only for the numbers 1 through 5. Mundurukú is a language of the Tupi family, spoken by about 7000 people living in an autonomous territory in the Pará state of Brazil.

So, the Mundurukú language has more number terms than the Pirahã, but is still much more limited in its number terms than Western languages. Furthermore, the Mundurukú do not use numbers in exact ways, or in counting, but instead use them to approximate numerosities.
In several tasks requiring Mundurukú speakers to compare the numerosity of large arrays of dots (20 to 80), they performed well but slightly worse than a French-speaking control group. They also performed as well as the French-speaking participants on an arithmetic task that required only approximating numerosities. However, when they were required to perform an arithmetic task that required giving exact answers (a subtraction task), they performed significantly worse than the French controls, particularly as numerosities increased.
The authors of this paper argue that their results provide evidence against the strong determinism view advocated by Gordon. Instead, they write:
What the Mundurukú appear to lack, however, is a procedure for fast apprehension of exact numbers beyond 3 or 4.

In other words, the lack of a counting system, not the lack of numerical terms, appears to be the primary cause of the difference between Mundurukú and French speakers' performance on the subtraction task. Since the Pirahã don't have procedures for counting, either, this may underlie their difficulty with large numbers as well.
The picture I'm trying to paint with these two sets of studies is just how messy research on language and thought really is. Even the studies with the Mundurukú speakers, though they are proper experiments involving experimental and control groups, are at best preliminary explorations into the numerical knowledge of Mundurukú speakers, and the causes of differences between their numerical abilities and those of speakers of languages with larger sets of number terms. Figuring out whether language, as opposed to other cultural and/or environmental variables, is responsible for differences in cognition is damn near impossible to do with any certainty. But the research is fun anyway.
Thursday, February 02, 2006
100,000
I didn't mention it the other day, but Mixing Memory received its 100,000th visitor earlier this week. So I thought I'd take this opportunity to say thanks to everyone who's visited, especially those of you who visit regularly, and extra especially to those who comment.
As always, if there's something on which you'd like to see me post, let me know.
Wednesday, February 01, 2006
Cultural Differences In Cognition: The Case of Fish
A few times a week, I get an email from a blog reader asking me a cognitive science question. They ask about a lot of different topics, but there are a few topics on which I get questions regularly. Two of the most frequent are Evolutionary Psychology and Lakoff, and I can only blame myself for that. A third frequent question topic, however, is something that I haven't discussed much, so it must be a topic in which a lot of people are really interested: cultural differences in cognition. It's always difficult answering questions about cultural differences, because there are a lot of different areas of research on cultural differences, and at the same time, there's not a lot of research on cultural differences (meaning that people are studying differences in a lot of different areas; they just haven't gotten very far in doing so). Many of the questions are inspired by The Geography of Thought, and thus are about analytical vs. holistic/dialectical thinking. I've talked about that a little in the past. Others, though surprisingly few (having been around enough students, I know this is something a lot of people wonder about), are about linguistic relativity and the Sapir-Whorf hypothesis. I've talked about that a little, too. But no one ever asks about what is, to me, the most interesting research on cultural differences, work on cultural differences in categorization. That tells me that most people don't really know about that research, and that means I need to write a post about it. Or at least, that gives me an excuse to write one. And it just so happens that I read a paper on the topic earlier this week that provides a nice look at that research. So, here's a post about it.
First a little background. While research on cultural differences in cognition is very sexy these days, and is in fact older than cognitive science itself (e.g., the work on memory by Bartlett), for about the last 30 years, cognitive psychology, and the study of concepts and categories in particular, has been dominated by objectivism. In the study of concepts and categories, this view was first articulated by Eleanor Rosch1, who argued that our categories "carve the world at its joints," or more technically, "that correlated features or properties create natural 'chunks' or basic level categories that any well-adapted categorization system must acknowledge or exploit"2. The implication of this idea is that everyone, regardless of their cultural background, will have pretty much the same categories, because they're exposed to pretty much the same correlational structure of features in the world. And to a large extent, empirical research has borne this idea out, especially for natural categories like animals and plants3, but even for some artifacts4.
Research over the last decade or so has begun to challenge the objectivist view of concepts. Central to the objectivist view is the idea of a "basic level," the level of categorization at which within-category similarity is high while between-category similarity is low. Examples of basic level categories include BIRD, CAR, CHAIR, and FISH. This is the level that is supposed to be the most constrained by the world's joints; other levels of categorization might admit some cultural variability, under the objectivist view. But in the early 90s, researchers produced evidence that the basic level shifts with expertise. When people are experts in a particular domain, they tend to treat subordinate level categories (ROBIN, VW BEETLE, DESK CHAIR, GROUPER; categories in which within-category similarity is higher than at the basic level, but between-category similarity is higher as well) like basic level categories. This means that how we perceive the world's correlational structure depends, to some extent, on our knowledge. Along the same lines, research on theory-based theories of concepts argues that things like knowledge of causal relations, in addition to correlational structure, factor into how we divide the world into categories.
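The within-category vs. between-category similarity idea can be made concrete with a small sketch. This is a toy example of mine, not anything from the literature: the items, the feature sets, and the use of Jaccard similarity are all illustrative assumptions.

```python
# Toy feature sets (which features each item has). Purely illustrative.
features = {
    "robin":   {"flies", "feathers", "sings", "small"},
    "sparrow": {"flies", "feathers", "sings", "small"},
    "eagle":   {"flies", "feathers", "large", "hunts"},
    "trout":   {"swims", "scales", "fins", "small"},
    "shark":   {"swims", "fins", "large", "hunts"},
}
categories = {"bird": ["robin", "sparrow", "eagle"], "fish": ["trout", "shark"]}

def jaccard(a, b):
    """Similarity of two feature sets: shared features / all features."""
    return len(a & b) / len(a | b)

def within_between(categories, features):
    """Mean pairwise similarity inside categories vs. across categories."""
    within, between = [], []
    items = [(cat, m) for cat, members in categories.items() for m in members]
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (c1, m1), (c2, m2) = items[i], items[j]
            sim = jaccard(features[m1], features[m2])
            (within if c1 == c2 else between).append(sim)
    return sum(within) / len(within), sum(between) / len(between)

w, b = within_between(categories, features)
print(f"within = {w:.2f}, between = {b:.2f}")
```

A partition that respects the world's correlational structure should give a high within-category mean and a low between-category mean, which is exactly what this toy partition does; the expertise finding amounts to the claim that which partition maximizes that gap depends on which features you know about and attend to.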
So, with the recognition that knowledge influences categorization, it's possible to start thinking about cultural differences, since knowledge across cultures will certainly differ to some extent. In order to explore potential differences, then, Doug Medin and his colleagues have studied the differences in categorization among different kinds of experts and novices. For example, when categorizing trees, landscapers tend to sort them differently than park maintenance workers and botanists. The latter two groups' sortings tend to agree with scientific taxonomies, as do the sortings of novices, while tree landscapers appear to be using knowledge related to their use of trees (goal-related knowledge) to classify them5. This implies that different cultural groups that have different sets of goals related to the same objects may classify them differently.
Recent research also suggests that there may be cultural differences in categorization that are not due to goal-related knowledge. Atran et al.6 found that three neighboring tribes in a Guatemalan rainforest with similar agrarian lifestyles, and thus similar goal-related knowledge bases, classified plants and animals similarly, but reasoned about them differently. In other words, while their categories divide the world at about the same joints, their concept representations contain different information. It may be, then, that even when members of different cultures classify things in the same way, they actually have very different representations of those things.
In order to explore this, Medin et al.7 conducted a series of experiments comparing the ways in which two expert populations, American fishermen of European ancestry (majority culture) living in Wisconsin and Menominee Indian fishermen in Wisconsin, categorize and represent freshwater fish. Both populations have similar goal-related knowledge of fish, and live in the same region, so any differences observed can't be attributed to goal-related knowledge from expertise. In the first experiment, both groups were given the names of 44 different species of local fish, and asked to list the properties of each species, to sort the species into "meaningful categories," and to provide reasons for sorting them the way they did. They found that the sortings of the two groups were similar, but different in ways that corresponded with the types of justifications they gave. For example, the majority culture participants used goal-related categories, such as "food fish" and "garbage fish," that the Menominee participants did not have. The Menominee participants, on the other hand, had categories based on the fish's habitat, whereas the majority culture participants did not. These sorting differences were reflected in the justifications the two groups gave, with both groups giving goal-related justifications, but the Menominee giving ecological (e.g., habitat-related) justifications much more often than the majority group participants.
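A standard way to quantify how "similar, but different" two sortings are is to ask, for every pair of items, whether both sortings put the pair in the same pile. Here's a sketch of that pairwise logic; the fish groupings below are hypothetical stand-ins I made up, not Medin et al.'s data.

```python
from itertools import combinations

def cooccurrence(sorting, items):
    """For each pair of items, 1 if this informant put both in the same pile."""
    pile_of = {item: i for i, pile in enumerate(sorting) for item in pile}
    return {pair: int(pile_of[pair[0]] == pile_of[pair[1]])
            for pair in combinations(sorted(items), 2)}

def agreement(sort_a, sort_b, items):
    """Proportion of item pairs the two sortings treat the same way."""
    ca, cb = cooccurrence(sort_a, items), cooccurrence(sort_b, items)
    return sum(ca[p] == cb[p] for p in ca) / len(ca)

fish = ["trout", "bass", "carp", "sucker"]
majority = [["trout", "bass"], ["carp", "sucker"]]   # hypothetical: desirable vs. "garbage" fish
menominee = [["trout"], ["bass", "carp", "sucker"]]  # hypothetical: habitat-based piles
print(agreement(majority, menominee, fish))
```

Something like this pairwise comparison is what cultural consensus analyses build on: overall agreement can be high, while the disagreements cluster on exactly the pairs where goal-related and habitat-related groupings pull apart.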
In order to explore these differences in more depth, in a subsequent experiment, Medin et al. asked participants from the two groups to describe interactions between the different species of fish. They found that both groups had a great deal of knowledge of the interactions between different species of fish, but that the types of interactions they described differed significantly. The Menominee participants were much more likely than the majority participants to mention two types of interactions: positive interactions, in which one species helps another, and reciprocal interactions, in which two species benefit each other. These differences in representations are not likely to be due to expertise, since both groups have similar levels of knowledge of the relationships between species, and they're not likely to be goal-related, since they both have similar goals (catching the fish). So the differences must be due to some other aspect of culture. Medin et al. offer the following explanation:
Our speculation is that cultural attitudes and beliefs reinforce certain 'habits of mind' or characteristic ways of thinking about some domain. Specifically, responses of majority culture informants concerning ecological relations may be filtered through a goal-related framework whereas the responses of the Menominee informants appear to be less 'viewer-centered.' (p. 44)

They argue that these differences in focus or orientation will make some types of information more or less available to the members of the different groups. Thus, the two groups may have the same knowledge base, but some parts of that knowledge base will be easier to access, depending on culturally-induced focus. This will lead the groups to activate different information from their knowledge base when questions are more general. In order to test this explanation, Medin et al. conducted a third experiment in which they directly elicited participants' knowledge of the relationships between the different species of fish, by asking them to sort the species on the basis of habitat. By directly eliciting this kind of information, it should be easily accessible even when participants' cultural focus makes it less accessible in other contexts (e.g., that of the previous experiment). Consistent with this, they found that both groups were able to reliably sort based on habitat. Thus, it does appear that the two groups have the same information in their knowledge base, but that the Menominee participants could access ecological information more easily than the majority group participants.

So, the Medin et al. studies, combined with the studies of different types of experts, and the neighboring tribes in Guatemala, all imply that cultural differences in how we categorize and conceptualize things do exist. These differences are likely due to differences in how we interact with those things (i.e., differences in our goals), as well as to cultural differences in focus or perspective. It's important to note, however, that while the differences in our conceptual representations can be big, and important for how we reason, ultimately, the differences in categorization, or how we divide the world up, are pretty small. As Medin et al. put it, cultural differences "emerge against a backdrop of similarity" (p. 58). This similarity is likely due to the correlational structure of the world that motivates the objectivist view. So, Medin et al. state, "cultural models and biological reality both 'bend' a bit in answering one another's demands, and so determining folk conceptions of freshwater fish" (p. 59).
The picture painted by these studies is the one that I try to give when answering questions about cultural differences in cognition in general. The fact is, regardless of where we live, and in what cultures we are raised, we all have highly similar perceptual experiences, and come into the world with the same perceptual and cognitive mechanisms. Thus, for the most part, the way we think about things will be pretty similar, especially for natural categories. However, differences in knowledge across cultures, related to goals and expertise, as well as differences in perspective (Medin et al.'s "cultural models"), will inevitably create differences in the representations we use in processing that information, how accessible certain information is, and as a result, how we reason about things.
1 E.g., Rosch, E. (1974). Linguistic relativity. In A. Silverstein (Ed.), Human Communication: Theoretical Perspectives. New York: Halsted Press; & Rosch, E. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382-439.
2 Medin, D.L, Ross, N., Atran, S., Cox, D., Wakaua, H.J., Coley, J.D., Proffitt, J.B. & Blok, S. (in press). The Role of Culture in the Folkbiology of Freshwater Fish. Cognitive Psychology, p. 2.
3 E.g., Boster, J.S. & D'Andrade, R. (1989). Natural and human sources of cross-cultural agreement in ornithological classification. American Anthropologist, 91, 132-142.
4 Malt, B. C. (1995). Category coherence in cross-cultural perspective. Cognitive Psychology, 29, 85-148.
5 Medin, D. L., Lynch, E. B., Coley, J. D., & Atran, S. (1997). Categorization and reasoning among tree experts: Do all roads lead to Rome? Cognitive Psychology, 32, 49-96.
6 Atran, S., Medin, D.L., Ross, N., Lynch, E., Vapnarsky, V., Ucan Ek', E., Coley, J., Timura, C. & Baran, M. (2002). Folkecology, Cultural Epidemiology and the Spirit of the Commons: A Garden Experiment in the Maya Lowlands. Current Anthropology, 41, 1-23.
7 Medin, D.L, Ross, N., Atran, S., Cox, D., Wakaua, H.J., Coley, J.D., Proffitt, J.B. & Blok, S. (in press). The Role of Culture in the Folkbiology of Freshwater Fish. Cognitive Psychology.
The Cognitive Science Cafe
Here is the menu. Some are pretty funny, though most of them require more than a little familiarity with the field to make sense. Here's my favorite:
The Keil Transformation Sandwich
A prototype with the jelly and peanut butter surgically removed, cucumber and cream cheese injected, and wearing a costume of white bread.
The "Prototype" is peanut butter and jelly on white bread, of course. Is the Keil Transformation Sandwich still a Prototype?
If you get "The Keil Transformation Sandwich," you shouldn't have any trouble with the others.