Tuesday, March 29, 2005

Political Analogies

After I got over my initial disgust, this blatantly deceptive post by Steve Burton at the conservative group blog Right Reason got me thinking about the use of analogies in political discourse. Burton presents a version of an analogy that Juan Cole used to argue that the Republican Party is becoming more theocratic (a version that has little to do with Cole's own). Burton then uses an analogy of his own, comparing his misrepresentation of Cole's argument to analogies drawn between the Democratic Party and communism. With that many analogies in such a short post, I couldn't help but be reminded of the work in cognitive psychology on political analogies, and of the discussion of framing in politics that George Lakoff inspired during the 2004 presidential campaign. Analogies are often used to frame issues and debates, and for anyone, liberal or conservative, to use Lakoff's insights effectively (most of which are not really original, but which only began to receive widespread attention through his political writings), they must have an understanding of how analogies work in general, as well as how they are used in politics specifically. So I thought I'd talk a little about this.

The Basics of Analogy

First the basics. I may have said all of this before in more detail, but I don't really expect people to go back and read old posts, so I'll say it again. Analogies generally consist of two components: the target, about which we are trying to say something, and the source (or base), which we are using to say something about the target. For example, in Rutherford's classic atom-solar system analogy, the solar system, which was at the time a better-known domain, is used as the source, and the atom, the lesser-known domain, is the target. There are three steps in analogical reasoning. In the first, we have a target, and have to retrieve a source domain. In the second step, we construct a mapping between the source and the target. This involves aligning the representations of the two domains, so that their common relational structure is in correspondence. Thus, in mapping the atom onto the solar system, we align electrons with planets, and the nucleus with the sun, so that the common relation "small object revolving around a larger object" is preserved. After the mapping stage, we can construct inferences from the source to the target. For example, we might infer from the atom-solar system analogy that there is a force caused by the nucleus that keeps the electrons rotating around it, just as there is a force (gravity) that keeps the planets in their orbits around the sun. These inferences are made based on the common relational structure between the two domains, so that attributes of the source that do not correspond to anything in the target will not be used to form inferences. It is this stage that makes analogies so useful, because they allow us to use our knowledge of better-known domains to reason about lesser-known domains.
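The mapping and inference stages can be illustrated with a deliberately simplified sketch. This is a toy of my own construction, not any researcher's actual model: the domains are encoded as relational facts, and the relation names and matching procedure are illustrative assumptions (a real structure-mapping account enforces one-to-one, structurally consistent correspondences, which this naive matcher does not).

```python
# Toy illustration of structural mapping and analogical inference,
# using Rutherford's atom-solar system analogy.
# Each domain is a set of relational facts: (relation, arg1, arg2).

SOLAR_SYSTEM = {  # the source (better-known) domain
    ("revolves_around", "planet", "sun"),
    ("more_massive", "sun", "planet"),
    ("attracts", "sun", "planet"),  # gravity
}

ATOM = {  # the target (lesser-known) domain
    ("revolves_around", "electron", "nucleus"),
    ("more_massive", "nucleus", "electron"),
}

def structural_mapping(source, target):
    """Align source objects with target objects wherever the two domains
    share a relation, so common relational structure is preserved."""
    mapping = {}
    for rel, s1, s2 in source:
        for rel2, t1, t2 in target:
            if rel == rel2:  # shared relation -> align its arguments
                mapping[s1] = t1
                mapping[s2] = t2
    return mapping

def candidate_inferences(source, target, mapping):
    """Source relations whose arguments all map, but which are absent from
    the target, become candidate inferences about the target."""
    inferred = []
    for rel, a, b in source:
        if a in mapping and b in mapping:
            fact = (rel, mapping[a], mapping[b])
            if fact not in target:
                inferred.append(fact)
    return inferred

mapping = structural_mapping(SOLAR_SYSTEM, ATOM)
# mapping: {"planet": "electron", "sun": "nucleus"}
inferences = candidate_inferences(SOLAR_SYSTEM, ATOM, mapping)
# inferences: [("attracts", "nucleus", "electron")]
```

Note what the sketch captures: the inference that the nucleus attracts the electrons is generated only because "attracts" connects objects that already participate in the shared relational structure; an attribute of the source with no correspondent in the target would produce nothing.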

The lesson to draw from the preceding paragraph is that analogies work because they preserve structural commonalities between two domains. Superficial comparisons, based only on surface similarities (e.g., common perceptual properties, like color or size), are less effective at producing inferences, and thus less effective at producing or conveying knowledge about the target domain. However, experiments on analogy have shown that people are not very good at noticing structural similarities between a target and potential source domains that they have been given1. Instead, they tend to make comparisons based on surface similarities (in most experiments, about 70% of participants use surface similarities). Since people do in fact use analogies effectively in many real-world contexts, implying that they are able to make analogies based on structural, rather than superficial, similarities, researchers began to study analogy outside of the laboratory, to see where the laboratory experiments might be going wrong. One of the areas they looked to was politics, because even a cursory look at political discourse shows that analogies are used frequently there.

Political Analogies: The Research

During the lead-up to the Gulf War in 1990-1991, two competing analogies were used extensively. The first, used mostly by opponents of the war, compared the situation in Kuwait to the Vietnam War. The second, used by proponents of the war, especially President Bush, compared the same situation to World War II. In fact, the World War II analogy was largely used to reframe the debate so that Vietnam would no longer be seen as analogous. Here are a couple of examples of the use of the World War II analogy from statements by Bush:
A half a century ago our nation and world paid dearly for appeasing an aggressor who should and could have been stopped. We're not about to make the same mistake twice.

Facing negligible resistance from its much smaller neighbor, Iraq's troops stormed in blitzkrieg fashion through Kuwait in just a few short hours.
In addition to broad comparisons like these, Bush and others also repeatedly compared Saddam Hussein to Hitler, further solidifying the Gulf War-World War II analogy/frame.

The effectiveness of the World War II analogy (it virtually eliminated the influence of the Vietnam frame in the debate over the war) belies its complexity. For instance, when creating a mapping between the two domains (the second stage above), what should we map the U.S. onto? There are at least two possibilities: the U.S. during World War II, or Great Britain during World War II. And what about Bush himself? Is he Roosevelt or Churchill? In order to use the analogies effectively, people must be able to resolve these ambiguities. How do they do so? In order to find out, Spellman and Holyoak2 brought this analogy into the lab. They gave participants various versions of the Gulf War-World War II analogy, and asked them to map the various predicates of the World War II domain onto those of the Gulf War domain. They found that even though it was possible to map the U.S. (in 1990) onto the U.S. (in 1941), which would be a mapping based on surface similarities, participants preferred mappings that preserved the structural relations between the two domains, even if that meant mapping the U.S. onto Great Britain. Thus, participants resolved the ambiguities in the analogy through the use of structural similarities. This confirmed the suspicion that while participants in experiments that used experimenter-constructed analogies had difficulty producing structural mappings, people use, and actually prefer, such mappings when constructing and interpreting real-world analogies.

But why? Why are participants able to utilize structural similarities with real-world analogies, while they have so much trouble doing so in ordinary experimental contexts? To answer this question, Isabelle Blanchette and Kevin Dunbar left the laboratory again, and looked to the real-world use of analogies in political discourse for insight. As their domain of study, they chose the referendum in 1995 to decide whether Quebec should secede from Canada, and form a separate nation. For the Americans among us, here are the basics of that debate, from Blanchette and Dunbar (2001)3.
The Canadian province of Québec held a referendum on October 30th 1995 to determine whether Québec should separate from the country of Canada and become an independent country or whether it should stay part of Canada. The province of Québec is where the majority of the French-speaking population of Canada resides. Voters had the choice of voting YES for becoming a new country, or NO for staying in Canada. The political campaign was divided into the YES side and the NO side. Both sides campaigned extensively and the NO side won by the slimmest of majorities (51%). (p. 731)
They analyzed the discussion of the referendum in three Montreal newspapers, searching for any instance of analogy. For the purposes of data collection, they defined analogy as "All items in which a person stated a similarity exists between X and Y and mapped a feature or features from X to Y" (p. 731). They found analogies in 38% of the almost 500 articles that they searched. Most of these analogies (77%) were to non-political domains, indicating that people were constructing analogies based on structural, rather than superficial, similarities (the most common domains were "magic/religion, sports, and family relationships," p. 732). Examples of the analogies they found include:
Québec’s path to sovereignty is like a hockey game. The referendum is the end of the third period.

Separation is like a major surgery. It’s important that the patient is informed by the surgeon and that the surgeon is impartial. The referendum is a way to inform the population. But in this case the surgeons are not impartial and they really want their operation.

It's like parents getting a divorce, and maybe the parent you don't like getting custody.
In addition to coding the analogies by source domain, they also coded their emotional content (positive, negative, or neutral), the position of the author (pro-separation or anti-separation), and the goal of the analogy (whether it was used to support the author's position, or to attack the alternative position). They found that analogies with positive (45%) and negative (40%) emotional content were used about equally often, and that both the position of the author and the goal of the analogy influenced the analogy's content. For instance, pro- and anti-separation authors tended to use analogies from the same domains, but with different sources from those domains. One example Blanchette and Dunbar give comes from the family relations domain: pro-separation authors tended to use "birth" analogies frequently, while anti-separation authors used "divorce" analogies. The goal of the analogy tended to influence its emotional content: when the analogy was used to support the author's position, the emotional content was almost always positive, while analogies used to attack the other position almost always had negative content.

What does all of this tell us? Well, it shows us again that people are able to construct analogies based on structural, rather than superficial, similarities in real-world situations. In addition, it provides a hint as to why they are able to do so easily, while experiment participants are not. Since the emotional content of the analogy, the position of the author, and the goal of the analogy were all important, the retrieval of a source appears to be highly constrained (e.g., pro-separation authors who are attacking the anti-separation position will, in most cases, only retrieve emotionally negative sources from one of a few source domains, and with content that does not produce negative connotations for their own position, which rules out sources like "divorce"). In most experimental contexts, participants are given a set of sources which may be from unusual domains, and must later retrieve those sources when given a target. Thus, they are not retrieving their own sources, based on their own position and goals, which may make it more difficult for them to find structural similarities4.

To test this hypothesis, Blanchette and Dunbar went back to the laboratory. There they presented participants with a political problem (the zero-sum deficit problem in Canada in the late 1990s), and asked them to generate either pro or con analogies5. Consistent with the hypothesis that participants in previous studies had trouble noticing structural similarities because they were not generating the sources themselves, Blanchette and Dunbar (2000) found that when participants generated their own analogies, they produced significantly more analogies based on structural features than on surface features.

Blanchette and Dunbar (2000) also noted another interesting feature of the analogies that participants produced when they generated their own source domains. In most cases, participants left most of the analogy implicit. In particular, while they tended to give at least some features of the source domain, they did not do so for the target, leaving it up to the reader of the analogy to draw appropriate inferences. This fits well with their observations of real-world political analogies, which are often largely implicit as well.

Political Analogies: The Implications

So, in politics, people are using structurally-based analogies, which tend to be emotionally charged, drawn from familiar but non-political domains, and which vary in content depending on the position the user of the analogy has taken and how he or she is using the analogy (i.e., the goal of the analogy), with most of the content left for the hearer or reader to fill in. Furthermore, when participants in laboratory experiments generate their own analogies, the features of those analogies are similar to those that are found in real-world political discourse. It's likely that all of these features serve a communicative and persuasive purpose. To wrap up this post, I want to briefly discuss how these features work to frame issues and positions.

To understand how analogies can be so effective in political discourse, it's important to understand that they are not just simple rhetorical devices. The mapping between the source and target domains can serve as a schema for reasoning about the target. In fact, this is likely why most analogies are largely implicit. Researchers have shown that schema-formation through analogical mapping is more likely when people use an analogy (i.e., construct inferences from the source to the target)6. Thus, by allowing people to do most of the work in mapping the target onto the source and drawing inferences, we allow the mapping between the source of our choosing and the target to serve as a schema for reasoning about the target. If our source has particular emotional connotations, these will likely be part of the schema, and thus people will attach those emotional connotations to the target when reasoning about it in the future. The effectiveness of the "Saddam is Hitler" metaphor, used over and over again in the U.S. since 1990, is an excellent demonstration of this. So is the Vietnam analogy, which has negative connotations for both sides. In 1990-1991, it was ineffective, overshadowed by the World War II analogy, which created positive emotional associations with the United States. However, it has been more effective in comparisons to the current conflict in Iraq, which has led to negative emotional associations with the U.S., its government, and its military tactics.

All of this shows that it is important to choose your analogies, and how you express them, very carefully. Effective analogies can change the way large groups of people reason about an issue. It is also important to make sure that you do not discuss an issue within the frame of your opponent's analogy. As research has shown, the initial use of an analogy effectively sets the structure of the frame, and thus determines the sorts of inferences that people will draw. While trying to highlight alternative properties of the target to license alternative inferences from an analogy (e.g., if one opposed the Gulf War in 1990, one might have noted that the Allies' actions after World War II led to the destabilization of much of Asia, as well as creating the Eastern Bloc) can be effective, it is better to create new analogies that can cause people's representations to change more dramatically. This is what Lakoff is getting at when he says that liberals should talk about taxation as "dues paid for services rendered" rather than as an affliction (the source implied by speaking of "tax relief"). Finally, it's important to draw analogies from domains with which people are very familiar. These will make the mappings easier to create, and thus make it easier for people to draw inferences from them and to recognize emotional associations. This is probably why so many of the analogies that Blanchette and Dunbar found were to domains like family relations, sports, and religion.

1 E.g., Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15(1), 1-38. See also Gentner, D., Rattermann, M. J., & Forbus, K. D. (1993). The roles of similarity in transfer: Separating retrievability from inferential soundness. Cognitive Psychology, 25, 524-575.
2 Spellman, B. A., & Holyoak, K. J. (1992). If Saddam is Hitler then who is George Bush? Analogical mapping between systems of social roles. Journal of Personality and Social Psychology, 62(6), 913-933.
3 Blanchette, I., & Dunbar, K. (2001). Analogy use in naturalistic settings: The influence of audience, emotion, and goals. Memory & Cognition, 29(5), 730-735.
4Dunbar, K. (2001). The analogical paradox: Why analogy is so easy in naturalistic settings, yet so difficult in the psychological laboratory. In D. Gentner, K.J. Holyoak, & B. Kokinov (eds.), The Analogical Mind: Perspectives from Cognitive Science, pp.
5 Blanchette, I., & Dunbar, K. (2000). How analogies are generated: The roles of structural and superficial similarity. Memory & Cognition, 28, 108-124.
6 See e.g., Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15, 1-38.



Saturday, March 26, 2005

Higher Brain Death and Personhood

The question of what "personhood" is, is one of the more difficult, and more central, questions raised by many of today's ethical dilemmas. Obviously, it is at the center of the abortion debate, and as the Terri Schiavo case has made clear, it is also the primary question raised by the definitions of "life" and "death." For better or worse, this is a question that medicine and science cannot answer for us. This means that we have to come up with definitions which, while they may be based on empirical facts, also include interpretations of those facts that are not strictly scientific. In a previous post, I stated unequivocally that I believe that in cases of obvious higher-brain death (i.e., when it is physically impossible for a person to ever have any higher-brain functioning again), there is no reason to use the modifier "higher-brain." The "person" is simply "dead." Brandon of Siris disagrees, and in two posts, he has presented well-reasoned arguments for his position. In the first, he wrote:
Claims, then, that higher-brain death is death simpliciter, that a person who has ceased having higher-order brain functions is no longer a person, appear to me to be utterly irrational and arbitrary. They are a denial of the fairly obvious fact that human personhood is exhibited in all our vital functions. They are a flight from human animality, a pretense that higher-order thought is all that there is to being a human person. (It is an old story. The excuse that some human persons should not be regarded as persons because they do not properly exhibit rationality is one of the oldest moral dodges in the book; it has been used to justify slavery, racism, sexism, sterilization of the mentally disabled, and the like. If you are going to go this route, you had better have a damn good argument for doing so.)
Brandon considers the higher-brain death position (my position) to involve treating the forebrain as the homuncular center of personhood. In the second post, he wrote:
[The higher-brain death view] is essentially a view under which a person is a humunculus in the brain; the humunculus is the person, and because of this, when the humunculus goes away, the person goes away.
Ultimately, I don't think that Brandon can treat the higher-brain death position as arbitrary without treating his own as arbitrary as well. The reason he fails to see this is that he misunderstands the higher-brain death position. Ultimately, the only difference between them is that they take different views of personhood from the start, but the higher-brain death position does not take the one that Brandon claims. At least, it does not do so in my own version of it, which I do not think is all that idiosyncratic.

An individual human person is differentiated from other human individuals, as well as nonhuman individuals, by a collection of memories, beliefs, knowledge, skills, and tendencies. In other words, what defines an individual person is a history. That history is, by and large, contained in the cerebral cortex and surrounding brain areas, along with the cerebellum. The hind-brain and mid-brain structures that may survive higher-brain death may contain some simple motor programs and associations, but for the most part, they contain innate reflexes and other programs that, while they allow the body itself to survive, do not provide for any real differentiation between individuals. In higher-brain death, then, all that survives is the basic functioning of the organism, while all of the properties that comprise a particular self, a particular person, are gone. Sure, the physical appearance remains, as do the internal bodily idiosyncrasies of an individual (if you have a bad liver from years of drinking, the liver will remain damaged even with higher-brain death), but these differences do not make an individual person. They make an identifiable body, and survive even complete brain death.

From an empirical standpoint, both ways (Brandon's and mine) of delineating personhood are arbitrary. Both place the line between life and death at empirically verifiable boundaries. What criteria, then, can we use to decide between them? My reason for requiring the possibility of higher-brain functioning is this: I believe that to define personhood as the basic functioning of the organism (e.g., breathing, digestion, blood circulation, etc.), or as anything less than the existence of a neurally-realized history, a set of physical properties of the higher brain that differentiate and define individuals, is harmful to the dignity of human individuals. It results in the view that all of the things that make us different from one another -- all that makes us who we are -- are superfluous, merely icing on the cake of undifferentiated vital functioning. But ultimately, who we are, what we are, as persons, is more than just breathing, circulation, and digestion; it is a history that is recorded in the configurations of our higher-brain structures. When those are gone, we are not persons; we are merely bodies with life, but no lives.

Tuesday, March 22, 2005

A Note From The Author

This blog has received what is for me a lot of attention over the last month, with my visitor tally for March exceeding the total for any previous month, and there are still 9 days left to go. I just wanted to thank everyone for stopping by, linking, and especially commenting, whether you agree with what I've said or not. In fact, I actually prefer the comments and links that express disagreement, because while it's nice to know that there are people who agree with me, I enjoy the discussion that comes with disagreement much more. Anyway, thanks again, and let me know if there's anything related to cognition that you'd like to see me post about.

Enough with the Schiavo Already

I have to admit, I am amazed at the amount of blog attention that the Terri Schiavo case is getting, and to be honest, I'm sick of it. And that's not just because I'm disgusted by some of the rhetoric (like, say, calling a survey question misleading because it states medical facts; or implying that Michael Schiavo should have no say because he's had a relationship with another woman since his wife's death). It is in part due to the fact that I am thoroughly disgusted by the fact that so much money can be spent (by the government, especially) on keeping a person "alive" when there is such incontrovertible evidence of higher-brain death. Honestly, I'm disgusted by the very use of the modifier, "higher-brain" behind the word "death" in such cases, and I'm even more disgusted by the use of the word "alive" as a description of the state that Terri Schiavo is currently in. Obvious cases of higher-brain death are just death, period. And spending money that could be spent on people who are actually alive, just to keep a person's hind-brain firing, makes me really angry.

But more than all of that, I'm disgusted by the amount of attention the Schiavo case is getting because of the amount of attention it is taking away from issues that are really important. Somewhere, people who talk about the government/media manipulating what we think about are rolling their eyes. Why don't we expend all of this blog-writing energy and effort writing hundreds of posts about issues like social security, the war (why is it that these things were called "police actions" in the 50s and 60s, by the way?), or all the people who are alive, sick, and can't get the treatment they need because they can't afford adequate medical insurance? I'm serious. If liberals in particular spent this much time researching and writing about universal health care, maybe somebody would finally pay attention. But instead, we're writing about removing a feeding tube from a dead woman, because Republicans have made it an issue.

Blogging About Science

My exasperated attempt to give potential science writers advice (written without the belief that a single science writer would actually read it) has received a lot of links, and a fair amount of comments, relative to 99.9% of the posts I write. Clearly I've struck a nerve. Some people agree with me, some disagree, some agree with the sentiment but disagree with the specifics, and some still think I believe there are no differences between men and women (sorry, slightly inside joke). Since it's gotten so much attention, I want to make a few things clear that I didn't express very well in the first post.
  1. If you write a podunk blog, like this one, that's only read by you, your mother, and the occasional Malaysian university student who is searching for "hot baked hedgehogs" on google, you probably don't need to follow my advice. You're not going to be contributing to the general misunderstanding of science, which afflicts this country like a stomach illness you might get from eating baked hedgehogs. The post was meant to apply to people with large readerships, and in particular, people who are seen as authorities (i.e., experts in other fields), because readers tend to believe what they say, no matter how deep down in their asses they had to reach to pull it out. If you're an expert in something (other than, say, basketweaving), and you have a moderate-sized blog, you should probably think carefully about how much you blog about scholarly topics that are largely outside of your knowledge base. Most importantly, if you're writing an article for a print publication (magazine, newspaper, or policy-influencing think-tank report), then you'd better do the damn research, and if the only excuse you have is, "It's hard and it takes a long time to read all of those papers," then you shouldn't be writing anything, anywhere, on any topic.
  2. Reading trade books (books about science, by scientists in the field, that are written for non-experts) is perfectly fine. I do it. It's a great way to be introduced to the work in a field to which you will probably never need to be anything more than introduced. Trade books are like intellectual casual sex -- they're relatively easy, you enjoy them, you learn something, and you don't have to worry about a long-term commitment to the field. But if you want to write about work in a field (under the circumstances described in 1.), then you've got to do more than just read a trade book or two. You've got to, at the very least, seek out different perspectives on the issues you're writing about. Very rarely in any science, but particularly in young disciplines like cognitive science, are there many issues about which a large majority of experts agree. If you're writing about science (again, under the circumstances described in 1.), it is incumbent upon you to seek out and pay attention to the evidence. That's what science is about: evidence. You don't have to find every paper ever written on the Stroop Effect to write about it (and if you have read every paper on the Stroop Effect, and there are tens of thousands of them, contact a therapist). However, as I said before, if you feel like it's too hard, or takes too much time, to research the available data on a particular topic, then don't write about it (one more time, see 1.).
  3. Science books written by journalists, people from fields other than the one about which they are writing, or (sorry about this) worst of all, philosophers, should never be relied upon if you are writing about a scientific topic (I won't say it again, but imagine there's a parenthetical note about a prime number less than 2 here). Again, they can be good reads, though I recommend reading books written by people in the field instead. If trade books are the intellectual equivalent of casual sex, then these are the intellectual equivalent of getting to second base. They'll almost always be incomplete, but if the book is well-written, you'll get a good feel (I couldn't resist that one) for the general direction of a field or sub-field.
  4. If you read it in Time, The New York Times, or some equivalent non-scientific popular publication, and don't want to do any further reading, ignore it. I mean that. You may disagree, but too often do these sorts of publications just get it wrong. For one, their writers tend to suffer from a gross misunderstanding of how science works, treating "objectivity" as roughly equivalent to "equal time," a strange mindset that Lindsay discussed so well in the recent Iron Blog battle. Biologists and climatologists are all too aware of this, as they get the worst of it in the form of endless references to intelligent design, creationism, the belief that evolution is "just a theory" (interestingly, the moon is also just a moon), or references to unscientific anti-global warming positions. Mostly though, these publications are not out to accurately represent science. They're out to increase their circulation.
  5. If you are writing for a widely-read publication (and that includes really big blogs), you'd be surprised how eager many scientists are to talk with you. In general, scientists are interested in the way the public views their field. For one, the public elects politicians, and politicians determine the amount of funding that various sciences get. Furthermore, scientists are most often fact-oriented, education-minded people, and thus they want the public to develop a good understanding of science, how it works, and what it's finding. If I were to write a particle physicist for help with a post on the Higgs boson, she might write me back with a book or paper suggestion, but if a journalist or even a big blogger writes her, she's likely to try to help, unless she's a year from her tenure review and hasn't published a thing since grad school. If she can't help, for whatever reason, she's likely to know someone who can. And there's no harm in asking.
  6. Yes, it's true, I don't like Steven Pinker's work, and I do like Ray Jackendoff's. But Pinker is a very good science writer. Read Foundations of Language for its really interesting chapter on the evolution of language, and read The Language Instinct for the nice prose.

Monday, March 21, 2005

Experimental Evidence for Memory Suppression

If the Larry Summers deba(cle)te has taught me anything, it's that where issues of science are concerned, very few opinions are worth anything. Too often have I seen appeals to the (often baseless) opinions of "authority figures" (e.g., Steven Pinker) treated as evidence for one position or another. But this is not how scientific debates are supposed to work. We're supposed to reference facts, not names, reputations, or degrees. This is particularly true for issues with serious practical implications. In these cases, suddenly everyone is an expert, though in most cases the opinions of these "experts" are based on selective readings of the literature, especially when the "experts" aren't actually experts in the area on which they are voicing their opinions (as is the case for Pinker and sex differences in mathematical ability).

All of this applies to me as much as anyone. While I do study memory (long-term, schematic memory), I am not an expert on repressed, recovered, false, traumatic, or emotional memories, and therefore my opinions on these topics should not be substituted for a careful review of the available scientific research. My responsibility, as a scientist who is making public statements about these topics, is to try to present the facts. While in my previous two posts I tried to be fair to both sides, even discussing at length (by blog standards) two of the most often cited defenses of the pro-repression position, I have not presented all of the evidence. I feel justified in leaving out some research (e.g., that of Bessel van der Kolk) because it is either inferior or irrelevant. In other cases, however, my omissions may not have been justified. Because I've recently received a lot of visits (and as a result, a couple emails) from Google searches on recovered memories, I feel like it's incumbent upon me to present the issue in a more balanced fashion, sticking to the facts, in the interest of fairness and reasoned scientific debate. With that in mind, I'm going to describe one line of laboratory research that has produced findings that may be taken as evidence for memory mechanisms that could lead to repression.

Before I get to that, though, I want to briefly note that I am not a dispassionate observer when it comes to recovered memory phenomena. One of the charges leveled against cognitive psychologists (including me, in an email), who have generally taken anti-recovered memory positions, is that they are drawing conclusions without any real contact with the real-life effects of repressed and recovered memories. Clinicians, on the other hand, see their effects on a daily basis, and are therefore less inclined to deny their existence or dismiss recovered memories as false or implanted. However, I have seen their effects up close and personal. My son's mother, with whom I lived for several years, is an abuse survivor who, while she has always had memories of intense physical and psychological abuse, only "recovered" memories of sexual abuse, which she believes took place between the ages of 3 and 6, in her early 20s. The same is also true of my closest friend of many years, who has always had memories of parental abuse, but "recovered" memories of sexual abuse at the hands of a non-family member in her late 20s1. The effects of these "recoveries" on both of these people have been staggering, and for the people close to them, like me, heartbreaking. Their lives have been forever altered. Personal experiences like these are the impetus for my own attempt to learn and understand the research on repressed memories.

OK, so on to the science. Michael Anderson and his colleagues2 have developed experimental paradigms that they feel provide tests for the existence of memory suppression mechanisms. In one paradigm, which they call the "retrieval practice paradigm," (Anderson et al., 1994; Anderson & Spellman, 1995) participants are given lists of pairs of words (e.g., "fruits-banana"), the first of which names a category, and the second of which names a member of the category. After learning the word pairs, they complete a stem-completion task for half of the category member words from half of the categories. This involves being presented with the category word (e.g., "fruit") and the beginning of one of the category members (e.g., "ba_____"). The task is to complete the stem with one of the words from the word pairs they had previously learned. They do this three times for each of the words in the stem-completion task, in order to practice retrieving these words from memory. Finally, they are given a recall test for all of the original word pairs, in which they are given the category word and asked to recall the words that had been paired with it. In this paradigm, there are three types of category member words: words from categories used in the stem-completion task that were practiced, words from those same categories that weren't practiced in the stem-completion task, and words from categories that weren't used in the stem-completion task. They found that recall was best for practiced words and worst for unpracticed words from categories used in the stem-completion task, with words from categories not used in the stem-completion task in between the two. They argue that this demonstrates the existence of a mechanism for inhibiting the recall of related items.
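The design is easy to picture as the assignment of studied pairs to three conditions (labeled Rp+, Rp-, and Nrp in this literature). Here is a minimal Python sketch of that assignment; the category lists are made-up stand-ins, not the actual stimuli from Anderson et al. (1994):

```python
import random

def build_design(categories, seed=0):
    """Split studied category-exemplar pairs into the three conditions:
    Rp+  (practiced items from practiced categories),
    Rp-  (unpracticed items from practiced categories),
    Nrp  (items from unpracticed baseline categories)."""
    rng = random.Random(seed)
    names = list(categories)
    rng.shuffle(names)
    practiced_cats = set(names[:len(names) // 2])  # half the categories get practice
    design = {"Rp+": [], "Rp-": [], "Nrp": []}
    for cat, members in categories.items():
        members = list(members)
        if cat in practiced_cats:
            rng.shuffle(members)
            k = len(members) // 2                  # half the members get practice
            design["Rp+"] += [(cat, m) for m in members[:k]]
            design["Rp-"] += [(cat, m) for m in members[k:]]
        else:
            design["Nrp"] += [(cat, m) for m in members]
    return design

# illustrative stimuli
stimuli = {"fruit": ["banana", "orange", "apple", "grape"],
           "tree":  ["birch", "elm", "oak", "maple"]}
design = build_design(stimuli)
# practice trials would then present, e.g., "fruit ba_____" three times per Rp+ item
```

The key comparison in the final recall test is Rp- against the Nrp baseline: worse recall for Rp- items is the retrieval-induced forgetting effect.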

There is an alternative explanation for this finding, namely that with practice, the stem-completion category members are strengthened to such an extent that they completely overshadow the memory traces for the non-practiced items from the same category, making it very difficult to recall the latter. To rule this explanation out, they ran a modified version of the same experimental paradigm in which some of the category member words were actually members of two separate studied categories (e.g., participants studied the categories "red" and "food," and the word "tomato," which is a member of both). For these words, one of the categories was used in the stem-completion task, while the other wasn't. If memory inhibition is the cause of the difficulty in recalling non-practiced words from stem-completion categories, then words like tomato should be inhibited regardless of the cue (practiced category or unpracticed category) used during the retrieval task. And that is what they found: practiced words were again recalled the most, and unpracticed words from two categories were recalled the least, regardless of whether the cue was the practiced or unpracticed category.

Interestingly, the effects demonstrated in these two experiments (called retrieval-induced forgetting) have been replicated using "mock crime scenes"3. In these experiments, participants view images of a crime scene, and are subsequently interviewed about some of the details of that scene. Afterwards, information about the crime scene that was not discussed in the interview is inhibited, and thus recalled at a lower rate.

While these experiments show that there may be inhibition processes recruited during memory retrieval, they do not show that unwanted memories can be actively inhibited. To show this, Anderson and Green (2001) used the following paradigm. Participants first learned forty word pairs, this time containing unrelated words (e.g., "ordeal-roach"). Immediately after the completion of this task, they were given a list of words to recall (recall pairs), and a list of words not to think about when they received the associated cue (suppression pairs). They were then given the cue words (the first word from each pair), and either recalled the associated word, or avoided thinking about it, depending on whether it was from a recall or suppression pair. Finally, they were given another recall task, and told to ignore the previous instructions to not think about some of the words when they received the cue. In other words, in the final recall task, they were to recall all of the words when given their associated cues. Recall for the suppression pairs was significantly poorer than recall for the words from the recall pairs. Furthermore, the amount of recall for a word from a suppression pair depended on how many times they had suppressed thoughts of a word when receiving its associated cue, with more suppression leading to poorer recall. As with the retrieval practice paradigm, they ran several variants of this experiment designed to rule out alternative explanations.
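For clarity, here is a minimal sketch of the condition assignment in this "think/no-think" procedure. The word pairs beyond "ordeal-roach" and the repetition counts are illustrative placeholders, not the actual stimuli or design parameters:

```python
import random

def assign_conditions(pairs, repetitions=(1, 8, 16), seed=0):
    """pairs: list of (cue, target) word pairs.
    Returns a schedule of (cue, target, condition, n_repetitions), where
    'respond' cues are to be answered with the target, and 'suppress' cues
    are to be met by avoiding all thought of the target."""
    rng = random.Random(seed)
    pairs = list(pairs)
    rng.shuffle(pairs)
    half = len(pairs) // 2
    schedule = []
    for i, (cue, target) in enumerate(pairs):
        cond = "respond" if i < half else "suppress"
        reps = rng.choice(repetitions)   # how often this cue appears in the phase
        schedule.append((cue, target, cond, reps))
    return schedule

pairs = [("ordeal", "roach"), ("steam", "train"),
         ("jaw", "gum"), ("moss", "north")]
schedule = assign_conditions(pairs)
# final test: present every cue and ask for its target, regardless of condition
```

The prediction the data supported is a dose-response pattern: the more times a cue appeared in the suppress condition, the worse final recall of its target.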

The results of these (and other related) experiments appear to indicate that people can actively inhibit unwanted memories by suppressing thoughts of those memories when the relevant retrieval cues are present. Levy and Anderson (2002) speculate that the inhibitory mechanisms involved in these tasks may involve the anterior cingulate cortex and the dorsolateral prefrontal cortex or left inferior prefrontal cortex (depending on the type of information being inhibited). The anterior cingulate cortex, they claim, likely sends a signal to the prefrontal cortex that executive control is needed, and the actual inhibition takes place in the prefrontal cortex.

Before the repression advocates start jumping up and down, they should be aware of several limitations to these studies. First of all, the Anderson and Green results have proved difficult to replicate in other labs4. Since it is the Anderson and Green experiments that provide the most direct evidence for the suppression of unwanted memories, failure to replicate them is problematic. In addition, none of the Anderson studies use emotional memories, or memory under conditions of high arousal. Since research has shown that it is much more difficult to suppress emotional memories, this makes the generalizability of these studies to traumatic memories difficult. It may turn out that the suppression of unwanted traumatic memories is much more difficult, a finding that would be consistent with previous research. A third problem is that in all of the Anderson experiments, and in the Anderson and Green experiments in particular, recall for the inhibited or suppressed memories is still quite high (never dropping below 70%). In fact, when the task is changed from a cued recall task to a stem completion task (a more implicit measure of memory), the difference between suppressed and non-suppressed words becomes quite small, with suppressed words being recalled about 80% of the time. If the type of suppression Anderson argues his experiments demonstrate can account for the long-term and complete repression of traumatic memories, research will have to show how these memories can be completely forgotten. This hints at another problem with these studies: they all take place over the course of about an hour. We cannot know from these studies how long-term suppression of unwanted memories will affect them. Will it reduce recall even more, perhaps to the point where people are completely unable to recall the unwanted memories? Or is there a limit to the effects of suppression? In order to apply the findings to traumatic memories, we would need to answer these questions.
Finally, there is no indication of how suppressed memories might be recovered. As of yet, there is absolutely no empirical evidence for anything resembling a memory-recovery mechanism for suppressed memories.

In conclusion, then, while the research of Anderson and his colleagues may provide the first evidence of suppression mechanisms, which Anderson believes may be related to Freud's concept of repression, it remains to be seen whether their data are relevant to the recovered memory debate. I do think it's the best evidence repression advocates have to offer, because it is the only real evidence from carefully controlled experiments, but the fact that this is all they really have is just one more indication that the repression hypothesis has no direct empirical support.







1 If you're still wondering how I could be a skeptic after having observed the effects of recovered memories on two people close to me, then let me add this. In one case, implantation is very likely. Both highly suggestive therapy and the revelation by a sibling that she had been sexually abused "triggered" the recovered memories. In the other case, there is strong evidence that the "recovery" is actually a case of forgetting that she remembered. Other friends who knew her before me report that even before the age at which she "recovered" the memories of abuse, she discussed the abuse with them, indicating that the "recovery" may have been of memories that had never actually been forgotten.
2 Anderson, M.C., et al. (1994). Remembering can cause forgetting: retrieval dynamics in long-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1063-1087. Anderson, M.C., & Spellman, B.A. (1995). On the status of inhibitory mechanisms in cognition: memory retrieval as a model case. Psychological Review, 102, 68-100. Anderson, M.C., & Neely, J.H. (1996). Interference and inhibition in memory retrieval. In E.L. Bjork & R.A. Bjork (Eds.), Memory. Handbook of Perception and Cognition (2nd edition), pp. 237-313, Academic Press. Anderson, M.C., et al. (2000a). Similarity and inhibition in long-term memory: evidence for a two-factor model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 1141-1159. Anderson, M.C., et al. (2000b). Retrieval-induced forgetting: evidence for a recall-specific mechanism. Psychonomic Bulletin & Review, 3, 380-384. Anderson, M.C., & Green, C. (2001). Suppressing unwanted memories by executive control. Nature, 410, 131-134. For a short review, see Levy, B.J., & Anderson, M.C. (2002). Inhibitory processes and the control of memory retrieval. Trends in Cognitive Sciences, 6(7), 299-305. All subsequent references will be to these papers, by authors and year.

3 MacLeod, M.D. (2002). Retrieval-induced forgetting in eyewitness memory: forgetting as a consequence of remembering. Applied Cognitive Psychology, 16, 135-149.
4 I know of no published descriptions of replication attempts, but I do know of at least two labs that have attempted to replicate the Anderson and Green suppression findings, and failed to do so.

Democracy, Texas Style

How do you avoid removing language from a bill after an amendment to remove it has passed? Texas Republicans have found a way! You just say that some of the voting machines malfunctioned. At least, that's what Texas House of Representatives speaker Tom Craddick did with HB2, a bill designed to change the way public schools in Texas are funded, after an amendment to remove language that allows "at-risk" schools, schools with the lowest performance in the state, to be run by "for-profit" companies was passed. The vote was 73-70 for the amendment, but not to be denied, Craddick soon announced that four voting machines (all used by Republicans) had malfunctioned, causing their votes to be registered as for the amendment when they had actually voted against it. We may never know whether those four Republicans had actually intended to vote against the amendment or not, but it probably wouldn't have mattered anyway. If Craddick and the House Republicans were that determined to keep the privatization language in the bill, votes were only a minor obstacle.

The Religious Left

There has been a lot of discussion since the presidential election about the role of religion in the Democratic party, due in large part to a belief that values determined the election (despite the fact that the empirical evidence does not support this belief). On one side, people like Amy Sullivan have been arguing that the secular left should do more to embrace the religious left. On the other side, people like PZ Myers have argued that the religious left has been so impotent that embracing them would be counterproductive. Then there's a third position, voiced well by Mike the Mad Biologist (link via Siris) which argues that they're both wrong, because they focus exclusively on the Christian left, and thus ignore the other major religious group that has been important to the Democratic party, the Jewish left.

While I think that the third position makes a very valid and important point which could be made about virtually every discussion of religion and politics, on both the left and the right, namely that the discussants tend to ignore the many differences between various religious groups on both sides, I ultimately don't understand the purpose of the discussion. How, exactly, should we go about embracing the religious left, be it the Christian or Jewish left? Should we make policy concessions, for instance? On the question of abortion, perhaps? Or gay marriage? Or religion in the public sphere (schools, courthouses, etc.)? The answer to this has to be no. We shouldn't compromise our values simply to make the members of religious groups who agree with us on other issues happy.

Thankfully, this doesn't seem to be what Sullivan, at least, is arguing for. She wants the Democratic party to do more than simply give a nod to the religious. And with this, I have no problem, as long as it doesn't involve compromising the values of the secular left. I have no problem with highlighting the fact that there are plenty of Christians who believe that the positions of the religious right are, in fact, anti-Christian, or at least inconsistent with Christian values. I have no problem with a political candidate who is Christian (or Jewish, or Muslim, or Wiccan, for all I care) openly stating this, and making it part of his or her campaign. It would be nice if the religious left, particularly the Christian left, got off its ass and started calling the religious right on its backwards, intolerant, and often inhuman bullshit. If the Democratic party paying attention to the religious left helps them to do this, and thereby gets many Christians who don't vote Democrat to see that they actually hold the values that the religious and secular left share, that's a good thing.

Saturday, March 19, 2005

Are Octopi More Neurotic Than Hyenas? Social Psychologists Investigate

When I took a graduate course in social psychology several years ago (it was required), I had little respect for the area. A friend of mine -- another cognitive graduate student taking the course because it was that or evolutionary psychology -- and I took to calling it "E! Psychology," after the cable network, because we often felt that the emphasis was on flashy results over good science. Social psychologists are even fond of calling particularly flashy effects "sexy." We noted that in social psychology, just about everything is called "the [insert sexy label here] effect," so that I once wondered whether social psychologists came up with the names for their effects, and then went out and looked for something to which they could apply the name. My attitude toward social psychology has improved a great deal since then, even if I still have qualms about some of the area's methods. But every now and then, I happen upon a paper that reminds me why I once thought social psychology was the astrology of the social sciences.

Take, for example, the paper I read last week, titled "Personality Dimensions in Spotted Hyenas (Crocuta crocuta)." Part of a surprisingly large literature on personality traits in nonhuman animals, this paper reports on the study of the personality dimensions of a group of captive spotted hyenas in California. Here's the gist of the study. The author, along with a few experts on hyena behavior, developed a list of potential personality traits, drawing them largely from the literatures on human personality traits and the personality traits that have been observed in other nonhuman animals (mostly primates). After deciding on how to operationalize these traits in terms of the types of behaviors that these traits should produce in hyenas, four observers who were familiar with the individuals being studied spent 26 weeks observing the hyenas and rating, on a 5-point scale, the extent to which each individual exhibited the traits. Then, the author used factor analysis techniques to analyze the ratings and determine the dimensions along which hyena personalities varied.

Before moving on to the results, it should be noted that there was a great deal of variability in interrater reliability across the different traits, with reliability scores (coefficient alphas, for those who are interested) ranging from .05 (indicating almost no correlation between the ratings of the different observers) to .90 (indicating a high correlation between them), with a median reliability of around .7. Now, that much variation, along with a moderate median reliability score, might give some people pause, but this is E!... I mean social psychology, so we can let it slide.
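For readers unfamiliar with coefficient (Cronbach's) alpha, here is a minimal sketch of how it can be computed when raters are treated as the "items": take the variance of each rater's scores across subjects, and compare their sum to the variance of the per-subject totals. The ratings below are invented numbers for illustration, not the hyena data:

```python
def cronbach_alpha(ratings):
    """ratings: list of k lists, one per rater, each with one score per subject.
    Returns alpha = k/(k-1) * (1 - sum of rater variances / variance of totals)."""
    k = len(ratings)
    n = len(ratings[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    rater_vars = sum(var(r) for r in ratings)
    totals = [sum(r[j] for r in ratings) for j in range(n)]  # per-subject sums
    return (k / (k - 1)) * (1 - rater_vars / var(totals))

# four raters scoring six animals on one trait (hypothetical 1-5 ratings)
ratings = [[5, 4, 2, 3, 1, 4],
           [5, 3, 2, 4, 1, 5],
           [4, 4, 1, 3, 2, 4],
           [5, 4, 2, 3, 1, 4]]
alpha = cronbach_alpha(ratings)  # close agreement here, so alpha is high
```

When raters agree closely, the per-subject totals vary much more than any single rater's scores, and alpha approaches 1; raters who disagree push it toward 0 (a .05 alpha, as in the worst hyena traits, means essentially no agreement at all).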

Anyway, back to what's really important, the results (this is an exploratory study, after all, so who really cares about methodological imperfections, however glaring?). Out of the factor analysis came 5 dimensions that accounted for 77% of the variability in the observers' ratings. They were assertiveness, excitability, human-directed agreeableness, sociability, and curiosity. Now, assertiveness was highly correlated with dominance/submissiveness, which plays an important role in hyena social interactions, and human-directed agreeableness may be limited to captive hyenas, so it's hard to know what to make of those two, though the author does argue that the human-directed agreeableness dimension may be a product of underlying personality traits (e.g., sensitivity to social relations) that manifest themselves differently in wild hyenas. The other three dimensions, however, did not correlate with dominance, sex, age, or appearance, thus ruling out the most obvious alternative explanations.

What does all this mean, you ask? The purpose of the study is to provide a comparative framework for studying personality that might provide insight into human personality. The idea is that studying animal personalities, how they develop, the role of socialization, etc., might provide some insight into the development and relative dependence on nature vs. nurture of human personality traits. With that in mind, the author compares hyena personality dimensions to the Big Five personality dimensions, believing that the hyena traits are related to human Extraversion, Agreeableness, and Neuroticism. The comparisons are rough at best, however.

After reading this study, my curiosity was piqued. I had to go out and read more about animal personalities. It turns out, studying hyena personalities isn't really that strange. Other studies have looked at the personalities of everything from chimps and rhesus monkeys to guppies and octopi (see this paper for a review). Who knew guppies had personality? So far, of the Big Five, Extraversion, Agreeableness, Neuroticism, and Openness have been observed in a variety of species, with Conscientiousness only appearing in chimps (and even there, in a fairly rudimentary form).

One more thing before I let you go read those fascinating papers. If you're like me, when you read about personality dimensions in animals that are drawn from (and compared to) human personality traits, the word "anthropomorphic" flashes in bright neon orange in your mind. But the authors of the review linked in the last paragraph have anticipated this flashing mental sign, and provided an argument against it. As evidence that anthropomorphism is not a problem, they cite high interrater reliability in the study of personality traits in a variety of species (hopefully higher, and with less variability, than in the hyena study); the use of careful behavioral observation and "carefully recorded ethological observations," with both types of studies yielding similar results; and the existence of "meaningful differences" between species, indicating that raters are attending to nuances in behavior patterns. I'm not sure how any of these speak directly to the problem of anthropomorphism in trait attribution, given that all the raters are human, and therefore likely to have similar anthropomorphic trait concepts. It's quite possible that they would find consistent between-species differences, since different species display different behaviors, and that they would find similar results with different methodologies (which all ultimately involve attributing human-based traits to animal behaviors). But who cares? Someone's studying octopus personalities, and that's just plain cool.

Wednesday, March 16, 2005

The Basics of the Basic Level

Rosch, Mervis, Gray, Johnson, and Boyes-Braem. Chances are, if you've taken an advanced course in cognitive psychology, you've heard those names, in that order, before, and if you study concepts, they may have been melded into one word for you. Roschmervisgrayjohnsonboyesbraem. I've said it so many times, and so rapidly, that I sometimes forget the Gray is even in there. The paper (Roschmervis[gray]johnsonandboyes-braem, 1976) deserves the fame, too. It's a behemoth, with 12 experiments, in an age when it was still possible to publish papers with only one. The experiments range from developmental studies looking at early category acquisition, to a linguistic analysis of American Sign Language, and an experiment that determines the most general level at which category members elicit common motor programs. And all of them are designed to show one thing: for hierarchically organized (taxonomic) categories, there is a privileged level, the basic level. Since Roschmervis and so on, it's been necessary for any theory of categorization and concepts to deal with basic level phenomena, and that hasn't always been easy.

What is the basic level? Obviously, it's the level between the superordinate and subordinate levels. Duh. Seriously though, it's a level in a conceptual taxonomy somewhere between the most general (e.g., animal, or anything more general than that) and the most specific (e.g., yellow-bellied sapsucker). Where that level is, exactly, depends on the concept. For example, the basic level for some animal concepts is somewhere around the class or order level. Thus, BIRD is a basic level concept. For others, the basic level falls somewhere further down. The basic level for most mammals, for instance, tends to fall somewhere around the family or genus level. There's no good explanation for this difference between birds and mammals, other than that American undergrads (who tend to make up the bulk of the subject pools used for concept experiments) don't have as much experience with mammals. For non-biological concepts, the basic level tends to follow distinctions of function. So, CHAIR is a basic level concept, and so are TABLE, HAMMER, and COMPUTER. At least, that's where we'll find the basic level if we're studying typical American undergraduates under typical conditions. Things start to get much more complicated when we deviate from that.

But before we get into that mess, there are at least two questions that need answering: What's so basic about the basic level, and why? To answer the first question: basic level concepts tend to be the first concepts young children learn1, the first names they use, and they even appear in infancy (in pre-linguistic infants, in fact)2. That's not surprising since it's the level at which parents tend to label concepts when speaking to young children (a canine walking in the park is a "dog," not a "Turkish Wolfhound," when you're talking to an 18 month old). In fact, it's the level at which adults tend to label things in general conversation. It's the most general level at which it's easy to obtain conceptual priming, whether the priming is designed to facilitate visual detection or visual categorization; the most general level at which a common motor program can be used to interact with objects; the level at which people tend to first identify an object when presented with it visually; the last concepts to drop out when we reduce the number of lexical items in a language (e.g., in the movement from spoken English to American Sign Language); the level at which information tends to be remembered over time, regardless of how general or specific the information was at the time of encoding3; and much, much more. That's the easy question to answer. The why question is not so easy.

Since the Roschandabunchofotherpeople paper's publication in 1976, there have been all sorts of attempts to explain the existence and effects of the basic level. To try to describe all of them, along with the arguments for and against them, in a blog post would be... well, if you didn't fall asleep, I would. Rosch et al. explained the basic level effects by referencing cue validity. Basically (pardon the pun), they (and in one way or another, most since) believed that the basic level was the level with the largest combined within-category similarity and between-category distinctiveness. Thus, while the members of the category yellow-bellied sapsucker are more similar to each other than all birds, they are less distinctive from other birds than birds in general are from non-birds, so bird is the basic level. A related early explanation4 concerned the parts of objects. Tversky and Hemenway showed that when asked to list features of concepts, participants listed parts more often for basic level concepts than for either superordinate or subordinate level concepts. They argued that for natural kind concepts, parts are associated with their causal origins, and for artifact concepts, with their functions. Since common origins and functions tend to converge at the basic level, the basic level tends to be defined by the sharing of common parts, with basic level category members sharing a lot of properties, while members of different basic level categories do not.
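Cue validity here is just the conditional probability of the category given a cue, P(category | cue), which can be estimated from feature frequencies. A toy sketch, with made-up feature lists rather than Rosch et al.'s actual norms:

```python
def cue_validity(feature, category, exemplars):
    """exemplars: dict mapping exemplar name -> (category, set of features).
    Returns P(category | feature): the proportion of exemplars possessing
    the feature that belong to the category."""
    having = [cat for (cat, feats) in exemplars.values() if feature in feats]
    if not having:
        return 0.0
    return having.count(category) / len(having)

# illustrative exemplars and features
exemplars = {
    "robin":   ("bird",   {"wings", "feathers", "flies", "alive"}),
    "sparrow": ("bird",   {"wings", "feathers", "flies", "alive"}),
    "bat":     ("mammal", {"wings", "flies", "fur", "alive"}),
    "dog":     ("mammal", {"fur", "four legs", "alive"}),
}

# "feathers" is perfectly diagnostic of bird; "alive" is shared by everything
high = cue_validity("feathers", "bird", exemplars)  # 1.0
low  = cue_validity("alive", "bird", exemplars)     # 0.5
```

On this view, very specific categories (yellow-bellied sapsucker) add little cue validity over bird, while very general ones (animal) have cues that pick out too much, so the summed cue validity of a category's features peaks at the basic level.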

There is no explanation of basic level effects that hasn't been challenged by the results of some experiment or another. Tversky and Hemenway's part-convergence explanation has been called into question by experiments showing that, at least for artificial categories, common parts are not necessary for the existence of basic level effects5. My personal favorite, due largely to my own theoretical biases, involves differences, rather than similarities. Markman and Wisniewski6 showed that what distinguishes categories at the basic level is a higher number of "psychologically relevant" differences. In their experiments, participants listed more "alignable differences" (differences that are part of a common relational structure - e.g., wings vs. arms) for objects from different basic level concepts from the same superordinate category, than they did for subordinate concepts from the same basic level category, or different superordinates. In other words, Markman and Wisniewski showed that the basic level is important because basic level categories differ from other categories at the same level in important ways.

I like this explanation because it fits best with a "theory theory" of concepts, and I think that's important because of one very interesting basic level phenomenon: the shifting of the basic level in experts. In a set of experiments, the basic findings of which have been replicated several times (including in interesting cross-cultural research), Tanaka and Taylor7 showed that experts in a particular domain (e.g., experienced birdwatchers) tend to treat subordinate level categories like non-experts treat basic level categories, in that they are as different from each other as basic-level categories, their names are used as often as basic-level names, and their instances are recognized as rapidly as instances of basic-level categories. Thus, it appears that for experts, the basic level shifts, in their domain of expertise, to what would ordinarily be the subordinate level. Why is this? Clearly, the added domain knowledge has an effect on where the basic level lies. Part-convergence and other traditional cue-validity explanations have a difficult time explaining this effect, but the "alignable difference" explanation described above can handle it easily. As the relevant differences between subordinate-level categories are learned (through, e.g., the acquisition of theoretical knowledge about a domain), people become better able to differentiate subordinate-level concepts, and thus come to treat that level like the basic level.

So, that's the basic level. While I've obviously endorsed the "alignable differences" account of the basic level, there are other explanations out there that can explain the shift of the basic level in experts, along with the other basic level phenomena. There's really no theoretical consensus about why the basic level exists, and how it does what it does. As is often the case in concept research, it's difficult to provide rigorous tests of theories: natural categories, in which taxonomic relations exist fairly straightforwardly, do not provide for a great deal of experimental control, and thus do not allow strong tests of particular theories, while it is damn near impossible to produce a set of artificial categories with a structure sufficiently rich to produce taxonomic relations analogous to those in natural categories. So, we're left to try to rule out explanations rather broadly (e.g., strict similarity views vs. views that focus on differences). But because the basic level is so pervasive and important in conceptual development, learning, and use, any general theory of concepts will have to explain it, and so we have to try to study it as best we can.

1 The phenomena described in this paragraph were first demonstrated in the experiments by Rosch, Mervis, and colleagues, unless otherwise indicated.
2 Pauen, S. (2002). The global-to-basic level shift in infants' categorical thinking: First evidence from a longitudinal study. International Journal of Behavioral Development, 26(6), 492-499.
3 Pansky, A., & Koriat, A. (In Press). Hierarchical memory distortions: The basic-level convergence effect. Psychological Science.
4 Tversky, B., & Hemenway, K. (1984). Objects, parts, and categories. Journal of Experimental Psychology: General, 113(2), 169-197.
5 Murphy, G.L. (1991). Parts in object concepts: Experiments with artificial categories. Memory & Cognition, 19(5), 423-438.
6 Markman, A.B., & Wisniewski, E.J. (1997). Similar and different: The differentiation of the basic level. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(1), 54-70.
7 Tanaka, J.W., & Taylor, M. (1991). Object categories and expertise: Is the basic-level in the eye of the beholder? Cognitive Psychology, 23, 457-482.

Saturday, March 12, 2005

Moving at the Speed of Speech

Richard asks a very interesting question: How fast is the speed of (conscious) thought? The answer to that question is not easy, as there are all sorts of types of information in consciousness. Visual information, for instance, makes its way to consciousness in a very short amount of time (on the order of tens of milliseconds), and transitions as rapidly as we can take new information in. Affective information gets there pretty quickly, too. Limits on the speed of perceptual and affective consciousness are largely due to the limits on transmission and processing speeds. Some types of information (e.g., shapes, edges, etc.) are processed very early on, and very automatically, and are thus processed rapidly. Others, such as complex scenes and perceptions that require top-down influences, are a bit slower. As a general rule, effortful processing is slower than automatic processing.

But what Richard is really interested in is the influence of inner dialogue on the speed of more deliberate conscious thought. Much of our more conceptual (as opposed to perceptual) conscious thought is in verbal form. In fact, there is evidence that information that is difficult to verbalize (like, say, nonlinear, quadratic categorization rules) is not processed consciously*. So, verbalization is important for consciousness, and if much of our conscious thought is processed verbally, then it stands to reason that the speed of verbalization may limit the speed of non-perceptual consciousness.

But, speech itself is primarily processed in perceptual and motor areas of the brain, and at least at the phonetic and syntactic levels, is processed automatically, so it moves very fast. Internal speech moves along at about 4-6 Hz, which, as far as processing goes, isn't the fastest we can go, but it's far from the slowest. The main limit on this speed is motor. As decades of research on verbal working memory (and more recently, a great deal of brain imaging research) have shown, internal verbal information is processed by most of the same regions that process spoken verbal information, including the motor areas. The only real difference between spoken and internal verbal processing is where the information from the motor system gets sent.

So, once we're above the limits of the speed of speech, what really limits the speed of conscious thought is how effortful the processing is, and the extent of our cognitive load, two things that are closely related. In other words, really complex information will slow thought down, and so will consciously processing a lot of information. When I'm thinking about simple things, or talking to myself without consciously deliberating about content, I can think at about the speed of speech, but when I'm reasoning about a complex philosophical problem, or I'm consciously thinking about a lot of things at once, everything slows to a crawl.

* Ironically, while this information is more difficult to learn, once it is learned, we process it automatically, so it's actually processed faster than the simpler, consciously processed information.

The Importance of Mental Names

In a comment to my last post, Clark alerted me to an interesting post at Certain Doubts. In the post is an example that is supposed to bear on whether we can, under some circumstances, be mistaken about the content of our own minds. Here is the example (quoted from the post):
You go to the doctor complaining of an itch. He listens to your complaint, observes the location of the itch, writes down how the problem started, and the details about physical symptoms including duration and intensity of the experience. Then he tells you that he thinks he knows what the problem is. He tells you that it’s not really an itch, but a pain. People have confused these two in the past, but we now have a well-confirmed theory that distinguishes the two in a slightly different way than “the folk” do. The theory has led to two technologies. One is a machine for distinguishing the two underlying states, and the other is medicine for treating the two conditions. Your doctor tells you that one of the medicines will solve the problem if you’re experiencing a pain, but not the itch; and the other medicine will have the alternative results. You insist that you’re experiencing an itch, but he uses the machine and shows you the results: you’re in pain, it says. If you still insist, he’ll give you the itch medicine. You do, and he does; you return two weeks later, still suffering, and ask for the pain medicine. You take it and get well. So you say, “I guess I was wrong. It was a pain, not an itch, after all!”
Clark thinks that the problem in this example has to do with names, and I think he's right. In fact, I think this example presents a very nice analog to Barbara Malt's "water" experiment that I described in the previous post. In that experiment, participants' beliefs about the H2O content of various liquids were not correlated with their use of the name "water" to describe the liquids. Malt uses this result to argue, effectively I think, that people are not using names to refer to essences. In the pain example, the patient refers to "pain" that is not consistent with what the doctor believes to be its essence, namely the presence of certain physical states (presumably composed of patterns of neural firings). Obviously there's an important difference between the two examples: in the Malt experiments, participants' uses of a name were inconsistent with their own beliefs about the essential content of the referents, while in the pain example, the patient's use of a name is inconsistent with the doctor's belief about the essential content of the referent. Also, to even consider the possibility that the patient is mistaken about her mental states, we have to accept a reductionist theory of mental content, which is more difficult than accepting some form of reductionism about the nature of water. Differences aside, though, if the Malt experiment shows that people do not use names to refer to essences, can we conclude from the example that the patient is using the name incorrectly, and is thus mistaken about the content of her mental states? It seems reasonable to conclude that the name "pain" is not co-extensive with the physical essence of pain, just as the name "water" is not co-extensive with H2O. In that case, the example doesn't show that the patient is making a mistake. In fact, until we actually determine the meaning of "pain," as used by the patient, the example doesn't really tell us anything.

In Malt's "water" experiment, it appears that there are two senses (or intensions) for "water," one of which is scientific (water = H2O), and one of which is broader, and used in everyday speech. In fact, to describe Malt's experiment, you have to use both senses: participants' beliefs about the water (in the scientific sense) content of a liquid are not correlated with whether participants call that liquid water (in the everyday sense). I suspect that the same can be said of the "pain" example. The scientific sense of "pain" involves a theory about the physical states that cause pain, while the everyday sense refers to a range of qualitative experiences. That range of qualitative experiences may not be co-extensive with the range of experiences caused by the physical states scientifically referred to as pain.

It could turn out that there really is a one-to-one mapping between brain states and qualitative experiences, and thus that the patient really is mistaken about the content of her mental state. But what would that mean (and why do I feel like J.L. Austin when I ask that?)? How could I possibly have a qualitative experience that mistakes itself? Perhaps a lapse of memory, which causes the patient to believe that the content of her mental state is more similar to past experiences of pain, when in fact it is more similar to past experiences of itching, might cause this. I'd be willing to bet that, given the fuzzy nature of categories, and the ambiguity of some experiences, this is possible. The patient's qualitative experience might be on the experiential border shared by itching and pain, and thus difficult to categorize. However, if it really does feel like pain, I see no reason to say that the patient should be considered wrong in calling it pain. The doctor may need to think of it as pain, in the scientific sense, in order to treat it effectively, but the patient, in order to classify her experience, should label it in a way that is most consistent with her own experience. How would doing otherwise help her?

Tuesday, March 08, 2005

The Importance of Names

What's in a name, for a concept I mean? Cognitive psychologists studying concepts and categorization have, by and large, treated concept names (often called "category labels") as just another kind of feature. I'm not sure there's really been any good reason to do this, other than the fact that the models of categorization that have been most prominent over the years haven't had straightforward ways of dealing with labels as anything other than features. Treating names as just another piece of information about a category probably seems pretty counterintuitive to many outside of cognitive psychology. Concept names receive a lot of attention in philosophy, for instance, and the same is true for other areas of psychology, particularly clinical psychology, where the labels for mental illnesses, and their influences on the reasoning of both patients and therapists, are often the topic of discussion, and in other social sciences, such as sociology with its "labeling theory." But so long as concept researchers didn't have any data from studies of concepts to indicate that labels should be treated differently, few if any saw fit to challenge the equivalence of labels and other features. Of course, since no one was questioning it, there wasn't much of an incentive to go out and look for such data. Even in science, dogmas can be self-perpetuating.

But with the theory theory, and the corresponding psychological essentialism view of concepts, came a recognition that concept names might be important. Since concepts are treated as embedded in a larger knowledge structure, and, according to psychological essentialism, concept names refer to underlying essences (à la Putnam), whether those essences exist or not, it made sense to treat names as unique among category features, if they could be treated as features at all. If this is true, then concept names are of a different kind than other features of concepts.

Concept researchers took heart, and finally went out and sought data about the relationship between names and other features. For example, Yamauchi and Markman1 used the classic classification paradigm, in which participants are given features and asked to infer the category label, along with a variation of that paradigm (called the "inference task," perhaps unwisely, since both tasks involve an inference), in which participants are given the category label and some features, and asked to infer a missing feature. If labels are just like any other features, then participants should perform similarly in both tasks. However, if labels are different from other features, then when participants are given the label, the sorts of inferences they make should be different. Yamauchi and Markman's experiments showed that people do in fact treat the label differently. When the label is present, participants infer category-typical features more often, even when the other features they are given are more similar to the features of contrast categories. For instance, if the prototype of category A is
1 1 1 1
and the prototype of category B is
0 0 0 0
where 1's and 0's represent the value on a particular feature dimension (e.g., 1=tall and 0=short), and participants receive the category label A, along with three features, 0 0 0, they are more likely to infer that the fourth feature will be a 1, even though the three 0's make the instance about which they are making an inference more similar to the prototype of category B. In other words, given the implication of category membership that comes with the presence of the category name, people will infer category-typical features when the label is present, even when all of the other information seems to contradict this inference. This finding is a blow both to similarity-based views in general and to the belief that labels are just ordinary features.
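The contrast can be sketched in a few lines of code. This is a simplified caricature (overlap counting for similarity, and a bare lookup for the label-based prediction), not Yamauchi and Markman's actual procedure or model, but it shows how the two strategies come apart on exactly the stimulus described above:

```python
# Hypothetical sketch: label-based vs. similarity-based inference of a
# missing fourth feature, using the abstract 1/0 prototypes from the text.

PROTOTYPES = {"A": (1, 1, 1, 1), "B": (0, 0, 0, 0)}

def similarity(features, prototype):
    """Count matching feature values (simple overlap similarity)."""
    return sum(f == p for f, p in zip(features, prototype))

def similarity_based_inference(known_features):
    """Ignore the label: find the prototype most similar on the three
    known features and predict its fourth feature value."""
    best = max(PROTOTYPES,
               key=lambda c: similarity(known_features, PROTOTYPES[c][:3]))
    return PROTOTYPES[best][3]

def label_based_inference(label):
    """Use the label: predict the category-typical feature value."""
    return PROTOTYPES[label][3]

# Stimulus: label "A" with features 0 0 0 (more similar to B's prototype).
stimulus = (0, 0, 0)
print(similarity_based_inference(stimulus))  # 0 -- tracks the similar prototype B
print(label_based_inference("A"))            # 1 -- what participants actually infer
```

Participants given the label behave like the second function, not the first, which is what makes the result hard for feature-only models to accommodate.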

In developmental research, similar results have been found. For instance, in one study2, children were presented with information about a person (e.g., she eats a lot of carrots), and then either given a label (e.g., "She is a carrot-eater") or a sentence that confirms that information ("She eats carrots whenever she can"). The results showed that children were much more likely to make inferences about a person when given a label, and that they believed the traits implied by the label were much more stable than when they were attributed to the person without the label. Thus, children think that a carrot-eater is much more likely to be eating carrots at some later date than someone who just eats carrots whenever she can. Once again, it appears that concept names, and their implications of category membership, are influencing the inferences that people will make about instances, and furthermore, that the importance of names develops fairly early in childhood (the children in the study were between 5 and 7).

The Yamauchi and Markman studies used artificial categories that were designed to be like natural kinds (complete with bug-like pictures), and the carrot-eater studies used trait concepts applied to humans. It is reasonable to assume that people are essentialists about natural kinds, and perhaps even human personalities, but what about artifacts? Putnam's "Twin Earth" thought experiment, and the original claims of psychological essentialism, are difficult to apply to artifacts. Water can be said to have an essence (its chemical composition, H2O), but what is the essence of a chair? If artifacts don't have essences, are artifact labels more like the labels of natural kinds and human traits, or are they more like other artifact features? Paul Bloom3 has recently argued that psychological essentialism does apply to artifacts: people treat the intentions of the author (or creator) as the essence of artifacts. This explains why certain objects are treated as members of artifact categories despite being highly dissimilar from other members of that category (e.g., bean bag chairs). Conducting experiments similar to the Yamauchi and Markman and carrot-eater studies on artifacts might help to confirm Bloom's view.

Or maybe not. Do labels really refer to essences? A recent review of the developmental literature has shown that concept names have much more influence on children's reasoning about artifacts than about natural kinds, with human traits falling somewhere in between4. This is the opposite of what we would expect from traditional essentialist theories. Furthermore, in a now infamous study, Barbara Malt has shown that people's use of the name "water" is not consistent with their essentialist intuitions about water5. She asked people about their beliefs about the amount of water in different kinds of liquids, and showed that such beliefs were not correlated with their use of the label "water." For instance, tea and lemonade were believed to have a much higher water (as in H2O) content than a natural lake, but the liquid in lakes is referred to as water, while lemonade and tea are referred to as, well, "lemonade" and "tea." If this is the case, then psychological essentialism may not explain the importance of names, and the importance of names in artifacts may not provide evidence of psychological essentialism about artifacts.

What are names, then? Clearly they are important, but why? What information do they carry with them that other features do not? If labels do not refer to underlying essences, how do we explain their deep connection to category membership, as demonstrated in the inference and carrot-eater studies? My own suspicion is that names are closely connected to the position that a concept takes in a larger relational system. Thus, names not only provide information about typical features and essences (such as function for artifacts and chemical composition or genetic makeup for natural kinds), but also how a concept differs from related concepts, and how it fits into our knowledge base. I think of names as sort of like suitcases that are held in the hands of relations. When names are used, the suitcases can be unpacked, and out will flow all sorts of relational information. Many names may even be empty suitcases, telling us little more than where a concept is situated in a relational system. It is in these cases that people hold strong essentialist intuitions, but when queried, are unable to express what the essences of such concepts may be. Such appears to be the case for concepts like GAME, for instance. The position of the suitcase itself is all of the information we really have about the concept. Any inferences we make will have to come from that. Of course, there's no direct empirical evidence for my view, but there's not much direct empirical evidence for other views of names either, since the essentialist theory of names has been called into question. So, mine's as good as anyone else's, from an empirical standpoint.

1 Yamauchi, T., & Markman, A.B. (2000). Inference using categories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 776-795.
2 Gelman, S.A., & Heyman, G.D. (1999). Carrot-eaters and creature believers: The effects of lexicalization on children's inferences about social categories. Psychological Science, 10(6), 489-493.
3 Bloom, P. (1996). Intention, history, and artifact concepts. Cognition, 60(1), 1-29.
4 Diesendruck, G. (2003). Categories for names or names for categories? The interplay between domain-specific conceptual structure and language. Language and Cognitive Processes, 18(5-6), 759-787.
5Malt, B.C. (1994). Water is not H2O. Cognitive Psychology, 27, 41-70.

Monday, March 07, 2005

Google Gives Us Meaning

Computer scientists have long struggled with the problem of the sheer amount of world knowledge that is needed to allow a computer to reason about even the simplest concepts effectively. Imagine a simple novel sentence, like the following:
Anne crutched the apple to Steve.
Given a little context, humans can understand this sentence very easily, because we can use our knowledge of crutches, apples, and our folk theories of physics to help us interpret it. How to get a computer to understand this, however, is one hell of a problem. The difficulty lies in getting all of that world knowledge in, and making it accessible in the right contexts. The second problem may be the easier one, but the first one has always seemed damn near intractable, and until you solve it, the second one has to wait.

Enter Google, and two computer scientists who had the insight that its vast network of interconnections might serve as just the sort of knowledge base that computers need to begin to look, well, like us. They basically measure how many hits a word gets in a Google search, along with how many hits it gets when combined with another word, and use that to compute a score that measures the similarity between words. Out of this, semantic knowledge emerges. Or at least, that's the claim.
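The score they compute, the normalized Google distance (NGD), has a simple closed form built from those hit counts. Here is a minimal sketch of the formula as given in the paper; the specific hit counts below are invented for illustration, not taken from any real search:

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance (Cilibrasi & Vitanyi).
    fx, fy -- hit counts for each term searched alone
    fxy    -- hit count for the two terms searched together
    n      -- total number of pages indexed
    Smaller values mean the terms co-occur more than chance predicts."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Made-up hit counts: a strongly related pair ("horse"/"rider") vs. a
# weakly related pair ("horse"/"quantum") against a made-up index size.
N = 8_058_044_651
related = ngd(46_700_000, 12_200_000, 2_630_000, N)
unrelated = ngd(46_700_000, 9_800_000, 12_000, N)
print(related < unrelated)  # True -- related terms get a smaller distance
```

Note that the formula is scale-free: because it is a ratio of log differences, the base of the logarithm doesn't matter, and absolute hit counts matter only relative to the index size.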

The idea that the interconnections between words in a corpus are somehow related to their meaning, and thus knowledge, is not new. Latent Semantic Analysis (LSA) has been around since the 1990s. It analyzes a large corpus of text, and looks at how often two words co-occur (and how close together they are when they do), along with how often they occur without each other, and uses this score to place the words in a high-dimensional space. Similarity relations between different words, as measured by their distances from each other in that space, can then be used to approximate meaning. LSA is very powerful, in that it simulates human performance on many semantic tasks, such as lexical priming, word sorting, categorization, and even verbal SAT questions.
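The core machinery of LSA can be sketched in a few lines: build a term-by-document count matrix, reduce it with a truncated singular value decomposition, and compare terms by cosine similarity in the reduced space. The toy corpus below is invented (real LSA runs on thousands of documents and applies log-entropy weighting before the SVD), but it shows how same-topic terms end up close together:

```python
import numpy as np

# Toy term-by-document count matrix (rows: terms, columns: documents).
terms = ["doctor", "nurse", "hospital", "apple", "orange"]
X = np.array([
    [2, 1, 0, 0],   # doctor
    [1, 2, 0, 0],   # nurse
    [1, 1, 0, 0],   # hospital
    [0, 0, 2, 1],   # apple
    [0, 0, 1, 2],   # orange
], dtype=float)

# Truncated SVD: keep only the top k latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vectors = U[:, :k] * s[:k]   # each row: one term in the latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

idx = {t: j for j, t in enumerate(terms)}
# Same-topic terms are far closer than cross-topic terms.
print(cosine(term_vectors[idx["doctor"]], term_vectors[idx["nurse"]]) >
      cosine(term_vectors[idx["doctor"]], term_vectors[idx["apple"]]))  # True
```

The dimension reduction is what does the interesting work: it lets two words that never co-occur directly still end up near each other if they occur with the same other words.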

What's new with the use of Google is the access to an unprecedentedly vast source of co-occurrence information: the entire internet. The technical details are pretty, well, technical, and I have to admit I am not qualified to critically evaluate most of the paper (linked above, and again below). When I read the words Kolmogorov complexity, my eyes tend to roll up into the back of my head, and I lose consciousness for entire sections of AI papers. Much of the work in this paper is built around Kolmogorov complexity. However, despite my inability to comprehend everything in the paper, I think I've grasped the basics (it's enough like LSA, with which I am intimately familiar, for me to get the gist). Still, I think it's incredibly cool. Especially when we consider the experiments they ran.

In one of the experiments, they Googled the names of paintings by three 17th-century Dutch painters, Rembrandt, Steen, and Bol. Using their normalized Google distance calculation, the computer was able to sort the paintings roughly by painter (see the figure below, and see the paper for a description of the method for deriving the tree), without being given any information about who painted what. They also used the method to learn the meaning of electrical terms and words like "religious," to sort color and number names, to distinguish emergencies from "near emergencies," and to distinguish prime numbers from non-prime numbers.
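The tree-building step works from nothing but pairwise distances. The paper uses its own quartet-based tree method, but the flavor of the result can be sketched with plain single-linkage agglomerative clustering; the painting names and distance values below are invented placeholders, not numbers from the paper:

```python
# Hypothetical sketch: grouping "paintings" from a pairwise distance
# matrix, standing in for the tree derived from normalized Google distances.
paintings = ["Rembrandt1", "Rembrandt2", "Steen1", "Steen2"]
dist = {
    ("Rembrandt1", "Rembrandt2"): 0.20,
    ("Rembrandt1", "Steen1"): 0.70,
    ("Rembrandt1", "Steen2"): 0.80,
    ("Rembrandt2", "Steen1"): 0.75,
    ("Rembrandt2", "Steen2"): 0.70,
    ("Steen1", "Steen2"): 0.25,
}

def d(a, b):
    return dist[(a, b)] if (a, b) in dist else dist[(b, a)]

def single_linkage(items):
    """Repeatedly merge the two clusters whose closest members are closest."""
    clusters = [frozenset([x]) for x in items]
    merges = []
    while len(clusters) > 1:
        c1, c2 = min(
            ((a, b) for j, a in enumerate(clusters) for b in clusters[j + 1:]),
            key=lambda p: min(d(x, y) for x in p[0] for y in p[1]),
        )
        clusters = [c for c in clusters if c not in (c1, c2)] + [c1 | c2]
        merges.append(c1 | c2)
    return merges

merges = single_linkage(paintings)
# The first two merges recover the painter groupings.
print(sorted(merges[0]))  # ['Rembrandt1', 'Rembrandt2']
print(sorted(merges[1]))  # ['Steen1', 'Steen2']
```

The point of the sketch is that nothing here knows who painted what; the grouping falls out of the distances alone, which is exactly the paper's claim about the Google-derived distances.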

[Figure: the cluster tree of painting names derived from normalized Google distances, with the paintings grouped roughly by painter.]

Whether anyone will ever be able to use this method to get a computer to understand "Anne crutched the apple to Steve," I do not know. However, given the size of the knowledge base that this method uses, it's probably the best method anyone's come up with so far, and if it turns out to be successful on some simpler problems, I don't doubt that AI researchers will improve upon it and develop programs that can use it to do all sorts of nifty things.
Once again, the paper, titled "Automatic Meaning Discovery Using Google," and written by Rudi Cilibrasi and Paul Vitanyi, is here.

Sunday, March 06, 2005

I Don't Want to See Anymore

As an academic, I have spent a lot of time hiding away in the ivory tower, oblivious to the larger world around me. As a graduate student, especially, I had almost no time to pay any attention to what non-scientists were saying about cognitive science. However, on a fateful day in early 2004, I chose to crawl out of my hole and actually look at what other people were saying. I started reading blogs. And now I want to crawl back in!

You see, I was aware that there were some scientists writing well-written, but conceptually poor books on cognitive science (I won't name names, but let's just say that the most prominent such scientist is a little more rose-colored, by name), and judging from conversations with students, as well as those books' placement on the Barnes and Noble shelves, I knew that people were reading them, people who probably wouldn't have the requisite background knowledge to determine that these books didn't represent sound cognitive science. I even knew that linguists had a hard time justifying their work to the general public (Ray Jackendoff told me this in the introduction to one of his books), and that it wasn't always easy to explain cognitive scientific concepts, and their importance, to my grandmother. I never knew just how misinformed many people out in the world were about cognitive science, though. Blogs have been one hell of an education.

Recently it seems to have gotten worse. That could be because the scales have fallen from my eyes and I am now able to see (in cog sci talk, I've activated the relevant schemas, making the perception of previously knowledge-inconsistent and perhaps even irrelevant information available), or it could be because cognitive science is getting more and more attention from non-experts. There was Todd Zywicki's kindergartener-like inability to grasp even the simplest concepts related to implicit attitudes, consciousness and the cognitive unconscious, or the various implicit tests. Then there was Will Wilkerson's attempt to sketch out the implications of Evolutionary Psychology for political thought, despite the fact that it's all-too-apparent that Wilkerson has never really read any Evolutionary Psychology (short of, perhaps, a couple of trade books), along with this strange attempt to defend Wilkerson's essay. Finally, there was this shot at the "mirror test" (sometimes called the "mark test") of self-consciousness, which repeats an alternative explanation that was ruled out years ago, a fact that is noted in pretty much every paper on the "mirror test." There have been other examples, but these are three of the most recent, and at least in the first two cases, most egregious misrepresentations, misuses, or just plain misses on topics of cognitive science in the blogosphere that I have read.

Like I said, I want to crawl back into my hole deep inside the Ivory Tower. But I won't. Instead, I'll offer some advice to anyone who wants to talk about cognitive science, but has not spent a lot of time studying it. This advice isn't unique to the cognitive sciences; you'd probably hear it from any scholar, or at least any scientist, in any field. Naturally, I don't expect anyone to follow it, but I feel like if I'm going to stay out here, I have to actually try.
  1. Read before you write. Do not write about any scientific idea that you have not read about extensively. Extensively. I particularly hope that Zywicki and Wilkerson take this to heart. It's the best way to avoid making yourself look like a fool, or worse, and it also avoids pissing off people like me. I'm not sure the last one is really a motivation for reading before writing (it might actually be a motivation to do the opposite), but I imagine most people do want to avoid looking like fools.
  2. When it comes to science, do not go by what you read in the popular press. The popular press generally doesn't know what it's talking about, either. It may be a good place to discover interesting research, but before you write about it, go read the primary sources. You'll find that at least as often as not, the presentation of scientific work in the popular press barely resembles the presentation of that same work in peer-reviewed journals, and that's not just because the latter is more technical.
  3. Again, when it comes to science, do not go by what you read in books, especially books that weren't written for experts. You can say anything in books, and people do. I have read books on cognitive science that sounded to me like they were written in a parallel universe in which the findings in the field were entirely different than they actually were. Even when they describe the empirical research correctly, book authors may interpret it in ways that no one else in the field would. Often, people write books because they want to express ideas that wouldn't see the light of day in the peer-reviewed literature. This can be a good thing, in that it allows the more speculative ideas of scientists to be expressed, and thus influence the direction of the science (if they're worth considering), but in most cases, it just means you're going to get shitty ideas.
  4. Given 2 and 3, you should probably know what you have to read -- the peer-reviewed literature. If you're going to write about, say, the neuroscientific research on moral reasoning, don't, I repeat do not, read a book by a journalist with no training in EP, cognitive psychology, or neuroscience, on a related topic, and think that you are now qualified to make political arguments on the basis of that research. Instead, look at the journalist's citations, go to your local university library, read those articles, and the articles that they cite. After this, search some article databases (you could even use Google Scholar) for papers that aren't cited in the ones you've read, because there may be research that contradicts what you've read, but which you haven't seen cited. Scientists aren't above forgetting some recalcitrant data now and then, even when it's been published. After you've done this, maybe you can start to think about what you want to say on the subject.
  5. Before you publish something in the press, official reports, or even popular blogs, consult an expert or two. Scholars are generally willing to look over pieces that will be representing their work, or work in their field, to the general public. Hell, if you're writing about cognitive science for a blog, feel free to send me a draft. I'd be happy to look over it, and if I don't feel qualified to evaluate it, I'll send it to or recommend someone who is. I wouldn't recommend sending me posts about EP, of course, because I'll just tell you EP is shit, and you shouldn't write about it. Anything else is OK though.
Heeding these five pieces of advice should be enough to keep you from writing something stupid, and from making me want to crawl back into my hole. If everyone followed these guidelines when writing about fields in which they are not experts, maybe the public wouldn't have such a god-awful understanding of the sciences.

UPDATE: Mark Liberman responds to this post at Language Log. I agree with his comments on open access wholeheartedly. In fact, open access would make following my advice much easier. The only point with which I really disagree is the one about popular science books. I genuinely dislike them, at least as anything more than introductions to fields and ideas. If that's all you're looking for (and I readily admit that for some fields, like say physics or paleontology, that's all I am looking for), then they're probably OK, but if you want to delve deeper, attempt to draw inferences that go beyond the text, or (and this is the biggie) represent to the public the field you've been introduced to, then they should not be relied upon. In fact, given the ways in which our initial exposure to concepts influences our later representations of those concepts, and given that there are so many misrepresentations in so many science books, if you plan on doing anything more serious, or more public, than writing about a scholarly idea on a podunk blog like this one, you'd probably do well to avoid trade books altogether.

Also, in response to a couple commenters, I should note that I don't expect every blogger to follow my advice, but I do expect well-read bloggers, especially those who are experts in some other field, and therefore treated as authorities by the throngs, to do so. Whether they like it or not, these bloggers influence public perception, even when they don't know what they're talking about. I believe that influence comes with a responsibility to at least make an effort to get it right. You don't have to agree with what the experts say, but you at least have to know what it is that they say, if you're going to comment on it.