Sunday, October 31, 2004

Metaphor I: A Brief History of Metaphors in Cognitive Science

From Aristotle through speech act theories, metaphor was viewed as a secondary type of language, built on literal speech, which was taken to be the true nature of language. Since the 1970s, however, cognitive scientists have become increasingly convinced that metaphor is not only central to thought, something that Aristotle would admit, but also a central aspect of language, no less privileged than literal language. Metaphors are processed as quickly, and as automatically, as literal language. In addition, metaphors, while generally literally false, are difficult to judge as literally false. This new status has earned metaphor a great deal of attention among cognitive scientists, and a wealth of theories and models. It would be prohibitively time-consuming (and unthinkably pointless) to detail each of these models along with the evidence for them, so instead I will focus on two: the structure mapping model of Gentner and colleagues, and the attributive categorization model of Glucksberg and Keysar. These models are the most prominent, and the most empirically viable. Furthermore, they capture two diametrically opposed (at least ostensibly) ways of viewing metaphor. Also, I refuse to talk about Lakoff and cognitive linguistics until after the election, which means that cognitive linguistics theories of metaphor are off the table. Before I get to the current theories, though, I'll give a very brief history of cognitive views of metaphor.

Originally, almost all theories of metaphor were two-process theories, in which metaphorical statements were first processed as literal statements, and only when it was discovered that they did not work as literal statements were they processed as metaphors. This view was already on its way out in 1982, when a classic experiment by Glucksberg et al.1 made it completely untenable. In their experiment, participants read three types of class-inclusion statements. The statements were either literally true ("Some birds are robins"), literally false and "anomalous" (i.e., they couldn't easily be interpreted metaphorically, as in "Some birds are apples"), or metaphorical ("Some lawyers are sharks"). Participants were asked to judge whether the statements were literally true, and their reaction time in making this judgment was measured. Participants were quick to judge literally true statements as true, and literally false and anomalous statements as false, but they were significantly slower to rate metaphorical statements as literally false. This was interpreted to mean that participants were automatically interpreting the metaphorical statements metaphorically, and that a subsequent literal interpretation required more time. This rules out the view that statements must be interpreted literally first, prior to any metaphorical interpretation. If that were the case, participants should have noticed that the metaphorical statements were literally false in about the same time it took them to notice the falsity of the anomalous statements.

Cognitive scientific views of metaphor that do not treat it as less privileged than literal language began to surface in the 1970s, and since then almost all of them have arisen out of the work of Max Black. Black2 discussed several different cognitive theories of metaphor, the most prominent of which were the substitution, simile, and interactive theories. In the substitution theory, the cognitive processing of metaphor involves substituting a property of the vehicle for the vehicle itself. So, "My surgeon is a butcher" becomes "My surgeon is sloppy," and "My lawyer is a shark" becomes "My lawyer is aggressive." The simile theory is similar, in that it views metaphors as highlighting properties of the vehicle that can be attributed to the topic. However, under this view, metaphors are essentially replaced with a corresponding simile (shades of Aristotle). "My surgeon is a butcher" becomes "My surgeon is like a butcher," and the comparison highlights the pragmatically relevant common attributes. Finally, the interactive theory again involves a comparison between the vehicle and topic, but in this case, the properties highlighted by the comparison are determined by the interaction of the topic and vehicle.

In general, the interactive theory of metaphor has been the most influential of Black's various theories, and is ultimately the one he himself accepts. However, it suffers from a glaring problem: it's extremely abstract. I wish I could tell you that it is more detailed than my short description in the preceding paragraph, but it's really not. How do the topic and vehicle interact? How are the relevant properties selected? These questions would need to be answered before we could claim that the interactive account is sufficient as a cognitive theory. To rectify these defects, several theories based on the interactive theory have been proposed over the last few decades. The first of these, called the salience imbalance theory3, solves the problem of feature selection by positing that metaphors involve comparisons between topics and vehicles that exhibit an imbalance in the salience of the properties the creator of the metaphor wishes to highlight. For instance, sharks exemplify aggression in a much more salient way than lawyers do, and by making the comparison in the metaphor "My lawyer is a shark," the salience of the "aggressive" feature in sharks serves to highlight that feature in my lawyer. One of the primary motivations for this theory is the asymmetry of metaphorical comparisons: "My lawyer is a shark" is a much more acceptable, and more powerful, statement (in most contexts) than "The shark is a lawyer." This theory, and many subsequent theories, have been motivated by a desire to explain this inherent asymmetry in all metaphors.

The salience imbalance theory, while an improvement on the interactive theory, still suffers from many problems. It doesn't detail how the comparison takes place, and its explanation of how properties are highlighted is still somewhat abstract. After the salience imbalance approach, theories of metaphor split into three different types, all still roughly based on Black's work, but with different solutions to the problems of Black's and Ortony's theories. The first type treats metaphor as an instance of analogy, or more accurately, as using the same processes as analogy. Under this view, called structure mapping, metaphor involves mappings between the topic and vehicle. Thus, it is also a comparison view of metaphor, but one with a much more sophisticated comparison process than either the interactive or salience imbalance theories. The second type treats metaphors as categorization statements. Specifically, under the attributive categorization theory, metaphors are treated as class-inclusion statements, which involve placing the topic into a category defined by a feature or features exemplified by the vehicle. For example, "My lawyer is a shark" involves placing "My lawyer" in the category of "aggressive things," of which "shark" is a highly salient or typical member. The final widely used type of cognitive theory of metaphor comes out of cognitive linguistics, and is based either on Lakoff's conceptual metaphor theory or on blending. However, talking about this will have to wait until after the election, because I don't want to type the name "Lakoff" again until then.
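The contrast between the comparison and categorization views can be caricatured in a few lines of Python. This is a toy sketch of my own, with invented feature sets; neither model actually operates over flat feature lists like this, but it shows where the two views locate the work.

```python
# Toy sketch of two views of "My lawyer is a shark".
# All feature sets are invented for illustration.

shark = {"aggressive", "predatory", "has_fins", "swims"}
lawyer = {"aggressive", "argues_cases", "wears_suits"}

# Comparison views (interactive, salience imbalance, and, in spirit,
# structure mapping): the metaphor highlights properties that the
# topic and vehicle share.
shared = lawyer & shark
print(shared)  # {'aggressive'}

# Attributive categorization: "shark" names an ad-hoc category defined
# by its most salient features, and the topic is placed in that category.
shark_category = {"aggressive", "predatory"}  # "shark" used as a category
lawyer_in_category = bool(lawyer & shark_category)
print(lawyer_in_category)  # True
```

On the comparison view the shared features fall out of the match itself; on the categorization view the vehicle contributes a ready-made category, and the topic is simply asserted to belong to it.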

In the next post, I'll describe the structure mapping view of metaphors in more detail. This theory is very interesting because it comes up with a way to classify different types of metaphors based on the sorts of arguments that are mapped between the topic and vehicle. After that, I'll discuss the attributive categorization model in a third post. After that, who knows what will happen?

1 Glucksberg, S., Gildea, P., & Bookin, H. (1982). On understanding non-literal speech: Can people ignore metaphors? Journal of Verbal Learning and Verbal Behavior, 21, 85-98.
2 Black, M. (1955). Metaphor. Proceedings of the Aristotelian Society, 55, 273-294; Black, M. (1962). Models and Metaphors. Ithaca, NY: Cornell University Press; and Black, M. (1979). More about metaphor. In A. Ortony (Ed.), Metaphor and Thought (pp. 19-43). Cambridge, England: Cambridge University Press.
3 Ortony, A. (1979). Beyond literal similarity. Psychological Review, 86, 161-180; and Ortony, A. (1993). The role of similarity in similes and metaphors. In A. Ortony (Ed.), Metaphor and Thought, 2nd Ed. (pp. 342-356). Cambridge, England: Cambridge University Press.

Saturday, October 30, 2004

The First (Sort Of) Request: Memory and Epistemology

In a comment, Clark Goble writes:

I've always wondered about the role of memory in epistemology. It seems that whenever we talk about rational reasons they are always dependent upon our memory. Even in a math equation, we assume we recall the early steps correctly. Memory is what allows us to transcend the temporal limits of reasoning. Yet, so far as I'm aware, it never gets brought up in epistemological articles. (Or at least I've missed most of them) The role of false memories seems to render a lot of epistemology quite problematic and at a minimum suggests that knowing we know doesn't follow naturally from knowing. While I may be completely wrong, I have this gut feeling that it ought to lead one to externalist accounts as well.

Now I'm no expert in epistemology, and I don't read a lot of the contemporary epistemology literature (I find it kind of boring, to be honest), but as I'm the foremost authority on epistemology among the authors of this blog, I will take it upon myself to post on it anyway. I do know of a few discussions of memory in the context of epistemology. One is online, here. It takes an internalist position, but there is a lengthy discussion of both externalist and internalist solutions to the fallibility of memory. Audi has a chapter on memory in his Epistemology: A Contemporary Introduction to the Theory of Knowledge, and I think a few other epistemology textbooks have chapters on memory as well.

Otherwise, memory has been a hot topic in philosophy for a long time. Aristotle discussed it, as did Hume (Section III here and Sections IV and V here) and the other British empiricists. In the 20th century, Russell talked about memory often, including some great passages in The Analysis of Mind; the logical positivists talked about it fairly frequently (Ayer has a lengthy discussion of memory, and statements about the past in particular, in The Problem of Knowledge); and C. I. Lewis has an excellent chapter in An Analysis of Knowledge and Valuation in which he argues that the fallibility of memory demands a coherentist account of knowledge. Here is what Lewis writes about the implications of the potential for memory errors:

When the whole range of empirical beliefs is taken into account, all of them more or less dependent on memorial knowledge, we find that those which are most credible can be assured by their mutual support, or as we shall put it, by their congruence.

Of course, phenomenologists talk about memory a lot, as well. I'm sure Clark knows of Bergson's Matter and Memory, and (inspired by Bergson) Merleau-Ponty spends a lot of time talking about memory (as does Deleuze, at least when talking about Bergson). There was also a special issue of Philosophy and Phenomenological Research in the early 80s (82 or 83) that contained several excellent papers on memory. One of the papers in that issue is a brief review of the philosophy of memory over the last few millennia.

The philosopher who probably paid more attention to memory than any other in the last half century was Norman Malcolm. Much of his work on memory (e.g., in Memory and Mind) involves a Wittgensteinian critique of representationalist, or trace, theories of memory. In particular, he argues that the existence of "enduring representations" doesn't fit with our experience of direct access to memory, and creates the need for homunculi. He also talks about "false memories," though he doesn't call them that, and argues that memory is like knowledge, and therefore can't be mistaken. Any case that we might call "false memory" doesn't involve memory at all, or if it does, it involves a correct memory of a past impression, which is what was actually mistaken. Here's an example Malcolm gives in "Memory and the Past" (which appeared in The Monist in 1963, vol. 47):

Someone could point at a man in plain sight and say, "I met him last week." The event he refers to is meeting-that-man-last-week. His memory is wrong, let us suppose, because he and the man pointed at were in different parts of the world last week. His erroneous memory does not presuppose some correct memory of the event referred to, for it did not take place. Still, his memory might be partly correct, for it might be that he remembered meeting this man but is wrong about when it happened. Or it could be that he had never met this particular man, but had met one who could easily be mistaken for him. Correct memory would here be mixed in with incorrect memory. Another possibility would be that previously he had dreamt of meeting this man, or had hallucinated it, or had formed in some other way an erroneous impression of having met him. But if his present belief was based on a previous false impression, then the present belief would not involve an error of memory: the error would be in the original impression.

I disagree with Malcolm's anti-trace arguments, in part because I think that the reconstructionist view of memory which is popular in cognitive psychology now does away with the problems he discusses. Under the reconstructionist view, which I share, memories do involve enduring representations, but remembering involves reconstructing those representations on-line, and this reconstruction is influenced by the structure of the representation as well as the context in which the remembering takes place. In this case, no homunculus is needed, as the present context is doing the work that the homunculus would have to do in older representationalist theories.

On the subject of false memories, Malcolm makes an interesting point, though research shows that what is actually happening is that memory mistakes the context of the original impression; the original impression itself is often not mistaken. For instance, during a therapy session involving suggestive visualization techniques, the visualizations may be interpreted either as products of the imagination or as memories of past events, but later memories of those visualizations may mistakenly involve the belief that they are memories of actual events, rather than of a visualization exercise.


Moving on, I do think Clark is correct that the reconstructive nature of memory, as well as its fallibility, can pose problems for theories of knowledge which require that we have evidence to justify a belief. In most cases, that evidence will not be present at the time we hold the belief, except in the form of memories. I think epistemologists recognize this, though I'm not sure they're aware of much of the memory research of the last 30 years. For instance, research on the remember-know distinction (and similar distinctions, e.g., remember, know, and feel) has shown that people have qualitatively different types of memory experiences, and these types relate to accuracy -- when people feel they know something happened in the past, they tend to be correct more often than when they remember or feel that it happened. Also, research on the effect of background knowledge on memories for particular events has shown over and over again that background knowledge tends to intrude on memories of related events, causing us to falsely remember information from our background knowledge as part of an event, when it was not in fact present. I suspect that these sorts of findings have implications for epistemological issues, but I'm not exactly sure what those implications are.

Here's another interesting finding from memory research that might have epistemological implications. In a classic experiment, Pichert and Anderson1 asked participants to read a story in which a house was described. The participants were told to read the story from one of two perspectives, either a potential home buyer or a burglar. After a delay, participants were asked to recall as much as they could about the story. During this first recall session, participants recalled significantly more information about the house that was relevant to their perspective (e.g., the potential home buyer might remember defects in the house, and burglars might remember information about the entrances and exits) than information that was relevant to the other perspective, but not theirs. After the first recall session, participants were told to think about the story again, but this time, from the other perspective (potential home buyers were now told to be burglars, and vice versa). Then, without reading the story again, they were told to recall as much as they could about the story again. During this second recall, participants were able to recall information about the house that was relevant to their new perspective, but which they had not recalled before. This result shows two things: 1.) The information that was irrelevant to their original perspective (schema) was encoded and 2.) This information was not accessible unless a relevant perspective (schema) was activated.
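The Pichert and Anderson result can be illustrated with a toy model. This is my own sketch, with invented story details; the actual experiment used a full narrative and free recall, not facts tagged by relevance.

```python
# Toy sketch of the Pichert and Anderson finding: every detail of the
# story is encoded, but retrieval is gated by the currently active
# perspective (schema). Story details are invented for illustration.

story = {
    "leaky roof": "buyer",                 # relevant to a potential home buyer
    "damp basement": "buyer",
    "side door left unlocked": "burglar",  # relevant to a burglar
    "rare coin collection": "burglar",
}

def recall(active_schema):
    # Only details relevant to the active schema are accessible,
    # even though all of them were encoded at reading time.
    return [detail for detail, schema in story.items() if schema == active_schema]

print(recall("buyer"))    # ['leaky roof', 'damp basement']
print(recall("burglar"))  # ['side door left unlocked', 'rare coin collection']
```

Switching the argument to `recall` is the analogue of the perspective switch between the two recall sessions: the "burglar" details were in memory all along, but only became accessible once that schema was activated.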

Now back to epistemology. Imagine a person has a belief, P, and has encoded the information, S, that would justify this belief, but because she has not activated the schema that would allow her to access S, she is not aware of this information. If asked to justify P, she could not access S, and therefore would not be able to do so. Does her belief constitute knowledge, even though she is (consciously) unaware of its justification residing in her memory? If it doesn't constitute knowledge, would it suddenly do so if she were to activate the relevant schema/perspective, and gain access to S?

There you have it, my response to my first request. If that wasn't rambling enough to dissuade anyone from ever asking me about epistemology again, I don't know what would be.

1 Pichert, J. W., & Anderson, R. C. (1977). Taking different perspectives on a story. Journal of Educational Psychology, 69, 309-315.

Friday, October 29, 2004

Down with Politics, Up with Cognition!

I am so thoroughly annoyed/frustrated/bored/disenchanted with politics at this point, with the election only 4 days away, that I can no longer even read political posts. Voter fraud? I don't want to hear about it. Missing explosives? I won't read about it. A bin Laden video? Seriously, who cares? Not I, that's for sure. So, since I won't be reading about it, I sure as hell won't be posting about it. Instead, I'll post about cognition up until the election. So, if anyone who happens by has any issues in cognitive science, psychology (including cognitive, social, or clinical), or some other related field that they would like to read about, let me know in comments or email, and I'll post on it. I'm not getting many visitors right now, so I may end up just posting about what interests me in those fields, but I'm up for suggestions. I'll list some potential topics below, in case something strikes a reader's fancy.

  • Memory, including:
    • False memories/repressed and recovered memories
    • Schematic memory
    • Working memory, including its role in other cognitive processes (e.g., reasoning, analogy, or decision making)
  • Decision making, particularly:
    • Goals: what are they, and how do they influence preference
    • Intertemporal choice
    • The role of analogy in decision making
  • Analogy
    • Models of analogy
    • Analogy and memory
    • Analogical reasoning
    • Analogy and metaphor
    • Analogy and similarity
  • Language
    • Language evolution
    • Language and thought/linguistic determinism/Sapir-Whorf
    • Figurative language (metaphor, idioms, metonymy, etc.)
    • Polysemy
  • Concepts and Categorization
    • Theories of concepts
    • Conceptual combination
    • Types of concepts
  • Reason and Emotion

  • Consciousness

  • Evolutionary Psychology

  • Embodied Cognition

  • Knowledge Representation (in particular, the status of "representations" in cognitive science)

  • The big debates (nature v. nurture, rule-based v. similarity-based, computation v. something else, symbolic vs. connectionist, etc.)

Birthday Post

This week, we celebrated my son's seventh birthday. That calls for a picture.


I remember when he was only 7 pounds 8 ounces, and 21 inches. Now he's 51 pounds, 4 feet tall, and a first grader. Seven years can go by really fast.

How Unclever People Can Use Analogies Cleverly

Analogies are powerful reasoning and rhetorical tools, often capable of convincing even when they lack substance or justification. Simply making a comparison can create inferences that are otherwise unlicensed. Here are two examples, from a recent post by Keith Burgess-Jackson:

  1. The unfairness of denying gay couples the right to marriage is like the unfairness of denying dogs the right to vote.

  2. The unfairness of denying gay couples the right to marriage is like the unfairness of denying men the ability to have an abortion.

If one actually considers the structure and mappings in these analogies, they make no sense. Still, it's likely that they are rhetorically powerful. Why is that? First, let's look at the structure of the analogies. The structure of the first analogy seems to be something like this:

Base: "Only humans can vote. Dogs are not humans. Therefore dogs cannot vote, and this is not unfair."

Is analogous to:

Target: "Only straight people can marry. Gay people are not straight. Therefore, gay people cannot marry."

The inference we're supposed to make from the Base to the Target is that the situation in the Target is not unfair, either, because the fairness of the Base and Target follow from the same set of relations. Obviously it's not true that the relations in the two domains are even remotely analogous, except at the most abstract level. Gay people can marry; they just can't marry members of the same sex. Dogs, on the other hand, can't vote at all; there is no analogue of being able to vote for humans but not for dogs. Furthermore, the difference in sexual orientation is hardly analogous to the difference in species. They are entirely different types of relations.

The second analogy doesn't fare any better. Obviously, gay couples are not biologically incapable of marrying, as men are biologically incapable of having an abortion. What is the real point of Burgess-Jackson's analogies, then? To compare the fairness of denying gays the right to marry to cases in which it would be absurd to call something unfair. The only premise required for making these comparisons is the one they are trying to prove, namely that "Only straight people can marry, therefore denying gay people the right to marry is not unfair." While neither analogy presents a rational argument for Burgess-Jackson's position, because they presuppose that position and the analogies don't hold, simply making the comparison to absurd cases allows people to infer that calling anti-gay-marriage laws unfair is absurd as well. Now that's good framing.

Thursday, October 28, 2004

Once More into the Breach

OK, this is my last post about Lakoff before the elections, but someone has to clear up some misconceptions. So here goes:

  1. "Strict Father" and "Nurturant Parent" moralities are meant to describe all conservatives and liberals. This is not in fact the case. Instead, they are meant as prototypes around which conservative and liberal world-views cluster. Lakoff's work draws heavily on the prototype theories of concepts popularized in the 1970s by Rosch and Mervis1. In these theories, prototypes are central tendencies: the mean or median instance of a concept, abstracted across many instances of it. The features represented in the prototype are the most characteristic (in the sense of being present in the most instances) of the concept. Some instances of a concept are more typical than others by virtue of sharing more of these characteristic features. However, some members of a category will, in fact, share very few of the characteristic features. Thus, under the prototype view of concepts, instances of a concept have a "family resemblance" to each other, rather than sharing necessary and sufficient features, and some members of a category will actually be more similar to members of a contrasting category than to members of their own. The classic example used to illustrate prototype theories is the concept BIRD. A robin is a highly typical instance of the BIRD concept (at least for people raised in the United States), while the emu is not. Bats share many perceptual features with birds, and few features with many mammals, but are in fact highly atypical members of the concept MAMMAL rather than fairly typical members of the concept BIRD. There are a lot of problems with prototype theories (e.g., how do you explain bats as mammals without some sort of causal theory or definitional property?), making Lakoff's reliance on them troublesome, but it's important to realize that this is what his two moral metaphors are.
For more on Lakoff's view of concepts, and how it relates to his political theories, read Chapter 17 of his Moral Politics, as well as this excellent post at Semantic Compositions.

  2. "Nurturant Parent" means "Nurturant Mother." This seems to be a common misconception, and it has been argued against well here and here. Lakoff uses the term "parent" instead of "mother" in order to avoid this misconception, but in a way he brings it on himself. By contrasting "Nurturant Parent" with "Strict Father" morality, he naturally invites the father-mother contrast. However, the Nurturant Parent is meant to embody both feminine and masculine traits. Here is a short characterization of the Nurturant Parent world-view from Lakoff:

    In the Nurturant Parent family, it is assumed that the world is basically good. And, however dangerous and difficult the world may be at present, it can be made better, and it is your responsibility to help make it better. Correspondingly, children are born good, and parents can make them better, and it is their responsibility to do so. Both parents (if there are two) are responsible for running the household and raising the children, although they may divide their activities. The parents' job is to be responsive to their children, nurture them, and raise their children to nurture others. Nurturance requires empathy and responsibility.

    As this description makes clear, the use of the term "parent" instead of "mother" is not just meant to clear up confusion, but also to highlight the fact that within the nurturant parent world-view, both parents take part in child rearing on an equal footing with similar roles. The use of "Father" in Strict Father morality, while it may invite confusion, is meant to contrast the conservative "paternalistic" world-view with this less male-dominant one.

  3. "Framing" is a clever euphemism for "spinning." This seems to be the almost universal reaction to Lakoff's political theory among conservatives, but is also shared by some liberals. It's probably true that Lakoff's advice and techniques could be used for spin, but that is not what he advocates at all. Instead, what Lakoff wants liberals to do is to start describing their views in terms of their values, something he thinks Republicans do (often deceptively) very well. Here is a short description of the project and the reason it's needed:

    Progressives must rethink their policy goals in terms of values. There are always underlying moral reasons for supporting certain policies and opposing others. The first task then becomes identifying the values behind any given policy.

    The temptation for progressives is usually to talk about policies in terms of statistics: affirmative action is necessary because African Americans make up 14.7% of the current college-aged population but only 8% of college students. Seat belt laws are needed because 9,200 people died needlessly in motor vehicle accidents in 2000, and because injuries to less than one-half of 1% of the population cost the rest of us $26 billion. This may be true. But eyes and ears glaze over at statistics if they aren’t contextualized in terms of values. The question progressives don’t often answer explicitly is: Why do these statistics matter to us?

    Lakoff feels that an exclusive focus on the facts, without detailing the values that make facts "good" or "bad," puts liberals at a disadvantage. Instead of "spinning" the facts, then, he wants liberals to include a description of why the facts matter from the perspective of liberal values. This is what he means by "framing." And this is why he sees "framing" in terms of values as important:

    Americans believe that leaders should have a moral core that informs what they do. This moral core is seen as more important than one’s stance on any particular issue. George W. Bush is perhaps the best example of this in recent memory. He summed it all up in the 2000 campaign, when, after facing criticism for his lack of mastery of foreign policy details, he responded: “I may not know where Kosovo is, but I know what I believe.”

    Progressives hear this and tear their hair out: How could the President of the United States actually boast about his lack of knowledge? And who really cares what he believes if he doesn’t know what he’s doing?

    For better or for worse, however, most Americans do believe that values are important when choosing leaders. This shouldn’t surprise us, given that the American democratic system—in contrast to so many European democracies—involves choosing a person, rather than a party. Parties are about policies, but people are about values.

So, there you have it, three misconceptions, and three attempts to explain why they're wrong. I hope this helps. More than likely, conservatives will still see "framing" as synonymous with "spinning," and "Nurturant Parent" as a label for a feminine world-view. In part, this is because of Lakoff's own inadequate descriptions of the prototypical conservative and liberal world-views (both of which are thoroughly critiqued here and here). If I were a conservative, I would be offended by his caricature of me, or at least of the prototype of my political world-view, as well. Lakoff's larger point is an excellent one, but as I've said on many occasions, his own use of it is highly problematic, and in most cases highly impractical as well.

UPDATE: I just realized that I left out the reference to the Rosch and Mervis paper, which I'd meant to include in a footnote. This is what I get for blogging in a fit of insomnia. It's a shame, too, because this paper is one of the classics of cognitive science, and presents some excellent experiments which have inspired many subsequent ones. Anyone interested in cognitive science in general, and concept research in particular, should read it. So, here it is:

1 Rosch, E., & Mervis, C.B. (1975). Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7, 573-605. There is a draft of a good paper on prototypes by James Hampton here, as well.
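For readers who haven't seen the paper, the family resemblance idea it develops, scoring an item's typicality by how many features it shares with other members of its category, can be caricatured in a few lines. The feature sets here are invented by me for illustration; they are not Rosch and Mervis's actual data.

```python
# Toy version of the family resemblance measure: an instance's
# typicality is the sum of its feature overlaps with every other
# instance of the category. Features invented for illustration.

birds = {
    "robin": {"flies", "sings", "small", "feathers"},
    "sparrow": {"flies", "sings", "small", "feathers"},
    "eagle": {"flies", "feathers", "predator"},
    "emu": {"feathers", "large", "runs"},
}

def typicality(name):
    # Count features shared with each other category member.
    return sum(len(birds[name] & feats)
               for other, feats in birds.items() if other != name)

scores = {name: typicality(name) for name in birds}
print(scores)  # robin and sparrow score 7, eagle 5, emu 3
```

On this toy measure the robin comes out as a "better" bird than the emu, which is the typicality-gradient result the paper reports for real feature listings.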

Wednesday, October 27, 2004

What She Said

Once again, I have just read something that demands comment, but someone smarter than me has already done so better than I could. So, my main comment is: what she said. This is in response to Will Wilkinson's poorly thought-out post, of course.

I started to say something more, but then I saw that it had already been said in the comments at the above link. For instance:

The issue isn't whether increased voter turnout is intrinsically valuable. This is an issue of basic fairness. Some citizens face much greater obstacles to voting than others. If an eligible citizen wants to vote, they have a right to do so.


I don't think that we have any reason to believe that the people who are discouraged by voter intimidation are less informed than their voting counterparts. In order to support your hypothesis, you would have to show not only that the groups targeted by voter suppression are less informed, but also that those who succumb to voter suppression are less informed than other members of the target demographic who manage to vote anyway. I'm just not aware of any data to support such a bold conjecture.

The point, then, is that Wilkinson is apparently unacquainted with reality, and with the realities of traditionally disenfranchised groups like African Americans and individuals with low socio-economic status in particular. It is probably true that most voters, be they discriminated against through various means of disenfranchisement or not, are uninformed and therefore do not improve the effectiveness of the democratic process. Still, Wilkinson has no reason to think that the voting groups that are more likely to be intimidated are composed of higher percentages of uninformed voters. There simply is no a priori reason to think that these voters' voices are less deserving of being heard than anyone else's, including Wilkinson's, and his insistence that they are smacks of either racism or simple, unadulterated ignorance. In fact, I'd go so far as to say that if Wilkinson's post is an indication of his own informedness, then he's the one whose vote can only detract from the effectiveness of the democratic process.

OK, so I did comment, but only by repeating what someone else had already said. I told you I couldn't say it better. Oh, and go Red Sox!

Tuesday, October 26, 2004

When Intellectual Innovators Become the Status Quo

Brian Leiter's email exchange with Jerry Fodor on the definition of analytic philosophy has generated a great deal of discussion. One comment to the post, by Jason Stanley, has generated still more discussion at Crooked Timber. Here is Jason's comment:

There is a certain kind of very influential academic who has a difficult time recognizing that they are no longer a rebellious figure courageously struggling against the tide of contemporary opinion, but rather have already successfully directed the tide along the path of their choice. Chomsky is one such academic, and Fodor is another.

Ortega y Gasset, in his generational theory of history, posited that at any given time in history, there are three generations, each with its own episteme and ethos. The oldest generation has been supplanted by the middle generation, which is the current dominant paradigm, but which is constantly being undermined by the youngest generation. It seems to me like this has always been the way things have worked in the history of ideas: each successive generation works to overcome and supplant its immediate predecessor, and once it has done so, it becomes the status quo. Sometimes, the middle generation has trouble letting go of its subversive attitude, as Jason noted in his comment. For instance, Noam Chomsky and others in his generation revolutionized the way we view the mind and human beings in general, and it must have been a big rush to do so. Now that their ideas have become the status quo, I imagine it's a bit of a letdown. Of course, in many ways, and on many fronts, they are already being supplanted, so while they feel they are still the rebels, they are actually the target of new rebellions.

I wonder why it is that things tend to work out that way, particularly in intellectual spheres such as science and philosophy. I'm sure one reason is that it's easier to criticize than defend complex ideas, as any undergraduate who has written papers on famous philosophers has learned. Also, the best way to make a name for oneself is to supplant accepted ideas. However, I think the biggest reason for the constant flow of intellectual upheaval is that every system of ideas is flawed, and the people best equipped to notice the flaws in any system are those who are not fully invested in it. Someone (maybe Planck?) once remarked that the best way for a new theory to gain acceptance is for all of its detractors to die. That may be a bit extreme, but I think it captures this key aspect of the intellectual world: for new theories to gain prominence, the theories they have supplanted have to be widely questioned by those who have not spent their lives formulating and defending them.

Monday, October 25, 2004

Philosophers' Carnival IV: This Time, It's Personal

The 4th Philosophers' Carnival is up at Doing Things With Words. I like the posts at Majikthise on Quine, Mormon Metaphysics on Derrida (Clark Goble actually wrote several on Derrida in the week or so after his death, and they were all very good), and PEA Soup on moral luck. Head over and read them all. I even have a post in this edition of Carnival, but I'm not quite sure how. I don't recall submitting it, but I have been known to blog in my sleep.

Sunday, October 24, 2004

The Vatican is Pissed, But It Still Has the U.S.

I must admit that earlier this week, I was happy to read that the Vatican is upset about what it perceives as increasing secularization in European politics. The Vatican released the following remarks:

"It looks like a new Inquisition. It is a lay Inquisition, but it is so nasty," Cardinal Renato Martino, who heads the Vatican's Council for Justice and Peace, told reporters this week in response to the dispute. "You can freely insult and attack Catholics and nobody will say anything. If you do so for other confessions, let's see what would happen."

The Vatican expressed these sentiments in response to the EU Parliament's rejection of Italy's nomination of Rocco Buttiglione, who has publicly voiced anti-gay and misogynistic sentiments. The Vatican further lamented the fact that "government after government [has approved] measures on abortion, family law and scientific study that run counter to Catholic teaching." So, the Vatican is upset that, contrary to its teachings, European governments and the E.U. are acting to counter discrimination against gays and women, and funding stem-cell research that will ultimately benefit mankind as a whole. If these are the sorts of things that upset the Vatican, then I can only be happy when I hear that the Vatican is upset. The histrionics the Church, and Cardinal Martino, have displayed in calling these trends in Europe signs of a "lay Inquisition" only make their disappointment seem that much sweeter, as they absurdly cast progressive policies as motivated by anti-Catholic sentiments.

Across the pond, the Vatican can only smile at the way things are going in the U.S. Abortion rights are coming under increasingly intense fire, and with Supreme Court positions likely to open up in the next few years, the Church and other anti-choice activists can only be salivating. The U.S.'s stance on embryonic stem cell research and gay marriage must be a point of pride for the Church as well. Then there's the "Constitution Restoration Act of 2004," which states:

`Notwithstanding any other provision of this chapter, the Supreme Court shall not have jurisdiction to review, by appeal, writ of certiorari, or otherwise, any matter to the extent that relief is sought against an element of Federal, State, or local government, or against an officer of Federal, State, or local government (whether or not acting in official personal capacity), by reason of that element's or officer's acknowledgement of God as the sovereign source of law, liberty, or government.'.

Rest assured, Vatican, there is no "lay Inquisition" here. We're working hard to retain your regressive, antiquated values, and so far, it looks like we're succeeding. "Secular" remains a four-letter word here, and if the backers of the CRA have their way, it will be against the law, too.

Happy Birthday Universe!

Yesterday, the Universe turned 6007. Happy Birthday Universe! And thank you, Bishop Ussher, for that little bit of nonsense.

The Neuroscience of Repressed Memories

In A Treatise of Human Nature, Hume wrote:

All the perceptions of the human mind resolve themselves into two distinct kinds, which I shall call IMPRESSIONS and IDEAS. The difference betwixt these consists in the degrees of force and liveliness, with which they strike upon the mind, and make their way into our thought or consciousness. Those perceptions, which enter with most force and violence, we may name impressions: and under this name I comprehend all our sensations, passions and emotions, as they make their first appearance in the soul. By ideas I mean the faint images of these in thinking and reasoning; such as, for instance, are all the perceptions excited by the present discourse, excepting only those which arise from the sight and touch, and excepting the immediate pleasure or uneasiness it may occasion. I believe it will not be very necessary to employ many words in explaining this distinction. Every one of himself will readily perceive the difference betwixt feeling and thinking. The common degrees of these are easily distinguished; tho' it is not impossible but in particular instances they may very nearly approach to each other. Thus in sleep, in a fever, in madness, or in any very violent emotions of soul, our ideas may approach to our impressions, As on the other hand it sometimes happens, that our impressions are so faint and low, that we cannot distinguish them from our ideas. But notwithstanding this near resemblance in a few instances, they are in general so very different, that no-one can make a scruple to rank them under distinct heads, and assign to each a peculiar name to mark the difference.

When Hume published the first volume of the Treatise in 1739, he couldn't have known that 260 years later, we would begin to learn that "impressions" (or sensations), and ideas, particularly those of the visual imagination, would turn out to share the same parts of the brain. Not only do they share some of the same brain regions, but there may be little difference in the brain activation caused by vivid impressions (sensations of external objects) and vivid visual images created by the imagination.

This has all sorts of implications for cognitive science. One of the most interesting implications concerns the formation of "false memories," which are now at the forefront of the repressed memory debate. For instance, in a recent study, neuroscientists presented participants with object words and asked them to imagine a visual image of each object. Half of the words were followed (2 seconds later) by a picture of the object. While they were looking at the words and pictures, functional MRI scans were taken. Participants studied the words, while being scanned, across seven consecutive study phases. Finally, twenty minutes after the last study phase, participants heard words, a third of which had been presented with pictures, a third of which had been presented without pictures, and a third of which had not been presented in the study phases. They were asked to indicate whether they had seen a photo of the object in the study phases.

Researchers have known for some time that producing visual images through imagination can sometimes lead to memory errors, called "reality-monitoring errors" because people fail to distinguish between observed and imagined images. Some of this is likely to be due to retrieval problems, but some also may be due to how visual images, be they imagined or produced by the senses, are encoded. What this study found is that when false memories, or reality-monitoring errors, were produced for words that had been presented without pictures (i.e., when these words were remembered as having been presented with pictures), the brain areas that were active during the study phase (most notably, the precuneus, right parietal, and anterior cingulate regions) overlapped significantly with those active during actual picture presentation. These areas were significantly less active during the presentation of words during the study phase in cases that were subsequently remembered correctly as having been presented without pictures. False memories were produced for study words about four times as often as for words that had not been seen in the study phase.

The lesson, then, is that contrary to Hume, during both encoding and retrieval, there is often little difference between visual images created by the senses and those created by the imagination, even the images created of objects when reading the words that refer to them (which Hume specifically claims is impossible, or at least uncommon). The difference is small enough to make it difficult to distinguish between sensory and imagined images in memory. What, exactly, this means for the repressed memory debate will require further research, but it clearly demonstrates that there is a neural basis for false memories produced by repeatedly imagining images, for instance through repeated suggestions or rumination. Since we've yet to come up with anything like a neural, or even an algorithmic or representational, basis for memory repression or dissociation, this sort of advancement in memory research certainly doesn't bode well for the champions of repressed memory theories and recovered memory therapies.

Saturday, October 23, 2004

How Undecided Are Undecideds?

I sometimes wonder how undecided undecided voters really are. Specifically, I wonder whether implicit attitudes that are at least partially independent from self-reported attitudes can predict the behavior of undecided voters. Since the late 70s, psychologists have produced more and more evidence that our introspective experience of our own higher-order cognitive processes and attitudes is highly inaccurate. We don't have direct access to these processes and attitudes, but instead produce theories about them from our behavior, and if our conscious beliefs about them are accurate, it is only because we've come up with a good theory. For this reason, psychologists have produced a wealth of indirect measures of cognitive processes and attitudes that do not rely on subjects' self-reports. One species of indirect tests is designed to test for "implicit attitudes," or attitudes that the subject may not be aware of, and may not accord with their self-reported, or even experienced attitudes. Some of these implicit attitude tests have been shown to be able to predict behavior (e.g., consumer behavior) quite well. I wonder, then, if they might also be able to predict voting behavior, particularly the voting behavior of undecideds.

To see how this would work, I'll briefly describe one implicit attitude test. This test, called the Evaluative Movement Assessment, or EMA1, is built around the idea that people are motivated to approach positively evaluated stimuli, and avoid negatively evaluated stimuli. In addition, for some strange reason (don't ask me why; it's voodoo), these approach/avoidance motivations are active when people are asked to move something toward or away from their names2. In EMA, the participant's name is placed in the center of a computer screen, while words appear on either side of her name. The participant is told to move positive words towards her name, and negative words away from it. In addition, she is told to learn a set of target words, and in each EMA session, she is told to move all of the target words in one direction (either toward or away from her name). The idea is that response latencies will be greater when participants are told to move positively evaluated target words away from their name, and negatively evaluated words towards their name. After several sessions, the average toward and away scores can be obtained for each target word. The toward scores are then subtracted from the away scores to produce a "valence score," with positively evaluated words having strongly positive valences, and negatively evaluated words having strongly negative valences.
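To make the arithmetic concrete, here's a minimal sketch of the valence computation. The latency numbers (and the function itself) are my own invention for illustration; only the away-minus-toward subtraction comes from the description above.

```python
# Minimal sketch of the EMA valence computation. The latencies below are
# invented; only the away-minus-toward arithmetic comes from the EMA
# description above.

def mean(xs):
    return sum(xs) / len(xs)

def valence_score(toward_latencies, away_latencies):
    """Valence = mean 'away' latency minus mean 'toward' latency.
    A positively evaluated word is moved toward the name quickly and
    away from it slowly, so its valence score comes out positive."""
    return mean(away_latencies) - mean(toward_latencies)

# Hypothetical response latencies (in ms) for one target word,
# pooled over the sessions in each movement direction.
toward = [480, 510, 495]  # trials moving the word toward the participant's name
away = [620, 605, 640]    # trials moving the word away from the name

print(valence_score(toward, away))  # positive, so the word is positively evaluated
```

A negatively evaluated word would show the reverse pattern: slow toward, fast away, and hence a negative valence score.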

The relationship between implicit attitudes and actual behavior is still somewhat controversial, but implicit attitude tests have been shown to accord with several different types of behavior, including consumer behavior, which is roughly analogous to voting behavior. The idea, then, would be to have undecided voters move the names of candidates and/or political parties towards and away from their names, compute a valence for each candidate, and then use that to predict for whom they will vote. My suspicion is that most undecided voters do have implicit attitudes toward the candidates that are not reflected in their self-reports, and perhaps even in their own beliefs about their attitudes toward the candidates. Furthermore, I suspect that the more positive the implicit attitude toward a candidate is, the more likely an undecided voter is to vote for that candidate, while the more negative the implicit attitude, the less likely a person will be to vote for the candidate.

1 Brendl, C.M., Markman, A.B., & Messner, C. (in press). Indirectly measuring evaluations of several attitude objects in relation to a neutral reference point. Journal of Experimental Social Psychology.
2 Chen, M., & Bargh, J. A. (1999). Consequences of automatic evaluation: Immediate behavioral predispositions to approach or avoid the stimulus. Personality and Social Psychology Bulletin, 25, 215-224.

Friday, October 22, 2004

Only in the South...

I wish I had words for this, but I don't, other than to say it's damn funny. Here's a blurb:

Stranger moves in, redecorates while woman's on vacation

DOUGLASVILLE, Georgia (AP) -- A woman came home from vacation to find a stranger living there, wearing her clothes, changing utilities into her name and even ripping out carpet and repainting a room she didn't like, authorities said.

Once, when I was a kid, a skunk moved into the crawl space beneath our house while we were on vacation. All it changed was the odor of the place, though, so I'm not sure the situations are comparable.

Ignorance or Sins of Memory?

This corner of the blogosphere is abuzz with posts and more posts (and still more) about the latest PIPA survey. In that survey, it was discovered that:

Even after the final report of Charles Duelfer to Congress saying that Iraq did not have a significant WMD program, 72% of Bush supporters continue to believe that Iraq had actual WMD (47%) or a major program for developing them (25%). Fifty-six percent assume that most experts believe Iraq had actual WMD and 57% also assume, incorrectly, that Duelfer concluded Iraq had at least a major WMD program. Kerry supporters hold opposite beliefs on all these points.

Similarly, 75% of Bush supporters continue to believe that Iraq was providing substantial support to al Qaeda, and 63% believe that clear evidence of this support has been found. Sixty percent of Bush supporters assume that this is also the conclusion of most experts, and 55% assume, incorrectly, that this was the conclusion of the 9/11 Commission. Here again, large majorities of Kerry supporters have exactly opposite perceptions.

These are some of the findings of a new study of the differing perceptions of Bush and Kerry supporters, conducted by the Program on International Policy Attitudes and Knowledge Networks, based on polls conducted in September and October

Not only are Bush supporters ignorant of reality in general, but they are also ignorant of the reality of their own candidate, as this table shows:

[Table from the PIPA survey, showing where Bush supporters believe Bush stands on several issues.]

The second part of the survey, showing that Bush supporters don't know where he stands on some important issues, is interesting, but I'm not exactly sure how to explain it. I imagine a lot of credit for this widespread ignorance can be given to the Bush campaign for effectively hiding what it is that Bush thinks and does. A lot of credit probably goes to the desire to avoid cognitive dissonance, and the confirmation bias, as well. Many of the people who are ignorant about what Bush actually thinks are probably the sort of people who vote Republican in every election, and they have to work hard to justify that vote to themselves, even if it means distorting the views of Republican candidates to make them more consistent with their own.

To me, the first part is more interesting. Last November I saw a talk* on false memories for events and facts about the Iraq war. The researchers asked American, German, and Australian students to report whether certain statements about the Iraq war were true or false. They found that most American students still believed long-since retracted news reports about, e.g., the discovery of WMD and connections between Iraq and Al Qaeda, while German and Australian students by and large recognized these statements as false. It is interesting that simply planting the seeds of these beliefs in the minds of most Americans was sufficient to make them take hold. Thus, the news stories, often reporting information directly from the Bush administration, about the discovery of WMD or Iraq-Al Qaeda connections served to create beliefs even as subsequent news reports retracted the earlier ones. This is the brilliance of the Republican Party in 2004. They recognize that they don't have to stick to the facts, but they don't have to lie either. All they have to do is make reports that are likely false, but are in accord with sketchy information, and even if it is later shown that the reports are in fact false, people will still believe them.

Why is it that Americans, in this study, were more gullible than Germans or Australians? I suspect that the opposite would have been the case had evidence of WMD and Iraq-Al Qaeda connections been discovered. The early reports showing no connection would have been widely believed by Germans and Australians, who were primed by their anti-war attitudes to believe anti-war facts. Americans, in turn, primed by their pro-war sentiments, were more likely to believe information consistent with those sentiments. Such is the human mind. This is the reason that I am much less surprised than the other liberal bloggers seem to be at Republican ignorance. I know how susceptible we (as a group) are to such ignorance, as well. After all, Republicans are humanoids, sort of like us.

* I can't for the life of me remember who gave the talk. It was in one of the Saturday (morning, I think) memory sessions at the 2003 meeting of the Psychonomic Society, in Vancouver. Maybe someone who reads this blog (yeah right) was there, and saw the talk, or has a schedule from the conference, and can tell me who the hell gave it.

The Table that Just Left

Linguists are always noticing quirky uses of language, what with all the spoonerisms, "eggcorns", Bushisms, and creative uses of grammar common to everyday speech. I notice things too, though the things I notice are not so much linguistic oddities as representational ones. My favorite example of all time is a convoluted instance of metonymy uttered by a friend of mine in the course of an ordinary conversation. She is a restaurant owner, and was talking about one of her employees, a bus boy, whom I think she had a bit of a crush on. In the course of the story, she mentioned that she told the bus boy:

"Go clean the table that just left."

I was blown away. What a metonymy! The "table" refers to both the actual table, and to the person who was sitting it, but had just left. Naturally, the bus boy had no problem understanding that she meant to clean the table itself, and not the person sitting at it, but he also knew exactly which table she meant. That may not seem very impressive to you, but try coming up with an account of how that works cognitively! I suspect a linguistic analysis would be tough, too. Then again, we can always blend it.

The Title of this Blog

When I was creating this blog (which I did pretty much on a whim), I was forced to give it a title. I'm an indecisive person by nature, and the prospect of creating a name that I would be forced to live with in perpetuity was unsettling. Finally, I decided that I would give it a title from one of my favorite poems, which was also cryptically (and elliptically, through the allusion) descriptive of what it is that I do: study memory, motivation, and interactions between the two. Today, I discovered that there happens to be a paper on the topic with the same title. Here's the abstract:

MacLennan, B.J. (1998). Mixing memory and desire: Want and will in neural modeling. In Pribram, K.H. (Ed.), Proceedings of Brain and Values: Is a Biological Science of Values Possible? (5th Appalachian Conference on Behavioral Neurodynamics), pp. 31-42. Radford, Virginia.


Values are critical for intelligent behavior, since values determine interests, and interests determine relevance. Therefore we address relevance and its role in intelligent behavior in animals and machines. Animals avoid exhaustive enumeration of possibilities by focusing on relevant aspects of the environment, which emerge into the (cognitive) foreground, while suppressing irrelevant aspects, which submerge into the background. Nevertheless, the background is not invisible, and aspects of it can pop into the foreground if background processing deems them potentially relevant. Essential to these ideas are questions of how contexts are switched, which defines cognitive/behavioral episodes, and how new contexts are created, which allows the efficiency of foreground/background processing to be extended to new behaviors and cognitive domains. Next we consider mathematical characterizations of the foreground/background distinction, which we treat as a dynamic separation of the concrete space into (approximately) orthogonal subspaces, which are processed differently. Background processing is characterized by large receptive fields which project into a space of relatively low dimension to accomplish rough categorization of a novel stimulus and its approximate location. Such background processing is partly innate and partly learned, and we discuss possible correlational (Hebbian) learning mechanisms. Foreground processing is characterized by small receptive fields which project into a space of comparatively high dimension to accomplish precise categorization and localization of the stimuli relevant to the context. We also consider mathematical models of valences and affordances, which are an aspect of the foreground. Cells processing foregound information have no fixed meaning (i.e., their meaning is contextual), so it is necessary to explain how the processing accomplished by foreground neurons can be made relative to the context. 
Thus we consider the properties of several simple mathematical models of how the contextual representation controls foreground processing. We show how simple correlational processes accomplish the contextual separation of foreground from background on the basis of differential reinforcement. That is, these processes account for the contextual separation of the concrete space into disjoint subspaces corresponding to the foreground and background. Since an episode may comprise the activation of several contexts (at varying levels of activity) we consider models, suggested by quantum mechanics, of foreground processing in superposition. That is, the contextual state may be a weighted superposition of several pure contexts, with a corresponding superposition of the foreground representations and the processes operating on them. This leads us to a consideration of the nature and origin of contexts. Although some contexts are innate, many are learned. We discuss a mathematical model of contexts which allows a context to split into several contexts, agglutinate from several contexts, or to constellate out of relatively acontextual processing. Finally, we consider the acontextual processing which occurs when the current context is no longer relevant, and may trigger the switch to another context or the formation of a new context. We relate this to the situation known as "breakdown" in phenomenology.

I'll be honest, I haven't read the paper. I got to the word "dynamic" and, after a brief cringe, decided I'd put it somewhere near the bottom of my "to read" list. At least it relates the issues to phenomenology, which is the area of philosophy (the work of Henri Bergson and Maurice Merleau-Ponty in particular) that inspired me to approach these issues. Still, more often than not, when "dynamic systems" models are used to model cognitive phenomena, all the modeler is doing is replacing one thing we don't understand very well with another that we understand even less well.

If We Had a Cognitive Account of Counterfactuals, This Would Be It

I think counterfactuals are cool, not because of their importance in post-Fact, Fiction, and Forecast analytic philosophy, but because of the role they play in everyday cognition. Counterfactuals figure in guilt and regret, blame-attribution, ordinary causal reasoning, scientific reasoning (including in the structure of scientific experiments and hypothesis testing in general) and all sorts of other things. There are even implications for mental health. For instance, depressed individuals tend to use more counterfactuals, and in particular, more counterfactuals for "controllable" events1. As interesting and widespread as they are, though, counterfactuals haven't been widely studied by psychologists outside of social psychology.

One could easily launch an inquiry into how people produce, understand, and use counterfactuals by attempting to address Nelson Goodman's two major questions about counterfactuals: what are the conditions that hold in the counterfactual scenario (in simple counterfactuals, stated as "if-then" hypotheticals, the question is about what conditions hold in the antecedent), and how do we determine whether, given the counterfactual scenario (e.g., in the antecedent), the resulting inferences (e.g., in the consequent) are true? Since people are able to produce, understand, and use counterfactuals fairly easily, and even to judge whether they are true, it's important to understand how they do this. In fact, because people use counterfactuals to reason about such a wide variety of things, understanding the cognitive mechanisms underlying counterfactuals is likely to provide insight into many other mental phenomena.

As far as I know, there's only one comprehensive theory of counterfactuals in cognitive science. It comes from Gilles Fauconnier (in Mappings in Language and Thought and Mental Spaces). Naturally, his account involves blending. In essence, the blend results from a combination of the counterfactual space and the factual space, along with added inferences and the like. As is usually the case with blending, this sort of approach to counterfactuals is fun but not terribly productive. Still, I think the insight that the way we form and understand counterfactuals involves mapping is important.

One way to gain insight into how counterfactuals work, cognitively, is to look at the sort of differences that ordinary counterfactuals produce. As I've said before, differences are important, and there are two primary types of differences: alignable and nonalignable differences (see the link for an explanation of what these are). I suspect that a careful look at counterfactuals will show that the differences highlighted by the antecedents are alignable ones. In other words, counterfactual scenarios are likely to differ from their corresponding factual scenarios on dimensions that are relevant in our representations of the factual scenarios. This would provide a straightforward explanation for how people know which conditions should hold in the counterfactual scenarios. All they have to do is look at the condition(s) specified by the antecedent, map that onto the factual condition(s) with which the antecedent contrasts, and mutate the arguments.

To see how this would work, consider the following counterfactual: "If I had not stepped off the curb, I would not have twisted my ankle." If we map the condition in the antecedent (that I had not stepped off the curb) onto the factual condition (I had stepped off the curb), we can then carry over the structure of the resulting factual scenario (landed awkwardly, twisted ankle) and mutate each relevant dimension. Thus, I did not land awkwardly, and therefore did not twist my ankle. Philosophers might quibble over this, perhaps arguing that in some possible worlds, instead of stepping off the curb, I stepped on a briefcase that a passerby had dropped, and ended up twisting my ankle anyway, but that's because philosophers don't care about pragmatics. The context of the counterfactual likely constrains, pragmatically, the types of inferences we can make, and the dropped briefcase is probably not a very good inference. Why? Because it's not alignable with anything in the factual scenario (that we know about). There was no briefcase, and creating a random nonalignable difference like that is just not the sort of thing we ordinarily do.
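The map-and-mutate procedure can be made concrete with a toy sketch. To be clear, this is my own illustration, not an implemented model from the literature: the representation of the scenario as a simple causal chain, and the rule that mutations propagate to everything downstream, are both simplifying assumptions.

```python
# A toy sketch of the alignment-and-mutation account: the factual
# scenario is a causal chain of (condition, holds?) pairs. To build the
# counterfactual, find the factual condition that the antecedent
# contrasts with (an alignable difference: same dimension, different
# value), mutate it, and propagate the mutation down the chain.

factual_chain = [
    ("stepped off the curb", True),
    ("landed awkwardly", True),
    ("twisted my ankle", True),
]

def counterfactual(chain, antecedent_condition, antecedent_value):
    """Mutate the condition named by the antecedent, then carry the
    change through to every downstream condition in the chain."""
    result = []
    mutated = False
    for condition, value in chain:
        if condition == antecedent_condition:
            mutated = True
            result.append((condition, antecedent_value))
        elif mutated:
            # Downstream conditions inherit the mutation: their cause
            # no longer holds, so they no longer hold either.
            result.append((condition, antecedent_value))
        else:
            result.append((condition, value))
    return result

print(counterfactual(factual_chain, "stepped off the curb", False))
# Negating the antecedent also negates "landed awkwardly" and
# "twisted my ankle", just as in the example above.
```

A dropped briefcase never enters the picture here, because the only dimensions available for mutation are the ones already present in the factual representation.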

This account would also provide an answer to the question of how people determine the truth of counterfactual inferences. All they have to do is re-align the now fully-structured counterfactual domain with the factual one, and use their background knowledge to determine whether the mutations they've made make sense. For instance, in the ankle-twisting example, people can surmise that if I had not stepped off the curb, and thus had not landed awkwardly, it's unlikely I would have twisted my ankle. Landing normally rarely causes twisted ankles.

Obviously, this is a pretty sketchy cognitive theory of counterfactuals, and it certainly hasn't been tested empirically. I'm not quite sure how to go about testing it, though I've got some ideas. Perhaps someone who happens by here and has read this far has some suggestions. Or better yet, perhaps someone has some criticisms of this account (preferably logical or empirical ones). I do think that once it's fully fleshed out, and if it stands up to empirical tests, it could be a powerful theory of counterfactual reasoning, and that makes it worth further attention in my mind.

1 Markman, K. D., & Weary, G. (1998). Control motivation, depression, and counterfactual thought. In M. Kofta, G. Weary, & G. Sedek (Eds.), Personal control in action: Cognitive and motivational mechanisms (pp. 363-390). New York: Plenum Press.

Family Resemblance Philosophers

Brian Leiter and Jerry Fodor had an email exchange in which they discussed the definition of analytic philosophy. I find it a bit odd that Fodor, of all people, wants to give something like a definition for this sort of concept, and I imagine anyone familiar with Fodor's work on concepts, and his belief that only a few primitive concepts have definitions, will find it odd as well. Is analytic philosophy a conceptual primitive? Anyway, I can't help but wonder how genuine Leiter's incredulity really is. I can't imagine someone as bright as he is doesn't understand that "analytic philosophy" isn't a category meant to capture some essential property, be it doctrinal or methodological, common to every instance it captures. One might be able to provide a strictly historical definition, placing philosophers in a tradition by their influences and their scope, but that project would probably fall apart pretty quickly, particularly at the beginning (with Frege, Russell, and the like) and the end (how much are Alva Noë and Hubert Dreyfus working within the tradition that began with Frege and moved through the logical positivists to Quine, Strawson, or Ryle, and on to Dennett, Chalmers, or... Fodor, vs. the tradition that runs from Brentano through Husserl to Heidegger and Merleau-Ponty?). Perhaps we're just going about this the wrong way, and my suspicion is that Leiter is well aware of this.

By the time I was in my second year of college, I think I had a pretty good idea about which philosophers were doing what people, be they students or professors, pretty regularly called "analytic philosophy" or "continental philosophy." To be sure, even then, if I had been asked to define either category, my answer would have been similar to Justice Potter Stewart's in reference to what "hard-core pornography" was: "I shall not today attempt further to define the kinds of material I understand to be embraced [by the concept of "Analytic Philosophy"]... but I know it when I see it." And I do know it when I see it.

The whole discussion between Leiter and Fodor reminds me of the discussions I used to have, again with professors and other students, about what "postmodernism" was. The term seemed to make a lot of sense in art, because it was a strictly historical term there, but in philosophy, it seemed to be both a historical term (philosophy after modern philosophy; radicalized modern philosophy) and a doctrinal one. Yet, then as now, I would have been hard-pressed to pin down any doctrine that constituted a necessary and sufficient feature of "postmodern philosophy." The problem with concepts like "postmodernism," "analytic philosophy," "continental philosophy," or even "modern philosophy" (to say nothing of "pornography!") is that these aren't concepts that are meant to have a clearly distinguished set of necessary and sufficient features. They are family resemblance categories, in which there is no single common attribute, but each member shares at least one attribute with the others, and members tend to share more attributes than non-members. In that way, they're like pretty much every other kind of category, particularly those that aren't "Natural Kinds."
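The structure of a family resemblance category is easy to demonstrate with a toy example. This is my own illustration (the attribute sets are invented, not drawn from Wittgenstein or the categorization literature): no single attribute is shared by all members, yet every pair of members overlaps.

```python
# A family resemblance category: the numbers stand in for attributes
# (doctrines, methods, historical influences, etc.).
members = {
    "a": {1, 2, 3},
    "b": {2, 3, 4},
    "c": {3, 4, 5},
    "d": {4, 5, 1},
}

# No attribute is common to every member...
common = set.intersection(*members.values())
print(common)  # set()

# ...but every pair of members shares at least one attribute.
from itertools import combinations
pairs_overlap = all(m & n for m, n in combinations(members.values(), 2))
print(pairs_overlap)  # True
```

So a category can hang together perfectly well, and support "I know it when I see it" judgments, without any attribute being necessary and sufficient for membership.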

What attributes are analytic philosophers likely to share? Well, there's the historical attribute, which I've already mentioned. There are doctrinal attributes that many share, such as the ones noted by Fodor, as well as scientism and a close connection to the sciences, a distrust of metaphysics, or at least non-realist metaphysics, a sort-of Kantian approach to language and perception, or at least, a linguistic approach to the problems raised by the First Critique, etc. - the sorts of things criticized by Fodor, as well as J. L. Austin and the non-Kripkenstein Wittgenstein. There are also methodological attributes that many share, including a focus on clarity and structure, logical argumentation, often with case-by-case analyses (including counterfactual cases), a focus on sub-areas and sub-problems, and the attention to detail and focus on certainty over generality that goes with that, and an overall desire to write and think like 20th century scientists1. Obviously, no analytic philosopher embodies each of these attributes, but analytic philosophers will tend to embody more of them than non-analytic philosophers2.

The point of this whole exercise, and of Leiter's likely feigned confusion, escapes me. I'm just blogging about it because, well, I have a blog, and I blog about whatever I want. I doubt many people are surprised when Jerry Fodor, Donald Davidson, or W. V. Quine are referred to as "analytic philosophers," any more than they are surprised when Martin Heidegger, Jean-Paul Sartre, or Jean-Francois Lyotard are referred to as "continental philosophers." The distinctions aren't meant to carry serious philosophical weight, and I can't imagine that anyone thinks they are. Still, they make for practical rhetorical tools, and if anyone is really confused by the terminology, perhaps they might consider why they're not confused by the use of such labels, which don't admit simple definitions, in virtually every other intellectual field.

1 I still think the best description of an analytic philosopher ever came in a review of one of Dennett's works by Thomas Nagel, in which he called Dennett "Gilbert Ryle crossed with Scientific American."

2 In one of his remarks to Fodor, Leiter wonders whether philosophers like Hegel, Husserl, Heidegger, or Habermas could be considered analytic philosophers, primarily for doctrinal reasons. While I think Hegel is out automatically, because "analytic" and "continental" are at least partially historical (anyone before Frege is neither), Husserl was certainly more similar to Frege or Russell, in the early years, than he was to the later Heidegger, much less Sartre or Derrida. Still, the influence of Husserl has been more widely felt among the "continentals" than the "analytics," and that's probably why he's referred to as one of the former rather than the latter. As for Heidegger, while his view of language may be compatible with some (not many!) analytic conceptions of language, he doesn't really embody any of the other properties of analytic philosophers, so he's probably not someone we should include. Habermas is actually a good choice for an analytic philosopher, though, and if he were read by more analytic philosophers, I think they would probably agree. Why he's considered a continental philosopher, I'm not quite sure. It probably has something to do with his use of terms like "crisis," which are ordinarily pretty vague. However, in his hands, they become quite specific, and powerful at that.

Tuesday, October 19, 2004

Framing for Third Parties

While George Lakoff goes into great detail in his books and papers on politics about how and why Democrats should get into the framing business, he pays almost no attention to third parties. Yet third parties are likely to benefit as much as or more than the Democrats from careful framing. While there are institutional barriers in place that prevent third parties from being very effective, either in affecting the outcomes of elections (at least at the state and national level) or in influencing major-party platforms, third parties could do more to get their candidates and positions noticed if they worked hard to frame their positions so that they are both easily alignable with and differentiated from the positions of the two major parties.

In order to see why this is the case, it might help to think of candidates as brands. This shouldn't be too much of a stretch, with all the advertising and merchandise that goes into a campaign. In addition, as with choosing a product, people who aren't wedded to a certain brand will likely be comparing the products, and analyzing their commonalities and differences. In American politics, two brands are deeply entrenched. New brands (third parties) have to take advantage of the ways the mind works if they want to reach the forefront of the public's consciousness.

In a previous post, I briefly described how frames, or schemas, influence expectations, inferences, memory, and the interpretation of new instances. Brands (and candidates) are, in essence, schemas (or frames). Much of politics, and business competition, consists of trying to frame one's own brand, and at the same time, frame the other guy's brand, as Lakoff has demonstrated quite well (even if his details are more than a bit... off). New candidates/schemas/brands suffer from a disadvantage in that they will always be compared to the entrenched brands on their own terms. However, new brands can turn this disadvantage into an advantage, if they play their cards right. To illustrate how, I'm going to talk about two findings from cognitive psychology and marketing that illustrate the principles involved in comparing brands (and schemas in general), and thus how third parties should go about structuring their own public representations. There's a lot of theory behind all this, and I'm going to try to avoid getting into it at the moment. For now, suffice it to say that neither Lakoff's conceptual metaphor theory, nor blending, are much help here, as is usually the case in practical situations.

It's well known that brands that do well early in the existence of a particular market (early entrants) have an advantage over later entrants. The first of the two findings came from experiments1 designed to look at how later entrants are compared to early entrants, and how they can overcome the early entrant advantage. Previous research2 had shown that early entrants provide the structure for comparisons with new brands when preferences are being determined, and that the attributes of early entrants are better remembered than those of later entrants3. In this set of experiments, participants were introduced to three brands over three sessions (extended over 3 weeks). In the first session, they were introduced to one brand, the early entrant. A week later, they saw the early entrant brand again, as well as two new brands, one of which had previously been rated by another group of participants as equally preferable when compared to the early entrant, while the other had been rated as more preferable. The researchers found that the more preferable later entrant could be preferred to the early entrant, and its attributes better remembered, thus overcoming the early entrant advantage, but only if the more preferable later entrant differed from the early entrant in a particular way. If the preferable later entrant's differences were not relevant to the structure provided by the early entrant (i.e., the two brands did not differ on the same dimensions, but instead had entirely separate dimensions -- if the differences were not related to shared attributes), then the early entrant was preferred, and the later entrant's attributes were not remembered. However, if the differences between the early and late entrants were on relevant dimensions (i.e., related to shared attributes), then the preferable later entrant was in fact preferred, and its attributes were remembered.

The second, related finding came from another set of experiments4 which further explored comparisons between early and late entrant brands. Using a similar design, these experiments found that when participants were highly motivated to process brand information, they noticed, remembered, and were influenced in their preference by differences that were not relevant to the structure provided by early entrants. In other words, they were influenced by differences that did not relate to attributes that the early and late entrants had in common. This was not the case when participants were not very motivated to process information about the brands. In this case, participants only tended to notice and use information that pertained to common attributes, as in the previous set of experiments.
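The alignable/nonalignable distinction driving both sets of experiments can be sketched in code. This is my own framing, not the stimuli or analysis from the cited papers; the brands and dimensions are invented for illustration.

```python
# Brands as dimension -> value mappings. A difference is alignable when
# both brands have the dimension but different values; it is
# nonalignable when only one brand has the dimension at all.

early_entrant = {"price": "low", "battery": "8h", "weight": "heavy"}
late_alignable = {"price": "low", "battery": "12h", "weight": "light"}
late_nonalignable = {"price": "low", "waterproof": "yes", "solar": "yes"}

def differences(early, late):
    """Split the late entrant's differences from the early entrant into
    alignable (shared dimension, different value) and nonalignable
    (dimension unique to the late entrant)."""
    alignable = {d for d in early.keys() & late.keys() if early[d] != late[d]}
    nonalignable = set(late.keys() - early.keys())
    return alignable, nonalignable

print(differences(early_entrant, late_alignable))
# ({'battery', 'weight'}, set()) -- differences land on shared dimensions
print(differences(early_entrant, late_nonalignable))
# (set(), {'waterproof', 'solar'}) -- differences on unique dimensions
```

On the experimental findings, low-motivation comparers notice and remember only the first kind of difference; only highly motivated comparers pick up on the second kind.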

The moral of this story, in case it is not obvious, is that to overcome the early entrant advantage (which comes both from being the first brand or brands, and from getting more exposure, as is the case for the two major political parties), new entrants have to pay attention to how their public representation lines up with those of earlier entrants. In politics, it's unlikely that most people, including independents, are highly motivated to process information about third party candidates. They're probably not sufficiently dissatisfied with the two major parties, and they're likely to think that the third party candidates are not viable. This means that people are unlikely to notice, remember, or be swayed by third party positions that aren't framed in such a way that they are relevant to people's representations of major party candidates. Third party candidates, at least in the beginning, must make it clear how they differ on the issues that the major party candidates are talking about. Of course, this isn't sufficient to overcome the major parties' advantage; the third party candidates also have to sell their positions as preferable. However, even if most people would prefer their positions, they're unlikely to notice or remember them if they're not alignable with the major parties'.

To illustrate the truth of this in the political arena, consider Ralph Nader's anti-corporate stance. While this is his favorite talking point, it has thus far done little to influence his perception among the majority of voters, including those who are planning on voting for him. This is because his anti-corporate stance doesn't line up with any of the overt stances of Bush and Kerry, which means that when people compare Nader to one or both of them, they're unlikely to even remember, much less have their choice influenced by, the differences between Bush and Kerry on the one hand, and Nader on the other, on the issue of corporate influence. Nader would like to believe that he can convince people that he differs on a relevant dimension, or a common attribute (stance toward corporations), but it doesn't work this way. The early entrants determine which attributes are, and are not, relevant in the comparison of two brands, and as long as people aren't motivated to carefully consider Nader's positions, nothing he does is going to change this.

Once people begin to be motivated to consider the stances of third party candidates, these candidates can begin to show how they differ on the issues that the major party candidates aren't even talking about. Nader can talk up his anti-corporate positions, Badnarik can talk about how Congress is crossing over its Constitutionally-provided boundaries, and what's-his-name from the Constitution Party can tell us how crazy he is. Of course, it's often the case today that the only goal of a particular third party candidate (e.g., Nader) is to bring awareness to a certain issue or issues. These issues are not alignable with the major party talking points, and therefore tend to hover outside of the public's awareness. Even though these issues are not alignable, third party candidates who want to bring them to the public's attention can still learn from what we know about brand comparisons. By framing the non-alignable issues in such a way that they do line up with the talking points of the major parties, third party candidates are much more likely to get their pet issues noticed.

To sum everything up, because of the advantages of the two entrenched parties in comparisons with candidates from third parties, third party candidates must pay close attention to how they frame their positions. In particular, because the major parties will determine the dimensions on which the comparisons will take place, third party candidates will have to work hard to make sure that they frame their positions so that they line up with positions that are explicit in the major party candidates' platforms. This is the case even when the third party candidates' issues don't line up in a straightforward way with explicit major party positions. It's not ideal that major parties are able to determine the issues on which they will be compared with third parties, but unfortunately, that's just the way things work. Third parties can take advantage of this, though, if they are willing to play the game. If not, they'll probably never be noticed, or influence American politics or political attitudes at all. They'll fade into obscurity as Ralph Nader is destined to do.

1 Moreau, C. P., Lehmann, D. R., & Markman, A. B. (2001). Entrenched category structures and resistance to 'really' new products. Journal of Marketing Research, 38(1), 14-29.
2 Carpenter, G. S., & Nakamoto, K. (1989). Consumer preference formation and pioneering advantage. Journal of Marketing Research, 26, 285-298.
3 Kardes, F. R., & Kalyanaram, G. (1992). Order-of-entry effects on consumer memory and judgment: An information integration perspective. Journal of Marketing Research, 29, 343-357.
4 Zhang, S., & Markman, A. B. (2001). Processing product-unique features: Alignment and involvement in preference construction. Journal of Consumer Psychology, 11(1), 13-27.

Sunday, October 17, 2004

Ralph Nader: Dumbass

As dissatisfied as I am with the Democratic Party, I've never found Ralph Nader to be an even remotely attractive alternative. He strikes me as a right-of-center, mildly insane megalomaniac. This morning, I saw him interviewed on ABC. During the interview, he talked about a new t-shirt being sold on his campaign website. The front of the t-shirt shows a picture of the Liberty Bell, along with the word "spoiler." The back reads, "Revolutionaries always spoil corrupt systems." Nader followed this by detailing how the Democrats have failed by allowing the Republicans to win at all levels of government for over a decade. If you're like me, all of this makes your head spin.

The first question I have is, what system does Nader think he is spoiling? No one with 2% of the vote is spoiling anything, and the fact that he may harm John Kerry's election chances in some key states doesn't make him a spoiler of the system, but a participant in it! This is particularly true if you take seriously Nader's point about the failure of the Democrats over the last ten years. The system has, according to Nader, consistently produced wins for Republicans at all levels of government. If Nader, by taking votes from Kerry, perpetuates this, is he spoiling the system, or making it more efficient? I can't see how anyone can believe it's the former.

The second question I have is, does Nader really believe himself to be a revolutionary? Though his anti-corporate ideas are not shared by most candidates from the two major parties, and his stance on Israel is controversial, most of his ideas are hardly new, much less revolutionary. In what world is a man who gets on state ballots through the concerted efforts of the ruling party considered a revolutionary? Is this man living on this planet? Has anyone considered prescribing him antipsychotics?

Why is it that third party candidates are so often strikingly delusional, not about their chances of winning elections, but about the way the world actually works?

Friday, October 15, 2004

They're Humanoids, Sort of Like Us

What's with all the antirobotism and robophobia I'm seeing around the web these days? I am disturbed, truly I am. Some robots are humanoids too, just like Republicans. If Republicans can marry humans, why can't robots? Can't we all just get along? I mean, check out this example of robophobia from a supposedly "liberal" blogger, Belle Waring:

I'm going to go out on a limb here and say no daughter of mine is going to marry a robot. Call me old-fashioned and intolerant, but I can't go for these machine-intelligence loving perversions. Think of the cyborg children!

Belle, I have just one question for you and your fellow robophobes: What have you got against robots with faces like this:

[photo of the robot Kismet]

Could you spew your hate speech to a face like this? I long for a day when robot-human love is widely accepted in our culture, so that robots like Kismet can have the opportunity to express their love for humans through the ancient and revered institution of marriage.

I showed this post to Kismet, and this is how he looked afterwards:

[photo of Kismet's reaction]

At least we have truly progressive liberals, like Lindsay Beyerstein, who writes:

For the record: If I ever have kids, and they bring home anything that passes the Turing test, I'll be okay with that.

Bravo, Lindsay. Bravo.

UPDATE: There are still more truly tolerant people out there. PZ Myers is one of them. He wrote:

I am more open-minded than Belle Waring, since I am perfectly willing to allow my kids to marry robots. As long as they are kind and loving robots, that is.

Dr. Myers, I'd like to introduce you to Cog:

[photo of the robot Cog]

[robot photo]

They're two kind and loving robots looking for love. How 'bout we set them up with a couple of your offspring?