Saturday, December 31, 2005

"Terrorism is Like Cancer"

One of my work-related hobbies is analogy hunting. It's a pretty easy hobby to take up, because the damn things are everywhere. One of their more common habitats is politics, where, as Lakoff has famously noted (even if he gets the terminology, and the mechanisms, and the actual analogies, and pretty much everything else, wrong), they're nearly ubiquitous. Some of my favorites, such as "If Clinton were the Titanic, the iceberg would have sunk," come from politics. So when I listen to political speech or read articles on politics, I'm always on the lookout for them. I was surprised, then, to learn that I'd missed a fairly common one in recent political discourse. Apparently, people have been going around comparing terrorism to cancer for a while. I only learned of the analogy, which is a beaut, when I came across (via LGM) John Quiggin's post on it yesterday. I guess I'm not as skilled a hunter as I had believed.

The analogy is beautiful for many reasons. For one, they have a few very salient surface similarities (similar properties that don't necessarily involve similar relations), such as being dangerous and deadly, as well as difficult to deal with. Since surface similarities generally drive the retrieval of analogs, it's likely that these surface similarities drive the use of the terrorism-cancer analogy, and that makes the analogy a good illustration of the retrieval stage of analogy. In addition, because the concepts are really broad, and our representations of them rich with relational structure, there are all sorts of directions in which the analogy could go, and thus all sorts of conclusions that we could derive from it. To see this, just look at some of the uses of the analogy a quick google search produced:
  • Terrorism is like cancer. Either you deal with it and you cut it out, or it eats you up. - Itamar Rabinovich
  • Certainty is that terrorism is like cancer: it must be rooted out from our history with every means, even if the method does not promise to be painless. This cancer has already reached a metastatic stage. Hence it is our duty not to prevaricate with unproductive, sterile debates in order to avoid the destruction of the little there still is to defend in our world. - Genina Iacabone
  • Terrorism is like cancer. Cancer spreads rapidly, and if cells are left to proliferate, they kill their host organism, and thereby kill themselves. There is no reasoning against it. So what do we do about it? - The Real Kato Online
  • Terrorism is like cancer. It's starts small then spreads rapidly. - from a comment at Mental Mayhem
  • Terrorism is like cancer. You have to eliminate it. Sometimes you use surgery and sometimes radiation. - Lt. Col. William Bograkos, quoted by Ron Jensen in Stars and Stripes
  • Terrorism is like cancer, it is both a specific and systemic condition. - William Meyer
  • An article titled "Fighting Cancer and Terrorism - Our Fight Is Similar" by Karl Schwartz at Lymphomation.org, of all places. This one compares the two concepts on many different dimensions. Here are two paragraphs:
    First, the sickness and unreality we feel at diagnosis is very much like the experience of Americans on September 11 and it's aftermath. The enemy is also similar. It comes from ourselves and is somehow twisted (mutated) to become something that betrays us—that seeks our death. Just as every siren post 9-11 evokes renewed fear of assault and senseless violence; every new feeling and symptom carries with it a fear that the cancer is back or growing.

    There is no reasoning with this enemy although it’s theoretically possible to do so, just as it’s possible to induce cancer cells to differentiate to normal cells—but this change over is rare and not curative. We understand that humanity is a body [another analogy... score!] that requires cooperation and rules of conduct. We know that cancer cells have lost this connection—that they have lost the rules that govern normal function and service to the body.
There are hundreds more, including some that use cancer primarily as imagery (e.g., "The Metastasizing Cancer Of Pakistan/Afghanistan-based Islamic Terrorism"). In each case, terrorism is the target of the comparison, and cancer the base. In other words, no one's trying to say that cancer is like terrorism; they're all saying that terrorism is like cancer -- there's a difference, which I'll get to in a bit. Since analogies are generally used to carry over parts of the representation of the base concept to the target concept, people are using cancer to say something about terrorism. In the above examples, the use of cancer as a base concept teaches us that terrorism must be treated, preferably quickly; that we must use every means at our disposal to do so, including the surgical and the less accurate (killing some civilians, or healthy cells, along with the bad guys, or cancer cells); that like cancer, terrorism begins locally, but rapidly spreads; that like cancer cells, terrorist cells can rarely be reasoned with, and reasoning never solves the whole problem; and so on.

Of course, some of the parts of the representation of cancer that are carried over to terrorism are pretty general; many other concepts could have been used in place of cancer. For instance, kudzu spreads fast, and if you don't hit it early and often, pretty soon it will be all over the place. Terrorism, then, is like kudzu. Fashion trends spread pretty quickly, too. Terrorism is like fashion trends? It's likely that, at least in some cases, the use of cancer as a base in the analogy is due more to the fact that cancer is really, really bad, and making the comparison reiterates the badness of terrorism (as if this were something of which we needed to be reminded). This illustrates a finding from the research on political analogies: base concepts are often chosen for their emotional valence. If you want to remind people that something is bad, you pick something really bad for your analogy, and if you want to make people feel like something is good, you pick something with positive emotional value. That might be why the "fashion trends" analogy doesn't work well (though I've been known to compare fashion trends to cancer).

It's important to note that the analogies all move in one direction, so I'll say it again: cancer is the base, and is used to tell us something about the target, terrorism. This is important because of one of the most pervasive features of analogy: asymmetry. Saying X is like Y, and then carrying over information about Y to X, doesn't necessarily make it possible to carry over information from X to Y. Asymmetry is part of what makes analogies so useful. The more salient and extensive information in the base domain allows us to make novel inferences about the target domain. The classic example of asymmetry in comparisons, from the work of Amos Tversky, is "North Korea is like Red China." In Tversky's experiments with this comparison, participants judged North Korea to be much more similar to China than China is to North Korea. This is, in part, because the properties that drive the comparison between the two are more salient in China than in North Korea. That's why people chose China as the base concept in the comparison in the first place, and it is likely one of the reasons why people chose cancer as the base concept in the terrorism-cancer analogy. In addition, it's likely that participants' representations of China were richer than their representations of North Korea. This means there was a wealth of candidate inferences from China to North Korea, and few, if any, from North Korea to China. Most of those candidate inferences probably wouldn't hold for North Korea, but the analogy makes it possible to test them out if we're so inclined. The same is probably true of the terrorism-cancer analogy. As the few examples above show, we can try inferences from several different parts of our cancer representation, including treatment, the behavior of cancer at large and of individual cancer cells, and even how people feel when they find out that they have cancer.
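Tversky's asymmetry can even be given a quantitative gloss. Here's a minimal sketch of his contrast model of similarity, in which a comparison's direction matters because the subject's distinctive features are weighted differently from the referent's. The feature sets and weights below are invented purely for illustration, not taken from Tversky's actual materials:

```python
# Toy sketch of Tversky's contrast model:
#   sim(a, b) = theta * f(A & B) - alpha * f(A - B) - beta * f(B - A)
# With alpha > beta, features distinctive to the subject (a) are
# penalized more than features distinctive to the referent (b),
# so sim(a, b) != sim(b, a): comparison is asymmetric.

def tversky_sim(a, b, theta=1.0, alpha=0.8, beta=0.2):
    common = len(a & b)     # features the two concepts share
    a_only = len(a - b)     # distinctive features of the subject
    b_only = len(b - a)     # distinctive features of the referent
    return theta * common - alpha * a_only - beta * b_only

# Invented feature sets: the richer concept (China) has more
# distinctive features, just as cancer does relative to terrorism.
north_korea = {"asian", "communist", "small", "isolated"}
china = {"asian", "communist", "large", "populous",
         "ancient_culture", "un_member"}

# "North Korea is like China" scores higher than the reverse,
# because China's many distinctive features are penalized less
# when China sits in the referent (base) position.
print(tversky_sim(north_korea, china))
print(tversky_sim(china, north_korea))
```

The same logic suggests why cancer sits comfortably in the base position: its representation is rich, so putting it in the referent slot leaves plenty of distinctive structure available for candidate inferences about terrorism.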
It's not clear from those comparisons, however, which inferences, if any, we might draw from terrorism to cancer. Thus, contrary to what one of the commenters on Quiggin's post asserts, the assertion that "terrorism is like cancer" does not imply the assertion that "cancer is like terrorism," or at least, not that cancer is like terrorism in the same way or to the same degree.

One other interesting, and common, feature of the above comparisons (except, perhaps, Schwartz's strangely-placed article) is how narrow the comparisons are. Each author aligns the representations of terrorism and cancer on only a few of the many possible relations in each of the concepts' representations. The analogy, while it is likely meant to carry some emotional weight, isn't meant to be a very deep one. It's just used to say one or two things about terrorism using facts about cancer. It's not quite fair, then, to criticize the uses of the analogy by pointing out that the analogy itself fails if we try to extend it to other parts of the cancer representation, which is part of what Quiggin appears to be doing. Of course, Quiggin's own use of the analogy serves a pretty clear rhetorical purpose, and serves it well. In a sense, what he does is hypothetically reverse the comparison, so that it is now cancer that is like terrorism, and he then shows how aspects of the representation of terrorism, or the discourse on terrorism, either don't carry over to cancer, or look really silly when they do. Using an opponent's analogy against him or her in order to show the absurdity of his or her position is yet another demonstration of a common use of analogy.

So, while the analogy may not be very good, or very useful for reasoning about terrorism (I'm not sure any of the things people said in the above examples were unknown to their audiences before they read the analogy), I'd bet it's a pretty effective analogy from a communication standpoint, and from the perspective of an analogy hunter like me, it's wonderful, because it perfectly illustrates so many aspects of analogy's production, use, and comprehension. I wish I'd run across it sooner.

Friday, December 23, 2005

Quick Note on Dover

I'm on vacation in the Athens of the South, so I haven't had much time to post on anything. I've got some nice cog sci tidbits planned, though. Anyway, I wanted to express how happy I am at the Dover ruling on Intelligent Design, and that I hope its effects are felt in school districts throughout the country. Many pro-science bloggers have pointed out their favorite parts of the ruling, and there are many great ones from which to choose. I think, though, that the most revealing comment in the ruling was this one:
First, defense expert Professor Fuller agreed that ID aspires to "change the ground rules" of science and lead defense expert Professor Behe admitted that his broadened definition of science, which encompasses ID, would also embrace astrology. Moreover, defense expert Professor Minnich acknowledged that for ID to be considered science, the ground rules of science have to be broadened to allow consideration of supernatural forces.
You have to wonder how Michael Behe could have thought that his testimony, and that of the other expert witnesses for the ID side, showed "a lot of people that ID is a very serious idea," when his testimony included admitting that in order to include Intelligent Design as science, you'd have to include astrology as well. Perhaps his mistaken assessment of his own testimony was due to the fact that his Tarot cards weren't very accurate that day.

Wednesday, December 14, 2005

The Problem With Meaning

A few years ago, during one of my rabid "embodiment" phases, I sat in on an AI discussion group composed mostly of people who actually worked on AI. Being a mere cognitive psychologist, without much knowledge of computers, I learned a lot, though I think most of the points and questions I raised were pretty stupid (I imagine some of the people in the group cringed every time I opened my mouth). However, there was one problem that I noticed when the AI people talked about "meaning," and no one in the group gave me what I felt was a satisfactory answer to it, so it's bugged me off and on ever since. The way I saw the problem back then was that in certain cases, it's necessary to understand the affordances of objects in order to understand sentences, and understanding affordances requires having a body. I'm no longer so certain that you need a body to understand the affordances of objects, but what you would need in place of that would be such a large body of interconnected knowledge, along with a wealth of "theories" about how to use that knowledge, that building an artificial system capable of comprehending some very simple sentences would be near impossible.

To illustrate this problem, consider two sentences.
(1) John kicked the ball to Mary.
(2) John crutched the ball to Mary.
The syntactic structures are the same, and it would be easy to teach the syntax to a computer. Thus you could get the machine to understand the outcome of the event described in the sentence: Mary ends up with the ball, as a result of an action performed by John. But what if you asked the machine to describe how John got the ball to Mary? What if the computer's task was to describe John's action? It could probably do OK with sentence (1), because the action isn't a novel one, and having been programmed with the meaning of the word "kicked," all it would have to do is spit that meaning out. But sentence (2) contains a novel verb, one which the machine is unlikely to have in its lexicon (unless the programmer has read the paper from which I stole the verb, and is trying to get one past me). You and I shouldn't have much trouble figuring out John's action in (2), even without context (if we added a little context, such as a sentence preceding both (1) and (2) that read, "John was standing across the table from Mary, and the ball was on the table," figuring out (2) would be even easier for us). What would the computer need to describe John's action in (2)?
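The gap between the two sentences can be made concrete with a toy sketch. A system that describes actions by looking up verb meanings in a fixed lexicon handles sentence (1) trivially and has nothing at all to say about sentence (2). The lexicon entries below are invented for illustration:

```python
# Toy sketch (invented entries): a fixed verb lexicon.
# The *outcome* (Mary ends up with the ball) is recoverable from
# syntax alone, but describing the *manner* of John's action
# requires a meaning for the verb.

LEXICON = {
    "kicked": "struck the ball with his foot, propelling it to Mary",
    "threw": "propelled the ball through the air with his hand",
}

def describe_action(verb):
    # For a known verb, all the system has to do is spit the
    # stored meaning back out. For a novel denominal verb like
    # "crutched," there is no entry to retrieve.
    return LEXICON.get(verb, "UNKNOWN ACTION: no lexical entry")

print(describe_action("kicked"))    # succeeds: stored meaning returned
print(describe_action("crutched"))  # fails: novel verb, nothing stored
```

The point of the sketch is that nothing in the lookup architecture can be patched to handle "crutched"; what's missing isn't another dictionary entry but the knowledge of what crutches afford.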

The answer is that the computer would have to understand what sorts of actions are possible with a crutch -- the affordances of crutches -- as well as being able to reason about which of those actions would be effective in performing the action described in the sentence. As I said, I used to think that this required having a body (because affordances are organism-specific, and even body-specific), and that may still be the case. But if it doesn't require having a body, and even if it does, you've got to have a whole heck of a lot of background knowledge and folk theories to get the affordances of a crutch for John, and pick the relevant ones for a given context. For example, you'd have to know that the hard end of a crutch can be used to apply force to other solid objects (i.e., to push them); you'd have to know that balls are generally light enough to be pushed by a crutch; you'd have to know that some ways of applying force will work in some situations, but not in others (e.g., John and Mary may be close to each other, or indoors, making swinging the crutch like a baseball bat to hit the ball over to Mary impractical, and even dangerous); if you knew that the ball was far enough away from John that he would be forced to fully extend his arm and utilize the full length of the crutch to get to the ball, you'd have to know that the mechanics of the situation would probably require John to hold onto the bottom of the crutch and use the top (the part that goes under your arm when walking with a crutch) to hit the ball, while if the ball were closer to John, it might be easier to use the crutch the other way around. The list of things could go on and on. And the machine would have to know things like this for every novel verb it came across (and novel verbs, particularly denominal verbs like "crutched," are pretty common in everyday language).
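To see how much machinery even a fragment of that background knowledge entails, here's a toy sketch that hard-codes a couple of the constraints from the paragraph above (reach, indoor clearance) and filters candidate crutch actions against them. Every rule, threshold, and action name is invented for illustration; a real system would need to derive such constraints from general knowledge rather than have them listed:

```python
# Toy sketch (all rules invented): selecting a crutch action by
# filtering candidate affordances against situational constraints.

CANDIDATE_ACTIONS = [
    {"name": "swing like a bat", "needs_clearance": True, "max_reach": 1.0},
    {"name": "push with the tip", "needs_clearance": False, "max_reach": 1.5},
    {"name": "hold tip, hit with armrest", "needs_clearance": False, "max_reach": 2.0},
]

def choose_action(ball_distance, indoors):
    for action in CANDIDATE_ACTIONS:
        if indoors and action["needs_clearance"]:
            continue  # swinging indoors is impractical, even dangerous
        if ball_distance > action["max_reach"]:
            continue  # the ball is out of reach for this grip
        return action["name"]
    return None  # no affordance fits the situation

# Ball on the table, close by, indoors: pushing with the tip works.
print(choose_action(ball_distance=1.2, indoors=True))
# Ball farther away: full extension, holding the bottom,
# hitting with the armrest end.
print(choose_action(ball_distance=1.8, indoors=True))
```

Even this cartoon version needs a constraint for every contingency the paragraph mentions, and the list of contingencies "could go on and on" -- which is precisely the problem.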

So that's the problem I saw back then, and still see today. In order to write a program that can understand the meaning of sentences with which you and I would have no trouble, you basically have to program in most or all of the knowledge of at least a well-developed human child, if not an adult. And I don't see how that's really possible. It certainly doesn't seem to be possible today, since we don't have a firm understanding of how people reason about the mechanics of situations like the one in (2), or how they activate the relevant background knowledge (the paper linked above gives one potential answer, in the form of the "Indexical Hypothesis"). If a machine doesn't have that level of knowledge, every time it gets a novel verb, it's going to be lost.

Tuesday, December 13, 2005

Annual Plea to Ban Christmas

Sorry for the dearth of posts, but I haven't gotten into the swing of blogging again, and this is an incredibly busy time of year, as you can imagine. However, I wanted to take a few moments of my precious time to speak on a topic about which I am very passionate: banning Christmas. I don't really want to ban the holiday that falls on December 25, because let's face it, I really need the break that comes with it, and the presents aren't so bad either. What I really want to do is remove the word Christ from its name. By calling it Christmas, the American public is explicitly endorsing Christianity, and all that comes with it -- such as hating gay people, women, gay people, Muslims, scientists, gay people, black people (in some denominations), and atheists (particularly if they're gay). Folks, this simply will not do! I say, let them teach about God in public schools, through the Pledge of Allegiance or Intelligent Design creationism, and let them openly display Judeo-Christian doctrine in courtrooms and other government buildings. The government has a right to side with any religion it chooses. But the people? That's downright unconstitutional! What's more, it's un-American.

Of course, I fully realize that in order to keep the holiday, but remove all Christian connotations from it, we'll need to come up with a new name. My first thought was Secularmas, or maybe Darwinmas, but those don't really roll off the tongue. Besides, what would Darwin's mass be like? (While I'm certain that Dr. Myers would be happy to hold such a mass, I'm not sure that anyone would attend, and holiday songs about finches and zebra fish would be a bit creepy.) But after some very careful thought, I've come up with a few workable ideas for new holiday names, and in the spirit of democracy, I thought we could put it to a vote. So, please, pick one of the following names, and write to your Senators, congresspeople, and alderpeople to say that you believe this should be the new name for Christmas. Here are the choices:
  • Bowless Day - This is the only day between December 21 and December 30 on which there is not a college bowl game. Given how boring and meaningless all of those December bowl games are, this is truly a cause for celebration.
  • Newtonmas - The father of modern physics was born on December 25, 1642. Since Christ wasn't actually born on December 25, Newtonmas is not only more constitutional, but more accurate as well.
  • Buffettmas - The singer was also born on December 25. In place of eggnog, we could serve margaritas. This fact alone should get "Buffettmas" several votes.
  • Couldn't Make it Home for Thanksgiving Day Day - This one is mostly personal. Every year, I miss my mother's wonderful cooking on Thanksgiving, because she lives 1,000 miles away, and cornbread stuffing doesn't do well in the mail. Fortunately, I usually get to go home during the winter break, and on December 25 she reprises her Thanksgiving meal. Mmm... cornbread stuffing.
  • Lie To Your Children Day - Yes, son, Santa can see you wherever you are, and he can tell whether you're being good or bad. He really does slide down chimneys with a bag full of presents, and eat all of those wonderful cookies that you laid out. And yes, it is strange that both Santa and your father prefer Belgian beer to milk with their cookies.
  • Anti-Gay Marriage Day - I don't really like this one, but I figure it might make some of the less zealous Christians more amenable to changing the holiday's name. All we need is a majority of the 30% of the population that votes, people!
  • Easter II - I know Easter is ostensibly a Christian holiday, but the name "Easter" doesn't really have any religious meaning, and wouldn't it be cool if Santa and the Easter Bunny worked together?
  • And last but not least, Budget Breaking Day - Do you know how much presents have put me out this year? Let's just say that a few more years like this, and the kid's college fund is going to be renamed the six-week computer course fund.
There they are, the potential new names for Christmas. Many of you could probably come up with other names, but that would just lead to pointless bickering and debate, so I say we stick with these. Remember, it's your duty as an American (and if you're not an American, then as someone who wishes he or she was) to remove all religious references from the public, non-governmental sphere. And the greeting card companies will need a few months to rewrite all of the Christmas cards for Newtonmas, or whatever name we choose, so if we're going to start the new holiday by next year, we need to choose the name soon. So pick a name and start lobbying for it right away.

Friday, December 02, 2005

I'm a Racist: One Cognitive Psychologist's Thoughts on Racism Part I

Yesterday was Blog Against Racism Day, but having been out of the blogging loop for a while, I missed it. Several bloggers participated, and there were many interesting posts. I'll just link to two of them, because I found them particularly interesting. The first is at Heo Cwaeth (the coolest blog name out there, and also a pretty good blog that you should be reading), discussing the relationship between racism and discourse on sexuality. The second is by Amanda at Pandagon (she also has a follow up), in which she talks about stereotypes. I wish I had something insightful to say about Heo Cwaeth's post, but I don't. I do, however, have a lot to say about stereotypes, even if I can't promise that it will be all that insightful either. But before I don the cognitive psychologist cap and start speaking about stereotypes in a cold, scientific way, I want to say a little about my experience with racism, and (dare I say it) as a racist. Because even if you're white, Anglo-Saxon, and protestant (I'm only the first of those three), racism is still something that you can, and should, experience personally. If you don't want to read the personal stuff, you can skip this post, and read the next, which is all cog psych. I won't be upset.

OK, where to begin? At the beginning, I suppose. I was born and grew up in the South, where, in the 70s and 80s, racism was still very much out in the open, unlike in today's South, where it tends to bubble just under the surface, rearing its ugly head in ways that are often difficult to detect if you don't know what you're looking for, while more overt examples are quickly shunned by almost everyone, including some of the most racist among us. My parents, thankfully, are fairly enlightened souls, and during my early childhood, they never discussed racial differences or stereotypes, so as a young child, I never really thought about race. It simply wasn't a dimension on which I divided people into groups. It can safely be said that until age six, I was not a racist.

But when I started first grade, attending public school for the first time (after going to private preschools and kindergarten), I ran face first into a towering wall of racism. One of the first friends I made at my new school, a boy named Tony, was black. After that, I made several white friends, who frequently made fun of Tony for being black, and constantly referred to him as "nigger," a word that I had never heard before. It wasn't long before I was shunning Tony, not wanting to be associated with someone whom my other friends thought was inferior. What's worse, I began to see him as inferior too, as dirty even. Once I even refused to drink at a water fountain immediately after him, because I didn't want to get "nigger germs" (a phrase one of my friends frequently used). I was internalizing all sorts of classic racial stereotypes, and had become a full fledged member of the racist southern culture, a culture that was clearly evident in my school. White children ate and played with white children, black children ate and played with black children, and there was little if any intermingling between the two groups. I was no different.

In my own defense, I was only six at the time, and had no concept of the implications of my thoughts and actions towards Tony and other black children. But racism has to start at some age, and for me it started, but didn't end, at age six. It wasn't until I was in fourth grade, when a black family moved in next door to us (the first black family in the neighborhood, and the only one for many years -- self segregation was not limited to elementary schools), that my thinking began to change. One of the children in the family was a boy about 3 years younger than me. One day he was out playing in his back yard, when a neighborhood friend and I came out and saw him there. Hurling racial epithets, and teasing him more generally, we got into a verbal altercation with him, and soon began throwing rocks from our gravel driveway at him. His mother finally came out and yelled at us, ending the altercation. She then went to my parents, and told them what had happened. That evening I received one of the longest, and most needed lectures of my life, about how he was a boy just like me, with feelings just like mine, and so on. And I suddenly realized how wrong I had been in my thoughts and actions towards black people. The boy next door and I soon became good friends, and remained so until he moved away when I was a junior in high school (we are still friends, in fact; he and his new wife recently visited me while they were in town).

That story alone could probably serve as a good lesson on how easily negative societal views on race can be perpetuated, even when a child's parents are not themselves (overtly) racist. If a black family had not moved in next door, and my parents had not intervened, I would likely have grown up to be as racist as many in this country still are. But the story doesn't really end there, because I did grow up, and though my explicit attitudes on race changed dramatically during that lecture, racial stereotypes are much more difficult to completely kill than that, even within an individual.

By the time I went off to college, I considered myself to be an even more enlightened soul than my parents. I was fervently anti-racism, and attacked it wherever I saw it. However, with the exception of my childhood neighbor, I hadn't really had any black friends, and wasn't really in close contact with many black people on a regular basis. When I finally did make some black friends, my illusory self-image was quickly put to the test. Not many days went by that one of them didn't point out something I did that was insensitive, indicated prejudice, or seemed downright racist. I was shocked each time, and rebelled against it, arguing vehemently that I was not a racist, and that they were simply misperceiving my actions. It goes without saying that such blind defensive behavior is not conducive to maintaining friendships, be they with people who are black, white, or purple for that matter. And when the friendships did end (none of them ended in anger; they generally just dissolved into a complete lack of interaction), my belief that I was not a racist persisted.

Then I got a real shock to my system. I began dating a black woman. You can probably guess what I was thinking when the relationship began. "See, I'm not a racist! Would a racist be dating a black woman? I think not!" But she was blessed with a personality trait that my earlier black friends were not (or at least that they had not exhibited with me; it wasn't their job to educate me, after all): incredibly stubborn persistence. When she observed me doing something that was, to someone who had been the object of racism all of her life, clearly a sign of prejudice or, as we white people who want to put a positive spin on our attitudes and actions call it, "race consciousness," she let me know in no uncertain terms. And as before, I would kick and scream in denial. But she would persist, I would deny, she would persist, and eventually, and uncomfortably, I would realize that she was right. I was a racist. Or better put, I am a racist.

You see, many of the racist attitudes that I had developed as a young child, and that I was sure were long gone by the time I was a young adult, are still in there, affecting my behavior in ways that I, as someone who has never really had to worry about the negative effects of discrimination in my own life, simply didn't notice. I've been lucky, in that I've since had several close relationships with people from various minority groups, who have been kind enough to help me learn how to perceive some of those effects on my behavior, but I'd be deluding myself if I believed that I had learned about all of them. And believe me, that's not a comfortable thing for an "enlightened soul" like myself, a staunch left-winger, to admit, to myself or to anyone else. But if I don't admit it to myself, then I will never be able to learn how to better approach issues of race, and to relate to people from other races. And if I were a betting man, I'd wager that most of the people who are reading this (assuming someone has made it to this point) should be admitting this same thing to themselves, if they haven't already. If we don't all begin to do so, then ridding our society of racism will be completely impossible.

Now I can't offer any good social or political advice on how to rid society of racism, but I can say a little bit about how stereotypes and attitudes toward race work, why they're so persistent, and how they can affect our behavior without us even realizing it. So in my next belated Blog Against Racism Day post, that's what I'll try to do.

[Completely Unrelated Note: Has anyone else who uses blogger ever noticed that the word "blogger" is not in Blogger's spell-check lexicon? How odd is that?]

Wednesday, November 30, 2005

Back

Hello. I'm sorry for my extended absence. It was due to factors beyond my control, and which are, for the most part, too personal for me to get into. I especially want to apologize to everyone in the reading group for not being able to participate for the last few months. If it had been up to me, I would have. Also, if you've sent me emails during the time that I haven't been posting, I am probably just now getting to them. You may want to send anything important again, just in case.

Monday, September 05, 2005

One More Time On Katrina

I just want to say that I hope that there are extensive investigations into why the local and state governments did not have a plan to evacuate those who could not evacuate themselves (the poor, the elderly, and the infirm); why the federal government, with its now obviously stupid cost-benefit analyses, could not, in the last 7 years, fund the $14 billion in levee improvements and coastline restoration that New Orleans had asked for; and why, why, why FEMA is such a disaster in and of itself. On the last one, I hope heads roll. Obviously, FEMA's director must go as soon as possible, but I hope that every single responsible party is exposed and fired, if not beheaded. While it's not clear, yet, what role Bush played in all of this, it's not a good sign that his administration has already begun to use its typical ass-covering strategy: blaming anyone else in the room. I can only hope that no one at the federal level impedes an investigation that is likely to leave almost everyone in Washington with mud on his or her face, or worse.

OK, that's all I can muster. My heart swells high with rage.

Your regularly scheduled cognitive science blogging will return shortly.

Saturday, September 03, 2005

Happy Birthday Mixing Memory

Mixing Memory turns 1 year old today. Blogging for a year has been an interesting experience. I've run into many stimulating people, learned a lot about a bunch of different things, and wasted many hours.

Thank you to everyone who's visited in the last year. I hope you all continue to drop by. You should all feel free to write me with requests for posts, suggestions for the blog (after a year, I still don't really know what the hell I'm doing), or glowing praise. I prefer the last of these personally, but the first two are probably better in the long run. Also, it appears that the blog world says happy first birthday with an influx of comment spam. If anyone knows how to get rid of it, I'd love to hear from you in particular.

Thursday, September 01, 2005

Katrina and the Media

There's been a lot of blaming going around in the wake of Katrina, especially on liberal blogs. Clearly, the federal government was not ready for this, and since they are in charge of the federal government, the Bush administration and the Republican-controlled Congress are likely deserving of much of the blame. But I have to admit that I feel it's too early to blame anyone for specific failures. People have been upset about the lack of aid in Mississippi, but it appears that the military has just today cleared the airport of the tons of debris the storm surge left there. It took them more than three days of working 24 hours a day to do so. The point is, most of us aren't on the ground in the stricken areas, and we just don't know what sorts of logistical problems the people there are facing, and how much preplanning might have mitigated some of those problems. All of that will come out eventually, but for now, I think blaming anyone for the delays, difficulties, and roadblocks in distributing aid is uncalled for.

Of course, one thing we do know for certain is that resources that should have been used to improve the New Orleans levees were diverted to Homeland Security:
As the New Orleans Times-Picayune has reported in a devastating series of articles over the last two years, city and state officials and the Corps of Engineers had repeatedly requested funding to strengthen the levees along Lake Pontchartrain that breached in the wake of the flood. But the Bush administration rebuffed the requests repeatedly, reprogramming the funding from levee enhancement to Homeland Security and the war on Iraq.
While that is truly unconscionable, and the Bush administration should receive harsh criticism for this, let's wait until we figure out just what happened to the levees, and whether any amount of improvement might have prevented it, before we blame Bush or anyone else for what happened (and I say this as someone who's always prepared to blame Bush for his failures).

But there is one group whom I think we can start to blame for its role in this disaster now, and should continue to blame for a long time: the media. More and more, I've seen people in Mississippi say that they, or others they've come into contact with, didn't think that the hurricane would hit them, and thus didn't evacuate despite their governor asking them to do so. They thought it was going to hit New Orleans. That the hurricane hit Mississippi so hard should not have been unexpected. All of the forecast models, which, to their credit, the media did air, showed a projected path that covered much of Mississippi. However, every news channel and program I saw discussed New Orleans, and only New Orleans, in the days leading up to Katrina's landfall. Even though the projected path of the hurricane, which they themselves were showing, covered all three states directly affected, all the media could talk about was New Orleans. What were people in Mississippi to think? If the media is so sure that it's going to hit New Orleans, why should they leave their homes?

That's just irresponsible reporting. The media should have made it abundantly clear that the hurricane was likely to hit Biloxi and the rest of the Mississippi coast. The threat to New Orleans was the more dramatic story, but the threat to Mississippi and Alabama was the responsible story. I guarantee you that people in Mississippi lost their lives because of the media's focus on New Orleans. And that's tragic.

Everyone Interested in Cognitive Science Should Read This

In the late 1970s and early 1980s, it was clear that David Marr was one of cognitive science's brightest young minds. His philosophical, computational, and mathematical approaches to the problems presented to researchers by vision have been incredibly influential. Sadly, he was diagnosed with leukemia in the late 70s, and died in 1980 at the age of 35 (here is a short biography). As he was dying, he wrote a book that "redefined and revitalized the study of human and machine vision." The book, titled Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, was published in 1982. The book is so good that you have to wonder what great things Marr might have done if he hadn't died so young. Much of Vision betrays the urgency with which he wrote it, as it often appears to be written in the form of edited personal notes, and much of it is difficult to read. I remember reading it for the first time and not understanding a word after about midway through Chapter 2. After a course on vision, I picked it back up, and it then made sense, but it still wasn't an easy read.

But the first chapter is easy to read, and it contains within it a philosophical statement that has influenced researchers in many areas of cognitive science who study things that are distantly related, at least experimentally, to vision. Among other things, the philosophical statement defines three levels of description in the study of the mind. Here are Marr's labels and tasks that Marr gives to the three levels (from Chapter 1, Figure 1-4):
  • Computational theory: What is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?
  • Representation and algorithm: How can this computational theory be implemented? In particular, what is the representation for the input and output, and what is the algorithm for the transformation?
  • Hardware implementation: How can the representation and algorithm be realized physically?
At the highest level, the level of computational theory, we search for descriptions and explanations of the computational problems faced by the mind, how those problems fit into the larger cognitive picture, and what sorts of strategies the mind might use to solve those problems. This is the level at which many cognitive psychologists, linguists, and philosophers of mind operate. You look for a particular cognitive task or ability, develop hypotheses about how it works, and then look for behavioral data to support your hypotheses. At the next level, below computational theory, is the algorithmic level. Here your focus is on the specific computational properties of the cognitive system: how does it represent the information it receives and processes, and what algorithms does it use to process that information? Cognitive psychologists, linguists, computer scientists, and some neuroscientists all work at this level of description and explanation. Finally, since cognition happens in the brain, you have to understand how it happens in the brain. Thus, the hardware implementation level involves the study of the neural mechanisms underlying cognition. This is the level at which neuroscientists work, along with some computer scientists, linguists, cognitive psychologists, and the occasional philosopher of mind. Different individuals in different disciplines will place more emphasis on different levels of description, but Marr argues that to gain a complete picture of the mind, or vision specifically, we have to approach it from all three.
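A toy programming analogy (my illustration, not Marr's) may help make the levels concrete: the "computational theory" below specifies only what function is computed and why; two different algorithms then realize that same theory; and the hardware level would be whatever machine either one happens to run on.

```python
# Computational theory: the *what* and *why*. This is just a
# specification: map an input list to the same items in
# non-decreasing order.
def satisfies_theory(inp, out):
    return sorted(inp) == out and len(inp) == len(out)

# Representation and algorithm, realization 1: insertion sort,
# building the output list one item at a time.
def insertion_sort(items):
    out = []
    for x in items:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# Representation and algorithm, realization 2: merge sort, a
# different algorithm computing the very same function.
def merge_sort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [3, 1, 4, 1, 5, 9, 2, 6]
```

Both algorithms satisfy the one computational theory, which is exactly Marr's point: describing the theory, the algorithm, and the implementing hardware are three distinct explanatory projects.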

Marr does such a good job of clearly describing the importance of this approach, and the approach itself has been so influential in cognitive science, that I think anyone who is interested in the discipline should read this first chapter. Thanks to brainsci001, whose real name I do not know, for pointing me to the web copy of the chapter.

Wednesday, August 31, 2005

Katrina

New Orleans is one of my favorite places in the world, and I've met many incredibly nice and interesting people (with worse accents than mine ever was) there. It's been years since I've been to Mississippi or Alabama, but I have fond memories of those places too. It's painful to see the pictures of the devastation, and I can't help but feel the deepest sympathy for all the people affected, especially those who will have a difficult time dealing with and recovering from this tragedy financially. There is a great deal of poverty in southern Louisiana, Mississippi, and Alabama, and I'm sure that this will be too much for many to overcome on their own. I hope everyone who can is donating to the Red Cross, or another equally reputable relief organization, so that no one will have to get through this alone.

CogBlogGroup: Tomasello, Ch. 2, Part 2: The Human Ratchet

The third section of Chapter 2, beginning on p. 37, is where Tomasello really begins the book. Up to this point, he's been arguing that his hypothesis is viable, based on the differences in human and nonhuman primate social cognition. Now he begins to show what his hypothesis can give us. And he doesn't start small. He goes straight to language and mathematics, two of the most striking human achievements. And along the way, he takes a stand that, in my view, makes this an excellent book. But I will get to that in a moment. First, let's look at the ratchet effect and the sociogenesis of language and mathematics.

By page 37, Tomasello feels that he's given us enough evidence to convince the convinceable that one of the major cognitive differences between humans and nonhuman primates lies in our ability to understand others as intentional agents like ourselves. This understanding makes possible, among other things, imitative learning, and as he puts it, imitative learning gives us the ability to learn new behaviors by understanding the goals of the behavior, allowing us to "faithfully [preserve] newly innovated strategies" (p. 40), and this in turn puts us "into a new cognitive space" (p. 39). This placement into a new cognitive space is important, because creativity is dependent on existing knowledge. Innovations always need something to build on, and by learning more than mere associations through social interactions, we are provided with the building blocks for further innovations. A tool one person makes can be improved upon by the person who learns to use that tool from her, and then that tool can be improved upon, and so on, and so on, until our knowledge-base grows to the point at which we get the sorts of technological explosions that humans have seen over the last few millennia, and over the last century in particular. This process is further enhanced by the ability to work in direct collaboration with others, in essence pooling our knowledge resources, and producing even better innovations. The process of learning and then improving upon the innovations of those who come before us is the ratchet effect, and the two types of cooperation that I described, building, individually, on something that someone else has created, and building on something in direct collaboration with another individual, Tomasello calls virtual and simultaneous collaboration.
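A toy simulation (my own sketch, not a model from the book) shows why faithful preservation matters so much. Let each "generation" tinker with a tool: with imitative learning, the tinkerer starts from the best design so far (the ratchet holds); without it, each tinkerer starts from scratch and the group never accumulates anything.

```python
import random

def tool_quality_over_generations(n_gens, imitate, rng):
    """Each generation, one innovator tinkers with the tool.
    With faithful imitation the innovator starts from the best
    design so far (the ratchet); without it, from scratch."""
    best = 0.0
    history = []
    for _ in range(n_gens):
        start = best if imitate else 0.0
        attempt = start + rng.random()  # a small random innovation
        best = max(best, attempt)       # the ratchet never slips back
        history.append(best)
    return history

with_ratchet = tool_quality_over_generations(50, True, random.Random(42))
without = tool_quality_over_generations(50, False, random.Random(42))
```

With imitation, quality accumulates across all fifty generations; without it, the group's best tool can never exceed what a single individual can invent alone, no matter how many generations pass.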

As an example of virtual collaboration, Tomasello uses the development of the collections of linguistic symbols and constructions that make up individual languages. While children initially learn fairly exact copies of the symbols and constructions that their parents use, languages develop over time, meaning that as new individuals learn them, they make alterations (most often unconsciously, I'm sure) that end up making those symbols and constructions more efficient for speech and communication. On pages 43 and 44, he gives several examples of how English expressions have become more efficient over time, usually through shortening the expressions (by removing redundancies, e.g.). This is just the sort of collaborative construction process that the model of Hutchins and Hazelhurst simulates. Their model shows just how powerful such collaboration is in the creation of a shared system of linguistic symbols. For instance, their model shows how different systems of symbols can develop by dividing agents into groups that can only collaborate with other members of their groups.
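Hutchins and Hazelhurst's model is a connectionist simulation; as a much simpler stand-in, here is a toy "naming game" sketch (my construction, not their model) showing how repeated local adoption alone can fix a shared symbol within a group, while isolated groups end up with different symbols:

```python
import random

def simulate(initial_words, rounds, rng):
    """Each round, a random listener adopts a random speaker's word
    for some object. No new word is ever invented mid-run, so drift
    eventually fixes a single shared word in the group."""
    words = list(initial_words)
    for _ in range(rounds):
        speaker, listener = rng.sample(range(len(words)), 2)
        words[listener] = words[speaker]
    return words

rng = random.Random(0)
# Two isolated groups, each starting with ten idiosyncratic words.
group_a = simulate([f"a{i}" for i in range(10)], 5000, rng)
group_b = simulate([f"b{i}" for i in range(10)], 5000, rng)
```

Because the groups never interact, each converges internally but on a different word, a (very) crude analog of the group-specific symbol systems that dividing agents into non-interacting groups produces in their simulations.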

While there are many languages, language itself is shared by all human cultures. This may be, as Tomasello notes, because some aspects (perhaps its underlying structure) of language are innate. This is the Chomskyan position, and Tomasello is quick to point out that his hypothesis is not inconsistent with some versions of that position (he mentions the principles and parameters approach, which I briefly described here). But it is also consistent with the creation of a simple symbol system by our earliest human ancestors some 250,000 years ago. Out of this system, all languages would develop through the sort of collaboration Tomasello as well as Hutchins and Hazelhurst describe. Similarities across languages in their structure, which are fairly common as linguists will tell you, could then be a product of the fact that languages develop to work with existing cognitive and physical (e.g., the anatomy of the vocal tract) mechanisms.

After language, Tomasello describes how the sociogenesis of mathematics might have worked. Mathematics, unlike language, is not universal, and while some have posited that we have innate arithmetical abilities (the addition and subtraction of small numbers, specifically), few have argued that we have a complex mathematical cognition module. Instead, mathematical knowledge develops using existing cognitive mechanisms, and only to the extent that it is needed in a particular culture. As cultures develop more complex economic, architectural, and technological systems, mathematical knowledge grows more complex. And often, particularly at the highest levels, mathematics develops through the simultaneous collaboration of small groups of individuals (people we would call mathematicians). Each innovation along the way is built upon previous innovations, and thus the development of mathematics is a particularly impressive example of the ratchet effect.

After the discussion of language and mathematics as examples of the power of sociogenesis, Tomasello moves on to human ontogeny (human development), and he begins this discussion, which will comprise the bulk of Chapter 3, with a philosophical discussion. And I must say that this short discussion (in the subsection titled "Philosophical Nativism and Development," beginning on p. 48) is one of the main reasons I like this book so much, and why I think it's a very good first book for the reading group. This is because he touches on one of the foremost debates in cognitive science: the nativism vs empiricism debate. Since its inception in the mid-1950s, cognitive science has been dominated by a few far-reaching debates, the most famous of which are the imagery debate, connectionist vs symbolic architecture, and nativism vs empiricism (there has also been a debate over the use of similarity-based vs rule-based processes within the symbolicist camp, but this is more local and technical, and therefore less famous). The imagery debate was an incredibly heated debate in the 1970s over whether representational primitives (the building blocks of all representations) were images or propositions. This debate died out in the early 80s, with most cognitive scientists deciding that images and propositions both comprised our representational primitives, but with an emphasis on propositions. Recently, the debate has been making a comeback, largely due to the work of Larry Barsalou, who argues that most or all of our representations are images (which need not be visual, I should add). The connectionist vs symbolic architecture debate, which took off in the 1980s (perhaps because people were sick of the imagery debate), has been equally heated, and concerns the ways in which our brains process information. Do they do so using syntactic processes (rules) operating over discrete symbols (symbolic representations), or do they use analog representations that allow for similarity-based pattern recognition, as in connectionist models? Despite more than two decades of fighting, neither side has won out yet, and I wouldn't be surprised if this debate ended in a compromise like the one that initially ended the imagery debate.

But the nativism vs empiricism debate has been a part of cognitive science since its birth, and probably isn't going away anytime soon. As you probably know, cognitive science was born out of a revolt against behaviorism. Behaviorism was a particularly strong version of empiricism, in which all, or virtually all, behavior was learned through associations. Many of the cognitive revolutionaries rebelled against empiricism as they rebelled against behaviorism. The most famous of these is, of course, Noam Chomsky, whose nativist theory of language has dominated linguistics and cognitive science in general for nearly 50 years. But empiricism didn't die with behaviorism. So for those same 50 years, people have debated whether pretty much every particular cognitive ability and process is innate or learned. For the most part, cognitive scientists now believe that some processes are innate, and some are learned, and no one is a strict nativist or empiricist (except Jerry Fodor, who is a strict nativist, believing that even concepts are innate). But there are still local debates over innateness, and in most cases, a particular cognitive ability is thought to be either innate or learned, with no middle ground.

This is where Tomasello takes issue with the way cognitive scientists view the mind. He argues that this either/or way of approaching things is bad biology, and it's not very useful either. Very few of our cognitive abilities are present, in their entirety, at birth (though if they are, or even if primitive versions are available early in infancy, nativists will argue that they are innate). Instead, these abilities develop over time and through our interactions with our environment. And this, Tomasello believes, is what we should be trying to understand: how these abilities develop ontogenetically. Trying to decide whether the abilities are innate or not doesn't help us in that task, unless it is directed at attempting to understand that development. I think this is a very important message, and one that I obviously haven't fully taken to heart (in a paragraph in this post, I've taken an either/or perspective, even). But it is a message that we cognitive scientists must learn if we're going to be able to use the study of our cognitive development to fully understand human cognition.

Now, I can't say that I find Tomasello's approach to the study of development entirely satisfying. His approach is to divide learning into two types: individual and cultural. Individual learning occurs through insight that we gain while interacting with our environment on our own, while cultural learning occurs through collaborative interactions with other individuals. This may be a fruitful approach experimentally, in that it may be important to control either individual or cultural factors, to the extent that it is possible, in order to understand particular factors in cognitive development. But in practice, and certainly in actual development, I'm not sure it's really possible to separate the two. Sure, human children are occasionally alone (some more than others), but for the most part, our development takes place in a rich social environment, and while not all of our interactions with that environment are collaborative, they are all impacted by culture. Where does cultural learning end, and individual learning begin? I'm not sure cognitive science can provide an answer to that question, or that it needs to.

Still, I think Tomasello's general approach, which is to focus on development, be it individual or cultural, is the right one. God knows how many hours of research and pages in journals have been wasted arguing over whether a particular cognitive ability is innate, and with little insight into the ability itself coming of it. If we focus on development, we can gain great insight into the different aspects of those individual abilities. The book chapter on causal reasoning in infants that I linked in the last post on Tomasello (it's here, if you didn't read it then) is a great example of this. By looking at the stages of the development of infants' concepts of causation, the authors show us different aspects that are contained in our fully developed causal reasoning. Of course, if you read the chapter through, you'll notice that the authors end up taking a stand on one side of the nativism-empiricism debate, which is unnecessary, but old habits die hard. Fortunately, that wasn't the main focus of the chapter, and as a result, their review of the developmental research is enlightening.

In the next chapter, Tomasello will focus exclusively on development. He'll occasionally mention nativist and empiricist theories, and I must admit that at times it looks like he's taking a distinctly empiricist position, even if he believes that doing so is unproductive. But for the most part, his focus is on the developmental sequences in cognitive development. I hope his approach will serve as a model for all of those cognitive scientists who are stuck in mostly fruitless debates over innateness.

Monday, August 29, 2005

Two Good Resources on Language Evolution

One of the central topics in Tomasello's The Cultural Origins of Human Cognition is language -- how it evolved, how it develops ontogenetically, and how it enhances our cognitive abilities. But Tomasello discusses language at a fairly high level (e.g., syntax, semantics, and pragmatics). For spoken language to have evolved (biologically or culturally), several changes in the morphology of our vocal tracts had to take place. In addition, speech must be designed to take advantage of certain properties of our auditory systems, some of which may have evolved since the evolutionary lines of humans and other primates diverged. So, in order to really understand the language faculty, and how it evolved, you have to understand speech production and speech perception. With that in mind, here are two good papers on the internet.

The first is a book chapter titled "What Are the Uniquely Human Components of the Language Faculty?" by Marc Hauser and Tecumseh Fitch. Yes, that Hauser and Fitch. But this chapter isn't about the recursion hypothesis. It's a nice historical review of the speech production and perception literature. Unfortunately, since it's a book chapter, the scanned version doesn't include the references, but if they cite something that you think you'd like to read, and you can't find the full reference, let me know, and I will find it for you.

The second source, which is cited by Hauser and Fitch, is a Behavioral and Brain Sciences paper by Peter MacNeilage titled "The Frame/Content Theory of Evolution of Speech Production." It's mostly about speech production, as the title suggests, and contains a lot of great info on the physical aspects of speech, along with a theory about how these and neural organization influenced the evolution of language.

Dennett to ID: Get in Line

If you haven't read this opinion piece by Daniel Dennett in the New York Times (via The Fly Bottle), then I highly suggest you do so now. It may be the best opinion piece on Intelligent Design that I've seen yet, not because he says anything new, but because he distills the issue down to its essence and leaves it at that. When he does so, Intelligent Design is shown in its true light, which is to say, a light shining on a scientifically empty idea. After I read it, I tried to choose the passage I wanted to quote here, but each paragraph is really good, so it's damn near impossible to decide. So I'll just pick a passage semi-randomly:

Instead [of doing real science], the proponents of intelligent design use a ploy that works something like this. First you misuse or misdescribe some scientist's work. Then you get an angry rebuttal. Then, instead of dealing forthrightly with the charges leveled, you cite the rebuttal as evidence that there is a "controversy" to teach.

Note that the trick is content-free. You can use it on any topic. "Smith's work in geology supports my argument that the earth is flat," you say, misrepresenting Smith's work. When Smith responds with a denunciation of your misuse of her work, you respond, saying something like: "See what a controversy we have here? Professor Smith and I are locked in a titanic scientific debate. We should teach the controversy in the classrooms." And here is the delicious part: you can often exploit the very technicality of the issues to your own advantage, counting on most of us to miss the point in all the difficult details.

William Dembski, one of the most vocal supporters of intelligent design, notes that he provoked Thomas Schneider, a biologist, into a response that Dr. Dembski characterizes as "some hair-splitting that could only look ridiculous to outside observers." What looks to scientists - and is - a knockout objection by Dr. Schneider is portrayed to most everyone else as ridiculous hair-splitting.

Now go read the rest.

CogBlogGroup: Tomasello Chapter 2, Part 1: What the Apes Got

Since Chapter 2 is pretty dense, I'm going to talk about it in two separate posts. The first is about chimpanzees (mostly), and the second about humans.

Tomasello begins Chapter 2 with the distinction that drives the entire book: the distinction between biological and cultural inheritance. In fact, he states the hypothesis of the book in two sentences just a few paragraphs into Ch. 2:
My particular claim is that in the cognitive realm the biological inheritance of humans is very much like that of other primates. There is just one major difference, and that is the fact that human beings "identify" with conspecifics more deeply than do other primates. (p. 14)
To argue for this thesis, he has to answer two questions, which he gives on p. 15:
  • How does the cognition of primates differ from that of other mammals?
  • How does the cognition of humans differ from that of other primates?
The first of these is important, because if Tomasello's hypothesis is correct, then understanding the differences between our closest evolutionary relatives, other primates, and other mammals will help us understand which aspects of human cognition developed through biological inheritance. Answering the second question will then serve two purposes. First, if he can demonstrate that humans can "'identify' with conspecifics more deeply than do other primates," then he can argue that other cognitive differences between humans and other primates may have developed through cultural transmission. Second, understanding the differences between human cognition and the cognition of other primates, over and above differences in theory of mind, will allow Tomasello to determine what it is, exactly, that may have developed through cultural transmission. The primary purpose of Chapter 2 is to do the first of these. He spends much of the rest of the book arguing that the demonstrated differences in theory of mind can in fact lead to the other cognitive differences we observe between humans and primates.

So what can most mammals do, cognitively? Tomasello gives two lists, the first general and the second social (p. 16-17). I'll just discuss one, because he doesn't explain it very well. On the first list (non-social cognitive abilities), he includes the ability to pass object permanence tests. What is an object permanence test? It's just what it sounds like. It's a test to determine if an individual (human or nonhuman) understands the basic properties of solid objects, which include numerosity (most things don't spontaneously multiply themselves), motion, and permanence (things don't just disappear). Here are some examples of classic Piagetian object permanence tests (from this paper):

These were obviously designed for children. In a typical experiment, the child, often a very young infant, will be shown several different versions, some of which an individual would expect if she understood the properties of solid objects, and some of which she wouldn't. Experimenters determine whether the child understands these properties by measuring how surprised the child is when a particular event occurs (often by measuring looking time or sucking, both of which will increase if the child sees something novel or unexpected). For example, in the top two sets of drawings above, an object (in this case, a toy train) starts out in plain view, and then disappears behind a larger solid object (the rock). If the train does not come out the other side, then the child should expect the object to still be behind the rock when it is lifted. If the train has come into view on the other side of the rock, the child should be surprised to see a train when the rock is lifted. Thus, if a child looks longer in the second case than in the first, we can infer that the child understands that particular aspect of object permanence. Human infants understand this from a very, very young age, and other mammals appear to understand it as well.
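The inferential logic of these looking-time studies can be sketched with made-up numbers (the looking times below are hypothetical, not from any actual study): if infants look reliably longer at the "impossible" event than at the possible one, a simple two-sample comparison picks that up.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical looking times (seconds) for one group of infants.
expected = [8.1, 7.4, 9.0, 6.8, 7.9, 8.3]        # possible event
unexpected = [12.5, 11.0, 13.2, 10.4, 12.1, 11.8]  # "impossible" event

def welch_t(a, b):
    """Welch's t statistic for two independent samples: the mean
    difference scaled by its standard error."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(b) - mean(a)) / sqrt(va / len(a) + vb / len(b))

t = welch_t(expected, unexpected)
```

A large positive t here would license the inference the experimenters want: the infants found the "impossible" event more novel, and so presumably represented the violated property of objects.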

Back to Tomasello. After listing many similarities between primates and other mammals, he singles out one particular difference: the ability of primates to understand relational categories, particularly in the form of "third-party social relationships" (p. 17). While most mammals are able to understand kinship and dominance relationships between themselves and other individuals, primates are able to understand kinship and dominance relationships between two other individuals, and to use this knowledge in interacting with those individuals. Primates, unlike other mammals, are also able to understand relational categories in non-social domains, though as Tomasello notes, they must be extensively trained to do so. In essence, you could say that primates, but not other mammals, developed a concept of "same" in the social realm, and with extensive training, they can extend this concept to non-social domains. For example, if you give a dog two objects, one of which is larger than the other, and reward it for picking the larger object, it will eventually pick the larger object all of the time. If you then replace the first two objects with two new objects of different sizes, one larger than the other, the dog will not know to pick the larger object until it has been trained to pick that specific object. It doesn't understand that the two pairs of objects involve the same relationship: one is bigger than the other. Chimps, on the other hand, can be trained to do this, though it takes a while. Human children, by contrast, do this early in development and without any training.
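The contrast between learning an object and learning a relation can be caricatured in a few lines (both learners below are entirely hypothetical, with object sizes coded as numbers):

```python
def object_learner(trained_winner, pair):
    """A learner that learned "pick THIS object": it can only
    respond when the specific trained object is present."""
    return trained_winner if trained_winner in pair else None

def relational_learner(pair):
    """A learner that learned the relation "pick the larger one":
    the relation transfers to any new pair of objects."""
    return max(pair)

# Training pair: objects of size 3 and 5, with picking 5 rewarded.
# A transfer pair made of brand-new objects, sizes 7 and 9:
transfer = (7, 9)
```

The object-bound learner succeeds on the training pair but has nothing to say about the transfer pair, while the relational learner generalizes immediately, the difference Tomasello is pointing at between the untrained dog and the human child.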

This is where Tomasello draws the line between primate cognition and the cognition of other mammals. He then begins to draw the line between human and primate cognition. The first difference he notes is human understanding of causation. And it is this understanding that allows humans to attribute beliefs and goals to other humans. Only after gaining the ability to represent causal relations between two events (rather than simply associating the two events) can we understand that an individual's beliefs or goals caused him or her to act in a certain way. Interestingly, just as Tomasello believes that primate understanding of relational categories in non-social domains is an extension of their understanding of such categories in social domains, he believes that human understanding of causal relations in general arose from our understanding of mental causation.

I have to admit, I'm doubtful about that. Whereas chimpanzees have a very difficult time extending their social understanding of relational categories to physical relational categories, human infants appear to develop a concept of causation independently of the development of theory of mind. While there isn't much evidence for a "causation" module in the human brain, infants do begin to develop an increasingly complex concept of causation beginning at about six months of age, which is long before most of our theory of mind capabilities begin to emerge (check out this book chapter for a great review of the literature on infant causal knowledge). If our causal knowledge were dependent on our knowledge of mental causation, we would expect the latter to develop before, or at least simultaneously with, the former.

But I'm digressing again. From page 26 on, Tomasello discusses primate culture. By this time, you've probably realized that "culture" is a pretty loaded word. Many anthropologists and biologists use it to refer to something like the following1:
[B]ehavioral patterns that diffuse across a group of individuals through some form of social learning and is subsequently displayed by a significant membership of the group. (p. 55)
Tomasello, on the other hand, reserves the term for cumulative culture that is achieved through imitative learning and direct teaching. And this is where we further see the dividing line between humans and nonhuman primates.

If we use the first definition of culture, then many nonhuman species can be said to display cultural cognition. Chimps, bonobos, macaques, and even dolphins and whales (as Clark notes) can develop behavioral patterns, ranging from tool use to courtship rituals, that are specific to particular isolated groups, and which are learned by the young through observing adults (see this paper for a nice review of different types of cultural behaviors, in the broader sense, in chimps). But Tomasello argues that these aren't instances of what he refers to as culture, because they are not transmitted through teaching and imitation. Instead, they are transmitted through emulation learning and ontogenetic ritualization. Both of these, he notes, require a great deal of intelligence, but neither constitutes true cultural transmission.

Here is the example that Tomasello gives of emulation learning:
If a mother rolls a log and eats the insects underneath, her child will very likely follow suit. This is simply because the child learned from the mother's act that there are insects under the log -- a fact she did not know and very likely would not have discovered on her own. But she did not learn from her mother how to roll a log or to eat insects: these are things she already knew how to do or could learn how to do on her own. (p. 29)
The distinction between emulation learning and true imitation is very important. As Tomasello notes, there have been few, if any, observed cases of nonhuman primate parents (specifically mothers) teaching behaviors to their children. Instead, the mother performs a behavior with her child nearby, the child observes the end result (in the example above, the exposing of food), and repeats the act not because it's what the mother did, but because it understands the affordances of the objects involved, and uses that understanding to achieve the same end result that its mother did. This requires a very complex understanding of affordances, which, in the words of Donald Norman, are:
[T]he perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used. A chair affords ("is for") support, and, therefore, affords sitting. (p. 9)
It is through the complex understanding of the affordances of objects that nonhuman primates are able to innovatively develop tools. In order to use a stick to retrieve termites from a mound by placing it into holes in the mound, a chimpanzee must understand that a stick of a certain size will fit into holes of a certain size, for example. But because chimpanzees are not learning to use tools from their conspecifics through imitation, but rather through emulation learning, their ability to develop and use tools is highly limited. This is reflected in the differences between human and nonhuman primate tool use, which Marc Hauser and Laurie Santos list in the following table2:

Notice that both nonhuman primates and humans are able to recognize the "relevant design features" and use "multiple materials for different tools." Both of these require understanding the affordances of objects, but little else. However, the rest of the abilities on the list, which only humans exhibit, require more than this. They require an understanding of causation (for developing tools that consist of "parts with different functions," for example), the ability to build upon existing uses of a tool (cumulative culture), and most likely, the ability to learn the use of a tool through something more than the mere association of an event and an outcome that is involved in emulation learning. If you only learn an association between an event (e.g., rolling a log or putting a stick in a termite mound) and an outcome (e.g., getting food), it will be much more difficult for you to use the object or action to achieve other, perhaps unrelated outcomes.

The second type of social learning that Tomasello attributes to chimps is ontogenetic ritualization (pp. 31-33). He describes this type of learning in the context of gestural communication. Nonhuman primates develop group-specific, or even individual-specific, gestural signals through a process in which one or more individuals come to associate a particular gesture with a particular outcome. Once again, this type of learning does not require imitation or an understanding of the goals of the gesturer/observer. Thus he writes:
[Emulation] learning and ontogenetic ritualization are precisely the kinds of social learning one would expect of organisms that are very intelligent and quick to learn, but that do not understand others as intentional agents with whom they can align themselves. (p. 33)
Tomasello argues that emulation learning and ontogenetic ritualization can explain all of the group-specific behaviors observed in several primate species. While he doesn't address the issue, because he is more concerned with our closest evolutionary relatives, these types of learning might also explain what many see as cultural learning in other mammal species, such as whales and dolphins. For example, in an article linked by Clark in a post on Chapter 1, researchers report observing orcas develop a unique technique for capturing gulls. Once one whale started using this technique, the rest of the whales in the tank picked it up and began using it successfully as well. This sort of learning is consistent with other examples of cultural transmission (in the broader sense) observed in toothed whales (see this paper for more examples). But at this point, there's no reason to think that the orcas were learning through imitation rather than through emulation, in which they observed a behavior and an outcome they desired, and thus emulated the behavior to achieve the same outcome. It may turn out that orcas, unlike nonhuman primates, are able to learn through imitation, which might present an interesting case of convergent evolution (and wouldn't affect Tomasello's arguments directly), but at this point there doesn't appear to be any reason to believe that's the case.

At the end of the section on primate culture in Chapter 2, Tomasello concludes that the evidence overwhelmingly indicates that chimpanzees are not able to transmit information culturally (in his sense of the word), and that this is because they are not able to understand others as intentional agents. Thus, he feels he's answered both of the questions I quoted at the beginning of this post, and that the answers support his hypothesis. In the next post on Chapter 2 (which will be much shorter, I promise), I'll talk about his discussion of human culture. Hopefully by the end of Chapter 2, everyone will have a good understanding of just where Tomasello is going, and why.

1 Sinha, A. (2005). Not in their genes: Phenotypic flexibility, behavioural traditions and cultural evolution in wild bonnet macaques. Journal of Biosciences, 30(1), 51-64.
2 Table 1 (p. 20) in Hauser, M.D., & Santos, L.R. (In press). The evolutionary ancestry of our knowledge of tools: From percepts to concepts.

Sunday, August 28, 2005

Argh Again

This post is a personal rant. Please ignore it.

It's been a busy couple of weeks, what with the beginning of the semester and all, and for the last few days, my son has been sick (sick and grumpy), so I've been way behind on my blog reading for some time. Today, my son is with his mother, and I'm just sitting around, so I decided to catch up on my blog reading. In the process, I came across this post by Brian Leiter from two weeks ago. It's an 1100+ word rant on a review of Friedrich Nietzsche by Curtis Cate in the New York Times. If you're a reader of Leiter, you probably know that 1100+ words is pretty long for a post that Leiter has actually written (rather than mostly quoted). And he is a Nietzsche expert, so I was eager to read it.

The gist of the post is that Leiter doesn't think that the author of the review, William T. Vollmann, knows what he's talking about, and he provides several examples of Vollmann's ignorance to prove it. As I read it, I was particularly struck by this in the last paragraph:
The broader issue, though, is about the responsibilities of newspapers, like The New York Times, that aspire to be serious and intellectual. If a decision is made to commission a long review of a book, why not enlist someone who actually knows something?
Now that's a sentiment with which I can agree wholeheartedly, as anyone who's read this blog knows (and hey, I told you this was a personal rant, so why are you still reading it?). But as I read it, something didn't sit right with me. It's not that I think Leiter's wrong about Vollmann's knowledge of Nietzsche, it's that I don't think Leiter really believes what he says about the "responsibilities of newspapers." I mean, I think he thinks he believes it, but I don't think he really believes it. You see, when I read it, my mind immediately went back to this post. In it, Leiter favorably links to this review of David Buller's Adapting Minds, by Sharon Begley. So favorably, in fact, that Leiter wrote:
Law-and-economics folks, who are often especially partial to this shoddy science, would do well to read the review and the book. [Emphasis mine]
But wait a second. Wait a friggin' second! That review was written by someone with no qualifications in psychology, or philosophy, and it shows. She's a science editor for the Wall Street Journal, and has written on all sorts of scientific topics (ranging from psychology to astronomy), but she's not an expert in any of the areas of psychology or biology that Buller covers. To give one example of the ignorance of the issues that Begley displays, the last sentence of the review read:
After "Adapting Minds," it is impossible to ever again think that human behavior is the Stone Age artifact that evolutionary psychology claims.
Ummm... no. Buller's book is very good in places, especially where he sticks to existing critiques of Evolutionary Psychology, but it fails where a critique of EP must be at its strongest, in critiquing the work of Cosmides and Tooby (see the link under "no" for why this is the area in which it is most important to critique EP, and why Buller fails to do so effectively). And thus, contrary to the title of Leiter's post and the last sentence of Begley's review, Buller has not demolished Evolutionary Psychology. Anyone who was qualified to write that review would have known that, as would anyone who was qualified to evaluate the book and the review.

When I pointed this out, and criticized Leiter for endorsing the review and hyperbolically claiming that the book itself spelled the end of Evolutionary Psychology, Leiter was none too happy, and he made this very clear in an email exchange. All of this raises the question: why does Leiter have no problem endorsing a book review by someone who is not qualified to write it, to the point of being rather angry about any criticism of that endorsement, in one instance, but write an extended post ending with a criticism of newspapers that use unqualified reviewers in another? My best guess is that Leiter endorsed Begley's review because it came to the same conclusion he had, and that he cared not about the substance of the review itself; but when the NYT published a review on a subject that is near and dear to him (he's written a book on Nietzsche, as I'm sure you know), the substance of the review suddenly became important to him. My guess may be wrong, but I can't think of any answer to that question in which Leiter comes out looking good.

OK, thus ends my rant. I'll admit that it's probably unhealthy for me to have spent that much time on it, but this sort of thing really bugs me. I mean, Leiter and I agree about the merits of Evolutionary Psychology, even if the argument he has presented against it isn't a very good one. Heck, Leiter and I probably agree on a lot of things, as we're both on the same end of the political spectrum. Though I must say it's hard to tell, because it's rare that Leiter actually offers any positive political positions on his blog, and his scholarly writings aren't overtly political (at least not the ones I know of). But regardless of whether we ultimately agree or disagree, Leiter's selective criticism of intellectual irresponsibility strikes me as just another instance of the attitude that leads to the politicization of science and other scholarly endeavors. You just can't decide which scholarship is good and which is bad based on whether you agree with the conclusions. Otherwise, you're no better than the creationists and global-warming "skeptics," even if the issues are less pressing.

Saturday, August 27, 2005

When Science, and the Philosophy of it, Meet Blogs and the Press

I feel sorry for Elisabeth Lloyd. I haven't read her book, as I'm not really up on all of the "evolution of the orgasm" literature, so reading a critique of a particular perspective would be pretty pointless. But what is painfully clear is that she has been a victim of the inability of the press to understand science, and the further inability of people to look past the bad reporting and their own prejudices when evaluating science. If ever there was an argument for better science education and reporting, it is Lloyd's post over at Philosophy of Biology. She provides several examples of outrageous comments on her book and the ideas within it. I'll just give a couple:
  • And we all think Dr. Elisabeth Lloyd is something of an Uncle Tom for women.
  • Thinking women, liberal or conservative, will not be thrilled by this book. Fundamentalist Christians, on the other hand, will be ecstatic.
As these and the rest of Lloyd's examples show, conservatives aren't alone in their politicized ignorance of science. Lloyd notes that she hasn't "cherry-picked the worst of the bloggers; these are nearly all intelligent and relatively scientifically-informed feminists who are keenly worried about what they have heard." I think Lloyd's description of them as "scientifically-informed" may be a bit generous, but if it is accurate, then the quotes she attributes to them are that much worse, because they are made by people who should know better.

I've always believed that there are two general kinds of creationists: the ignorant and the stupid. The ignorant are your everyday, garden-variety creationists, who would ask you things like, "If evolution is true, then why are there still monkeys?" or claim that "Evolution violates the Second Law of Thermodynamics." These are people who know nothing about evolution (or thermodynamics, for that matter), and show it in everything they say about it. The vast majority of these people don't really care about evolution, and could go days or weeks without ever thinking or talking about it. For them, it's largely about religion and culture, and science is at most a scapegoat. Then there are the stupid creationists. These are the people who do have a little knowledge of evolution, or mathematics, physics, or chemistry, and who use it to actively campaign against evolution for largely political (notice I didn't say religious) and self-aggrandizing purposes. They're the ones who spend as much of their time as they can talking about evolution and science; who make sure that the ignorant remain that way, and that they think about evolution and science as much as they can, but from the stupid creationists' perspective. They really do have enough knowledge to see where they are wrong, or at least to know that they don't know enough to claim that they know what they're talking about. Feminists who are "scientifically informed," but still spout nonsense like the above, clearly without having read Lloyd's book, are no better than the stupid creationists, and just as dangerous. And just as in the case of creationism, the press is helping them along.

Tuesday, August 23, 2005

Traveling from West to East, Cognitively: The Causes of Cultural Differences in Reasoning

Since the publication of Richard Nisbett's book The Geography of Thought, researchers have been scrambling to study differences in cognitive style between Westerners (Europeans and Americans) and East Asians. In his book, Nisbett argues that Westerners reason more analytically, while East Asians reason more holistically, intuitively, and dialectically (meaning they're more likely to consider alternative views and take the middle ground). For example, East Asians prefer dialectical proverbs, in which there is a logical contradiction (e.g., "Too humble is half proud"), to proverbs in which there is no such contradiction (e.g., "For example is no proof"), while Americans prefer the latter. East Asians also prefer dialectical solutions to social problems that involve contradictions, like the following:
Mary, Phoebe, and Julie all have daughters. Each mother has held a set of values which has guided her efforts to raise her daughter. Now the daughters have grown up, and each of them is rejecting many of her mother's values. How did it happen and what should they do?
Americans, on the other hand, prefer solutions that don't contain a contradiction. Thus the East Asian participants in the experiment that used stories like the one above tended to lay some of the blame on both the mother and the daughters, while Americans tended to blame one or the other1. Nisbett and his colleagues have taken these findings to indicate that East Asians prefer dialectical to formal reasoning.

This preference for dialectical reasoning also manifests itself as a preference for intuitive over rule-based reasoning in other domains, such as categorization. Norenzayan et al. demonstrated this preference in a categorization task2. As an example, they use the following question: "Is the Pope a bachelor?" If we adopt a rule-based approach, then we must conclude that the Pope is a bachelor because he is an unmarried adult male. However, if we take a more intuitive approach, we will tend to think about past examples of bachelors, and conclude that since the Pope isn't really very similar to those past examples, he is not a bachelor. When given questions like these, Americans tended to adopt a rule-based approach, while Chinese and Korean participants tended to categorize using a more intuitive, exemplar-based approach.
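To make the two strategies concrete, here's a toy illustration of my own (not from Norenzayan et al.; the features, the stored exemplar, and the similarity threshold are all invented for the example) of rule-based versus exemplar-based categorization of "bachelor":

```python
# Hypothetical sketch: two ways to decide whether someone is a bachelor.

def rule_based(person):
    # Apply the definition strictly: an unmarried adult male.
    return person["male"] and person["adult"] and not person["married"]

FEATURES = ("male", "adult", "married", "dates")

def exemplar_based(person, exemplars, threshold=0.9):
    # Compare the person to stored examples of bachelors, feature by
    # feature, and say "bachelor" only if they closely resemble one.
    def similarity(a, b):
        return sum(a[f] == b[f] for f in FEATURES) / len(FEATURES)
    return max(similarity(person, e) for e in exemplars) >= threshold

pope = {"male": True, "adult": True, "married": False, "dates": False}
typical_bachelors = [
    {"male": True, "adult": True, "married": False, "dates": True},
]

print(rule_based(pope))                         # True: fits the rule
print(exemplar_based(pope, typical_bachelors))  # False: unlike the exemplars
```

The two functions disagree on exactly the sort of atypical case the researchers used, which is what makes the question diagnostic of reasoning style.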

In addition to dialectical reasoning, Nisbett has argued that East Asians prefer holistic reasoning to analytical reasoning. This is most evident in situations in which East Asians rely more on context than Westerners do. For example, Masuda and Nisbett3 conducted a memory experiment in which they presented Japanese and American participants with animated scenes. After a delay, they presented the scenes again, and asked the participants to answer whether an object in the scene was one they had seen before. Some of the objects were from the original scenes, and some were new. In addition, some of the old objects were presented in new scenes (i.e., a novel context). They found that Japanese participants made more errors when the object was in a novel context, while the Americans were unaffected by the context.

A just-published study by Chua et al. demonstrates the increased attention to context at the level of eye movements during the processing of scenes. John Hawks and Razib have already posted good descriptions of the study, so I don't have to. I'll just say that the researchers found that Chinese participants tended to look at the background (context) much more than American participants, who tended to look at the focal objects more than the Chinese participants did.

All of the work by Nisbett and colleagues, including the new eye movement study by Chua et al., does little more than demonstrate effects. There hasn't been much of an effort to test hypotheses about why these cultural differences in reasoning styles might exist. Nisbett has explained these differences as being the result of the dominance of individualism in Western cultures, and collectivism in Eastern cultures, but there is no experimental evidence to support this view. In fact, the very distinction between collectivist and individualist cultures may not be a very good one, as research attempting to measure cross-cultural differences on these dimensions has shown4.

So what might cause these differences? Researchers have recently suggested that pervasive differences in social environments across cultures may play a role. In particular, they have suggested that fear of isolation, which manifests as a fear of being excluded from a group due to, among other things, failure to conform to majority opinions or trends, might affect reasoning styles, and they have shown that East Asians have higher levels of fear of isolation, on average, than Westerners5. But as they say, correlation is not causation. In order to really test the idea that fear of isolation causes differences in reasoning, you'd have to randomly assign participants to high or low fear of isolation conditions. That means experimentally manipulating fear of isolation. Then you'd have to show that inducing high levels of fear of isolation results in reasoning differences similar to those between East Asians and Westerners. So that's what they did6. They first randomly assigned Western (American) participants into one of two groups, and then had them write about past experiences. In the low fear of isolation group, they wrote about an experience in which they had isolated another individual from their group. In the high fear of isolation condition, they wrote about an experience of being isolated from a group. This caused the participants in the high fear of isolation group to score significantly higher on a fear of isolation test than those in the low fear of isolation group.

After manipulating fear of isolation levels, the researchers gave the participants tasks similar to those in the studies described above. In one experiment, they had participants choose between dialectical and logically consistent proverbs; in another, they tested the effects of context on memory. They found that participants in the high fear of isolation group preferred dialectical proverbs relative to the low fear of isolation group (they also showed that this preference correlated positively with individual differences in fear of isolation in an East Asian sample). In the second experiment, they gave participants a memory task identical to the one in Masuda and Nisbett, and showed that participants in the high fear of isolation group relied more on context in a recognition memory task than participants in the low fear of isolation group did. Thus, simply by manipulating fear of isolation levels, you can produce reasoning preferences in Westerners that resemble those of East Asians.

From these experiments we can conclude that fear of isolation plays a causal role in cultural differences in reasoning. Exactly how it affects reasoning was not studied, but it's likely that fear of isolation makes people pay more attention to multiple individuals (creating a preference for dialectical reasoning) and to the overall context. The authors of the studies note, however, that fear of isolation is likely not the whole story, and I suspect that in the future, researchers will look at other social factors in order to begin to piece together the causal picture. At the very least, though, these experiments rule out one explanation. Because we can induce Westerners to reason like East Asians, it's highly unlikely that there are large differences in the cognitive architectures of the two groups.


1 Both of these findings are from Sanchez-Burks, J., Nisbett, R. E., & Ybarra, O. (2000). Cultural styles, relationship schemas, and prejudice against outgroups. Journal of Personality and Social Psychology, 79, 174-189.
2 Norenzayan, A., Smith, E.E., Kim, B.J., & Nisbett, R.E. (In press). Cultural preferences for formal versus intuitive reasoning. Cognitive Science.
3 Masuda, T., & Nisbett, R.E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. Journal of Personality and Social Psychology, 81, 922-934.
4 Kim, K., & Markman, A.B. (In press). Differences in fear of isolation as an explanation of cultural differences: Evidence from memory and reasoning. Journal of Experimental Social Psychology.
5 Kim and Markman, In Press.
6 Kim and Markman, In Press.

The Return of Hauser, Chomsky, and Fitch

Only this time, it's Fitch, Hauser, and Chomsky. They're playing musical authors.

Anyway, recall that Hauser, Chomsky, and Fitch published a paper a few years ago in Science in which they argued that an explanation of the evolution of language may require only one new ability: recursion. In fact, recursion may be the only new ability we need to explain a whole host of cognitive skills that are unique to humans. You may think Tomasello oversimplifies the case in arguing that all we needed were some adaptations that allowed for collaborative learning, but HCF have brought oversimplification to a level not before witnessed in the behavioral sciences. Steven Pinker and Ray Jackendoff responded to this silliness with a very good paper of their own, in which they show that the HCF theory just doesn't hold water. If you're keeping up with this debate, you might be interested to know that HCF, now FHC, have written a response, which will be published in the same journal that published the Pinker and Jackendoff paper (thanks to Razib, who I should note is a member of the reading group, which automatically makes him cool, for pointing this out).
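For readers who want a concrete sense of what recursion buys a language user, here's a toy sketch of my own (not from HCF; the vocabulary and the single grammar rule are invented for the example) showing how one recursive rule generates noun phrases of unbounded depth:

```python
# Hypothetical sketch: a single recursive rule -- a noun phrase may contain
# a relative clause whose subject is itself a noun phrase -- produces
# structures of arbitrary embedding depth.

def noun_phrase(depth):
    nouns = ["the cat", "the dog", "the rat"]
    if depth == 0:
        return nouns[0]
    # The recursive step: embed a smaller noun phrase inside this one.
    return f"{nouns[depth % 3]} that {noun_phrase(depth - 1)} chased"

print(noun_phrase(0))  # "the cat"
print(noun_phrase(1))  # "the dog that the cat chased"
print(noun_phrase(2))  # "the rat that the dog that the cat chased chased"
```

Nothing in the rule caps the depth; the fact that humans can in principle produce and parse such embeddings (even if performance breaks down after a couple of levels) is the kind of property HCF pin on recursion.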

If you weren't convinced by the HCF paper, you won't be convinced by the FHC. It's more of the same, with a lot of time spent detailing what they perceive as misconceptions on the part of Pinker and Jackendoff. They do make one very good point, which I guarantee was insisted upon by Hauser (because he's written about it elsewhere), namely that speculation about the adaptive function of language at different points in its evolution is worthless. It's better to use comparative methods (comparing human linguistic abilities to the cognitive and communication abilities of nonhumans, especially monkeys and apes), and data from contemporary humans, because, well, that's real data, while idle speculation isn't. In addition, I do like their "faculty of language in the narrow sense" (FLN) and "faculty of language in the broad sense" (FLB) distinction. The FLN is just those aspects of human language that are unique to humans, and in humans, unique to language. The FLB is language in general, which can include properties that human languages share with nonhuman animals' communication systems, as well as aspects of language that are used by other cognitive systems, and thus not unique to language. I'm not so sure that Pinker and Jackendoff have mischaracterized this distinction, despite FHC's protests. But all in all, the paper is a good read, and I recommend it if you're interested in this sort of thing.

But beware! In the paper, there is a reference to an online appendix that discusses Pinker and Jackendoff's misconceptions about the Minimalist Program. Do not, if you value your sanity, read this appendix! It was obviously written by Chomsky, as it might as well have been written by the Chomskybot. It's no wonder, then, that the editor asked them to remove it from the published version of the paper. The damn thing's well-nigh unreadable. Honestly, it amazes me that in an appendix designed to clear up misconceptions that are apparently based on a lack of requisite background knowledge on Pinker and Jackendoff's part, Chomsky writes an explanation that has holes large enough to drive an entire fleet of Mack trucks filled with background knowledge through. And on top of being about as clear as mud, the appendix is filled with Chomsky's typical ego-filled "where I go, so goes linguistics" rhetoric. When Chomsky ends the appendix with the statement, "Little [of Pinker and Jackendoff's critique] survives such analysis, as far as we can tell" (emphasis mine), I wonder if the "we" refers to Chomsky, with Hauser and Fitch just nodding their heads and saying, "Whatever you say, Noam. Whatever you say." Ugh.

I would love to hear from a linguist whether the appendix makes any sense to them. My suspicion is that for most, it won't. The rest of you, stay as far away from that appendix as is humanly possible. To borrow a phrase from Nietzsche, "When you gaze long into the Chomsky, the Chomsky also gazes into you."

UPDATE: Mark Liberman responds to my plea to linguists, writing:
I'll limit myself to observing that it's entirely "inside baseball": seven pages of text that mention no linguistic facts and no specific languages, nor any simulations, formulae, or empirical generalizations. Aside from a very general and abstract account of Chomsky's view of the goals of his research, the only topic is who said what when, sometimes with a very abstract explanation of why. It's an odd document -- I can't think of anything at all comparable from a major figure in a scientific or scholarly field, except perhaps some controversies over precedence (which is not an issue here). I agree with the judgment of Jacques Mehler, the editor of Cognition, who asked for it to be cut; and it seems to me that it's a distraction for outsiders (including most of the normal readership of Cognition) to try to understand it.
Now I don't feel so bad about making fun of it, even if the Nietzsche joke might have been a bit much.

Perhaps more importantly, though, Liberman links to the next turn in the debate over the original HCF paper: Pinker and Jackendoff (now Jackendoff and Pinker, not wanting to appear less egalitarian than Fitch, Hauser, and Chomsky) have written a response to the response. It's here.