Sunday, April 24, 2005

Is This Blog Readable?

Not wanting to look to the people who actually read this blog for the answer to such a question, I instead used this site, which I learned about from Washington Monthly. It turns out that the blog is somewhat readable. Here are my scores (see the site for an extended explanation of the scores):

Total sentences: 677
Total words: 11,009
Average words per sentence: 16.26
Words with 1 syllable: 7,057
Words with 2 syllables: 2,058
Words with 3 syllables: 1,176
Words with 4 or more syllables: 718
Percentage of words with three or more syllables: 17.20%
Average syllables per word: 1.60
Gunning Fog Index: 13.39
Flesch Reading Ease: 55.29
Flesch-Kincaid Grade: 9.59
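If you're curious how the three indices follow from the counts above, here's a minimal sketch in Python. It assumes the standard published formulas, and since the site doesn't report an exact syllable total, it counts each word of four or more syllables as four syllables; with those assumptions it reproduces the scores above to within rounding.

```python
# Reproducing the readability scores from the counts in the table above,
# assuming the standard Gunning Fog, Flesch, and Flesch-Kincaid formulas.
sentences = 677
words = 11_009
syllables = 7_057 * 1 + 2_058 * 2 + 1_176 * 3 + 718 * 4   # 4+ syllable words counted as 4
complex_words = 1_176 + 718                                # words with 3 or more syllables

words_per_sentence = words / sentences                     # ~16.26
syllables_per_word = syllables / words                     # ~1.60
pct_complex = 100 * complex_words / words                  # ~17.2%

gunning_fog = 0.4 * (words_per_sentence + pct_complex)                          # ~13.39
flesch_ease = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word  # ~55.3
flesch_kincaid = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59  # ~9.6

print(round(gunning_fog, 2), round(flesch_ease, 2), round(flesch_kincaid, 2))
```

Swap in the February counts below and you get that month's scores as well.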

Basically, my Gunning Fog score says that the blog is a bit tougher to read than the Wall Street Journal; the Flesch Reading Ease score says that I'm not quite as readable as a good author would want to be; and the Flesch-Kincaid Grade score says that you'd have to be about halfway through the 9th grade to get this blog. Interestingly, the blog has gotten more readable of late (perhaps because I haven't posted as many long cog sci rants this month, due to time constraints and computer problems). Here are my scores for February:

Total sentences: 1,576
Total words: 23,456
Average words per sentence: 14.88
Words with 1 syllable: 13,929
Words with 2 syllables: 4,646
Words with 3 syllables: 2,982
Words with 4 or more syllables: 1,899
Percentage of words with three or more syllables: 20.81%
Average syllables per word: 1.70
Gunning Fog Index: 14.28
Flesch Reading Ease: 48.31
Flesch-Kincaid Grade: 10.22

Back then, you had to be in the tenth grade to get me, and my Flesch Reading Ease score was abysmal. Just as a comparison, here are the Gunning Fog Index scores for some of my favorite blogs:

Leiter Reports: 13.44
Majikthise: 9.29
Mormon Metaphysics: 9.87
No de Qur'tuba: 10.02
Pharyngula: 9.55
Philosophy, et cetera: 9.98
Siris: 9.89

In other words, I'm less readable than just about everyone (Dr. Leiter is slightly less readable than I currently am, but back in February, I made even his blog look like a Winnie the Pooh book). And people wonder why I am self-conscious about my writing! If these numbers are any indication, I've got work to do to become a more readable writer. Of course, I'm too lazy to actually do that. So, you'll have to deal with my unreadable blogging.

Goals

Goals are complex and elusive beasts, two properties that make them difficult to study. Yet it is precisely their complexity and elusiveness that make it imperative that we study them, if we are to understand human behavior and cognition. They are complex because they are intertwined with several different systems, including the perceptual, cognitive, motivational, affective, and motor systems, and because, at a higher level, they have complex relationships to each other, to the means with which we satisfy them, and to the contexts in which they are activated. Thus, in order to understand the workings of those other systems, we have to understand how they interact with our goals. Goals are elusive because, in many cases, either they themselves, or many aspects of the context in which they are activated, are unavailable to conscious awareness. Thus, our introspective accounts of our behavior, along with our self-reports about the motivations, goals, and reasons behind our choices and behaviors, are not always veridical. Their effects on cognition and behavior are therefore not always obvious to observers, including experimenters. It is important, then, to study goals more systematically in order to fully understand their influences.

Given the seemingly obvious importance of understanding goals in order to understand cognition and behavior, one would think that people like cognitive scientists and economists, who make their livings trying to understand cognition and behavior, would have spent a great deal of time studying goals systematically. As it turns out, neither group has. Instead, cognitive scientists, perhaps associating goals exclusively with things like affect and motivation, have tended to ignore goals in favor of abstracted cognitive mechanisms. Economists, on the other hand, in their focus on concepts like utility and instrumentality, have tended to either ignore goals, or purposefully create situations in which goals are obvious and universal, in order to avoid having to deal with goals directly (think, for example, of all the Kahneman and Tversky work that deals with risk aversion and similar phenomena, in which the goals involved, e.g., walking away with more money, are fixed in advance so that mechanisms can be studied without considering specific goals). For the most part, the only researchers who have spent a lot of time studying goals are social psychologists. Yet these researchers have tended to dissociate goals from all of the things that cognitive scientists and economists might study, instead treating goals as isolated and static mental entities1.

Recently, these trends have begun to change in at least two of those disciplines (cognitive science and social psychology). In social psychology, Kruglanski, et al. (2002), for example, have called on social psychologists to begin taking a "motivation as cognition" approach to the study of goals, as opposed to the "motivation versus cognition" approach that has characterized social psychological theories. Here is how they describe this approach:
We assume that motivational phenomena are a joint function of cognitive principles (that goal-systems share with other cognitive systems) as they are applied to uniquely motivational contents, that is, to goals and to means. Put differently, the cognitive properties of goal-systems set the constraints within which the motivational properties may express themselves. (p. 334)
In cognitive science, Markman and Brendl2 have defined goals in the information-processing and cybernetic systems language that defines most cognitive scientific theory. For example, they wrote:
Goals are representational structures that guide the system in its pursuit of an end state or a reference state. When the end state associated with a goal is desired, the goal is an approach goal; that is, the feedback loop aims at reducing the psychological distance of the organism to the end state. However, when the end state associated with the goal is undesired, the goal is an avoidance goal. In this case, the system is geared to increase its psychological distance to the end state, which can be represented as a feedforward loop. (p. 98)
They go on like this, but you get the point. When described like this, as representational states, goals aren't simply mental processes/entities that influence cognition; they are cognitive. The benefits of treating goals as cognitive, rather than something else (affective, non-cognitive motivational, or whatever) are many. It allows us to bring to bear our knowledge of representations and cognition on the study of goals, which, given the snail's pace at which that study has progressed since the 1930s, can only be a good thing. For example, as Kruglanski, et al. (2002) noted, when treating goals as representations, we can treat their connections to contexts, actions, and other goals as similar to those of other types of representations, thus better understanding how goals are activated, how they activate the actions used to satisfy them, and how they strengthen or inhibit the activation of other representations (e.g., other goals). Perhaps the greatest benefit is that discussing goals in cognitive terms makes it OK for cognitive scientists to study them, without worrying about getting wrapped up in issues they generally choose to avoid, like emotion/affect.
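To make the feedback-loop language concrete, here's a toy sketch of a goal as a reference state that the system monitors. The representation and the numbers are my own illustration of the general idea, not anything from Markman and Brendl's paper:

```python
# A toy illustration of the feedback-loop view of goals quoted above.
# The names and values here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Goal:
    end_state: float          # the reference value the system monitors
    approach: bool = True     # approach goal (reduce distance) vs. avoidance goal (increase it)

def step(current_state: float, goal: Goal, rate: float = 0.5) -> float:
    """Move the system one step with respect to the goal's end state."""
    distance = goal.end_state - current_state
    if goal.approach:
        return current_state + rate * distance   # negative feedback: close the gap
    else:
        return current_state - rate * distance   # move away from the undesired state

state = 0.0
thermostat = Goal(end_state=10.0, approach=True)
for _ in range(5):
    state = step(state, thermostat)
print(round(state, 2))   # converges toward 10.0; with approach=False it diverges
```

With approach set to True the loop closes the gap to the end state, like a thermostat; with it set to False the loop widens the gap, which is the avoidance case described in the quote.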

And so the cognitive scientific study of goals has recently begun, though most of the researchers are still social psychologists by training. In subsequent posts, I will try to describe some of the key aspects of goals that the "motivation as cognition" approach has discovered or highlighted from past research. In particular, Markman and Brendl3 have highlighted "nine phenomena that a theory of goals and motivation must explain" (Quoted from their Table 1):
  1. People can talk about their actions.
  2. Talking about actions can interfere with choices.
  3. People have difficulty predicting future preferences and future affective states.
  4. People express attitudes, but their attitudes do not always coincide with their future actions.
  5. Affective states are taken to reflect underlying motivational states, though they correlate with such states only loosely.
  6. States of the world can prime goals.
  7. Goals prime means.
  8. Means can remind people of goals.
  9. Explicit intentions to perform actions can influence behavior.
The first five phenomena relate to the elusiveness of goals that I mentioned in the first sentence of this post. The remaining phenomena are related to the cognitive structure of the goal system. In the next post, I will discuss the relationship between goals and consciousness, or the elusiveness of goals. In the third post, I'll talk about the models that Kruglanski, et al. (2002) and Markman and Brendl (in press) have developed to describe the cognitive structure of the goal system. Along the way, I'll try to describe a lot of empirical research on goals. By the end, I hope that it will be clear to everyone how important the study of goals is for cognitive science, and any discipline that wants to understand the human mind.

1 Kruglanski, A.W., Shah, J.Y., Fishbach, A., Friedman, R., Chun, W.Y., & Sleeth-Keppler, D. (2002). A theory of goal systems. Advances in Experimental Social Psychology, 34, 331-378.
2 Markman, A.B., & Brendl, C.M. (2000). The influence of goals on value and choice. In D.L. Medin (Ed.) The Psychology of Learning and Motivation, Vol. 39. (pp. 97-129) San Diego, CA: Academic Press.
3 Markman, A.B., & Brendl, C.M. (in press). Goals, policies, preferences, and actions. To appear in F.R. Kardes, P.M. Herr, & J. Nantel (Eds.) Applying Social Cognition to Consumer-focused Strategy. Mahwah, NJ: Lawrence Erlbaum Associates.

Thursday, April 21, 2005

If Reality Doesn't Fit Our Prejudices, We Will Make It

One of the more frequently used arguments against the legalization of gay marriage, or rather for making it illegal (in some cases, two times over), is that marriage is about families. More specifically, marriage is about raising children. The argument assumes that gay and lesbian couples cannot have children, and therefore they shouldn't be allowed to marry. Of course, this assumption is obviously false. While gay and lesbian couples cannot have children without help, they can and often do have children with a little help, through artificial insemination, surrogacy, or adoption. This is clearly a problem for what is, for many, the main argument against gay marriage. It just doesn't accord with reality.

Fortunately for gay marriage opponents, there is a little thing called the Texas state legislature. Never a group to be dissuaded by reality, the Republican-dominated legislature of that great state decided to change it so that their argument against gay marriage would work. And, as it turns out, altering reality to fit their prejudices just isn't that difficult. You see, in Texas, Child Protective Services is in shambles, and virtually everyone in the government agrees that it needs to be overhauled. So, a very popular bill designed to do just that is making its way through the legislature. A version of the bill passed in the Senate last week, and this week, it was the House's turn. Knowing that it would be politically impossible for Democrats, much less moderate Republicans, to vote against the bill, Rep. Robert Talton (R-Pasadena) decided to tack on an amendment (which passed 81-58, with voting largely along party lines) that would require CPS to ask all prospective foster parents whether they are gay or lesbian, and deny the requests of all those who answer yes. In addition, the amendment requires CPS to remove any foster children who are currently in gay or lesbian households. Talton had tried, and failed, to pass similar measures on two previous occasions, but now that it was part of a bill that no one in his or her right mind would vote against, he couldn't fail. And he didn't. The CPS reform bill passed by a vote of 135-6.

The bill now heads to the Senate. If it passes there, and there's no reason to think that it won't, Texas Republicans will have set in motion the forces needed to change reality so that they can still argue that their attempts to ban gay marriage are based on the fact that gay and lesbian couples don't raise children. One wonders what will be next. If foster parenting and adoption are out of the picture, will Texas Republicans concentrate on requiring that couples be straight in order to use artificial fertilization or surrogacy? What about gay and lesbian couples who already have children? Will they be forced to give them up? That seems extreme, but the fact that CPS is required to remove foster children from gay and lesbian households may be a foreshadowing of such measures.

Saturday, April 16, 2005

A Different September 11

One of my hobbies is reading about the history of psychology, and in particular, the history of psychology leading up to, and comprising, the cognitive revolution. It's not entirely for amusement, either. We can learn a lot from the historical movements of ideas in a particular science, especially in one as young as cognitive science. For example, cognitive science was born, in large part, in reaction to behaviorism, but also out of behaviorism, so in some ways it explicitly ignored some of the insights of behaviorism in order to distance itself from it, while in others it accepted some of the tenets of behaviorism unquestioningly. Even today, then, there are some largely unquestioned assumptions and biases in the majority of cognitive science research. The best example is the almost complete lack of attention to emotion and culture in cognitive scientific research. There have been movements, in recent years, to change this, especially in cognitive neuroscience, where researchers like Antonio Damasio have begun paying a lot of attention to the influence of emotion on cognitive processes, and cognitive anthropology, where people like Scott Atran, Doug Medin, and Richard Nisbett have begun to look at the role of culture in cognition. Still, the bulk of cognitive scientific research ignores these things, just as it did in the 1950s and 60s, either for philosophical or practical reasons.

While learning about important things like the inherent biases and unquestioned assumptions of cognitive science through studying its history is important, it's even more important that I learn useless trivia about my field. And one of the most important pieces of trivia I've learned is the birthdate of cognitive science.

I imagine that for most sciences, there's not much of a consensus on the dates, or even the years, of their birth. We might say that modern physics was born on July 5, 1687, but couldn't we also say it was born earlier, with Kepler or Galileo? Even if we agree on the date, that's only the birthdate of modern physics (we could probably also say that modern biology was born on November 24, 1859), not the birthdate of physics (or biology) itself. But for cognitive science, there is widespread agreement about the year, and even the date, of its birth. We could say that it was born in September, 1948, at the Hixon Symposium on Cerebral Mechanisms in Behavior at the California Institute of Technology, where Karl Lashley delivered the speech that likely killed behaviorism, but that date signals a death more than a birth. It took a few years for researchers from multiple disciplines, including mathematics and computer science, psychology, neuroscience, linguistics, and philosophy, to come together and create something to replace behaviorism. And the consensus is that they finally did so on September 11, 1956.

What happened on September 11, 1956? Well, it was the second day of the Symposium on Information Theory at MIT. On that date, George Miller presented his now famous "Magical number seven plus or minus two" findings, and Noam Chomsky presented his "Three models for the description of language." Thus, Miller himself1, Newell and Simon2, and Gardner3 all give this date as the birthdate of cognitive science. Even if we can't agree on the exact date, the year is pretty uncontroversial. In addition to the works of Miller and Chomsky, it also saw the publication of The Study of Thinking, Claude Shannon and John McCarthy's edited book Automata Studies, and Newell and Simon's "The logic theory machine: A complex information processing system,"4 which was the most thorough description of an AI project to date. In addition, Newell and Simon, along with Marvin Minsky, McCarthy, Shannon, and Nat Rochester held a conference at Dartmouth that summer, which would turn out to be the first conference on artificial intelligence. So, in a sense, AI, cognitive psychology, and Chomskyan linguistics were all born in the span of a few months in 1956.

1 Miller, G. (2003). The cognitive revolution: A historical perspective. Trends in Cognitive Sciences, 7(3), 141-144.
2 Newell, A., & Simon, H.A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall. (p. 4)
3 Gardner, H. (1985). The Mind's New Science: A History of the Cognitive Revolution, New York: Basic Books.
4 Newell, A., & Simon, H.A. (1956). The logic theory machine: A complex information processing system. IRE Transactions on Information Theory, 2(3), 61-79.

Jean-Paul Sartre

I don't think I've seen it mentioned anywhere else, but yesterday was the 25th anniversary of the death of Jean-Paul Sartre. Say what you will about his existentialism, he had one of the most sophisticated theories of consciousness around. For Sartre, like Brentano and several other existentialists (most notably Maurice Merleau-Ponty), consciousness was a unitary phenomenon, rather than the dual-natured beast that it is in contemporary analytic philosophy of mind and cognitive science, particularly neuroscience (where models of consciousness still tend to posit the object of consciousness and an entirely separate component which is comprised of attention, or awareness of that object). Given the problems with the two-part theories of consciousness, I think we'd do well to go back to Sartre (and others with unitary theories), and the 25th anniversary of his death is a good time to start rereading him.

Sunday, April 10, 2005

A Plea for Blogging Advice

If you're reading this blog, the chances are you've been around blogs longer than me. I've been blogging since September of last year, and reading them since around March or April of the same year. So, while I feel like I know my way around blogs fairly well, there's a lot about blogging itself that I don't know. So, I'm hoping someone out there can give me some advice. I've been having trouble with Blogger lately. It's completely lost a couple of very long cog sci posts that I didn't feel like rewriting, and it lost some shorter posts as well. The problem was that Blogger decided not to be available when I clicked to submit the posts, and not having saved them some other way (I know, that was really stupid of me), they just disappeared. No amount of backtracking could reproduce them. Anyway, because of my frustration with Blogger, I've been thinking about transferring this blog to one of the pay services (e.g., Typepad). So, my question is, are these services worth the money? And which services are the best? Any advice would be greatly appreciated. Thank you in advance.

Higher Brain Death and Personhood III

In response to my last post on the higher-brain death view, Brandon of Siris posted some very good comments, clarifying why he thinks the higher-brain death position is problematic. The comments are simply too good for me to try to respond to them in comments, so I'm giving them a post of their own. The gist of Brandon's objection to the higher-brain death view is that it alters our concepts of death, personhood, and the body in such a way as to make it too philosophically problematic to use for ethical or practical reasoning. To illustrate why this is, he offers the following analogy:
Suppose a pagan society that considers it an ethical obligation to worship as a god anything that is considered enduring, and include in this category material and immaterial things; and they are morally required to despise anything that's a not-god. A missionary comes along and says: "No, what a god really is, is something enduring and immaterial." This shift keeps things stable for things that are enduring and immaterial; but it leaves us completely in the dark about the things that are enduring and material. For it will only follow that the pagans should despise these (newly designated) non-gods if the principles behind their despising of non-gods before still apply to everything that is considered a non-god now. It doesn't follow from the shift itself that they do; the original principles may be some of the things that rationally need to be shifted in light of this shift in what is considered a god. Work needs to be done to determine how they should deal with non-gods now that things have moved. Do the same principles apply? Does there need to be significant revisions of principles (e.g., if some of them were based entirely on the enduring criterion)? Should some principles be replaced? Should a distinction be instituted between non-gods who are enduring and non-gods who are not enduring, with differences in behavior toward each? The new definitions tell us nothing about these things.
The question we have to ask, in order to respond to this analogy, is whether it holds. I think the answer is no. The problem with the analogy lies in the situation as it is prior to the arrival of the missionary. A better analogy would be this:
Suppose a pagan society that considers it an ethical obligation to worship as a god anything that is considered enduring and immaterial, and they are morally required to despise anything that is not-god.
The arrival of the missionary must be changed as well. When the missionary arrives, he says, “No, what a god really is, is something enduring, be it material or immaterial.”

This is, in fact, what I think Brandon is doing by stating that a body with no mind, no psychological states, in essence, is still “alive.” In doing so, I think he has done what the missionary does in the revised analogy: stretched the concept of “living” to a whole new category of being: bodies without personhood.

In response to my poor expression of this counterargument in comments, Brandon wrote:
I am very puzzled as to what you mean by saying you haven't changed the boundaries. Surely you don't mean that it has always been essential to our view of personhood that it cuts off when the higher brain shuts down? As I noted, I don't think it is the case that our ethical topoi on personhood themselves have ever precisely delimited a cut-off point (although various cut-off points have been proposed as ways of delimiting where the ethical topoi apply); all they do is give us something to work with on the person side of the divide, wherever we happen to put it.
But I don't think it's the case that we have never had a precisely delimited cut-off point for personhood. In fact, I think that traditionally, both materialist and non-materialist concepts of personhood have rested on the presence of a mind, be it in the form of a working brain or the presence of a soul. While I can't really address arguments that a soul may still be present in a body that is brain dead, because I myself am working within a materialist framework, I think that my view, rather than Brandon's, is consistent with traditional materialist concepts of personhood, because it limits personhood to cases in which the properties that we attribute to souls or materialistic minds (brains) are present.

I also think it is more consistent with traditional boundaries of life and death. It is true that medicine has, for some time, used non-brain criteria for determining death, but this is an artifact of the times. Throughout most of medical history, the technology required to determine when the brain is dead simply has not been present. However, this does not mean that medical professionals or laymen have considered the boundary between life and death to be defined by the presence of breathing or a heartbeat. It simply means that they have used those as measures for whether a person was still alive, because they were the best measures available.

To see the unnaturalness of the alternative to the higher-brain death view, consider the following scenario. A person's entire brain is dead (in other words, both the forebrain and the hindbrain), but through artificial means, we are able to keep the heart beating, to keep oxygen in the blood, the digestive system working, etc. In other words, we have a case like that of Terri Schiavo, or others in persistent or permanent vegetative states, but with a dead hindbrain. Is this person's body alive? What if it is impossible to make the lungs work, but we can make the heart beat and digestive system work, and put oxygen into the blood artificially? What if the heart doesn't work, and the lungs don't work, but we can artificially circulate the blood with an external pump, infusing it with oxygen in the process, and the digestive system still works? What if we have to do all of these things artificially, so that the heart, lungs, and digestive system don't work on their own, and we must pump the blood that we have artificially infused with oxygen and the nutrients that would come from the digestive system? When do we declare death? When is the body no longer alive?

Now consider a case in which the forebrain is still working, but none of the organ systems work (i.e., we have to artificially pump the artificially nutrient and oxygen-infused blood throughout the body, including the brain). The person is no longer able to speak (if the lungs don't work, he or she won't be able to speak), but through blinking, can communicate complex messages, indicating that she is conscious of her surroundings (we don't need for the person to be conscious of her surroundings, but it helps to illustrate the contrast between the cases in the previous paragraph and this one). Is this person still alive?

I don't know about others, but my intuitions are that in the case in which the brain is dead, and the body's normal functions are completely shut down, but can be carried out artificially, thus keeping cells alive and preventing the body from decaying as it would in "death," the body is not in fact alive. However, in the case where the body's functions have completely shut down, but can be carried out artificially, and the brain is not dead, the person is alive. I think this is what you would get with our commonplace concepts of life and death, as well. The reason is that brain life is a necessary, but not sufficient, component of our concept of life and death. It makes no sense to talk of a "living body" without a living brain. However, it would make sense to say that a body kept functioning through completely artificial means is alive, so long as the brain is alive as well.

My reasons for holding the higher-brain death view are not only that I feel it is a more natural view, consistent with our commonplace concepts of life and death as applied to human beings. Perhaps even more important are the implications of the alternative (Brandon's view, e.g.). For Brandon, it is possible that there exist human beings (e.g., Terri Schiavo) whose entire lives are completely dependent upon the wills of other human beings. Now all of us are in some ways dependent on the wills of others, and there are some people (e.g., those with extreme disabilities) who are almost completely dependent upon the will of others, but Brandon's view creates an entirely new class of individuals: those who lack both actual and potential autonomy altogether. Since these individuals have no minds, and in fact no wills as we would traditionally conceive of the concept "will," and no possibility of ever having minds or wills, their entire "lives" are dependent on the decisions and actions of others. If this doesn't create just the sort of horrific moral dilemma that Brandon is worried about, I don't know what would.

And to see just how much of a problem Brandon's new category of human persons can be, from an ethical standpoint, consider the rhetoric of Eric Cohen in a recent essay, with which Michael Bérubé has dealt masterfully in this post. In it, Cohen suggests (nay, states outright!) that there are cases, like the case of Terri Schiavo, in which individuals are not morally free to decide their own fate. In other words, even if a person, when he or she did have a mind (i.e., his or her higher brain was alive), has made a decision about how his or her body should be treated after higher-brain death, it is morally permissible for individuals (in this case, conservative lawmakers or the courts) to override that decision once the higher brain is dead, because the person is, in that case, no longer morally free to decide his or her own fate. This view can only come about if we use a criterion for life that creates a category of individuals who are alive, but not morally free. And this is exactly the sort of ontological and ethical baggage that the higher-brain death view avoids. In short, then, rather than being ethically and philosophically more complex, the higher-brain view turns out to be simpler, and in fact less dangerous.

Saturday, April 09, 2005

Mapping Ain't Easy: A Personal Anecdote

Gilles Fauconnier has said over and over that mappings between conceptual domains are the product of incredibly complex, unconscious processes that are learned over the course of development. It's easy to forget this, though, because we perform such mappings constantly and effortlessly in our everyday cognitive functioning. But all we need to do is look at the trouble that young children have with some mappings that, for us adults, seem perfectly straightforward, to be reminded that Gilles is right. Take, for example, a conversation I recently had with my seven-year-old son. He and I were walking along on the sidewalk of a fairly busy street when we came upon a deep hole dug by workers repairing a sewer line. It had rained for several days in a row, so no work had been done on the line, and the hole was now filled with about three feet of water (it was on a steep hill, so it got a lot of runoff). Being a seven-year-old, my son found the water-filled hole fascinating, and we had to stop to look at it (being a seven-year-old at heart, I was fascinated too). We talked about the water, and why the hole was there, and then he noticed that there was a huge pile of dirt next to the hole. Huge piles of dirt are fascinating to seven-year-olds (and seven-year-olds at heart) too, so the conversation quickly turned to the pile.

"Why is that dirt there?" my son asked.

"It used to be in the hole," I replied.

A look of confusion appeared on my son's face, followed immediately by the look that seven-year-olds get when light bulbs go on in their heads.

"Oh, they had to take it out because it got all muddy with the water."

At first, I didn't understand what he meant, but then I realized what was going on. You see, my statement that the dirt "used to be in the hole" is one that any adult would have understood immediately to mean that the dirt was dug out of the place where the hole now is, creating the hole itself. However, this understanding of the sentence requires a very complex mapping. The hole, which exists now, must be placed in the position that it now exists, but in a time that it did not exist, so that the dirt can be in it. But when the dirt is in it, there is no hole. This seemingly simple mapping was a bit much for my otherwise intelligent son. Thus, when he performed the mapping, the dirt was still in the hole (i.e., there was a hole!) when the water came in, and thus the dirt got muddy and had to be removed (don't ask me why he thought that the dirt had to be removed because it was muddy).

The moral of this story is that sometimes it takes a child to remind a cognitive psychologist that his work isn't as easy as he sometimes believes it is. Just because it's easy to think, doesn't mean that thinking is easy.

Gay Marriage and Rights

Lawmakers in Texas are now considering a bill that would amend the state constitution to define marriage as a union between a man and a woman. In other words, it would amend the constitution so that it bans gay marriage. This might be considered overkill, given that Texas law already bans gay marriage, and mandates that the state cannot recognize gay marriages from other states, but this is the big government Republican Party, and no amount of government interference is too much. Now I don't want to get into a deep discussion of gay marriage, but I do want to note the utter stupidity of one of the arguments for the amendment. State Representative Charlie Howard (R - Sugar Land), one of the authors of the bill, has responded to the argument that the amendment, and gay marriage bans in general, limit the rights of gays and lesbians, by saying,
It's not a civil right. It has nothing to do with civil rights. It's not a right.
Now, Howard may be right. Marriage may not be a right, and the gay marriage ban may have nothing to do with civil rights, but Howard's argument for this position is frustratingly stupid. He says that the amendment would not limit anyone's rights, because even under the ban, anyone can marry, as long as it is someone of the opposite sex. So, the rights of gays and lesbians aren't limited, because they can still marry, as long as they're marrying the type of person that the amendment says they can. I wonder if, in the 1950s, it was argued that laws banning interracial marriage did not limit the rights of individuals, because they could still marry, as long as they married someone of the same race. Obviously, in the case of interracial marriage bans, individuals' rights were limited. What Howard, and others who have used this argument (and I've seen it all over the place, including in the blogosphere), apparently fail to grasp is that simple access to an institution does not guarantee that rights are not being limited. The right that pro-gay marriage advocates argue is denied to gay and lesbian couples is not access to marriage, but the right of consenting adults to marry whom they choose. Once again, you may deny that this is in fact a right, and thus that the gay marriage ban "has nothing to do with civil rights," but you can't argue for this position by simply stating that gays can marry, so long as they marry a member of the opposite sex.

UPDATE: In a comment, Brandon provided a link to an excellent article comparing the gay marriage debate to the miscegenation debate. Specifically, the quotes from Justice Roger Traynor of the Supreme Court of California, and from U.S. Supreme Court Chief Justice Earl Warren, both arguing against bans on interracial marriage, are relevant to this post.

Higher Brain Death and Personhood Revisited

The debate over whether higher-brain death should be considered "death simpliciter" which I unwittingly sparked with this post has produced some very good posts on both sides of the issue. First there was Brandon's criticism of my view, and my response (which led to some very good comments), and then, along with Brandon's reply to my response, several others weighed in. There is much that I agree with in each of these posts, particularly Richard's (in which he agrees with my position). However, Brandon and I still disagree about whether higher-brain death can be used as the criterion for death, so it's to his post that I'll be giving my attention here.

Brandon agrees with me that the issue at hand is not an empirical one, or even a directly ethical one, though it certainly has implications for how we reason about ethics. As he puts it,
The reason personhood is a concept relevant to ethics is that, despite providing no answers of itself and on its own, it turns out to be immensely useful. Ethical questions are immensely difficult; to reason about them we need a way to set them in order... Person, in the course of its historical existence in theological, metaphysical, and moral disputes, has gathered about itself a wide array of commonplaces that help us to simplify moral reasoning, diagnose moral situations, and put forward general guidelines. When new situations arise, we can take them as a template and modify them in the relevant ways.
It is here, in its function as template-provider for moral reasoning, that Brandon believes the higher-brain death position fails. He writes:
[The higher-brain death position] does not, strictly speaking, leave us with nothing. What the position does is give us a set whose members are all human organisms but not human persons. Our personal commonplaces must be set aside. And what we are left with is very murky. What we have in this case, understood in this way, are human beings (in a straightforward sense) that are alive (again, in a straightforward sense of the term) but are not persons. We have no topics for this. We do not know what our responsibilities to such a thing should be, any more than we know what our responsibilities to a manticore or chimera should be. Our personal commonplaces are set aside.
In other words, Brandon feels that instead of providing a template for moral reasoning, the higher-brain death position "annihilates" our traditional templates, and in doing so creates more problems and questions than it solves or answers. As you might expect, I disagree, but to explain why, I will first have to explain my position a little bit further.

The higher-brain death position basically states that when the higher brain is dead, the person is dead. It is true that the body can, after higher-brain death, continue to function. The heart can continue to beat, the lungs can continue to take in air and extract oxygen, the digestive system can continue to process nutrients, and in some cases, parts of the motor system (including the parts of that system involved in speech) can be activated. When the body continues to function, in the absence of any psychological functioning (many or all of the functions of the hindbrain, and perhaps even parts of the cerebellum, subcortical regions such as the thalamus and hypothalamus, and midbrain, are likely to remain intact), it might be tempting to say that the body is "alive." But this is not what the higher brain death position states. Under this view, the body is no more "alive" than is a corpse in which occasional electrical impulses produce movements, and even vocalizations. The body is thus not a "human being" any more than a corpse is. In fact, according to the view that higher-brain death is death simpliciter, the body in which a heart continues to beat, lungs continue to breathe, etc., in the absence of any higher-brain functioning is a corpse.

Note that this does not imply that personhood, and life itself, reside solely in the cortex. On the contrary, life and personhood reside in the complete organism, consisting of both the cortex and the rest of the body. The cortex cannot function without a body, and thus the cortex itself, without a body, is dead. The body cannot function as a living person without a cortex, and thus the body without a functioning cortex is dead.

So my answer to Brandon's criticism is straightforward. The higher-brain death view does not create a set that consists of living bodies with no corresponding living persons, and thus does not render our traditional moral templates meaningless or confused. Instead, it does what Brandon argues it cannot do when he attributes to it the living body/dead person set -- it "point[s] the way to answers nearby." It says that whatever ethical principles we must apply to living human beings, we must apply to individuals with bodies and functional cortices. These principles need not apply to dead bodies, a category that includes bodies with higher brains that will never function again. In other words, it provides a template for ethical reasoning about living persons, as well as corpses, with the bodies of both Terri Schiavo in 2004 (or 1994, for that matter) and Pope John Paul II today both considered corpses.

It might be objected that there is a clear difference between the late Pope's corpse, or a corpse with random neural firings, and the body of Terri Schiavo in 2004. That is true, but the differences are not relevant to the determination of life and death (see my first post on the topic for why that is). These differences may have practical implications, of course. For instance, a functioning corpse (higher-brain death with a beating heart, working lungs, etc.) must be treated differently from the Pope's several-day-old corpse in cases where organ donation is still possible. However, reasoning about these practical (and ethical) implications works from the corpse template, not the living body/living person template. The latter template only applies where we have a living person, or a living human being. It only applies to cases in which the word "living" applies.

Friday, April 08, 2005

Lakoff's Back

Lakoff is back in the blogosphere, with the "Nurturant Parent" metaphor under attack again. And as before, I can't let Lakoff discussions go without saying something myself. The discussion started when Ezra Klein argued that we should "let go of the Lakoff," specifically because the "Nurturant Parent" metaphor makes liberals look weak. Lindsay responded by noting that the "Nurturant Parent" metaphor isn't meant to be used as a frame, but as "a tool to figure out where other people are coming from," to which Ezra responded by saying that whatever the metaphor is for, taking the "nurturant" world view leaves us looking weak, and perhaps even acting weak (I'm not sure, in the second post, whether Ezra is arguing that we should change our metaphor, or change our world view). Lindsay quickly responded, writing, "Maybe we should get rid of the parent labels altogether and concentrate on the substance of each set of metaphors." In between and after these posts, there were posts all over the place commenting on the Ezra-Lindsay exchange (see here, here, here, here, here, and here).

First, here's what I think most of the discussants agree on. Lakoff is onto something when he says that Democrats have, for some time, been talking about the issues in Republican terms, and this means that we need to start framing these issues in terms that reflect our own views, and how they differ from Republicans'. Given the success of the Republican Party over the last decade or two, it's hard to argue against Lakoff on these points. Where people disagree, then, is on how to go about it.

Lakoff himself uses the "Nation as a Family" metaphor to characterize the general American mode of thinking about politics. From this he derives two familial metaphors, "Strict Father" and "Nurturant Parent" morality, to describe the two dominant world-views in American politics. The two parental metaphors, characterizing conservatives and liberals respectively, derive from the Nation as a Family metaphor, and our representations of political issues derive from the overarching political metaphor along with the particular parental morality metaphor to which we happen to adhere. Lindsay is right, then, that Lakoff doesn't intend for liberals to use the "Nurturant Parent" as a frame in political discussions, though he is certainly using it as a frame himself. However, I think Ezra's also right when he says that the "Nurturant Parent" metaphor isn't a good one, even if we're only using it to describe the liberal world view at an abstract level. In fact, like Lindsay in her second post, I think both parental metaphors are problematic, especially in combination.

The problem is evident in the rampant misunderstandings of the metaphors that both conservatives and liberals have displayed. A good metaphor, and especially a good frame, will lead people to represent your position, or your world view, in a way that is at least consistent with that position or world view. But as Ezra's analysis of the "Nurturant Parent" metaphor shows, this is not what is happening. The problem arises from the comparison of "Strict Father" with "Nurturant Parent," and it's Lakoff's fault. It's his fault because he seems to be completely unfamiliar with the vast cognitive scientific literature on comparison. This might be excusable were Lakoff not in fact a cognitive scientist, and one who studies metaphor, to boot, but someone in Lakoff's position should know better.

As I recently discussed, comparisons are important. In a sense, the mapping between two domains (e.g., "Strict Father" and "Nurturant Parent") serves as a frame in itself. Because of this, it's imperative that, in attempting to label and describe our positions and world views, we make sure that in likely comparisons, the predicates we want to highlight --those we want others to explicitly represent-- are, in fact, highlighted. We can do this by ensuring that, within the common structure of our frames (be they metaphors, analogies, or anything else), the predicates we want to highlight in our domain are alignable with the comparison domain (i.e., there is a corresponding predicate, in a corresponding position, in the structure of the comparison domain). It's clear from all of the misunderstandings of Lakoff's parental metaphor comparison that he has not done this. Our representation of Strict Father, be it because of the word "Strict," or because of the masculinity of "Father," appears to include strength, whereas the corresponding predicate in Nurturant Parent appears to be something other than strength (perhaps weakness) in the corresponding position. But Lakoff makes it clear that the Nurturant Parent goes out of its way to protect its children, which should imply strength. The problem, then, is to present the liberal world view in such a way that the strength predicate in its representation is alignable with strength, at least on issues such as terrorism, in the conservative world view.
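To make "alignable" a little more concrete, here's a toy sketch of the idea. The frames and predicate values below are my own made-up caricature, not Lakoff's analysis or a real structure-mapping model; the point is just that a slot filled on one side of a comparison but left empty on the other invites people to fill it in themselves:

```python
# A toy sketch of alignability: a predicate in one frame is alignable if the
# other frame fills the same slot. The frames and values here are invented.
strict_father = {"gender": "male", "discipline": "strict", "strength": "strong"}
nurturant_parent = {"gender": None, "discipline": "nurturant", "strength": None}

def alignment(frame_a: dict, frame_b: dict):
    """Split shared slots into alignable differences and gaps left unfilled."""
    alignable, gaps = {}, []
    for slot in frame_a:
        if frame_b.get(slot) is not None:
            alignable[slot] = (frame_a[slot], frame_b[slot])
        else:
            gaps.append(slot)
    return alignable, gaps

alignable, gaps = alignment(strict_father, nurturant_parent)
print(alignable)   # {'discipline': ('strict', 'nurturant')}
print(gaps)        # ['gender', 'strength'] -- slots the comparison invites us to fill in
```

In this caricature, "discipline" is an alignable difference, while "gender" and "strength" are slots that the Strict Father side fills and the Nurturant Parent side leaves open, which is roughly how one frame ends up being read as strong and gendered while the other is read as neither.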

I wish I had a really good suggestion for a pair of contrasting metaphors to use to label the prototypical conservative and liberal world views, but I don't. As Lakoff's failure makes clear, it isn't easy to come up with metaphors that work. Lindsay's suggestion (in her second post) that we just call the world views models Y and Z doesn't really work, either. One of Lakoff's insights is that the analogies we use (I call them analogies, while he would call them metaphors, but then his theory of conceptual metaphor is what's causing all of the problems here) provide us with an entire structure of inferences, further analogies (or metaphors), and labels with which we can reason about and describe political issues. Y and Z don't provide us with much structure. Furthermore, I think the "Nation is like a Family" analogy is a good starting point. Research on political analogies has shown that the best analogies are those to highly familiar domains, and domains don't get much more familiar than family for most people. And since the government is the head of the family, parental analogies are the most straightforward. But we're going to have to get rid of "father," because it explicitly invites a gender contrast. No matter how hard he tries, Lakoff can't make people represent the "Nurturant Parent" as gender-neutral, when he contrasts it with a metaphor that implies a gender (once the gender predicate has a value on one side of the comparison, it begs for one on the other). "Strict" and "nurturant" are probably going to have to go, as well. I would suggest "Authoritarian Parent" and "Democratic Parent" (the contrast coming from personality research, and capturing the traits that Lakoff describes), but that would probably create a bunch of pissed off conservatives.

Monday, April 04, 2005

Is Political Diversity Really That Important?

Inspired by the study discussed in the previous post, Todd Zywicki of the Volokh crew tells us why we should be concerned about diversity in academia. Brian Leiter has already responded, and I agree with much of what he says, but I thought I would add and/or reiterate a few points.

First, let's look at what Todd says. First, he provides three purposes or goals for higher education:
(1) to develop human capital, (2) to educate and develop critical thinking skills and intellectual self-discovery and character in students, and (3) to develop individuals who can participate as responsible citizens in a free and democratic society.
Since I more or less agree with these, I won't address them specifically. However, Todd uses these goals as a launching point for his arguments for diversity. The first goal, "develop human capital," doesn't require diversity, he says, but the second does. He writes:
Ideological diversity has a lot to do with this [critical thinking skills and intellectual self-discovery]. The purpose of education should be to teach students how to think, not what to think. I don't know how you can teach students to analyze arguments and determine the truth value about claims about the world if you don't expose them to a variety of ideas. As Greg Ransom observes the presence of an intellectual orthodoxy on campus can severely hamper student's critical reasoning skills. Ransom's experience is that many students do in fact absorb some degree of indoctrination at a very superficial level, and that the virtual absence of any serious counterarguments leaves them at this very superficial and unreflective mode of analysis. I think this is probably right--for instance, I am amazed at the shallowness of analysis that I hear from ostensibly educated students. Comments I hear about environmental issues, in particular, come to mind.

UPDATE: I added as an update after my initial post "intellectual self-discovery" for students in response to a perceptive reader comment. I did mean to include this as well as part of this point--in addition to developing individual critical reasoning skills, it is also important to develop individual student intellectual skills to understand themselves and the world better as well as guiding ethical and character attributes. Obviously this requires students to wrestle with various ideas in coming to their own world views.

There's something naive, even simple-minded about this (a seemingly persistent quality of Todd's thinking), and a discerning reader will probably notice it right away. The issue Todd is discussing in this paragraph is not the ideological diversity of university professors, but that of the ideas they present in their courses. But Todd doesn't present any evidence of a lack of ideological diversity in classroom material, and neither do any of the studies he and other conservatives cite. They only look at professors' political orientations. But professors' political orientations and class reading lists are not the same thing, and there is no reason why a liberal professor would not include conservative ideas in a course in which they would be relevant. Until someone produces data showing that course reading lists are heavily skewed toward liberal sources, even when conservative sources are available, or that professors are, more often than not, unfair to conservative ideas in their presentation of them to students (a distinct possibility, but not a possibility made inevitable, or even probable, by disparities in political orientation), this point of Todd's seems to be arguing past itself. It certainly doesn't provide a reason for working towards ideological diversity in academia.

Unfortunately, Todd merely repeats this argument for the third goal (responsible citizens, and all that), writing:
It seems to me that it is imperative that students be exposed to all viewpoints about the world and to learn to evaluate the truth and resonance of competing world views. Living together as citizens in a free society, and having the kinds of connections and conversations that make that possible, requires developing a depth of understanding that cannot be created in an atmosphere of one-sided intellectual orthodoxy. It is a pretty short road from the impoverished discussions in modern universities to the idiocy of Michael Moore and red v. blue America. I don't pretend that American political discussion was ever that exhaulted [sic], but surely we used to hold educated people to a higher standard of discourse then we see today, especially on university campuses? I personally would add to this that as part of educating free and responsible citizens we should make sure students understand the intellectual and historical foundations of the western world, but I recognize that this is a more controversial proposition.
Once again, Todd is assuming, without any objective evidence, that disparity in political orientation is identical with, or directly and invariably causes, disparity in classroom material. (Note that there can be classroom disparity, due to the availability of quality material for example, even without there being a disparity in the political orientation of professors -- the two really are independent of each other.) Sure, Todd has some anecdotal evidence, from his own experience, like every conservative seems to. But I can tell you honestly that I had at least two professors as an undergraduate -- one in political science, and one in philosophy -- who were heavily biased toward conservative ideas in their teaching. Of course, that's 2 out of almost 60 (yes, I took about 180 hours of coursework; I was a nerd on a two-degree track), most of whom (if we believe the studies) were liberal, but who showed no bias in their teaching. So Todd, like so many before him, has left us without a real argument for diversity among professors. Instead, he's argued for the sensible position that students should be exposed to a wide range of ideas, but he can provide no evidence that they are not.

In fact, as Leiter hints in his response, Zywicki's argument (for diversity in the classroom, not in the ranks of the professors who teach in them) may actually undermine his own position. The key is to have a significantly diverse collection of scholarly ideas. Many of the scholarly ideas that professors teach in their courses were produced by other university professors, and thus it is important that, across all universities (in the world, not necessarily in the United States), there be a wide range of viewpoints (not just political, but with Todd, we have to keep things simple, and what's simpler than considering an incredibly broad issue unidimensionally?) among professors, so that a wide range of ideas are produced for instructors to use. But this doesn't mean that we need intellectual diversity at each individual university, and it certainly doesn't imply that we should require it by law. It may be that, as Leiter claims, scholarship advances more rapidly when several like-minded thinkers are working closely together. I know that in the sciences, that is often the case, because it facilitates productive cooperation on research. In cognitive science (I don't know about other sciences), this has led most universities to concentrate on hiring people studying similar things from similar perspectives. They can still teach about a variety of ideas, but when they do the research, they start from their own position.

So, Zywicki's arguments fail, but might there be other reasons for promoting diversity in academia? My feeling is that in most departments (the natural sciences, most of the social sciences, applied fields like engineering and medicine, and even many humanities departments), political orientation is completely irrelevant. There is absolutely no reason to worry about diversity on this dimension. In departments where political orientation may be relevant, such as political science, how much would diversity improve education? To what extent would students come away with a more diverse education if they studied at a more politically diverse institution? I doubt anyone can answer that question effectively now, and until someone goes out and gathers the relevant data, all of the posturing by conservatives seems misguided to me.

"Data not available until we finish our book."

The recent publication of a study by Rothman, Lichter, and Nevitte which, they argue, demonstrates not only that there is a lack of diversity among American university faculty, but that this lack of diversity is due, in part, to discrimination, has caused some blogospheric hubbub. Predictably, conservatives are saying "d'uh!" as liberals cry foul. But what does the study really show? I decided to see for myself.

First, I read the paper, but it is fairly stingy on the details, so I thought I'd get ahold of the data myself. So, I emailed the first author of the paper, asking for the data. Anyone in science will know that it's pretty common for scientists to ask each other for the data behind published papers, and in fact, it's generally accepted that such data should be made available. However, I was brushed off by Rothman with a one-sentence response: "Data not available until we finish our book."

So, without the data, I'm forced to take the authors at their word (there's no information about the peer review process for the journal, The Forum, at the journal's website). And what the authors say about the size of the disparity between "liberal/left-leaning" professors and "conservative/right-leaning" professors is interesting. They report that in their sample of 1,643 faculty from 183 American universities, 72% are left-leaning/liberal (as measured by self-report on a 10-point scale), and 15% are right-leaning/conservative (in the general population, the corresponding figures are 18% liberal and 37% conservative). Furthermore, 59% identified themselves as Democrats, and 11% as Republicans. This left-right disparity is most pronounced in the humanities and social sciences, as one might expect, but exists even in the natural sciences, and to a much lesser extent, in engineering and business departments. They have a nice table in which the percentages are broken down by department, if you're interested (it's on p. 6 of the pdf linked above).

But as I've said before, showing disparity is one thing, explaining it is another. If you believe that ideological diversity is important in academia, and you want to do something about it, then you have to understand why disparities exist. With that in mind, the authors used multiple regression (much as I proposed, though they used far too few predictor variables, and their dependent variable is highly problematic; see the link at the beginning of the paragraph for my suggestion) to measure the effect of several variables, including political orientation, on academic success (as measured by the quality of the school at which a professor is employed). What they found in this analysis is striking... in its worthlessness. They conducted three different regressions, one using "political ideology" (as measured by answers to several questions about political issues), one using party affiliation, and one using "left-right self designation" as predictor variables. Each of the regressions also included variables such as gender, religion (Christian or Jewish), race, sexual orientation, and a combination of several measures of academic achievement, including the number of publications, time spent conducting research, membership on editorial boards, and attendance of international conferences. If discrimination is at play, then the three measures of political attitudes (ideology, party affiliation, and left-right self designation) should account for a significant amount of the variance in academic success even after the effects of the other variables (gender, religion, race, academic achievement, etc.) are accounted for.
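For readers who don't do regression for a living, here's a minimal sketch (in Python, and emphatically not the authors' actual code or data) of the kind of analysis described above. The file name and column names (school_quality, ideology, achievement_index, and so on) are hypothetical stand-ins for the authors' variables:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per faculty member, with the predictors the
# authors describe and the dependent variable (quality of the employing school).
df = pd.read_csv("faculty_survey.csv")  # made-up file name, for illustration only

# One regression per political measure; 'ideology' here could be swapped for
# party affiliation or left-right self designation to reproduce the three models.
model = smf.ols(
    "school_quality ~ ideology + gender + religion + race "
    "+ sexual_orientation + achievement_index",
    data=df,
).fit()

# Look at the size of each coefficient, not just whether its p-value is small.
print(model.summary())

The design choice to run three separate regressions (one per political measure) rather than entering them all at once matches the description above; with predictors that correlated, entering them together would make the individual coefficients even harder to interpret.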

After conducting the three regressions, the authors conclude that discrimination based on political leanings is at play. Why? Well, they got statistically significant coefficients (at the .01 or .0001 level) for both party affiliation and "political ideology" (the regression coefficient for left-right self designation, which is the only measure of political leanings to receive its own table in the paper, and overall receives the most attention in the first half of the paper, is not significant!). But with N's so big, it's not surprising that coefficients would be significant. A more interesting question is, what is the effect size? Well, there's a straightforward way to estimate that: squaring the standardized (β) regression coefficients gives a rough measure of the proportion of variance each variable uniquely accounts for. How much of the variance do "political ideology" and "party affiliation" explain? Less than 1% (β = .086 and .073 for the "ideology" and affiliation regressions, respectively). Less than 1%! (By way of contrast, the achievement index accounts for around 15% of the variance). So is there discrimination? Apparently, but not a whole hell of a lot. Interestingly, if we were to conclude from very small but statistically significant coefficients (obtained with a large N and only a few parameters) that ideology-based discrimination exists, we would also have to admit that institution-wide gender-based discrimination exists. In other words, conservatives who want to use this study to argue that ideology-based discrimination exists will have to change their tune on the existence of gender-based discrimination, as gender accounted for almost as much of the variance in academic success as political ideology (less than 1%, β = .06).
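To make the effect-size point concrete, here is the back-of-the-envelope arithmetic, under the assumption (used in the paragraph above) that squaring a standardized β coefficient approximates a predictor's unique share of the variance. The β values are the ones reported in the study; the achievement figure is inferred from the roughly-15% claim rather than reported directly:

# Rough effect-size arithmetic: squared standardized coefficients as
# approximate variance shares. Betas are those reported in the study.
betas = {
    "political ideology": 0.086,
    "party affiliation": 0.073,
    "gender": 0.06,
}
for name, beta in betas.items():
    print(f"{name}: beta = {beta}, approx. variance explained = {beta**2:.2%}")

# Output: roughly 0.74%, 0.53%, and 0.36% -- all well under 1%.
# By contrast, a ~15% variance share for the achievement index would
# correspond to a beta of about 0.39.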

P.S. I wonder if David Horowitz will say that the multiple regression portion of this study is pointless, or if he will use it as more evidence that discrimination exists. If he does the latter, then I have to wonder why he thought my suggestion of a nearly identical study (though I had planned to use more predictor and dependent variables) was a "ridiculous exercise."

On March Madness: A Personal Note

This blog will likely receive its 20,000th visitor today, and I thought that in honor of this momentous occasion (right up there with the day that I finally learned that there is an 'n' in 'wednesday'), I would offer up something personal. I don't do that very often, mostly because I'm pretty sure no one cares. But today, someone cares, namely me.

I am a college basketball fan. One might even say that I am a college basketball fanatic (the word "fan" might imply that, but its etymology is disputed). My passion for college basketball is primarily directed at the basketball team of one school in particular, my alma mater. Now, my alma mater is a national power in basketball, and won two national championships while I was a student there, but they can't win it all every year. (Or maybe they could, if it weren't for those darn refs! and some bad coaching decisions! and what was that player thinking?! and... and... well, as the school's most famous coach once said, "I never lost a game; I just ran out of time.") That means that in some years -- the fewer, the better -- they aren't playing in April. It is in these years that my passion for the sport itself is tested. You see, my love for the game itself is challenged by my despair at the fact that my team's season is now over. The despair so overrides the love that I was unable to watch the national semifinals on Saturday (though the fact that two of my school's most hated rivals were playing might have had something to do with that, too).

For me, this raises a deep philosophical, spiritual, ethical, and really quite trivial issue. When does my love for University of __________ (as if the quote above didn't give away the name of the school) basketball become counterproductive? When am I too emotionally involved? I mean, I couldn't even go into a sports bar, or get some wings down the street -- and boy did I want some wings -- on Saturday, because I didn't want to be forced to watch those games on television. It's true, I could have gone to some other type of bar, or ordered the wings to go, but can you see me in a non-sports bar (just shake your head "no"), and are wings to go really wings? My love for __________ basketball is clearly depriving me of pleasure and peace of mind, and as philosophers from Aristotle to Larry Flynt have noted, pleasure and peace of mind are good (sometimes Good is capitalized, when speaking of it in connection with pleasure, but philosophers are weird like that).

So as of today, I have made a decision. I am not going to get so emotionally attached to next year's __________ team. Of course, I make this decision every year after a gut-wrenching loss in the tournament, but this year I really mean it. I've already limited the number of articles I read about incoming freshmen, prospects for next year, projected rankings, etc., to five a day. I think that's a good start. By next season I should be so emotionally detached that come April 2006, if they're not playing (and why wouldn't they be?! I mean, they've got four returning starters, including two of their leading scorers and their point guard, along with two former McDonald's All-Americans coming off the bench as sophomores!), I will be able to eat wings or drink a beer anywhere I want on the Saturday of the semifinals. Really, I feel like this is a new me, a better me, a more emotionally balanced me. Now, if only November would come so that I can see another __________ game. I'm jonesin', man.

Sunday, April 03, 2005

Higher-Brain Death Cartoon

The higher-brain death view, in cartoon form (from the Philadelphia Inquirer):

Via blog.bioethics.net.

Self-Perpetuating Paradigms: How Scientists Deal With Unexpected Results

Previously, I discussed Kevin Dunbar's research on the use of analogy in politics. However, Dunbar is better known in cognitive psychology for his in vivo work on scientific cognition. I'll get to the basics of that work, but first I want to say something about unexpected results and paradigms. The connections between all of these will be apparent in a moment.

Over at Universal Acid, Andrew reports on research findings described in Nature that are apparently irreconcilable with the "traditional Mendelian paradigm" in biology. I won't get into the findings themselves, because I know jack about genetics, and Andrew explains them fairly well in his post. What I find interesting are Andrew's observations about the treatment of unexpected results, or in this case, results that don't fit within the accepted paradigm (which is pretty much what "unexpected results" means in science). He writes:
[T]he uncertainties involved in actual experiments are so huge that there is no way you can make sense of anything without a paradigm. For anything beyond the most simple of experiments, you have to take a lot of things for granted. And the truth is that experiments fail so often that it is quite sensible to dismiss experiments that don't work as expected as unexplainable failure rather than a blow to the reigning paradigm, and to pursue more productive lines of research instead.
He then applies these insights to the irreconcilable results from the Nature article (note that he uses an analogy to his own work, which is important), writing:
This is the kind of bizarre finding that, if you are too stuck within a paradigm, you just toss out as some weird error. I used to work in a fruit fly lab, and we constantly worried about contamination of fly lines (i.e., if a stray fly sneaks its way in while you are transferring flies from one bottle to another). If you had a bottle full of fly mutants with white eyes (the wildtype is red), and suddenly one day some red eyes (or orange eyes, which is the heterozygous color) appeared, you'd think the stock got contaminated, and you'd just throw out the red-eyed flies (or, more likely, the entire stock).
Why do scientists handle unexpected results in this way? Why, when they get results that are inconsistent with the "traditional paradigm," are they so quick to assume (and, as Andrew notes, it is usually just an assumption) that experimenter error, or something equally theoretically innocuous, is to blame? The answer to that question is likely to be pretty complicated, touching on several aspects of the interplay between human cognition and scientific contexts. I won't attempt to provide a complete answer here (if I had a complete answer, I'd publish it for real, not in a blog post!), but I will provide a part of the answer, which can be summed up in one word, a word that happens to be one of my favorites: analogy.

But now I'm getting ahead of myself. Before I can say why analogy is a part (and likely a big part) of the answer to the question of why scientists treat unexpected results the way they do, I should first go back to Dunbar's work. Dunbar's in vivo method is really quite impressive. He initially chose molecular biology as the field to study, because, as he put it1,
Many of the brightest and most creative minds in science are attracted to this field, and molecular biology has now taken over the biological and medical sciences as the major way of theorizing and as a set of methodologies. As a consequence, the field of molecular biology is undergoing an immense period of scientific discovery and breakthroughs, making it an ideal domain within which to investigate the scientific discovery process. (p. 119)
After familiarizing himself with much of the literature, consulting experts in the field, and interviewing the members of several labs, he chose four to study extensively (he's since studied more than ten, in the U.S., Canada, and Italy). While most of the data he collected came from lab meetings, where scientists usually present research and discuss it, he also spent an unbelievable amount of time with the members of the labs, interviewing, observing, and just hanging out with them. Using the data he gathers in the in vivo settings, Dunbar develops hypotheses that he then tests using experimental methods, the designs of which are also influenced by his in vivo observations. Given the time and effort it took to do the in vivo studies, it's no wonder most of us lazy cognitive psychologists just spend our time running experiments in our own labs.

The most interesting (to my mind, and since I'm the one writing this, my mind is the only one that really counts) insights gained from Dunbar's research have been in the area of analogy. It's been widely recognized for centuries that analogical reasoning is important, even ubiquitous, in scientific thinking. Kepler famously used several analogies2 to arrive at his laws of planetary motion (for example, his analogy between light and motion3), and Rutherford's atom-solar system analogy has been used in science education (and in cognitive psychologists' papers on analogy) for decades. But until Dunbar came along, no one had systematically studied the use of analogy among scientists. Sure, there have been some "case studies," but scientists' own reports of their use of analogy (in their writings, e.g.) are pretty much worthless, because people are terrible at remembering the analogies they use. So Dunbar's work has been invaluable.

One of the ways in which Dunbar's work has helped to paint a picture of the scientific use of analogies is by showing under what circumstances scientists are likely to use analogies. Not only do scientists use analogies frequently (he coded 99 uses of analogy in 16 lab meetings4), but they tend to use them for the same purposes. Dunbar describes four typical uses of analogy: to formulate hypotheses, design an experiment, fix an experiment, or explain a result5. It's the last of these that I'll talk about here.

Unexpected findings are extremely common in scientific research. Anyone who's ever conducted actual research can tell you that at least as often as not, experiments "don't work," i.e., you don't get the results you were expecting. In fact, in the labs that Dunbar has observed, more than 50% of the experiments have yielded unexpected results6. And in almost every case Dunbar observed, scientists use analogies to try to explain these unexpected findings. The types of analogies that scientists use to explain unexpected findings differ depending on how many unexpected findings there are. In the case of unexpected findings from a single experiment, scientists tend to use "local analogies," or analogies to highly similar experiments. Dunbar writes (Dunbar, 2001):
Following an unexpected finding, the scientists frequently draw analogies to other experiments that have yielded similar results under similar conditions, often with the same organism... Using local analogies is the first type of analogical reasoning that scientists use when they obtain unexpected findings, and is an important part of dealing with such findings. Note that by making the analogy they also find a solution to the problem. For example, the analogy between two experiments with similar bands ["yes in my experiment I got a band that looked like that, I think it might be degradation...I did... and the band went away."]... led one scientist to use the same method as the other scientist and the problem was solved. (p. 316)
When there is a series of unexpected findings, the types of analogies that scientists use change. The analogies used under these circumstances tend to involve source domains that are quite different from the target domain (the unexpected finding). In other words, scientists stop using local analogies. Dunbar writes (Dunbar, 2001):
In this situation [a series of unexpected results], they drew analogies to different types of mechanisms and models in other organisms rather than making analogies to the same organism. This also involves making analogies to research outside their lab. The scientists switched from using local analogies to more distant analogies, but still within the domain of biology. For example, a scientist working on a novel type of bacterium might say that "IF3 in ecoli works like this, maybe our gene is doing the same thing."
By now, you probably see where I am going with this. When scientists see unexpected results, their first instinct is to go back to experiments in which they received similar results (in most cases, also unexpected), and use the explanations from those experiments to explain the current results. If experimenter error was at play in the previous experiment (as in the example that Dunbar gives in the first quoted paragraph above), then experimenter error will be used to explain the unexpected results of the current experiment. And even if we obtain a whole series of unexpected results, while our analogies will be to more (conceptually) distant research, our explanations are still likely to be based on the explanations given for previous findings. As is often the case, analogies are serving as schemas, or templates, from which we can derive explanations. And while in most cases this practice is very productive (as Andrew notes, somewhat sarcastically), it can also be pernicious. It can cause us to miss potential alternative explanations for unexplained results. Perhaps experimenter error was not the reason for the unexplained result (in this experiment, or even in the previous experiment that is used as the source in the analogy). Perhaps there's something really important going on, which isn't explained by the current dominant paradigm. But because of the way the human (and that means scientific) mind works, our first recourse is to look for answers in previous research, and previous research is almost always conducted within the dominant paradigm. Thus, our answers, even for unexpected findings, will tend to be consistent with the dominant paradigm.

So once again, we have an example of the schema-driven mind at work. The paradigm itself serves as a schema, determining what is expected (and therefore, what we're likely to find), but it also spawns little schemas, in the form of previous experiments and analogies to those experiments, which determine how we interpret unexpected (and potentially paradigm-inconsistent) results. In this way, paradigms and theories, like all schemas, are self-perpetuating (note that in a way this all sounds like talk of memes, and I'll just briefly mention here, and perhaps explain in more detail later, that I think memes are really just schemas, and explainable using ordinary cognitive mechanisms). Of course, we can eventually get beyond our schemas, and come up with new explanations, new theories, and new paradigms. The example that Andrew discusses in his post demonstrates this. The scientists kept getting their unexpected result, and in all likelihood, none of the analogies they made to previous findings held up under the scrutiny of further experimentation. So they had to step outside of their paradigm, outside of their schema, and come up with a new explanation. But as Andrew's example from his own research shows, this shedding of schemas is not easy to do, and thus in the grand scheme of things, it is quite rare.


1Dunbar, K. (2001). What scientific thinking reveals about the nature of cognition. In K. Crowley, C.D. Schunn, & T. Okada (Eds.), Designing for Science: Implications From Everyday, Classroom, and Professional Settings (pp. 115-140). Mahwah, NJ: Lawrence Erlbaum Associates.
2 Kepler once wrote, "I cherish more than anything else the Analogies, my most trustworthy masters. They know all the secrets of Nature, and they ought to be least neglected in Geometry."
3 "Let us suppose, then, as is highly probable, that motion is dispensed by the Sun in the same proportion as light. Now the ratio in which light spreading out from a center is weakened is stated by the opticians. For the amount of light in a small circle is the same as the amount of light or of the solar rays in the great one. Hence, as it is more concentrated in the small circle, and more thinly spread in the great one, the measure of this thinning out must be sought in the actual ratio of the circles, both for light and for the moving power." From Kepler, J . (1956/1981). Mysterium cosmographicum 1,11(A . M. Duncan, Trans.) . (2nd ed.) . New York: Abaris Books . As quoted in Gentner, et al. (1997). Analogical reasoning and conceptual change: A case study of Johannes Kepler. The Journal of the Learning Sciences, 6(1), 3-40.
4Dunbar, K. (1999). How scientists build models: In vivo science as a window on the scientific mind. In L. Magnani, N. Nersessian, & P. Thagard (eds.), Model-Based Reasoning in Scientific Discovery.
5 Ibid.
6 Dunbar, K. (2001). The analogical paradox: Why analogy is so easy in naturalistic settings, yet so difficult in the psychological laboratory. In D. Gentner, K.J. Holyoak, & B. Kokinov (Eds.), The Analogical Mind: Perspectives from Cognitive Science, pp. 313-334. By the way, this is an excellent collection of essays, and I highly recommend it to anyone who's interested in the cognitive scientific study of analogy.

Schiavo, Feeding, and Bulimia

This is by far the most interesting and original perspective on the Schiavo case that I have read. An excerpt:
Theresa Schiavo spent more than a decade fighting an eating disorder. As millions recoil in horror at the fact that she died from the removal of a feeding tube, the irony that a woman who was plagued by food should die in that way has been lost. Ms. Schiavo entered her persistent vegetative state, in all likelihood, as a result of a heart attack brought on by her struggle with weight.

But when beautiful people, dressed in clothes too tiny to fit most Americans, host one program after another in which Terri Schiavo is fashioned as a vulnerable symbol of death by starvation, it is all too easy to miss the fact that Terri Schiavo did her 'starving' twenty years ago.

I am still disgusted at all the attention that the Schiavo case got, especially from liberals. I can't help feeling that all that attention, energy, and effort could have been spent on more important issues. Of course, the ethical issues surrounding death, dying, and the quality of life are important, but the Schiavo case was a particularly odd one to receive so much attention. The right to die is accepted by most, regardless of political or religious affiliations. Furthermore, the one important ethical question about death and dying that this case could have raised, euthanasia, was almost completely ignored by both sides.

But Terri's life, and death, could also have been used to raise awareness of another important problem, that of eating disorders. In all the talk of whether Michael Schiavo was to blame for her PVS, her death, or for not remaining faithful for the 15 years that she remained irreversibly unconscious, Terri's bulimia got lost in the mix. Reporters couldn't even bring themselves to admit that it was the likely cause of her death. They always qualified any reference to the role of her eating disorder in her death with things like, "According to Michael Schiavo, her brain damage was the result of a heart attack suffered due to a potassium imbalance caused by her bulimia." According to Michael Schiavo! As if there weren't a medical history of her disorder. As if he'd come up with the diagnosis all by himself. If you can't bring yourself to admit that an eating disorder had something to do with her death, or even that she had one, then you can't use her death to raise awareness of eating disorders. Instead, her bulimia gets swept under the rug, as eating disorders almost always do. To me, that's just tragic.