Friday, January 27, 2006

The Intellectual Teeth of the Mind

Early one morning this week, I received an email about Radio Open Source, a radio program in Massachusetts, which aired a show that evening on TheEdge.org's question, "What is your dangerous idea?" (I believe you can listen to the program at any time by following that link). I'm sure some of you who commented on the question received an email as well (Razib was even quoted on the show). The email mentioned that they were going to be interviewing Steven Pinker and others. So while I was sitting around doing some work, I listened to the program. It was actually pretty interesting. The "others" included Jesse Bering, Daniel Dennett, and Steven Strogatz. I'll try to get to Bering's answer, and his work (which is interesting as well), in a future post, but for now I want to concentrate on something that the answers given by Dennett and Strogatz reminded me of. If you recall, Dennett's answer to TheEdge.org's question was about memes (big surprise). He said, in essence, that our minds are being inundated with memes, and pretty soon, if it isn't the case already, there will be more memes than we can handle. Strogatz' answer is related, though perhaps in a non-obvious way. Strogatz, a mathematician, wrote:
I worry that insight is becoming impossible, at least at the frontiers of mathematics. Even when we're able to figure out what's true or false, we're less and less able to understand why.
Anyone who's been around cognitive scientists for long will know that this is not uncommon, though perhaps for different reasons than in mathematics. Many times I've heard cognitive psychologists discuss a surprising finding, only to admit that they have no idea what's going on. My favorite examples are some strange approach-avoidance phenomena, such as the tendency to be slower to move negative words toward your name than away from it, and positive words away from your name than toward it (the finding that underlies the Evaluative Movement Assessment test, which is similar to the infamous Implicit Association Test). I've heard this phenomenon described as "voodoo" by intelligent scientists. When that's the best explanation a scientist can come up with, you know that "understanding" is nowhere to be found.

It may not be immediately obvious why, but both Dennett's and Strogatz' answers reminded me of one of my favorite concepts in cognitive psychology, the illusion of explanatory depth. I'm generally fascinated with anything that shows just how little we know about our individual selves (and man, do we know very little!), and the illusion of explanatory depth is a great example of a lack of self-knowledge. The idea behind the illusion of explanatory depth (and it may be a dangerous one) is simply that there are many cases in which we think we know what's going on, but we don't. There are many great examples in cognitive psychology (e.g., psychological essentialism, in which we believe that our concepts have definitions, but when pressed, learn that either they do not have definitions, or we don't have conscious access to those definitions), but you don't have to look to scientific research to find them. If you ask 100 people on the street if they know how a toilet's flushing mechanism works, many, if not most, will tell you "Of course I do!" But if you then ask them to explain it, you will quickly find that they really have no idea. This is the illusion of explanatory depth. They know that when they push down on the flusher, the water leaves the bowl and then fills back up, but they don't know how this happens; they only think they do.

Let me describe some of the research on the illusion of explanatory depth (henceforth, IOED), and then I'll try to connect it with The Edge answers given by Dennett and Strogatz. The first systematic study of the IOED was undertaken by Rozenblit and Keil1. In a series of studies, they demonstrated the existence of the IOED by having participants rate their level of knowledge for several devices over time. After the first rating, participants were asked to write a detailed explanation of how the devices work, and then gave another rating. They were then asked to answer detailed diagnostic questions about the devices, and gave another rating. Finally, they were given detailed explanations of how the devices function, and re-rated their level of knowledge prior to receiving the explanations. They were then asked to rate their current, post-explanation level of knowledge. In each of several experiments, participants' confidence in their own explanatory knowledge decreased across the experiment, and then rose again after receiving an actual explanation. In other words, as participants were forced to explore their knowledge of a device, by writing explanations and answering questions, they realized that they had initially overrated their level of explanatory knowledge. Thus, their ratings of their own knowledge dropped significantly.

Figure 1 from Rozenblit & Keil (2002), representing ratings of explanatory knowledge over time. The drop in knowledge from T1 to T2 is evidence of the illusion of explanatory depth. The rise in knowledge at T5 occurred after receiving a detailed explanation.
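To make the procedure concrete, here is a minimal sketch in Python. The numbers are hypothetical, chosen only to mimic the qualitative pattern in the figure (a drop from T1 to T2, a rise at T5); they are not the study's actual means.

```python
# Hypothetical mean knowledge ratings (1-7 scale) at the five time
# points described above -- made-up numbers, not Rozenblit & Keil's data.
ratings = [
    ("T1: initial self-rating",              4.1),
    ("T2: after writing an explanation",     3.0),
    ("T3: after diagnostic questions",       2.8),
    ("T4: re-rating of initial knowledge",   2.6),
    ("T5: after reading a real explanation", 4.5),
]

# Print a crude text plot of the rating trajectory.
for label, mean in ratings:
    print(f"{label:40s} {mean:.1f} {'#' * round(mean * 4)}")

# The IOED signature: confidence drops once an explanation must be produced.
drop = ratings[0][1] - ratings[1][1]
print(f"\nDrop from T1 to T2: {drop:.1f} points")
```

The point of the sketch is just the shape of the curve: being forced to actually produce an explanation (T2) deflates the confident initial rating (T1), and only a real explanation (T5) restores it.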

After demonstrating the existence of overconfidence in explanatory knowledge, the IOED, Rozenblit and Keil performed several more experiments designed to determine what factors influenced the IOED. First, they showed that for facts and stories, there was no drop in participants' confidence in their knowledge. Thus, the illusion that we have more knowledge than we actually do appears to be limited to explanations. They also found that devices with more visible than hidden parts were more likely to elicit the IOED than devices with mostly hidden parts. From these results, they hypothesize that at least three factors influence the IOED. They are:
  • "Confusing environmental support with representation": The fact that devices with more visible parts lead to greater overconfidence in explanatory knowledge suggests that people may be relying on visual information about a device, but without realizing that they are doing so. They mistakenly believe that the information about the workings of the device that they can get from the visual environment is represented in their minds, when in fact it is not.
  • "Levels of analysis confusion": Explanations that involve complex causal connections often allow for multiple levels of analysis. As you look at parts within parts, you can find multiple embedded causal chains. People resist overconfidence in their knowledge of facts and stories because, in general, they have no causal connections (as in some facts) or only one or two levels of causal relations. This makes it difficult to mistakenly believe we have deeper knowledge than we actually do. However, people may know how to explain a device (e.g., a toilet flusher) at one level, and mistakenly believe that they understand how it functions at deeper levels as a result.
  • "Indeterminate end state": As a result of the existence of multiple possible levels of analysis, it may be difficult to determine when we have sufficient knowledge of the workings of complex devices. Not knowing when we have sufficient explanatory knowledge makes it difficult to evaluate that knowledge. This in turn may lead to overconfidence. Stories, on the other hand, have determinate beginning and ends, and thus evaluating our knowledge of stories should be easier, leading to less overconfidence, a fact confirmed in their experiments.
In a later set of studies, Mills and Keil2 found that the IOED appears in children as young as seven years, and that the same factors appear to be at play in the IOED for children as for adults. To sum up, then, the IOED exists for explanations that involve multiple relations between parts, particularly causal relations, but not for more surface knowledge (e.g., facts, stories, and simple procedures), and it shows up fairly early in childhood. It appears to be caused, in large part, by mistaken beliefs about visual information vs. internally represented information, confusion about levels of analysis, and difficulty determining the end state of an explanation.

What does all of this have to do with Dennett and Strogatz' answers to TheEdge.org's question? I'll start by trying to connect it with Strogatz' answer. While Strogatz is specifically discussing problems about which we know the facts, but do not know the explanations for those facts, and are keenly aware of our ignorance, there seems to be a growing number of cases in science in which we know the facts, but aren't really aware that we don't know much about the explanations for those facts. I suspect that this is largely a product of the increasing specialization of the sciences. As the sciences become more and more specialized, with people studying sub-areas of sub-areas of sub-areas within a particular scientific discipline, it becomes very easy for individual scientists to mistakenly believe that because they know the basics about a particular finding, they understand the finding. The more specialized scientific knowledge becomes, the more levels of analysis it is possible to use, and the more difficult it is to determine the end state of an analysis. Furthermore, while individual scientists may read much of the literature on a particular topic that is not their primary area of study, their knowledge of that topic is likely dependent on external information (the papers they've read, their notes, etc.), and not fully represented internally. All of this may contribute to the IOED for scientists.

Dennett's answer concerns the massive influx of information that we receive every day. In his interview, Dennett talks about an "intellectual sweet tooth" that leads us to continually seek out more easily-obtained knowledge. Dennett worries that this will lead to an information overload. I, on the other hand, worry that what it leads to is incredibly shallow knowledge, along with rampant illusions of explanatory depth. As we take in so much knowledge, we rely more and more on external representations (e.g., blog posts by bloggers who are experts in a particular domain). As the Rozenblit and Keil studies show, this reliance on external representations is one of the primary factors, if not the primary factor, in the formation of illusions of explanatory depth. In essence, the information age is producing an entire society of dilettantes who don't fully realize that they are, in fact, dilettantes.

Interestingly, the connections between the IOED and the answers of Dennett and Strogatz both indicate just how dependent we are becoming on what Keil calls the "division of cognitive labor"3 (a broader version of Putnam's division of linguistic labor). More and more, we rely on the knowledge of others to supplement our own, often without being aware that we are doing so.

1 Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521-562.
2 Mills, C.M., & Keil, F.C. (2004). Knowing the limits of one's understanding: The development of an awareness of an illusion of explanatory depth. Journal of Experimental Child Psychology, 87, 1-32.
3 Keil, F.C. (2003). Categorisation, causation, and the limits of understanding. Language and Cognitive Processes, 18(5-6), 663-692.

Thursday, January 26, 2006

Diversity in Academia Debate

At Legal Affairs, there's a debate on "ideological diversity" in law schools between Peter H. Schuck and Brian Leiter. As you may know, the "ideological diversity" debate is one of my favorite topics, but even I had trouble wading through the debate, because it is just terrible. Schuck's answers, specifically, stink. For example, after listing statistics from some studies showing that law faculty at several institutions gave more to Democratic than Republican candidates, he writes:
And for those who enjoy irony, it is amusing to learn that the faculties that most loudly proclaim their commitment to diversity do not exemplify it in the very area, viewpoint, that is (or should be) most central to their professional mission.
Honestly, I don't know how anyone can say that with a straight face. Never mind the fact that Schuck clearly knows nothing about sampling bias, as the figures he uses come from 29% of the faculty at the surveyed institutions (a fact that Leiter notes). Since when has bivariate analysis of political contributions said anything profound about diversity of viewpoints? Does Schuck really think that all viewpoints are either Democratic or Republican (or liberal and conservative, as though contributing to Democrats necessarily indicates liberalism, and contributing to Republicans necessarily indicates conservatism)? Does he not understand that both within and outside of those two categories, there is a wide range of viewpoints? And what's more, since when has ensuring that Democrats and Republicans are both well represented been the professional mission of anyone? Unless they're teaching a course on contemporary politics, I can't imagine it's an academic's mission to do so.

When Leiter makes roughly these points in response, Schuck replies:
I too would prefer rigorous social science studies, but until they arrive, I feel fairly confident in relying on the ones I cited plus my personal observations and knowledge of the political leanings of elite faculties over a 25-year period.
That's exactly why Schuck's idea of "viewpoint diversity" fails in law schools, and in any other academic institution. Relying on a bunch of extremely flawed, potentially irrelevant studies, combined with the answer "I know what I've seen," just doesn't cut it.

Sunday, January 22, 2006

That Octopus Is Clearly a Capricorn

Via Lindsay of Majikthise and David Velleman of Left2Right, I came across this article by Charles Siebert on animal personalities in the New York Times magazine. David focuses on the "duh" factor of the article, writing:
Long around paragraph 30, however, the author, Charles Siewart [sic], finally admits that the findings he reports have a "prodigious 'duh factor'". What scientists have found is that some individual octopus or spiders are more aggressive than others; some are more cautious than others; some are more sociable than others; and so on. Anyone who has owned more than one cat, hamster, or goldfish knows that.

Nor has anyone ever thought that in displaying behavioral traits like aggressiveness or caution, non-human animals are poaching on human territory. If having a personality is just a matter of being passive or aggressive, rash or cautious, sociable or shy, then personality is hardly "that one aspect of the self we have long thought to be exclusively and quintessentially ours".
Of course, the scientists studying animal personality traits knew that animals exhibit personality differences. This is what led them to study them in the first place. The idea is that the study of animal personalities might provide some insight into human personalities.

Lindsay focused on the discussion of octopus personalities, because she's apparently having a bit of an ethical crisis over eating the poor mollusks. She quotes some interesting descriptions of octopus personalities from the article. I liked these in particular:
One particularly temperamental G.P.O. [Giant Pacific Octopus] so disliked having his tank cleaned, he would keep grabbing the cleaning tools, trying to pull them into the tank, his skin going a bright red. Another took to regularly soaking one of the aquarium's female night biologists with the water funnel octopus normally use to propel themselves, because he didn't like it when she shined her flashlight into his tank. Yet another G.P.O. of the Leisure Suit Larry mold once tried to pull into his tank a BBC videographer who got her hand a bit too close, wrapping his tentacles up and down her arm as fast as she could unravel them. When she finally broke free, the octopus turned a bright red and doused her with repeated jets of water.
I can't help but also think of the shark-eating giant octopus. That one clearly ranked high on the aggressiveness scale.

Anyway, I discussed this research earlier, focusing on a study of hyenas, in a post last March. In that post, you'll find a link to a review of the research on personality in several species, including primates, octopus and even guppies (but not David's goldfish). Enjoy.

Tuesday, January 17, 2006

Fido on Pinker on Race

I've been meaning to link to Fido the Yak, because the posts are consistently good, and if nothing else, I want to remember to go read them. The recent post on Pinker's "dangerous idea" makes for a nice linking opportunity. First of all, Fido gets Pinker exactly right. As I've said many times, Pinker has a nasty habit of speaking authoritatively about topics on which he is anything but an authority (like, say, gender differences in mathematical ability). And Fido also links to this very informative forum on race and genomics, titled "Is Race 'Real?'" Like Pinker, I'm not an expert in genomics, or anything remotely related to genetics, but unlike Pinker, I'm not going to comment on the issues discussed in the forum as though I am an expert. You should just go read the articles, which were written by people who are, in fact, experts. But the best part of the post is its discussion of "conventional wisdom" vs. "common sense." Fido writes:
[W]hat's this business about going against conventional wisdom in favor of common sense [in Pinker's comments on the biology of race]? Is that particularly scientific, or even reasonable? Common sense tells us that the sun rises in the East and sets in the West. Conventional wisdom among astronomers, at least since Copernicus, is that the earth orbits the sun while rotating on its axis once every twenty-four hours or so (a period astronomers call "mean solar time"--go figure). The common sense view of sunrises and sunsets is not invalidated by conventional astronomical wisdom, although with advances in technology, we see that in some regards common sense, like conventional wisdom, is open to revision. The common sense view is rooted in the experiential world, encompassing certain facts of perception like the way we inhabit perspectives, the way we pattern our everyday activities in accordance with environmental, cultural and physiological regularities, and also some hard physical realities like being relatively puny bipeds dwelling on the surface of a planet that stretches farther than the eye can see.
And ends the post with:
The antithesis to "conventional wisdom," I've decided, is not common sense at all, but "invidious stupidity." Whatever problem you're having with conventional wisdom, invidious stupidity is not likely to solve it. That's just common sense.
That's just good stuff. And it means I can finally get my link to Fido in.

Truthiness (Slight Reprise)

In a comment to the previous post, MathCogIdiocy wrote:
I've now returned to this post several times in the past day in an effort to understand how people can conceive of something that is not fact as being true.
Besides making me self-conscious at the thought of someone reading one of my blog posts more than once, the comment made me realize that I hadn't been very clear. It's not often the case that people hold a belief they recognize as counterfactual and still consider it true. People do have blatantly counterfactual beliefs, but in most cases, it's unlikely that they recognize them as counterfactual. More often, people just aren't worried about the facticity of their beliefs. Unless their beliefs are threatened, there's little reason for them to be. So people may believe things that are not "fact," not because they consciously rebel against their counterfactuality, but because they just don't know that what they believe is counter to fact.

When I think of the emphasis on "truthiness" over "factiness," I think of the classic memory studies in which active representations determine what information we process and remember from a situation. My favorite of these is Anderson and Pichert's study, in which they showed that the perspectives people took when reading a story determined which information they remembered from the story1. It's pretty easy to see how this is relevant to politics, since people often bring very different perspectives to political discussions. But in addition to the influence of active representations, there are other factors at play. And serendipitously, Cognitive Daily has a recent post (via Clark) describing recent research exploring one of these factors. You can read a detailed description of the research in that post, or check out the paper. I'll just give a quick summary.

In three experiments, Preston and Epley looked at the effects of two types of information, explanations for beliefs (observations that explain the beliefs) and applications of beliefs (observations that the beliefs explain), on the importance of three types of beliefs: novel beliefs, the beliefs of others, and "cherished beliefs" (e.g., religious beliefs). In all three experiments, they found that when participants were asked to list applications of the beliefs, they rated the beliefs to be more important than when they listed explanations for the beliefs. Preston and Epley argue that this is because the importance of a belief is directly related to how many observations it can explain. One of the interesting implications of this position is that new facts or beliefs that can explain the same observations diminish the importance of currently held beliefs. They write:
We also believe these experiments can help account for people’s resistance to explanations for their cherished beliefs. Those of religious faith, for example, seem threatened when scientific explanations—such as evolution—are offered for observations otherwise explained by religious concepts or when psychological concepts are used to explain religious belief itself. Even if these explanations do not impinge on the core tenets of a religious ideology, they may nevertheless seem to devalue religious beliefs, and lead to an intense resistance to such explanations. Indeed, the history of science and religion is replete with examples of such resistance. In some cases, it may be so intense that believers wish to avoid the search for underlying explanations altogether.
When you combine this with the fact that the very observations that we want to explain are heavily influenced by our beliefs, it's easy to see how "factiness" can suffer, even when we're not avoiding the search for explanations.
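Preston and Epley's logic lends itself to a toy model. The sketch below is my own illustration, not their formalism: the function and the example observations are made up, but they capture the idea that a belief's felt importance tracks the observations it alone explains, so a rival explanation can devalue a belief without contradicting it.

```python
# Toy model (my illustration, not Preston & Epley's formalism):
# a belief's importance is the number of observations it explains
# that no rival explanation also covers.
def importance(explained_by_belief: set, explained_by_rival: set) -> int:
    return len(explained_by_belief - explained_by_rival)

observations = {"apparent design in nature", "moral intuitions", "religious experience"}

print(importance(observations, set()))                          # 3: no rival explanation
print(importance(observations, {"apparent design in nature"}))  # 2: a rival covers one observation
print(importance(observations, set(observations)))              # 0: a full rival account
```

On this toy account, a new explanation can "threaten" a cherished belief merely by covering observations the belief used to explain exclusively, which is exactly the resistance Preston and Epley describe.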

1 Anderson, R.C., & Pichert, J.W. (1978). Recall of previously unrecallable information following a shift in perspective. Journal of Verbal Learning and Verbal Behavior, 17, 1-12.

Friday, January 13, 2006

Truthiness

As you all probably know by now, the American Dialect Society announced that "truthiness," a word coined by Stephen Colbert on the first episode of his Daily Show spinoff, is the 2005 word of the year. Here's the segment from The Colbert Report, in which he gives his "word of the day":
Truthiness. Now I'm sure some of the Word Police, the wordanistas over at Webster's, are gonna say, "Hey, that's not a word." Well, anybody who knows me knows that I'm no fan of dictionaries or reference books. They're elitist. Constantly telling us what is or isn't true, or what did or didn't happen. Who's Britannica to tell me the Panama Canal was finished in 1914? If I wanna say it happened in 1941, that's my right. I don't trust books. They're all fact, no heart.
"Wordanistas," by the way, should definitely have been the runner up for word of the year. Anyway, the linguists have already spent a lot of time discussing Colbert's new word, pointing out, among other things, that it's not actually a new word, though Colbert's meaning is different from the OED's established (and from what I can tell, rarely used) meaning. Michael Adams, a linguist at North Carolina State University, defined truthiness, in the Colbertian sense, as something that is "truthy, not facty."

Now that the linguists have had their say, it's time for the cognitive psychologists to speak about truthiness. "Truthy, not facty" strikes me as a pretty good way of describing the way we usually think about things. Most of the time, when we're thinking about the world, we're not trying to determine whether the information we're receiving from it is factual, but instead working to integrate it with the representations we've already got. The information that we're likely to notice, and keep, is just the information that fits with those representations, regardless of whether that information happens to fit with the facts. If something is truthy because it fits with our beliefs, but not with facts, then a lot of what we'll end up believing will be truthy, not facty. It's because of this that you get things like cognitive dissonance or the confirmation bias. Those involve searching for and emphasizing truthiness over factiness.

The preference for truthiness over factiness is also one of the things behind the sorts of rhetorical strategies that underlie something like Lakoff's framing analysis. The factiness of any particular political or ethical issue can be interpreted in many different ways depending on the way one thinks and talks about the truthiness of those issues. It can't be a coincidence, then, that Colbert's coining of the term was meant as a spoof of the anti-intellectualism that's rampant on the right side of the political spectrum. That anti-intellectualism is all about spinning the truthiness of an issue to cause people to either ignore the issue's factiness, or interpret that factiness in a certain way. It's a direct manipulation of the way people naturally think. And Colbert's, or his writers', recognition of this fact makes me wonder if he or they took a few cognitive psychology courses in college. Or perhaps they're just naturally insightful in the ways of the mind. Either way, since the word is so descriptive of what's going on when people reason about the world, it's only a matter of time before "truthiness" makes its way into the cognition literature.

Tuesday, January 03, 2006

At Least One of Them Really Is Dangerous

Just minutes after I finished writing the previous post on "dangerous ideas," I stumbled across this post (via Pandagon) on a NYT opinion article by David Brooks. Unfortunately, the David Brooks article is available by subscription only, but the part of the article quoted in the post has exactly what I needed: a dangerous misuse of one of the ideas mentioned in the "dangerous ideas" post. Here's Brooks:
Her third mistake is to not even grapple with the fact that men and women are wired differently. The Larry Summers flap produced an outpouring of work on the neurological differences between men and women. I'd especially recommend "The Inequality Taboo" by Charles Murray in Commentary and a debate between Steven Pinker and Elizabeth Spelke in the online magazine Edge.

One of the findings of this research is that men are more interested in things and abstract rules while women are more interested in people. (You can come up with your own Darwinian explanation as to why.)
Oh look, there's Simon Baron-Cohen again, this time in the mouth of Brooks, helping a misogynist hack to argue that women should stay home with the family instead of having careers. But there are two problems with the way Brooks uses this. First, "abstract rules" and "systematizing" aren't quite the same things. I don't know of any recent researcher who has argued that men are more interested in abstract rules specifically. I suppose it depends on what kinds of rules we're talking about.

The second problem is that there's really no good evidence for Baron-Cohen's systematizing-empathizing distinction, and there aren't really any other explanations in cognitive science of the old stereotype that men like objects and women like people that get any attention. Elizabeth Spelke did a good job of describing the research that bears on Baron-Cohen's theory, and pointing out how none of it really supported his theory, in her in-press paper on sex differences in math ability, which you can read here (the discussion of B-C's work is in the first section after the introduction, beginning on p. 5). Thus, Brooks is using a theory with no empirical support to argue for his misogynistic beliefs.

And you can't really blame Brooks; he's a rabid misogynist, and he will look for anything to justify that. You have to blame Baron-Cohen, and people like Steven Pinker, who make these claims public before they've undergone rigorous scientific scrutiny. Once they're out there, it's damn near impossible to get rid of them, no matter how many scientists come forward to say that it turns out the evidence tells a different story. Responsible scientists don't build large theories on one or two unreplicated studies, and then spend a great deal of time talking about them to the media, or writing books for laypeople about them, all the while ignoring a wealth of conflicting evidence. This, I think, is the main reason why people like me have such a strong dislike towards Evolutionary Psychologists, a dislike that goes beyond simply believing that their work is subpar. They are dangerous, not because of what they say, but because of whom they say it to.

The Dangerous Ideas of Cognitive Scientists

Last year, The Edge asked 120 scientists what they believed to be true but could not prove. I covered the answers of the cognitive scientists and people from related disciplines here, even though they were mostly uninteresting. So, I thought it only appropriate that I cover the answers to this year's question, which may be even sillier than last year's (it was suggested by Steven Pinker, I should note, though I will not comment on whether that has anything to do with its silliness). The question for 2006 is: What is your dangerous idea? I know, that probably seems pretty vague. Fortunately, they provided a longer version:
The history of science is replete with discoveries that were considered socially, morally, or emotionally dangerous in their time; the Copernican and Darwinian revolutions are the most obvious. What is your dangerous idea? An idea you think about (not necessarily one you originated) that is dangerous not because it is assumed to be false, but because it might be true?
Just as last year, many cognitive folk sent in their answers, and I'll try to get to all of them here. Like last year, I may get a bit snarky. You've been warned. Since there are so many, I should probably get started. This is going to be long, so take a deep breath.

V.S. Ramachandran:
First there was the Copernican system dethroning the earth as the center of the cosmos.

Second was the Darwinian revolution; the idea that far from being the climax of "intelligent design" we are merely neotenous apes that happen to be slightly cleverer than our cousins.

Third, the Freudian view that even though you claim to be "in charge" of your life, your behavior is in fact governed by a cauldron of drives and motives of which you are largely unconscious.

And fourth, the discovery of DNA and the genetic code with its implication (to quote James Watson) that "There are only molecules. Everything else is sociology".

To this list we can now add the fifth, the "neuroscience revolution" and its corollary pointed out by Crick — the "astonishing hypothesis" — that even our loftiest thoughts and aspirations are mere byproducts of neural activity. We are nothing but a pack of neurons.
I have to admit that I think Ramachandran (whom you may remember from posts on the cognitive science of art) is right, in the sense that this idea, which we know is true, is going to receive a lot of resistance from the general public and religious figures when it becomes clear that it is central to so much of neuroscience and psychology. I also think it's dangerous because it's going to be used by some to make unwarranted claims about human nature. For example...

David Buss (aka Rudyard Kipling):

When most people think of torturers, stalkers, robbers, rapists, and murderers, they imagine crazed drooling monsters with maniacal Charles Manson-like eyes. The calm normal-looking image staring back at you from the bathroom mirror reflects a truer representation. The dangerous idea is that all of us contain within our large brains adaptations whose functions are to commit despicable atrocities against our fellow humans — atrocities most would label evil.
This is followed by some stuff that really isn't worth copying and pasting, after which Buss ends with:
On reflection, the dangerous idea may not be that murder historically has been advantageous to the reproductive success of killers; nor that we all house homicidal circuits within our brains; nor even that all of us are lineal descendants of ancestors who murdered. The danger comes from people who refuse to recognize that there are dark sides of human nature that cannot be wished away by attributing them to the modern ills of culture, poverty, pathology, or exposure to media violence. The danger comes from failing to gaze into the mirror and come to grips with the capacity for evil in all of us.
Once again, I have to agree with this answerer: his idea is dangerous, but not for the reasons he gives. When someone who is seen as an authority on psychology, and on the role of evolution in modern human psychology in particular, claims that because people have killed for a long time (at least since Genghis Khan), often for selfish reasons, murder must be hard-wired into our brains, yet offers no evidence to support this, because he has none and is unlikely to obtain any, that's dangerous.

Paul Bloom:
I am interested in the... position that mental life has a purely material basis. The dangerous idea, then, is that Cartesian dualism is false. If what you mean by "soul" is something immaterial and immortal, something that exists independently of the brain, then souls do not exist. This is old hat for most psychologists and philosophers, the stuff of introductory lectures. But the rejection of the immaterial soul is unintuitive, unpopular, and, for some people, downright repulsive.

So Bloom, Ramachandran, and I all agree that this is a dangerous idea. Bloom even discusses the dangerousness of this idea relative to evolution, writing:
The rejection of souls is more dangerous than the idea that kept us so occupied in 2005 — evolution by natural selection. The battle between evolution and creationism is important for many reasons; it is where science takes a stand against superstition. But, like the origin of the universe, the origin of the species is an issue of great intellectual importance and little practical relevance. If everyone were to become a sophisticated Darwinian, our everyday lives would change very little. In contrast, the widespread rejection of the soul would have profound moral and legal consequences. It would also require people to rethink what happens when they die, and give up the idea (held by about 90% of Americans) that their souls will survive the death of their bodies and ascend to heaven. It is hard to get more dangerous than that.
Scott Atran:

In case you're wondering just why that's a dangerous idea, Atran gives us a good reason:
Ever since Edward Gibbon's Decline and Fall of the Roman Empire, scientists and secularly-minded scholars have been predicting the ultimate demise of religion. But, if anything, religious fervor is increasing across the world, including in the United States, the world's most economically powerful and scientifically advanced society. An underlying reason is that science treats humans and intentions only as incidental elements in the universe, whereas for religion they are central. Science is not particularly well-suited to deal with people's existential anxieties, including death, deception, sudden catastrophe, loneliness or longing for love or justice. It cannot tell us what we ought to do, only what we can do. Religion thrives because it addresses people's deepest emotional yearnings and society's foundational moral needs, perhaps even more so in complex and mobile societies that are increasingly divorced from nurturing family settings and long familiar environments.

From a scientific perspective of the overall structure and design of the physical universe:

1. Human beings are accidental and incidental products of the material development of the universe, almost wholly irrelevant and readily ignored in any general description of its functioning.

Beyond Earth, there is no intelligence — however alien or like our own — that is watching out for us or cares. We are alone.

2. Human intelligence and reason, which searches for the hidden traps and causes in our surroundings, evolved and will always remain leashed to our animal passions — in the struggle for survival, the quest for love, the yearning for social standing and belonging.

This intelligence does not easily suffer loneliness, anymore than it abides the looming prospect of death, whether individual or collective.

Religion is the hope that science is missing (something more in the endeavor to miss nothing).
Sure, Atran doesn't mention Ramachandran and Bloom's "dangerous idea," but it is just one way in which science encroaches on territory ordinarily reserved for religion, and inevitably, religion will fight back, as it always does.

Stephen Kosslyn:

This one gets a bit weird, and I'm not really sure what to say about it, so I'll just let you read it for yourselves at the link above. Here's how he starts out:
Here's an idea that many academics may find unsettling and dangerous: God exists. And here's another idea that many religious people may find unsettling and dangerous: God is not supernatural, but rather part of the natural order. Simply stating these ideas in the same breath invites them to scrape against each other, and sparks begin to fly. To avoid such conflict, Stephen Jay Gould famously argued that we should separate religion and science, treating them as distinct "magisteria." But science leads many of us to try to understand all that we encounter with a single, grand and glorious overarching framework. In this spirit, let me try to suggest one way in which the idea of a "supreme being" can fit into a scientific worldview.

I offer the following not to advocate the ideas, but rather simply to illustrate one (certainly not the only) way that the concept of God can be approached scientifically.
From there, he offers a pretty lengthy illustration. Again, I don't know what to make of it, but if you have something to say about it, feel free to do so in comments.

Daniel C. Dennett:
Dennett begins by telling us that he's saving his most dangerous ideas for a time when we can handle them. Thank you, Dan. After that, he gives us a merely "unsettling" idea:
The human population is still growing, but at nowhere near the rate that the population of memes is growing. There is competition for the limited space in human brains for memes, and something has to give. Thanks to our incessant and often technically brilliant efforts, and our apparently insatiable appetites for novelty, we have created an explosively growing flood of information, in all media, on all topics, in every genre. Now either (1) we will drown in this flood of information, or (2) we won't drown in it. Both alternatives are deeply disturbing. What do I mean by drowning? I mean that we will become psychologically overwhelmed, unable to cope, victimized by the glut and unable to make life-enhancing decisions in the face of an unimaginable surfeit. (I recall the brilliant scene in the film of Evelyn Waugh's dark comedy The Loved One in which embalmer Mr. Joyboy's gluttonous mother is found sprawled on the kitchen floor, helplessly wallowing in the bounty that has spilled from a capsized refrigerator.) We will be lost in the maze, preyed upon by whatever clever forces find ways of pumping money–or simply further memetic replications–out of our situation. (In The War of the Worlds, H. G. Wells sees that it might well be our germs, not our high-tech military contraptions, that subdue our alien invaders. Similarly, might our own minds succumb not to the devious manipulations of evil brainwashers and propagandists, but to nothing more than a swarm of irresistible ditties, Noûs nibbled to death by slogans and one-liners?)

If we don't drown, how will we cope? If we somehow learn to swim in the rising tide of the infosphere, that will entail that we–that is to say, our grandchildren and their grandchildren–become very very different from our recent ancestors. What will "we" be like? (Some years ago, Doug Hofstadter wrote a wonderful piece, "In 2093, Just Who Will Be We?" in which he imagines robots being created to have "human" values, robots that gradually take over the social roles of our biological descendants, who become stupider and less concerned with the things we value. If we could secure the welfare of just one of these groups, our children or our brainchildren, which group would we care about the most, with which group would we identify?)
This is actually something I've wondered about myself (I'm sure Dennett doesn't think he's the first to think of it). In an age when more and more information is thrown at us on a daily basis, how do we process enough of it to make relatively rational decisions? I'm even afraid that information overload is increasingly being used against us, by marketers, politicians, and so on, to make it more difficult for us to make decisions.

Even in science, this is a growing problem, as we learn so much that scientists themselves are forced to become increasingly specialized in order to keep up with the flow of information. We're forced to lose the forest for the trees, and inevitably scientists will come up with theories that look really good for their particular tree, but which, when looked at from a step or two back, clearly do not fit with the neighboring trees. I know this is already a problem in cognitive science, where modelers often come up with models focused on one or two particular processes, but have no way of integrating those models with all of the related processes. Thus it's never clear just how useful the models actually are.

Rodney Brooks:
The thing that I worry about most that may or may not be true is that perhaps the spontaneous transformation from non-living matter to living matter is extraordinarily unlikely. We know that it has happened once. But what if we gain lots of evidence over the next few decades that it happens very rarely.
I can't really figure out how this idea is dangerous. He goes on to say that it could mean we are alone in this galaxy, or even in this universe, but really, isn't that what humans have believed throughout most of their history? At least, haven't humans generally believed that except for the divine beings on Mount Olympus or wherever, humans are alone? So why is this dangerous? Because it would make alien invasion movies more unbelievable?

Alison Gopnik:

This is by far the best answer among the cognitive science people, so I'm going to reproduce it in its entirety.
It may not be good to encourage scientists to articulate dangerous ideas.

Good scientists, almost by definition, tend towards the contrarian and ornery, and nothing gives them more pleasure than holding to an unconventional idea in the face of opposition. Indeed, orneriness and contrarianism are something of currency for science — nobody wants to have an idea that everyone else has too. Scientists are always constructing a straw man "establishment" opponent who they can then fearlessly demolish. If you combine that with defying the conventional wisdom of non-scientists you have a recipe for a very distinctive kind of scientific smugness and self-righteousness. We scientists see this contrarian habit grinning back at us in a particularly hideous and distorted form when global warming opponents or intelligent design advocates invoke the unpopularity of their ideas as evidence that they should be accepted, or at least discussed.

The problem is exacerbated for public intellectuals. The media, too, would far rather hear about contrarian or unpopular or morally dubious or "controversial" ideas than ones that are congruent with everyday morality and wisdom. No one writes a newspaper article about a study that shows that girls are just as good at some task as boys, or that children are influenced by their parents.

It is certainly true that there is no reason that scientifically valid results should have morally comforting consequences — but there is no reason why they shouldn't either. Unpopularity or shock is no more a sign of truth than popularity is. More to the point, when scientists do have ideas that are potentially morally dangerous they should approach those ideas with hesitancy and humility. And they should do so in full recognition of the great human tragedy that, as Isaiah Berlin pointed out, there can be genuinely conflicting goods and that humans are often in situations of conflict for which there is no simple or obvious answer.

Truth and morality may indeed in some cases be competing values, but that is a tragedy, not a cause for self-congratulation. Humility and empathy come less easily to most scientists, most certainly including me, than pride and self-confidence, but perhaps for that very reason they are the virtues we should pursue.

This is, of course, itself a dangerous idea. Orneriness and contrarianism are in fact, genuine scientific virtues, too. And in the current profoundly anti-scientific political climate it is terribly dangerous to do anything that might give comfort to the enemies of science. But I think the peril to science actually doesn't lie in timidity or self-censorship. It is much more likely to lie in a cacophony of "controversy".
All I can say is, right on! So many of the scientific ideas that are seen as "dangerous" really aren't. Evolution is the obvious one. Unless you believe in a relatively recent, literalist interpretation of one book of the Bible, this really isn't a dangerous idea at all. Then there are the ideas that really are dangerous (see, e.g., Buss' answer above, and anything else he's ever written or said), because there is no evidence for them, but they're offered up anyway. A cynic would believe that people like Buss discuss these ideas because they know that the media will publicize them. I think you all know that I'm a cynic. That's probably why I like Gopnik's answer so much.

Simon Baron-Cohen:
So here's the dangerous new idea. What would it be like if our political chambers were based on the principles of empathizing? It is dangerous because it would mean a revolution in how we choose our politicians, how our political chambers govern, and how our politicians think and behave. We have never given such an alternative political process a chance. Might it be better and safer than what we currently have? Since empathy is about keeping in mind the thoughts and feelings of other people (not just your own), and being sensitive to another person's thoughts and feelings (not just riding rough-shod over them), it is clearly incompatible with notions of "doing battle with the opposition" and "defeating the opposition" in order to win and hold on to power.
In other words, under Baron-Cohen's view, what would it be like if women designed our political system? Of course, I'm not quite sure what it means to base an entire political system on the principles of empathizing, any more than I'm sure how one could explain our current political system using the principles of "systematizing," i.e., what men's brains do. But then again, the systematizing-empathizing dichotomy, even with the tiny bit of evidence that might be used to argue for it, is clearly an abstraction that oversimplifies a wide range of observed differences between individuals, so it would be asking too much of it to have it explain current or future political systems, whether they're designed by men, women, or just people in general.

Steven Pinker:
Groups of people may differ genetically in their average talents and temperaments.
I don't really have the stomach to copy and paste any more of Pinker's answer, because he basically uses it as a chance to say what he wants to say about the Summers affair and gender differences in mathematical ability, a topic on which he is completely wrong, because he's not familiar with the actual research. But I agree with him that it is a dangerous idea. It's hard to argue that it's not, given how heated the public debates on these issues are. Of course, part of the debate is about whether the ideas are dangerous because they're true, or dangerous because they're not true, yet "experts" like Pinker (note that Pinker has never done any actual empirical research on gender and math ability, or even in remotely related fields like, say, math ability in general, or cognitive development, or anything outside of psycholinguistics, and thus is hardly an expert) champion them while the ideas themselves do little more than reinforce stereotypes.

Richard E. Nisbett:
Do you know why you hired your most recent employee over the runner-up? Do you know why you bought your last pair of pajamas? Do you know what makes you happy and unhappy?

Don't be too sure. The most important thing that social psychologists have discovered over the last 50 years is that people are very unreliable informants about why they behaved as they did, made the judgment they did, or liked or disliked something. In short, we don't know nearly as much about what goes on in our heads as we think. In fact, for a shocking range of things, we don't know the answer to "Why did I?" any better than an observer.
I've got to say that I agree with this one, as well. The fact that the vast majority of our thoughts, decision processes, motivations, etc. are unconscious, and unavailable for introspection, not to mention mostly automated, is scary for a lot of reasons, not the least of which is the fact that it makes us feel like strangers to ourselves. And it is a fact.

Susan Blackmore:
We humans can, and do, make up our own purposes, but ultimately the universe has none. All the wonderfully complex, and beautifully designed things we see around us were built by the same purposeless process — evolution by natural selection. This includes everything from microbes and elephants to skyscrapers and computers, and even our own inner selves.
Susan Blackmore, ladies and gentleman, the world's leading evolutionary existentialist.

Marc D. Hauser:
The theory I propose is that human mental life is based on a few simple, abstract, yet expressively powerful rules or computations together with an instructive learning mechanism that prunes the range of possible systems of language, music, mathematics, art, and morality to a limited set of culturally expressed variants. In many ways, this view isn't new or radical. It stems from thinking about the seemingly constrained ways in which relatively open ended or generative systems of expression create both universal structure and limited variation.

Unfortunately, what appears to be a rather modest proposal on some counts, is dangerous on another. It is dangerous to those who abhor biologically grounded theories on the often misinterpreted perspective that biology determines our fate, derails free will, and erases the soul. But a look at systems other than the human mind makes it transparently clear that the argument from biological endowment does not entail any of these false inferences.
He goes on from there. But wait a minute, Marc! That's basically the same answer you gave to the question last year. If you're going to give the same empirically unsupported and either logically unsound or trivially true (depending on what, exactly, he means) answer every year, The Edge is going to have to find someone else to take your place on the panel... I hope.

Dan Sperber:

Sperber, who is always interesting, writes that the idea that culture is "natural," that it has evolved through natural processes, is a dangerous idea. His answer is very long, and so is this post, so I'll just let you go read it. If you're interested in Sperber's work, or the study of culture in general, you'll probably enjoy it.

So there you have it, the dangerous ideas of the cognitive scientists. There are some clear themes. For example, materialistic approaches to the mind are dangerous, and so are Evolutionary Psychologists, but for entirely different reasons. Despite the fact that this year's question was really dumb, I think the answers are much better than they were last year, overall. I think Gopnik's answer is clearly the best among the cognitive scientists, in part because it puts The Edge and its silly question on blast, but mostly because she's dead on. Some of the people above wouldn't even have careers if it weren't for the media and public's love for "dangerous" ideas. The love of the "dangerous" has become so prevalent that "sexy" ideas with little or no empirical support can get published in some cognitive science and social psychology journals. In addition, that love for "dangerous" science has also led the media and certain uneducated groups of people to treat ideas such as evolution as dangerous, even when they aren't. That means that good scientists, and good scientific ideas, have to struggle to win public relations debates against fear-mongering idiots in order to make sure that science wins out in the classroom and elsewhere. And that's just sad.