Friday, January 27, 2006

The Intellectual Teeth of the Mind

Early one morning this week, I received an email about a radio program in Massachusetts called Radio Open Source, which aired a program that evening on TheEdge.org's question, "What is your dangerous idea?" (I believe you can listen to the program at any time by following that link.) I'm sure some of you who commented on the question received an email as well (Razib was even quoted on the show). The email mentioned that they were going to be interviewing Steven Pinker and others, so while I was sitting around doing some work, I listened to the program. It was actually pretty interesting. The "others" included Jesse Bering, Daniel Dennett, and Steven Strogatz. I'll try to get to Bering's answer, and his work (which is interesting as well), in a future post, but for now I want to concentrate on something that the answers given by Dennett and Strogatz reminded me of. If you recall, Dennett's answer to TheEdge.org's question was about memes (big surprise). He said, in essence, that our minds are being inundated with memes, and that pretty soon, if it isn't the case already, there will be more memes than we can handle. Strogatz' answer is related, though perhaps in a non-obvious way. Strogatz, a mathematician, wrote:
I worry that insight is becoming impossible, at least at the frontiers of mathematics. Even when we're able to figure out what's true or false, we're less and less able to understand why.
Anyone who's been around cognitive scientists for long will know that this is not uncommon, though perhaps for different reasons than in mathematics. Many times I've heard cognitive psychologists discuss a surprising finding, only to admit that they have no idea what's going on. My favorite examples are some strange approach-avoidance phenomena, such as the tendency to be slower in moving negative words towards your name than away, and positive words away from your name than towards it (the finding that underlies the Evaluative Movement Assessment test, which is similar to the infamous Implicit Association Test). I've heard this phenomenon described as "voodoo" by intelligent scientists. When that's the best explanation a scientist can come up with, you know that "understanding" is nowhere to be found.
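To make the kind of effect I'm describing a little more concrete, here is a minimal sketch in Python of how the relevant difference score might be computed. The reaction times and trial structure below are entirely invented for illustration; nothing here is code or data from an actual EMA study.

    from statistics import mean

    # Invented reaction times (in ms) for four trial types in an EMA-style task,
    # in which participants move words toward or away from their own name.
    trials = {
        ("positive", "toward"): [520, 540, 510],
        ("positive", "away"):   [600, 590, 610],
        ("negative", "toward"): [605, 615, 595],
        ("negative", "away"):   [530, 525, 545],
    }

    # "Compatible" movements: positive-toward and negative-away.
    # "Incompatible" movements: positive-away and negative-toward.
    compatible = trials[("positive", "toward")] + trials[("negative", "away")]
    incompatible = trials[("positive", "away")] + trials[("negative", "toward")]

    # The finding is simply that this difference is reliably positive: people are
    # slower to move negative words toward their name, and positive words away
    # from it, than the reverse.
    effect = mean(incompatible) - mean(compatible)
    print(f"Approach-avoidance compatibility effect: {effect:.1f} ms")

Computing the score is trivial; saying why it exists is the part that earns the "voodoo" label.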

It may not be immediately obvious why, but both Dennett's and Strogatz' answers reminded me of one of my favorite concepts in cognitive psychology, the illusion of explanatory depth. I'm generally fascinated with anything that shows just how little we know about our individual selves (and man, do we know very little!), and the illusion of explanatory depth is a great example of a lack of self-knowledge. The idea behind the illusion of explanatory depth (and it may be a dangerous one) is simply that there are many cases in which we think we know what's going on, but we don't. There are many great examples in cognitive psychology (e.g., psychological essentialism, in which we believe that our concepts have definitions, but when pressed, learn that either they do not have definitions or we don't have conscious access to those definitions), but you don't have to look to scientific research to find them. If you ask 100 people on the street whether they know how a toilet's flushing mechanism works, many, if not most, will tell you, "Of course I do!" But if you then ask them to explain it, you will quickly find that they really have no idea how a toilet's flushing mechanism works. This is the illusion of explanatory depth. They know that when they push down on the flusher, the water leaves the bowl and then fills back up, but they don't know how this happens; they only think they do.

Let me describe some of the research on the illusion of explanatory depth (henceforth, IOED), and then I'll try to connect it with The Edge answers given by Dennett and Strogatz. The first systematic study of the IOED was undertaken by Rozenblit and Keil1. In a series of studies, they demonstrated the existence of the IOED by having participants rate their level of knowledge for several devices over time. After the first rating, participants were asked to write a detailed explanation of how the devices work, and then gave another rating. They were then asked to answer detailed diagnostic questions about the devices, and gave another rating. Finally, they were given detailed explanations of how the devices actually function, re-rated what their level of knowledge had been prior to receiving the explanations, and then rated their current, post-explanation level of knowledge. In each of several experiments, participants' confidence in their own explanatory knowledge decreased across the experiment, and then rose again after receiving an actual explanation. In other words, as participants were forced to explore their knowledge of a device, by writing explanations and answering questions, they realized that they had initially overrated their level of explanatory knowledge. Thus, their ratings of their own knowledge dropped significantly.
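To make the design concrete, here is a minimal sketch in Python of the pattern the procedure is looking for. The ratings below are invented (on a hypothetical 1-7 scale); nothing here comes from Rozenblit and Keil's actual data or materials.

    import statistics

    # Invented self-ratings of explanatory knowledge for a handful of
    # participants at the five time points of a Rozenblit & Keil-style study:
    # T1 = initial rating, T2 = after writing an explanation, T3 = after
    # answering diagnostic questions, T4 = retrospective re-rating of one's
    # initial knowledge, T5 = rating after reading an expert explanation.
    ratings = {
        "T1": [5, 6, 4, 5],
        "T2": [3, 4, 3, 3],
        "T3": [3, 3, 2, 3],
        "T4": [2, 3, 2, 2],
        "T5": [4, 5, 4, 4],
    }

    means = {t: statistics.mean(vals) for t, vals in ratings.items()}
    for t, m in means.items():
        print(f"{t}: mean self-rated knowledge = {m:.2f}")

    # The signature of the IOED is the drop from T1 to T2 (confidence falls once
    # people actually try to write an explanation), followed by a rebound at T5
    # (after reading a real explanation).
    print(f"IOED drop (T1 - T2): {means['T1'] - means['T2']:.2f}")
    print(f"Rebound after explanation (T5 - T4): {means['T5'] - means['T4']:.2f}")

The interesting psychology, of course, is in why the T1 ratings are inflated in the first place, which is what the rest of their experiments address.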

Figure 1 from Rozenblit & Keil (2002), representing ratings of explanatory knowledge over time. The drop in knowledge from T1 to T2 is evidence of the illusion of explanatory depth. The rise in knowledge at T5 occurred after receiving a detailed explanation.

After demonstrating the existence of overconfidence in explanatory knowledge, the IOED, Rozenblit and Keil performed several more experiments designed to determine what factors influenced the IOED. First, they showed that for facts and stories, there was no drop in participants' confidence in their knowledge. Thus, the illusion that we have more knowledge than we actually do appears to be limited to explanations. They also found that devices with more visible than hidden parts were more likely to elicit the IOED than devices with mostly hidden parts. From these results, they hypothesized that at least three factors influence the IOED. They are:
  • "Confusing environmental support with representation": The fact that devices with more visible parts lead to greater overconfidence in explanatory knowledge suggests that people may be relying on visual information about a device, but without realizing that they are doing so. They mistakenly believe that the information about the workings of the device that they can get from the visual environment is represented in their minds, when in fact it is not.
  • "Levels of analysis confusion": Explanations that involve complex causal connections often allow for multiple levels of analysis. As you look at parts within parts, you can find multiple embedded causal chains. People resist overconfidence in their knowledge of facts and stories because, in general, they have no causal connections (as in some facts) or only one or two levels of causal relations. This makes it difficult to mistakenly believe we have deeper knowledge than we actually do. However, people may know how to explain a device (e.g., a toilet flusher) at one level, and mistakenly believe that they understand how it functions at deeper levels as a result.
  • "Indeterminate end state": As a result of the existence of multiple possible levels of analysis, it may be difficult to determine when we have sufficient knowledge of the workings of complex devices. Not knowing when we have sufficient explanatory knowledge makes it difficult to evaluate that knowledge. This in turn may lead to overconfidence. Stories, on the other hand, have determinate beginning and ends, and thus evaluating our knowledge of stories should be easier, leading to less overconfidence, a fact confirmed in their experiments.
In a later set of studies, Mills and Keil2 found that the IOED appears in children as young as seven years, and that the same factors appear to be at play in the IOED for children as for adults. To sum up, then, the IOED exists for explanations that involve multiple relations between parts, particularly causal relations, but not for more surface-level knowledge (e.g., facts, stories, and simple procedures), and it shows up fairly early in childhood. It appears to be caused, in large part, by mistaken beliefs about visual information vs. internally represented information, confusion about levels of analysis, and difficulty determining the end state of an explanation.

What does all of this have to do with Dennett's and Strogatz' answers to TheEdge.org's question? I'll start by trying to connect it with Strogatz' answer. While Strogatz is specifically discussing problems about which we know the facts but do not know the explanations for those facts, and are keenly aware of our ignorance, there seems to be a growing number of cases in science in which we know the facts but aren't really aware that we don't know much about the explanations for those facts. I suspect that this is largely a product of the increasing specialization of the sciences. As the sciences become more and more specialized, with people studying sub-areas of sub-areas of sub-areas within a particular scientific discipline, it becomes very easy for individual scientists to mistakenly believe that because they know the basics about a particular finding, they understand the finding. The more specialized scientific knowledge becomes, the more levels of analysis it is possible to use, and the more difficult it is to determine the end state of an analysis. Furthermore, while individual scientists may read much of the literature on a particular topic that is not their primary area of study, their knowledge of that topic is likely dependent on external information (the papers they've read, their notes, etc.), and not fully represented internally. All of this may contribute to the IOED for scientists.

Dennett's answer concerns the massive influx of information that we receive every day. In his interview, Dennett talks about an "intellectual sweet tooth" that leads us to continually seek out more easily-obtained knowledge. Dennett worries that this will lead to an information overload. I, on the other hand, worry that what it leads to is incredibly shallow knowledge, along with rampant illusions of explanatory depth. As we take in so much knowledge, we rely more and more on external representations (e.g., blog posts by bloggers who are experts in a particular domain). As the Rozenblit and Keil studies show, this reliance on external representations is one of the primary factors, if not the primary factor, in the formation of illusions of explanatory depth. In essence, the information age is producing an entire society of dilettantes who don't fully realize that they are, in fact, dilettantes.

Interestingly, the connections between the IOED and the answers of Dennett and Strogatz both indicate just how dependent we are becoming on what Keil calls the "division of cognitive labor"3 (a broader version of Putnam's division of linguistic labor). More and more, we rely on the knowledge of others to supplement our own, often without being aware that we are doing so.

1 Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521-562.
2 Mills, C.M., & Keil, F.C. (2004). Knowing the limits of one's understanding: The dvelopment of an awareness of an illusion of explanatory depth. Journal of Experimental Child Psychology, 87, 1-32.

3 Keil, F.C. (2003). Categorisation, causation, and the limits of understanding. Language and Cognitive Processes, 18 (5-6), 663 - 692.

Comments:

Anonymous said...

strange approach-avoidance phenomena, such as the tendency to be slower in moving negative words towards your name than away, and positive words away from your name than towards it...I've heard this phenomenon described as "voodoo" by intelligent scientists.

Why is this strange? Presumably no one thinks it's strange that we want nifty things close to us and icky things far away, so I assume the strangeness lies in the fact that we identify our selves with our names. But why is that strange? I'm not arguing that an utterly rational being would identify itself with its name, assuming it had one. But human beings aren't, in my experience, utterly rational.

His mother told him 'some day you will be a man,
And you will be the leader of a big ol' band
Many people comin' from miles around,
To hear you play your music when the sun go down,
Maybe some day your name will be in lights, sayin
'Johnny B. Goode' tonight

--Chuck Berry

Chris said...

Now come up with a mechanism for how it works.

Clark Goble said...

Very interesting. The illusion of depth is something I think is fairly common in science, where familiarity is confused with understanding. I've seen it a lot in my own field of physics. Of course, especially in the 19th century, this was common with "phenomenological" theories like thermodynamics prior to statistical mechanics.

Your point about explaining is good. I remember that even back in college you often felt you understood something, and sometimes didn't realize you really didn't until exam time. Because of that, I always made sure we had someone who was having difficulty in class in our study groups. If you couldn't explain things to them, you didn't really understand. It was very helpful.

BTW - what do you think about memes? I confess to hating the idea on many levels.

Chris said...

Clark, I'm not a big fan of memes either. I'm not sure what, if anything, the concept of a meme explains. Even as a metaphor (to genes), it seems to say little more than "some ideas/concepts/sayings/whatever will persist over time, while some won't," to which I want to reply, "duh." Honestly, I'll develop some respect for the meme concept the day it is used to make a prediction that can be experimentally tested. Until then, "meme" just looks like a somewhat clever neologism to me.

Chris said...

Anon, I agree (and I have read that paper). I've said here before (though I'm too lazy to look up where) that I think current cognitive accounts of representations, communication, social cognition, etc., are perfectly capable of accounting for the "something" that persists and propagates, without ever referencing the concept of a "meme."

Anonymous said...

"I'm not a big fan of memes either. I'm not sure what, if anything, the concept of a meme explains."

Maybe the current definition of memes is not entirely appropriate; still, there is no doubt that "something" persists and propagates.
For an enhanced approach to memes, see Liane Gabora:
Gabora, L. (2004). "Ideas are not replicators but minds are." Biology & Philosophy 19(1), 127-143
http://www.vub.ac.be/CLEA/liane/Publications.htm

Chris said...

crmj, you're right, a little knowledge helps you to gain more knowledge. The problem is, we don't always realize that we don't have that extra knowledge.

And the IAT is infamous because its creators, and the press, have made some pretty outrageous claims about what it can do and what it tells us.

PlatoHagel said...

While I am familiar with the meme term as well, I tried to see how it might be applicable to our psychology. From a child's perspective it made sense to me.

Of course I tried to make more of it, and saw the ball on the water as the "mind" as it swings from tree to tree. A struggle to bring the adult out in all situations. It's never that easy, being as human as we all are.

Anonymous said...

And do whiter teeth mean brighter mind?:)

Anonymous said...

I loved Strogatz' answer, especially this part: "When the End of Insight comes, the nature of explanation in science will change forever. We'll be stuck in an age of authoritarianism, except it'll no longer be coming from politics or religious dogma, but from science itself."

Cheers,
Ed
