> I worry that insight is becoming impossible, at least at the frontiers of mathematics. Even when we're able to figure out what's true or false, we're less and less able to understand why.

Anyone who's been around cognitive scientists for long will know that this is not uncommon, though perhaps for different reasons than in mathematics. Many times I've heard cognitive psychologists discuss a surprising finding, only to admit that they have no idea what's going on. My favorite examples are some strange approach-avoidance phenomena, such as the tendency to be slower to move negative words toward your name than away from it, and positive words away from your name than toward it (the finding that underlies the Evaluative Movement Assessment test, which is similar to the infamous Implicit Association Test). I've heard this phenomenon described as "voodoo" by intelligent scientists. When that's the best explanation a scientist can come up with, you know that "understanding" is nowhere to be found.
It may not be immediately obvious why, but both Dennett's and Strogatz' answers reminded me of one of my favorite concepts in cognitive psychology, the illusion of explanatory depth. I'm generally fascinated with anything that shows just how little we know about our individual selves (and man, do we know very little!), and the illusion of explanatory depth is a great example of a lack of self-knowledge. The idea behind the illusion of explanatory depth (and it may be a dangerous one) is simply that there are many cases in which we think we know what's going on, but we don't. There are many great examples in cognitive psychology (e.g., psychological essentialism, in which we believe that our concepts have definitions, but when pressed, learn that either they do not have definitions, or we don't have conscious access to those definitions), but you don't have to look to scientific research to find them. If you ask 100 people on the street whether they know how a toilet's flushing mechanism works, many, if not most, will tell you, "Of course I do!" But if you then ask them to explain it, you will quickly find that they really have no idea. This is the illusion of explanatory depth. They know that when they push down on the flusher, the water leaves the bowl and then fills back up, but they don't know how this happens; they only think they do.
Let me describe some of the research on the illusion of explanatory depth (henceforth, IOED), and then I'll try to connect it with the Edge answers given by Dennett and Strogatz. The first systematic study of the IOED was undertaken by Rozenblit and Keil¹. In a series of studies, they demonstrated the existence of the IOED by having participants rate their level of knowledge for several devices over time. After the first rating, participants were asked to write a detailed explanation of how the devices work, and then gave another rating. They were then asked to answer detailed diagnostic questions about the devices, and gave another rating. Finally, they were given detailed explanations of how the devices actually function, re-rated what their level of knowledge had been prior to receiving those explanations, and then rated their current, post-explanation level of knowledge. In each of several experiments, participants' confidence in their own explanatory knowledge decreased across the experiment, and then rose again after receiving an actual explanation. In other words, as participants were forced to explore their knowledge of a device, by writing explanations and answering questions, they realized that they had initially overrated their level of explanatory knowledge. Thus, their ratings of their own knowledge dropped significantly.
Figure 1 from Rozenblit & Keil (2002), representing ratings of explanatory knowledge over time. The drop in rated knowledge from T1 to T2 is evidence of the illusion of explanatory depth; the rise at T5 occurred after participants received a detailed explanation.
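To make the design concrete, here is a minimal sketch, in Python, of how self-ratings across the five time points might be tabulated. The numbers and participant labels are hypothetical placeholders for illustration only, not data from Rozenblit and Keil's studies; the phase labels simply follow the figure above.

```python
from statistics import mean

# Time points, following the figure above:
# T1: initial self-rating, T2: after writing an explanation,
# T3: after answering diagnostic questions,
# T4: retrospective re-rating of one's initial knowledge after reading an expert explanation,
# T5: rating of current, post-explanation knowledge.
PHASES = ["T1", "T2", "T3", "T4", "T5"]

# Hypothetical 1-7 ratings for three imaginary participants on one device.
ratings = {
    "participant_1": [5, 3, 2, 2, 4],
    "participant_2": [6, 4, 3, 3, 5],
    "participant_3": [4, 3, 3, 2, 4],
}

# Mean rating at each time point; the IOED shows up as a drop from T1 to T2,
# with a recovery at T5 once an actual explanation has been read.
for i, phase in enumerate(PHASES):
    print(phase, round(mean(r[i] for r in ratings.values()), 2))

# Average within-participant drop from T1 to T2, the signature of the illusion.
drop = mean(r[0] - r[1] for r in ratings.values())
print("Mean T1-to-T2 drop:", round(drop, 2))
```

The same tabulation applied to facts and stories rather than devices would, on Rozenblit and Keil's account, show little or no T1-to-T2 drop, which is the contrast their follow-up experiments turn on.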
After demonstrating the existence of overconfidence in explanatory knowledge, the IOED, Rozenblit and Keil performed several more experiments designed to determine what factors influenced the IOED. First, they showed that for facts and stories, there was no drop in participants' confidence in their knowledge. Thus, the illusion that we have more knowledge than we actually do appears to be limited to explanations. They also found that devices with more visible than hidden parts were more likely to elicit the IOED than devices with mostly hidden parts. From these results, they hypothesized that at least three factors influence the IOED. They are:
- "Confusing environmental support with representation": The fact that devices with more visible parts lead to greater overconfidence in explanatory knowledge suggests that people may be relying on visual information about a device, but without realizing that they are doing so. They mistakenly believe that the information about the workings of the device that they can get from the visual environment is represented in their minds, when in fact it is not.
- "Levels of analysis confusion": Explanations that involve complex causal connections often allow for multiple levels of analysis. As you look at parts within parts, you can find multiple embedded causal chains. People resist overconfidence in their knowledge of facts and stories because, in general, they have no causal connections (as in some facts) or only one or two levels of causal relations. This makes it difficult to mistakenly believe we have deeper knowledge than we actually do. However, people may know how to explain a device (e.g., a toilet flusher) at one level, and mistakenly believe that they understand how it functions at deeper levels as a result.
- "Indeterminate end state": As a result of the existence of multiple possible levels of analysis, it may be difficult to determine when we have sufficient knowledge of the workings of complex devices. Not knowing when we have sufficient explanatory knowledge makes it difficult to evaluate that knowledge. This in turn may lead to overconfidence. Stories, on the other hand, have determinate beginning and ends, and thus evaluating our knowledge of stories should be easier, leading to less overconfidence, a fact confirmed in their experiments.
What does all of this have to do with Dennett and Strogatz' answers to the Edge.org question? I'll start by trying to connect it with Strogatz' answer. While Strogatz is specifically discussing problems about which we know the facts, but do not know the explanations for those facts, and are keenly aware of our ignorance, there seems to be a growing number of cases in science in which we know the facts, but aren't really aware that we don't know much about the explanations for those facts. I suspect that this is largely a product of the increasing specialization of the sciences. As the sciences become more and more specialized, with people studying sub-areas of sub-areas of sub-areas within a particular scientific discipline, it becomes very easy for individual scientists to mistakenly believe that because they know the basics about a particular finding, they understand the finding. The more specialized scientific knowledge becomes, the more levels of analysis it is possible to use, and the more difficult it is to determine the end state of an analysis. Furthermore, while individual scientists may read much of the literature on a particular topic that is not their primary area of study, their knowledge of that topic is likely dependent on external information (the papers they've read, their notes, etc.), and not fully represented internally. All of this may contribute to the IOED for scientists.
Dennett's answer concerns the massive influx of information that we receive every day. In his interview, Dennett talks about an "intellectual sweet tooth" that leads us to continually seek out more easily obtained knowledge. Dennett worries that this will lead to an information overload. I, on the other hand, worry that what it leads to is incredibly shallow knowledge, along with rampant illusions of explanatory depth. As we take in so much knowledge, we rely more and more on external representations (e.g., blog posts by bloggers who are experts in a particular domain). As the Rozenblit and Keil studies show, this reliance on external representations is one of the primary factors, if not the primary factor, in the formation of illusions of explanatory depth. In essence, the information age is producing an entire society of dilettantes who don't fully realize that they are, in fact, dilettantes.
Interestingly, the connections between the IOED and the answers of Dennett and Strogatz both indicate just how dependent we are becoming on what Keil calls the "division of cognitive labor"³ (a broader version of Putnam's division of linguistic labor). More and more, we rely on the knowledge of others to supplement our own, often without being aware that we are doing so.
1 Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521-562.
2 Mills, C.M., & Keil, F.C. (2004). Knowing the limits of one's understanding: The dvelopment of an awareness of an illusion of explanatory depth. Journal of Experimental Child Psychology, 87, 1-32.
3 Keil, F.C. (2003). Categorisation, causation, and the limits of understanding. Language and Cognitive Processes, 18 (5-6), 663 - 692.