The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.
So, it's understandable that through most of the discipline's history, its practitioners have been fairly restrained in the breadth of their claims. Sure, there have been some lapses in judgment (for instance, Schank and Abelson came close, and the ACT theories came even closer, to being touted as general theories of human cognition), but most researchers, even in neuroscience (where restraint is not a common virtue), realize that the scientific study of the mind is in its infancy, and that we've only begun to scratch the surface of what's going on "in" there.
Enter the cognitive linguists. Suddenly, members of one group of cognitive scientists have begun to claim (and perhaps believe) that they have at least the makings of a general theory of human cognition. Hence book titles like The Way We Think1. Why do they think that? I'm not quite sure. They certainly don't seem to consider much of anything we know about perception, and god knows how blending (or image schemas, frames, conceptual metaphors, etc.) could explain something like this. Perhaps the boldness is seen as necessary, given the fact that cognitive linguistics is a fairly unheralded alternative to the primary paradigm in cognitive science, computationalism (and gets even less press among cognitive scientists than related alternatives, like dynamic systems theories). If you want to get noticed, you have to be a little flashy.
These theories are problematic, even if we forget for a moment that they talk about perception, movement, etc., without actually giving much thought to the research on these topics. The major concepts of cognitive linguistics are not very amenable to laboratory study, as Fauconnier and Turner note (e.g., The Way We Think, pp. 109-110). In general, the theories are arrived at and tested through linguistic (and sometimes psycholinguistic) evidence. In fact, if you look at the non-linguistic empirical evidence, you'll discover that, as far as we know, we don't think that way2. Still, the theories themselves are sometimes interesting, and I think they have one thing right: to most cognitive linguists, much, if not most, of higher-order cognition involves some sort of mapping between structured representations.
Unfortunately, even if they have this right, it's usually implemented in a way that doesn't do anyone much good. I've already discussed the details and problems of Lakoff's general theory (see here), so I thought I'd talk a little bit about another cognitive linguistic version of how we think. This version has many names (e.g., blending, network theory, conceptual integration, the many space model), but I just call it blending. Its primary proponents are Gilles Fauconnier and Mark Turner, but it has become a popular complementary (or competing) perspective alongside the conceptual metaphor view in cognitive linguistics (see this article for a comparison of blending and conceptual metaphors), and is even used in literary and art criticism these days3.
Blending begins with mental spaces. These are "small conceptual packets constructed as we think and talk, for purposes of local understanding and action"4. In the figure below, the circles represent mental spaces. There are several types of mental spaces; the most important are the input spaces, generic spaces, and blend spaces. The input spaces are the spaces between which there is to be a mapping. Generic spaces are built from the elements that the input spaces have in common. Blend spaces are a combination of the general information in the generic spaces and the specific information in the input spaces, as well as "emergent information" that is not contained in either the input or generic spaces. In essence, blend spaces represent the output of the mappings between input spaces.
In general, the operation of blending consists of abstracting the common information from the input spaces, placing it into the generic space, and then using one or more of three processes to create a blend space. The first type of process, composition, simply involves taking specific information from the two input spaces and placing it in the same blend space. The second, completion, involves using general knowledge to fill in the details of the information combined in the blend. The third, elaboration, "develops the blend through imaginative mental simulation according to principles and logic in the blend"5.
Figure 1. From Fauconnier & Turner (1998) Figure 6.
If you're like me, this all sounds a bit obscure, and the figure doesn't help much. Unfortunately, working all this out with examples doesn't really bring it down to earth either, but I'll try anyway. Afterwards, you let me know if you can figure out what the hell is going on in Figure 2 (the blend in the example), because I sure as hell can't.
Figure 2. From Grady et al. Figure 1. Click for larger view.
This figure represents the metaphor "This surgeon is a butcher." Who knew there were so many lines involved in this metaphor? Obviously, the input domains for this metaphor are the SURGEON domain and the BUTCHER domain. These two input spaces contain information about the particular surgeon being compared and about butchers in general (because it's a metaphor about a particular surgeon, this surgeon, and uses general knowledge of butchers for the comparison). The generic space contains abstract information common to the two spaces. For instance, both contain an agent (the surgeon and the butcher), an object (patient, meat), a "sharp instrument" (scalpel and butcher's knife), a place of work (O.R. and butcher shop), and a "procedure" (surgery and carving up meat). In the blend space, the surgeon is identified with the butcher role; the patient role remains the same; the cleaver and scalpel are associated, though ambiguously; the action takes place in the operating room; and the surgeon's procedure, which has the goal of healing the patient, becomes butchery, which leads to the inference of "incompetence."
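If it helps to see the machinery laid out more concretely, here is a toy sketch (my own, emphatically not part of Fauconnier and Turner's formal apparatus) that models mental spaces as Python dicts and walks the butcher metaphor through composition and completion. The role names, the `compose` function, and the hard-coded inference are all illustrative assumptions.

```python
# Toy model of a conceptual integration network for
# "This surgeon is a butcher." Mental spaces are dicts mapping
# abstract roles (the generic space) to concrete fillers.

surgeon = {  # input space 1: the particular surgeon
    "agent": "surgeon",
    "object": "patient",
    "instrument": "scalpel",
    "workplace": "operating room",
    "procedure": "surgery",
    "goal": "healing",
}

butcher = {  # input space 2: butchers in general
    "agent": "butcher",
    "object": "meat",
    "instrument": "cleaver",
    "workplace": "butcher shop",
    "procedure": "carving",
    "goal": "severing flesh",
}

# Generic space: the abstract roles the two inputs share.
generic = set(surgeon) & set(butcher)

def compose(space_a, space_b, selections):
    """Composition: project selected fillers from each input
    into a single blend space. `selections` says which input
    supplies the filler for each role ("a" or "b")."""
    return {role: (space_a if src == "a" else space_b)[role]
            for role, src in selections.items()}

# Build the blend: the surgeon fills the agent role in the
# surgeon's workplace, but the butcher's procedure and goal
# are projected in.
blend = compose(surgeon, butcher, {
    "agent": "a", "object": "a", "workplace": "a",
    "procedure": "b", "goal": "b",
})

# Completion: background knowledge supplies the emergent
# inference, which is contained in neither input space.
if blend["procedure"] == "carving" and blend["object"] == "patient":
    blend["inference"] = "incompetence"

print(sorted(generic))
print(blend["inference"])
```

Note that the punch line of the analysis, the "incompetence" inference, has to be supplied by an if-statement of my own devising: nothing in the two input spaces forces it, which is roughly the post hoc worry raised below.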
OK, did you get all that? It all sounds a bit more complex than the metaphor would seem to require, doesn't it? Well, one of Fauconnier and Turner's major talking points is that underneath the surface, beyond our awareness, complex processes are going on, so that their outputs, like this metaphor, seem much simpler than they really are. Still, it seems like we could probably come up with a simpler model for that metaphor. In fact, I think we already have one, but that's a topic for another day.

The purpose of this (admittedly fairly pointless) exposition has been to present, in brief, the details of another grand theory of human cognition from a cognitive linguist whose name is not George Lakoff. Even though it may not sound like it, I actually like this approach better than Lakoff's. That's because I think it says exactly what I would say about mappings, which I agree are central in human higher-order cognitive processes ranging from metaphor to deductive reasoning, just in a much different way. As it's formulated, though, it all looks a little bit loopy to me. It doesn't really make any predictions, either, and to be quite honest, I can't see a thing that's not post hoc in the entire exposition of the "butcher" metaphor. It's not surprising, then, that blending can be used to describe just about any cognitive output. You just have to get the output, draw some circles, write some stuff in the circles, and then draw some lines between the stuff. Despite all that, maybe you think it's more sane than I do, and perhaps you even agree with Lakoff that, combined with image schemas, frames, and the like, blends give us the complete picture of the "basic elements of thought." As for me, I, like Dickinson, am a little bit too awed by the mind to think that after forty years (or twenty, if you're a cognitive linguist), we've got the thing pinned down.
The brain is deeper than the sea,
For, hold them, blue to blue,
The one the other will absorb,
As sponges, buckets do.
The brain is just the weight of God,
For, lift them, pound for pound,
And they will differ, if they do,
As syllable from sound.
UPDATE: Towards the end of the post, I criticized blending analyses for being post hoc and unable to make predictions. Fauconnier and Turner actually anticipate this. I would summarize what they wrote, but I'm not sure you'd get the full effect, so here it is, from The Way We Think, p. 55:
We have so far given analyses of blends, but we have not framed our analyses in terms of prediction and confirmation.
Questions: Doesn't science involve making falsifiable predictions? What falsifiable predictions come from the theory of blending?
Our answers:
Actually, sciences like evolutionary biology are not about making falsifiable predictions regarding future events. Given the nature of the mental operations of blending, it would be nonsense to predict that from two inputs a certain blend must result or that a specific blend must arise at such-and-such a place and time. Human beings do not think that way. Nonetheless, in the strong sense, we hope to make many falsifiable predictions, including predictions about types of blending, what counts as a good or bad blend, how the formation of a blend depends on the local purpose, how forms prompt for blending, what possibilities there are for composing mappings, what possibilities there are for creating successive blends, how other cognitive operations (such as metonymy) are exploited during blending, and how categories are extended.
If an evolutionary biologist happened to read that, he or she would probably never read this blog again, so offended would he or she be. Of course evolutionary biology makes falsifiable predictions regarding future events. They probably mean that evolutionary biologists aren't going to try to predict what sorts of new species will evolve, which fits nicely with their admission that they're not going to try to predict specific blends from specific inputs. So far, though, there aren't many cognitive theories designed to do that sort of thing, so when people point out that, so far, all blending analyses have done is produce post hoc descriptions of already-formed outputs, and have not produced any falsifiable predictions, they are referring to the sorts of predictions that Fauconnier and Turner say they do think blending can make. So the question is, when are we going to get to see some of these predictions? And will these predictions require tests of blending that do more than demonstrate that blending can handle any data set whatsoever, because it's simply unfalsifiable? I'll wait with bated breath for the answers to these questions and many others.
1 Fauconnier and Turner are certainly not alone in believing that cognitive linguistics has discovered the fundamental secrets of the mind. Check out Lakoff's review of the book:
Over the last two decades, cognitive linguists have mapped out the basic elements of thought -- image-schemas, frames, conceptual metaphors and metonymies, prototypes, mental spaces. Now Fauconnier and Turner have filled in the last piece of the puzzle: conceptual blending, the mental mechanism that binds together and integrates these elements into complex ideas. The Way We Think is a dazzling tour of the complexities of human imagination. (emphasis mine)
2 Check out this book for an excellent review of what we know (and don't know) about concepts, and why Lakoff, e.g., is quite wrong about them.
3 See e.g., here and here.
4 From Fauconnier, G. & Turner, M. (1998). Conceptual Integration Networks. Cognitive Science, 22(2), 133-187.
5 Ibid.
3 comments:
There is nothing being referred to in what is called philosophy of mind, or really cognitive linguistics, except chemical processes. There is a bio-chemical apparatus--the brain. Eventually it will be completely mapped, and what do you bet it will have some structural similarities to a computer, or at least a computational model. I suggest you pass your Organic Chem. class before proceeding any further....
There may still be reasons to hold to forms of behaviorism, since it is unlikely neuroscience will ever be able to predict what the human organism will do simply from mental evidence, unless perhaps there is some neural mapping software, a scanner, attached to the brain.
Posted by nemesis
I didn't know the Churchlands read my blog! Thanks for that.
Posted by Chris
I've been searching the web for critiques of the cog sem theories of everything and you make some nice points. Steven Pinker has a bit to say in "The Stuff of Thought". The problem is that it's all a bit knockabout - any pointers to anything a bit more sustained?
One of the things that both Lakoff and Fauconnier seem to say is that utterances, words, sentences, etc. massively underspecify the thoughts that they are expressions of. They then go on to demonstrate - particularly Lakoff - that, with all the pre-conscious/unconscious/hidden/not-accessible-to-consciousness goings on that we are all apparently engaged in, the utterances/words/sentences that come out of our mouths are actually fully specified (indeed, over-specified). Curious.
Incidentally, "That surgeon is a butcher" seems to me to express a feeling more than it does a thought!?