Thursday, January 06, 2005

What the Cognitive Scientists Believe

Recently, Edge.org, the "world question center," asked scientists and "science-minded thinkers" an interesting but vague question: What do you believe is true even though you cannot prove it? There are 120 answers from various types of scientists, and cognitive scientists/philosophers of mind are well represented in that list. I thought I'd link to a few of their answers. A warning: my commentary may be a bit snarky at times, but never where it's not deserved. Here are the answers, in no particular order (in most cases, the longer answers can be read by following the links):

Stephen Kosslyn:
These days, it seems obvious that the mind arises from the brain (not the heart, liver, or some other organ). In fact, I personally have gone so far as to claim that "the mind is what the brain does." But this notion does not preclude an unconventional idea: Your mind may arise not simply from your own brain, but in part from the brains of other people.
He goes on to describe the notion of "Social Prosthetic Systems," which operate on principles similar to those captured by the concept of extended mind that I've posted on before.

Steven Pinker:
In 1974, Marvin Minsky wrote that "there is room in the anatomy and genetics of the brain for much more mechanism than anyone today is prepared to propose." Today, many advocates of evolutionary and domain-specific psychology are in fact willing to propose the richness of mechanism that Minsky called for thirty years ago. For example, I believe that the mind is organized into cognitive systems specialized for reasoning about objects, space, numbers, living things, and other minds; that we are equipped with emotions triggered by other people (sympathy, guilt, anger, gratitude) and by the physical world (fear, disgust, awe); that we have different ways of thinking and feeling about people in different kinds of relationships to us (parents, siblings, other kin, friends, spouses, lovers, allies, rivals, enemies); and several peripheral drivers for communicating with others (language, gesture, facial expression).
I, for one, am glad that Pinker realizes he can't prove these beliefs, because, well, they're wrong (see here), but that's what you get for thinking about cognition from an evolutionary perspective.

Alison Gopnik:
I believe, but cannot prove, that babies and young children are actually more conscious, more vividly aware of their external world and internal life, than adults are. I believe this because there is strong evidence for a functional trade-off with development. Young children are much better than adults at learning new things and flexibly changing what they think about the world. On the other hand, they are much worse at using their knowledge to act in a swift, efficient and automatic way. They can learn three languages at once but they can't tie their shoelaces.
This is an interesting idea, and one that I'm not completely averse to. It certainly fits with the way automaticity develops (see here), but it does conflict with some research, and many people's intuitions, about the role of language in conscious thought.

Susan Blackmore:
It is possible to live happily and morally without believing in free will. As Samuel Johnson said "All theory is against the freedom of the will; all experience is for it." With recent developments in neuroscience and theories of consciousness, theory is even more against it than it was in his time, more than 200 years ago. So I long ago set about systematically changing the experience. I now have no feeling of acting with free will, although the feeling took many years to ebb away.
All I have to say is, she wrote a book about memes. Eww... enough said.

Elizabeth Spelke:
I believe, first, that all people have the same fundamental concepts, values, concerns, and commitments, despite our diverse languages, religions, social practices, and expressed beliefs.

Second, one of our shared core systems centers on a notion that is false: the notion that members of different human groups differ profoundly in their concepts and values. This notion leads us to interpret the superficial differences between people as signs of deeper differences.

Third, the most striking feature of human cognition stems not from our core knowledge systems but from our capacity to rise above them. Humans are capable of discovering that our core conceptions are false, and of replacing them with truer ones.

Together, my three beliefs suggest a fourth. If the cognitive sciences are given sufficient time, the truth of the claim of a common human nature eventually will be supported by evidence as strong and convincing as the evidence that the earth is round. As humans are bathed in this evidence, we will overcome our misconceptions of human differences. Ethnic and religious rivalries and conflicts will come to seem as pointless as debates over the turtles that our pancake earth sits upon, and our common need for a stable, sustainable environment for all people will be recognized.
Spelke's belief is based on some research showing that humans do tend to classify things similarly, or at least represent them fairly similarly, across a wide variety of domains (particularly natural kind concepts). However, there is also a lot of research demonstrating significant culturally- and linguistically-derived differences in the way we represent and reason about the world. Even setting these differences aside, I'm not sure that cognitive science research on the similarity of ethical and conceptual intuitions will lead to the social revolution that Spelke implies. It would be nice if it did, though.

Scott Atran (yes, that Scott Atran):
There is no God that has existence apart from people's thoughts of God. There is certainly no Being that can simply suspend the (nomological) laws of the universe in order to satisfy our personal or collective yearnings and whims—like a stage director called on to change and improve a play. But there is a mental (cognitive and emotional) process common to science and religion of suspending belief in what you see and take for obvious fact. Humans have a mental compulsion—perhaps a by-product of the evolution of a hyper-sensitive reasoning device to serve our passions—to situate and understand the present state of mundane affairs within an indefinitely extendable and overarching system of relations between hitherto unconnected elements. In any event, what drives humanity forward in history is this quest for non-apparent truth.
No need to click the link. That's Atran's answer in its entirety. Sort of sheds some light on his research, doesn't it? I hope that he doesn't use his research, and theory, to support his atheism. Nothing in the cognitive literature supports that inference. I agree with him, of course, but not because I've gained a deeper understanding of the cognitive and emotional mechanisms involved in religious belief and practice.

David Buss (aka Rudyard Kipling):
I've spent two decades of my professional life studying human mating. In that time, I've documented phenomena ranging from what men and women desire in a mate to the most diabolical forms of sexual treachery. I've discovered the astonishingly creative ways in which men and women deceive and manipulate each other. I've studied mate poachers, obsessed stalkers, sexual predators, and spouse murderers. But throughout this exploration of the dark dimensions of human mating, I've remained unwavering in my belief in true love.
Considering the source, an evolutionary psychologist, and one of the most perniciously Kiplingesque of that breed (I'll consider writing a post on some of his research, but I don't know if anyone wants to see me being that mean), I'm inclined to cease believing in true love right now. Of course, my belief in true love is more of a self-conscious fantasy that just makes the world seem like a better place and doesn't do any harm, so I've never had a reason to get rid of it. This might be one.

Philip Zimbardo:
I believe that the prison guards at the Abu Ghraib Prison in Iraq, who worked the night shift in Tier 1A, where prisoners were physically and psychologically abused, had surrendered their free will and personal responsibility during these episodes of mayhem.

But I could not prove it in a court of law. These eight army reservists were trapped in a unique situation in which the behavioral context came to dominate individual dispositions, values, and morality to such an extent that they were transformed into mindless actors alienated from their normal sense of personal accountability for their actions—at that time and place.
Who would have expected a comment on Abu Ghraib from the man behind the Stanford Prison Experiment? He's probably right, but still, couldn't he have come up with something that everyone (including Zimbardo himself) and his brother hadn't written about last year?

Paul Bloom:
John Macnamara once proposed that children come to learn about right and wrong, good and evil, in much the same way that they learn about geometry and mathematics. Moral development is not merely cultural learning, and it does not arise from innate principles that have evolved through natural selection. It is not like the development of language or sexual preference or taste in food.

Instead, moral development involves the construction of an intricate formal system that makes contact with the external world in a significant way. This cannot be entirely right. We know that gut feelings, such as reactions of empathy or disgust, have a major influence on how children and adults reason about morality. And no serious theory of moral development can ignore the role of natural selection in shaping our moral intuitions. But what I like about Macnamara's proposal is that it allows for moral realism. It allows for the existence of moral truths that people come to discover, just as we come to discover truths of mathematics. We can reject the nihilistic position (held by many researchers) that our moral intuitions are nothing more than accidents of biology or culture.

And so I believe (though I cannot prove it) that the development of moral reasoning is the same sort of process as the development of mathematical reasoning.
This is Bloom's entire answer as well, and I like it. He's sort of the anti-Pinker, wouldn't you say? I've always liked Bloom's work, though, so it's not surprising that I would agree with him. While certain precursors (e.g., reciprocity or reciprocal altruism) do appear to be innate, most research seems to indicate that specific moral content is learned, even though the emotional and cognitive foundations of morality are at least partially innate, much as in the research on mathematical reasoning.

Ned Block:
I believe that the "Hard Problem of Consciousness" will be solved by conceptual advances made in connection with cognitive neuroscience. Let me explain. No one has a clue (at the moment) how to answer the question of why the neural basis of the phenomenal feel of my experience of red is the neural basis of that phenomenal feel rather than a different one or none at all. There is an "explanatory gap" here which no one has a clue how to close.

This problem is conceptually and explanatorily prior to the issue of what the nature of the self is, as can be seen in part by noting that the problem would persist even for experiences that are not organized into selves. No doubt closing the explanatory gap will require ideas that we cannot now anticipate. The mind-body problem is so singular that no appeal to the closing of past explanatory gaps really justifies optimism, but I am optimistic nonetheless.
Alva Noë might take offense at the suggestion that "no one has a clue" how to answer the hard problem (see his papers here and here), and I'd venture to say that some people do have a clue, even if it's only that. People like Antonio Damasio are doing very interesting research on this problem.

Daniel Dennett (official philosopher of Harley Davidson... follow the link and you'll see what I mean):
I believe, but cannot yet prove, that acquiring a human language (an oral or sign language) is a necessary precondition for consciousness–in the strong sense of there being a subject, an I, a 'something it is like something to be.' It would follow that non-human animals and pre-linguistic children, although they can be sensitive, alert, responsive to pain and suffering, and cognitively competent in many remarkable ways–including ways that exceed normal adult human competence–are not really conscious (in this strong sense): there is no organized subject (yet) to be the enjoyer or sufferer, no owner of the experiences as contrasted with a mere cerebral locus of effects.
I'd like to see Dennett and Gopnik debate this issue. I'm never really sure what Dennett means by consciousness, as (a) he tends to deny the existence or importance of many of the properties that most of us would attribute to conscious experience, and (b) he seems to gloss over the distinctions, now almost universal among neuroscientists and common among philosophers (e.g., Block), between different levels or types of consciousness. One might argue that for autobiographical consciousness, or whatever you want to call it, you need language, but do we really need language for all the levels of consciousness? I don't think so. Dennett's tendency to gloss over important distinctions makes it difficult to evaluate his positions. One also wonders whether he's read any research (particularly on attention and consciousness, and on cognitive development) from the last 10 years.

Daniel Gilbert:
In the not too distant future, we will be able to construct artificial systems that give every appearance of consciousness—systems that act like us in every way. These systems will talk, walk, wink, lie, and appear distressed by close elections. They will swear up and down that they are conscious and they will demand their civil rights. But we will have no way to know whether their behavior is more than a clever trick—more than the pecking of a pigeon that has been trained to type "I am, I am!"
I wonder if these systems will have language? Will they be as conscious as babies? Anyway, we still have trouble developing robots that can competently navigate their way around computer science departments, and some researchers (e.g., Rod Brooks) are essentially "starting over" where AI is concerned. Brooks now has some very sophisticated but nowhere-near-conscious robots, so conscious-seeming systems might be a stretch for the foreseeable future. But hey, I'm all for optimism.

Joseph LeDoux:
For me, this is an easy question. I believe that animals have feelings and other states of consciousness, but neither I, nor anyone else, has been able to prove it. We can't even prove that other people are conscious, much less other animals. In the case of other people, though, we at least can have a little confidence since all people have brains with the same basic configurations. But as soon as we turn to other species and start asking questions about feelings, and consciousness in general, we are in risky territory because the hardware is different.
Anyone who knows LeDoux's work saw something like this coming. I think it's an interesting position, and it implies (as much of LeDoux's work does) that someone has a clue about the hard problem (e.g., the role of feelings in consciousness, LeDoux's primary area of research).

Marc Hauser:
What makes humans uniquely smart?

Here's my best guess: we alone evolved a simple computational trick with far reaching implications for every aspect of our life, from language and mathematics to art, music and morality. The trick: the capacity to take as input any set of discrete entities and recombine them into an infinite variety of meaningful expressions.

Thus, we take meaningless phonemes and combine them into words, words into phrases, and phrases into Shakespeare. We take meaningless strokes of paint and combine them into shapes, shapes into flowers, and flowers into Matisse's water lilies. And we take meaningless actions and combine them into action sequences, sequences into events, and events into homicide and heroic rescues.

I'll go one step further: I bet that when we discover life on other planets, that although the materials may be different for running the computation, that they will create open ended systems of expression by means of the same trick, thereby giving birth to the process of universal computation.
Thank you, Noam Chomsky... I mean Marc Hauser (remember this paper?). This is interesting, but probably one of the grossest oversimplifications of all time. And I wonder if this capacity really is unique to humans. It certainly seems to be unique to human language (though bees and parrots might have something to say about that, pun intended), but is it unique to humans in all other areas of cognition? I wish I could think of some research that might answer that question, but since the point is so damn broad, it's hard to know where to look.
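To make Hauser's "trick" a little more concrete, here's a minimal sketch (my own toy illustration, not anything from Hauser's work): a tiny finite lexicon plus a single recursive rule already yields an unbounded set of distinct expressions.

```python
# Toy illustration of "discrete infinity": a finite set of symbols plus one
# recursive combination rule generates an unbounded set of expressions.
# (Hypothetical example; the lexicon and the rule are mine, not Hauser's.)

NOUNS = ["cats", "dogs"]  # a tiny, fixed set of discrete pieces

def noun_phrases(depth):
    """Return noun phrases with up to `depth` levels of embedding."""
    if depth == 0:
        return list(NOUNS)
    smaller = noun_phrases(depth - 1)
    return list(NOUNS) + [f"{n} that chase {np}" for n in NOUNS for np in smaller]

if __name__ == "__main__":
    for phrase in noun_phrases(2):
        print(phrase)
    # Each extra level of embedding multiplies the number of distinct phrases,
    # so two words and one rule already form an open-ended expressive system.
```

None of this says anything about whether any non-human system actually does the same thing, of course; it just shows how little machinery the combinatorial part of the story requires.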

2 comments:

Anonymous said...

I was struck by Paul Bloom's entry. I see he has written a book, "Descartes' Baby". Can anyone point me to something that John MacNamara has written? Lately I have become more interested in the psychology behind our choice of morals and I'm browsing for more points of view. 

Posted by wmr

Anonymous said...

I know MacNamara's old language learning stuff. If you'd like some citations, I can give you some. I don't, however, recall anything he said about moral development, and I'm not sure where Bloom got the quote. Maybe (maybe) from Names for Things.

Posted by Chris