I've now returned to this post several times in the past day in an effort to understand how people can conceive of something that is not fact as being true. Besides making me self-conscious at the thought of someone reading one of my blog posts more than once, the comment made me realize that I hadn't been very clear. It's not the case (very often) that people hold a belief they recognize as counterfactual and still believe it to be true. People do have blatantly counterfactual beliefs, but in most cases it's unlikely that they recognize them as counterfactual. More often, people just aren't worried about the facticity of their beliefs. Unless their beliefs are threatened, there's little reason for them to be. So people may believe things that are not "fact," not because they consciously rebel against their counterfactuality, but because they simply don't know that what they believe is counter to fact.
When I think of the emphasis on "truthiness" over "factiness," I think of the classic memory studies in which active representations determine what information we process and remember from a situation. My favorite of these is Anderson and Pichert's study in which they showed
that the perspectives people took when reading a story determined which information they remembered from the story1. It's pretty easy to see how this is relevant to politics, since people often bring very different perspectives to political discussions. But in addition to the influence of active representations, there are other factors at play. And serendipitously, Cognitive Daily has a recent post (via Clark) describing recent research exploring one of these factors. You can read a detailed description of the research in that post, or check out the paper. I'll just give a quick summary.
In three experiments, Preston and Epley looked at the effects of two types of information, explanations for beliefs (observations that explain the beliefs) and applications of beliefs (observations that the beliefs explain), on the importance of three types of beliefs: novel beliefs, the beliefs of others, and "cherished beliefs" (e.g., religious beliefs). In all three experiments, they found that when participants were asked to list applications of the beliefs, they rated the beliefs to be more important than when they listed explanations for the beliefs. Preston and Epley argue that this is because the importance of a belief is directly related to how many observations it can explain. One of the interesting implications of this position is that new facts or beliefs that can explain the same observations diminish the importance of currently held beliefs. They write:
We also believe these experiments can help account for people's resistance to explanations for their cherished beliefs. Those of religious faith, for example, seem threatened when scientific explanations, such as evolution, are offered for observations otherwise explained by religious concepts, or when psychological concepts are used to explain religious belief itself. Even if these explanations do not impinge on the core tenets of a religious ideology, they may nevertheless seem to devalue religious beliefs, and lead to an intense resistance to such explanations. Indeed, the history of science and religion is replete with examples of such resistance. In some cases, it may be so intense that believers wish to avoid the search for underlying explanations altogether.

When you combine this with the fact that the very observations we want to explain are heavily influenced by our beliefs, it's easy to see how "factiness" can suffer, even when we're not avoiding the search for explanations.
1 Anderson, R.C., & Pichert, J.W. (1978). Recall of previously unrecallable information following a shift in perspective. Journal of Verbal Learning and Verbal Behavior, 17, 1-12.