Analogical reasoning involves three primary steps: retrieval, mapping, and inference production (sometimes called transfer). There are other stages in some models (e.g., learning, evaluation, and representation-building -- see this paper for a discussion of some research in these areas), but I'm going to focus on these three parts. Analogies involve the comparison of two domains that are usually considered dissimilar. In most cases, we know more about one domain (called the base) than the other (called the target), and the goal of the analogy is to say something about the target from our knowledge of the base. Generally, we start with a target, and have to retrieve a base representation. Thus, the first step in forming an analogy is the retrieval of an analogue. While analogical reasoning is ubiquitous in human cognition, it turns out that we are not very good at retrieving suitable analogues. The classic experiment demonstrating this was conducted by Gick and Holyoak1, using Duncker's radiation problem:
Suppose you are a doctor faced with a patient who has an inoperable stomach tumor. You have at your disposal rays that can destroy human tissue when directed with sufficient intensity. How can you use these rays to destroy the tumor without destroying the surrounding healthy tissue?

Before participants were given this problem, they read various stories. For some participants, one of those stories involved a general who had to decide how to attack a fortress. The fortress was surrounded by mines which would be set off if crossed by a large force. The general solved this problem by approaching the fortress with several small forces from multiple directions that would converge on the fortress all at once. Using this problem as an analogue to the problem they were supposed to solve, participants would know that they could use smaller doses of radiation from multiple directions, which would arrive at the tumor at one time, thus allowing for the same large dose of radiation without having to destroy tissue by sending it through the body in one place. Participants who weren't given the fortress story prior to reading the radiation problem solved it about 10% of the time. Participants who were given the fortress story solved the radiation problem about 30% of the time, a significant increase, but one which meant that 70% of the time, participants were not able to retrieve the relevant analogue.
Subsequent research has shown that the reason people have such a difficult time retrieving suitable analogues is that we tend to retrieve items from memory based on surface similarity (common objects and attributes) rather than relational similarity (common relational structures)2. Since analogies are all about relational similarity, this is a problem. Fortunately, people tend to reject inferences that come from bases that are surface similar, but not relationally similar, to targets. The tendency to retrieve surface-similar analogues also helps explain why experts are better at analogical reasoning in their domains of expertise than novices3. This last fact hints at one reason for accepting domain-general theories of analogical reasoning: while expert reasoning may appear to use domain-specific processes, what may actually be happening is that experts simply have more complex relational knowledge within that domain, and more complex representations yield different types of results in analogical reasoning.
After an analogue has been retrieved, people must then form an analogy between it and the target. In almost every theory of analogy in cognitive psychology, the mapping process involves aligning two representations so that their common relations and objects are in correspondence. In some models (most notably, LISA4), alignment occurs after an abstract representation of the base has been produced and projected onto the target. In others (including Structure Mapping theory5), projections from the base are determined by the mapping. In this second type of model, mappings occur by aligning the representations of the domains using certain constraints. Structure Mapping theory uses three constraints, called systematicity, parallel connectivity, and one-to-one mapping. Systematicity requires that higher-order mappings (e.g., mappings between relations are of a higher order than mappings between objects) be preferred. Parallel connectivity requires that when two relations are placed in correspondence, their arguments (e.g., the objects in the relations) are also placed in correspondence. One-to-one mapping allows an entity in a domain to be mapped onto at most one entity in the other domain. Using these constraints, the mapping produces a representation that consists only of the common relational structure and any entities (objects and their attributes, or relations) attached to that structure in the base that are not present in the target.
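To make the mapping constraints a bit more concrete, here is a minimal toy sketch (not any published model) of aligning two relational representations under the parallel connectivity and one-to-one constraints. The relation names and domains are invented for illustration; real models like SME do considerably more.

```python
# Toy sketch of analogical mapping under two Structure Mapping constraints:
# parallel connectivity (matched relations align their arguments) and
# one-to-one mapping (each base entity maps to at most one target entity).

def map_domains(base, target):
    """Greedily align same-named relations, keeping object
    correspondences consistent and one-to-one."""
    correspondences = {}          # base object -> target object
    matched = []                  # (base relation, target relation) pairs
    for b_rel in base:
        for t_rel in target:
            if b_rel[0] != t_rel[0] or len(b_rel) != len(t_rel):
                continue          # only same-name, same-arity relations align
            # Parallel connectivity: aligning the relations aligns their args.
            proposed = dict(zip(b_rel[1:], t_rel[1:]))
            # One-to-one: reject if any argument is already mapped differently.
            if any(correspondences.get(b, t) != t for b, t in proposed.items()):
                continue
            if any(t in correspondences.values() and correspondences.get(b) != t
                   for b, t in proposed.items()):
                continue
            correspondences.update(proposed)
            matched.append((b_rel, t_rel))
            break
    return correspondences, matched

# Solar system -> atom, the classic structure-mapping example.
base = [("revolves-around", "planet", "sun"),
        ("more-massive", "sun", "planet")]
target = [("revolves-around", "electron", "nucleus"),
          ("more-massive", "nucleus", "electron")]

mapping, matches = map_domains(base, target)
print(mapping)  # {'planet': 'electron', 'sun': 'nucleus'}
```

Note how the object correspondences fall out of the relational alignment rather than out of any similarity between planets and electrons themselves.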
After the two domains are mapped onto each other, we can then transfer information from the base domain to the target domain. In Structure Mapping theory, this involves producing candidate inferences: potential inferences about the target that come from entities attached to the common relational structure but present only in the base. Those entities are carried over to the target and placed in the corresponding position in the relational structure of its representation. If these inferences are good fits, they are kept; if not, they are discarded.
The view of analogy I've just described is quite obviously domain general. It involves types of processes (retrieval, mapping, and transfer) that can be used to perform analogies in any domain. There are, however, some domain-specific models of analogy. Most of these come from the embodied cognition literature. Others (e.g., Phineas6, see this paper for a short description, complete with neat diagrams) are designed to handle specific domains, and usually involve domain-specific mechanisms for evaluating transfer (Phineas is designed to handle physics problems, and uses physics-specific rules).
As with theories of analogical reasoning, most theories of category-based induction are domain-general. There are four main types of inferences involving category knowledge. The first, usually called taxonomic inference, is the simplest. For taxonomic categories (e.g., most biological categories), the "is-a" relationship that relates them to each other has certain immutable features. One of them is that any feature of a superordinate category is also a feature of its subordinate categories. Thus, if birds (the superordinate category) have wings, then robins (a subordinate of birds) have wings. The second type, sometimes called "specific" arguments, involves making inductive inferences from members of a category to other members of that category (e.g., zebras have organ X, therefore horses have organ X). This can also be thought of as inferences from one (or more) category to another category, where both categories are members of a third higher-level category. The third type, called "general" arguments, involves inferences from members of a category to the category itself (e.g., zebras and horses have organ X, therefore all mammals have organ X). The fourth type, called "mixed" arguments, involves inferences from members of separate categories (e.g., zebras and robins) to a higher-level category of which one of the premise categories is a member (e.g., mammals). There has been a substantial amount of research on these last three types of inference (the first type is pretty straightforward, though there has been a lot of research on taxonomic categories, which I may talk about in a future post). I'll try to talk a little bit about each, but most of the discussion will be about inferences between members of a category (or inferences between categories).
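The taxonomic case really is straightforward: a feature attached to a superordinate category is inherited by everything below it in the "is-a" chain. A minimal sketch, with an invented toy taxonomy:

```python
# Minimal sketch of taxonomic inference: features of a superordinate
# category are inherited by subordinates via the "is-a" chain.
# The little taxonomy and feature sets below are invented for illustration.

is_a = {"robin": "bird", "bird": "animal", "zebra": "mammal", "mammal": "animal"}
features = {"bird": {"has wings"}, "animal": {"breathes"}}

def inferred_features(category):
    """Collect a category's own features plus everything inherited upward."""
    result = set(features.get(category, set()))
    while category in is_a:
        category = is_a[category]
        result |= features.get(category, set())
    return result

print(inferred_features("robin"))  # has wings (from bird) and breathes (from animal)
```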
Currently, there are two primary models of category-based inductions of the last three types, the similarity-coverage model7 and the feature-based induction model8. To see how they work, consider argument (1), which is a specific argument:
(1) Elephants love onions.
Mustangs love onions.
Therefore, zebras love onions. (from Smith, 1993, p. 231)

According to the similarity-coverage model, the perceived strength of this argument depends on two things:
- "The degree to which the premise categories are similar to the conclusion category."
- "The degree to which the categories are similar to members of the lowest level category that includes both the premise and conclusion categories." (Osherson et al., 1990, p. 185)
The feature-based induction model also relies on the similarity between the premises and the conclusion, but without the similarity to members of the lowest-level common category. In addition, the similarity is quantified in terms of the amount of feature overlap. Thus, as Smith (1993) puts it:
The main idea is that an argument whose conclusion claims a relation between category C (e.g., Zebras) and predicate P (e.g., love onions) is judged strong to the extent that the features of C have already been associated with P in the premises. (p. 232)

In other words, the amount of overlap between the features of the premises and the features of the conclusion determines how strong the argument is.
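The feature-overlap idea can be sketched directly. The feature sets below are invented toys; the point is only that argument strength tracks how much of the conclusion category's features the premises have already touched.

```python
# Sketch of the feature-based induction model's core idea: strength is
# the overlap between the conclusion category's features and the pooled
# features of the premise categories. Feature sets are invented.

features = {"zebra": {"hooves", "mane", "herbivore", "striped"},
            "mustang": {"hooves", "mane", "herbivore", "fast"},
            "elephant": {"herbivore", "trunk", "large"}}

def feature_strength(premises, conclusion):
    """Proportion of the conclusion's features covered by the premises."""
    pooled = set().union(*(features[p] for p in premises))
    overlap = features[conclusion] & pooled
    return len(overlap) / len(features[conclusion])

print(feature_strength(["elephant", "mustang"], "zebra"))  # 0.75
```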
The models deal with general arguments in ways that are similar to their treatment of specific arguments. One interesting difference is the diversity effect predicted by the similarity-coverage model. In general arguments, there is an inverse relationship between the similarity of the premise categories to each other and the perceived strength of the argument. The more diverse the premises, the stronger the argument. This is because more diverse premises will produce a higher degree of similarity between the premise categories and the conclusion category, along with similarity to a greater number of members of the lowest-level common category. Thus, argument (2) is considered to be weaker than argument (3).
(2) Horses have organ X.
Zebras have organ X.
Therefore mammals have organ X.

(3) Horses have organ X.
Bats have organ X.
Therefore mammals have organ X.

Horses and zebras are more similar to each other than horses and bats; therefore, the argument that uses the second pair as premises is seen as the stronger one. The diversity effect also applies to specific arguments with more than one premise. Thus, argument (1) above is better than argument (4):

(4) Donkeys love onions.
Mustangs love onions.
Therefore, zebras love onions.

This effect is also predicted by the feature-based model, but for slightly different reasons. Because the strength of an argument is dependent entirely on the feature overlap between the categories in the premises and the conclusion, diverse premise categories will produce stronger arguments. This results from the fact that diverse premise categories will, when combined, produce a larger set of features, and thus (in most cases) more overlap with the conclusion category's features.
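The feature-overlap version of the diversity effect can be demonstrated with a small invented example: diverse premises pool a larger feature set, so they cover more of the conclusion category. All feature sets here are toys chosen for illustration.

```python
# Sketch of the diversity effect under a feature-overlap account:
# diverse premises (elephant + mustang) pool more features than
# similar premises (donkey + mustang). Feature sets are invented.

features = {"zebra": {"hooves", "herbivore", "wild", "big ears"},
            "mustang": {"hooves", "herbivore"},
            "donkey": {"hooves", "herbivore"},
            "elephant": {"herbivore", "wild", "big ears"}}

def feature_strength(premises, conclusion):
    """Proportion of the conclusion's features covered by the premises."""
    pooled = set().union(*(features[p] for p in premises))
    return len(features[conclusion] & pooled) / len(features[conclusion])

diverse = feature_strength(["elephant", "mustang"], "zebra")  # like argument (1)
similar = feature_strength(["donkey", "mustang"], "zebra")    # like argument (4)
print(diverse, similar)  # 1.0 0.5
```

Donkeys and mustangs share nearly all their features, so pooling them adds little; elephants contribute features the mustang lacks, so the diverse pair covers more of the zebra.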
Mixed arguments are the least common, and for good reason. While the premise categories tend to be highly dissimilar, which would ordinarily make an argument stronger, their combined similarity to the conclusion category tends to be lower as well. Consider argument (5):

(5) Bobcats have organ X.
Penguins have organ X.
Therefore, mammals have organ X.

Bobcats have a relatively high degree of similarity with mammals, but both mammals and bobcats have a relatively low degree of similarity with penguins. Thus, the penguin premise adds little to the argument in both the similarity-coverage model and the feature-based induction model.
There are some induction effects that are not predicted by either of these models. For instance, some inferences seem to be related to causal associations, rather than overall similarity9. In these inferences, the similarity (or feature overlap) that is considered when judging the strength of the argument is specific to the feature being inferred. For instance, in one experiment, Heit and Rubinstein (1994) gave participants arguments about behavioral or anatomical properties. When arguments were about behavioral properties, behavioral similarity was more predictive of argument strength than overall similarity, while when arguments were about anatomical properties, overall similarity was more predictive. For example, argument (6) is considered stronger than argument (7), while argument (9) is stronger than argument (8).
(6) Grouper travel in a zig-zag path. Therefore whales travel in a zig-zag path.
(7) Bears travel in a zig-zag path. Therefore whales travel in a zig-zag path.10
(8) Grouper have organ X. Therefore whales have organ X.
(9) Bears have organ X. Therefore whales have organ X.

Finally, there are some cases in which expertise influences category-based inductions, which are not easily handled by the two primary models. Doug Medin and his colleagues have been conducting some very interesting research comparing three populations: college students (novices), tree experts of various sorts (taxonomists, landscapers, and park maintenance workers), and Itzaj Mayans from Guatemala. While college students tend to exhibit the diversity effect predicted by both the similarity-coverage and feature-based induction models, tree experts, when reasoning about trees, and Itzaj Mayans, when reasoning about native flora and fauna, do not. This seems to be because experts are using domain-specific knowledge to make inductive inferences. This implies that, while domain-general theories of category-based induction may work in many cases, domain-specific accounts may be necessary for others (e.g., areas of expertise). These theories would take into account the role of domain-specific knowledge and representations.
So, that's analogical reasoning and category-based induction. In the next post on reasoning, I'll talk about mental models, which are designed to deal with the types of reasoning I've discussed so far, as well as all sorts of other types.
One more thing, an administrative note. I've stopped linking to the PDFs of all the papers that I cite, because it takes forever to do so. However, if anyone wants the PDFs, just email me, and I'll either link you to them or send them as attachments (your choice).
1 Gick, M. & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15(1), 1-38.
2 Gentner, D., Rattermann, M. J., & Forbus, K. (1993). The roles of similarity in transfer: Separating retrievability from inferential soundness. Cognitive Psychology, 25, 524-575.
3 Novick, L. R. (1988). Analogical transfer, problem similarity, and expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 510-520.
4 Hummel, J. E., & Holyoak, K. J. (1996). LISA: A computational model of analogical inference and schema induction. In G. W. Cottrell (Ed.), Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society (pp. 352-357). Hillsdale, NJ: Erlbaum.
5 Gentner, D., (1983). Structure-mapping: a theoretical framework for analogy. Cognitive Science, 7, 15-70.
6 Falkenhainer, B. (1987). An examination of the third stage in the analogy process: Verification-based analogical learning. Proceedings of IJCAI-87, 260-263.
7 Osherson, D. N., Smith, E. E., Wilkie, O., Lopez, A., & Shafir, E. (1990). Category-based induction. Psychological Review, 97, 185-200.
8 Sloman, S. A. (1993). Feature-based induction. Cognitive Psychology, 25, 231-280.
9 Heit, E. & Rubinstein, J. (1994). Similarity and property effects in inductive reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 411-422.
10 Example adapted (i.e., stolen) from Rehder, B. (Under Review). When similarity and causality compete in category-based property induction.