Friday, December 17, 2004

Reasoning: Analogical Reasoning and Category-based Induction

Having gone through the domain-specific/domain-general debate, it's time to move on to categorical and analogical reasoning. In both cases, most theories are domain-general, and though those will be the focus, I will mention a couple of domain-specific theories along the way.

Analogical Reasoning

Analogical reasoning involves three primary steps: retrieval, mapping, and inference production (sometimes called transfer). There are other stages in some models (e.g., learning, evaluation, and representation-building -- see this paper for a discussion of some research in these areas), but I'm going to focus on these three. Analogies involve the comparison of two domains that are usually considered dissimilar. In most cases, we know more about one domain (called the base) than the other (called the target), and the goal of the analogy is to say something about the target from our knowledge of the base. Generally, we start with a target and have to retrieve a base representation. Thus, the first step in forming an analogy is the retrieval of an analogue. While analogical reasoning is ubiquitous in human cognition, it turns out that we are not very good at retrieving suitable analogues. The classic experiment demonstrating this was conducted by Gick and Holyoak1, using Duncker's radiation problem:
Suppose you are a doctor faced with a patient who has an inoperable stomach tumor. You have at your disposal rays that can destroy human tissue when directed with sufficient intensity. How can you use these rays to destroy the tumor without destroying the surrounding healthy tissue?
Before participants were given this problem, they read various stories. For some participants, one of those stories involved a general who had to decide how to attack a fortress. The fortress was surrounded by mines which would be set off if crossed by a large force. The general solved this problem by approaching the fortress with several small forces from multiple directions, which would converge on the fortress all at once. Using this story as an analogue to the problem they were supposed to solve, participants could see that they could use smaller doses of radiation from multiple directions, all arriving at the tumor at the same time, thus delivering the same large total dose to the tumor without destroying healthy tissue along any single path. Participants who weren't given the fortress story prior to reading the radiation problem solved it about 10% of the time. Participants who were given the fortress story solved the radiation problem about 30% of the time, which was a significant increase, but which meant that 70% of the time, participants were not able to retrieve the relevant analogue.

Subsequent research has shown that the reason people have such a difficult time retrieving suitable analogues is that we tend to retrieve items from memory based on surface similarity (common objects and attributes) rather than relational similarity (common relational structures)2. Since analogies are all about relational similarity, this is a problem. Fortunately, people tend to reject inferences that come from bases that are surface-similar to targets but not relationally similar. The tendency to retrieve surface-similar analogues also explains why experts are better at analogical reasoning in their domains of expertise than novices3. This last fact actually hints at the reasons for accepting domain-general theories of analogical reasoning. While expert reasoning may appear to use domain-specific processes, what may actually be happening is that experts simply have more complex relational knowledge within their domain, and more complex representations yield different types of results in analogical reasoning.

After an analogue has been retrieved, people must then form an analogy between it and the target. In almost every theory of analogy in cognitive psychology, the mapping process involves aligning two representations so that their common relations and objects are in correspondence. In some models (most notably, LISA4), alignment occurs after an abstract representation of the base has been produced and projected onto the target. In others (including Structure Mapping theory5), projections from the base are determined by the mapping. In this second type of model, mappings occur by aligning the representations of the domains using certain constraints. Structure Mapping theory uses three constraints, called systematicity, parallel connectivity, and one-to-one mapping. Systematicity requires that higher-order mappings (e.g., mappings between relations are of a higher order than mappings between objects) be preferred. Parallel connectivity requires that when two relations are placed in correspondence, their arguments (e.g., the objects in the relations) are also placed in correspondence. One-to-one mapping allows an entity in one domain to be mapped onto at most one entity in the other domain. Using these constraints, the mapping produces a representation that consists only of the common relational structure and any entities (objects and their attributes, or relations) attached to that structure in the base that are not present in the target.
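To make the last two constraints concrete, here is a toy sketch (my own illustration, not an implementation of SME or any published model). Propositions are nested tuples, and the solar-system/atom representations and mapping are invented for the example:

```python
# Toy illustration of two structure-mapping constraints. Propositions are
# tuples of the form (relation, arg1, arg2, ...); the entity names and the
# mapping below are invented for the example.

def one_to_one(mapping):
    """An entity may map onto at most one entity in the other domain."""
    return len(set(mapping.values())) == len(mapping)

def parallel_connectivity(base_prop, target_prop, mapping):
    """If two relations correspond, their arguments must correspond too."""
    base_rel, *base_args = base_prop
    target_rel, *target_args = target_prop
    if base_rel != target_rel or len(base_args) != len(target_args):
        return False
    return all(mapping.get(b) == t for b, t in zip(base_args, target_args))

# Base: solar system; target: atom.
mapping = {"sun": "nucleus", "planet": "electron"}
base_prop = ("ATTRACTS", "sun", "planet")
target_prop = ("ATTRACTS", "nucleus", "electron")

print(one_to_one(mapping))                                     # True
print(parallel_connectivity(base_prop, target_prop, mapping))  # True
```

Requiring identical relation labels is a simplification, but it captures the spirit of the constraints: correspondences are licensed by shared relational structure, not by the objects themselves.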

After the two domains are mapped onto each other, we can then transfer information from the base domain to the target domain. In Structure Mapping theory, this involves producing candidate inferences. These are potential inferences about the target, from the base, that come from the entities that are part of the common relational structure, but present only in the base. Those entities are carried over to the target, and placed in the corresponding position in the relational structure of its representation. If these inferences are good fits, they are kept. If not, they are discarded.
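The transfer step can be sketched the same way: a fact attached to the common structure in the base but absent from the target is carried over by substituting mapped entities. Again, this is a toy illustration with invented representations, not any published model's code:

```python
# Toy sketch of candidate-inference production: substitute mapped base
# entities into a base-only fact to produce a candidate inference about
# the target. Representations are invented for the example.

def project(prop, mapping):
    """Recursively replace base entities with their target counterparts."""
    if isinstance(prop, tuple):
        relation, *args = prop
        return (relation, *(project(arg, mapping) for arg in args))
    return mapping.get(prop, prop)

mapping = {"sun": "nucleus", "planet": "electron"}
# Known in the base (solar system) but not yet asserted in the target (atom):
base_fact = ("CAUSES", ("ATTRACTS", "sun", "planet"),
             ("REVOLVES_AROUND", "planet", "sun"))

candidate = project(base_fact, mapping)
print(candidate)
# → ('CAUSES', ('ATTRACTS', 'nucleus', 'electron'), ('REVOLVES_AROUND', 'electron', 'nucleus'))
```

The candidate inference then gets evaluated against what is already known about the target, and kept or discarded accordingly.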

The view of analogy I've just described is quite obviously domain general. It involves types of processes (retrieval, mapping, and transfer) that can be used to perform analogies in any domain. There are, however, some domain-specific models of analogy. Most of these come from the embodied cognition literature. Others (e.g., Phineas6, see this paper for a short description, complete with neat diagrams) are designed to handle specific domains, and usually involve domain-specific mechanisms for evaluating transfer (Phineas is designed to handle physics problems, and uses physics-specific rules).

Category-based Induction

As with theories of analogical reasoning, most theories of category-based induction are domain-general. There are four main types of inferences involving category knowledge. The first, usually called taxonomic inference, is the simplest. For taxonomic categories (e.g., most biological categories), the "is-a" relationship that relates them to each other has certain immutable features. One of them is that any feature of a superordinate category is also a feature of its subordinate categories. Thus, if birds (the superordinate category) have wings, then robins (a subordinate of birds) have wings. The second type, sometimes called "specific" arguments, involves making inductive inferences from members of a category to other members of that category (e.g., zebras have organ X, therefore horses have organ X). This can also be thought of as inferences from one (or more) category to another category, where both categories are members of a third, higher-level category. The third type, called "general" arguments, involves inferences from members of a category to the category itself (e.g., zebras and horses have organ X, therefore all mammals have organ X). The fourth type, called "mixed" arguments, involves inferences from members of separate categories (e.g., zebras and robins) to a higher-level category of which one of the premise categories is a member (e.g., mammals). There has been a substantial amount of research on these last three types of inference (the first type is pretty straightforward, though there has been a lot of research on taxonomic categories, which I may talk about in a future post). I'll try to talk a little bit about each, but most of the discussion will be about inferences between members of a category (or inferences between categories).
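The first of these, taxonomic inference, is simple enough to sketch directly; the little "is-a" hierarchy and feature sets below are invented for the example:

```python
# Toy sketch of taxonomic inference: any feature of a superordinate
# category is inherited by its subordinates. The hierarchy and features
# are invented for the example.

is_a = {"robin": "bird", "bird": "animal"}
features = {"bird": {"wings", "feathers"}, "animal": {"breathes"}}

def inherited_features(category):
    """Collect the category's own features plus all inherited ones."""
    collected = set()
    while category is not None:
        collected |= features.get(category, set())
        category = is_a.get(category)
    return collected

print(sorted(inherited_features("robin")))  # ['breathes', 'feathers', 'wings']
```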

Currently, there are two primary models of category-based inductions of the last three types, the similarity-coverage model7 and the feature-based induction model8. To see how they work, consider argument (1), which is a specific argument:
Elephants love onions.
Mustangs love onions.
Therefore, zebras love onions. (from Sloman, 1993, p. 231)
According to the similarity-coverage model, the perceived strength of this argument depends on two things:
  1. "The degree to which the premise categories are similar to the conclusion category."
  2. "The degree to which the categories are similar to members of the lowest level category that includes both the premise and conclusion categories." (Osherson et al., 1990, p. 185)
Thus, there are two comparisons involved in determining the perceived strength of these arguments: how similar to each other the categories in the premise are, and how similar the categories are to the lowest-level category to which they all belong. In the argument above, these two comparisons would be between elephants, mustangs, and zebras, and between all of these and the category MAMMAL. Important for this latter comparison is the concept of typicality. The more similar a member of a category is to the category representation, the more typical it is. Thus, robins are (for Americans, at least) more typical birds than emus, because they are more similar to the category representation. More typical members of a category make better premises in arguments like the one above. In addition, the more similar the premise categories are to the conclusion category, the stronger the argument is perceived to be. Mustangs and zebras are quite similar, and therefore the argument from mustangs to zebras is likely to be considered strong. If elephants had been the only premise category, the argument might not have been seen as very strong, or at least as less strong than with the inclusion of the mustang premise.
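As a rough sketch of how these two comparisons might be combined (the pairwise similarity scores and the equal weighting are invented for illustration; this is not Osherson et al.'s actual parameterization):

```python
# Toy sketch of the similarity-coverage model. Pairwise similarity scores
# (0-1) are invented; "a" weights the similarity term against coverage.

SIM = {frozenset(["elephant", "zebra"]): 0.45,
       frozenset(["mustang", "zebra"]): 0.85,
       frozenset(["elephant", "mustang"]): 0.50,
       frozenset(["elephant", "bat"]): 0.15,
       frozenset(["mustang", "bat"]): 0.20,
       frozenset(["zebra", "bat"]): 0.20}

def sim(x, y):
    return 1.0 if x == y else SIM[frozenset([x, y])]

def strength(premises, conclusion, category, a=0.5):
    # 1. Similarity: how close the premise categories are to the conclusion.
    similarity = max(sim(p, conclusion) for p in premises)
    # 2. Coverage: how similar the premises are, on average, to members of
    #    the lowest-level category including premises and conclusion.
    coverage = sum(max(sim(p, m) for p in premises)
                   for m in category) / len(category)
    return a * similarity + (1 - a) * coverage

mammals = ["elephant", "zebra", "mustang", "bat"]
# Adding the mustang premise strengthens the argument about zebras:
print(strength(["elephant"], "zebra", mammals))
print(strength(["elephant", "mustang"], "zebra", mammals))
```

With these made-up numbers, the elephant-only version scores lower than the elephant-plus-mustang version, matching the intuition in the text.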

The feature-based induction model also relies on the similarity between the premises and the conclusion, but without the comparison to members of the lowest-level common category. In addition, similarity is quantified as the amount of feature overlap. Thus, as Sloman (1993) puts it:
The main idea is that an argument whose conclusion claims a relation between category C (e.g., Zebras) and predicate P (e.g., love onions) is judged strong to the extent that the features of C have already been associated with P in the premises. (p. 232)
In other words, the amount of overlap between the features of the premises and the features of the conclusion determines how strong the argument is.
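A toy sketch of that idea (the feature sets are invented for the example, and Sloman's actual model is a connectionist network, not this simple set arithmetic):

```python
# Toy sketch of feature-based induction: argument strength as the
# proportion of the conclusion category's features already associated
# with the predicate by the premises. Feature sets are invented.

features = {
    "elephant": {"four_legs", "gray", "large", "herbivore", "tail"},
    "mustang":  {"four_legs", "mane", "hooves", "herbivore", "tail"},
    "zebra":    {"four_legs", "mane", "hooves", "herbivore",
                 "stripes", "tail"},
}

def strength(premises, conclusion):
    # Pool the features of all premise categories, then measure overlap
    # with the conclusion category's features.
    covered = set().union(*(features[p] for p in premises))
    return len(covered & features[conclusion]) / len(features[conclusion])

# The mustang premise raises the overlap with zebra's features:
print(strength(["elephant"], "zebra"))             # 0.5
print(strength(["elephant", "mustang"], "zebra"))  # ~0.83
```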

The models deal with general arguments in ways that are similar to their treatment of specific arguments. One interesting difference is the diversity effect predicted by the similarity-coverage model. In general arguments, there is an inverse relationship between the similarity of the premise categories and the perceived strength of the argument: the more diverse the premises, the stronger the argument. This is because more diverse premises will produce a higher degree of similarity between the premise categories and the conclusion category, and also similarity with a greater number of members of the lowest-level common category. Thus, argument (2) is considered to be weaker than argument (3).
(2) Horses have organ X. Zebras have organ X. Therefore mammals have organ X.

(3) Horses have organ X. Bats have organ X. Therefore mammals have organ X.
Horses and zebras are more similar to each other than horses and bats; therefore, the argument that uses the second pair as premises is seen as the stronger one. The diversity effect also applies to specific arguments with more than one premise. Thus, argument (1) above is better than argument (4):
Donkeys love onions.
Mustangs love onions.
Therefore, zebras love onions.
This effect is also predicted by the feature-based model, but for slightly different reasons. Because the strength of an argument is dependent entirely on the feature overlap between the categories in the premises and the conclusion, diverse premise categories will produce stronger arguments. This results from the fact that diverse premise categories will, when combined, produce a larger set of features, and thus (in most cases) more overlap with the conclusion category's features.

Mixed arguments are the least common, and for good reason. While the premise categories tend to be highly dissimilar, which would ordinarily make arguments stronger, their combined similarity to the conclusion category tends to be lower as well. Consider argument (5):
Bobcats have organ X.
Penguins have organ X.
Therefore, mammals have organ X.
Bobcats have a relatively high degree of similarity with mammals, but both mammals and bobcats have a relatively low degree of similarity with penguins. Thus, the penguin premise adds little to the argument in both the similarity-coverage model and the feature-based induction model.

There are some induction effects that are not predicted by either of these models. For instance, some inferences seem to be related to causal associations, rather than overall similarity9. In these inferences, the similarity (or feature overlap) that is considered when judging the strength of the argument is specific to the feature being inferred. For instance, in one experiment, Heit and Rubinstein (1994) gave participants arguments about behavioral or anatomical properties. When arguments were about behavioral properties, behavioral similarity was more predictive of argument strength than overall similarity, while when arguments were about anatomical properties, overall similarity was more predictive. For example, argument (6) is considered stronger than argument (7), while argument (9) is stronger than argument (8).
(6) Grouper travel in a zig-zag path. Therefore whales travel in a zig-zag path.
(7) Bears travel in a zig-zag path. Therefore whales travel in a zig-zag path.10
(8) Grouper have organ X. Therefore whales have organ X.
(9) Bears have organ X. Therefore whales have organ X.
Finally, there are some cases in which expertise influences category-based inductions, which are not easily handled by the two primary models. Doug Medin and his colleagues have been conducting some very interesting research comparing three populations: college students (novices), tree experts of various sorts (taxonomists, landscapers, and park maintenance workers), and Itzaj Mayans from Guatemala. While college students tend to exhibit the diversity effect predicted by both the similarity-coverage and feature-based induction models, tree experts, when reasoning about trees, and Itzaj Mayans, when reasoning about native flora and fauna, do not. This seems to be because experts are using domain-specific knowledge to make inductive inferences. This implies that, while domain-general theories of category-based induction may work in many cases, domain-specific accounts may be necessary for others (e.g., areas of expertise). These theories would take into account the role of domain-specific knowledge and representations.

So, that's analogical reasoning and category-based induction. In the next post on reasoning, I'll talk about mental models, which are designed to deal with the types of reasoning I've discussed so far, as well as all sorts of other types.

One more thing, an administrative note. I've stopped linking to the PDFs of all the papers that I cite, because it takes forever to do so. However, if anyone wants the PDFs, just email me, and I'll either link you to them or send them as attachments (your choice).


1 Gick, M. & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15(1), 1-38.
2 Gentner, D., Rattermann, M. J., & Forbus, K. D. (1993). The roles of similarity in transfer: Separating retrievability from inferential soundness. Cognitive Psychology, 25, 524-575.
3 Novick, L. R. (1988). Analogical transfer, problem similarity, and expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 510-520.
4 Hummel, J. E., & Holyoak, K. J. (1996). LISA: A computational model of analogical inference and schema induction. In G. W. Cottrell (Ed.), Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society (pp. 352-357). Hillsdale, NJ: Erlbaum.
5 Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7, 155-170.
6 Falkenhainer, B. (1987). An examination of the third stage in the analogy process: Verification-based analogical learning. Proceedings of IJCAI-87, 260-263.
7 Osherson, D. N., Smith, E. E., Wilkie, O., Lopez, A., & Shafir, E. (1990). Category-based induction. Psychological Review, 97, 185-200.
8 Sloman, S. A. (1993). Feature-based induction. Cognitive Psychology, 25, 231-280.
9 Heit, E., & Rubinstein, J. (1994). Similarity and property effects in inductive reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 411-422.
10 Example adapted (i.e., stolen) from Rehder, B. (Under review). When similarity and causality compete in category-based property induction.
