{"id":5235,"date":"2011-07-14T20:24:39","date_gmt":"2011-07-14T20:24:39","guid":{"rendered":"http:\/\/crashtext.org\/misc\/critical-thinking.htm\/"},"modified":"2013-11-29T23:58:02","modified_gmt":"2013-11-30T04:58:02","slug":"critical-thinking","status":"publish","type":"post","link":"https:\/\/crashingpatient.com\/philosophy\/critical-thinking.htm\/","title":{"rendered":"Critical Thinking, Logical Fallacies, and Meta-Cognition"},"content":{"rendered":"

Meta-Cognition

Novices vs. experts (Ann Emerg Med 2013;61:96)

Critical Thinking

Induction and Deduction

Massimo Pigliucci is certainly correct in saying that "it is important for anyone interested in critical thinking and science to understand the difference between deduction and induction" ("Elementary, Dear Watson," May/June 2003). However, it has been several decades since logicians defined that difference in terms of going from the general to particulars or vice versa. His own example of deduction belies the problem. It doesn't go from the general to the particular but from one general and one particular statement to another particular statement: All men are mortal. Socrates is a man. Therefore, Socrates is mortal. General statements aren't needed at all in the premises of some deductive arguments. For example, "Socrates is a stonemason. Socrates is a philosopher. Therefore, at least one stonemason is a philosopher." This is a valid deductive argument. "Rumsfeld is arrogant. Rumsfeld is Republican. Therefore, all Republicans are arrogant" is also a deductive argument, though an invalid one, going from particulars to the general.

Induction, says Pigliucci, "seeks to go from particular facts to general statements." That is true sometimes, but not all the time. "Jones was late yesterday, so he'll probably be late today" is an inductive argument. I admit it is not a cogent argument, but cogency is a different matter.

The general-to-particular relationship isn't rich enough to serve as a good line of demarcation between induction and deduction. Any standard logic text today will make the distinction in terms of arguments that claim their conclusions follow with necessity from their premises (deductive arguments) and those which claim their conclusions follow with some degree of probability from their premises (inductive arguments). This distinction in terms of premises either implying their conclusions with necessity or supporting their conclusions to some degree of probability is not without its problems, however. One virtue of the general/particular distinction is that there is not likely to be any ambiguity about a statement being one or the other. But there will be many cases where it won't be clear whether an arguer is claiming a conclusion follows with necessity. There will also probably be many cases where the arguer should be claiming a conclusion follows with some degree of probability but the language might well indicate that the arguer thinks it follows with necessity. For example, many people might argue that since the sun has always risen in the east, it is necessarily the case that the sun will always rise in the east. Yet it isn't necessarily the case at all. It just happens to be the case, and it is easy to imagine any number of things happening to the earth that could change its relationship with the sun.

By dividing arguments into those whose conclusions follow with necessity and those which don't, we end up dividing arguments into those whose conclusions are entailed by their premises and those whose conclusions go beyond the data provided by the premises. A valid deductive argument can't have true premises and a false conclusion, but a cogent inductive argument might. This may sound peculiar, but it's not. Even the best inductive argument cannot claim that the truth of its premises guarantees the truth of its conclusion. Even the worst valid deductive argument (that is, one with premises that are actually false) can still claim that if its premises were true, its conclusion would have to be true. No valid deductive argument can guarantee the truth of its premises unless its premises are tautologies. (In logic, a tautology is a statement that cannot possibly be false: e.g., "A rose is a rose," "Either it will rain or it will not rain," or "If Browne is psychic and stupid, then Browne is stupid.")
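To make the tautology point concrete, here is a minimal sketch (Python, purely illustrative and not from the original essay) that brute-forces every assignment of truth values to the two atomic claims in the last example and confirms the statement can never come out false:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# "If Browne is psychic and stupid, then Browne is stupid."
rows = []
for psychic, stupid in product([True, False], repeat=2):
    statement = implies(psychic and stupid, stupid)
    rows.append((psychic, stupid, statement))

print("psychic stupid  statement")
for psychic, stupid, statement in rows:
    print(f"{psychic!s:7} {stupid!s:7} {statement}")

# A tautology is true on every row, so this prints True.
print("Tautology:", all(statement for _, _, statement in rows))
```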

So, how does knowing the difference between induction and deduction have any bearing on critical thinking? If you understand deduction, then you should be able to understand why scientific experiments are set up the way they are. For example, if someone claims to be able to feel another person's "energy field" by moving her hands above the patient's body, as those who practice therapeutic touch claim, then she should be able to demonstrate that she can detect another person's energy field when that field is beneath one of her hands, even if her vision is blocked so that she can't see which hand is over the alleged energy field. If one can detect energy fields by feel alone, then one must be able to detect energy fields without the assistance of any visual or aural feedback from the patient. Likewise, if one claims to be able to detect metal or oil by dowsing, then one ought to be able to detect metal or oil hidden from sight under controlled conditions. If one claims to be able to facilitate communication from someone who is retarded and physically unable to talk or point, then one should be able to describe correctly objects placed in the visual field of the patient even if those objects cannot be seen by the facilitator.

On the other hand, the nature of induction should, at the very least, make us humble by reminding us that no matter how great the evidence is for a belief, that belief could still be false.

See also Austin Cline, "Deductive and Inductive Arguments: How Do They Differ?"


The Concept of Validity

Deductive arguments are those whose premises are said to entail their conclusions (see lesson 1). If the premises of a deductive argument do entail their conclusion, the argument is valid. (The term "valid" is not used by most logicians when referring to inductive arguments, but that is a topic for another mini-lesson.) If not, the argument is invalid.

Here's an example of a valid argument:

Shermer and Randi are skeptics. Shermer and Randi are writers. So, some skeptics are writers.

To say the argument is valid is to say that it is logically impossible for its premises to be true and its conclusion false. So, if the premises of my example are true, then the conclusion must be true also. The premises of this argument happen to be true, so this argument is not only valid but sound or cogent. A sound or cogent deductive argument is defined as one that is valid and has true premises.

A valid argument may have false premises, however. For example,

All Protestants are bigots. All bigots are Italian. So, all Protestants are Italian.

Being valid is not the same as being sound. Validity is determined by the relationship of premises to conclusion in a deductive argument. This relationship, in a valid argument, is referred to as implication or inference. The premises of a valid argument are said to imply their conclusion. The conclusion of a valid argument may be inferred from its premises.

While many errors in deduction are due to making unjustified inferences from premises, the vast majority of unsound deductive arguments are probably due to premises that are questionable or false. For example, many researchers on psi have found statistical anomalies and have inferred from these data that they have found evidence for psi. The error, however, is one of assumption, not inference. The researchers assume that psi is the best explanation for the statistical anomaly. If one makes this assumption, then one's inference from the data is justified. However, the assumption is questionable, and the arguments based on it are unsound. Similar unsound reasoning occurs in the arguments that intercessory prayer heals and that psychics get messages from the dead. Researchers assume that a statistically significant correlation between praying and healing is best explained by treating prayer as a causative agent, but this assumption is questionable. Researchers also assume that results that are statistically improbable if explained by chance, guessing, or cold reading are best explained by positing communication from the dead, but this assumption is questionable. These researchers reason well enough. That is, they draw correct inferences from their data. But the reasons on which they base their reasoning are faulty because they are questionable.

I am not suggesting by the above comments that the data and methods of these researchers are beyond criticism. In fact, I find it interesting that skeptics seem to divide into two camps when criticizing such things as Gary Schwartz's so-called afterlife experiments. One camp attacks the assumptions. The other camp attacks the data or the methods used to gather the data. The former camp finds errors of assumption and fallacies such as begging the question, the argument to ignorance, or the false dilemma. The other finds cheating, sensory leakage, poor use of statistics, inadequate controls, and that sort of thing.

Finally, some deductive arguments are unsound because they are invalid, not because their premises are false or questionable. Here is an unsound deductive argument whose premises may well be true:

If my astrologer is clairvoyant, then she predicted my travel plans correctly. She predicted my travel plans correctly. So, my astrologer is clairvoyant.

This conclusion is not entailed by these premises, so the argument is invalid. It is possible that both these premises are true but the conclusion is false. (She may have predicted my travel plans because she got information from my travel agent, for example.) This argument is said to commit the fallacy of affirming the consequent. Another example of this fallacy would be:

If God created the universe, we should observe order and design in Nature. We do observe order and design in Nature. So, God created the universe.

The premises of this argument may be true, but they do not entail their conclusion. This conclusion could be false even if the premises are true. (We should also observe order and design in Nature if something like Darwin's theory of natural selection is true.)
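Affirming the consequent can be exposed mechanically as well. This short sketch (Python, illustrative only; the helper names are mine, not the article's) searches for a truth-value assignment that makes both premises of the pattern "if P then Q; Q; therefore P" true while the conclusion is false; finding one is exactly what it means for the form to be invalid:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Premises: "P -> Q" and "Q". Conclusion: "P".
# Any assignment with true premises and a false conclusion is a counterexample.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)  # [(False, True)] -- e.g. the astrologer is not
                        # clairvoyant, yet the prediction was correct.
```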


The Wason Card Problem

One of the nicer features of the James Randi Educational Foundation's Amazing Meeting earlier this year was the time set aside for mini-talks by those responding to a call for papers. One of those talks was given by Dr. Jeff Corey, who teaches experimental psychology at C. W. Post College. His talk was on "The Wason Card Problem" and its role in teaching critical thinking skills. Four cards are presented: A, B, 4, and 7. There is a letter on one side of each card and a number on the other side. Which card(s) must you turn over to determine whether the following statement is false? "If a card has a vowel on one side, then it has an even number on the other side."

A     B     4     7

(I suggest you spend a few minutes trying to solve the problem before continuing.)

(I hope you have been able to restrain yourself from jumping ahead and have worked out your solution to the problem. Before continuing, try to solve the following alternative version: Let the cards show "beer," "cola," "16 years," and "22 years." On one side of each card is the name of a drink; on the other side is the age of the drinker. Which card(s) must be turned over to determine if the following statement is false? "If a person is drinking beer, then the person is over 19 years old.")

***

I gave the Wason Card Problem to 100 students last semester and only seven got it right, which was about what was expected. There are various explanations for these results. One of the more common explanations is in terms of confirmation bias. This explanation is based on the fact that the majority of people think you must turn over cards A and 4, the vowel card and the even-number card. It is thought that those who would turn over these cards are thinking "I must turn over A to see if there is an even number on the other side, and I must turn over the 4 to see if there is a vowel on the other side." Such thinking supposedly indicates that one is trying to confirm the statement "If a card has a vowel on one side, then it has an even number on the other side." Presumably, one is thinking that if the statement cannot be confirmed, it must be false. This explanation then leads to the question: why do most people try to confirm a statement when the task is to determine if it is false? One explanation is that people tend to try to fit individual cases into patterns or rules. The problem with this explanation is that in this case we are instructed to find cases that don't fit the rule. Is there some sort of inherent resistance to such an activity? Are we so driven to fit individual cases to a rule that we can't even follow a simple instruction to find cases that don't fit the rule? Or are we so driven that we tend to think that the best way to determine whether an instance does not fit a rule is to try to confirm it, and only if it can't be confirmed do we consider that the rule might be wrong?

Corey noted that when the problem is changed from abstract items, such as numbers and letters, and put in concrete terms, such as drinks and the age of the drinker, the success rate significantly increases (see the example described above). One would think that confirmation bias would lead most people to say they must turn over the beer card and the 22 card, but they don't. Most people see that the cola and 22 cards are irrelevant to solving the problem. If I remember correctly, Corey explained the difference in performance between the abstract and concrete versions of the problem in terms of evolutionary psychology: humans are hardwired to solve practical, concrete problems, not abstract ones. To support his point, he says he simplified the abstract test to include only two cards (showing 1 and 2) with equally poor results.

I had discussed confirmation bias, but not conditional statements, with my classes before giving them the Wason problem. The majority seemed to understand confirmation bias; so, if the reason so many do so poorly on this problem is confirmation bias, then just knowing about confirmation bias is not much help in overcoming it as a hindrance to critical thinking. This is consistent with what I teach. Recognition of a hindrance is a necessary but not a sufficient condition for overcoming that hindrance. However, next semester I'm going to give my students the Wason test after I discuss determining the truth-value of conditional statements. The reason for doing so is that anyone who has studied the logic of conditional statements should know that a conditional statement is false if and only if the antecedent is true and the consequent is false. (The antecedent is the "if" statement; the consequent is the "then" statement.) So, the statement "If a card has a vowel on one side, then it has an even number on the other side" can only be false if the statement "a card has a vowel on one side" is true and the statement "it has an even number on the other side" is false. I must look at the card with the vowel showing to find out what is on the other side because it could be an odd number and thus would show me that the statement is false. I must also look at the card with the odd number to find out what is on the other side because it could be a vowel and thus would show me that the statement is false. I don't need to look at the card with the consonant because the statement I am testing has nothing to do with consonants. Nor do I need to look at the card with the even number showing because whether the other side has a vowel or a consonant will not help me determine whether the statement is false.
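That falsification rule can be written out mechanically. The sketch below (Python, a minimal illustration; the helper names are mine, not part of the original discussion) flips a card exactly when its visible face could be half of a falsifying case, that is, when it makes the antecedent true or the consequent false:

```python
def must_turn(face, makes_antecedent_true, makes_consequent_false):
    """A card must be turned over only if its hidden side could falsify the rule."""
    return makes_antecedent_true(face) or makes_consequent_false(face)

# Abstract version: "If a card has a vowel on one side, then it has an
# even number on the other side."
is_vowel = lambda f: isinstance(f, str) and f.upper() in "AEIOU"
is_odd_number = lambda f: isinstance(f, int) and f % 2 == 1
abstract = ["A", "B", 4, 7]
print([f for f in abstract if must_turn(f, is_vowel, is_odd_number)])
# -> ['A', 7]

# Concrete version: "If a person is drinking beer, then the person is
# over 19 years old."
is_beer = lambda f: f == "beer"
is_underage = lambda f: isinstance(f, int) and f <= 19
drinkers = ["beer", "cola", 16, 22]
print([f for f in drinkers if must_turn(f, is_beer, is_underage)])
# -> ['beer', 16]
```

Run on both versions, it selects A and 7 in the abstract problem and beer and 16 in the concrete one, matching the analysis above.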

There is a possibility that the reason many think that the even-numbered card must be turned over is that they mistakenly think that the statement they are testing implies that if a card has an even number on one side then it cannot have a consonant on the other. In other words, it is possible that the high error rate is due to misunderstanding logical implication rather than confirmation bias. In the concrete version of the problem, perhaps it is much easier to see that the statement "If a person is drinking beer, then the person is over 19 years old" does not imply that if a person is over 19 then they cannot be drinking cola. If this is the case, then an explanation in terms of the difference between contextual implication and logical implication might be better than one in terms of confirmation bias. Perhaps it is the context of drinking and the age of the drinker that indicates to many people that a person can be over 19 and not drink beer without falsifying the statement being tested, i.e., that simply because if you're drinking beer you are over 19, it doesn't follow that if you're over 19 you can't be drinking cola. That is, in the concrete case people may not have any better understanding of logical implication than they do in the abstract case, and neither case may have anything to do with confirmation bias.

On the other hand, some might reason that if I turn over the even card and find a vowel, then I have confirmed the statement, which is in effect the same as showing that the statement is not false but true. This would be classic confirmation bias. Finding an instance that confirms the rule does not prove the rule is true, but finding one instance that disproves the rule shows that the rule is false.


The Wason Card Problem Revisited

I received several responses to my analysis of the Wason problem. Mathematician and author Jan Willem Nienhuys wrote from the Netherlands:

I don't think that the card problem as presented is compatible with the beer over 21 problem. What would happen if you said "vowels and odds are forbidden to go together on one card" and asked someone to check whether there are cards that are forbidden? That's the beer over 21 problem. Another problem with the example is that the beer problem has a known social setting. If you made some kind of funny restriction, like "over 22 must drink coke," it's much harder; or you can make a restaurant setting, with a completely strange restriction like "girls (or people with a polysyllabic name) must order broccoli," and then it's much more difficult, for the problem solvers must then keep an odd fact in mind while analyzing several cases. The fewer unfamiliar facts one has to keep ready in the mind at the same time, the easier it is. (And it is quite possible that not everybody knows what an even number is or what a vowel is, or that people with slightly deficient knowledge know at most one of these concepts; you'd be surprised how deficient people's knowledge is.)

I replied to Jan that, unless I'm mistaken, both problems imply that two things are forbidden together on one card (a vowel and an odd number; beer and an age of 19 or under). I think I will try the problem on my classes with Jan's suggested instruction and see if the results vary significantly. (I'll send him the results and he, the mathematician, can tell me whether the difference, if any, is significant!) The social setting would be part of what I'm calling the context, and it might be why the beer problem is easier for most people to solve. It had not occurred to me that part of the problem might be in understanding the meaning of words like "vowel" and "even," but that is a consideration that (unfortunately) should not be taken lightly, and maybe I should try the test with some set-up questions to make sure those taking it understand such terms.

Jan replied:

I will be very interested in what you find. You might try variations like: if there are two primes on one side, the other side must show their product. This means that if a card shows a single number that is the product of two primes, you don't have to turn it around. If it shows two numbers that aren't primes, you also don't have to turn it around. Obviously the difficulty is that lots of people don't know what primes are, and even if they do in theory, some know their multiplication tables so poorly that they are at a loss what to do when the card shows 42 or 49 or 87 or 36 or 39. Or 10.

Yikes! Jan, I teach a general course in logic and critical thinking, not math! My students would lynch me if I posed such a problem to them.
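For what it's worth, the arithmetic behind Jan's variation is easy to check by machine even though it is hard to do in one's head. This sketch (Python, illustrative only, and reading "product of two primes" as exactly two prime factors counted with multiplicity, which is my assumption rather than Jan's wording) factors the numbers he lists:

```python
def prime_factors(n):
    """Prime factors of n with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# 49 = 7 x 7 and 39 = 3 x 13 qualify; 42 = 2 x 3 x 7 and 36 = 2 x 2 x 3 x 3 do not.
for n in [42, 49, 87, 36, 39, 10]:
    f = prime_factors(n)
    print(n, f, "product of two primes" if len(f) == 2 else "not")
```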

I do think that one of the problems with solving this problem (and many others!) has to do with how one reads or misreads the instructions. (For those who don't recall the exact instructions, here they are again: Four cards are presented: A, B, 4, and 7. There is a letter on one side of each card and a number on the other side. Which card(s) must you turn over to determine whether the following statement is false? "If a card has a vowel on one side, then it has an even number on the other side.")

One reader wrote:

My solution to the problem is to check all cards (or a random sample if there are a large number of them). Sometimes it's best to see what rules apply. (Sometimes "if" means if and only if...)

This approach represents a common mistake in problem-solving: self-imposed rules. The instructions do not imply that there are more than four cards, nor does "if" mean "if and only if." (See James Adams' Conceptual Blockbusting for a good discussion on common hindrances to problem-solving.)

The reader continues:

A simpler explanation for people choosing A and 4: given that people tend to satifice, it makes sense that many will just check the cards where they see a vowel or an even number. It's a quick solution made with the immediate data on hand, requiring no additional thought (about the implications of the statement or anything else). Classic satisficing behavior.

Whether this solution is satisficing or satificing, it's wrong.

Another reader, Jack Philley, wrote:

Thanks for a great newsletter. I am a safety engineer and incident investigator. I also teach a segment on critical thinking in my incident investigation course, and I have been using the Wason card challenge. I picked it up from Tom Gilovich's book How We Know What Isn't So. About 80% of my students get it wrong, and some of them become very angry and embarrassed and defend their logic to an unreasonable degree. I use it to illustrate our natural talent to try to prove a hypothesis and our weakness in thinking about how to disprove a suspected hypothesis. This comes in handy when trying to identify the actual accident scenario from a set of speculated possible cause scenarios.

For those who haven't read Gilovich (or have but don't remember what he said about the Wason problem), he thinks that people turn over the even-numbered card ("2" in the version Gilovich discusses) even though it is uninformative and can only confirm the hypothesis, because they are looking for evidence that would be consistent with the hypothesis rather than evidence that would be inconsistent with the hypothesis. He also finds this behavior most informative because it "makes it abundantly clear that the tendency to seek out information consistent with a hypothesis need not stem from any desire for the hypothesis to be true" (p. 33). Who really cares what is true regarding vowels and numbers? Thus, the notion that we seek confirmatory evidence because we are trying to find support for things we want to be true is not supported by the typical results of the Wason test. People seek confirmatory evidence, according to Gilovich, because they think it is relevant.

As to the notion I put forth that it is because of the context that people do better when the problem is put in terms of drinking beer or soda and age, Gilovich notes that only in contexts that invoke the notion of permission do we find improved performance (p. 34, note). This just shows, he thinks, that there are some situations where "people are not preoccupied with confirmations."


Logical Fallacies

Logical fallacies are errors that occur in arguments. In logic, an argument is the giving of reasons (called premises) to support some claim (called the conclusion). There are many ways to classify logical fallacies. I prefer listing the conditions for a good or cogent argument and then classifying logical fallacies according to the failure to meet these conditions.

Every argument makes some assumptions. A cogent argument makes only warranted assumptions, i.e., its assumptions are not questionable or false. So, fallacies of assumption make up one type of logical fallacy. One of the most common fallacies of assumption is called begging the question. Here the arguer assumes what he should be proving. Most arguments for psi commit this fallacy. For example, many believers in psi point to the ganzfeld experiments as proof of paranormal activity. They note that a .25 success rate is predicted by chance but Honorton had some success rates of .34. One defender of psi claims that the odds of getting 34% correct in these experiments were a million billion to one. That may be true, but one is begging the question to ascribe the amazing success rate to paranormal powers. It could be evidence of psychic activity, but there might be some other explanation as well. The amazing statistic doesn't prove what caused it. The fact that the experiment is trying to find proof of psi isn't relevant. If someone else did the same experiment but claimed to be trying to find proof that angels, dark matter, or aliens were communicating directly to some minds, that would not be relevant to what was actually the cause of the amazing statistic. The experimenters are simply assuming that any amazing statistic they get is due to something paranormal.
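The "million billion to one" figure is the kind of number a simple chance model produces. The sketch below (Python; the trial count is purely hypothetical, since the text above gives only the rates, not the number of sessions) shows how such an against-chance probability would be computed, and also why it settles nothing: a tiny tail probability only says the data would be surprising under pure chance, not that psi produced them.

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """Exact probability of k or more successes in n trials with success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: the chance rate (.25) and hit rate (.34) come from the
# text above, but the number of trials is assumed purely for illustration.
n_trials = 1000
hits = int(0.34 * n_trials)
print(p_at_least(hits, n_trials, 0.25))  # a very small number -- yet it says
                                         # nothing about what produced the hits
```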

Another common, and fatal, fallacy of assumption is the false dilemma, whereby one restricts consideration of reasonable alternatives.

Not all fallacies of assumption are fatal. Some cogent arguments might make one or two questionable or false assumptions but still have enough good evidence to support their conclusions. Some, like the gambler's fallacy, are fatal, however.

Another quality of a cogent argument is that the premises are relevant to supporting their conclusions. Providing irrelevant reasons for your conclusion need not be fatal, either, provided you have sufficient relevant evidence to support your conclusion. However, if all the reasons you give in support of your conclusion are irrelevant, then your reasoning is said to be a non sequitur. The divine fallacy is a type of non sequitur.

One of the more common fallacies of relevance is the ad hominem, an attack on the one making the argument rather than an attack on the argument. One of the most frequent types of ad hominem attack is to attack the person's motives rather than his evidence. For example, when an opponent refuses to agree with some point that is essential to your argument, you call him an "antitheist" or "obtuse."

Other examples of irrelevant reasoning are the sunk-cost fallacy and the argument to ignorance.

A third quality of a cogent argument is sometimes called the completeness requirement: a cogent argument should not omit relevant evidence. Selective thinking is the basis for most beliefs in the psychic powers of so-called mind readers and mediums. It is also the basis for many, if not most, occult and pseudoscientific beliefs. Selective thinking is essential to the arguments of defenders of untested and unproven remedies. Suppressing or omitting relevant evidence is obviously not fatal to the persuasiveness of an argument, but it is fatal to its cogency. The regressive fallacy is an example of a fallacy of omission. The false dilemma is also a fallacy of omission.

A fourth quality of a cogent argument is fairness. A cogent argument doesn't distort evidence, nor does it exaggerate or undervalue the strength of specific data. The straw man fallacy violates the principle of fairness.

A fifth quality of cogent reasoning is clarity. Some fallacies are due to ambiguity, such as the fallacy of equivocation: shifting the meaning of a key expression in an argument. For example, the following argument uses "accident" first in the sense of "not created" and then in the sense of "chance event":

Since you don't believe you were created by God, then you must believe you are just an accident. Therefore, all your thoughts and actions are accidents, including your disbelief in God.

Finally, a cogent argument provides a sufficient quantity of evidence to support its conclusion. Failure to provide sufficient evidence is to commit the fallacy of hasty conclusion. One type of hasty conclusion that occurs quite frequently in the production of superstitious beliefs and beliefs in the paranormal is the post hoc fallacy.

Some fallacies may be classified in more than one way, e.g., the pragmatic fallacy, which at times seems to be due to vagueness and at times due to insufficient evidence.

The critical thinker must supplement the study of logical fallacies with lessons from the social sciences on such topics as