{"id":5235,"date":"2011-07-14T20:24:39","date_gmt":"2011-07-14T20:24:39","guid":{"rendered":"http:\/\/crashtext.org\/misc\/critical-thinking.htm\/"},"modified":"2013-11-29T23:58:02","modified_gmt":"2013-11-30T04:58:02","slug":"critical-thinking","status":"publish","type":"post","link":"https:\/\/crashingpatient.com\/philosophy\/critical-thinking.htm\/","title":{"rendered":"Critical Thinking, Logical Fallacies, and Meta-Cognition"},"content":{"rendered":"
Novices vs. experts (Ann Emerg Med 2013;61:96)
Induction and Deduction

Massimo Pigliucci is certainly correct in saying that "it is important for anyone interested in critical thinking and science to understand the difference between deduction and induction" ("Elementary, Dear Watson," May/June 2003). However, logicians stopped defining that difference in terms of going from the general to particulars, or vice versa, several decades ago. His own example of deduction belies the problem. It doesn't go from the general to the particular but from one general and one particular statement to another particular statement: All men are mortal. Socrates is a man. Therefore, Socrates is mortal. General statements aren't needed at all in the premises of some deductive arguments. For example, "Socrates is a stonemason. Socrates is a philosopher. Therefore, at least one stonemason is a philosopher." This is a valid deductive argument. "Rumsfeld is arrogant. Rumsfeld is Republican. Therefore, all Republicans are arrogant" is also a deductive argument, though an invalid one, going from particulars to the general.

Induction, says Pigliucci, "seeks to go from particular facts to general statements." That is true sometimes, but not all the time. "Jones was late yesterday, so he'll probably be late today" is an inductive argument. I admit it is not a cogent argument, but cogency is a different matter.

The general-to-particular relationship isn't rich enough to serve as a good line of demarcation between induction and deduction. Any standard logic text today will make the distinction in terms of arguments that claim their conclusions follow with necessity from their premises (deductive arguments) and those that claim their conclusions follow with some degree of probability from their premises (inductive arguments). This distinction is not without its problems, however. One virtue of the general/particular distinction is that there is not likely to be any ambiguity about a statement being one or the other. But there will be many cases where it won't be clear whether an arguer is claiming that a conclusion follows with necessity. There will also be many cases where the arguer should be claiming that a conclusion follows with some degree of probability, but the language indicates that the arguer thinks it follows with necessity. For example, many people might argue that since the sun has always risen in the east, it is necessarily the case that the sun will always rise in the east. Yet it isn't necessarily the case at all. It just happens to be the case, and it is easy to imagine any number of things happening to the earth that could change its relationship with the sun.

By dividing arguments into those whose conclusions follow with necessity and those whose conclusions don't, we end up dividing arguments into those whose conclusions are entailed by their premises and those whose conclusions go beyond the data provided by the premises. A valid deductive argument can't have true premises and a false conclusion, but a cogent inductive argument might. This may sound peculiar, but it's not. Even the best inductive argument cannot claim that the truth of its premises guarantees the truth of its conclusion.
Even the worst valid deductive argument (that is, one with premises that are actually false) can still claim that if its premises were true, its conclusion would have to be true. No valid deductive argument can guarantee the truth of its premises unless its premises are tautologies. (In logic, a tautology is a statement that cannot possibly be false: e.g., "A rose is a rose," or "Either it will rain or it will not rain," or "If Browne is psychic and stupid, then Browne is stupid.")

So, how does knowing the difference between induction and deduction have any bearing on critical thinking? If you understand deduction, then you should be able to understand why scientific experiments are set up the way they are. For example, if someone claims to be able to feel another person's "energy field" by moving her hands above the patient's body, as those who practice therapeutic touch claim, then she should be able to demonstrate that she can detect another person's energy field when that field is beneath one of her hands, even if her vision is blocked so that she can't see which hand is over the alleged energy field. If one can detect energy fields by feel alone, then one must be able to detect energy fields without the assistance of any visual or aural feedback from the patient. Likewise, if one claims to be able to detect metal or oil by dowsing, then one ought to be able to detect metal or oil hidden from sight under controlled conditions. If one claims to be able to facilitate communication from someone who is retarded and physically unable to talk or point, then one should be able to describe correctly objects placed in the visual field of the patient even if those objects cannot be seen by the facilitator.

On the other hand, the nature of induction should, at the very least, make us humble by reminding us that no matter how great the evidence is for a belief, that belief could still be false.

See also Austin Cline, Deductive & Inductive Arguments: How Do They Differ?

The Concept of Validity

Deductive arguments are those whose premises are said to entail their conclusions (see lesson 1). If the premises of a deductive argument do entail their conclusion, the argument is valid. (The term "valid" is not used by most logicians when referring to inductive arguments, but that is a topic for another mini-lesson.) If not, the argument is invalid.

Here's an example of a valid argument:

Shermer and Randi are skeptics. Shermer and Randi are writers. So, some skeptics are writers.

To say the argument is valid is to say that it is logically impossible for its premises to be true and its conclusion false. So, if the premises of my example are true, then the conclusion must be true also. The premises of this argument happen to be true, so this argument is not only valid but sound, or cogent. A sound or cogent deductive argument is defined as one that is valid and has true premises.

A valid argument may have false premises, however. For example:

All Protestants are bigots. All bigots are Italian. So, all Protestants are Italian.

Being valid is not the same as being sound. Validity is determined by the relationship of premises to conclusion in a deductive argument.
This relationship, in a valid argument, is referred to as implication or inference. The premises of a valid argument are said to imply their conclusion. The conclusion of a valid argument may be inferred from its premises.

While many errors in deduction are due to making unjustified inferences from premises, the vast majority of unsound deductive arguments are probably due to premises that are questionable or false. For example, many researchers on psi have found statistical anomalies and have inferred from this data that they have found evidence for psi. The error, however, is one of assumption, not inference. The researchers assume that psi is the best explanation for the statistical anomaly. If one makes this assumption, then one's inference from the data is justified. However, the assumption is questionable, and the arguments based on it are unsound. Similar unsound reasoning occurs in the arguments that intercessory prayer heals and that psychics get messages from the dead. Researchers assume that a statistically significant correlation between praying and healing is best explained by taking prayer to be a causative agent, but this assumption is questionable. Researchers also assume that results that are statistically improbable if explained by chance, guessing, or cold reading are best explained by positing communication from the dead, but this assumption is questionable as well. These researchers reason well enough. That is, they draw correct inferences from their data. But the reasons on which they base their reasoning are faulty because they are questionable.

I am not suggesting by the above comments that the data and methods of these researchers are beyond criticism. In fact, I find it interesting that skeptics seem to divide into two camps when criticizing such things as Gary Schwartz's so-called afterlife experiments. One camp attacks the assumptions. The other camp attacks the data or the methods used to gather the data. The former camp finds errors of assumption and fallacies such as begging the question, argument to ignorance, or false dilemma. The other finds cheating, sensory leakage, poor use of statistics, inadequate controls, and that sort of thing.

Finally, some deductive arguments are unsound because they are invalid, not because their premises are false or questionable. Here is an unsound deductive argument whose premises may well be true:

If my astrologer is clairvoyant, then she predicted my travel plans correctly. She predicted my travel plans correctly. So, my astrologer is clairvoyant.

This conclusion is not entailed by these premises, so the argument is invalid. It is possible that both premises are true but the conclusion is false. (She may have predicted my travel plans because she got information from my travel agent, for example.) This argument is said to commit the fallacy of affirming the consequent. Another example of this fallacy would be:

If God created the universe, we should observe order and design in Nature. We do observe order and design in Nature. So, God created the universe.

The premises of this argument may be true, but they do not entail their conclusion. This conclusion could be false even if the premises are true. (We should also observe order and design in Nature if something like Darwin's theory of natural selection is true.)
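The truth-table test for validity can be made mechanical. Below is a minimal Python sketch (my illustration, not part of the original lesson) that brute-forces every assignment of truth values to the letters: an argument form is valid just in case no assignment makes all the premises true while the conclusion is false. Modus ponens stands in for the valid forms above; affirming the consequent is the form of the astrologer and design arguments.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def valid(argument, variables):
    """An argument form is valid iff no assignment of truth values makes
    every premise true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(premise(env) for premise in argument["premises"]) and not argument["conclusion"](env):
            return False  # counterexample found: true premises, false conclusion
    return True

# Modus ponens: If P then Q; P; therefore Q.  (valid)
modus_ponens = {
    "premises": [lambda e: implies(e["P"], e["Q"]), lambda e: e["P"]],
    "conclusion": lambda e: e["Q"],
}

# Affirming the consequent: If P then Q; Q; therefore P.  (invalid)
affirming_consequent = {
    "premises": [lambda e: implies(e["P"], e["Q"]), lambda e: e["Q"]],
    "conclusion": lambda e: e["P"],
}

print(valid(modus_ponens, ["P", "Q"]))          # True
print(valid(affirming_consequent, ["P", "Q"]))  # False
```

Running it prints True for modus ponens and False for affirming the consequent, which is exactly the counterexample test described above: with P false and Q true, both premises of the fallacious form are true while its conclusion is false.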
The Wason Card Problem

One of the nicer features of the James Randi Educational Foundation's Amazing Meeting earlier this year was the time set aside for mini-talks by those responding to a call for papers. One of those talks was given by Dr. Jeff Corey, who teaches experimental psychology at C. W. Post College. His talk was on "The Wason Card Problem" and its role in teaching critical thinking skills. Four cards are presented: A, B, 4, and 7. There is a letter on one side of each card and a number on the other side. Which card(s) must you turn over to determine whether the following statement is false? "If a card has a vowel on one side, then it has an even number on the other side."

A

B

4

7

(I suggest you spend a few minutes trying to solve the problem before continuing.)

(I hope you have been able to restrain yourself from jumping ahead and have worked out your solution to the problem. Before continuing, try to solve the following alternative version: Let the cards show "beer," "cola," "16 years," and "22 years." On one side of each card is the name of a drink; on the other side is the age of the drinker. What card(s) must be turned over to determine if the following statement is false? "If a person is drinking beer, then the person is over 19 years old.")

***

I gave the Wason Card Problem to 100 students last semester and only seven got it right, which was about what was expected. There are various explanations for these results. One of the more common explanations is in terms of confirmation bias. This explanation is based on the fact that the majority of people think you must turn over cards A and 4, the vowel card and the even-number card. It is thought that those who would turn over these cards are thinking "I must turn over A to see if there is an even number on the other side, and I must turn over the 4 to see if there is a vowel on the other side." Such thinking supposedly indicates that one is trying to confirm the statement "If a card has a vowel on one side, then it has an even number on the other side." Presumably, one is thinking that if the statement cannot be confirmed, it must be false. This explanation then leads to the question: why do most people try to confirm a statement when the task is to determine if it is false? One explanation is that people tend to try to fit individual cases into patterns or rules. The problem with this explanation is that in this case we are instructed to find cases that don't fit the rule. Is there some sort of inherent resistance to such an activity? Are we so driven to fit individual cases to a rule that we can't even follow a simple instruction to find cases that don't fit the rule? Or are we so driven that we tend to think that the best way to determine whether an instance does not fit a rule is to try to confirm it, and only if it can't be confirmed do we consider that the rule might be wrong?

Corey noted that when the problem is changed from abstract items, such as numbers and letters, and put in concrete terms, such as drinks and the age of the drinker, the success rate significantly increases (see the example described above). One would think that confirmation bias would lead most people to say they must turn over the beer card and the "22 years" card, but they don't.
Most people see that the cola and "22 years" cards are irrelevant to solving the problem. If I remember correctly, Corey explained the difference in performance between the abstract and concrete versions of the problem in terms of evolutionary psychology: humans are hardwired to solve practical, concrete problems, not abstract ones. To support his point, he said he simplified the abstract test to include only two cards (showing 1 and 2), with equally poor results.

I had discussed confirmation bias, but not conditional statements, with my classes before giving them the Wason problem. The majority seemed to understand confirmation bias; so, if the reason so many do so poorly on this problem is confirmation bias, then just knowing about confirmation bias is not much help in overcoming it as a hindrance to critical thinking. This is consistent with what I teach. Recognition of a hindrance is a necessary but not a sufficient condition for overcoming that hindrance. However, next semester I'm going to give my students the Wason test after I discuss determining the truth-value of conditional statements. The reason for doing so is that anyone who has studied the logic of conditional statements should know that a conditional statement is false if and only if the antecedent is true and the consequent is false. (The antecedent is the "if" clause; the consequent is the "then" clause.) So, the statement "If a card has a vowel on one side, then it has an even number on the other side" can only be false if "a card has a vowel on one side" is true and "it has an even number on the other side" is false. I must look at the card with the vowel showing to find out what is on the other side, because it could be an odd number and thus would show me that the statement is false. I must also look at the card with the odd number to find out what is on the other side, because it could be a vowel and thus would show me that the statement is false. I don't need to look at the card with the consonant, because the statement I am testing has nothing to do with consonants. Nor do I need to look at the card with the even number showing, because whether the other side has a vowel or a consonant will not help me determine whether the statement is false.

There is a possibility that the reason many think the even-numbered card must be turned over is that they mistakenly think the statement they are testing implies that if a card has an even number on one side, then it cannot have a consonant on the other. In other words, it is possible that the high error rate is due to misunderstanding logical implication rather than to confirmation bias. In the concrete version of the problem, perhaps it is much easier to see that the statement "If a person is drinking beer, then the person is over 19 years old" does not imply that if a person is over 19 then they cannot be drinking cola. If this is the case, then an explanation in terms of the difference between contextual implication and logical implication might be better than one in terms of confirmation bias. Perhaps it is the context of drinking and the age of the drinker that indicates to many people that a person can be over 19 and not drink beer without falsifying the statement being tested; i.e., simply because anyone drinking beer must be over 19, it doesn't follow that anyone over 19 can't be drinking cola. That is, in the concrete case people may not have any better understanding of logical implication than they do in the abstract case, and neither case may have anything to do with confirmation bias.

On the other hand, some might reason that if I turn over the even card and find a vowel, then I have confirmed the statement, which is in effect the same as showing that the statement is not false, but true. This would be classic confirmation bias. Finding an instance that confirms the rule does not prove the rule is true. But finding one instance that disproves the rule shows that the rule is false.
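The same case analysis can be spelled out exhaustively. This little Python sketch (mine, purely illustrative; the candidate hidden values are arbitrary stand-ins) checks, for each card, whether anything that could be hidden on its other side would falsify the rule; only such cards are worth turning over.

```python
VOWELS = set("AEIOU")

def falsifies(visible, hidden):
    """The rule "if a card has a vowel on one side, then it has an even
    number on the other" is falsified only by a vowel paired with an odd number."""
    letter = visible if visible.isalpha() else hidden
    number = hidden if visible.isalpha() else visible
    return letter in VOWELS and int(number) % 2 == 1

# Which cards *could* reveal a counterexample, given what might be hidden?
for card in ["A", "B", "4", "7"]:
    # A letter card hides a number; a number card hides a letter (per the setup).
    possible_hidden = [str(n) for n in range(10)] if card.isalpha() else list("ABCDE")
    must_turn = any(falsifies(card, h) for h in possible_hidden)
    print(card, "-> must turn over" if must_turn else "-> irrelevant")

# A -> must turn over   (an odd number behind it would falsify the rule)
# B -> irrelevant       (the rule says nothing about consonants)
# 4 -> irrelevant       (vowel or consonant behind it, the rule survives)
# 7 -> must turn over   (a vowel behind it would falsify the rule)
```

The enumeration makes the asymmetry visible: the A and 7 cards each have a possible hidden side that breaks the rule, while nothing behind B or 4 ever can.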
The Wason Card Problem Revisited

I received several responses to my analysis of the Wason problem. Mathematician and author Jan Willem Nienhuys wrote from the Netherlands:

"I don't think that the card problem as presented is compatible with the beer over 21 problem. What would happen if you said 'vowels and odds are forbidden to go together on one card' and asked someone to check whether there are cards that are forbidden? That's the beer over 21 problem. Another problem with the example is that the beer problem has a known social setting. If you made some kind of funny restriction, like 'over 22 must drink coke,' it's much harder; or if you make a restaurant setting with a completely strange restriction, like 'girls (or people with a polysyllabic name) must order broccoli,' then it's much more difficult, for the problem solvers must then keep an odd fact in mind while analyzing several cases. The fewer unfamiliar facts one has to keep ready in the mind at the same time, the easier it is. (And it is quite possible that not everybody knows what an even number or a vowel is, or that people with slightly deficient knowledge know at most one of these concepts; you'd be surprised how deficient people's knowledge is.)"

I replied to Jan that, unless I'm mistaken, both problems imply that two cards are forbidden together (vowel and odd number; beer and 19 years or under). I think I will try the problem on my classes with Jan's suggested instruction and see if the results vary significantly. (I'll send him the results and he, the mathematician, can tell me whether the difference, if any, is significant!) The social setting would be part of what I'm calling the context, which might be why the beer problem is easier for most people to solve. It had not occurred to me that part of the problem might be in understanding the meaning of words like "vowel" and "even," but that is a consideration that should not be taken lightly (unfortunately), and maybe I should try the test with some set-up questions to make sure those taking it understand such terms.

Jan replied:

"I will be very interested in what you find. You might try variations like: if there are two primes on one side, the other side must show their product. This means that if a card shows a single number that is the product of two primes, you don't have to turn it around. If it shows two numbers that aren't primes, you also don't have to turn it around. Obviously the difficulty is that lots of people don't know what primes are, and even if they do so theoretically, some know their multiplication tables so poorly that they are at a loss what to do when the card shows 42 or 49 or 87 or 36 or 39. Or 10."

Yikes! Jan, I teach a general course in logic and critical thinking, not math!
My students would lynch me if I posed such a problem to them.

I do think that one of the problems with solving this problem (and many others!) has to do with how one reads or misreads the instructions. (For those who don't recall the exact instructions, here they are again: Four cards are presented: A, B, 4, and 7. There is a letter on one side of each card and a number on the other side. Which card(s) must you turn over to determine whether the following statement is false? "If a card has a vowel on one side, then it has an even number on the other side.")

One reader wrote:

"My solution to the problem is to check all cards (or a random sample if there are a large number of them). Sometimes it's best to see what rules apply. (Sometimes 'if' means if and only if...)"

This approach represents a common mistake in problem-solving: self-imposed rules. The instructions do not imply that there are more than four cards, nor does "if" mean "if and only if." (See James Adams' Conceptual Blockbusting for a good discussion of common hindrances to problem-solving.)

The reader continues:

"A simpler explanation for people choosing A and 4: Given that people tend to satifice, it makes sense that many will just check the cards where they see a vowel or an even number. It's a quick solution made with the immediate data on hand, requiring no additional thought (about the implications of the statement or anything else). Classic satisficing behavior."

Whether this solution is satisficing or satificing, it's wrong.

Another reader, Jack Philley, wrote:

"Thanks for a great newsletter. I am a safety engineer and incident investigator. I also teach a segment on critical thinking in my incident investigation course, and I have been using the Wason card challenge. I picked it up from Tom Gilovich's book How We Know What Isn't So. About 80% of my students get it wrong, and some of them become very angry and embarrassed and defend their logic to an unreasonable degree. I use it to illustrate our natural talent for trying to prove a hypothesis and our weakness in thinking about how to disprove a suspected hypothesis. This comes in handy when trying to identify the actual accident scenario from a set of speculated possible cause scenarios."

For those who haven't read Gilovich (or have but don't remember what he said about the Wason problem), he thinks that people turn over the even-number card, even though it is uninformative and can only confirm the hypothesis, because they are looking for evidence that would be consistent with the hypothesis rather than evidence that would be inconsistent with it. He also finds this behavior most informative because it "makes it abundantly clear that the tendency to seek out information consistent with a hypothesis need not stem from any desire for the hypothesis to be true" (p. 33). Who really cares what is true regarding vowels and numbers? Thus, the notion that we seek confirmatory evidence because we are trying to find support for things we want to be true is not supported by the typical results of the Wason test. People seek confirmatory evidence, according to Gilovich, because they think it is relevant.

As to the notion I put forth that it is because of the context that people do better when the problem is in terms of drinking beer or soda and age, Gilovich notes that only in contexts that invoke the notion of permission do we find improved performance (p. 34, note). This just shows, he thinks, that there are some situations where "people are not preoccupied with confirmations."

Logical Fallacies

Logical fallacies are errors that occur in arguments. In logic, an argument is the giving of reasons (called premises) to support some claim (called the conclusion). There are many ways to classify logical fallacies. I prefer listing the conditions for a good or cogent argument and then classifying logical fallacies according to the failure to meet these conditions.

Every argument makes some assumptions. A cogent argument makes only warranted assumptions, i.e., its assumptions are not questionable or false. So, fallacies of assumption make up one type of logical fallacy. One of the most common fallacies of assumption is called begging the question. Here the arguer assumes what he should be proving. Most arguments for psi commit this fallacy. For example, many believers in psi point to the ganzfeld experiments as proof of paranormal activity. They note that a .25 success rate is predicted by chance, but Honorton had some success rates of .34. One defender of psi claims that the odds of getting 34% correct in these experiments were a million billion to one. That may be true, but one is begging the question to ascribe the amazing success rate to paranormal powers. It could be evidence of psychic activity, but there might be some other explanation as well. The amazing statistic doesn't prove what caused it. The fact that the experiment is trying to find proof of psi isn't relevant. If someone else did the same experiment but claimed to be trying to find proof that angels, dark matter, or aliens were communicating directly to some minds, that would not be relevant to what was actually the cause of the amazing statistic. The experimenters are simply assuming that any amazing stat they get is due to something paranormal.
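For what it's worth, odds like "a million billion to one" come from a binomial tail calculation. The sketch below is illustrative only: the newsletter doesn't report Honorton's actual trial counts, so the 1,000 trials are an invented figure and the printed probability won't reproduce the quoted odds. The point stands regardless of the numbers: the probability measures how surprising the hit rate is under the chance hypothesis, and says nothing about which non-chance explanation is correct.

```python
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k hits
    in n trials if each trial succeeds with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance baseline in a ganzfeld trial: 1 in 4 (p = 0.25).
# The trial count below is a made-up stand-in, not Honorton's data.
n = 1000
hits = int(0.34 * n)
print(f"P(>= {hits} hits in {n} trials at chance): {binomial_tail(n, hits, 0.25):.2e}")
# A tiny probability shows the result is unlikely *under the chance
# hypothesis*; it does not, by itself, say what the correct explanation is.
```

That last comment is the begging-the-question point: the calculation can rule chance out as a good explanation, but it cannot rule psi in.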
Another common, and fatal, fallacy of assumption is the false dilemma, whereby one restricts consideration of reasonable alternatives.

Not all fallacies of assumption are fatal. Some cogent arguments might make one or two questionable or false assumptions but still have enough good evidence to support their conclusions. Some, like the gambler's fallacy, are fatal, however.

Another quality of a cogent argument is that the premises are relevant to supporting their conclusions. Providing irrelevant reasons for your conclusion need not be fatal either, provided you have sufficient relevant evidence to support your conclusion. However, if all the reasons you give in support of your conclusion are irrelevant, then your reasoning is said to be a non sequitur. The divine fallacy is a type of non sequitur.

One of the more common fallacies of relevance is the ad hominem: an attack on the one making the argument rather than an attack on the argument. One of the most frequent types of ad hominem attack is to attack the person's motives rather than his evidence.
For example, when an opponent refuses to agree with some point that is essential to your argument, you call him an "antitheist" or "obtuse."

Other examples of irrelevant reasoning are the sunk-cost fallacy and the argument to ignorance.

A third quality of a cogent argument is sometimes called the completeness requirement: a cogent argument should not omit relevant evidence. Selective thinking is the basis for most beliefs in the psychic powers of so-called mind readers and mediums. It is also the basis for many, if not most, occult and pseudoscientific beliefs. Selective thinking is essential to the arguments of defenders of untested and unproven remedies. Suppressing or omitting relevant evidence is obviously not fatal to the persuasiveness of an argument, but it is fatal to its cogency. The regressive fallacy is an example of a fallacy of omission. The false dilemma is also a fallacy of omission.

A fourth quality of a cogent argument is fairness. A cogent argument doesn't distort evidence, nor does it exaggerate or undervalue the strength of specific data. The straw man fallacy violates the principle of fairness.

A fifth quality of cogent reasoning is clarity. Some fallacies are due to ambiguity, such as the fallacy of equivocation: shifting the meaning of a key expression in an argument. For example, the following argument uses "accident" first in the sense of "not created" and then in the sense of "chance event":

Since you don't believe you were created by God, then you must believe you are just an accident. Therefore, all your thoughts and actions are accidents, including your disbelief in God.

Finally, a cogent argument provides a sufficient quantity of evidence to support its conclusion. Failure to provide sufficient evidence is to commit the fallacy of hasty conclusion. One type of hasty conclusion that occurs quite frequently in the production of superstitious beliefs and beliefs in the paranormal is the post hoc fallacy.

Some fallacies may be classified in more than one way, e.g., the pragmatic fallacy, which at times seems to be due to vagueness and at times due to insufficient evidence.

The critical thinker must supplement the study of logical fallacies with lessons from the social sciences on such topics as perception and memory. James Alcock reminds us that "the true critical thinker accepts what few people ever accept — that one cannot routinely trust perceptions and memories" ("The Belief Engine"). The unhappy truth is that humans are not truth-seeking missiles. In addition to understanding logical fallacies, we must also understand why we are prone to them.

There are literally hundreds of logical fallacies. For a good general introduction to fallacies, I recommend Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments by T. Edward Damer or Asking the Right Questions: A Guide to Critical Thinking by M. Neil Browne and Stuart M. Keeley. There are also several online sites that focus on fallacies.

Replication of Scientific Studies

A student who did very well in my Logic and Critical Reasoning course sent the following news item along with the suggestion that I might need to revise my thinking about lunar effects.
I replied that I might need to emphasize more strongly what I teach: look for what is not mentioned in a study, not just at what is mentioned. And don't forget how important replication of a study is.

Aug 11, 2003 (Bloomberg) — Car accidents occur 14 percent more often on average during a full moon than a new moon, according to a study of 3 million car policies by the U.K.'s Churchill Insurance Group Plc.

The data show a rise in all types of accidents, involving single vehicles or multiple cars, the company said in an e-mailed press release. The next full moon will be tomorrow night.

"We know that the moon is a strong source of energy, as it affects the tides and weather patterns, but we were surprised by this bizarre trend," Craig Staniland, head of car insurance at Churchill, said in the release.

The company, which Royal Bank of Scotland Group Plc agreed to buy from Credit Suisse Group in June, speculated that eastern philosophy's concepts of yin and yang may explain the accident rate. It cited a feng shui expert, Simon Brown, saying that the full moon radiates more of the sun's yang energy onto earth, making people more aggressive and impatient.

The insurer said it won't change its underwriting criteria to take the full moon into account.

In addition to yin and yang, there might be other explanations for this data, but before searching for explanations one should make sure there is something that needs to be explained. The study seems to claim that there are 14% more accidents on nights when there is a full moon than on nights when there is a new moon. (When the moon is full, if the weather is clear, it will generally be very bright. When the moon is new, even if the weather is clear, the moon will hardly be visible.) The results of a single study may be suggestive, but they are not usually considered conclusive. This study may have been well designed, but we are not told anything about how it was conducted or how it was designed, so we can't be sure. The Churchill Insurance Group may have a flawless study, but note that they didn't take the results seriously enough to alter their underwriting criteria. Why not? I don't know. What I would like to know is: how was the study done?

The press release mentions a study of 3 million car policies, but that's a bit vague. Did they analyze 3 million policies and separate those who made accident claims from those who didn't? Did they then find that claims involving accidents that happened at night when there was a full moon occurred 14% more frequently than claims involving accidents that happened at night when there was a new moon? Did they control for weather? That is, did they review their data to make sure that there were about the same number of stormy nights on both full-moon and new-moon nights? Otherwise, they might just be measuring an effect of bad weather, not moon phases.

How many accidents are we talking about? Without knowing the numbers, we can't determine whether this study had a sufficient number of cases to analyze. But even if it had many thousands of cases, we don't know over how long a period of time the study was conducted. If it analyzed data over a very long period of time, that would be more impressive than if it analyzed data over a very short period. Why? Over a short period of time they are more likely to get skewed results. For example, maybe the period they evaluated had two full moons in 30 days and both occurred on Saturdays. With smaller numbers it becomes more important to control for factors like the weather or weekends.
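A toy simulation shows how an uncontrolled factor like weekends could manufacture a "lunar effect" out of nothing. Everything in this model is invented (the base rates, the noise, the calendar); the moon does no causal work at all, yet the comparison still favors the full-moon nights because they happen to be Saturdays.

```python
import random

random.seed(0)

def accidents(weekend):
    # Nightly accident count: a weekend base rate plus noise.
    # The moon phase plays no causal role anywhere in this model.
    base = 130 if weekend else 100
    return base + random.randint(-5, 5)

# Hypothetical short window: both full moons fall on Saturdays,
# while the comparison new-moon nights are ordinary weeknights.
full_moon = [accidents(weekend=True) for _ in range(2)]
new_moon = [accidents(weekend=False) for _ in range(2)]

full_avg = sum(full_moon) / len(full_moon)
new_avg = sum(new_moon) / len(new_moon)
print(f"apparent 'lunar effect': {100 * (full_avg / new_avg - 1):+.0f}%")
# Prints a double-digit "effect" even though the moon does nothing here;
# matching nights by day of week (and weather) would remove it.
```

This is exactly why the questions above matter: without knowing whether the study matched full-moon and new-moon nights on weather and day of week, a 14% difference is uninterpretable.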
We need to know exactly how many accidents were involved in the study, the beginning and end dates of the data collection, the exact number of nights involved, and the exact number of full and new moons during the study. We should also be assured that only accidents that occurred after the rising and before the setting of the full moon were included. If the accidents happened during the day or before the full moon was present, the likelihood that the moon had anything to do with them diminishes significantly.

Finally, even if the study was based on a sufficient number of cases over an adequate period of time, included only the data it should include (and didn't include data it shouldn't), and the data were analyzed properly by professional statisticians, we should still wait until it is replicated before worrying about finding an explanation for the 14% statistic. A single study with statistically impressive results should not be taken as sufficient to base any important decisions on.

Now, trying to prove the statistic is due to yin and yang is another matter altogether. I have no idea how anyone could construct a scientific study to test that hypothesis.

But we can at least correct one misconception put forth in this press release: the moon is not a strong source of gravitational energy on earthlings. George Abell has calculated that the moon's gravitational pull on a human individual is less than that of a mosquito. Ivan Kelly put it this way: a mother holding her child "will exert 12 million times as much tidal force on her child as the moon."

Why would anyone cite this study favorably? Confirmation bias. If you already believe in lunar effects, this study confirms your belief. You will be less likely to be critical of it than if it goes against your beliefs. Also, the suburban myth that the moon is a strong source of energy continues to be reported in the media, giving many people the impression that it must be true.
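The Abell/Kelly comparison is easy to check to order of magnitude, since tidal (differential) acceleration scales roughly as 2GMr/d^3: a small nearby mass can out-pull an enormous distant one. The child's size and the mother's mass and distance below are my rough assumptions, but with them the ratio lands in the millions, the same ballpark as Kelly's figure.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_acceleration(mass_kg, distance_m, body_size_m):
    """Differential (tidal) acceleration across an object of the given size:
    roughly 2*G*M*r / d^3, the gradient of the inverse-square field."""
    return 2 * G * mass_kg * body_size_m / distance_m**3

child = 0.2                                          # size of a small child, metres (rough)
moon = tidal_acceleration(7.35e22, 3.84e8, child)    # moon's mass and mean distance
mother = tidal_acceleration(60, 0.15, child)         # ~60 kg mother ~15 cm away (rough)

print(f"moon:   {moon:.2e} m/s^2")
print(f"mother: {mother:.2e} m/s^2")
print(f"ratio:  {mother / moon:.1e}")  # on the order of 10^7, i.e. millions
```

The exact multiple depends on the assumed distances, but no reasonable choice rescues the idea that the moon exerts a meaningful gravitational influence on an individual person.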
The Fallacy of Suppressed Evidence

One of the basic principles of cogent argumentation is that a cogent argument presents all the relevant evidence. An argument that omits relevant evidence appears stronger and more cogent than it is.

The fallacy of suppressed evidence occurs when an arguer intentionally omits relevant data. This is a difficult fallacy to detect because we often have no way of knowing that we haven't been told the whole truth.

Many advertisements commit this fallacy. Ads inform us of a product's dangers only if required to do so by law. Ads never state that a competitor's product is equally good. The coal, asbestos, nuclear fuel, and tobacco industries have knowingly suppressed evidence regarding the health of their employees or the health hazards of their industries.

Occasionally scientists will suppress evidence, making a study seem more significant than it is. In the December 1998 issue of The Western Journal of Medicine, scientists Fred Sicher, Elisabeth Targ, Dan Moore II, and Helene S. Smith published "A Randomized Double-Blind Study of the Effect of Distant Healing in a Population With Advanced AIDS: Report of a Small Scale Study." (I'll refer to this as "the Sicher report.") The authors do not mention, nor has The Western Journal of Medicine ever acknowledged, that the study was originally designed and funded to determine one specific effect: death. The 1998 study was designed to be a follow-up to a 1995 study of 20 patients with AIDS, ten of whom were prayed for by psychic healers. Four of the patients died, a result consistent with chance, but all four were in the control group, a stat that appeared anomalous enough to these scientists to warrant further study. I don't know whether evidence was suppressed or whether the scientists doing the study were simply incompetent, but the four patients who died were the four oldest in the study. The 1995 study did not control for age when it assigned the patients to either the control or the healing prayer group. Any controlled study on mortality that does not control for age is by definition not a properly designed study.

The follow-up study, however, did suppress evidence, yet it is "widely acknowledged as the most scientifically rigorous attempt ever to discover if prayer can heal" (Bronson 2002). The standard format for scientific reports is to begin with an abstract that summarizes the contents of the report. The abstract of the Sicher report notes that controls were done for age, number of AIDS-defining illnesses, and cell count. Patients were randomly assigned to the control or healing prayer groups. The study followed the patients for six months. "At 6 months, a blind medical chart review found that treatment subjects acquired significantly fewer new AIDS-defining illnesses (0.1 versus 0.6 per patient, P = 0.04), had lower illness severity (severity score 0.8 versus 2.65, P = 0.03), and required significantly fewer doctor visits (9.2 versus 13.0, P = 0.01), fewer hospitalizations (0.15 versus 0.6, P = 0.04), and fewer days of hospitalization (0.5 versus 3.4, P = 0.04)." These numbers are very impressive. They indicate that the measured differences were not likely due to chance. Whether they were due to healing prayer (HP) is another matter, but the scientists concluded their abstract with the claim: "These data support the possibility of a DH effect in AIDS and suggest the value of further research." Two years later the team, led by Elisabeth Targ, was granted $1.5 million of our tax dollars from the National Institutes of Health Center for Complementary Medicine to do further research on the healing effects of prayer.

What the Sicher study didn't reveal was that the original study had not been designed to do any of the measurements they report as significant. Of course, any researcher who didn't report significant findings just because the original study hadn't set out to investigate them would be remiss. The standard format of a scientific report allows such findings to be noted in the abstract or in the Discussion section of the report. It would have been appropriate for the Sicher report to have noted in the Discussion section that since only one patient died during their study, it appears that the new drugs being given to AIDS patients as part of their standard therapy (triple-drug anti-retroviral therapy) were having a significant effect on longevity. They might even have suggested that this finding warranted further research into the effectiveness of the new drug therapy.
However, the Sicher report abstract doesn't even mention that only one of their subjects died during the study, indicating that they didn't recognize a truly significant research finding. It may also indicate that the scientists didn't want to call attention to the fact that their original study was designed to study the effect of healing prayer on the mortality rate of AIDS patients. Since only one patient died, perhaps they felt that they had nothing to report.

It was only after they mined the data once the study was completed that they came up with the suggestive and impressive statistics presented in their published report. The Texas sharpshooter fallacy seems to have been committed here. Under certain conditions, mining the data would be perfectly acceptable. For example, if your original study was designed to study the effectiveness of a drug on blood pressure, but you find after the data are in that the experimental group had no significant decrease in blood pressure but did have a significant increase in HDL (the "good" cholesterol), you would be remiss not to mention this. You would be guilty of deception, however, if you wrote your paper as if your original design was to study the effects of the drug on cholesterol and made no mention of blood pressure.

So, it would have been entirely appropriate for the Sicher report to have noted in the Discussion section that they had discovered something interesting in their statistics: hospital stays and doctor visits were lower for the HP group. It was inappropriate to write the report as if that was one of the effects the study was designed to measure, when this effect was neither looked for nor discovered until Moore, the statistician for the study, began crunching numbers looking for something of statistical significance after the study was completed. That was all he could come up with. Again, crunching numbers and data mining after a study is completed is appropriate; not mentioning that you rewrote your paper to make it look like it had been designed to crunch those numbers isn't.

It would also have been appropriate in the Discussion section of their report to have speculated on the reason for the statistically significant differences in hospitalizations and days of hospitalization. They could have speculated that prayer made all the difference and, if they were competent, they would have also noted that insurance coverage could make all the difference as well. "Patients with health insurance tend to stay in hospitals longer than uninsured ones" (Bronson 2002). The researchers should have checked this out and reported their findings. Instead, they took a list of 23 illnesses associated with AIDS and had Sicher go back over each of the forty patient medical charts and use them to collect the data for the 23 illnesses as best he could. This was after it was known to Sicher which group each patient had been randomly assigned to, prayer or control. The fact that the names were blacked out, so that he could not immediately tell whose record he was reading, does not seem sufficient to justify allowing him to review the data. There were only 40 patients in the study, and he was familiar with each of them. It would have been better had an independent party, someone not involved in the study, gone over the medical charts. Sicher is "an ardent believer in distant healing," and he had put up $7,500 for the pilot study (ibid.) on prayer and mortality. His impartiality was clearly compromised. So was the double-blind quality of the study.
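The 23-illness chart review also illustrates why undisclosed data mining matters statistically. If 23 endpoints are each tested at the conventional p < 0.05 level, and the tests are treated as independent (a simplification; the illnesses surely aren't), the chance of at least one spurious "significant" finding is high:

```python
# Probability that at least one of 23 endpoints looks "significant" at
# p < 0.05 purely by chance, assuming independent tests (a simplification).
k, alpha = 23, 0.05
p_any = 1 - (1 - alpha) ** k
print(f"P(at least one spurious 'hit' among {k} endpoints) = {p_any:.2f}")  # ~0.69

# A Bonferroni-style correction shrinks the per-endpoint threshold to keep
# the overall false-positive rate near alpha:
print(f"corrected per-endpoint threshold: {alpha / k:.4f}")  # 0.0022
```

Against that roughly 69 percent baseline, a handful of p-values between 0.01 and 0.04 drawn from a post hoc sweep of many endpoints is far less impressive than the abstract makes it sound.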
Thus, there was quite a bit of significant and relevant evidence suppressed in the Sicher study that, had it been revealed, might have diminished its reputation as the best-designed study ever on prayer and healing. Instead of being held up as a model of promising research in the field of spiritual science, this study might have ended up in the trash heap where it belongs.

Replication Revisited

One of the traits of a cogent argument is that the evidence be sufficient to warrant accepting the conclusion. In causal arguments, this generally requires, among other things, that a finding of a significant correlation between two variables, such as magnets and pain, be reproducible. Replication of a significant correlation usually indicates that the finding was not a fluke or due to methodological error. Yet I am often sent copies of articles regarding single studies and advised that it may be about time for me to change my mind on some subject. For example, I recently heard from Jouni Helminen that "it may be time to update the Skepdic website regarding magnet therapy on fibromyalgia patients." Jouni referred me to an article from the University of Virginia News. I state in my entry on magnet therapy: "There is almost no scientific evidence supporting magnet therapy." The article about a study done on magnet therapy to reduce fibromyalgia pain did nothing to change my mind. The study, conducted by University of Virginia researchers, was published in the Journal of Alternative and Complementary Medicine, which asserts that it "includes observational and analytical reports on treatments outside the realm of allopathic medicine...."
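A back-of-the-envelope Bayesian calculation suggests why a single "significant" study of an implausible therapy should move us only so far, and why replication matters. Every number below (the prior, the power, the alpha level) is an assumption chosen for illustration; none of it comes from the Virginia study itself.

```python
def posterior_real(prior, power=0.8, alpha=0.05):
    """Bayes' rule: probability an effect is real, given one study
    that came out 'significant' at the stated alpha level."""
    true_pos = power * prior        # real effect, correctly detected
    false_pos = alpha * (1 - prior) # no effect, spurious detection
    return true_pos / (true_pos + false_pos)

# If only 1 in 50 hypotheses of this kind is true (a guessed prior for
# implausible claims like magnet therapy), one p < 0.05 study yields:
print(f"after one study:   {posterior_real(prior=0.02):.2f}")   # ~0.25
# An independent replication, using the first posterior as the new prior:
print(f"after replication: {posterior_real(prior=posterior_real(0.02)):.2f}")  # ~0.84
```

On these assumptions, one positive study still leaves the claim probably false; it is the independent replication that does most of the work. That is the quantitative face of the point made throughout this section: single studies suggest, replications persuade.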