Insignificant Blackbird
I picked up a new bit of jargon at the pub trivia quiz last night: Base Rate Fallacy. I was wittering on about statistical significance testing and Rob, one of my team-mates, remarked "You know it's all based on a fallacy, don't you?" Actually, I didn't. I took a bit of convincing too; it wasn't until about 4 o'clock this morning, when I was thinking it through (as a distraction from the raucous "I suppose a root would be out of the question" noises coming from the blackbird that's adopted the gable above my bedroom window as its favourite perch), that I decided there really was something in Rob's remark and the cryptic notes he made on a piece of scrap paper to illustrate the fallacy.

As I understand it, the argument goes something like this. When scientists (psychometricians, for example) employ null hypothesis testing, or significance testing (or whatever they choose to call it), they're employing the following pattern of logical inference:
If H1 (the experimental hypothesis) is true then the test statistic will have an improbable value.
The test statistic does not have an improbable value.
Therefore H1 is improbable. [And we write off the experimental results as random variation]
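Rob's point, as best I can reconstruct it, is that even this pattern only licenses "H1 is improbable" if you know the base rate: how probable H1 was before you ran the test. Here's a back-of-envelope Bayes calculation (the numbers are my own illustrative assumptions, not anything from Rob's scrap paper):

```python
# Bayes' theorem: P(H1 | E) = P(E | H1) * P(H1) / P(E).
# We ask how probable H1 is after observing an unremarkable
# (not improbable) test statistic, for different base rates.

def posterior(prior, p_e_given_h1, p_e_given_not_h1):
    """P(H1 | E) via Bayes' theorem."""
    p_e = p_e_given_h1 * prior + p_e_given_not_h1 * (1 - prior)
    return p_e_given_h1 * prior / p_e

# Illustrative assumptions: if H1 is true, an extreme statistic turns
# up 80% of the time, so an unremarkable one turns up only 20% of the
# time; if H1 is false, an unremarkable statistic turns up 95% of the
# time. We then observe an unremarkable statistic.
p_unremarkable_given_h1 = 0.20
p_unremarkable_given_not_h1 = 0.95

for prior in (0.5, 0.99):
    p = posterior(prior, p_unremarkable_given_h1, p_unremarkable_given_not_h1)
    print(f"prior {prior}: posterior {p:.3f}")
```

With a 50-50 prior the posterior comes out around 0.17, so H1 does end up improbable, just as the pattern promises. But with a prior of 0.99 the posterior is still about 0.95: same evidence, opposite verdict, and the whole difference is the base rate the inference pattern never mentions.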
Apparently, this is called Bayesian Modus Tollens, and there's a bit of controversy within statistical circles over whether or not it's valid. It certainly looks dodgy to me but that may be because of the way I've expressed it, translating Rob's formal notation into something a little short of plain English. What got me convinced that there is a fallacy here (while I was trying to ignore that randy blackbird), was working out the pattern of logical inference that's involved in accepting experimental results as significant (as in worth publishing and maybe throwing your hat into the ring for a Nobel prize while you're at it):
If H1 (the experimental hypothesis) is true then the test statistic will have an improbable value.
The test statistic does have an improbable value.
Therefore H1 is probable.
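To see how badly this second pattern can mislead, here's the textbook base-rate calculation (again with my own illustrative numbers): in a field where true hypotheses are scarce, an improbable test statistic can leave H1 improbable anyway.

```python
# Hypothetical but conventional numbers: a field where only 1 in 100
# tested hypotheses is true, a test with 80% power, and the usual 5%
# false-positive rate (alpha).

def p_h1_given_significant(prior, power, alpha):
    """P(H1 | significant result) via Bayes' theorem."""
    p_significant = power * prior + alpha * (1 - prior)
    return power * prior / p_significant

print(f"{p_h1_given_significant(prior=0.01, power=0.80, alpha=0.05):.3f}")
```

The posterior works out to roughly 0.14: the test statistic "has an improbable value", the result is publishable by the usual convention, and H1 is still odds-on false. That's the base rate fallacy Rob was on about.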
Straight up, this is a fallacy (logicians call it affirming the consequent); it's not any sort of valid inference at all, a fact that's easily demonstrated with the help of the classic Socrates example:
If Socrates is a man then Socrates is mortal.
Socrates is mortal.
Therefore Socrates is a man.
There's more on the subject here (extensive bibliography), here and here (PDF - a philosopher sinks the slipper into Michael Behe). Here's a juicy bit from the last, to whet your appetite:
In addition to rejecting evolutionary explanations, Behe advances the positive thesis that the biochemical systems he describes in loving detail were designed by an intelligent agent (p. 204). However, for these details to favor intelligent design over mindless evolution, we must know how probable those details are under each hypothesis. This is the point of the Law of Likelihood. Behe asserts that these details are very improbable according to evolutionary theory, but how probable are they according to the hypothesis of intelligent design? It is here that we encounter a great silence. Behe and other ID theorists spend a great deal of time criticizing evolutionary theory, but they don't take even the first steps towards formulating an alternative theory of their own that confers probabilities on what we observe. If an intelligent designer built the vertebrate eye, or the bacterial flagellum, or the biochemical cascade that causes blood to clot, what is the probability that these devices would have the features we observe? The answer is simple - we do not know. We lack knowledge of what this putative designer's intentions would be if he set his mind to constructing structures that perform these functions.
The sad fact about ID theory is that there is no such theory. Behe argues that evolutionary theory entails that adaptive complexity is very improbable, Johnson rails against the dogmatism of scientists who rule out a priori the possibility of supernatural explanation, and Dembski tries to construct an epistemology in which it is possible to gain evidence for the hypothesis of design without ever having to know what, if anything, that hypothesis predicts. A lot goes wrong in each of these efforts, but notice what is not even on the list.