Wednesday, August 03, 2005

The Science of Nitpicking (1)


One of the most tedious aspects of blogging is sorting through all the crap. You know how it goes; you're slogging your way through the news, features and op-ed pages of the mainstream media looking for something that might just possibly inspire you to write and, with each item you read, you find yourself stuck on the same question: is this complete crap, mostly crap with a couple of brilliant ideas thrown in, or your actual intellectual paydirt? I've been looking for a way to make the sorting a little more efficient, and I think I've finally come up with something workable.

In any piece of polemic you read, whether it's a blog post, an op-ed article, or a stodgy policy proposal in PDF format, you're going to encounter assertions and lines of argument which look a little dodgy. Identify these as you go and, in a spirit of fair-mindedness, assign each a 50% probability of being true (or acceptable, interesting or intriguing). Implicitly, each has a 50% probability of being some sort of crap such as a factual error, an oversimplification or complete nonsense.

Starting with the first, assess each dodgy proposition for actual (rather than suspected) error and count the errors. You need to take the propositions in order for the test to have any semblance of validity*. If you find an uninterrupted sequence of five errors you can stop: there's a 95% probability that what you're reading is complete crap. 95% is widely accepted in statistical circles as the minimum confidence level you have to achieve to claim a significant result. If you're feeling lucky, you might try for a significance of 99%: for that you need to demonstrate a succession of seven errors of fact or argument. Table 1 (below) lists the actual probabilities; numbers buffs might find it marginally interesting.

Table 1

Number of Successive Errors    Likelihood that the Article is Complete Bullshit
1                              50%
2                              75%
3                              87.5%
4                              93.8%
5                              96.9%
6                              98.4%
7                              99.2%
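If you'd rather let a machine do the arithmetic, here's a minimal sketch in Python (the language and the function name are my own choices for illustration, nothing sanctified by statistical authority) that reproduces Table 1 and checks the 95% and 99% thresholds:

    # Sketch reproducing Table 1: each suspect proposition starts with a
    # 50% chance of being crap, so a run of n confirmed errors leaves a
    # 1 - (1/2)**n likelihood that the piece is complete bullshit.

    def bullshit_likelihood(n_successive_errors: int) -> float:
        """Likelihood that an article is complete bullshit after a run
        of n successive errors of fact or argument."""
        return 1 - 0.5 ** n_successive_errors

    for n in range(1, 8):
        print(f"{n} successive errors: {bullshit_likelihood(n):.1%}")

    # Five successive errors clear the conventional 95% significance
    # bar; seven clear 99%.
    assert bullshit_likelihood(5) >= 0.95
    assert bullshit_likelihood(7) >= 0.99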

One obvious deficiency of this test is that it only identifies one particular variety of crap artistry: the kind which produces a continuous stream of bullshit and fallacy entirely uninterrupted by fact or reason. If you want to quantify the crappiness of a piece where fact and bullshit are more freely intermingled, you'll need to apply another test, which I'll be working up for The Science of Nitpicking (2).

* I suppose I'd better mention somewhere, even if it's only in a footnote, that the whole basis of the test is that each of these checks is a Bernoulli trial, where a hit on the bullshit detector counts as a success. That's how the probabilities in Table 1 are calculated.
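Spelled out (a sketch of that footnote's arithmetic, assuming the checks are independent trials with success probability one half), the chance of a run of n hits arising purely by chance is

    \Pr(\text{run of } n \text{ hits}) = \left(\tfrac{1}{2}\right)^{n},

and the likelihood column in Table 1 is simply its complement, 1 - (1/2)^n.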