War about the numbers..
Tuesday, January 8, 2008 3:51:04 AM
(Above: attempt at valid criticism).
It's always interesting to read attempts at clever critique, so I was thrilled to find one on "The Lancet study" on Iraqi casualties the other day. The study was commissioned by political operatives, scheduled to hit the papers ahead of the US elections, and was obviously designed to be used as a talking point more than offered as food for thought. And its methodology basically amounted to polling more or less representative areas of the country - and then extrapolating those numbers to the entire country. In other words, about as reliable as a phone interview. Except that the questioners would be threatened with their lives, suspected of being agents collecting names for thugs - and of course afterwards suspected of inflating the numbers to aid the holy cause of killing Americans everywhere on behalf of their political anti-American shadow-lords.
Nevertheless, ..
... it predicted a number of dead in Iraq that seemed to me more likely to be true than what the counts from the morgues in various cities would suggest. Those morgue counts are the numbers used by the US and the Pentagon, and they are in fact significantly lower.
Simply because polling households for their dead, cluster by cluster, is more likely to reflect the actual toll when a great number of deaths happen at once - during an air strike, or so-called house-to-house clearing where, for example, 40mm cannon fire from C-130s replaces a knock at the door. The other method - counting matching body parts at the morgues - will consistently miss those. The same goes for every small war in the streets: the dead would not end up in the morgues, but be buried by their relatives soon after death. Deaths out in the rural areas would similarly go unregistered. It would also be unwise for people to register their dead as victims of sectarian killings, or any sort of firefight, since that would invite unwelcome attention in Iraq's unpredictable political climate. So trusting the typical casualty figures in a place that is "not like every other semi-civilised country", as one put it - that is just not advisable if you want a correct number. Which is, obviously, the reason we use methods like cluster sampling instead.
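To make the arithmetic behind that concrete, here is a minimal sketch of a cluster-style extrapolation. The numbers, the average household size, the population figure and the whole setup are made up for illustration - this is not the Lancet team's data or their actual estimation code:

# A sketch of cluster-sample extrapolation (illustrative numbers only).
# Each cluster is a list of deaths reported per surveyed household.
clusters = [
    [0, 1, 0, 0, 2],   # a cluster that caught an air strike
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
]

avg_household_size = 7             # assumed average persons per household
national_population = 26_000_000   # rough ballpark for Iraq at the time

surveyed_households = sum(len(c) for c in clusters)
surveyed_people = surveyed_households * avg_household_size
total_deaths = sum(sum(c) for c in clusters)

# Crude death rate in the sample, then extrapolated to the whole country.
death_rate = total_deaths / surveyed_people
estimated_deaths = death_rate * national_population

print(f"sample death rate: {death_rate:.4f} per person over the period")
print(f"extrapolated deaths: {estimated_deaths:,.0f}")

A real survey would also compare against a pre-war baseline rate and report a confidence interval rather than a single figure, but the point stands: the cluster with several deaths from one air strike shows up in this kind of tally, where a morgue count would never see it.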
That, of course, is no reason to expect the Lancet study to be correct. Obviously, when extrapolating like this, it would be ridiculous not to admit how important the selection of clusters is, and how badly it can skew the results. If you happen to choose the households where a bunch of people have been killed, you're not going to get a representative number. Nevertheless, the reason the cluster method is considered somewhat better than ordinary polling is that you get a representative sample of the area across several individual clusters, rather than a pile of data that hides whether it is representative at all.
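To illustrate how much the selection step matters, here is a small simulation of my own - a toy population, not anything drawn from the study - sampled once with randomly chosen clusters and once with clusters deliberately drawn from the hardest-hit areas:

import random

random.seed(0)

# A toy population: 1000 areas, 5% of them hit hard.
# Each value is the true death rate per person in that area.
areas = [0.05 if random.random() < 0.05 else 0.002 for _ in range(1000)]
true_rate = sum(areas) / len(areas)

def estimate(sampled_areas):
    # Average death rate across the sampled clusters.
    return sum(sampled_areas) / len(sampled_areas)

# Random cluster selection: roughly representative on average.
random_clusters = random.sample(areas, 40)

# Biased selection: questioners "led around" to the worst-hit areas.
biased_clusters = sorted(areas, reverse=True)[:40]

print(f"true rate:          {true_rate:.4f}")
print(f"random clusters:    {estimate(random_clusters):.4f}")
print(f"worst-hit clusters: {estimate(biased_clusters):.4f}")

Random selection wobbles around the true rate; cherry-picking the worst-hit clusters inflates the estimate many times over. Which is exactly why the selection procedure is the first thing worth scrutinising.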
In other words, the most interesting way of questioning this study's credibility would be to deal with how the clusters were selected, and whether the questioning was done in a way that encouraged less conservative numbers (and if so, in what ways).
It's conceivable, after all, that the questioners were mysteriously led around to the houses with the most bullet holes, and to the houses surrounding the largest craters - and that the study then extrapolated a country full of craters and bullet-riddled buildings, while counting any death, be it from disease, accident, war or famine, as a casualty of war. As for outright fraud: obviously it's possible, and the methodology should be available so the study can be judged on its merits, if any.
But it's curious how important statistical critique, and indeed critical sense, suddenly becomes when there's a need to get a particularly good political message out. In this case, the need for a "well, now we need to be reasonable and look at things for real - we need to be realistic about rebuilding Iraq and the actual task at hand" - because then, presto, there's no end to the reasonable and helpful scientific critiques of conclusions extrapolated from the flimsiest of evidence. Even though none of this mattered in the slightest when the message of the day was "one frog is proof that the apocralypse is coming". Which of course is merely coincidence, yeah?
As the article quotes the Lancet editor saying: science is a universal language, favouring no one.
Which is true. But it's certainly useful to look at the quality of the critique as well as the quality of the studies when judging what exactly we're dealing with - and to what degree science is involved at all. Critique and studies together are essential to having any clarity about what the studies are worth. This is true now, and has been true ever since someone first decided to draw a conclusion. And no, it is not such a novel approach that whoever can claim a monopoly on it deserves political preference.
Which is why I enjoy attempts like these so much: intellectually pretentious as only someone high on "statistics for dummies" can be, while always painfully balanced in presenting the critique against their own presupposed conclusions - and so lending those conclusions the credibility of having been considered and put meticulously through the most trying test.
But what is really being done? The only questions asked of the "alternative theory" are those that sustain the presupposed conclusion. Meanwhile there is no end to the worries that can be attached to the particular study being questioned - never admitting that those worries apply to any statistical extrapolation whatsoever, and that such problems would have been impossible to avoid in this specific study.
Which makes for the following, obviously "sound", conclusion in this critique: that representative statistical data cannot be achieved in anything but ideal conditions. Conditions which we will, naturally, not discuss how any alternative approach would achieve. While we similarly avoid mentioning what results would even be possible under practically "ideal" conditions, or how this study compares to others. And in this way we are presented with a critique that not only discounts this particular report and lends credibility to any suspicion about its accuracy, or about whether it holds any valid data at all - but a critique that would be more than sufficient to fundamentally question, if not discount on principle, any statistical polling data about anything.
Which would have been painfully apparent, had all kinds of specific suspicions about the polling data and about the authors' political affiliations and opinions not been listed and repeated over and over as a replacement for actual arguments. Because the critique is easily summarised by the three headers it lists - which amount to the general complaints against any statistical survey.
So the questions about the study are, as mentioned: what are the weaknesses of the study's methodology? What are the likely sources of error? Are those errors likely to skew the results predictably in a particular direction? Can that explain the split between the various numbers (apart from how political bias and shrillness explains all, of course)? And what sort of trust should be given to such a study in the first place - and for what reasons?
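One concrete way into that last question - how much trust the number deserves - is to see how much the estimate moves when you resample the clusters. A rough bootstrap sketch, again with invented cluster counts of my own and not the study's published analysis:

import random

random.seed(1)

# Deaths and people surveyed per cluster (invented, illustrative numbers).
cluster_deaths = [3, 0, 1, 1, 0, 2, 0, 5, 1, 0]
cluster_people = [35, 40, 38, 33, 41, 36, 39, 30, 37, 42]

def rate(deaths, people):
    return sum(deaths) / sum(people)

# Resample clusters with replacement to see how unstable the rate is.
estimates = []
for _ in range(10_000):
    idx = [random.randrange(len(cluster_deaths)) for _ in cluster_deaths]
    estimates.append(rate([cluster_deaths[i] for i in idx],
                          [cluster_people[i] for i in idx]))

estimates.sort()
low, high = estimates[250], estimates[9750]   # rough 95% interval
print(f"point estimate: {rate(cluster_deaths, cluster_people):.4f}")
print(f"bootstrap 95% interval: {low:.4f} - {high:.4f}")

A wide interval doesn't make the study worthless, but it does tell you how hard to lean on the point estimate - which is a more useful thing to establish than the shrillness of the authors.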
And if we can see those challenges clearly, and see the arguments made to sustain them - then it's possible to see that critique as useful. Then we can find out what the counter-arguments may be, and learn from the arguments made.
Unfortunately, that is something we're not seeing too often these days. Not always because people are unscrupulous crooks - but simply because people are not used to seeing science as useful unless it confirms their political goals and targets. And that, I'm afraid, will not change by assuming a serious and intellectual demeanor while heroically lambasting a survey for being fundamentally flawed - in that it is a statistical construction, and not an accurate measurement of "fact".
Something which, of course, actually was discussed in the survey as it was publicised. But never mind that.
--
Btw, no - I'll not get into the way the entire piece reeks of the ever-prevailing narrative that "liberal and democratic" equals insane political number-magicking based on short-sighted political gratification, justified by how a small lie will inevitably save the world - while conservative and balanced skeptics are connected to calm and responsible analysis void of emotional attachment, innuendo or ideology. Anyone reading this piece, and who has a brain, will see the utter irony here, so don't bug me about it. You all suck equally much, and I've got the statistical proof to back it up.