Today’s samizdat health news included a post by Brad Rodu about his fight with the Karolinska Institute in Sweden over their illegal refusal to provide some of their data. Rodu, my colleagues at TobaccoHarmReduction.org, and I (sometimes all of us together) have written extensively about how researchers at KI, who are motivated by an overt political agenda (a non-corporate worldly agenda, I might add, as an example of a point in yesterday’s post), have published a series of obviously biased analyses of a dataset they hold, analyses that claim to show that smokeless tobacco causes various disease risks, contrary to the rest of the evidence. I will let Rodu’s post and his promised, more detailed follow-up cover the substance of the particular claims (and you can read what he, I, and others have written if you want more), and will focus here on the broader implications for health research reporting. If you do want to read more, I suggest this poster (pdf) of ours, since it covers the topic area but also focuses on this case as a failure of the scientific process, including a journal’s rejection of an obviously important letter to the editor, which relates to the comment discussion in UN34 that I linked to above.
At the simplest level, this story is a demonstration of the failures of peer review. In publication after publication, the KI researchers got away with making contradictory claims about their data and conducting analyses whose implicit claims about the best way to analyze the data contradicted one another. The journal peer review process did not even seem to slow them down in this enterprise. In fairness to the reviewers, the KI researchers used unethical tactics to reduce the chance that the reviewers would judge the work effectively, including not cross-referencing the other papers they had written (i.e., pretending each analysis existed in a vacuum) and never explaining why they were making contradictory claims and shopping around for different statistical methods (they simply never acknowledged having looked at the data in any way other than the one used in the individual paper, rather than trying to explain why the methods varied). Of course, (a) the reviewers could have figured this out if they really were the studious experts that those who put so much stock in peer review probably imagine them to be, but they did not, and (b) anyone submitting a paper can employ the same tactics; so the fact that the KI researchers are ultimately to blame does nothing to change the conclusion that this episode demonstrates some serious limits of peer review.
But at a deeper level the indictment of health science publishing is even worse. What we have here is a rare case, in health science, where some researchers (Rodu, my colleagues, and I) actually took the time to figure out that published analyses had some problems, which is the essence of real peer review. (This is also what Michael J. McFadden did in the case he wrote about in the UN34 comments.) Contrary to the beliefs of those who think journal reviews represent the important scientific review of a claim, the peer review that really matters comes from dozens or hundreds of experts thinking hard about something after it is made available to them (i.e., after it is published, though not necessarily in a journal; indeed, it is better if this happens before it appears in a journal). Other fields make such review possible before the immutable journal article and accompanying press release appear: they have a culture in which anyone writing something of significance is expected to circulate it, either hand-to-hand or by posting it somewhere as a working paper, so that comments can be collected and considered, and thus errors corrected and other concerns addressed, before it goes into a journal. In health science (and some other fields) a paper is usually raced into a journal with maybe ten people ever having assessed it, and I am including papers that have eight, ten, or more authors: most of the time no more than a few of the authors have the attentiveness and skills to really review the analysis and write-up carefully.
So in health science, a paper can only be genuinely peer reviewed (by anyone other than a few of the authors, maybe a few of their friends, an editor or two, and a couple of assigned journal referees) after it appears in a journal and, if “newsworthy,” in the press. But in the case that Rodu wrote about, when he and we tried to conduct such a review and report what we discovered in the same journals that published the articles (the value of which is also discussed in the UN34 comments), we were usually prevented from doing so: only one of the journals that published the KI claims published our letters documenting the problems.
But it gets worse. As I pointed out before, reviewers do not generally get access to the data and analysis, so they cannot genuinely review what is most important. Moreover, they probably would not look very carefully at them if they had them: that would take ten times the hours and effort that most reviewers devote to a review, and most of them would not be capable of offering any important insight in any case. But even when someone wants to review an epidemiologic analysis, because they have specific concerns about what was done wrong, they cannot do it.
That observation is pretty much universal: health science analyses are almost always black boxes that are not subject to real scientific scrutiny. What is unusual about the Karolinska case is that the researchers are actually legally required by the Swedish Constitution to provide their data. If the data were shared with interested researchers, that would immediately confirm or contradict some of the observations that have been made about their conduct. And even if they continued to defy honest scientific practice (i.e., stick to standard black-box epidemiologic practice) and not share their calculation methods (recall my observation in UN34 that reported methods do not usually reveal what methods someone actually used), there are many of us who could engage in “forensic epidemiology” to assess the other claims that have been made about them.
Why would someone defy a court order in order to prevent anyone from checking numbers they published in peer-reviewed analyses? Why would they have refused even to report the results of some specific additional analysis of that data, a compromise that probably would have avoided the court fight in the first place? I will let you draw your own conclusions, but I do not think it is a Stieg Larsson-worthy mystery. The answer is almost certainly not: because they are confident that the peer review process already rendered their published results beyond reproach.
So much for the mighty wizard of peer review. Pay no attention to that creaky old self-perpetuating system behind the curtain.