Of course, the news reports still fit the pattern. The reporters gave as much credence to irrelevant and thoroughly debunked claims about the dangers of e-cigarettes as they did to Siegel’s study and other meaningful evidence about their value. If the reporters had had the expertise to be critical, and had been willing to give up the great opportunity to write what they like best (substantive disagreements that can be portrayed as a football match), they would not have been tricked into repeating the anti-e-cigarette propaganda. Had they been a little more sophisticated still, though, they might have asked some tough and reasonable questions about the pro-e-cigarette evidence.
The Siegel journal article (pdf) is a good opportunity to further demonstrate some points about peer review, since those who want to follow along can download it for free, and it is about two pages long and nontechnical. I would like to emphasize that I am not trying to criticize the authors (though I will continue to poke at Siegel to release more useful information, as I did in the previous post). The authors were adhering to accepted standards, and probably were severely restricted by the science-destroying word limits of the journal (a fatal problem with paper health science journals, one which can be remedied by authors who, say, have some kind of tool that allows posting of information in a way that interested people can read it – hint, hint!). It is those standards that I seek to criticize and demystify.
As I have previously observed in this series and elsewhere, but it bears repeating, journal reviewers (the editors and referees) almost never know more about the research than you do after reading the article. If you have a minute, take a look at the methods section of that article, which explains what the researchers did. As I noted in the THR blog, several rather critical bits of information are missing. Not only does this absence keep readers from critically evaluating the methods, but it makes it difficult to determine what the results really say. This is typical for medical and public health journals, though a few epidemiology journals do a lot better, and I would have required a lot more – including providing the survey instrument and the recruitment email, and a lot more about inclusion criteria – before even sending it out for review if it had been submitted to Harm Reduction Journal.
But as Karyn Heavner noted in the comments on my previous UN post, the American Journal of Preventive Medicine’s requirements (in their instructions for authors) about describing methods seem to require only the year of data collection (useful, but obviously minimal information) and a statement that the study had ethics board approval and informed consent (important things to have done, but a pure waste of space to report in the article). She pointed out that as far as this journal is concerned, reporting “we analyzed the data using standard methods” is a sufficient description of the statistical methods. The journal (not uniquely in the field) actually goes so far as to print the methods section in a small font, practically announcing “we do not care about what research was actually done, and we do not think you, dear reader, should worry your pretty little head about that complicated science stuff; just read the results without knowing what they are results of, and believe the assertions of the authors.”
But what is not obvious to readers who have not participated in peer review before is that the reviewers must have been satisfied with this view, because they almost certainly saw no more about the methods than what appears in the article. As I have said before in this series, so much for the mighty wizard of peer review.
In response to a question on one of my previous posts, I explained that perhaps as much as half the time when I am asked to review something, my report consists of saying (with a few hundred more words of elaboration) something like, “I cannot evaluate anything of substance until I know more about the methods. This paper should include the survey instrument as an appendix or link to it. You need to explain how the questions were asked and what responses translated into particular coding. The methods should report any analyses that were tried but not reported. You need to explain the choices you made about X, Y, and Z. Do that and send it back to me and I will be able to review it.” The reactions to this are polarized: Most journals never ask me to review for them again, and typically just make a decision about the paper as if I had not even responded. A few journals (generally subject-matter-focused journals that are very serious about getting good work done on their topic) keep coming back to me or ask me to be an editor.
Note that my list of what I ask for does not include the actual data from the research or the statistical programs used to analyze it. I know that it is pointless to ask, and rarely would I want to take the time it would require to make use of all that. When I was advising students, it was part of my job to review their work at that level, looking at the actual data and programming. It is a lot of work, and not something I could do more than a handful of times a year given everything else I had to do (aside: kinda makes you wonder what happens to student work that cannot get an adviser to do that, perhaps because someone’s lab is cranking out 100 papers per year?). I was unlikely to do it for any but the most important journal reviews; when I worked that hard as an editor on occasion, it was pretty clear I had contributed more to the paper than some of the coauthors. It is not that I ignore the analysis, but I have to streamline my review of it: Sometimes a specific point about the data and analysis occurs to me and I ask the authors to report something, or to tell me what happens if they run a particular variation on the analysis even if they do not want to include it in the paper. I am not sure why I even bother with that, though, since the authors seldom do it.
I have to assume that most people who express faith in health science peer review would find even the review process I provide as an editor and reviewer to be surprisingly thin. And what I do is up there in the “about as good as it gets” range. The comments of one good, methodologically skilled, attentive, careful reader (expertise in the subject matter is also helpful but not actually critical) – helping a student or junior colleague, doing a favor for a friend, or writing in a blog – are worth more than the peer review process.
Even beyond examples of politicized advocacy journals being willing to publish anything that supports their claims, there is just more demand to publish papers than there is supply of good reviewers. I recently got a mass-mailed invitation to submit something to the new journal Education Research. The call for papers had numerous grammar errors and gave little indication of what they would publish – anything, I think. The same email was also recruiting for editors, and checking the website reveals that they have only two in addition to the two co-editors-in-chief. Now, I am not trying to denigrate what I assume is an effort to create home-grown African-based journals, and I know I was just born lucky to have what has become the Earth-standard scientific language as my native tongue, so I am not making fun of bad grammar. The point is that thanks to this journal, the articles “‘Nation is a non-existent notion’: Greek students determine the term ‘nation’” and “Falling standard in Nigeria education: traceable to proper skills-acquisition in schools?” are now peer reviewed journal articles. Meanwhile, Paul Krugman’s blog (linked from this page), the many excellent analyses of what tipped Egypt over the edge (power to the people!), the extremely cogent comment that “kristinnm” just left at the above-linked THR blog post, and my reports about the health effects of wind turbines (to mention just what I read today) are not.
Grade school students are taught that the key to science is skepticism, doubting and testing everything. But the key to science is actually trust – without it, we cannot move forward. We need ways to know whom we can trust. The current peer review system, like having specialist reporters, is supposed to provide us with the confidence that we can trust something. Oops.
But here is the hard part: I am telling you ways in which your trust has been betrayed. But how do you know that you can trust me?
I trust Krugman because much of what he writes I can judge for myself, and it is right (and the Nobel Prize helps, but that is really not the key). I trust kristinnm’s observations because she hit on some points that I had been trying to get a better handle on, and what she wrote rang true. I trust some people who are writing about Egypt, but I have to remind myself that I honestly do not know whom to believe (just because I work to remind people they should not blindly trust what the pundits tell them does not mean I remember it myself once I am out of my element). As for whether someone should trust what I am writing about wind turbines, I have tried to put everything I have learned about how to answer “how do you know that you can trust me” into practice, and I will make those writings the subject of my critical eye – as best I can do that about myself – here in the future.
[Update: I did not make the connection immediately, but kristinnm is Kristin Noll Marsh who followed up with this post at her Wisconsin Vapers Blog which expands on her thoughts. I think this is really useful dialog, though more on-topic for the THR blog than UN, so I am going to follow up there.]