This article, while about the functioning of the human mind and body, is not really about health, but it is too great an example of a particular type of bad health science reporting to pass up. It begins:
One of psychology’s most respected journals has agreed to publish a paper presenting what its author [Cornell emeritus professor Daryl J. Bem] describes as strong evidence for extrasensory perception, the ability to sense future events.
My interest in the article is not about ESP. (Nor is it about the field of psychology despite yesterday’s entry being about that field. It was pure coincidence – who could have foreseen that today would produce such a great psychology story?). It is about the science reporters and some of the story subjects apparently not understanding what the naively revered process of scientific peer review actually does.
The article reads like a story of some kind of religious doctrinal rift. This is actually not too surprising. The topic that the mainstream media is far-and-away best at reporting on is sport. Perhaps because of that, they try to make every other topic – public policy, science, etc. – as much like a sporting match as possible, emphasizing the battling partisans and score-keeping over substantive analysis of the topic. Among pursuits of the mind, doctrinal battles are already quite similar to sporting matches, so portraying scientific inquiry as if it were such a battle is probably just too great a temptation for reporters.
As a result, most of the article portrays a fight about whether these findings should even be allowed to be published because they are contrary to conventional belief. I would guess that any such debate in the field is being exaggerated by the press. However, this does seem to represent the actual attitudes of at least some anti-scientific members of this particular scientific community:
Some scientists say the report deserves to be published, in the name of open inquiry; others insist that its acceptance only accentuates fundamental flaws in the evaluation and peer review of research in the social sciences. “It’s craziness, pure craziness. I can’t believe a major journal is allowing this work in,” Ray Hyman, an emeritus professor of psychology at the University of Oregon and longtime critic of ESP research, said. “I think it’s just an embarrassment for the entire field.”
I say anti-scientific because, c’mon, how exactly do these people think science works? We all get together and decide what is true and then produce evidence to support it, burying anything that contradicts it? Well, I guess that is what passed for science in the dark ages, and is what passes for science in anti-tobacco journals and a few similarly politicized areas, and apparently for some areas of psychology research. Real science, however, relies on an interplay of theorizing, analyzing, and reporting of field/experimental research. All of these are needed, including reporting research results that might not end up supporting an accepted theory.
I am especially amused by the bit about this being a fundamental flaw in peer review. I guess there were a couple of generations during which the peer review process was considered to add great value, in between Einstein (peer review started to become popular late in his career and he was appalled by it) and now (when anyone who has participated in peer review in a high-volume science, and who has half a clue, knows that it just barely adds value). Those of us familiar with peer review are aware that it serves to screen out some research that uses particularly bad methodology (it sounds like the Bem studies use methods as good as any in the field – pretty cute ones at that, which you can read about at the link above). Beyond that, peer review does nothing more than any editor could do: get rid of material that is incoherent or off-topic for the journal. Of course, it is often used to censor those who do not support the opinions of those who control the field, so I guess that is what Hyman was referring to.
Here is the bit that made this a must-blog for me today:
But many experts say that is precisely the problem. Claims that defy almost every law of science are by definition extraordinary and thus require extraordinary evidence. Neglecting to take this into account — as conventional social science analyses do — makes many findings look far more significant than they really are, these experts say.
Uh, yeah. And how exactly do we accumulate such extraordinary evidence? By publishing one study on the topic, and then another, and then a few more until everyone (or at least everyone who is remotely honest) has to say “huh, I guess we were wrong about that one”. I suspect that Bem is not so naive as to think, “my evidence alone has definitively proved that this extraordinary phenomenon is true.” Only a non-scientist would think that we have to defend the science literature against results that support a hypothesis that might come to be accepted as wrong, something that is obviously impossible. But the reporter and those he talked to seem to think that the “extraordinary evidence” rule means do not publish even a single result that contradicts the conventional wisdom until we have extraordinary evidence. I trust everyone sees a little problem with that.
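The point about accumulating evidence is just Bayes’ rule at work. A small sketch of my own (not from the article or from Bem’s paper, and with the prior and per-study Bayes factor picked purely for illustration) shows why one “significant” result barely moves an extraordinary claim, while a string of published, supportive studies eventually would:

```python
# Illustrative sketch only: the prior (one in a million) and the Bayes
# factor of ~10 per supportive study are assumptions for demonstration,
# not values taken from Bem's work or the news article.

def posterior(prior, bayes_factor):
    """Posterior probability of a hypothesis after seeing evidence with
    the given Bayes factor (likelihood ratio favoring the hypothesis)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

prior = 1e-6        # assumed prior probability of precognition
bf_per_study = 10   # assumed evidential weight of each supportive study

p = prior
for study in range(1, 7):
    p = posterior(p, bf_per_study)
    print(f"after study {study}: P(hypothesis) ~ {p:.6f}")
```

With these numbers, a single study leaves the posterior around one in a hundred thousand, which is exactly why no one should panic about publishing it; only after several independent supportive results does the probability climb toward even odds. That is the mechanism by which extraordinary evidence gets accumulated, and it requires publishing the individual studies along the way.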
What is worse, my experience in public health says that the “extraordinary evidence” rule applies to something that defies the preferences of those with the big money, not something that defies laws of science. It is easy for anyone to publish absurd claims about the effects of environmental tobacco smoke, for example, claims which defy everything we know about environmental health. But it is extremely difficult to publish a solid study that supports the claim (which is much more consistent with science, though not politically correct) that ETS has effects that are so small that they are difficult to measure. Trying to publish studies about any aspect of tobacco harm reduction is equally difficult because the peer reviewers (read: Official Censors) play games like questioning aspects of THR that are completely unrelated to the particular study, effectively making the same circular demand as the above “extraordinary evidence” non-rule: researchers must prove that THR works before they are allowed to study whether it works or not.
Finally, the news story makes several references to other researchers re-analyzing Bem’s study data. This must mean that Bem made the data available. If this be junk science in parapsychology research, play on. In epidemiology we can only dream of getting access to data to do an honest reanalysis, even after obviously biased and misleading analyses are published (are peer reviewed, I might add). That is even the case when the junk scientists’ data is officially a matter of public record, but more about the Karolinska Institutet will have to wait until later.