Monthly Archives: March 2011

Unhealthful News 87 – all I have time for before takeoff

Air France has seatbelt extenders for infants. Much smarter than what Pediatrics recommends (full car seats filling up the plane). Viva.

I promise to have a better post tomorrow.

[Anyone wonder what happened to UN87?  I really did send it out and it said it posted, but it does not appear in the blog reader; you need to tell me things, friends.  So here it is (again?).  I still claim credit for posting it on time!  As long as I am at it, I will add an update:  AF is great (they even give out baby toys), but the remote controls for their televisions etc. are really lame.  After I spilled just one little whiskey and water over one of them in a groggy state in the middle of the night, it kept resetting my movie and, worse, turning the overhead light on and off and calling the flight attendant.  The French just do not have good waterproof electronics.  U-S-A, U-S-A.]

Unhealthful News 86 – If you cannot figure out how they could possibly measure that, they probably didn’t really

Survey research involves no magic opinio-meters that plug into people’s heads.  Everyone knows that.  All of us have been survey subjects.  And yet we seem all too willing to believe claims about survey results that could not possibly have been measured well by asking questions, even in the best case.  The best you can hope for is to measure actions and characteristics of people that they are capable of meaningfully and accurately reporting.  Moreover, since almost all surveys are based on checking a box, what is measured has to be measurable that way (i.e., it cannot require the conversation or free-text description that it takes to really communicate the details of someone’s preferences, experiences, and motives).

GlaxoSmithKline released the results of a survey of smokers they commissioned regarding the potential ban of menthol in cigarettes.  I am not going to address the underlying policy discussion, because I have already covered that.  Rather, I would like to point out what readers (and those news outlets that basically just printed the content of that press release as news) should have noticed about some of the claims.  Some observers might suggest that the main reason to question the study results is that the survey was paid for by someone with a stake in the matter at hand (if a mandatory reduction in the quality of cigarettes causes people to try to quit, some of them will buy GSK’s products, which many people believe aid quitting).  It is certainly true that the fact that they released the results at all (which they could have chosen not to do, unlike with, say, all scholarly research funded by tobacco companies that I know of, where the funder cannot suppress a result they do not like) tells us something:  GSK does not think the results offer any competitive advantage as marketing information, but does think they could influence the political debate in a direction GSK prefers.

The real reasons for doubting the results are not a lazy ad hominem criticism, though, but a scientific criticism of the claimed results, one that anyone can understand.  The most reported result was, “if the FDA were to ban menthol cigarettes, four out of five menthol smokers (82 percent) say they are likely to try quitting.”  I saw this reported as “82 percent said they would quit”, which obviously misinterprets the press release.  But just consider the actual claim:  What question was asked to get those responses?  We do not know.  But I suspect that if you designed the right series of questions, you could get three-quarters of smokers to say they are likely to try quitting next month if the month contains a Thursday.  “Try to quit” is a phrase that can describe very little effort or volition, but it elicits the impression of something aggressive and likely to succeed.

Consider also, “almost 40 percent [of menthol smokers] say that menthol flavoring is the only reason they smoke.”  Again, by asking the right questions, it is possible to get many smokers to attribute their behavior entirely to social factors, daily patterns, the aesthetics, etc., rather than the drug delivery.  We did some focus group research with smokers, and quite often no one would mention nicotine as part of their motivation.  And, yes, it is practically mandatory to acknowledge that it is not all about nicotine.  But nicotine plays no role at all in the motives of almost 40% of this subpopulation of smokers?  Come on!

But, gee, maybe it is true: “The survey shows that menthol smokers feel ‘twice-addicted’ – both to the menthol and to the tobacco – and most are attracted by the taste and feel of menthol cigarettes.”  Um, but wait a minute, how does a survey show that someone is addicted to menthol?  It is pretty sketchy to even claim that someone is addicted to smoking at all, since addiction is not well defined:  either you ask about addiction and get an answer based on each individual’s idiosyncratic interpretation of the term, or you ask well-posed questions and idiosyncratically decide for yourself which of those answers represent addiction.  But how can you possibly figure out whether menthol smokers are addicted to tobacco (presumably that means nicotine) and to menthol independently?

Only those rare individuals who had been stuck buying only non-menthol cigarettes for a while could realistically assess how they would feel about smoking non-menthols, while measuring an independent “addiction” to smoking menthol, apart from tobacco, would require that someone had experienced… well, I have no idea what.  Maybe vaping nicotine-free menthol e-cigarettes, which has probably been experienced by approximately zero of the respondents.  And that says nothing about how it can be that 40% of the respondents smoke only for the menthol, yet are apparently also addicted to tobacco.  Go figure.

The point is that you should go figure.  These results are so absurd that no one should take them seriously.  And everyone should learn enough from the most absurd claims not to take any of the other survey results seriously either.  Whatever you might think of GSK and the ridiculously self-serving balance of the press release – about how wonderful their barely-functional products are and how lousy other options for quitting smoking are – it is not difficult to see the absurdity of their conclusions about the survey results.  Perhaps if they told us what they actually asked we could make something useful from the data, though I suspect that any survey that attributes 40% of smoking entirely to something other than nicotine is pretty much doomed.

Oh, but good news:  even though the reader has no idea what the survey questions actually were or how the conclusions were reached, the press release does make the effort to report, “For analysis, sample data were statistically weighted by race, gender, income, and menthol versus non-menthol smoking to accurately reflect the current population of adult smokers on each of these dimensions.”  The humor of that might be lost on many readers, so to offer an analogy, imagine someone making a salad of Miracle Whip and iceberg lettuce, serving it over Jello, but making sure to sprinkle it only with organic sea salt.  Or, if you want a less colorful metaphor, call it polishing the brass on the Titanic.
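
For readers who wonder what that weighting step actually accomplishes, here is a minimal sketch of the idea in Python (all numbers are invented for illustration and have nothing to do with the GSK survey): the weights simply rescale each respondent so that the sample’s mix on the listed dimensions matches the population’s mix.

```python
# Illustrative post-stratification weighting: weights make the sample's
# composition match known population shares, but they cannot change what
# the question itself measured. All numbers here are made up.

# Hypothetical sample: each respondent has a group and a yes/no answer.
sample = [
    {"group": "menthol",     "says_will_try_quit": True},
    {"group": "menthol",     "says_will_try_quit": True},
    {"group": "menthol",     "says_will_try_quit": False},
    {"group": "non_menthol", "says_will_try_quit": True},
    {"group": "non_menthol", "says_will_try_quit": False},
]

# Assumed population shares on the weighting dimension.
population_share = {"menthol": 0.30, "non_menthol": 0.70}

# Sample shares, computed from the data.
n = len(sample)
sample_share = {
    g: sum(1 for r in sample if r["group"] == g) / n
    for g in population_share
}

# Each respondent's weight is population share / sample share for their group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Weighted proportion answering "yes".
weighted_yes = sum(weights[r["group"]] for r in sample if r["says_will_try_quit"])
weighted_total = sum(weights[r["group"]] for r in sample)
print(f"Weighted 'likely to try quitting': {weighted_yes / weighted_total:.0%}")
```

The weighting adjusts who counts for how much; it does nothing to fix what the question measured, which is the point of the salad analogy.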

Unhealthful News 85 – Overly-conclusive health science articles, and never having to say you are sorry

One of the reasons that there is so much junk published in health science, particularly of the kind that makes for junk news, is that there are no repercussions for declaring a dramatic, telegenic conclusion that turns out to be absurdly wrong.  Every real science allows for the possibility that a particular study, done a particular way on a particular day, might produce a result that is “wrong” in the sense of running contrary to what is (or later becomes) the agreed-upon wisdom on the point.  That is the nature of science, of random error, of unexpected effects of methodologic choices, etc.  What makes a mess of it is when someone doing a little faulty study, whose results are what they are, thinks he is writing Principia Mathematica, or at least “The mortality of doctors in relation to their smoking habits”.  In most fields, one of the key lessons taught in graduate school is that you – each individual student – know very little compared to the extent of human knowledge in and around your subject of study.  This produces an epistemic modesty that makes for better science.

This message seems to be lost on health researchers.  Part of the problem is that a lot of researchers are trained only as physicians, not as scientists, and clinical training usually includes the message that you are supposed to act like a god and never admit to your ignorance.  Actually, that is not entirely fair – it is perhaps more a matter that medical training causes people to become unaware of their ignorance, a trick of mind control that is utterly baffling and might be of interest to the psychologists working at Guantanamo.  When this god complex spills over into research, it creates a tendency to say “my little lame study showed X, and therefore X is True and the world should be changed in the following way based on that….”

Of course, this does not explain the behavior of health researchers who studied science and got research degrees rather than professional degrees, though maybe some of it is a spillover effect.  And epidemiologists, toxicologists, etc. also get to play god sometimes, influencing or even controlling decisions that are important to how people live their lives.

But the bigger problem, I believe, is that no one in health science is ever asked to say they are sorry for a faulty conclusion they adamantly declared.  Changing your beliefs based on evidence is the mark of a scientific or otherwise intelligent mind (though political pundits like to call it “flip-flopping”).  But failing to recant the old conclusions, refusing to admit that you made them, and never explaining why you were wrong before and are correct now are dishonest behaviors that warrant embarrassment and public criticism.  In a world of such ethics, having to change your conclusions means either admitting you were wrong or being justly criticized for failing to do so.  In that world you have a lot of incentive not to over-conclude.  Again, there is no embarrassment in saying “my research did Y and the result was X”, even if this turns out to contradict better evidence, so long as you stop there.  But if you issue a press release and adamant declarations about X being true, you deserve a reputation as a bad scientist and someone whose opinions should not be trusted.

That is not the world of health research and publishing.

I was thinking about this because here at Vapefest this weekend, several people have mentioned to me the new research by Thomas Eissenberg and his research group, who notoriously reported – and aggressively touted to the media – that e-cigarettes deliver no nicotine to users.  Basically, he did a badly designed study (a minor error) and then implied he had created a Great Work for the Ages (a very major error).  That group’s new study (described here) discovered what, oh, maybe a million people already knew from personal experience:  E-cigarettes do deliver nicotine after all.

Surprise!

From what has been reported, the study might well exhibit the same kind of naivety that got the researchers into trouble in the first place:  doing one tiny study of an extremely heterogeneous phenomenon and making a big deal about the quantitative results.  At least this time the results are rather less absurd than “zero”, but they are still quantitatively meaningless.  If you told me what levels of nicotine absorption you wanted to get from a study of three (yes, just three) vapers, I am sure I could, within a few tries, design a study of three people that would give you those numbers.  For the sake of what remains of the scientific integrity of research on nicotine and tobacco, we can only hope that when they publish they do not imply that the specific quantitative results matter, let alone make policy recommendations based on them.  We should not be too optimistic, though, based on this.
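
To make the small-sample point concrete, here is a toy simulation (entirely made-up numbers, assuming only that nicotine absorption varies a lot from person to person) of what happens when a “study” consists of averaging three subjects:

```python
# A minimal simulation (made-up numbers, not real pharmacokinetic data) of why
# averages from a 3-person sample are quantitatively meaningless: re-running
# the same "study" on different trios drawn from a heterogeneous population
# gives wildly different results.
import random
import statistics

random.seed(1)

# Hypothetical population of 1,000 vapers whose nicotine absorption varies a
# lot from person to person (arbitrary units; lognormal just to get spread).
population = [random.lognormvariate(2.0, 0.8) for _ in range(1000)]

# Run the "study" 10 times: each time, recruit 3 people and report their mean.
study_means = [statistics.mean(random.sample(population, 3)) for _ in range(10)]

print("True population mean:", round(statistics.mean(population), 1))
print("Ten replications of a 3-subject study:",
      [round(m, 1) for m in study_means])
```

Every replication is a perfectly honest three-person study, and the reported averages still bounce around depending on which three people happened to show up, which is the sense in which the specific quantitative results are meaningless.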

The rumor here (though I could not find any documentation of this) is that Eissenberg’s message then switched from “these things are bad because they do not give smokers the nicotine they need” to “these things are bad because they deliver so much nicotine that users might get too much”.  This might not be true – it might be that he is becoming a “friend of the cause” of vaping, as some users speculated.

But either way, the major crime against science and common sense was the original conclusion.  If you “discover” something based on one lousy study, you should not do an interview about it on CNN.  And if your “discovery” is contrary to what lots of people with better evidence than your own are pretty sure is true, then you are an utter fool for making declarations on national television.

Oh, but wait, maybe not.  You would be an idiot if there were any repercussions for staking your scientific reputation on such a claim.  But if you publish in public health, and especially if you want to be an influential pundit in public health, then making an incorrect over-the-top claim to the press makes you a shrewd politician, not a bad scientist.  Much as in Hollywood, and not so much as in science, any publicity is good publicity.  Consider what Eissenberg and company have actually contributed to the discussion and science of e-cigarettes.  Then realize that his name comes up in discussions of experts on the subject (and he presents himself as one).  Welcome to a world where Charlie Sheen is on television giving advice about relationships and healthy living, thanks to his widely-reported contributions to those areas.