In yesterday’s post I suggested that a study of statins that failed to find what we would expect to find, based on a lot of prior knowledge, might not have been looking at the data the right way. (I also preemptively condemned what I am 99% sure will be the reaction of the medical establishment to the study, which is to ignore it without even trying to explain the result because they are sure it is wrong and do not understand the value of the study design.) Also yesterday I wrote in our weekly readings in tobacco harm reduction at the THR blog about a new study that seemed to be designed to fail. It is a similar theme, that it is very easy to do a study that purports to look for a phenomenon, but really does not do so. I think that point would benefit from a bit more Unhealthful News detail.
The key point to keep in mind is that almost all public health science studies (along with most psychology studies, and some other fields) produce results that are much narrower and less interesting than the authors and others interpret them to be. A lot of the confusion about this probably stems from medical research (which we read about in the newspaper and which most public health researchers were trained in, rather than in public health research) and physics (which is actually a very unusual science, but is misleadingly taught in school as if it were the canonical science that establishes the methods used in other sciences). Those who are vaguely familiar with clinical research or physics will think of experiments as being designed to optimally answer a narrow, well-defined question, like which of two surgical procedures produces fewer complications on average, or what happens when you swing a pendulum in a gravity field.
But when we want to know something in most social science realms, we often cannot do the experiment we want. Want to know what will happen with health outcomes if you lower the salt content of lots of foods? About the best experiment you can do is to pay a bunch of people to eat as assigned for a few years and see if the ones eating less salt have better health outcomes. The problem is that the results of that study will be interpreted not as “forcing cooperative people to follow a monitored low-salt diet has health benefits”, but as “lowering the salt content of foods will improve health outcomes.” Similarly, “give people condoms and actively bug them to use them, so that it becomes a matter of personal obligation and identity, and HIV transmission goes way down” will be interpreted as “condom distribution and education dramatically lowers HIV transmission”. This is why public health, economics, and other social sciences rely primarily on observational studies, which measure what we are really interested in. It is not that experiments (“clinical trials”) would be too expensive or unethical, as is often claimed; rather, doing the right experiments would be more or less impossible.
The new study was described in the news (unfortunately, I do not know any other good source of information about it, so I have to go with what was reported) as answering,
Can a smokeless product, in this instance Camel Snus, contribute to a smoker quitting cigarettes…?
That is a pretty broad question, isn’t it? (For those readers who may not know, substituting smokeless tobacco for smoking provides nearly all of the health benefits of quitting entirely without depriving the user of nicotine.) The study, by Matthew Carpenter at the University of South Carolina, obviously will not answer the broad question. What could answer a question as broad as “can it contribute?” Putting Camel Snus on the market and seeing what happens – done and underway.
Presumably we can narrow down what the study is really examining, right? Well, it does not appear that the author is capable of doing so. A few paragraphs later,
Carpenter’s research team wants to learn whether Snus [sic – should not be capitalized except as part of a brand name] leads to quit attempts, smoking reduction and cessation
Again, the product is being given remarkable credit for independent action and perhaps even volition. Is the product acting, or are we talking about the mere existence of the product? Obviously whatever they are really doing is much more specific. As best as I can figure out, they will be gathering a large collection of smokers who are not trying to quit and will give half of them a supply of Camel Snus. What is wrong with that? Nothing, so long as the results are interpreted as “what happens over a fairly short period of time if you give random smokers one particular variety of smokeless tobacco, without educating them about why they should want to switch, but kind of implying (to an unknown degree) that they ought to by virtue of the fact that you are giving it to them?” Of course, that is not how it will be interpreted. In theory the researchers might be honest about the extremely narrow implications of their research, answering a very limited and artificial question. Not likely, though:
“The study will provide strong, clear and objective evidence to guide clinical and regulatory decision-making for this controversial area of tobacco control,” Carpenter said.
If I did not already know what passes for policy research in tobacco and health, I would assume this was a joke. Setting aside the fact that there is no such thing as objective evidence (this reflects generally bad science education that cuts across fields), there are still numerous problems with this claim. The study will provide “strong” evidence? Really? A single highly-artificial intervention, with a single moderate-sized population, will tell us what we need to know?
And even if it could, why are they assuming their evidence will be strong and clear? Are they already writing their conclusions before they start (not unheard of in tobacco research, but trickier for a study like this than it was for, say, DiFranza’s Joe Camel studies)? Even if you believed that this study could ever give clear policy-relevant evidence for some outcome, Carpenter’s assertion depends on his having already decided what the results will be. Presumably if no smokers switched they would consider this strong evidence of something, and we would probably all agree it was strong evidence if 200 switched. Somewhere in between is a crossover point (which will vary by observer) where someone would say “hmm, I am not quite sure what to say about this”. But apparently the researchers plan to keep that from happening, to make sure they have “strong, clear” evidence of something. I suppose this is fairly realistic, since they do seem to be designing a study that avoids encouraging smokers to switch.
As for creating a “guide” for policy, perhaps you could conclude that the study result will help inform decision making. Anything can help inform. But no specific scientific result can guide policy decisions, and certainly not one as obliquely informative as this one will be.
The study design might not be quite as bad as I am guessing. There was an allusion to “or another smokeless product”. If they actually give someone a decent sample of many varieties of commercially available smokeless products, and then give them more of whichever they ask for, this at least solves the problem of conflating “not spontaneously attracted to switching” with “does not like the taste of Camel Snus” or “Camel Snus does not deliver enough nicotine fast enough to appeal to most smokers, even though other smokeless tobacco products do”. Studies that force people to choose from only one or just a few varieties of a highly variable consumer product are completely inappropriate for assessing overall preferences for the category.
“We’re just trying to mimic the real-world scenario of a smoker being exposed to these products in their own environment, such as a grocery store,” Carpenter said.
It is indeed true that most American smokers (who have been sufficiently lied to about the risks that they would not consider switching to, say, Skoal products, which offer the same reduction in health risk compared to smoking) will see the beckoning of only the new Camel Snus and Marlboro Snus, and not the wide variety of other products that exist. But why, exactly, would anyone care about the results of an experiment that mimics that situation? Far worse, though, is the apparent failure to educate the smokers as to why they might want to switch. Actually, this is a bit ambiguous and maybe they plan to provide some information, or maybe they will see that it is the right thing to do before they start and change their plan. But between the previous quote and the following one, it does not appear so:
Carpenter said researchers are not trying to encourage the use of smokeless tobacco with the study.
What could possibly possess someone to do a study of whether people will adopt a more healthy behavior and not try to encourage them to do it? Here are some condoms; do whatever you want with them. Here is a course of statins, but we are not going to tell you how much to take or why you should want to. Here is your healthy food; you might expect that I am going to suggest you not pour salt and butter on it, but do whatever you want.
But, as Carpenter alluded, the fix is probably in, and the plan by the US government (which is funding this) is to interpret the almost certainly unimpressive rate of switching as evidence that THR does not work, full stop, no caveats. Frankly it is surprising it has taken this long. I remember Brad Rodu and I, the first time we ever met, eight or nine years ago, wondering why the anti-THR activists (the anti-tobacco extremist faction) were not cranking out studies that were designed to “show” that smokers who were not interested in becoming abstinent would also not switch to low-risk nicotine products. These would undermine the efforts of those of us who genuinely cared about public health to educate people about THR. It was clearly easy to design studies that would show no interest in switching (e.g., by not suggesting there was any reason to switch, not helping people switch, etc.). I think the failure to pursue this tactic was because the extremists were convinced that they could undermine THR by simply lying to the public and convincing them that there were no low-risk alternatives. They were largely successful with that for a decade, and no doubt killed many smokers who would have switched if they had not been misled.
But in the information age, claims that are so obviously false can only stand up so long. While most people still believe the lies, enough have learned the truth that this tactic is failing and a collection of other tactics has been adopted. And it turns out that misrepresenting the scientific evidence to claim that low-risk products are not low-risk is rather more difficult than just paying to create “scientific” “evidence” that low-risk products should be banned or discouraged because most smokers do not want to switch to them. Yes, I know, the logic of the previous sentence is even faultier than the science could be, but that is just how they roll.