
Unhealthful News 147 – Bad news about pharmaceutical niacin, pretty good news about health science

Since I recently wrote about statins, I thought I would follow up with today’s story about cholesterol drugs.  It is actually a story of most everything proceeding in a way that makes perfect sense, though it seems to have created a lot of consternation.

Having lower “bad cholesterol” reduces cardiovascular disease (CVD) risk; statins lower bad cholesterol; and trials have shown that taking statins provides the health benefits.  Also, statins do not cost much (apart from the pharma industry profits that can be made via patents) and do not seem to have much downside.  All in all, a straightforward story of preventive medicine.  The story I wrote about was that they seemed not to be doing so well in practice in Sweden, but that was a “hmm, we should try to explain that” moment, not a case of “whoa, it looks like we were wrong.”

In a story that turns out to be dissimilar, having higher “good cholesterol” and lower triglycerides reduces CVD risk; niacin, a B vitamin, raises good cholesterol and lowers triglycerides; but the studies do not seem to show that taking niacin reduces CVD risk.  This is disappointing, since niacin is also cheap, though for some people it causes an annoying skin flush and sometimes other superficial side effects.  It is fairly odd to find a case where having a particular physiologic status is good for you, but causing that status is not good for you.  It becomes more likely, though, when the method of causing it departs substantially from what causes it in nature, as it were.  However, this was not the first time a drug to raise good cholesterol failed to have the expected health effect, so it was not totally shocking.

The way this transpired should not be seen as troubling, however, despite the way some news reports have portrayed it.  Consider the sequence of events:  Observational research supported the conclusion that the cholesterol levels in question (when not drug-induced) result in lower CVD risk.  Simple short-term studies supported the conclusion that niacin causes those levels.  Niacin is cheap and low-risk (and those who hate the side effects can rationally choose to not take it).  Therefore it was the obvious rational choice for people to have been taking niacin while awaiting further information.  Further research that connected up the whole proposed causal pathway (niacin causes reduced CVD risk) rather than breaking it into pieces, however, finally suggested there was no benefit.  Oh, well.  Not much harm was done, and that is why we do this research: to find out if what we believed before seems to be wrong.

So, what is disturbing about this story?  One issue is that the study, and apparently the popular medical regimen, consisted of taking Abbott’s drug, Niaspan, which is basically a slow-release version of niacin.  Presumably someone somewhere concocted a reason why this drug should be used rather than cheap generic niacin, but it certainly was not because it was shown to be more effective (obviously: we only just learned how effective it is, which is to say, not at all).  I guess the only good news in this turning of a common nutrient into private profits is that it prompted the study; sadly, if people had just been taking niacin from competitive market sources, the study might never have been done.

Also mildly disturbing was the early cessation of the study because the group taking niacin had a somewhat higher rate of stroke (and no reduction in heart attacks, as was hoped).  In this particular case, the good effects were not happening, so quitting the study was a good idea.  In general, though, the rules for stopping studies early because they have become “unethical” are quite misguided; that is a story for another day.

But since there was no apparent benefit, rather than a complicated uncertain tradeoff between costs and benefits, stopping in this case seemed entirely sensible.  Not so sensible is:

Wells Fargo Securities analyst Larry Biegelsen said the surprise findings could cut Niaspan sales by 20 to 30 percent.

So the message will be “this does not seem to work, so we advise only three-quarters of you to keep buying and taking it”?  You really have to love our medical industry.  Notice also that it is the Wall Street guys who are assessing the effects of this.  I did not notice any broad comments about how this should affect behavior from the medical or public health people.  Health research, in the mind of those in the halls of power, is not primarily a story about health.

Of course, it is possible that some consumers will get a benefit, people different from those who were studied (who had a history of heart disease, and like most trial subjects, were rather different from most of the target population).  I think this is what the study leader was trying to say when interviewed, though it came out as unintentional comedy:

But it’s not clear if niacin would have any effect on people at higher risk or those who don’t have a diagnosis of heart disease yet but take niacin as a preventive, said study co-leader Dr. William Boden of the University at Buffalo.
 “We can’t generalize these findings …to patients that we didn’t study,” he said.

I would have to say that any study whose results cannot be generalized beyond the few thousand people in the study is really not worth doing.  

Yes, it is always possible that some unstudied types of people will benefit, but there are three strikes here:  No studies show this drug helps, one study shows this drug does no good, and other good-cholesterol-raising drugs have not shown health benefits either.  I think this falls into the category of “stop recommending this unless some new evidence emerges to change our minds.”

So if you piece together all the claims, we have a study that showed there is no benefit from causing what is known to be a beneficial difference under other circumstances, which focused on one patented version of a common nutrient, which was stopped for no good reason, but could not be generalized beyond a few thousand people, though it is relevant to maybe a million people taking the nutrient for the non-existent benefit, the implications of which are being studied by finance guys rather than health policy makers, and that will end some but not all use of the apparently useless treatment.  And yet, all in all, compared to much of what we see, this story arc is a case of health science working mostly like it should.

Unhealthful News 146 – Tobacco harm reduction study is apparently designed to fail; it was only a matter of time

In yesterday’s post I suggested that a study of statins that failed to find what we would expect to find, based on a lot of prior knowledge, might not have been looking at the data the right way.  (I also preemptively condemned what I am 99% sure will be the reaction of the medical establishment to the study, which is to ignore it without even trying to explain the result because they are sure it is wrong and do not understand the value of the study design.)  Also yesterday I wrote in our weekly readings in tobacco harm reduction at the THR blog about a new study that seemed to be designed to fail.  It is a similar theme, that it is very easy to do a study that purports to look for a phenomenon, but really does not do so.  I think that point would benefit from a bit more Unhealthful News detail.

The key point to keep in mind is that almost all public health science studies (along with most psychology studies, and some other fields) produce results that are much narrower and less interesting than the authors and others interpret them to be.  A lot of the confusion about this probably stems from medical research (which we read about in the newspaper and which most public health researchers were trained in, rather than in public health research) and physics (which is actually a very unusual science, but is misleadingly taught in school as if it were the canonical science that establishes the methods used in other sciences).  Those who are vaguely familiar with clinical research or physics will think of experiments as being designed to optimally answer a narrow, well-defined question, like which of two surgical procedures produces fewer complications on average, or what happens when you swing a pendulum in a gravity field.

But when we want to know something in most social science realms, we often cannot do the experiment we want.  Want to know what will happen with health outcomes if you lower the salt content of lots of foods?  About the best experiment you can do is to pay a bunch of people to eat as assigned for a few years and see if the ones eating less salt have better health outcomes.  The problem is that the results of that study will be interpreted not as “forcing cooperative people to follow a monitored low-salt diet has health benefits”, but as “lowering the salt content of foods will improve health outcomes.”  Similarly, “give people condoms and actively bug them to use them, so that it becomes a matter of personal obligation and identity, and HIV transmission goes way down” will be interpreted as “condom distribution and education dramatically lowers HIV transmission”.  This is why public health, economics, and other social sciences rely primarily on observational studies, which measure what we are really interested in.  It is not that experiments (“clinical trials”) would be too expensive or unethical, as is often claimed; rather, doing the right experiments would be more or less impossible.

The new study was described in the news (unfortunately, I do not know any other good source of information about it, so I have to go with what was reported) as answering,

Can a smokeless product, in this instance Camel Snus, contribute to a smoker quitting cigarettes…? 

That is a pretty broad question, isn’t it?  (For those readers who may not know, substituting smokeless tobacco for smoking provides nearly all of the health benefits of quitting entirely without depriving the user of nicotine.)  The study, by Matthew Carpenter at the University of South Carolina, obviously will not answer the broad question.  What could answer a question as broad as “can it contribute?”  Putting Camel Snus on the market and seeing what happens – done and underway.

Presumably we can narrow down what the study is really examining, right?  Well, it does not appear that the author is capable of doing so.  A few paragraphs later,

Carpenter’s research team wants to learn whether Snus [sic – should not be capitalized except as part of a brand name] leads to quit attempts, smoking reduction and cessation

Again, the product is being given remarkable credit for independent action and perhaps even volition.  Is the product acting, or are we talking about the mere existence of the product?  Obviously whatever they are really doing is much more specific.  As best as I can figure out, they will be gathering a large collection of smokers who are not trying to quit and will give half of them a supply of Camel Snus.  What is wrong with that?  Nothing, so long as the results are interpreted as “what happens over a fairly short period of time if you give random smokers one particular variety of smokeless tobacco, without educating them about why they should want to switch, but kind of implying (to an unknown degree) that they ought to by virtue of the fact that you are giving it to them?”  Of course, that is not how it will be interpreted.  In theory the researchers might be honest about the extremely narrow implications of their research, answering a very limited and artificial question.  Not likely, though:

“The study will provide strong, clear and objective evidence to guide clinical and regulatory decision-making for this controversial area of tobacco control,” Carpenter said.

If I did not already know what passes for policy research in tobacco and health, I would assume this was a joke.  Setting aside the fact that there is no such thing as objective evidence (this reflects generally bad science education that cuts across fields), there are still numerous problems with this claim.  The study will provide “strong” evidence?  Really?  A single highly-artificial intervention, with a single moderate-sized population, will tell us what we need to know? 

And even if it could, why are they assuming their evidence will be strong and clear?  Are they already writing their conclusions before they start (not unheard of in tobacco research, but trickier for a study like this than it was for, say, DiFranza’s Joe Camel studies)?  Even if you believed that this study could ever give clear policy-relevant evidence for some outcome, Carpenter’s assertion depends on his having already decided what the results will be.  Presumably if no smokers switched they would consider this strong evidence of something, and we would probably all agree it was strong evidence if 200 switched.  Somewhere in between is a crossover point (which will vary by observer) where someone would say “hmm, I am not quite sure what to say about this”.  But apparently the researchers plan to keep that from happening, to make sure they have “strong, clear” evidence of something.  I suppose this is fairly realistic, since they do seem to be designing a study that avoids encouraging smokers to switch.

As for creating a “guide” for policy, perhaps you could conclude that the study result will help inform decision making.  Anything can help inform.  But no specific scientific result can guide policy decisions,  and certainly not one as obliquely informative as this one will be.

The study design might not be quite as bad as I am guessing.  There was an allusion to “or another smokeless product”.  If they actually give someone a decent sample of many varieties of commercially available smokeless products, and then give them more of whichever they ask for, this at least solves the problem of conflating “not spontaneously attracted to switching” with “does not like the taste of Camel Snus” or “Camel Snus does not deliver enough nicotine fast enough to appeal to most smokers, even though other smokeless tobacco products do”.  Studies that force people to choose from only one or just a few varieties of a highly variable consumer product are completely inappropriate for assessing overall preferences for the category. 

“We’re just trying to mimic the real-world scenario of a smoker being exposed to these products in their own environment, such as a grocery store,” Carpenter said.

It is indeed true that most American smokers (who have been sufficiently lied to about the risks that they would not consider switching to, say, Skoal products, which offer the same reduction in health risk compared to smoking) will see the beckoning of only the new Camel Snus and Marlboro Snus, and not the wide variety of other products that exist.  But why, exactly, would anyone care about the results of an experiment that mimics that situation?  Far worse, though, is the apparent failure to educate the smokers as to why they might want to switch.  Actually, this is a bit ambiguous and maybe they plan to provide some information, or maybe they will see that it is the right thing to do before they start and change their plan.  But between the previous quote and the following one, it does not appear so:

Carpenter said researchers are not trying to encourage the use of smokeless tobacco with the study.

What could possibly possess someone to do a study of whether people will adopt a healthier behavior and not try to encourage them to do it?  Here are some condoms; do whatever you want with them.  Here is a course of statins, but we are not going to tell you how much to take or why you should want to.  Here is your healthy food; you might expect that I am going to suggest you not pour salt and butter on it, but do whatever you want.

But, as Carpenter alluded, the fix is probably in, and the plan by the US government (which is funding this) is to interpret the almost certainly unimpressive rate of switching as evidence that THR does not work, full stop, no caveats.  Frankly it is surprising it has taken this long.  I remember Brad Rodu and I, the first time we ever met, eight or nine years ago, wondering why the anti-THR activists (the anti-tobacco extremist faction) were not cranking out studies that were designed to “show” that smokers who were not interested in becoming abstinent would also not switch to low-risk nicotine products.  These would undermine the efforts of those of us who genuinely cared about public health to educate people about THR.  It was clearly easy to design studies that would show no interest in switching (e.g., by not suggesting there was any reason to switch, not helping people switch, etc.).  I think the failure to pursue this tactic was because the extremists were convinced that they could undermine THR by simply lying to the public and convincing them that there were no low-risk alternatives.  They were largely successful with that for a decade, and no doubt killed many smokers who would have switched if they had not been misled.

But in the information age, claims that are so obviously false can only stand up so long.  While most people still believe the lies, enough have learned the truth that this tactic is failing and a collection of other tactics has been adopted.  And it turns out that misrepresenting the scientific evidence to claim that low-risk products are not low-risk is rather more difficult than just paying to create “scientific” “evidence” that low-risk products should be banned or discouraged because most smokers do not want to switch to them.  Yes, I know, the logic of the previous sentence is even faultier than the science could be, but that is just how they roll.

Unhealthful News 145 – Statins prevent heart attacks, except maybe in real life in Sweden?

There is a joke about economists that upon observing something working in practice they immediately set out to try to figure out if it works in theory.  No one ever seems to make a joke of it (perhaps because it is less funny), but a similar observation applies to health researchers.  In their case, they observe that something works in the real world and they wonder if it works in the highly artificial confines of a randomized trial.  What they far too seldom seem to wonder is if the opposite is true, if the semi-theoretical result that is based on trials really works out.

A new study (which does not seem to have made the news, which is probably just as well) in the Journal of Negative Results in BioMedicine looked at statin use and the rate of AMI (acute myocardial infarction – i.e., heart attack).  The study was more in the style of economics than of epidemiology (which is to say that the authors explained their methods and used a purpose-built comparison rather than just forcing everything into a logistic regression and not explaining what they did).  To summarize the basic result, they did not find a correlation between rate of statin use (across geography, time, and age range) and AMI rates.

This is rather troubling since one of the Accepted Truths of preventive medicine right now is that statins provide substantial benefit at very little cost.  But this information should not be dismissed because randomized trials got a different result.  Randomized trials do not represent the real-world circumstances in which people act.  For economic exposures (i.e., consumer choice or pure behavior – e.g., smoking cessation), trials are often almost useless.  For purely biological exposures (say, something in the water) or attempts to evaluate existing behaviors (such as the effects of an exposure that some people just happen to have, studied by forcing others to be exposed in a trial), this is not such a problem.  Most medical exposures fall somewhere in between – statins have a biological effect, but actually using them as directed is economic (a consumer behavior).

There are some obvious possible stories that make the new result misleading and the trial results exactly right after all.  If statins are used more by subpopulations that need them more (i.e., have more people at higher risk of disease), then there will be a simple confounding problem (called “confounding by indication”) wherein high risk causes the exposure, so people with the exposure do worse than average even if the exposure is beneficial.  For a population where most everyone at high and moderate risk is consistently using statins, this confounding would largely disappear.
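To make the confounding-by-indication story concrete, here is a minimal simulation sketch in Python (all numbers are hypothetical, not taken from the study): regions with higher baseline AMI risk prescribe statins more, each user’s individual risk is assumed to be cut by 30%, and yet the ecological correlation between statin use and AMI rates still comes out positive.

# Sketch of confounding by indication at the ecological level.
# All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 200
people_per_region = 10_000

# Baseline annual AMI risk varies across regions.
baseline_risk = rng.uniform(0.002, 0.010, n_regions)

# Confounding by indication: higher-risk regions use statins more.
statin_use = np.clip(baseline_risk * 80 + rng.normal(0, 0.05, n_regions), 0, 1)

# Assume statins cut an individual user's risk by 30% (hypothetical effect).
observed_risk = baseline_risk * (1 - 0.30 * statin_use)

# Ecological comparison: statin use rate versus observed AMI rate per region.
ami_rate = rng.binomial(people_per_region, observed_risk) / people_per_region
print("correlation(statin use, AMI rate):", np.corrcoef(statin_use, ami_rate)[0, 1])

The particular numbers do not matter; the point is that a population-level comparison can hide a real individual-level benefit whenever prescribing tracks underlying risk, which is also why the confounding should fade once statin use saturates among everyone at elevated risk.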

Another possible explanation is that they did not look at the data correctly.  What they did sounds reasonable, but it is impossible to know that for sure.  For one thing, the rate of fatal AMI seemed to do the “right” thing even though non-fatal AMI seemed to go a little bit the wrong way.  You will recall that I often question whether authors who found a positive result hunted around for a statistical model that generated their preferred outcome.  It should be realized that using the wrong statistical model and getting a misleading negative result is a much simpler exercise.  It is very easy to fail to find something that really exists by analyzing the data wrong.  It is not clear if the authors hunted around a bit to see if maybe their negative result was not so robust if they changed their analysis (that is, if it might be that they just missed it by looking at the data one particular way).

And I think there is some reason to worry.  The authors demonstrate some holes in their knowledge of scientific epistemology.  They wrote:

Results from an ecological study are best not being interpreted at the individual level, thus avoiding the ecological fallacy. However, the results can be used as a basis for discussion and for the generation of new alternative hypotheses.

A disturbing number of people seem to think there is something called the “ecological fallacy” that implies that you cannot draw conclusions about the effect of an exposure on people based on ecological data.  That is simply wrong.  There is one odd way in which ecological data can steer you to an incorrect causal conclusion that is not present for other types of studies, which is that it is possible that having a higher rate of exposure in a population causes a higher rate of the outcome in the population, but for an individual this is not true.  An example is that having more guns in a population causes people to be more likely to be shot by a stranger.  However, having a gun yourself does not make you more likely to be shot by a stranger (I am setting aside the fact that it makes you enormously more likely to be shot by a family member or by yourself).

But oddities like this are rare and usually fairly predictable.  Beyond that, the challenges with ecological data are just the same as with any other study design: measurement error, confounding, etc.  There is no fallacy, and usually there is no reason to think there is an “ecological paradox” like with the guns (it is not really a paradox either, but that term is a lot closer to correct than “fallacy”).  Indeed, population-wide ecological data has some advantages over other data, creating a tradeoff rather than a clear advantage.  There is no more an ecological fallacy that makes it necessarily worse than there is a “sampling fallacy” that makes other study designs necessarily worse.
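For what it is worth, the guns example can be sketched the same way (again a hypothetical Python toy, not real data): each person’s chance of being shot by a stranger depends on how many guns are around them, not on whether they own one themselves, so the across-population association and the within-population association legitimately point in different directions.

# Sketch of the "ecological paradox": the population exposure rate drives risk,
# individual exposure does not. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
pop_gun_rates = np.linspace(0.1, 0.6, 20)   # 20 hypothetical populations
pop_shooting_rates = []
owner_minus_nonowner = []                   # within-population risk difference

for gun_rate in pop_gun_rates:
    n = 50_000
    owns_gun = rng.random(n) < gun_rate
    # Risk of being shot by a stranger rises with how many guns are around,
    # regardless of whether this particular person owns one.
    shot = rng.random(n) < (0.0005 + 0.002 * gun_rate)
    pop_shooting_rates.append(shot.mean())
    owner_minus_nonowner.append(shot[owns_gun].mean() - shot[~owns_gun].mean())

print("ecological correlation:", np.corrcoef(pop_gun_rates, pop_shooting_rates)[0, 1])
print("average within-population owner vs non-owner difference:",
      np.mean(owner_minus_nonowner))

The first number comes out strongly positive while the second hovers around zero, which is exactly the kind of divergence being worried about here; it reflects an unusual causal structure, not a reason to distrust ecological data in general.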

As for generating new alternative hypotheses, allow me:  Hypothesis 1 = statins do not work so well when used by regular people in real life as compared to the artificial situation in trials.  Hypotheses 2A (B,C…) = statins do not work as well in subpopulation A (or B or C…) as they do in trial populations.  Hypothesis Variant 3 = this is true in Sweden but not elsewhere.  Hypothesis Variant 4 = the observed lack of correlation will change when there is greater use of statins.  There, done.  I generated the hypotheses.  Shall we get on with figuring out what is really true?

Probably not.  The randomized trials have spoken, and any contrary evidence will be dismissed by those who do not understand it (which includes most of the people who make health policy).  There is probably no harm done in ignoring the other evidence in this case, because even if statins are a bit less impressive than currently thought, they still should be used a lot more.  Still, it is not so reassuring that the reaction to this from those who tell millions of people how to live healthier will likely be to ignore it because it must be wrong, rather than to act like scientists and make the effort to assure themselves by figuring out why this result occurred.