How many premature deaths have been averted by e-cigarettes already

Note: This is a working paper, so comments and suggestions for revisions are welcome. It is a bit rougher than I normally release, but I believe it produces a valid estimate and is quite adequate. What appears here is the analysis itself; a summary and further discussion appear at the antiTHRlies blog.

Mike Siegel still doesn’t understand what is wrong with his study “plan”

I would have left this alone if he had. I heartily recommended he do so. Alas…

Here, and in the above link, is the story of how Mike Siegel:

  • proposed a ridiculously expensive ($4.5 million) RCT comparing e-cigarettes to NRT, which would have accomplished nothing particularly useful for science, advocacy, or regulation;
  • gave only the vaguest sketch of what he was going to do (thus the scare quotes in the title);
  • solicited money from the community even though the price tag was grossly disproportionate to crowdfunding, while counting on the e-cigarette industry for the bulk of the funding (notably excluding the companies in the industry that could possibly afford this — not that they would support it);
  • apparently did not communicate with experienced researchers (who could have told him this was a bad idea all around);
  • or communicate with community advocates (who could have told him it was inappropriate to solicit money from the community and that they (we) would oppose such efforts);
  • or communicate with experts on the FDA (who would have told him that his claims that this is what is needed for FDA purposes were badly wrong);
  • or communicate effectively with the target companies about the prospects for funding (from which he would have learned that there was no chance of getting funding on this scale for such a project);
  • or talk to someone familiar with research ethics (who would have told him that his solicitation of open-ended funding from the community based on bold promises and a vague plan gives, at the very least, the appearance of violating research ethics);
  • sprang the plan (such as it was) on the research community and the consumer/advocacy community as a fait accompli only when he started soliciting for it;
  • blamed CASAA and others who contacted him to express concern about many of the above points for his funding hopes falling flat;
  • falsely claimed that CASAA and those others were “pressuring” him to change his plans in order to produce results that were more “favorable”.

Oh, and as an added bonus, his next post after (falsely) announcing he was abandoning the project, and blaming it on CASAA, was a gratuitous anti-smokeless-tobacco broadside. For those who may not know, CASAA’s mission is to support tobacco harm reduction (THR), and despite the exploding interest in e-cigarettes, the leading THR product in the world remains smokeless tobacco. Siegel, by contrast, does not support THR, either as a moral philosophy or in practice. He is just as inclined to publish anti-THR lies about smokeless tobacco as anyone else in the tobacco control industry. It is just that somehow he got it into his head that e-cigarettes are a worthy cure for smoking while, for unstated reasons, smokeless tobacco is evil (despite them being basically the same for practical purposes, and the evidence about the low risks of smokeless tobacco being far more conclusive). So he decided to petulantly lash out with a random attack on smokeless tobacco and THR when he blamed CASAA for his own failures. Nice.

Anyway, he is still soliciting industry funding for the same bad project, and is still making absurd claims and baseless accusations, in his latest post and elsewhere (we just responded to a reporter to whom he made such claims). But before getting to that post, a few other points from the interim period (which I never would have brought up proactively — see the previous post about hoping he would quit digging himself deeper into this hole).

First, a colleague researched the cost of the Bullen RCT that did basically the same thing Siegel is proposing. (As you might recall, that study showed that e-cigarettes perform about the same as NRT in the clinical assignment setting, which is also the inevitable result from Siegel’s proposed study. I explained why this is so, and why it reflects on the poor choice of the study method, not on e-cigarettes’ (or NRT’s) true value, here and here. I also pointed out that his results would inevitably be construed as “e-cigarettes fail!!!”) That study cost less than $1 million. That is still an enormous amount — enough to fund many non-RCT studies that would be far more informative — but still a bit more reasonable. One wonders what Siegel is planning to spend all that money on. Since he does not offer any disclosure of the budget at all, it is impossible to know. I do know that I operated my entire university-based THR research shop, and its many different research projects, for five years on about one-third of what he is seeking.

Second, as I already alluded to, in his post on the topic when he announced he was shutting down his crowdfunding (a wise move), he also announced that he was cancelling the project. That same text replaced the homepage for the project’s website (where he had previously been soliciting donations). But this was clearly not true. He never stopped pursuing industry funding. I am aware of contacts he made seeking funding immediately after declaring the project was done, and he has appended a statement to his blog posts saying he is seeking that funding. Plus, of course, he is still frantically trying to defend his terrible plan.

Third is a speculative observation that I have to admit I did not catch (perhaps I am too long away from universities), but university-based colleagues did. Those following this saga might recall that Siegel first announced his project and the crowdfunding effort on a Thursday or thereabouts (I am not looking up the exact dates for this, but the days are useful to tell the story and they are about right). CASAA and others had posed serious concerns to him about it by Monday. On Wednesday he made a major pitch for funding at an e-cigarette industry conference. Then a few days later, on a Saturday morning, he abruptly announced he was shutting down the project. Giving it all up in about a week after getting some pushback is very odd behavior for someone who claims to have been planning a project for a year (albeit apparently not planning any of the details or thinking it through).

Now this might be explained by him suddenly understanding that the money simply does not exist outside the federal government and perhaps the major tobacco companies (Siegel ruled out seeking funding from any of those sources). But since he continued to protest that there must be enough money out there (demonstrating a failure to understand the difference between gross turnover and net equity), and since it is now clear that he never stopped seeking it, that cannot be the explanation. Besides, that would not motivate a precipitous shutdown on a weekend. What would? Perhaps it was just a fit of pique. But my colleagues suggested that it looked like someone at his university intervened and ordered him to stop what he was doing. This hypothesis is bolstered by the observation that he kept much of the project website intact, but removed all interlinks with Boston University. It is certainly plausible: I pointed out that seeking money from individuals based on false promises and vague (and hopeless) plans has serious ethical problems. Of course, this is speculation based on limited evidence, and like any such, it might be wrong. He continues to be so opaque about the project that I suspect we will never know for sure.

So what does his latest post contain, other than a declaration that he is still seeking funding for his “plan”? The thesis statement is:

Some of the researchers and advocates who opposed our crowdfunding campaign to raise money for a randomized behavioral study of the effects of electronic cigarettes on smoking behavior argued that randomized clinical trials (RCTs) are simply not appropriate to study e-cigarettes because they cannot simulate the real-life situation, where smokers have many choices of different types of products, can engage with social networks, and can experiment over time, change products, advance from one type of product to another, etc.

Instead, these advocates argued that surveys are the best way to study the potential benefits of e-cigarettes. Surveys measure the real-life situation of how e-cigarettes and vapor products are actually used.

According to the argument, surveys produce valid results, while RCTs produce invalid results.

Unfortunately, it’s just not that simple.

He did get one thing right: It is not that simple. That is, no one, so far as I know, said anything remotely so simplistic to him. It is possible that he is intentionally misrepresenting what people said so he can argue with a strawman. He has done that plenty during the course of this saga, after all. But maybe he just does not get it. It is possible that not only does he not understand the points that were made, but that he does not even understand enough to recognize he was missing something. Thus he did not even realize he needed to try harder to understand what was being said. That is harsh to say, of course, but since he seems intent on burning massive resources while making no effort to learn from people who understand many relevant points better than he does (to say nothing of also lashing out at them), harshness seems appropriate.

The primary objections to the crowdfunding were (a) even a few percent of the enormous budget collected that way would cripple all other crowdfunded projects in the space, (b) he was never going to get enough total funding, so all of the donations were going to be burned up on administrative costs without delivering anything, and (c) asking the public for effectively unlimited funds based on false promises is typically referred to as “a scam” rather than “crowdfunding”. Once we get beyond that, we get to the scientific objections to the plan. I wrote about those extensively; those who read that will recall that what he claims to be the scientific objection was actually about the third most important in the list of problems.

He ignored those that are higher on the list, either because he has no response or because he does not understand them. Not that he has a real solution for the one problem he notes — it is still a serious limitation in itself.

And, of course, no one suggested that RCTs “produce invalid results.” That is his misinterpretation. Results are never invalid; they are what they are. Reported results might be invalid if they do not reflect what happened in the study, but that has nothing to do with the study design. The question at hand is whether the results are useful. That is always the proper measure of applied science. When someone wants to burn $4.5 million on applied science, it is not unreasonable to ask that he provide a clear assessment of what good the results will do.

RCTs are a good option for measuring a medical intervention to treat a disease. And that may be what Siegel thinks e-cigarettes are, a medical treatment — the above language about them being a cure for smoking was not accidental. So if you want to know what would happen if clinicians give out particular products and particular instructions to smokers, the right RCT could give you a rough measure of that (only rough because the trial setting would still be importantly different from the real clinical environment). As I explained previously, it would probably tell you they work slightly better than NRT in that role. That would be a valid measure of what is being measured (as is always the case), but not something that is particularly useful to know. And it would mostly be interpreted invalidly, we can be sure.

The above argument is convenient for advocates who want to suppress “negative” or “unfavorable” findings by discouraging RCTs – which they believe will “underestimate” the effectiveness of e-cigarettes for smoking cessation and encouraging survey studies – which they believe will show the effects of vapor products in all their possible glory.

Of course, no one actually made the above argument. Some of us made a point that he has misconstrued to be this argument, and did not use those terms in quotation marks. The fake quotations and trumped-up accusations are standard practice for people in tobacco control, where Siegel keeps one foot. Perhaps he became so used to such behavior that it does not even occur to him that misrepresenting your critics’ points and defamation are not legitimate arguments.

However, the argument carries with it a lack of scientific validity.

So does this mean that he is departing from the tobacco control script at least somewhat? He is still leading with the misrepresentation, but is he then departing by adding a scientific argument? Well, no.

I am not going to quote every word, but you can read the original and see that he does not even make a prima facie case that his proposal would produce useful results. He seems to have given up on trying to defend his promise that this is exactly the type of study the FDA wants to see, after it was pointed out that this was clearly false. (Of course he did not admit that he was wrong, so one has to wonder if he is still making that claim as part of his sales pitch.) But he has not replaced that with any affirmative claim of why the results would be useful. Instead, he just recites vague, simplistic (and frequently incorrect) generalizations about RCTs.

His next long paragraph wanders around that point like some kind of buzzword madlib. It does not say anything that actually responds to the criticisms of his proposal or to the analyses of what is wrong with RCTs in this context. Then he returns to creating strawmen:

This is why I find it so troubling that some major voices in the e-cigarette community are arguing that RCTs should not be conducted and only survey studies are of value.

He is trying to create a fake antagonist who is making such extreme arguments that simplistic assertions that “all studies have some value” serve as a response. He does not acknowledge, let alone respond to, the specific criticisms of his project, nor even the specific criticisms of RCTs in this particular context.

He goes on to discuss general characteristics of RCTs. He gets much of the detail wrong, but the general points are sufficient to show why these vague generalities simply do not respond to the questions at hand:

The randomized study provides a number of important benefits that can never be realized in a survey study. Most importantly, the RCT can equalize between study groups the known and unknown confounding variables that may lead to invalid study results.

That is pretty much the advantage of randomized trials, but it comes at enormous cost. (Some other characteristics can be advantages in some contexts, but are disadvantages in the present context.) He got this point vaguely right, though the emphasized “never” is false and there are at least four technical errors in the second sentence[*].

[*For those who are interested in the technical details: It is a common mistake to equate confounding with “confounding variables”. But confounding is a property of the study population with regard to the exposure and outcome of interest — specifically, differences in outcomes between the exposed and unexposed groups that are not caused by the exposure — not of other variables. The variables that are called “confounders” are actually what are used to try to eliminate the effects of confounding (and thus should really be thought of as deconfounders), not necessarily the causes of it. It is not that randomization (whether in an RCT or not) “can equalize” these variables (by which he actually was trying to say, “can eliminate confounding”) — luck can do that too, but neither necessarily does it. What randomization does do is replace systematic confounding with random confounding, which then has nice statistical properties we can describe. Also, again, study results are never invalid; it is only interpretations of them that can be invalid.]
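
To make that footnote concrete, here is a minimal simulation sketch (entirely my own invention, with made-up probabilities, not anything from Siegel's proposal) of what happens when the exposure itself does nothing: self-selected exposure produces a persistent spurious effect, while randomized exposure leaves only random noise that averages out over repetitions.

import random

def one_trial(n, randomize):
    """Observed exposed-minus-unexposed difference in quit rates when the
    exposure itself has no effect, so any nonzero difference is confounding."""
    subjects = [{"motivated": random.random() < 0.5} for _ in range(n)]
    for s in subjects:
        if randomize:
            s["exposed"] = random.random() < 0.5
        else:
            # Systematic confounding: motivated smokers seek out the exposure.
            s["exposed"] = random.random() < (0.8 if s["motivated"] else 0.2)
        # The outcome depends only on motivation, never on the exposure.
        s["quit"] = random.random() < (0.4 if s["motivated"] else 0.1)
    def rate(group):
        return sum(s["quit"] for s in group) / max(len(group), 1)
    exposed = [s for s in subjects if s["exposed"]]
    unexposed = [s for s in subjects if not s["exposed"]]
    return rate(exposed) - rate(unexposed)

random.seed(1)
reps = 2000
self_selected = [one_trial(200, randomize=False) for _ in range(reps)]
randomized = [one_trial(200, randomize=True) for _ in range(reps)]
print("self-selected exposure: average spurious effect %.3f" % (sum(self_selected) / reps))
print("randomized exposure:    average spurious effect %.3f" % (sum(randomized) / reps))

With these invented numbers, the first figure stays near +0.18 no matter how many trials are run; the second hovers near zero, even though any single randomized trial of 200 subjects can still be unbalanced by chance. That is the sense in which randomization replaces systematic confounding with random confounding rather than "equalizing" anything.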

The problem — as Siegel would know if he actually considered what people were writing rather than looking for excuses to ignore it — is that this makes an RCT a pretty good design for answering the question “what would happen if we ordered smokers to try using a particular regimen of e-cigarettes?” But this is not a very interesting question, certainly not $4.5-million-interesting.

His next several paragraphs present examples of how observational studies are willfully misinterpreted by ANTZ. He describes this in terms of confounding, though in many cases that is not actually the problem (misconstruing the population that was being studied is actually the most common problem in the examples he alludes to). But even if we restrict ourselves to cases where willful misinterpretations took advantage of confounding, just why does he think that one mediocre RCT, with tepid results, is going to change that practice? If the goal is to address willful misinterpretations of study results, perhaps it would be better to become expert in epidemiology and point out the flaws in the claims in a systematic way.

The beauty of a randomized study is that it can [eliminate systematic confounding.] There is no way for a survey study to accomplish this.  Thus, to simply throw out the RCT is quite unscientific, in my opinion. It throws out one of the most valid pieces of evidence that is necessary to make an informed judgment about the effect of these products: the differences in effectiveness of the products under conditions in which confounding cannot throw off the results.

(I decided to be charitable and replace his attempt to explain confounding with the phrase in brackets.)

Ok, so he has repeated the one advantage of an RCT ad nauseam, and has overstated it (decent observational studies can do a lot to deal with confounding, flatly contrary to his assertions; how does he think we estimate the effects of smoking?). But he has yet to explain why the severe disadvantages imposed by the design, most notably that it is only suited to asking an uninteresting question, are justified by this one advantage.

This leads to my thought for the day: It never struck me before how perfectly the medical fetishization of RCTs (more here) mimics the medicalized “public health” bias (more here) toward demanding interventions. In both cases they make the observation that there is one thing in the world that is worse than it might be. In both cases, they then leap to the conclusion that any intervention that ostensibly improves that one factor must be a good idea. In both cases, they ignore all the other impacts, costs, and implications of the proposed intervention. Very interesting parallel.

A second major advantage of a clinical trial is that it can examine the potential effectiveness of interventions in which the use of a product is promoted for use among smokers who are interested in quitting. A survey cannot do this, because it can only examine the use of products under current conditions. It provides no information on what would happen if the product was actively promoted to a group of smokers, as it is in a clinical trial.

Here he basically concedes that the clinical trial is better only for asking the particular question, "what would happen, on average, if smokers were instructed to try a particular regimen of e-cigarettes" ("promoted" is the wrong word — in a clinical trial setting the intervention typically looks more like ordering the subjects to take an action; at the very least, the inclination of subjects to try to comply in a trial is much greater than in a normal clinical setting). No one other than Siegel seems interested in that question. He apparently is so obsessed by it (coming from that medical mindset where smoking is a disease and it is his job to impose a cure) that it does not even occur to him that it is a very weird question to ask and not much useful can come of the answer. There is no serious possibility that such non-targeted interventions will ever become accepted practice in the real world. Moreover, to the extent that this is tried on an ad hoc basis, the question of what (small) percentage of the time it works is not very interesting. It either works or not in each case, and if it does not work someone can move on to another strategy. (This is a deeper point that I have covered before — see my observation that NRTs are not a bad thing even though they "fail 94% of the time!!!")

So in reality this is a disadvantage of RCTs: They can only offer an answer to a fairly uninteresting question.

A third major advantage of a clinical trial is that conditions are controlled as carefully as possible, minimizing potential biases. Both sampling and measurement bias are greatly reduced, if not eliminated. In contrast, survey studies are generally subject to significant sampling and measurement bias.

Again, he partially understands this, but the generalities do not offer a defense for his plan. Let’s break it out.

An observational study (note, by the way, that when he uses the word “survey”, he seems to be referring to observational studies in general) can have a problem with sampling, such that people with particular characteristics are more likely to be chosen from one exposure group than from the other. To take the most obvious example, if we do a convenience sample survey of e-cigarette enthusiasts (as I, Siegel, and others have done) we cannot estimate what portion of a well-defined population quit smoking thanks to e-cigarettes because people with characteristic {tried e-cigarettes + quit smoking with them} are far more likely to be sampled than those with {tried e-cigarettes + did not quit smoking}. What this means is that the studied population has confounding that did not exist in the target population (the totality of everyone we would have wanted to get responses from).
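
To put a rough number on how badly that can distort an estimate, here is a toy simulation of my own (the quit rate and response probabilities are invented purely for illustration, not taken from any real survey):

import random

random.seed(2)
N = 100_000
TRUE_QUIT_RATE = 0.20        # hypothetical share of triers who quit
P_RESPOND_IF_QUIT = 0.30     # quitters are eager to tell their story
P_RESPOND_IF_NOT = 0.02      # those who tried and failed rarely respond

respondents = quitters_among_respondents = 0
for _ in range(N):
    quit = random.random() < TRUE_QUIT_RATE
    responds = random.random() < (P_RESPOND_IF_QUIT if quit else P_RESPOND_IF_NOT)
    if responds:
        respondents += 1
        quitters_among_respondents += quit

print("true quit rate among all triers:    %.2f" % TRUE_QUIT_RATE)
print("quit rate among survey respondents: %.2f" % (quitters_among_respondents / respondents))

With those made-up inputs the respondents' quit rate comes out near 0.79 against a true 0.20 — which is exactly why no one proposes using such a sample to estimate effectiveness for a well-defined population.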

So the perfect RCT mostly eliminates this confounding too. Then again, the perfect observational study also mostly eliminates it. A systematic survey dramatically reduces this problem compared to a convenience sample. An RCT does not completely eliminate it because people with particular characteristics (e.g., liking e-cigarettes; hating e-cigarettes) are far more likely to not “comply” with the RCT (e.g., dropping out of the study; not sticking with their assigned behavior), which introduces what is a very similar bias. No clear winner here.

As for measurement error, there are advantages in some cases, but also a major disadvantage in this case. In an RCT you can know exactly what was done to someone because you are doing it to them, which is an advantage for a drug trial but not so much for a social intervention (both because we are not really interested in the effects of one rigid protocol, no matter how complicated, and because what is “done to” each subject will inevitably vary across subjects even though it is supposed to be the same). For outcome measures, RCT fetishizers like to think RCTs are inherently better, but this is not so. Yes, it is more likely an RCT protocol will include biomarker tests to confirm someone’s testimony about what products they are using. But this is not inherent to RCTs. It could be done for an observational study if you were willing to spend the money, and it does not have to be done in an RCT — in other words, it is an orthogonal consideration. The big problem is that RCTs are always designed based on a near-future stopping point (this tends to be demanded by IRBs). So they measure, say, whether someone is abstinent from smoking at 6 months, which is a poor measurement of the endpoint of interest, whether they genuinely quit (most people who are abstinent from smoking 6 months after a clinical intervention resume smoking). A retrospective observational study can better measure the real outcome of interest.
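
A back-of-the-envelope sketch of that last point, with numbers I have invented purely for illustration (they are not results from any trial):

abstinent_at_6_months = 0.18    # hypothetical headline result a trial might report
relapse_after_6_months = 0.50   # hypothetical share of those abstainers who later resume smoking

long_run_quit_rate = abstinent_at_6_months * (1 - relapse_after_6_months)
print("reported 6-month abstinence: %.0f%%" % (100 * abstinent_at_6_months))
print("genuine long-run quit rate:  %.0f%%" % (100 * long_run_quit_rate))

Under these assumptions the headline figure is double the outcome anyone actually cares about.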

So no points scored for RCTs (let alone his particular RCT) here.

There are some specific problems with survey studies that could lead to an overestimation of the benefits of electronic cigarettes. One major problem is that a survey study of the use of advanced vapor products would result in a severe bias towards finding a high level of effectiveness of these products. The reason? By the time smokers advance to use products like open-ended systems, they have already experienced success, fulfillment, and enjoyment with vaping. In other words, limiting the sample to users of advanced vapor products filters out the majority of e-cigarette users, who do not experience great success and therefore don’t go on to the more advanced products.

Which is why no knowledgeable researcher has ever proposed trying to estimate the effectiveness of e-cigarettes by just studying users of advanced vapor products. It makes a really nice strawman, though.

He then goes around in circles again about how different methods all have their advantages, still never once explaining what it is that his RCT would accomplish, and then returns to his misleading accusations. They are not quite as defamatory as his previous posts, but they still misrepresent what his critics said.

And this is why when some responded to our proposal for a randomized behavioral study by arguing that such an approach was invalid and that we should do a survey instead, I viewed those responses as being unscientific and unsound.

I would be surprised if anyone referred to a study method as “invalid”. If they/we/I did, it was sloppy for the reasons noted above. What numerous people told him is that the RCT would not measure anything that is interesting and would produce results that are easy to interpret invalidly (as is routinely done with the similar results from NRT RCTs), and do so at ridiculous cost. He has yet to respond to any of those points. It is not clear what he even means by “unscientific”, and I doubt he actually has anything concrete in mind when he repeatedly writes it.

And then he drifts back to that tobacco control urge to attack people and ignore analysis:

Instead, I believe what is truly behind these draconian opinions (draconian because they would throw out an entire line of potential evidence) is a bias towards electronic cigarettes. I’m not arguing that it is a conscious bias. It may be subconscious. But I don’t believe that any objective scientist would argue for completely throwing out a randomized clinical study design and relying solely on survey evidence to draw conclusions about the effectiveness of a product such as electronic cigarettes.

Why actually try to understand substantive criticisms and respond to them (math is hard!) when there is always an ad hominem to be found? And another good strawman (no one suggested they would “throw out” any evidence that existed — they just said that pursuing this particular project was a bad idea). Also, he needs to look up “draconian”.

But, wait a minute here. This long-time cheerleader for e-cigarettes, who was (and apparently still is) trying to sell this project to funders based on the claim that it will provide important evidence about how great e-cigarettes are, is oh-so-bothered that someone might prefer to see a study that does a better job of showing the value of e-cigarettes. Really? It seems like there is a bit of trying to have it both ways here. He is selling this project based on it providing results that will benefit the e-cigarette industry, but then he climbs up into his tower and complains when he believes (or pretends to believe) that the reason others would prefer to see higher-quality research is merely because it might provide benefits for e-cigarette advocacy. Is his doublethink calculated posturing? I suppose it may be subconscious.

Finally, while it is true that the typical RCT is limited because it does not simulate the real-life situation where smokers can choose between different products, change products over time, and engage in social networks to support their vaping, the study we had proposed would have allowed for all of these things….

So the only issues that Siegel actually responds to are the tertiary concerns about the regimentation of the treatment, but his response is fundamentally not true. For one thing, what he has proposed contains no details at all, and thus not these details. Whenever he was faced with a concern about the treatment being too regimented he always responded “oh, we will deal with that.” But how? Despite a supposed year of planning, there was not even a sketch of a protocol. Yet he claims that when there eventually is one, it will somehow solve all these limitations. Will it? Maybe, maybe not. Will an IRB approve this ad hoc protocol that supposedly addresses these problems? Maybe, maybe not. He should have figured all that out before making promises and asking for money (and his failure to do so might have led to his fundraising being shut down). But the other reason that this is not true is that no matter how many bells-and-whistles the treatment is complicated with, it is still just one particular treatment in an artificial setting, applied to an odd population.

He concludes with:

Yes, there are limitations to RCTs, but it makes no sense to throw out the baby with the bath water.

I am not sure this is remotely the right metaphor for what he is trying to argue. But to run with it: What baby? He has yet to produce a single affirmative argument that says “this study will give a measure of X and that is useful because….” Also, what is the bathwater that someone is throwing out?

Perhaps what he is saying is “don’t throw out the advantage of reducing systematic confounding just because my proposal to achieve this advantage comes at the expense of studying an uninteresting question, using methodology that does not resemble the real world, in a way that produces results that will inevitably be misinterpreted, and at an absurd cost.” Not a very compelling argument.

I have to wonder about the purpose of Siegel’s latest missive. Though many people expressed concern about his plans, the methodological criticisms of his proposed study have mostly come from me. Surely he must understand that I know far more about epidemiologic study methodology than he does. (That is not bravado, it is a simple fact; I have made a serious study of the topic and built my career on it.) This observation is not an appeal to authority — anyone familiar with my work knows that I adhere to the “bring the reader along with you” style, trying hard to never suggest that someone should believe a conclusion just because I asserted it. Rather, I note this to point out that Siegel must realize that I am familiar with every single claim he has made (you will notice that for his few claims that were correct, I presented a better version of them than he did; for the naive ones, I could easily explain why they were wrong because I have responded to similar errors hundreds of times before). He cannot possibly believe that asserting points that I already thoroughly understood when I offered my criticisms could offer a relevant response to my actual concerns. Thus, I cannot help but think that he is tactically engaging in misdirection, trying to use a lot of words that might fool nonexperts while just pretending the expert criticism does not exist, hoping that with that he can trick people into supporting his view. If so, I am sure his mentor would be proud of him.

Michael Siegel puts himself in a hole, and then keeps digging

Those of you who do not like “drama”, stop reading now.

So in the previous post I revealed how Mike Siegel put out an announcement about an ill-advised e-cigarette study with a vague protocol and an enormous price tag, and then about a week later figured out he could not possibly fund it and spiked the whole thing. In between those two events, he started asking the e-cigarette consumer community for donations, and CASAA recommended against providing them. I wrote a few analyses (somewhat coincidentally, since the first was already in my queue) about why the study design — a randomized controlled trial (RCT) — was inappropriate for measuring the role of e-cigarettes. And I wrote a private letter to Siegel pointing out why his crowdfunding was borderline unethical (at best) and created a serious risk of scandal for him. (My communication was partially to try to end the harm the crowdfunding might do to the community, of course, but I was also trying to do him a favor.)

He ended up cancelling the study, and this was clearly entirely due to the fact that he belatedly realized his funding goal was hopeless. But instead of trying to quietly move on from this fiasco, he put out a blog post (and the same statement on the project website) in which he tried to blame his failure on others. In doing that, he wrote claims about CASAA, and perhaps me, that were arguably defamatory. (He did not have the decency to mention the names of those he was making accusations about, but most everyone familiar with the issue interpreted his claims as being about CASAA. Study personnel also sent at least one tweet specifically to CASAA supporters requesting they contact them directly.) So I responded with the previous post. The details are all there.

[Note, in case it is not obvious: Like the previous post, this is just me talking, not CASAA.]

Not content to dig himself into a hole with his original post, instead of taking the opportunity to quietly walk away, Siegel decided to redouble his digging with accusations and misleading claims in some comments on that post. I would never have responded to those, nor perhaps even heard about them, were it not for this claim he made:

We did receive pressure and threats to alter the study design in order to try to create more positive results for e-cigarettes. For us, that was just unacceptable. At the end of the day, we could just not work under such conditions, which are not conducive to the practice of objective science.

This is simply unfathomable and, I think, represents just how incredibly out of touch he is with scientific discourse. He grew up in tobacco control and has since been conducting a monologue from a public health school, and so perhaps just never encountered critical scientific analysis.

Okay, perhaps he really did receive threats or pressure. But what was threatened and by whom? If they were threats of criminal activity, he should report them to the police. It seems like these would be the only threats that could actually cause him to alter his plans.

What other threats or pressure could there be with any teeth whatsoever? Did someone threaten to complain to his university about the study plan itself? If so, presumably he knew that the complaint would be groundless and ignored. Did someone threaten to complain to the university about the ethics of the crowdfunding? If so it was not I, and, as far as I know, I was the only one who was particularly concerned about the ethics perhaps violating university rules of conduct. Besides, that would be about the community crowdfunding itself, not “objective science”, and the response to that (valid) threat, if it were actually made, would be to just end the community crowdfunding, which he acknowledged was unimportant (see below).

Was the threat to point out the flaws in the study once it was underway or after the results were reported (we are now in the fictitious scenario where he actually managed to fund it)? If so, that’s science, baby. You should always assume that is going to happen. (And note that I never said such a thing, though he could certainly be confident I would subject it to critical analysis if it was carried out, so that is hardly something that could be called a threat even if I said it.) Was the threat to keep belaboring this fiasco to hurt his reputation? If so, it was apparently he who made the threat to himself, perhaps writing it on his own bedroom mirror like some scene from a horror movie.

No, I am pretty sure he was referring to the multiple knowledgeable and well-meaning people (not just me and CASAA) who urged him to do a different study that would be both more useful and more than an order of magnitude cheaper. Perhaps from his ivory tower the statement “we could get behind that and try to gather support for it” seems like pressure, though most of us would call it a friendly promise of collaboration. I cannot conceive of how it could be taken as a threat. A threat is a statement of “if you do X then I am going to do Y” designed to make someone not do X. No one tried to stop him from doing the study (we knew he could never fund it, after all) — we just tried to convince him it was a bad idea.

CASAA came out against his plans; it did not threaten to come out against his plans. (CASAA expressed its concerns to him before doing that, without making any threats, and asked him some questions. He did not respond to the substance of our concerns.) I pointed out the ethical dubiousness of his fundraising, not only not threatening anything, but making clear that I preferred to keep that conversation private. (I went public with it in my previous post only after he made some allegations that were pretty clearly about that communication.)

If that is what he is calling threats, it is defamatory. If it is not, he should really produce some examples of what he is referring to. Just the anonymized text would be fine — it would not confirm that they were real or credible, but would at least tell us what he is making allegations about.

Finally, I have to address the naivety of that last bit. There is no such thing as objective science. All science is designed by people and interpreted by people. It is only as good as the skill and integrity of those people. Someone pointing out that a particular study design is misguided is no less “objective” than someone thinking it is a good idea. Debate about methods is not any less “objective” than anything else in science. A better, cheaper study is no less “objective” than an expensive white elephant. His claim is not just self-serving, but shows a deep lack of understanding about the scientific process.

So since I took the time to comment on Siegel’s latest defamation, I will go ahead and look at the rest of what he wrote.

First, we would NEVER ask the vaping community for $4.5 million.

As I made clear previously, this misrepresents the concerns expressed about his crowdfunding. Obviously everyone knew this. Which is why we pointed out that if he raised even a few percent of that much from the community, it would crowd out all other community-supported research and activism.

This campaign was directed ALMOST ENTIRELY at electronic cigarette companies, including the largest independent (non-tobacco) companies. With something close to $10 billion in annual sales, it is not unreasonable to expect that one could raise $4.5 million.

As I pointed out previously, this is just naive. Gross turnover tells us little about what tiny fraction of that is either net equity (this, not gross revenue, is what someone might fund research to protect) or available cash (which could provide the funds). About half of total sales come from the majors, who he refused to take funding from. The rest comes from privately-held small (and a few medium) businesses, so he was basically asking the owners for personal donations. If the industry really thought his study would do a lot of good (difficult to believe), the sum probably would have been worth it to them collectively. But then the free-rider problem would kick in (better to let someone else pay for the study that benefits everyone), and there is not enough coordination to overcome it. It was obvious from the start that he had no prayer. Any number of people could have told him this. It only took him a week to figure it out after pitching his requests to the industry.

In fact, we did not send out any appeals directly to the vaping community. Most of our appeals for the funding are going to the very largest companies. However, we didn’t want to exclude the vaping community completely and thought that allowing them to contribute even a small amount toward the overall goal would give them a sense of participation and ownership.

This is just not true. The project webpage had a huge “Donate” button on it. The project twitter feed sent out instructions about how to donate. These appeals were obviously not directed at corporate funders. Consumers certainly interpreted the appeals that way, with many posting on social media that Siegel was asking for contributions in order to be able to do the study. These methods for “allowing” people — people who mostly could not understand the inaccuracy of his promises about this project — to give some of their own money to him were apparently in place from the start despite his claims that they were in response to popular demand.

Second, regardless of anyone’s views, there is no question that the FDA is going to require some sort of clinical or behavioral trial, with randomization of subjects, before it endorses electronic cigarettes as a bona fide smoking cessation or harm reduction tool. In other words, the FDA will never view e-cigarettes as “appropriate for the public’s health” – the key requirement for new product applications – in the absence of clinical trial-type evidence. Surveys are just not going to cut it. That’s just reality. Anyone who believes that the FDA is going to be convinced by survey data is simply not accepting reality.

This just demonstrates how little he understands the role of e-cigarettes, the current FDA battle, or how FDA works. There is relatively little interest in having FDA endorse an e-cigarette as a smoking cessation device (i.e., have it an approved pharmaceutical). If there is such interest, it comes from one of four major companies, not the community. The concern in the community is FDA threatening a de facto ban of 99.99% of all currently-available products — which would come from the Center for Tobacco Products, not the drug regulation arm of FDA. An RCT would not address this threat at all. Indeed, if a product were approved as a smoking cessation device, that would actually increase the threat because FDA might use that approval as an excuse for banning all other products. However, a pharmaceutical application would have to be about a product. Generic trials of some other product do not substitute. The company that wanted pharmaceutical approval would have to do its own tightly-controlled RCT with its own product.

As for showing that the products are good for the public health as a whole, he is completely backwards. An RCT of a cessation experiment in a clinic could tell FDA nothing about the public impact of e-cigarettes, which they are extremely interested in, while a good survey could. Again, Siegel could have learned this if he had only asked for advice and critical review.

Third, a randomized study, despite some limitations, is the only design that can address the problem of inherent differences between smokers who choose to use different products. It is clearly not the only study design that needs to be used, but it is one of the designs that is needed. No one study will provide the answer to this research question. Multiple studies are needed that use multiple designs.

Um, no. An RCT is almost perfectly designed to not tell us about differences among people. The whole point of that approach is to pretend that everyone is interchangeable and see what happens when you act upon them. Yes, multiple studies are needed to answer all questions (duh!). But his proposed study did not look like it would answer any useful questions, yet it would cost more than all those other useful studies combined. (Copy and paste remark about “ask for advice” here.)

Finally, the BSCiTS study was going to avoid many of the limitations of the RCT approach by not assigning a particular product to each smoker. Instead, our plan was to give subjects a choice of multiple products, including not only cig-a-likes [sic] but also egos and a few more advanced products. We were planning to include different choices of nicotine level and flavorings as well. I doubt that any other completed or proposed RCT on electronic cigarettes would have used such an approach. Our goal would be to simulate the real-life situation. Again, I doubt any other RCT study will do that.

I noted in the previous post that he seemed to be making this up as he went along. Each new communication he received about the rigidity of RCTs resulted in more claims of what he was going to do. (At one point he claimed that there was no chance an IRB would allow him to use open systems, so he was not proposing it. Here he seems to claim it was the plan to do so. Or perhaps he is claiming he can test “advanced products” but without them being open, which makes little sense. Maybe he wanted to use big batteries with pre-filled cartomizers. But that would severely limit the liquid options, unless the researchers were planning on filling tens of thousands of cartomizers themselves. Which would mean they were refillable, and thus open. I am not sure he understood any of this.) Notice also that he claimed above that this study would support a smoking cessation claim — which must be about a single product, not the category — but here he talks about trying to test the whole category at once. He is just making it up.

And he still misses the most important scientific points I tried to explain. No matter what regimen is used in an RCT, it is still just one regimen — particular choices (no matter how many), particular instructions, etc. — and it is still a clinical intervention that does not resemble the real world of e-cigarette adoption. If the fundamental problem with an RCT were the narrowness of the product options in the treatment arm, then maybe it would be possible to do something about it. But it is not. E-cigarettes are not a medical treatment. An RCT would tell us almost nothing about their real benefits.

I think we all learned something today. If you want to undertake a really ambitious project, take baby steps and ask for advice; do not just commit and then ask for support. And after you ask, listen. Don’t bristle at friendly advice as if it were a threat to your integrity, let alone lash out and defame those who offer it. If you do, you might run out of friends. Don’t mistake writing a lot for reading a lot. And finally, when you find yourself deep in a hole of your own digging, see if there is anyone who will swap you a ladder for your shovel.