Monthly Archives: January 2011

Unhealthful News 30 – Figuring out who to believe (part 1)

The challenge that probably interests me more than anything else in my intellectual career is how to recognize when someone in a debate is clearly right without having to become expert in the subject matter, and the closely related problem of how to make it clear that you are right to outside observers.  The specific situation that most interests me is one where the observer of the debate is intelligent, generally well-versed in similar subject matter (science, politics), and genuinely interested in figuring out the truth but does not have any particular expertise in the specific subject matter and is not likely to acquire it.  I know this does not describe most situations where you might be trying to persuade someone, but it is a particularly important case (it describes many cases of trying to win over opinion leaders) and one that seems to present a surmountable challenge.

It is not so easy though.  I have tried doing it on several topics over the years, most recently tobacco harm reduction.  It is clear to me — as someone who has made an extensive study of the epidemiology, economics (that is, what people like), politics, and ethics of the matter — that there is no legitimate case to be made against encouraging THR unless someone accepts some very odd goals.  I am fairly certain I have identified the motives of those who oppose THR, and it is clear to me that if they openly admitted their real goals and preferences they would face opposition from the vast majority of the population.  They apparently agree with that assessment, since they hide their real motives behind pseudo-scientific claims and rhetoric.

That is what is clear to me.  But I know that to most observers it is not clear that the opponents of THR are trafficking in dishonest nonsense and misdirection.  They know how to use the vocabulary of science and make “sciencey” arguments (i.e., things that sound like they ought to be scientific claims, but really are not, in the spirit of Colbert’s “truthiness”).  To the completely uninitiated, it sounds like there is a scientific argument going on about health risks, when there is no legitimate debate on those points whatsoever.  To those who know a bit more, it seems like there is a legitimate debate going on about ethics and behavior, though there is barely more of a case against THR from those quarters than there is from the health science.  I know from experience that if I can sit down and talk with a member of my target audience (in particular, someone who is genuinely interested in learning the truth), I can almost always convince him or her of the truth. 

On the other hand, in such circumstances I generally have the advantage that my listener knows me, and thus knows that there is no chance I am the one spouting utter nonsense and simply lying about the science when I point out that the other side is doing just that.  So perhaps I have not quite achieved my goal of figuring out how to communicate the material to someone who wants to know the truth but does not know, going in, that I am the one that should be believed.  I think I have some insight into the topic, and would like to try to communicate some pointers, and at the same time try to better figure out how to do it myself.

I will explore that theme and goal periodically (maybe most every week) in this series because it is critical to what I am trying to do.  Eventually I will challenge a health news claim that you (some particular one of you) were inclined to believe.  Perhaps you will believe me because I have built up enough credibility through my other analyses, but maybe you will want me to make a case for why I am right that does not require you to start by assuming I am right.  For example, I suspect some readers must be asking (if they have read this series, particularly what I wrote yesterday), “why should we believe you, the iconoclast, rather than the icons of epidemiology in academia and government; if your calls for methodologic reform are right, why is almost no one adopting them?”

In short, I want to explore a question: what can I write, and what can you look for, that would lead you to believe me?

To start exploring “why should you believe me?”, I would like to invoke the work of someone who I consider to be very talented at making a good case for why we should believe him.  Many of my readers follow Chris Snowdon’s Velvet Glove Iron Fist blog, but may not be as familiar with his other book and blog The Spirit Level Delusion.  (If you are somewhat familiar you might want to check back; he added a lot of new material last week.)  This is his response to the book “The Spirit Level”, by two epidemiologists, Richard Wilkinson and Kate Pickett (W&P), which claims that wealth or income inequality in a society (not the well-known problems of poverty, but inequality per se) causes all manner of health and social problems.  W&P’s book apparently has a big following among British lefty pundits – those who are predisposed to support the policies that would be recommended were the book correct.  It has received much less attention in the U.S. (perhaps due to a dumb choice of title, which sounds New Age-ish to the ears of those of us who refer to that tool as just a “level”, “bubble level”, or perhaps “carpenter’s level” and had never before heard the term “spirit level”), though it has been picked up by a few lefty pundits like Nicholas Kristof (which I commented on with dismay since I like Kristof’s non-naive analyses).

Snowdon’s book (and associated interviews and blog posts) does a thorough job of debunking W&P and showing that their work is utter junk science.  I am confident that no serious reader who was genuinely interested in learning the truth could read what he wrote and still believe that W&P’s analysis was legitimate.  He could not easily win a fight by simply presenting his own assertions that were counter to theirs, hoping readers would choose to believe him.  Why would they choose to believe a journalist who is not backed by a major publisher over two university researchers?  (Most readers of this blog perhaps realize that a sharp scientifically literate journalist is probably a better scientific thinker than most people who publish epidemiology, but the average reader would not know this.)  There is a lot to mine from his presentation, and I can only touch on the answer today (more later).

The key to Snowdon’s methods is pointing out, in ways that any sensible reader can see without expertise in the subject matter, fundamental flaws in W&P’s arguments.  The reader is then forced to either believe the critique or believe that Snowdon is fabricating gross out-and-out lies.  For example, in the first of his recent posts, Snowdon addresses W&P’s implication that the many studies on the subject of inequality that came before them all supported their claim.  He first points out that if you read carefully, W&P state only that there were 200 papers that tested the relationship between income inequality and health.  What W&P gloss over is that quite a few of those papers conducted that test and concluded that there was nothing there.

(I am reminded of a Colbert episode from last week where he was joking at length about Taco Bell being accused of putting “beef” in its food that did not actually meet the U.S. Department of Agriculture legal definition of beef.  A Taco Bell spokesman responded to the accusations by pointing out that all of their beef was USDA inspected.  Colbert noted that “inspected” is not the same thing as “approved”.  This further reminds me of a word that you may see in epidemiologic survey research, “validated”, which basically is more like “inspected” though authors try to make it seem more like “approved”.  I expect I will take up that point sometime in this series.)

Snowdon then went on to produce a series of quotes from previous researchers about findings that disagree with W&P’s claim.  His key observation here is not that the evidence that W&P were wrong is more compelling than the evidence they were right.  That argument would require the reader to have expertise in the field to sort out the conflicting claims, to know whether all relevant studies were being cited, to know what exactly the quoted study results mean, etc.  But Snowdon’s key point was a different one:

Those with a healthy scepticism will have noticed that I have only quoted studies that support one side of the debate. It’s a slippery and misleading trick and it is exactly what Wilkinson and Pickett do throughout The Spirit Level. The difference is that I made it clear from the outset of this book that there are many conflicting studies. Readers of The Spirit Level would be hard-pressed to guess that there was any debate at all.

So Snowdon has successfully pointed out to the reader that whatever the weight of the evidence might show, the evidence does not resemble what W&P claim it is.  To doubt that point would require believing that Snowdon was making up the quotes he presented, something that those on the other side would undoubtedly seize upon and that would destroy his credibility, and thus is vanishingly unlikely.  (Also, the interested reader could check the quotes himself.)  He then redoubles the point by showing that a study that W&P cited as the exemplary support for their thesis was actually quite equivocal.  Roughly speaking, that one translates into, “if that’s all you got, why did you even show up?”

(Aside:  This also supports a criticism I make about the way reference citations are used in health science.  Far too many authors, reviewers, and editors seem to think that it is appropriate to make a sweeping statement and then cite a single supporting study following it.  But all this does is create an illusion of increased credibility – finding a single quote or citation to support a particular claim is almost completely uninformative because there is some support for all but the most hopeless claims.  Authors need to either implicitly say “this broad claim is true; we assert this based on our expertise about the entire body of evidence and you will have to trust us”, provide a complete review of the evidence, or direct the reader to further analysis of the point (a legitimate use of a citation, and should be used more often).  Citing a single piece of support as if it justifies a sweeping claim is just a way of trying to mislead readers.)

While pointing out that W&P are trying to misrepresent the weight of the evidence is not sufficient to deny any particular claim they make (Snowdon debunks many of their points in detail using other arguments), it should be enough to make the open-minded reader seriously doubt everything that W&P claimed.  The general lesson is:  If authors can be shown to be denying the existence of opposing evidence and conclusions – not disagreeing, challenging its validity, or saying that it is overwhelmed by the evidence on the other side, but simply pretending it does not exist – this is pretty good evidence that they are not honest analysts and, moreover, do not think their case can stand on its merits.

Of course, W&P made it easy for Snowdon to shatter their credibility by making it so brittle.  They put the reader in the position of either believing they have unequivocal evidence for a “new theory of everything” (to quote from Snowdon’s snarky subtitle), or concluding that they were just pulling a sales-job on the reader.  If they had behaved like scientists – recognizing the best contrary evidence and being properly equivocal – rather than peddlers or evangelists, it would have been necessary to explore the merits of their argument to challenge their claims and credibility.

Still, it is useful to figure out how to debunk as easy a target as The Spirit Level.  We need to start with the challenge of winning one-sided debates before we can take on arguments that have some credibility.

Unhealthful News 29 – Um, yeah, we already knew that: smokeless tobacco does not cause pancreatic cancer

A recent paper has been touted as showing that smokeless tobacco (ST, which mainly refers to oral snuff, which is sometimes called snus, and chewing tobacco) does not cause pancreatic cancer (PC), which is contrary to what some people believe.  This is of little practical consequence, since even the highest plausible risk claimed for PC was only a fraction of 1% of the risk from smoking, and thus the claim had no effect on the value of ST in tobacco harm reduction.  But there are several angles on this that are worth exploring here.  (For those of you not familiar with my work on tobacco harm reduction – substitution of low-risk sources of nicotine like ST for smoking, which could have enormous public health benefits – you can find more background in our book, blog, and other resources at TobaccoHarmReduction.org.) 

As a first observation, since this is a series about health news, I should point out that, as far as I know, the new article did not make the news.  Since I cannot point to a news report for background reading, I recommend instead a good blog post by Chris Snowdon that summarizes it (and touches on a few of the themes I explore here).

It would be one thing if it did not make the news because it was not actually news (see below).  But I doubt that most reporters would have realized that, so the obvious explanation does not speak well of the press.  News that contradicts conventional wisdom is likely to be highlighted because it is more entertaining, but not if it is an inconvenient truth for those who control the discussion, in which case it stands a good chance of being buried.  Since the anti-tobacco activists who dominate the discourse in these areas want to discourage smokers from switching to low-risk alternatives (yes I know that sounds crazy, but it is true – it is beyond the present scope, but I cover it elsewhere), they prefer people to believe that ST is riskier than it really is.

Second is the “um, yeah, we already knew that” point.  Those of us who follow and create the science in this area have always known that the evidence never supported the claim of any substantial risk of PC from ST.  (An important subpoint here is that an empirical claim of “does not cause” should be interpreted as meaning “does not cause so much that we can detect it”.  For an outcome with many causes, like cancer, and an exposure that affects the body in many ways, it is inevitable that if enough people are exposed at least one will get the disease because of the exposure.  It is also inevitable that at least one person will be prevented from getting the disease because of the exposure.  So what we are really interested in is whether the net extra cases are common enough that we can detect them.)
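To make that detectability point concrete, here is a minimal sketch (in Python, using the standard two-proportion sample-size formula, with purely hypothetical numbers that are not from any of the studies discussed here) of why a small net excess of a rare outcome is so hard to detect: the required study size quickly becomes enormous.

```python
# Illustrative sketch (hypothetical numbers): how many subjects per group
# a study would need to detect a small net excess of a rare outcome,
# using the standard two-proportion sample-size formula.
from math import sqrt
from scipy.stats import norm

p0 = 0.015          # assumed baseline lifetime risk of the outcome (hypothetical)
rr = 1.1            # hypothetical small relative risk from the exposure
p1 = p0 * rr        # risk among the exposed
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
z_b = norm.ppf(power)           # value for the desired statistical power
p_bar = (p0 + p1) / 2

n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
      + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2) / (p1 - p0) ** 2
print(f"subjects needed per group: {n:,.0f}")  # on the order of 100,000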

There have been three or four studies whose authors claimed to have found an association between ST use and PC.  Other studies found nothing of interest, and there must be dozens or perhaps hundreds of datasets that include the necessary data, so the lack of further publications suggests that no association was found in these.  There was never a time that a knowledgeable and honest researcher reviewing the available information would have been confident in saying there was a substantial risk.  One of the studies that claimed to find an association, published by U.S. government employees, was a near-perfect example of intentionally biased analysis; they actually found that ST users had lower risk for PC but figured out how to twist how they presented the results to imply the opposite.  Two somewhat more honest studies each hinted at a possible risk, but each provided very weak evidence and they actually contradicted each other.  Only by using an intentionally biased comparison (basically cherrypicking a high number from a different analysis of each dataset, because if similar methods were used they got very different results) could activists claim that these studies could be taken together as evidence of a risk.  Several of us had been pointing this out ever since the second of these studies was published; see the introduction (by me) and main content (by Peter Lee) of Chapter 9 of our book (free download) for more details.

The worst-case-scenario honest interpretation of the data is that there are a few hints that perhaps there is some small risk, but it is clearly quite small and when all we know is considered the evidence suggests there is no measurable risk.  In other words, if the new report had made the news, it would have been portrayed as a new discovery that contradicted old beliefs.  But only people who did not understand the evidence (or pretended to not understand the evidence) ever held those old beliefs.

One clue about why this would be is that the study was a meta-analysis, which refers to methods of combining the results from previous studies.  While some people try to portray such studies as definitive new knowledge, such a study cannot tell anyone who already understood the existing evidence anything they did not already know.  They are just a particular way of doing a review of existing knowledge, usually summarizing our collected previous knowledge with a single statistic.  In some cases, such as when the body of evidence is really complicated and fragmented (e.g., there are hundreds of small studies), this can be useful.  That might be a case where no one actually could understand all the existing evidence because it was too big to get your head around.  But doing a meta-analysis is not fundamentally different from graphing your results differently or presenting a table a different way – it might reveal something you overlooked because of the complexity of the information, but it cannot create new information.
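For concreteness, here is a minimal sketch (with made-up numbers, not the ST-PC studies) of the most common flavor of meta-analysis, a fixed-effect inverse-variance pooling of relative risks.  Notice that every input is already sitting in the published papers; the “new” pooled statistic is just a weighted average of them.

```python
# Minimal sketch of a fixed-effect, inverse-variance meta-analysis of
# relative risks (hypothetical study results).  The pooled estimate is
# nothing more than a weighted average of numbers already published.
import numpy as np

rr = np.array([0.9, 1.4, 1.1, 0.8])        # hypothetical study relative risks
ci_upper = np.array([1.5, 2.6, 1.9, 1.3])  # hypothetical upper 95% CI bounds

log_rr = np.log(rr)
se = (np.log(ci_upper) - log_rr) / 1.96    # back out standard errors from the CIs
w = 1 / se**2                              # inverse-variance weights

pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
lo, hi = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```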

So when the information we already have is rather limited and simple, as it is for the ST-PC relationship, there is no way this meta-analysis of a handful of studies could have told us anything new.  Anyone who learned anything from the new study must have not known the evidence.  This makes the new paper a potentially useful convenient summary, but many of those already existed, so there was no value added.

[There are other problems that make meta-analyses much less definitive than they are made out to be, including some serious conceptual problems with the most common approach.  That single summary statistic has some big problems.  But I will save elaboration on these points for later posts.]

Third, given that, you might wonder why some people think this was news.  I have already pointed out that activists wanted to portray ST as more harmful than it really is. 

A few years ago, those anti-ST activists who wanted to maintain a modicum of credibility realized they could no longer claim that ST caused oral cancer (they came around to this conclusion about ten years after the science made it clear that there was no measurable risk).  While clueless activists, and those who do not care about even pretending to be honest, still make that claim about oral cancer, their smarter colleagues went searching for other claims where the evidence was not so well known.

But a quick web search reveals that the claims about pancreatic cancer risk from ST are stated as fact by anti-tobacco activists, as expected, and by electronic cigarette merchants, which I suppose is understandable marketing dishonesty, but also by some companies that make smokeless tobacco.  The latter are apparently motivated by a fear of ever denying that their products cause health effects, even health effects that their products do not actually cause.  It escapes me why, exactly, they felt compelled to overstate the support for the claim that ST causes PC, rather than perhaps just acknowledging that it has been claimed, not attempting to dispute the claim but also not bolstering it.  I know they had the expertise to know the truth, and I urged some of them to stop actively supporting the disinformation, but it had little effect.  Maybe they thought they benefitted from the incorrect beliefs in a way that was too subtle for my politically-naive brain.

The more general observation from this is that accurate science per se does not have much of a constituency.  If someone has a political motive to misrepresent the science, like the anti-tobacco extremists do in this case, they will do so.  Perhaps there will be a political competitor who will stand up for scientific accuracy by voicing the opposite view.  But if there are no political actors on one side of the fight, or they are intimidated into not standing up to the junk science as in the present case, then we are left only with those of us who want to defend scientific accuracy for its own sake.  Needless to say, we do not have the press offices that wealthy activist groups, governments, and companies have, so we have little impact on the news.  This is especially true because most health news reporters have no idea who to ask for an expert opinion about the accuracy of a claim, so they usually just find the political spokesmen (some of whom are cleverly disguised as scientists).

Fourth, and most important for the general lessons of this series, is that the new paper exemplifies the fact that there is basically no accountability in health science publishing.  This is a particularly destructive aspect of the preceding observation about accurate science not having a constituency.  In many arenas, adamantly making a claim that turns out to be wrong is bad for your reputation and career.  This is obviously not true everywhere – American right-wing political rhetoric is the example that currently leaps to mind – though you might expect it to be so in science.  Unfortunately, it is not in public health science.

The senior author of the new paper (considered ultimately responsible for oversight; that is what being listed last of the several dozen “authors” of a paper usually means) is Paolo Boffetta.  Boffetta is personally responsible for much of the junk science and disinformation about ST and PC.  He was the lead author of one of the two not-really-agreeing studies mentioned above, a major player in the International Agency for Research on Cancer (IARC) report that constructed misleading evidence of cancer risk from ST, and author of a completely junk meta-analysis that engaged in the dishonest cherrypicking I mentioned above.  I would love to go through the entire indictment of him, but I have been urged to keep my word count down a bit, so I will refer you to the above links, the post by Snowdon and Lee’s article that is reprinted in the book chapter, as well as this recent post by Brad Rodu.

Instead I will focus on the point that since publishing in public health science is treated purely as a matter of counting-up scorekeeping by many, no one pays any attention to whether someone is producing junk science or even utter nonsense.  If you are someone like Boffetta who “authors” more papers than anyone could seriously analyze, let alone write, no one cares that you could not possibly be doing any thinking about their content – they just say “wow, look at that big number”, since assessing quality is beyond the abilities of the non-scientists who occupy most senior positions in public health academia and government.  They do not even care (or notice) that someone’s publication record for the last few years contains flat-out contradictions, like the various reports by Boffetta listed here (and it gets even better – during the same period he was also first author of a polemic that called for more honest research in epidemiology and condemned science-by-committee of the type he engaged in regarding ST).

If you are thinking that things cannot really be that bad, I have to tell you that they are even worse.

The above describes what is typical for most of the best known (I did not say best respected) researchers in public health science, like those closely associated with the Nurses’ Health Study I mentioned a few days ago.  They crank out far more papers than they could possibly hope to do well or even think through, and these are what you read about in the news.  Indeed, you are more likely to read about these mass-produced studies in the news because the authors are more famous – famous for cranking out a zillion often quite lame studies.

Down in the less-rarified end of the field, it can get just as ugly.  I have observed ambitious (in the bad sense of the term) colleagues in public health, trying to climb the ladder, explicitly making deals to put each other’s names on their papers as authors, even though the other person contributed nothing to the paper and had no idea whether it was accurate.  Slipping a sixth author into a list of five does not penalize anyone’s credit (though it obviously should), but it lets someone boost his numbers knowing no one would ever ask him to defend the content of the paper.  On a few occasions I or one of my colleagues who actually cares about science have asked a guest lecturer (often someone who is applying for a faculty job in our department) to explain or justify an analysis in one of their recent papers that we disagreed with, and we were later told that actually challenging someone’s claims was considered impolite.  (These people would have never survived graduate school in the fields I studied!)

A lot of critics who do not really understand the field call epidemiology junk science, but typically their condemnations are based on ignorance.  The truth is worse.

I wish I could conclude this point with some optimistic note of “so what you need to do as a reader is…”, but I do not have one.  The one bright spot that occurs to me is that when I work as an expert witness the health science “experts” on the other side are seldom anyone who has really worked in the area since, given the quality of typical public health articles, if they had written much they probably would have published and stood by numerous errors that would undermine their claims of expertise.

Bringing this back to a few take-away points:  If someone claims to have discovered an existing belief is wrong, particularly if this is based on a simple review of the evidence, chances are that either (a) the new claim is wrong, or (b) the real experts did not actually have the incorrect belief.  For a politicized issue (one where any significant constituency cares about the scientific claim for worldly reasons), you are unlikely to get an accurate view of the science unless you hear from a scientific expert who supports the opposition view.  If such a person says “I do not like this, but I cannot dispute the claim”, you have learned a lot; if they are merely given a meaningless soundbite in a news story then you have only learned about the bias of the reporter and have not heard the counter-argument.  If you hear a counter-argument, that is where the tough part begins – for both your analysis and my attempts to empower you.  I start on that tomorrow.

Unhealthful News 28 – coffee, olive pits, and liability as regulation

An interesting confluence of two events seems to have been overlooked, and I suspect that neither one is being reported outside the U.S.  The movie “Hot Coffee”, which tries to counter some of the ridicule that is the conventional wisdom about personal injury lawsuits, debuted at the Sundance Film Festival, and U.S. member of Congress Dennis Kucinich filed suit against the food service company in his congressional office building for the dental injury he suffered as a result of a hidden olive pit.

Though print stories I saw about Kucinich were short and matter-of-fact, the television clips included open ridicule.  Taking a shot at Kucinich is undoubtedly tempting for the corporate media since he is a huge outlier among high-office elected officials in America – there is pretty much no other major official who is close to him on the populist left (maybe Bernie Sanders).  Thus he has pushed hard for some very unpopular positions, like “do not start a land war in Asia” (you might recall that opposing the wars was a very unpopular position before everyone else caught on).  [Disclosure: I campaigned for him, a rare fellow libertarian-left vegan (I used to be one) from Ohio, and back in the days when I was at the higher end of my widely changing income I was at the “have cocktails with the candidate and hand-signed mementos” level.]

What you would not learn from the giggling talking heads was that biting into the olive pit, hidden in a wrap sandwich where it was easy to bite into without warning, caused so much damage that Kucinich had to endure multiple surgeries and suffered a lot of pain and loss of functionality.  I suspect most of us who are not impoverished would pay a year’s salary to avoid what he went through, and the amount of the suit ($150,000) was less than what he earns in a year, though it sounds like a large number when described without the context of the injury’s severity.

Similarly, you have probably heard of the seven figure judgment awarded against McDonalds for someone getting burned by a cup of their coffee.  The new movie sets the record straight on that one, in the context of a polemic intended to push back against the conventional wisdom about such lawsuits, which its creators characterize as being a concerted campaign by corporations to create scorn and thus increase support for protecting them from further lawsuits.  (For more about the movie from that perspective, there is a series of stories from the anti-corporate media here (scroll down past the video window to find transcripts).)  To briefly correct the story about the coffee:  It was not a case of a driver taking the cup and spilling it on herself, as widely reported.  Rather she was a passenger in a parked vehicle, holding the cup between her legs, and the claim was that the styrofoam cup just collapsed (the last part is hard to verify, of course).  The injuries were so bad that they required surgery and were reported to have substantially ruined her life. 

Yet the family merely asked McDonalds to cover their out-of-pocket medical expenses (recall that this is in medical-financing-backward America), just like a homeowner’s insurance policy might provide such coverage if this happened at a private home.  When McDonalds refused, the family went to court, still seeking a fairly modest sum.  At trial it came out that McDonalds intentionally keeps its coffee at an extremely high temperature, far hotter than coffee would normally be served, because it saves them money, and that hundreds of people had suffered medical-treatment-level injuries from it.  The judge and jury were so incensed by what they learned that they awarded over $2 million in punitive damages, though the plaintiff probably only collected a small fraction of this in the final (secret) settlement.  That final outcome is typical for lawsuits like this; even if the consumers win a big award that makes the news, they usually have to negotiate for a much more modest sum in exchange for the company not tying up the finalization of the award with further legal action that can last longer than the person might live.

Another case I am reminded of, which was a favorite example 20 years ago, was someone who successfully sued the owner of a phone booth when he was struck by a car while using it.  (For my younger readers, a phone booth is like a mobile phone, except that it is bolted to a particular piece of tarmac and a lot cheaper to use.)  It sounds utterly absurd until you learn the fact – not mentioned by those delighting in the example, of course – that this was the second time such a thing had happened in that particular phone booth, which suffered from both dangerous placement and a door that was difficult to open to get away from oncoming traffic.

Why are the Kucinich and Hot Coffee stories important health news?  Because they reflect an important part of the U.S. regulatory system for health risks.  I believe almost half of my readers are from the E.U., and many of you grouse about the increasing morass of regulations there.  In the U.S. we have fewer command-and-control regulations and depend on the threat of lawsuits to give companies the incentive to police themselves.  This is explicitly recognized as an important part of the regulatory regime by those who study law and economics, though probably not by most people.  In theory it has big advantages:  Companies are theoretically in a better position to keep track of possible hazards and, because they are creating the hazards, to figure out the best way to reduce them.  It is flexible, forcing companies to worry about creating new hazards that hurt people even if the regulators have not caught up with the situation.  It is also accepted, as part of this theory, that the optimal number of bad outcomes is not zero, and sometimes it is more efficient to compensate the occasional victim rather than to engage in overly-expensive interventions to reduce the risk. 
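A toy illustration of that law-and-economics logic (my own hypothetical numbers, not from any real case): compare the cost of a precaution to the expected harm it would prevent, and adopt it only when the precaution is cheaper.  On these assumptions, the efficient choice can indeed be to skip an expensive fix and compensate the occasional victim.

```python
# Toy example (hypothetical numbers) of the liability-as-regulation logic:
# a precaution is worth adopting only when it costs less than the expected
# harm it prevents, so the efficient number of injuries is not zero.
injuries_per_year = 50        # hypothetical injuries without the precaution
avg_compensation = 20_000     # hypothetical cost of compensating each victim

expected_harm = injuries_per_year * avg_compensation  # $1,000,000 per year

for precaution_cost in (400_000, 2_000_000):          # two hypothetical fixes
    worthwhile = precaution_cost < expected_harm
    print(f"precaution costing ${precaution_cost:,}: "
          f"{'adopt it' if worthwhile else 'compensate victims instead'}")
```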

There is an endemic debate about doing something to stop “frivolous” lawsuits in the U.S., including Obama’s promise to reduce medical malpractice lawsuits in his State of the Union speech this week.  It is important to realize that such lawsuits are an inherent part of consumer protection, and so this is really a call to reduce consumer protection regulation.  There is no obvious way to get rid of genuinely frivolous suits without creating barriers to other suits that are useful contributions to regulation, especially since some of the apparently frivolous examples are really being misrepresented. 

This does not mean that there are not frivolous lawsuits, and it certainly does not mean that the current system works as well as it might.  The fact that some suits even need to be defended is indefensible (I have worked on several of those – for the defense, I would like to note).  It is quite possible, for example, for someone to win a lawsuit even when ample science shows that there is almost no chance that the exposure in question caused the disease that is being attributed to it.  There is also an inherent arbitrariness to it (e.g., how should the blame be shared between the company that serves dangerously hot coffee and the consumer who takes the inadvisable step of holding it between her legs?).  As for medical malpractice, there is a lot of damage done by bad medical practices, but it seems that consumer lawsuits do almost nothing to reduce most of the real errors and frequently punish providers for outcomes that were unfortunate but not caused by error.  So the incentivization to do better work and get rid of practitioners who are incompetent is minimal.

A lot of news stories about consumer health lawsuits, like many news stories, focus on extreme cases and look for ways to make the story entertaining.  Thus, the casual reader might think that a large fraction of lawsuits are silly.  It is true that pretty much no one would have come up with the American liability system if tasked with creating a regulatory system, but that is what evolved and as with most evolved systems, mutation (radical change) is more likely to make things worse than better.  But when you read a story about how we need to do something about the claimed excess of costly lawsuits, keep in mind that this is really saying that we should reduce companies’ expenses at the cost of having less consumer protection.  You may or may not agree that we should pursue such a change, but if you just read the news you probably would not even know that is what you were being asked to agree with.

[Update:  After writing this, I learned that Kucinich’s lawsuit was settled (and he provided a lot more detail about it at that link), which is what typically happens.  An incentive for food providers to take greater care with hidden olive pits has been created, and instead of this cost of imposing such incentives being paid to the government or being a deadweight loss, it goes to compensate someone who was injured.  Again, there is plenty wrong with the current system, but this is an example of what is right about it.]