Unhealthful News 22 – I will continue to Countdown disgraces in the news

Today I decided to check MSNBC’s health news for a story, as a tribute to Keith Olbermann’s work there and to take a minuscule poke at MSNBC for driving him away.  I was rewarded with this story, right at the top of the page, about the “discovery” that eating a larger breakfast is not helpful for losing weight.  It is a great example of confused health reporting that results from trying to hype nothing, as well as from something readers of a few of my recent posts will not find surprising: putting faith in an observational study of a subject that can only be effectively studied with an experiment.

The article begins, “For years, dieters have been told that the way to lose weight was to start the day with a hearty breakfast.”  I cannot say that I study current leading weight loss advice, and I suppose that for any behavior relating to food or exercise, someone recommends it for weight loss.  But I am pretty sure that the advice about breakfast is merely to not skip it (because the backlash from feeling starved will drive you to eat more later), and sometimes a recommendation of protein and fat over carbohydrates.  There was also some intriguing recent advice to exercise before eating anything in the morning.  But to just eat a lot?  It is not clear why anyone would think that is a good idea.  The reporter fashioned her story so that the exciting new conclusion was to eat something, but not a lot, which I suspect is exactly what most of the current advice says.

It is bad enough when news reports identify a current conventional wisdom, stating it like fact even though it is fairly uncertain, and then declare it to be overturned based on a single new study, ignoring flaws in the new study, to say nothing of the fact that scientific inference is not based on a “whatever is newest is right” rule.  That is probably responsible for the majority of public confusion and annoyance with health reporting.  But it is even worse when the reporter just makes up a fake conventional wisdom and then claims to be reporting on the “news” that we “now” “know” something else to be true.

As for the study itself, it tells us almost nothing because of confounding.  Readers of this series will recall some posts where I criticize the naive notion that experiments on people (usually called RCTs: “randomized clinical/controlled trials”), where the researchers assign people to particular exposures, always provide better information about health effects than observational studies, where people choose or experience exposures as they would in everyday life.  As I noted at greater length before, RCTs eliminate systematic confounding, but at the expense of creating a very artificial situation: the odd sort of people who would volunteer to have an exposure assigned to them, exposures which may not represent a realistic range of what people actually experience, and people forced to do something they might never have chosen.  Figuring out whether the upside or downside matters more requires some scientific common sense.

If something is purely biological (rather than behavioral or psychological) and normally occurs in an artificial controlled setting, then most (not all) of the downsides go away.  This is why RCTs are good for comparing the effectiveness of medical procedures.  But if you are interested in the behavior of free-living people, the downsides become quite large.  That is why the vogue of doing RCTs and implying that they tell us whether smokers will switch to smokeless alternatives is just bad science.  What is of interest is whether many typical smokers can be informed or persuaded, using mass communication, so that they choose to switch as a behavioral choice in their lives.  The RCTs start with the odd subgroup who are inclined to volunteer for a cessation intervention and educate them in a particular way, one that may not be effective and is certainly not natural or based on normal educational methods.  Perhaps the results tell us a little bit about what we might really want to know, but they tell us far less than, say, observing the actual substitution choices made by smokers, even at the level of personal anecdote, to say nothing of systematic observational studies.  For somewhat different reasons that I noted in the previous posts, RCTs of how long to exclusively breastfeed also end up measuring something we do not really want to know.

Back on the other side, though, are cases where the confounding is obviously such a huge problem that if it cannot be eliminated then we really cannot possibly hope to sort out the actual causal relationship.  This is the case with the study that triggered today’s news story.

I have colleagues who might suggest that, in the spirit of the cliche (“How can you tell if X is lying?” “His lips are moving.”), you can tell that epidemiologic studies of diet and nutrition are junk science by observing that they are epidemiologic studies of diet and nutrition.  There is a lot to that: most of what is done in those areas is a complete joke.  The methods for measuring the exposure (i.e., what people eat), typically subjects keeping diaries of that information, have been shown to be terrible, and the statistical analyses are often, perhaps even usually, so bad as to be unethical.

But more specifically in this case, the observation was that when someone reported eating more for breakfast, it was not associated with them reporting eating less for the rest of the day.  That is, whatever the extra food intake at breakfast, at the end of the day the total intake was elevated above the average by about the extra breakfast calories.  But does this mean that eating more at breakfast does not cause a compensating reduction later in the day?  Absolutely not.  There is confounding, both across the population and across different days for the same person.  Some people just eat more than others, obviously, even among study subjects who are trying to lose weight.  They eat more for breakfast, and also a lot the rest of the day.  This effect can be controlled for in the study design by using someone as his own comparison group (i.e., seeing whether he eats less or more on a particular day, as a function of breakfast, compared to what he himself eats on average).  (There is also the more subtle problem of biased measurement error: someone who misreports what he ate for breakfast may also misreport about lunch that day, but I suspect I will have better examples of this point later.)
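The fix of using someone as his own comparison group amounts to subtracting each person’s own average before looking for an association.  A toy simulation (all numbers invented purely for illustration) shows how pooled person-days can produce a strong association driven entirely by the fact that some people simply eat more, while the within-person comparison correctly finds nothing:

```python
import random
random.seed(1)

def slope(xs, ys):
    """Ordinary least squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

people, days = 500, 30
pooled_b, pooled_r = [], []   # raw person-days, pooled across everyone
within_b, within_r = [], []   # person-days centered on each person's own mean

for _ in range(people):
    base = random.gauss(0, 1)  # baseline appetite: some people just eat more
    # breakfast and rest-of-day intake both track baseline appetite,
    # but within a person there is NO true breakfast effect at all
    bs = [base + random.gauss(0, 0.3) for _ in range(days)]
    rs = [2 * base + random.gauss(0, 0.3) for _ in range(days)]
    pooled_b += bs
    pooled_r += rs
    mb, mr = sum(bs) / days, sum(rs) / days
    within_b += [b - mb for b in bs]  # each person as his own comparison
    within_r += [r - mr for r in rs]

print(f"pooled slope:        {slope(pooled_b, pooled_r):+.2f}")  # large and positive
print(f"within-person slope: {slope(within_b, within_r):+.2f}")  # near zero
```

The pooled slope is large only because big eaters eat more at both breakfast and afterward; centering each person on his own average removes that between-person confounding, which is all the real studies can hope to do with this design.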

But though the interpersonal differences can be pretty well controlled, that just leads to another level of confounding that cannot be.  On some days any given person eats more than on others, due to activity, mood, opportunity, social pressure, simple swings in appetite, or whatever.  So if someone eats a large breakfast, he may do so for reasons that also cause him to eat more the rest of the day than he otherwise would.  It might well be, then, that eating more at breakfast causes someone to eat less than he would have the rest of the day, but that this effect is swamped by whatever caused the eating of the big breakfast.  The claim is still not really plausible (it was never particularly plausible that eating more for breakfast would cause someone to eat less overall), but the point is that the study does not really inform us; the confounding is so bad that it renders the study useless.  So what would be a better way to address this question?  To assign the size of each person’s breakfast each day and see what else they eat: in other words, do an experiment.  Then how much someone eats for breakfast is unrelated to his activity, mood, etc., because it is random, and so the association, or the lack of one, cannot be explained by the obvious confounding.
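To make the day-level confounding concrete, here is another toy simulation (parameters are again invented for illustration): a daily “appetite” shock drives both breakfast size and later intake, while the true within-day effect of a bigger breakfast is assumed to be a reduction in later intake.  The observational slope comes out positive anyway, and only randomly assigning breakfast size recovers the assumed negative effect:

```python
import random
random.seed(0)

def simulate(randomized, n=20000, true_effect=-0.5):
    """Return OLS slope of rest-of-day intake on breakfast size."""
    xs, ys = [], []
    for _ in range(n):
        appetite = random.gauss(0, 1)  # day-level confounder
        if randomized:
            # experiment: breakfast assigned independently of appetite
            breakfast = random.gauss(0, 1)
        else:
            # observation: appetite drives the choice of breakfast size
            breakfast = appetite + random.gauss(0, 0.5)
        # true effect is negative, but appetite pushes later intake up
        rest = true_effect * breakfast + 2.0 * appetite + random.gauss(0, 0.5)
        xs.append(breakfast)
        ys.append(rest)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    return cov / var

obs = simulate(randomized=False)
rct = simulate(randomized=True)
print(f"observational slope: {obs:+.2f}")  # positive: confounding swamps the effect
print(f"randomized slope:    {rct:+.2f}")  # close to the assumed -0.5
```

The sign of the observational estimate is not merely biased but reversed, which is exactly the sense in which the confounding here is fatal rather than a nuisance to be adjusted away.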

Why is the artificiality of that not such a problem in this case?  Because what we are really interested in is not what results when people happen to choose to eat a big breakfast, but rather what would happen if they forced themselves to eat (or avoid) a big breakfast.  We want to know if it is a good tool to achieve a particular goal, just like we want to know that about a drug or surgical procedure.  Thus, using a method that works well for testing drugs and surgical procedures seems like a good idea.  We still have the problem that, since this is behavioral rather than biological, people might react differently to being assigned a meal size than to forcing it upon themselves, so the RCT approach still has problems, but it is certainly better than completely fatal confounding.

I expect, however, that we are never going to see that trial.  The researchers might have tricked the MSNBC reporter into believing that this represented some important new knowledge, but I suspect no researcher would be interested enough in the question to do the RCT.  Rather, these dietary studies are almost always fishing expeditions, collecting a lot of data about hundreds of things and then sifting through it for associations that might be used to impress someone.  So, with the exception of the anti-tobacco extremists and a few other political actors, who are not really even scientists anymore, I will declare nutritional researchers, particularly the “health promotion” types and dietitians, to be today’s Worst Epidemiologists In The World.
