
Unhealthful News 55 – Cribs are not a safe alternative

Last week's news included a report that revealed, based on a review of hospital records, that each year in the U.S., 10,000 young children end up in the emergency room because of crib- and playpen-related injuries.  Cribs in the U.S. are governed by a remarkable number of regulations for such a simple object – not as many as there are for cars, but far more than for a more complicated and dangerous device, the bicycle.

There is some good reason for this.  Regulations like the maximum distance between the bars on the side, which prevent infants from getting their heads jammed between them or pushed clear through (which can have fatal consequences), make perfect sense.  There is no reason anyone would want a crib that created that risk, but it is not reasonable to just demand that parents all figure this out themselves.  I know that some of my readers are adamantly opposed to most regulations that are intended to protect people from their own decisions, but I suspect even you would agree that this one is a good idea.
On the other hand, some of the regulations are of the “deciding for someone how they want to trade off risk versus other considerations” variety, tending toward nanny-state behavior.  For example, “drop-side” cribs, those with a side that can slide down to allow easier access to the bed and baby, are now banned because if the side is left lowered, babies and toddlers are at greater risk of falling out, and if mis-assembled, the moving parts create a risk of catching or pinching the baby.  This new regulation was mentioned in most of the recent stories and cited as a reason to expect that the injury rate would go down.  Not mentioned was the burden this places on a 5-foot-tall mom, who cannot reach into a crib if the side does not drop, and who might not be able to get her baby out of the crib until he learns to crawl to the near side so she can reach him.  (Our society is a hostile place for short women to have kids without a man around.)  Regulations like the drop-side ban and the associated recalls of products are so onerous that regular furniture stores seem to have gotten out of the business of selling cribs at all, leaving them entirely to baby product specialty stores that are used to dealing with such hassles.

As for the alarming injured-baby statistic, it was noted that 1- and 2-year-olds account for most of the injuries, which puts roughly 10 million children at risk in the study population (a conservative estimate, since older and younger kids were also at risk).  So we have less than 1 in 1,000 of the at-risk population experiencing an event each year.  This would be a disturbing number if many of the events were highly serious, but there is no indication of this.  Deaths were on the order of less than 1 in 100,000 per person-year.  To put that in perspective, the risk to the kid from car travel is easily ten times that great.
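
For anyone who wants to check that arithmetic, here is a minimal back-of-the-envelope sketch in Python.  The 10,000 injuries and 10 million at-risk children are just the round figures from the report, and the death rate is the order-of-magnitude bound quoted above, so treat the output as rough context rather than a precise estimate.

    # Rough check of the crib injury numbers (round figures from the news report)
    injuries_per_year = 10_000        # ER visits attributed to cribs/playpens
    kids_at_risk = 10_000_000         # approximate count of 1- and 2-year-olds

    injury_risk = injuries_per_year / kids_at_risk
    print(f"Injury risk: about 1 in {round(1 / injury_risk):,} per year")

    # Deaths were reported as on the order of less than 1 per 100,000 person-years
    death_risk_bound = 1 / 100_000
    print(f"Death risk bound: under {death_risk_bound:.5f} per person-year")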

This does not mean that the problems should be ignored.  But it was noted that most of the injuries, especially the most serious ones, consisted of toddlers pulling themselves up out of the crib and then falling to the ground.  This is another case of operator error rather than bad tech, since the mattress can and should be lowered, effectively raising the walls of the cage, as the kid gets bigger.  And, of course, there comes a combination of height and arm strength at which the pen walls create rather than reduce falling risk.  In other words, a lot of the injuries were to newly mobile kids whose parents had not figured out that they needed to climb-proof their space, so those kids faced some risk wherever they were left alone.  Strange how none of the articles I saw led off with, or even clearly noted, the message “parents can eliminate almost all of the small but nonzero risk from cribs by making sure the kid cannot climb out of them.”

Many of the news articles about the topic mentioned that, despite the hazards, putting the kid to sleep in a crib is safer than any other option.  I find it rather difficult to understand how this can be asserted, given that it is probably quite difficult to get good statistics on how often kids are put to sleep in socially frowned-upon places like, say, their parents’ bed (common in most of the world, but widely condemned in the U.S.).  I would guess that there are no good statistics on alternative sleeping arrangements like the parents’ bed, dresser drawers, or mattresses on the floor (hmm, that last one seems safer).  This is one of those claims that should cause a reporter to ask “how do you know that?”; they seldom do.

All that got me thinking.  We have a fairly low-risk activity, and the risk is being further lowered by changing technology.  It is an alternative to a popular but officially socially-condemned activity.  That sounded really familiar.  So I had to wonder why the “health promotion” types are not attacking cribs, as they do other harm reduction practices, screaming that cribs are not a safe alternative to other sleeping arrangements.  After all, the study was published in Pediatrics.

How can we accept actions that merely lower the risk from cribs when there are government-sanctioned, proven methods of quitting…er…sleeping?  As shown here, there is tested and proven safety gear approved by the U.S. National Highway Traffic Safety Administration, the International Mountaineering and Climbing Federation, and the U.S. Consumer Product Safety Commission.  Photographer’s Note:  The latter is not shown in use (it is sitting in the back) since my model started crying every time I put the bike helmet on him, and for some reason his mom then decided the shoot was over.  Models can be such divas.  But, if we have learned anything from anti-tobacco extremists and their ilk, it is that a little needless emotional distress and intense discomfort is a small price to pay to eliminate every last trace of risk.

With that in mind, why is there no demand to do away with these jungle-animal-decorated death traps?  My personal theory is that all the health activists are secretly in the pocket of Big Crib.  This explains why they condemn the most popular alternative in the world (and that mattress-on-the-floor idea) in favor of a slavish devotion to cribs, even as they desperately try to eliminate all crib features that could lead to faulty assembly or other operator error.  Coming soon will be cribs with a lid on top like a hamster cage, which will be a bit dehumanizing, but will be good for getting kids ready for their role in our increasingly feudal society.  Actually, I think it is more likely that the requirement will be cribs where the mattress cannot be raised above the lowest level, to make it impossible for anyone to fail to lower it properly when the kid grows.  Yes, it will cause all manner of orthopedic problems from bent-over lifting by a large portion of mothers, as well as shorter fathers, but how dare you worry about that?  Think of the children!

Where does it end?  I will bet that more babies are injured when being carried than when they are in their cribs.  Shouldn’t we do something about this needless risk, perhaps by creating some kind of approved device that eliminates the danger of unassisted carrying?  Naturally, U.S. government regulators will never endorse anything Swedish, even if it is the obvious, natural, and popular solution and has proven to be miraculously harm-reducing.  But I am sure that Big Crib can come up with some convoluted solution to the problem that has not been contaminated by being an accepted lifestyle alternative for centuries.

[Disclaimers:  (1) No babies were harmed in the making of this blog post.  Not seriously, anyway.  (2)  The journal Pediatrics is a go-to outlet both for utter-crap junk science with a particular nanny-state bias and for legitimate research about children’s health that its authors think should be read by activists rather than just by medics and scientists.  The latter studies are not necessarily junk; they are merely written by authors who are willing to implicitly support the junk so that they can gain greater visibility among those who are not sufficiently expert to know that Pediatrics publishes junk.  Not high praise, I suppose, but it is only fair to concede the point.]

Unhealthful News 54 – Exercising your brain is good, microwaving it perhaps not

If you glanced at the health news today, you undoubtedly learned of a new study that found, based on brain scans, that talking on a mobile phone has some effect on the brain, though it is not known whether that effect is unhealthy (here is a version with still images of what the scans look like, which is of course utterly meaningless to the reader, but the colors are pretty).  There has long been speculation about whether the radiation (i.e., the signals) from phones that enters the brain, transmitted as it is from a point close to the head when the phone is held to the ear, might cause cancer or some other disease.  The new study found increased brain activity at the point nearest the transmitting phone.  I am not going to take on the subject as a whole, but I thought I would point out some specific observations that struck me about the stories.

First, it was remarkable how many stories observed that the radiation from cell phones is non-ionizing (that is, it cannot break molecular bonds, which is what makes some radiation carcinogenic) but did not mention that the frequency of the radiation is in the microwave range.  You might recognize that term as describing something that makes water molecules heat up, which could alter the brain via a minor heating effect (as could the direct thermal effect of the waste heat from the phone pressed against the head, or sunlight, or just being warm).  I am not saying I believe there is some effect from this – I have almost no idea about the biophysics here – but it was very odd that no report bothered to tell us whether this was the likely explanation for the observed results, was probably not the explanation for some reason, or whether the experts simply have no idea.

Also, I noticed that a lot of the stories seemed to place great stock in the fact that the observed metabolic change was “highly statistically significant”.  This seemed intended to make the reader believe that the change was of important magnitude, even though the magnitude, a 7% increase in activity, seemed modest (though I have no idea whether that is truly small in context).  But all that “statistically significant” means is that the observed result was unlikely to occur by chance alone; that is, even though the effect seems small, random spikes in metabolism are rare enough, or the experiment was repeated enough times, to see a clear signal above the noise.  This does not mean that the result matters or is even impressive, though presumably that is what the news reader is supposed to be tricked into believing.  (Also, as a more technical point, the phrase “highly statistically significant” is nonsense and indicates a lack of understanding of statistics on the part of the researchers.  Statistical significance is, by construction, a “yes or no” proposition; there are no degrees of “yes”, nor is there an “almost” category.  There are other related statistics that have magnitude, but statistical significance does not.)  Note: I wrote more about the technical meaning of statistical significance in UN16.
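
To make the “yes or no” point concrete, here is a minimal sketch; the 0.05 threshold is the conventional one, and the p-values are made up purely for illustration.

    # "Statistically significant" is a binary call against a pre-chosen threshold;
    # a smaller p-value does not make a result "more" significant in this sense.
    def is_significant(p_value: float, alpha: float = 0.05) -> bool:
        return p_value < alpha

    for p in (0.049, 0.0001):
        print(p, "->", is_significant(p))   # both simply come out True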

On a disappointing related note, one of my all-time favorite news clippings for teaching was from sometime in the 1990s, when an early epidemiologic study reported no statistically significant increase in brain cancer among mobile phone users.  But, the story reported, when researchers looked individually at each of the 20 different brain cancers studied, they did find a statistically significant result for one of them, which was portrayed as worrisome.  The beauty of this, if you do not recognize it, is that the concept of “statistically significant at the .05 level” (which is what is usually meant by “statistically significant”) is often explained by saying that if you repeated a study multiple times and there was really no correlation between the exposure and the outcome, then 5% of the time (1 time in 20) you would get a statistically significant result due to bad luck alone.  Thus, we would expect to see 1 out of the 20 different cancers show up as statistically significant, just by chance.
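
For anyone who wants to see that expectation play out, here is a quick simulation sketch; the only inputs are the 20 outcomes, the 0.05 threshold, and the assumption of no real effect (under which each test has a 5% chance of a false positive).

    import random

    # Simulate many studies, each testing 20 outcomes when there is no real effect.
    random.seed(1)
    n_studies, n_outcomes, alpha = 100_000, 20, 0.05

    hits_per_study = [
        sum(random.random() < alpha for _ in range(n_outcomes))
        for _ in range(n_studies)
    ]

    mean_hits = sum(hits_per_study) / n_studies
    at_least_one = sum(1 for k in hits_per_study if k >= 1) / n_studies
    print(f"Average 'significant' outcomes per study: {mean_hits:.2f}")   # about 1.0
    print(f"Share of studies with at least one: {at_least_one:.0%}")      # about 64%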

That textbook explanation is not actually quite correct, but it works in spirit, fitting the usual simplified story, so the fact that there were exactly 20 different brain cancers examined made it such a great example, kind of an inside joke for students learning this material.  Unfortunately, this was back in the days before digital copies of everything, and I apparently lost every copy of it.  A couple of days ago I thought I had found it again in an old file, and that everything had come together perfectly when the stories about the new study ran today.  Alas, the clipping I found was a far less interesting random story about the same topic from about the same era.  My perfect example remains lost.

So as not to finish on that note of minor tragedy, one last observation about the news stories.  One story caught my eye because its lead included the promise to explain how “Many variables have prevented scientists from getting good epidemiological evidence about the potential health risks of cell phones.”  That sounded interesting, since only the epidemiology can tell us whether there is any actual health problem, and so far it has not supported the fears that there is.  However, it is far from definitive.  After all, with an exposure this common, a tiny increase in probability among those exposed could still mean a lot of cases, and with brain problems – not restricted to cancer – being as complicated as they are, figuring out what to look for is not easy.  So it was disappointing that the article offered only the sentence above and, “Radiation levels also change depending on the phone type, the distance to the nearest cell phone tower and the number of people using phones in the same area.”

The claim was that so much heterogeneity of exposure prevents us from getting good epidemiologic evidence.  But it is precisely in cases of such heterogeneity that observational epidemiology of the real-world variety of exposures is particularly important.  The experiment that was reported today, like most experiments, looked at only one very specific exposure (and, in fact, one that was not very realistic), but it served as a “proof of concept” – a demonstration that the phones can have some effect.  Other experiments or narrow studies might have missed this effect if they had looked at a different very specific exposure.  Epidemiology that measures any health problems associated with a varying collection of different but closely related exposures (e.g., all mobile phone use) can provide a proof of concept that does not run so much risk of missing the effect.  With a study of the right type and sufficient quality, observational epidemiology can show whether at least some variations on the exposure are causing a problem, even if not all of them are.  The same data can then be mined to suggest which specific exposures seem to be more strongly associated.

Oh, and just for the record, I try to use a plug-in earphone/microphone when I have a long conversation on a mobile phone.  I would not be surprised if no important health risk is ever found, and it seems that any risk must be small or we would have noticed it already.  On the other hand, why be part of the experiment if you do not have to?  Besides, I just do not like the feeling of the side of my head heating up.

Unhealthful News 53 – Methadone and the urge to never be positive about harm reduction

It was gratifying to read that the first sub-Saharan African methadone-based harm reduction program for injection heroin users had been introduced in Tanzania’s main city, a port of call for shipments from Afghanistan to the West.  (It is a perfect trade arrangement: we send troops to Afghanistan and they send back something that also produces adamant feelings of both love and hate, depending on whom you ask.)  Tanzania’s heroin problem is not the biggest harm reduction target in the world, but every little bit of civilized behavior toward drug users makes the world a better place.

Most of the news story was a matter-of-fact presentation of the situation and a report on the value of providing an alternative to needle sharing, which has led to a very high prevalence of HIV in the target population.  Interestingly, there was no mention of the easier and less invasive response to that problem, needle exchanges.  It is very strange to report a story that focuses on needle sharing and not even note that “a needle exchange program is being considered” or “needle exchange is currently politically infeasible in this country”.  But the painful part of the report was this sentence:

Methadone is even more addictive than heroin, though it is given in oral doses meant to be small enough to produce no high.

First, that “no high” dosing, presented in the article with a tone of “this is the only/right way to do it”, is often cited as a barrier to harm reduction.  For my readers more familiar with tobacco, it is the equivalent of limiting product substitution for cigarettes to nicotine patches, which, unless you use several at once, leave most smokers bereft of the effects they want, even if some of the pain of abstinence is removed.  It is possible to give enough methadone that it produces enough of a high to attract more product switching, rather than restricting doses to unrewarding levels.  Moreover, methadone patients who are unwilling to forgo the high end up scoring heroin periodically or (if the distribution logistics make it possible) taking multiple days’ doses at once (which likely leaves them wanting heroin on the off days).

But worse is “more addictive”.  Does anyone who writes or takes seriously a claim like that even pause to ask “what would ‘more addictive’ even mean?”  Even without going on to the next logical step, asking “for that matter, what does ‘addictive’ even mean”, it seems like this would evoke some skepticism.  Even to the extent that there is a well-defined phenomenon that is labeled “addiction”, there is no associated quantification, no addicto-meter or even an index of degrees of addiction.  Thus there is no room for comparative statements.

Often the original source of such a statement was someone claiming merely that one behavior is more readily ceased than another.  This often means one takes place for more calendar time than another – e.g., alcohol is “more addictive” than crack cocaine because the typical “addict” continues to consume the drug for more years.  An alternative claim consists of counting up how often someone “tries to quit”, which can be little more than a declaration of intent, and using that as a measure.  This tends to be higher for drugs that can be used more casually – e.g., smoking is “more addictive” than heroin use because the average smoker declares “ok, that’s it, I’m quitting” much more often.  Yet another quantification is how often someone starts the behavior again after stopping for long enough to get clean.  By that measure, once again, alcohol use and smoking will be “more addictive” than more ominous behaviors, because once someone extricates himself from the culture surrounding use of a highly life-altering drug, it is a huge step to go back.

I suspect that almost no one thinks any of these is what they are being told when they read “more addictive”.  After all, why would someone use a sweeping term like “more addictive” when what they really mean is something much more specific?  Actually that is pretty easy to answer, but the point is that the phrase misleads people who think they know what they are being told.

So what does today’s news reporter mean by “more addictive”?  I would guess that he has no idea.  What was the basis for the claim that he heard and uncritically repeated?  I am not sure, but either of the first two measures above is plausible – methadone gets used for a long time, and it is not much fun, so users probably want to quit all the time (though I do not know, offhand, what the relevant statistics are).  But clearer than what the phrase means is that the counter-intuitive claim that drinking methadone is more addictive than shooting heroin seems to be political rhetoric dressed up as a meaningless pseudo-scientific statement:  however positive a report about harm reduction is, someone still sneaks in innuendo about the evils of any intervention that does not just force people to stop.  At least in this case, the politics seem to actually be sympathetic to the poor addicted user, rather than the disturbingly common disdain that demands users suffer until they quit.