I had planned to cover the news today, but I had one more overview thought I wanted to communicate. I have written some of this before, but I have a few new thoughts that I think are interesting. I will repeat enough to make it unnecessary to go back and read anything.
When major failures occur in a normally functional system, they are usually due to a combination of three causes, not the single cause we often try to reduce them to:
- operator error (e.g., pilot error; occasionally a failure would occur no matter what the proximate actor did, such as when the hardware just breaks, but usually some specific goof triggers it – otherwise it would probably have already happened)
- hardware inadequacy (not necessarily that something broke, though that might be involved, but that it could have been designed to avoid the failure in question if that had been the goal)
- systems failure (rules or patterns of behavior made the operator error more likely and did not guard against or mitigate the effects of this particular failure mode; e.g., making it easy to push the wrong button, not inspecting the tires often enough, not building in redundancies)
I commented yesterday about a novel that included the U.S. CDC fighting an all-threatening disease outbreak that required all of their abilities and major police powers, contrasting that with the current government practice of nibbling away at freedoms and pleasures to provide trivial health benefits at great psychological cost. “CDC” is, of course, the now inadequate abbreviation for the Centers for Disease Control and Prevention. And who can complain about the fact that they or any other government entity have prevention as part of their mission? How can prevention be bad?
It is bad when it lets a certain ilk of people try to make everything about operator error.
An illustrative example of focusing on the operator that you have probably seen is the card reader or gas pump with multiple layers of hand-scrawled signs telling people which button to push or which direction to slide the card, accompanied by clerks who get annoyed a few dozen times a day when they have to point out the sign to those who do not notice it and consistently guess wrong about what to do. This is a case of the operator being blamed for, and being forced to compensate for, hardware and systems failure: a device designed in a way that sufficiently defies expectations that many people guess wrong about how to use it, coupled with no better system for encouraging correct usage than yelling at the operator. Another favorite is the hotel key card that the clerk repeatedly warns you to keep away from your phone and wallet lest it demagnetize, which inevitably happens anyway. These problems do not merely offend the engineer in me; they cause needless cost. They call for using better hardware (it exists in all of these cases) or figuring out a way to gently guide people to avoid the problem.
Such solutions tend to be noncontroversial. No one complains about health researchers when they figure out a hardware fix to a problem, like a new drug. No one should complain (other than, perhaps, about the cost) when government improves infrastructure to give people the opportunity to behave in a healthier way, such as by installing bike paths, making sure that urban bodegas sell fruit rather than just chips, or requiring that restaurants list calorie counts. (Occasionally someone complains about such actions, but they generally lack legitimate grounds to do so; perhaps the predicted benefit does not seem to outweigh the cost, though that is seldom the justification offered.)
But “…and Prevention” becomes a problem when it consists of harming people in an effort to force them to change their behavior. That is, it puts the onus on the operator. This is a remarkable combination of bad judgment and bad ethics. The ethical arguments, both libertarian and cost-benefit based, have been made here and elsewhere to such an extent that I see no reason to repeat them. The more practical argument is that in most arenas, the people running a complicated human system recognize that they must look for ways to improve the system, since bludgeoning people into being better operators is usually pretty useless. Pilots are not trying to crash, after all. And if all of your troops are miserable to the point that they are not functioning well, you can try to whip each of them until they perform better, but there are much better solutions.
If your students are doing badly, you can admonish them to study harder, but if it keeps happening year after year, there is probably something wrong with the teaching, or the motivation, or the community, or something else beyond the individual. Yes, each student could save his own life, but if so many of them are failing to do so, we obviously need systems fixes, just as with a card reader where the users insert the card the wrong way most of the time. Imagine a task force, assigned the job of improving the performance of schools, proposing this policy: “ban televisions and video games, institute corporal punishment and public humiliation for bad grades, and eliminate the safety net for students who cannot get through school so that they have an incentive to do better.” But that is basically what the “health promotion” people propose when tasked with disease prevention regarding drugs, diet, and exercise. They sometimes talk about making healthier communities, but a close look reveals that they are often just demanding that each individual behave differently.
When I reviewed applications to public health school from the hundreds of indistinguishable Indian applicants trying to get into American schools (usually with the intention of getting a foot in the door so they can become American physicians), their nearly identical application essays included the phrase “prevention is better than cure” in the opening paragraph. That sounds fine to a public health person, notwithstanding the pathetic repetition, until you notice that it is not always true. Preventing a particular case of a serious disease is almost always better than letting it happen, but that is not how things work. We cannot go back and prevent a particular event. We can only take prevention measures. Some of those are justified and efficient: clean water is definitely better than treating cholera, and the right vaccines and industrial regulations are appropriate and worthwhile.
But in keeping with my observations from yesterday, others represent cases of mistaking a prevention measure for an act of prevention. Just because it would almost certainly be a good thing to prevent a fatal smoking-caused cancer or accident does not mean that everything that might theoretically protect someone is a good policy measure. Prevention measures have broad negative effects and may not save anyone. I doubt that any honest, intelligent person really thinks that emotionally violent pictures on cigarette packs are going to cause many people not to smoke. But there is strong support for them because people mistake a prevention measure that seems like it might do something for a way of preventing a particular outcome. It does not work that way. Whoever it was in someone’s life who suffered or died because of a risky behavior is not going to be retroactively helped. Preventing that case would have been good, and so would curing it, but the same is not necessarily true of a prevention measure aimed vaguely in a direction that might have prevented it. And implementing public policy rules is not a healthy form of therapy.