But it is a straw man to say “surgery saves lives so we just have to do it” (to paraphrase one of the medics interviewed in the story). There are inevitably surgeries that were considered just on the positive side of the line, all costs and benefits considered, and this additional information (assuming, as I would guess, it was not already widely realized in learned insider circles) should tip the balance to the negative in such cases. Cognitive impairment is extremely serious, after all – we are not talking about a scar or even a small cancer risk. Additionally, other medics interviewed for the article suggested that there are actions that can be taken to reduce harm; one of them noted, “if we know that this baby needs two small procedures and two anesthetics is worse than one, then we would knock ourselves out to do it on the same day”.
Unfortunately, the concepts of tipping a marginal case and acting based on the best information we have are apparently remarkably difficult for many people to understand, and physicians apparently are no better than average at this. The news story ends with one of them commenting “we don’t believe that there is data yet that says to us either that we should change our technique or that we should frighten parents about allowing us to anesthetize their children for necessary surgery.” The advice against frightening seems good, until you realize that it means lying to parents when informing them about the risks. If we are 50% sure that there is a 10% risk of minor brain damage (I am making up the numbers), that is a 5% chance. It is no different, in terms of current decision making, from being quite sure that everyone has a 5% risk. And thus we should also act according to this calculated risk. That is, some of that “knocking ourselves out” should be happening, and some marginal surgeries should not be done.
The cognitive error in that last quote, which I strongly suspect will become official policy for a while, is to respond to the informational situation characterized by “we are moderately sure there is a problem but we are not positive, and we are not going to be all that more sure anytime soon” with “we will pretend there is no risk until the day comes that we have some particular level of evidence, at which time we will make a radical change as if the information came like a bolt from the blue”. This is a terrible way to make decisions (as is the opposite version of it, sometimes labeled “the precautionary principle”, which assumes the worst-case scenario until we are certain it is inaccurate). In all cases, we should act on the information we have and our degree of certainty about it. There are equations for how to do this precisely (the one above is a simple example). Realistically, we usually lack precise enough input numbers to do a precise calculation, but working through the calculation with rough numbers is generally pretty informative. At the very least, we should recognize that our choices, if we believe there is a good chance that something is a hazard, should shift partway toward where they would be if we were sure it was a hazard.
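For readers who like to see the arithmetic laid out, here is a minimal sketch of the calculation described above. All of the numbers are made up for illustration, just as in the post; the function names are my own labels, not anything from the article.

```python
# Acting on uncertain risk information: weight each scenario by how
# sure we are of it, rather than rounding our uncertainty to 0 or 1.

def effective_risk(p_hazard_is_real: float, risk_if_real: float,
                   risk_if_not_real: float = 0.0) -> float:
    """Expected risk across the two scenarios (hazard real / not real)."""
    return (p_hazard_is_real * risk_if_real
            + (1 - p_hazard_is_real) * risk_if_not_real)

def proceed(benefit: float, risk: float) -> bool:
    """Go ahead only if the expected benefit exceeds the expected risk."""
    return benefit > risk

# 50% sure there is a 10% risk of harm is, for decision purposes,
# the same as being certain of a 5% risk.
risk = effective_risk(p_hazard_is_real=0.5, risk_if_real=0.10)
print(risk)  # 0.05

# A marginal surgery whose benefit was judged just above a 3% risk line
# now tips to the negative under the 5% effective risk.
print(proceed(benefit=0.03, risk=risk))  # False
```

Both failure modes the post describes correspond to plugging the wrong probability into this calculation: “pretend there is no risk until proven” uses `p_hazard_is_real = 0`, while the worst-case precautionary version uses `p_hazard_is_real = 1`; acting on the information we actually have means using the 0.5 in between.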
We can hope that the FDA panel has at least one person on it who is an expert in optimal decision making under uncertainty, rather than just being a collection of experts in surgery and anesthesia, but experience suggests we should not count on it. At a minimum, we should hope for just one person who understands that “we are not ready to decide to do anything yet” is a decision like any other.
Sorry for not being very entertaining today. It was hard to find any humor in this one.