We Need A Standard Unit Of Measure For Risk
More than three decades ago a Stanford professor proposed a uniform way of expressing your odds of dying from a specific cause: the micromort. It’s time it went mainstream.
“The average weekly chance that a boosted person died of Covid was about one in a million during October and November… That risk is not zero, but it is not far from it. The chance that an average American will die in a car crash this week is significantly higher—about 2.4 per million.”
There was a flurry of controversy last week about David Leonhardt’s “The Morning” newsletter in the Times, with some people arguing that Leonhardt was downplaying the continuing threat of COVID by comparing the risk involved to that of automobile accidents. I’m generally sympathetic to the way Leonhardt has written about the pandemic, but last week’s column reminded me of a more general point that I’ve been thinking about since the early days of 2020: we need a standardized unit of measure to describe mortality risk that can be used for all kinds of activities, not just living through a viral outbreak.
Our problems with making risk assessments are manifold, and there are many terrific books out there on the ways probabilistic thinking messes with our intuitive understanding of the world. Tim Harford (who has written one of those books himself) had a very helpful rundown of them on Twitter a few days ago.
(David Epstein’s always stimulating newsletter discussed some of these topics a few weeks ago as well, in an essay with the excellent title: “Everything In Your Fridge Causes and Prevents Cancer.”)
One problem we have is that risk is usually both relative and cumulative in nature. Vaccines work because they reduce the overall risk of death dramatically, but in almost every case, some risk remains. So you’re not eliminating a threat, you’re just lowering the odds that it will impact you. And you can reduce the risk further by adding additional measures, like masking or social distancing. So to understand the total risk that you face in a given situation, you have to understand the base rate of danger that you’re confronting, and then the relative impact of the interventions you’re considering, whether it’s wearing a seat belt or putting on a mask.
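To make that layering concrete, here’s a back-of-the-envelope sketch in Python. The numbers are invented, and it assumes the interventions act independently, which real-world measures never quite do:

```python
# Illustrative only: invented numbers, and the interventions are assumed
# to act independently (real-world measures rarely combine this cleanly).
base_risk = 200e-6          # hypothetical weekly chance of dying: 200 in a million
vaccine_reduction = 0.90    # hypothetical 90% relative risk reduction
mask_reduction = 0.50       # hypothetical 50% relative risk reduction

residual = base_risk * (1 - vaccine_reduction) * (1 - mask_reduction)
print(f"Residual risk: {residual * 1e6:.0f} in a million per week")
# Residual risk: 10 in a million per week
```

The point isn’t the specific numbers; it’s that the total risk is the base rate multiplied through each intervention’s relative effect.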
Another problem with risk is that it often revolves around very small probabilities, which can lead to all sorts of base-rate confusion. You’ll read an article about a new study that finds eating bacon doubles the risk of pancreatic cancer and it’ll sound terrifying. (I am making these numbers up, so do not adjust your diet based on them.) But what if your initial odds of getting pancreatic cancer are 1 in 100,000? Then you can just as accurately say that eating bacon will change your odds of getting pancreatic cancer from 0.00001 to 0.00002. That doesn’t sound nearly as worrisome.
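Here is that same made-up bacon example worked through in numbers, just to show how a scary relative risk can sit on top of a tiny absolute one:

```python
# Same invented figures as above: a doubled relative risk on a
# tiny base rate is still a tiny absolute risk.
baseline = 1 / 100_000      # hypothetical odds of pancreatic cancer
relative_risk = 2.0         # the headline: "bacon doubles the risk"

with_bacon = baseline * relative_risk
print(f"Baseline:          {baseline:.5f}")                # 0.00001
print(f"With bacon:        {with_bacon:.5f}")              # 0.00002
print(f"Absolute increase: {with_bacon - baseline:.5f}")   # 0.00001, i.e. one extra case per 100,000 people
```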
A few decades ago the Stanford professor Ronald Howard proposed a unit of measure for mortality risk. He called it the “micromort.” One micromort equals a one-in-one-million chance of dying. Howard was an expert in decision theory, and he had recognized that many of life’s most complicated decisions—particularly medical ones—involved difficult assessments of small probabilities. Howard imagined the micromort as a common framework that, for example, a doctor could use with a patient to describe the risks of undergoing a specific procedure—and the risks of not undergoing it.
The standard never really took off, but it has seen something of a revival in the COVID age. There was an op-ed in the Times in May of 2020 that discussed COVID risk using the language of micromorts. The Wharton School professor Ethan Mollick posted a short thread on Twitter last summer discussing the ways vaccines reduced your COVID risk, using Howard’s unit of measure:
I should say up front I’ve always loved Howard’s idea of the micromort. The one problem I have with the concept—which is a problem intrinsic to these kinds of risk assessments generally—is that we don’t have an intuitive understanding of very small probabilities. How risky really is a one-in-a-million chance of dying? Is that skydiving-level risk? Plane crash risk? Asteroid-extinction-event risk? We have no feel for these odds, in part because no one bothers to teach these sorts of things in school. For the unit of measure to be useful to a layperson, you want it to be anchored in something intelligible, the way the Celsius scale is neatly anchored in the two extremes of freezing and boiling water.
We’ve seen the need to anchor risk again and again in the endless comparisons of COVID-19 to the flu, which has been central to the pandemic discourse from the very start. Whether we’re trying to warn people about the heightened dangers of COVID or whether we’re trying to dismiss the threat, we find ourselves coming back to the anchor of seasonal flu because it’s a known danger that we have largely made our peace with as a society, thanks in part to existing interventions like flu shots. So whether you’re saying “it’s no different than the flu” or “it’s way more dangerous than the flu” you’re trying to translate the novel risk of COVID into a more familiar threat that people can get their heads around.
The problem with using flu as an anchor is that there is a great deal of variability from year to year (and even more from season to season) in the danger posed by influenza, depending on the variants that happen to be in circulation. (I also think people generally underestimate how deadly flu can be.) That’s why I think a much better anchor is the one that Leonhardt employed in his column last week: the risk of dying in an automobile crash. No doubt some Americans underestimate how dangerous driving is, given our car-obsessed culture, but I suspect almost everyone understands that there is some material risk of a fatal accident when we get behind the wheel. Most of us know someone personally who died in a car crash, and of course the list of celebrity vehicular deaths is enormous. And while auto fatalities have decreased dramatically from the James Dean days, the mortality rate is generally fairly stable year-to-year. According to the latest statistics, in the United States the fatality rate is around 1.2 per 100 million miles driven. There’s some small risk in driving—enough that we have speed limits and seat belt laws and many other mandated interventions to keep the risk from being even higher—but for most of us the risk is one we’re willing to take.
The convenient thing about using the automobile framing is that a two-hour trip at mostly highway speeds happens to come out almost exactly to a one-in-a-million chance of dying, Howard’s original measurement. So the “micromort” unit could still be employed. It’s a nice clean number, easy to use in rough, back-of-the-envelope calculations, and it would be anchored in an experience that just about everyone is familiar with: driving for two hours on a highway.
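As a rough sanity check on that anchor, here is a sketch using the ~1.2 deaths per 100 million miles figure above; the average speed is my assumption, so treat the “two hours” as approximate:

```python
# Rough check of the driving anchor. The fatality rate comes from the
# figure cited above; the average speed is an assumption.
fatalities_per_mile = 1.2 / 100_000_000

miles_per_micromort = 1e-6 / fatalities_per_mile
print(f"Miles per micromort: {miles_per_micromort:.0f}")                   # ~83 miles

avg_speed_mph = 45   # assumed blend of highway and local driving
print(f"Hours per micromort: {miles_per_micromort / avg_speed_mph:.1f}")   # ~1.9 hours
```

So the two-hour figure is in the right ballpark; the exact equivalence depends on what average speed you assume.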
With that anchor, you can then do some more precise assessments of just how risky other activities are, and translate them into an experience that’s familiar to everyone. I did some rough calculations on the NYC COVID data, and came up with the estimate that just going about your normal business in New York City during the first week of March 2020—before the lockdowns kicked in—was 625 micromorts. In other words, more than six hundred times more dangerous than getting in your car and driving to Hartford. During the pre-Omicron days in the fall of 2021, hanging out in New York for a week was about 10 micromorts. If you were a healthy vaccinated person, your risks were indeed—as Leonhardt suggested—likely lower than taking a road trip for a few hours. The height of the Omicron wave last week brought that number up to 80 micromorts. Not terrifying, but certainly reason to buckle up for a month or two.
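For what it’s worth, the form of that back-of-the-envelope calculation is simple. The numbers below are placeholders rather than the real city data, and the result is an average across the whole population; any individual’s risk varies with age, vaccination status, and so on:

```python
# Back-of-the-envelope weekly micromorts for an average resident.
# Placeholder inputs; swap in real weekly deaths and population figures.
weekly_covid_deaths = 84        # hypothetical deaths in one week
population = 8_400_000          # roughly NYC's population

micromorts = weekly_covid_deaths / population * 1_000_000
print(f"~{micromorts:.0f} micromorts for the week")   # ~10
```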
Having a standardized unit of measure for risk would be helpful for our personal calculations, but it could also become a core part of the way the media or public health authorities talk about threats like epidemic disease, or even seasonal flu. Post-COVID—if we ever get there—I suspect I will still be interested to know if the flu risk starts to climb in New York, even by a few micromorts—I wouldn’t radically change my plans, but I might put on a mask in the subway for a few weeks. For the past seventy years, every single local news broadcast has been telling you what the temperature is going to be tomorrow, and the chance of precipitation. Why shouldn’t they also include genuinely life-or-death odds? Basically, risk weather: “The next week looks like we will be reaching a high of 50 micromorts, thanks to the new variant—though only about 8 micromorts if you’re vaccinated. For seniors, though, we’ll probably see a high in the 100s, so you might want to cut back on socializing indoors.” (Part of this was inspired by an idea the epidemiologist Caitlin Rivers mentioned to me when I was interviewing her for this Times Magazine piece in 2020: an epidemic forecasting center, modeled after the National Weather Service.) The 11 o’clock news has been teasing upcoming stories about the “ordinary household object that might be giving you cancer” for as long as I can remember. Micromorts might actually compel them to be more precise about the actual risks at play.
And of course a standardized way of describing risk would allow us to debate—and formalize—a set of thresholds for relaxing public health interventions: mask mandates or restaurant vaccination restrictions could be pegged to a specific micromort level. We won’t all agree on what our tolerance should be as a society, but we’ll at least have a firmer grasp of the magnitude of the risk that’s on the table.
Risk assessment is undoubtedly trickier with epidemics because a unit of measurement like the micromort is fundamentally personal in nature: it’s the threat faced by a single person over a specific period. Epidemics are by definition social phenomena and they involve evolutionary forces that shift over time. You may personally not face much risk right now, but by exposing yourself to the virus, you give it a chance to continue a chain of replication that might lead to a new, more deadly variant, or infect an immunocompromised person whose risk profile is very different from yours. (You could make a similar argument about driving itself: there’s the personal risk of the car crash, and the long-term, collective risk of climate change or air pollution that driving in a gas-powered car heightens.) But I think there’s a strong case to be made that having some kind of standardized unit to describe risk would be far preferable to the vague, unanchored way we talk about it today.
And in the event that you feel untroubled by COVID’s current micromort levels and are planning a sporty vacation somewhere this spring, I present to you, courtesy of Wikipedia, a list of potential recreational activities ranked by micromort levels:
Skiing: 0.7 micromorts per day
Scuba diving: 5 micromorts per dive
Running a marathon: 7 micromorts per run
Skydiving: 8 micromorts per jump
Climbing Mt. Everest: 37,932 micromorts per ascent
Plan accordingly.