Inside the C.D.C.’s Pandemic ‘Weather Service’

In January 2020, when most health officials and many scientists were still blind to the coming crisis, Jeffrey Shaman, an infectious-disease modeler at Columbia University, had a hypothesis. Based on how fast the novel coronavirus appeared to be spreading in China — and the fact that it was already popping up in other countries — he suspected that a lot of people were infecting others without ever getting sick themselves. And if that was true, he feared, the outbreak might already be approaching pandemic potential. “The respiratory viruses we tote around like that are the ones that don’t knock us on our asses — they don’t put us in bed before we start shedding them,” Shaman told me recently. “Viruses like that can be nearly impossible to control.”

To find out more about how contagious undocumented cases might be, Shaman and his team built one of the first Covid-19 models. Modeling infectious-disease spread is an inherently difficult business. Good models require reams of carefully collected data and highly specialized computer software — and even then, the work is notoriously tricky. This is especially true in the case of projection and forecasting models, which aren’t just trying to study the underlying mechanisms of real-world events but to describe a range of behaviors or possible futures. As election polls, weather apps and fantasy-football enthusiasts routinely demonstrate, the most mathematically rigorous forecasts can still be wrong, just as the sloppiest guesswork can, by pure chance, be right.

Pandemic forecasting is especially fraught. For one thing, the spread of infectious disease is not governed by physical laws the way that, say, the movement of weather systems is, and is heavily influenced by human behavior, which is much harder to quantify and predict. For another, when something like Covid-19 hits, scientists don’t have a trove of historical data to mine, or previous forecasts to draw from, as they study it. Unlike many other things that catch forecasters’ attention — from hurricanes to presidential elections — pandemics are both rare and unique. “There’s a joke among epidemiologists,” Brandon Dean, an emergency-response planner in the Los Angeles County Public Health Department, told me. “If you’ve seen one pandemic, you’ve seen one pandemic.” Nonetheless, scientists say, rigorous modeling still has a role to play in pandemic-response efforts. It’s one of the only ways to gauge how, where and how quickly a given virus is spreading or which interventions might slow it down. And however guarded a set of predictions may be, a forecast that’s grounded in mathematics is still preferable to one that has been shaped by wishful thinking or, worse, politics.

For their model, Shaman and his team used data on confirmed cases and the daily movements of people among 375 Chinese cities to estimate how many people a person with an undocumented infection was likely to infect. What they found horrified them: Some 86 percent of cases in China were most likely not being detected or reported at all. At that rate, he estimated, half the world could be infected in the next two years and, if the initial fatality rate held, five to six million people could die. If the study’s conclusions were correct, Shaman realized, it was probably already too late to prevent a pandemic. Still, he did what he could to sound the alarm. In February and March, he presented his findings at a large scientific conference, published them in a high-profile scientific journal and gave interviews to NPR, The New York Times and other national media outlets. Some of his colleagues told him later that his work persuaded them to wear masks early on. But for some time, he says, his message did not seem to resonate. In early February 2020, as he was warning of a coming pandemic, the nation’s top health officials were still saying with confidence that the outbreak was not driven by asymptomatic transmission and that it could be controlled.

“What we need to figure out now is why communication was so difficult,” Shaman said. “Why was it so hard for epidemiologists and public-health officials to get on the same page? Why did so many leaders fail to engage with the best evidence or even just the right experts?”

Efforts to model the coronavirus pandemic have met with mixed success. On one hand, much of what scientists forecast early on has proved accurate: The new virus is far worse than flu, more than five million people have died from Covid worldwide and an estimated half of all Covid cases have been transmitted by people without symptoms. As Shaman warned, undocumented transmission has made containment difficult and eradication all but impossible. On the other hand, the pandemic’s five or so waves have defied prediction or even a clear explanation. “Nobody knows why the virus surged in the South or in the Upper Midwest when it did,” Roni Rosenfeld, a computer scientist at Carnegie Mellon University in Pittsburgh, told me. “Nobody predicted when those surges would start, when they would peak, how high those peaks would be or when they would decline.”

Part of the problem is that pandemics are rare events and the science of modeling or predicting them is underdeveloped. But scientists say that those challenges are exacerbated by a fundamental failure in governance: There has been no national system in the United States for infectious-disease forecasting. There has been no central authority or convening body to gather practitioners in times of crisis, no formal mechanism for helping them connect with policymakers or health officials and no consensus about which strategies to employ when and what counts as rigorous work. “Whatever its shortcomings, disease forecasting and analytics is still one of our best opportunities to get ahead of outbreaks and to save lives in the process,” says Dylan George, an infectious-disease modeler and epidemiologist at the Centers for Disease Control and Prevention. “And that opportunity gets repeatedly squandered because we aren’t organized.”

George and a small group of colleagues have spent much of the past decade advocating for a forecasting center that will do for infectious-disease outbreaks what the National Weather Service has done for weather: make forecasting more consistent, more reliable and much more routine. This August, while the Delta variant surged across the American South, federal officials finally established one: the Center for Forecasting and Outbreak Analytics (C.F.A.), part of the Centers for Disease Control and Prevention. The new center, which has been given about $200 million in initial funding, could help stop the next pandemic in its tracks — but only if scientists and health officials can bridge some longstanding divides.

Illustration by Julia Dufossé

Scientists have been modeling infectious-disease outbreaks since at least the early 1900s, when the Nobel laureate Ronald Ross used mosquito-reproduction rates and parasite-incubation periods to predict the spread of malaria. In recent decades, Britain and several other European countries have managed to make forecasting a routine part of their infectious-disease control programs. So why, then, has forecasting remained an afterthought, at best, in the United States? For starters, the quality of any given model, or resulting forecast, depends heavily on the quality of data that goes into it, and in the United States, good data on infectious-disease outbreaks is hard to come by: poorly collected in the first place; not easily shared among different entities like testing sites, hospitals and health departments; and difficult for academic modelers to access or interpret. “For modeling, it’s crucial to understand how the data were generated and what the strengths and weaknesses of any data set are,” says Caitlin Rivers, an epidemiologist and the associate director of the C.F.A. Even simple metrics like test-positivity rates or hospitalizations can be loaded with ambiguities. The fuzzier those numbers are, and the less modelers understand about that fuzziness, the weaker their models will be.

Another fundamental problem is that the scientists who make models and the officials who use those models to make decisions are often at odds. Health officials, concerned with protecting their data, can be reluctant to share it with scientists. And scientists, who tend to work in academic centers and not government offices, often fail to factor the realities faced by health officials into their work. Misaligned incentives also prevent the two from collaborating effectively. Academia tends to favor advances in research, whereas public-health officials need practical solutions to real-world problems — and they need to implement those solutions on a large scale. “There’s a gap between what academics need to succeed, which is to publish, and what’s needed to have real impact, which is to build systems and structures,” Rosenfeld says.

These shortcomings have hampered every real-world outbreak response so far. During the H1N1 pandemic of 2009, for example, scientists struggled to communicate effectively with decision makers about their work and in many cases failed to access the data they needed to make useful projections about the virus’s spread. They still built many models, but almost none of them managed to influence the response effort. Modelers faced similar hurdles with the Ebola outbreak in West Africa five years later. They managed to guide successful vaccine trials by pinpointing the times and places where cases were likely to surge. But they were not able to establish any coherent or enduring system for working with health officials. “The network that exists is very ad hoc,” Rivers says. “A lot of the work that gets done is based on personal relationships. And the bridges that you build during any given crisis tend to evaporate as soon as that crisis is resolved.”

Scientists and health officials have made many attempts to close these gaps. They’ve created several programs, collaborations and initiatives in the past two decades — each one meant to improve the science and practice of real-world outbreak modeling. How well those efforts fared depends on whom you ask: One such effort changed course after its founder retired; some ran out of funding; others still exist but are too limited in scope to tackle the challenges at hand. Marc Lipsitch, an infectious-disease epidemiologist at Harvard and the C.F.A.’s director for science, says that, nonetheless, each contributed something to the current initiative: “It’s those previous efforts that helped lay the groundwork for what we are doing now.”

At the pandemic’s outset, for example, modelers relied on the lessons they learned from FluSight, an annual challenge in which scientists develop real-time flu forecasts that are then gathered on the C.D.C.’s website and compared with one another, to build a Covid-focused system that they called the Covid-19 Forecast Hub. By early April 2020, this new hub was publishing weekly forecasts on the C.D.C.’s website that would eventually include death counts, case counts and hospitalizations at both the state and national levels. “This was the first time modeling was formally incorporated into the agency’s response at such a large scale,” George, who is director for operations for the C.F.A., told me. “It was a huge deal. Instead of an informal network of individuals, you had somewhere in the realm of 30 to 50 different modeling groups that were helping with Covid in a consistent, systematic way.”

But if those projections were painstaking and modest — scientists ultimately decided that any forecasts more than two weeks out were too uncertain to be useful — they were also no match for the demands of the moment. As the coronavirus epidemic turned into a pandemic, scientists of every ilk were flooded with calls. School officials and health officials, mayors and governors, corporate leaders and event organizers all wanted to know how long the pandemic would last, how it would unfold in their specific communities and what measures they should employ to contain it. “People were just freaking out, scouring the internet and calling any name they could find,” Rosenfeld told me. Not all of those questions could be answered: Data was scant, and the virus was novel. There was only so much that could be modeled with confidence. But when modelers balked at these requests, others stepped into the void.

“Tons and tons of models were built,” Shaman told me. “Some of them, including some by people with no prior experience doing this type of work, were good. But many were frankly terrible.” And when it came to distinguishing between good models and bad ones, health officials were left on their own. “There was no unified national voice saying: ‘These are the basic facts. These are the shared U.S. government views of what may be happening and where we are uncertain. And this is how we are trying to resolve that uncertainty,’” Lipsitch says. Without that coordination, chaos prevailed. Early on, some officials leaned heavily on assumptions that experts say were overconfident and clearly unreliable. Others, especially at the local level, struggled to find anyone at all who could help answer their most crucial forecasting questions. And when modelers and officials did manage to connect, they often found themselves thwarted by other concerns.

In the late fall of 2020, for example, when hospitals in Allegheny County in Pennsylvania started filling up, health officials there contacted Rosenfeld at Carnegie Mellon. They wanted to know how long the uptick would continue and when their hospital capacity was likely to be exceeded so that they could plan accordingly. Rosenfeld tried to be both cautious and quick. He did not have a crystal ball, he said. But because hospitalizations were a lagging indicator, he could look at the cases today and make a reasonable projection about what the hospitalization rate would be in two weeks — just enough time to turn a convention center into a hospital surge unit. “We moved at breakneck speed because we knew it was time-critical,” Rosenfeld says. “Because of data-privacy rules, we could not get access to their data or integrate our model into their system from the outside.” As the two groups were devising a workaround, the peak they were trying to prepare for came and went.
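
The logic Rosenfeld describes can be sketched in a few lines of code. The toy calculation below is not his actual model; it simply assumes, for illustration, that hospital admissions trail reported cases by roughly two weeks and that a fixed fraction of reported cases end up in the hospital. The lag, the admission rate and the case counts are all invented placeholders.

```python
# A toy illustration of the "lagging indicator" idea, not Rosenfeld's model.
# Assumptions (invented for illustration): hospital admissions trail reported
# cases by about 14 days, and a fixed share of reported cases are admitted.

LAG_DAYS = 14            # assumed case-to-admission delay, in days
ADMISSION_RATE = 0.05    # assumed share of reported cases later hospitalized

def project_admissions(daily_cases, lag_days=LAG_DAYS, rate=ADMISSION_RATE):
    """Project daily hospital admissions over the next `lag_days` days by
    scaling the most recent reported case counts by an assumed admission rate."""
    recent_cases = daily_cases[-lag_days:]
    return [round(count * rate) for count in recent_cases]

# Example with invented numbers: a county seeing a steady rise in cases.
cases_last_two_weeks = [400, 420, 450, 480, 500, 530, 560,
                        600, 640, 690, 730, 780, 830, 900]
print(project_admissions(cases_last_two_weeks))
# -> a rough day-by-day estimate of admissions two weeks out, e.g. [20, 21, ...]
```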

Not every Covid modeling story was a disaster, of course. When officials in Seattle asked Dylan George to help them figure out why more college students were suddenly being hospitalized with Covid-19 — were they becoming more susceptible or just abandoning social-distancing edicts? — he was able to interpret a series of models that showed it was most likely the latter. “All the parts came together in a beautiful way,” George says. “We had good models, and good interpretations of those models, and then we had an enlightened mayor who actually used that information to set policy.” But stories like this have been the exception almost by definition: They require not only the right data for the questions at hand, but also the right scientists and policymakers to connect at the right time. “What we need to figure out now is how to scale those successes,” George added. “How do we make it so that it’s not just Seattle but communities across the country that can benefit?”

As scientists grapple with the failures and limitations of their modeling efforts, they have found themselves suffering from weather envy. Weather forecasting was once primitive and unreliable. But after decades of sustained investment — during which satellites were built, supercomputers invented and a cadre of professionals recruited — the science improved. Weather models became more mathematically rigorous; weather forecasts more accurate. And before long, people came to trust and depend on that work. A century ago, natural disasters were seen as acts of God: mysterious, unpredictable, governed by alchemy. Today, hardly anyone gets dressed without checking their favorite weather app.

Those developments owe as much to structural changes as to scientific advances. In 1970, a host of weather-forecasting efforts were gathered into the National Weather Service, which is now integrated with several related agencies under the National Oceanic and Atmospheric Administration. As Rivers has noted, it was this centralization that enabled forecasters and decision makers to work together.

Rivers, who will head the C.F.A.’s communication efforts, started developing infectious-disease models during the 2014 Ebola epidemic, when she was still a graduate student. She has spent much of her career thinking about and advocating for a national forecasting center, and the example that has most guided her thinking, she says, is the Weather Service. Among other things, she says, it helps that the agency tracks and measures the accuracy of its forecasts, which are continually reviewed and improved. In recent years, the service has also recalibrated its public messaging. Instead of focusing exclusively on the technicalities of barometric pressure, for example, the agency now also explains that under certain conditions, small trees may be uprooted or windows may shatter. Rivers has paid particular attention to this lesson. “For infectious-disease forecasting, the people producing the models are also the ones tasked with communicating about them,” she says. “And they are not always the best people for that purpose.”

For now, the new center exists almost exclusively in planning documents, virtual meetings and white papers, but “it’s been a very hopeful sort of turning point,” says Nicholas Reich, a biostatistician at the University of Massachusetts, Amherst, whose team created the Covid-19 Forecast Hub. “Having an authoritative national center is going to bring us a long way toward establishing standards and building credibility. And they’ve put exactly the right people in charge.”

One recent afternoon, I joined the center’s founding team in Atlanta by video as they discussed how to set up “test beds,” or small trials, of promising modeling initiatives so that they can evaluate and scale the best ones. The center will focus heavily on analytics: collecting data from a wide array of sources — including hospital records, public-health databases and app-based mobility trackers — and using it to gauge key parameters like transmissibility and fatality rates that can help officials determine how likely it is that an outbreak will become a pandemic, how severe that pandemic could be if it happens and whether there’s a chance of stopping it. The center will then use those metrics to help officials answer the same kinds of questions that have plagued them for the past two years: Should nursing homes bar visitors? Should schools require masks? Which testing policies make the most sense? And what is the smartest way to deploy a limited vaccine supply?

In addition to improving the data generation and sharing that goes into this analytic work, the center’s principals will also work to improve the models — and forecasts — themselves. “We’re really not very good at forecasting right now,” Lipsitch says. “The horizon is just a few weeks and even that is challenging.” Some infectious-disease modeling has improved in recent years. With the right data, scientists can forecast repeating outbreaks like seasonal flu or dengue with some confidence. But pandemics are a different beast. They occur far less frequently, and each one is distinct from its predecessors.

“We know in general terms what drives a pandemic,” Rosenfeld told me later. “We know that people’s behavior, the mode of transmission and the virus’s characteristics all play a role. But we don’t have a detailed, quantitative understanding of how all these forces interact.” With Covid, the biggest wild card has been human behavior. For all that has been learned about the Covid-19 pathogen, the course of the pandemic has hinged on human systems more than viral ones. “The quantities we are measuring are not biological quantities like incidence or prevalence of disease,” Lipsitch says. “They are the number of positive tests, which depend on all these very human things, like test availability and people’s desire to be tested and whether or not they are traveling and what their schools or workplaces make them do. That makes it impossible to compare systems from week to week or place to place.” When cases double in a given jurisdiction, it might mean that the virus is surging, he says. Or it might just be that more people are being tested.
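
Lipsitch’s point about raw counts can be made concrete with a small, invented example: if reported cases double from one week to the next but the number of tests administered also doubles, test positivity stays flat — a hint that the jump may reflect testing volume rather than a true surge.

```python
# Invented numbers illustrating why raw case counts alone can mislead:
# doubled cases alongside doubled testing leaves test positivity unchanged.

def positivity(positive_tests, total_tests):
    """Share of administered tests that came back positive."""
    return positive_tests / total_tests

week_one = positivity(positive_tests=500, total_tests=10_000)    # 0.05
week_two = positivity(positive_tests=1_000, total_tests=20_000)  # still 0.05

print(f"Week one positivity: {week_one:.1%}")  # 5.0%
print(f"Week two positivity: {week_two:.1%}")  # 5.0%
# Reported cases doubled, but positivity is flat -- the rise may simply
# reflect more testing rather than faster spread.
```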

Still, the scientists I spoke with were hopeful that with more time and better data, it will be possible to sort through these connections and to unpack the pandemic’s mysteries, including its unexplained pattern of waves. “I don’t think the waves were chance occurrences,” Rosenfeld says. “I think they were driven by fundamental forces that are relatively stable. And I think it might be possible to understand those forces retrospectively and then to use that knowledge in the next pandemic.”

The coronavirus pandemic is the most documented pandemic in the history of the world, Rosenfeld says. Scientists stand to learn more from it than they have learned from all previous pandemics combined. And as the C.F.A. comes to life, they will finally have a chance to put those lessons to use. If even a fraction of the new center’s visions are realized, when the next pandemic strikes, scientists and decision makers will have well-established connections and clear mechanisms for collaboration. They will also have more robust data sets and a trove of pandemic-forecasting research to draw from. And they may, at last, be equipped to talk to one another — and to the public — about what they’re doing and why it matters.
