British statistician George Box famously stated that “all models are wrong, but some are useful.” The nation’s experience with COVID-19 has highlighted this fact as policymakers have struggled to calibrate their actions based on imperfect data and modeling. Yet modeling is useful and will continue to be an important aspect of emergency management.
The United States has dealt with major pandemics before, but the scope and scale of the COVID-19 pandemic have stressed the nation’s public health system and the larger emergency management enterprise in a manner unprecedented in the modern era. The fragility of the global supply chain resulted in shortages of personal protective equipment (PPE). Limited hospital capacity led to overflowing facilities and scarce medical equipment. Inadequate testing and tracing capabilities led to invisible outbreaks that could only be controlled with blunt measures like lockdowns. The entire experience has exposed weaknesses and tensions within the nation’s emergency management system and will likely lead to major policy changes.
Different Models With Varying Accuracies
Effectively modeling the spread of COVID-19 has proven to be difficult. The Centers for Disease Control and Prevention (CDC) and numerous other experts and academic institutions, including Imperial College London and the University of Washington Institute for Health Metrics and Evaluation (IHME), have produced models with varying degrees of accuracy. For example, the heavily relied-upon IHME model has been criticized for consistently underestimating the actual death count. If modeling were only an academic exercise, such discrepancies would matter less. However, policymakers at all levels of government are using these models to make major decisions, including whether to shut down or open up parts of society, with life-or-death consequences on the line.
Models can be “wrong” in several different ways. All models are wrong in that they simplify the real world, including only the essential parts necessary to elucidate the phenomenon one hopes to understand. As Albert Einstein is often credited with saying, “Everything should be as simple as possible, but not simpler.” Herein lies the trick to developing a “useful” model. Oversimplifying a model risks missing important forces and the full range of possible outcomes. Overcomplicating a model risks muddling insights or worse, such as rescuing a fundamentally flawed theory with increasingly ornate bells and whistles.
In statistical models, the estimation of model parameters is a further potential source of inaccuracy. In the case of the coronavirus pandemic, problems have included non-random data collection, sparse data due to the novelty of the disease and insufficient testing, misleading data due to possible manipulation of publicly disseminated figures in some countries, and failures to properly account for these measurement errors. However, it is important to distinguish flaws in the estimation procedure itself from “imprecise” predictions that simply reflect irreducible uncertainty in the estimates.
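To make that distinction concrete, consider a deliberately simplified sketch, unrelated to any of the models cited above: fitting an exponential growth rate to a handful of noisy daily case counts. The growth rate, case counts, and noise level below are invented for illustration only.

```python
# Illustrative only: a sound log-linear fit applied to sparse, noisy data
# still yields a wide uncertainty interval around the growth rate.
# All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
TRUE_GROWTH = 0.15  # hypothetical daily exponential growth rate

def estimate_growth(n_days):
    """Fit log(cases) = a + r*t and return r with a rough 95% interval."""
    t = np.arange(n_days)
    cases = 50 * np.exp(TRUE_GROWTH * t) * rng.lognormal(0.0, 0.3, n_days)
    (r, _), cov = np.polyfit(t, np.log(cases), 1, cov=True)
    se = np.sqrt(cov[0, 0])  # standard error of the fitted slope
    return r, (r - 1.96 * se, r + 1.96 * se)

for n in (7, 28):
    r, (lo, hi) = estimate_growth(n)
    print(f"{n:2d} days of data: growth ~ {r:.3f} (95% interval {lo:.3f} to {hi:.3f})")
```

With four times as much (still hypothetical) data, the same procedure narrows the interval considerably; the earlier width reflects honest imprecision rather than a flaw in the method.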
Having a basic understanding of modeling and data analytics is a key skill for emergency managers. Those relying on models to make decisions need to appreciate that they will always contend with some degree of uncertainty. Understanding the underlying assumptions associated with a given model and consulting more than one model before making major decisions are prudent courses of action. Going further to glean insights from disparate fields of study is even better. For instance, useful studies of terrorism risk have been produced in the fields of criminology, economics, and political science.
A Collective Forecast Model
With so many different COVID-19 models and methodologies to consider, one promising practice is “ensemble” modeling, the effort to compare and merge various models to produce a collective forecast. In doing so, researchers can synthesize the various models and develop a more accurate picture for policymakers to consider. Roughly, this process resembles throwing out the high and low scores in some Olympic events to eliminate data that could skew the overall result.
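As a rough illustration of the idea, not a description of any particular group’s methodology, a trimmed ensemble of point forecasts might look like the sketch below. The model names and figures are hypothetical placeholders, and real ensembles typically combine full predictive distributions rather than single numbers.

```python
# Hypothetical four-week death-count forecasts from five independent models.
# Names and values are invented for illustration only.
forecasts = {
    "model_a": 11800,
    "model_b": 12500,
    "model_c": 13100,
    "model_d": 9200,   # an apparent low outlier
    "model_e": 16400,  # an apparent high outlier
}

values = sorted(forecasts.values())
trimmed = values[1:-1]                  # drop the single highest and lowest
ensemble = sum(trimmed) / len(trimmed)  # average what remains

print(f"Simple average:  {sum(values) / len(values):,.0f}")
print(f"Trimmed average: {ensemble:,.0f}")
```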
The National Hurricane Center uses a similar concept known as consensus modeling, which averages the results of several models to reach a consensus on the potential storm track. Consensus models are generally more accurate than any single forecast model. Emergency managers, especially those in coastal areas, should also be familiar with the “cone of uncertainty,” which depicts the probable path of the storm’s center based on historical forecast errors. The cone provides at least a rough idea of the storm track and areas of potential impact. However, it is important to appreciate that the center of the storm historically remains within the cone only about two-thirds of the time and tracks outside of it about one-third of the time. That is, these models help emergency managers allocate resources and manage risk intelligently but cannot perfectly predict the future.
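The same averaging idea can be sketched for storm tracks. In the snippet below, the model positions and the historical error radius are invented for illustration; the National Hurricane Center’s actual consensus aids and cone construction are more involved.

```python
# Illustrative only: average several models' predicted storm-center positions
# at one forecast hour, then attach an uncertainty radius drawn from
# (hypothetical) historical forecast errors.
import statistics

# Hypothetical 48-hour storm-center forecasts as (latitude, longitude)
tracks_48h = {
    "model_1": (27.1, -79.8),
    "model_2": (27.6, -80.4),
    "model_3": (26.9, -80.1),
    "model_4": (27.4, -79.5),
}

consensus_lat = statistics.mean(lat for lat, _ in tracks_48h.values())
consensus_lon = statistics.mean(lon for _, lon in tracks_48h.values())

# Radius (nautical miles) that contained roughly two-thirds of past 48-hour
# forecast errors, a stand-in for how the cone of uncertainty is sized.
HISTORICAL_TWO_THIRDS_ERROR_NM = 70

print(f"Consensus 48-hour position: {consensus_lat:.1f}N, {abs(consensus_lon):.1f}W")
print(f"Cone radius at 48 hours: ~{HISTORICAL_TWO_THIRDS_ERROR_NM} nm "
      "(the center still falls outside about one-third of the time)")
```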
Emergency managers often find themselves in the middle of policy discussions, helping to gather and interpret the necessary information so policymakers can make informed decisions. Modeling, whether for public health emergencies, terrorism risk, or weather events, is here to stay, and its use will only increase as data collection and the computational power needed to process massive data sets continue to grow. Emergency managers must embrace, or at least understand, the fundamentals of modeling, predictive analytics, data visualization, and other quantitative tools and techniques as the discipline adapts to meet COVID-19 and other contemporary challenges.
Terry Hastings
Terry Hastings is the senior policy advisor for the New York State Division of Homeland Security and Emergency Services (DHSES) and an adjunct professor for the College of Emergency Preparedness, Homeland Security and Cybersecurity at the State University of New York at Albany. He oversees the DHSES policy and program development unit and a variety of statewide programs and initiatives.
Colin Krainin
Colin Krainin is a special assistant at DHSES. Previously he was an associate research scholar and lecturer at Princeton University. He received his Ph.D. in Economics from the University of Texas at Austin.