As 2011 nears an end, emergency preparedness officials at healthcare organizations across the United States are looking with watchful eyes toward 2012, when the metrics for gauging the success of preparedness efforts are likely to become an even more important component of the emergency management process. Those officials therefore face the difficult task of not only justifying the time and money spent on emergency preparedness but also, and of greater importance, determining exactly how prepared their organizations are to respond to disasters and other events, expected or unexpected.
Those who participated in any of the numerous emergency management conferences held throughout the country this year – or who follow online blogs and LinkedIn discussions – have undoubtedly encountered, and perhaps joined, conversations about how to measure whether a healthcare organization has successfully prepared for an emergency or disaster. Unfortunately, there is not yet a commonly accepted or universally recognized measure for hospitals. Instead, most U.S. hospitals must meet the requirements of various agencies that either: (a) simply tell the hospitals whether they have “passed” (however that term is understood); or (b) offer subjective advice for program improvement. Sadly, this means that many of the nation’s hospitals cannot accurately compare their own readiness capabilities with those of other hospitals. In fact, there are at present no nationally recognized preparedness measurements defined specifically for patient safety, patient satisfaction, or employee injuries.
In a presentation during the September 2010 annual meeting of the Agency for Healthcare Research and Quality (AHRQ), Cheryl Davies (senior research assistant and project manager at Stanford University’s Center for Primary Care and Outcomes Research) and David Chin (a graduate student researcher at the University of California, Davis, Center for Healthcare Policy and Research) described a few additional challenges in predicting hospital preparedness. Prominent among them were a lack of evidence-based guidance, inconsistencies in the available data, and a broad range of potential “indicators” – their own study started with a list of more than 900.
Here it should be noted that, although such indicators are helpful, there is still a lack of actual incident-related outcome data – particularly information about patient injuries or deaths – as well as of short- and long-term follow-up on specific events. That gap will remain a significant challenge, because research can rarely be conducted during a crisis and must often rely instead on retrospective interviews or assessments, which are limited by individual recall. Despite these and other hurdles, Davies and Chin suggest several persuasive, objective metrics that may be used in the future to evaluate, and eventually validate, most if not all of their proposed indicator variables.
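To make the idea concrete, here is a minimal sketch (in Python) of one way such indicator variables could be rolled into a single, comparable readiness score: normalize each indicator against a benchmark target, then take a weighted average. The indicator names, targets, and weights are hypothetical assumptions for illustration – they are not drawn from the Davies–Chin indicator list.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float   # observed value (e.g., exercises held this year)
    target: float  # benchmark treated as "fully prepared"
    weight: float  # relative importance in the composite score

def readiness_score(indicators: list[Indicator]) -> float:
    """Weighted average of indicators, each normalized against its target.

    Values above target are capped at 1.0 so that one strong indicator
    cannot mask weakness elsewhere.
    """
    total_weight = sum(i.weight for i in indicators)
    achieved = sum(min(i.value / i.target, 1.0) * i.weight for i in indicators)
    return achieved / total_weight

# Hypothetical indicators for a single hospital (not from the actual study)
hospital_a = [
    Indicator("full-scale exercises per year", value=2, target=2, weight=3.0),
    Indicator("staff trained in incident command (%)", value=65, target=90, weight=2.0),
    Indicator("days of backup generator fuel", value=4, target=7, weight=1.0),
]

print(f"Hospital A readiness: {readiness_score(hospital_a):.2f}")  # ≈ 0.84
```

A score computed this way is only as defensible as the targets and weights behind it – which is precisely why the evidence-based validation Davies and Chin call for matters.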
Inconsistent Guidelines, Free-Floating Standards & Other Variables
At present, though – in the absence of a single set of national standards – most of the nation’s hospitals measure their readiness by assessing performance against the standards set out (though not always in specific detail) in one or more of the following nationally known policy documents:
- The findings of The Joint Commission – Meeting all of the standards within the Emergency Management, Environment of Care, and Life Safety chapters of the Survey Activity Guide for Health Care Organizations is how most general acute-care hospitals predict and measure readiness. In addition, testing their emergency operations plans during exercises, annually reviewing policies and management plans, and developing strong performance indicators can add significant value to a hospital program. However, without the ability to realistically compare such results with similar data from other hospitals, these readiness indicators are still measured in a vacuum – a limitation illustrated in the sketch after this list.
- The National Incident Management System (NIMS) – Meeting the 14 NIMS Implementation Objectives required for healthcare organizations (as described in NIMS Alert 07-08) is another helpful way to gauge readiness, particularly with respect to community involvement. But it too falls short by not providing clear performance metrics that can be compared with those of other organizations.
- ASPR Benchmarks (provided by the Office of the Assistant Secretary for Preparedness and Response [ASPR] of the U.S. Department of Health & Human Services) – Any hospital receiving that agency’s Hospital Preparedness Program funds must participate in programs meeting these requirements. However, state and local health authorities have considerable flexibility in describing how the requirements are met, and can even define the role played by their own participating healthcare organizations.
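The sketch below shows, in miniature, why an all-or-nothing “passed” result is hard to compare: a hospital that misses one item looks identical to one that misses them all, and two hospitals that both pass may differ widely in actual capability. The requirement wording is hypothetical, not quoted from any accreditation standard.

```python
# Hypothetical requirement wording - illustrative only, not quoted from
# any accreditation standard.
SAMPLE_CHECKLIST = {
    "emergency operations plan exercised twice this year": True,
    "96-hour self-sufficiency assessment completed": True,
    "hazard vulnerability analysis reviewed annually": False,
}

def passed(checklist: dict[str, bool]) -> bool:
    # All-or-nothing: the result discards exactly the granularity that
    # cross-hospital comparison would require.
    return all(checklist.values())

print("Passed:", passed(SAMPLE_CHECKLIST))  # Passed: False - but how close?
```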
Fortunately, a few available measures go beyond incident planning and preparedness activities and attempt instead to gauge organizational response competency. However, each still has limitations in implementation:
- HSEEP and the Target Capabilities List (TCL) – The Homeland Security Exercise and Evaluation Program (HSEEP) provides a straightforward, standardized way to prepare and evaluate exercises. This comprehensive program offers useful tools for: (a) designing individual exercises and longer-term exercise programs; (b) developing Master Scenario Events Lists; and (c) evaluating exercises and preparing corrective action plans that modify plans and policies to improve response performance over time. Also provided is a series of online and on-site training programs that reinforce learning objectives and improve user competency – a bonus that can ultimately enhance organizational response. By combining these tools with the TCL, organizations can select from a pre-identified set of standardized objectives whose exercise performance can be compared within and across communities. The challenge in using the TCL is its lack of alignment with other healthcare organization requirements, such as those recommended by The Joint Commission. For that reason, it is often either not used or is modified to meet individual hospital needs – an unknown variable that limits its usefulness in comparing preparedness levels across hospitals.
- Veterans Health Administration (VHA) Office of Emergency Management – This is probably the best model for estimating preparedness among healthcare organizations. The VHA – a major branch of the U.S. Department of Veterans Affairs (VA) – not only defines what a comprehensive emergency management program should look like, it also: (a) provides training to non-VA hospitals and external agencies; (b) conducts cutting-edge research on emergency management issues; and (c) requires all VA hospitals to perform internal audits based on 71 specific response capabilities that can be compared with the capabilities of other VA hospitals – a benchmarking approach sketched below. The VHA model provides an excellent framework for evaluating the preparedness of hospitals within a community or across a system. Unfortunately, there is a budgetary downside: it can be resource-heavy, which is difficult for many healthcare organizations to justify.
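The sketch below illustrates the benchmarking logic of the VHA approach: because every facility reports against the same capability list, gaps stand out both per hospital and per capability across the system. The capability names and the 0–3 maturity scale are assumptions for illustration; the VHA’s actual audit instrument covers 71 specific capabilities.

```python
from statistics import mean

# Assumed capability names and 0-3 maturity scale - illustrative only.
CAPABILITIES = ["surge staffing", "decontamination", "mass fatality management"]

audits = {
    "Hospital A": {"surge staffing": 3, "decontamination": 2, "mass fatality management": 1},
    "Hospital B": {"surge staffing": 2, "decontamination": 3, "mass fatality management": 2},
}

# Per-facility view: overall maturity for each hospital.
for hospital, scores in audits.items():
    print(f"{hospital}: mean maturity {mean(scores.values()):.2f}")

# System-wide view: the shared list exposes weak capabilities across facilities.
for capability in CAPABILITIES:
    system_avg = mean(scores[capability] for scores in audits.values())
    print(f"{capability}: system average {system_avg:.2f}")
```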
A Few Helpful Initiatives – And Indicators of Future Progress
It is encouraging that at least a few organizations are taking the initiative needed to help define useful, realistic, and evidence-based standards for healthcare emergency management. Ultimately, this trend should not only improve capabilities among hospitals but also help integrate healthcare into the overall community response. Events such as Hurricanes Katrina and Rita demonstrated many of the shortcomings in healthcare preparedness. But more recent responses – to the H1N1 pandemic, Hurricane Irene, and this year’s tornado in Joplin, Missouri – have revealed a number of significant improvements in healthcare response.
Continued research in this area is paramount to defining and validating these successes and to creating useful, commonly adopted measurements. In addition, research on the ability of nursing homes and other care sites to prepare for and make sound decisions during a crisis – combined with investigations of similar decision making at the local government level – will, it is hoped, build a broader and more predictable way to estimate healthcare community readiness.
Meanwhile, the VHA will continue to lead the way in 2012 in defining hospital readiness through practical, capability-based readiness measures. By combining these measurement indicators with existing standards and national guidance documents – and by validating the data through evidence-based outcomes and ongoing research – hospitals and communities may soon be able to more accurately predict the impacts of common emergencies and catastrophic disasters. In addition, emergency managers and leaders will be better equipped to make difficult decisions that lead not only to faster recoveries for their patients but also to improved (and measurable) outcomes for their own staffs, organizations and agencies, and home communities.
____________
For additional information on:
AHRQ Annual meeting, visit https://www.ahrq.gov/about/annualconf10/chin_davies/chin_davies.HTM
NIMS Alert 07-08, visit https://www.fema.gov/pdf/emergency/nims/imp_act_hos_hlth.pdf
HSEEP, visit https://www.fema.gov/emergency-managers/national-preparedness/exercises/hseep
FEMA’s TCL, visit https://www.fema.gov/pdf/government/training/tcl.pdf
VHA’s comprehensive emergency management program, visit https://www.va.gov/VHAEMERGENCYMANAGEMENT/CEMP/index.asp
Mitch Saruwatari
Mitch Saruwatari is vice president of quality and compliance at LiveProcess, and previously held key positions at Kaiser Permanente. He also has served as: Region I Disaster Medical and Health Specialist for the State of California; a member of the Los Angeles County Centers for Disease Control and Prevention and Health Resources and Services Administration Bioterrorism Advisory Committee; a founding member of the California Disaster Interest Group; and as co-lead for development of the Hospital Incident Command System. In addition to his current position, he is an instructor at the Center for Domestic Preparedness. He holds a Master’s degree in Public Health and is working toward a doctorate from UCLA.