A recent report from the Office of Inspector General at the US Department of Health and Human Services finds, unsurprisingly, that hospital incident reporting systems do not capture most patient harm. A summary of major points:
All 189 sampled hospitals had incident reporting systems to capture events, and administrators we interviewed rely heavily on these systems to identify problems.
Hospital staff did not report 86 percent of events to incident reporting systems, partly because of staff misperceptions about what constitutes patient harm.
Nurses most often reported events, typically identified through the regular course of care; 28 of the 40 reported events led to investigations and 5 led to policy changes.
Hospital accreditors reported that in evaluating hospital safety practices, they focus on how event information is used rather than how it is collected.
And here are the recommendations:
AHRQ and CMS should collaborate to create a list of potentially reportable events and provide technical assistance to hospitals in using the list.
CMS should provide guidance to accreditors regarding surveyor assessment of hospital efforts to track and analyze events and should scrutinize survey processes when approving accreditation programs.
Sorry, but to me this is all somewhat "ho-hum." Virtually all adverse event reporting systems are "bolt-on" additions to hospital clinical information systems. They do not result from a thorough analysis of how work is done on the floors and units of hospitals. They are not designed by the people who actually have to report such events. They are certainly not designed to capture near-misses, which occur 100 to 1000 times more often than actual adverse events, but which are a huge source of information about systemic problems. Filling them out is often considered "extra work" by busy clinicians, rather than part of a process of continuous, front-line-driven process improvement. Thus the recommendations offered in the report will not, I predict, make a significant difference in the future.
Also unsurprising are the findings in this article by Downey et al, covering the period 1998 to 2007:
As the patient safety movement enters its second decade, an emerging body of research is finding that safety for hospitalized patients has likely not improved significantly over the past several years. This study, which used the AHRQ Patient Safety Indicators (PSIs) to analyze safety events in 69 million hospitalizations over a 10-year period, also finds no clear evidence of improved safety. Of the 20 PSIs analyzed, 7 increased in incidence over the time period studied, 7 decreased, and 6 did not change. While PSIs are best used for screening purposes and not for direct comparisons between hospitals, they have been used to track system-level rates of safety problems over time. The results of this study and other recent literature provide continued urgency for the safety movement to strive to improve the safety of the entire health care system.
What's going on? Answer: We hear of great work done in quality and safety improvement by hospitals on the leading edge. And that work is often impressive. But those reports distract us from the fact that most hospitals still do not have in place clinical, administrative, and governance leaders who seriously and consistently put process improvement at the top of their strategic objectives. As I have mentioned, I am often approached at my speeches by nurse managers and young doctors who ask, "How can I get my CEO/Chiefs/Board to support these kinds of activities?"
I have urged my (former) colleagues in academic medical centers here to take the lead in these matters. Others, too, have urged the country's medical schools to incorporate the science of health care delivery improvement into their curricula. Thus far, the pace is way too slow. Let's hearken back to Captain Sullenberger's imperative:
"I wish we were less patient. We are choosing every day we go to work how many lives should be lost in this country."