In my last post, I discussed the role of physicians in patient safety in the US and UK. Today, I’m going to widen the lens to consider how the culture and structure of the two healthcare systems have influenced their safety efforts. What I’ve discovered since arriving in London in June has surprised me, and helped me understand what has and hasn’t worked in America.
Before I arrived here, I assumed that the UK had a major advantage when it came to improving patient safety and quality. After all, a single-payer system means less chaos and fragmentation—one payer, one regulator; no muss, no fuss. But this can be more curse than blessing, because it creates a tendency to favor top-down solutions that—as we keep learning in patient safety—simply don’t work very well.
To understand why, let’s start with a short riff on complexity, one of the hottest topics in healthcare policy.
Complexity R Us
Complexity theory is the branch of management thinking that holds that large organizations don’t operate like static, predictable machines, in which Inputs A and B reliably lead to Result C. Rather, organizations operate as “complex adaptive systems,” with unpredictability and non-linearity the rule, not the exception. It’s more Italy (without the wild parties) than Switzerland.
Complexity theory divides decisions and problems into three general categories: simple, complicated, and complex. Simple problems are ones in which the inputs and outputs are known; they can be managed by following a recipe or a set of rules. Baking a cake is a simple problem; so is choosing the right antibiotics to treat pneumonia. Complicated problems involve substantial uncertainties: the solutions may not be known, but they are potentially knowable. An example is designing a rocket ship to fly to the moon—if you were working for NASA in 1962 and heard President Kennedy declare a moon landing as a national goal, you probably believed it was not going to be easy but, with enough brainpower and resources, it could be achieved. Finally, complex problems are often likened to raising a child. While we may have a general sense of what works, the actual formula for success is, alas, unknowable (if you’re not a parent, trust me on this).
Understanding these differences is crucial because our approaches must match the types of problems at hand, and improving patient safety often involves dealing with complicated and complex problems and settings. A checklist may be a fabulous fix for a simple problem, but a distraction for a complex one. Enacting a series of rules and policies may seem like progress (it almost certainly does to the issuer) but may actually set us back if it stifles innovation and collegial exchange. Sometimes the best approach to a complex problem is to try an approach that seems sensible, measure the results (making sure workers feel able to speak truthfully and keeping ears to the train tracks for unanticipated consequences), and repeat this cycle over and over.
Appreciating the complexity of healthcare systems should not lead one to embrace anarchy or decide that rules are for wimps. “A somewhat surprising finding from research on complex adaptive systems,” observes organizational expert Paul Plsek, “is that relatively simple rules can lead to complex, emergent, innovative system behavior.” Atul Gawande expands on this point in The Checklist Manifesto, describing how the best checklists lead to improvements that go well beyond adherence to a few tasks—mostly by creating a limited number of high-level constraints and encouraging cross talk among frontline staff.
The bottom line from analyses of complex systems is that over-managing workers through boatloads of top-down, prescriptive rules and directives may be more unsafe than tolerating some degree of flexibility and experimentation on the front lines. It’s a message that can cause frustration, but those who don’t learn it seem to make the same managerial mistakes over and over again.
The Benefits and Risks of Centralized, Prescriptive Safety Standards
When the patient safety field launched, around the year 2000, both the US and UK needed to respond. Like typecast actors playing their parts for the umpteenth time, both countries followed their respective scripts: the UK favored central rules and the US favored, well, a mixture of this and that. These responses are deeply ingrained in our two countries’ cultural DNA.
In the US, the safety imperative that began with To Err is Human ran up against a leadership vacuum. No national organization was in a position to grab the safety ball and run with it. The Joint Commission filled this gap in part through its hospital accreditation work, as did the Agency for Healthcare Research and Quality (AHRQ) in research and education. But these organizations could not articulate a national strategy, nor did they have the power to enforce tough rules on their constituents (Joint Commission accreditation is voluntary and funded by the accredited hospitals, markedly limiting the accreditor’s degrees of freedom). Even as these organizations began to rise to the challenge, major gaps remained, and were filled by an alphabet soup of other stakeholders: physician certifying boards like ABIM, training program accreditors like ACGME, business coalitions like Leapfrog, non-profit organizations like the Institute for Healthcare Improvement, and state hospital associations. But there was no central authority to truly “own” patient safety.
Soon caregivers and hospital administrators were begging for “harmonization.” Translated: “We accept the fact that you, [Fill in the Blank], are going to boss us around on safety, but can’t you get your act together with the 10 other organizations doing the same thing?”
While I would have loved for a central authority to have made hand washing or prompt discharge summaries national standards, this unruliness had its virtues. Individual healthcare organizations—hospitals, specialty societies, multispecialty groups—had the space to develop their own safety programs without being overwhelmed by a huge compliance burden.
And good ones did just that. Over a few years, a stream of innovations—checklists, time-outs, debriefings, Executive Walk Rounds, trigger tools, new approaches to disclosure—bubbled up from front line clinicians, researchers, and managers, who had the freedom to try things out, see if they worked, and then disseminate them. This happy result only occurred because some clinicians gained skills in safety, were motivated to try new approaches, and were given some leash.
Contrast this with the UK, where the launch of the safety field occasioned lots of prescriptive rulings issued by the various tentacles of the National Health Service. Here, the instinct to embrace centralized solutions to important problems is facilitated by the country’s small size (I have to keep reminding myself that California is nearly 3 times larger than England in land mass and has a matching Gross “Domestic” Product—about $1.9 trillion), the centrally-controlled single-payer system, and a societal bias that often places the interests of the community over those of the individual.
Take the issue of emergency department door-to-floor time. In the US, we are under pressure to try to shorten this time, certainly a sensible goal. So the time is now being measured and reported internally, and may soon be publicly reported or even subject to incentives. In the UK, however, the NHS approached this issue by mandating a four-hour maximum ED door-to-discharge time (either home or hospital admission) in 2002. Hospitals that miss their four-hour target can be hit with major penalties. (I heard of one institution where a physician leader was fired for his inability to meet this benchmark.)
Is this good or bad? When safety standards are supported by strong evidence and we’ve sorted through the unexpected consequences, centrally-decreed mandates are fine, propelling us toward safer care faster than a wishy-washy, pluralistic system. On the other hand, boatloads of top-down rules can create, and I believe have created, a feeling among front-line staff here that safety is something the government tells us to do. It’s a guaranteed enthusiasm-sapper and innovation-stifler. As you can imagine, the four-hour rule has improved some things but has also generated tons of gaming and new problems.
Moreover, clinicians here view many of the NHS’s rules as overly politicized, and even a little silly. Virtually everyone I’ve met here has shared a favorite story of some safety rule whose genesis was a single bad case in a single hospital, where harm befell a friend or relative of a Member of Parliament. Poof: another national standard. These stories are told with bemused helplessness.
And let’s not forget about those pesky complex systems. I mentioned two posts ago that the program to computerize every English hospital has been a fiasco—it was completely bollixed from the top down, violating everything we know about change management in complex systems. (Yesterday, it was formally announced that the program will be taken off life support—after having burned through about $20 billion—but anyone following the story knew that it was DOA years ago.)
Less expensively, one hears that the initial phase of the WHO surgical checklist program—which UK hospitals are now required to adopt—has been a major struggle, largely because it arrived as a central mandate without much room for local adaptation or buy-in.
The problem isn’t limited to the relationship between the central NHS authorities and individual hospitals—the top-down instinct is marbled throughout the entire system. The Trust (hospital system) manager who spends her life receiving directives from the NHS is likely to use the same approach with her clinicians (and then lament that they don’t just follow the rules). And the government managers, of course, are those who have been promoted from senior leadership roles in healthcare systems, or vice versa. Once the tone is set this way, it is hard to change: central authorities accustomed to wielding power have an awfully hard time parting with it willingly.
Top-Down or Bottom-Up: Finding the Sweet Spot
In the US, our individualism and mistrust of government cause us to resist central solutions, even to critical societal problems. When we’re lucky, this leaves space for grassroots engagement of, and innovation by, front line caregivers, and—perhaps—more robust solutions once they finally do emerge. All educators know the maxim, “If you tell your learners the answer, you may prevent them from learning it.” So it often is in patient safety.
On the other hand, America’s antipathy toward top-down directives permits wildly different rates of adoption of clearly effective practices, makes progress maddeningly slow (as every individual clinician and institution retains veto power over anything they don’t like), and contributes to massive disparities in quality—with far too many have-nots scattered among the haves.
Because of this, I see the US now moving in the UK’s direction, with a more prescriptive and top-down approach. You can see the early signs in Medicare’s increasingly aggressive use of transparency and value-based purchasing, and in the patient safety-related activities of various states (public reporting of “Never Events,” hospital inspections and fines, and some state laws in areas like MRSA screening and nurse-to-patient ratios). With several studies documenting our sluggish progress in patient safety, America’s patience with letting a thousand flowers bloom is ebbing. The gardener has arrived, and he’s carrying his pruning shears.
Interestingly, just as the US is sliding toward a more central and prescriptive line of attack, I see growing recognition in the UK of the limitations of the top-down approach, more appreciation of the importance of caregiver engagement, and stronger efforts to train physicians and other providers in leadership and safety skills. Just yesterday, a safety expert studying the UK’s surgical checklist program told me that some surgical teams have successfully adapted the checklist to their local environments, with promising results.
“The Americans can always be counted on to do the right thing… after they have exhausted all other possibilities,” famously observed Winston Churchill. In our world, it appears that both the Americans and the Brits are homing in on the right thing: creating systems that are prescriptive when they need to be, while allowing the wisdom and enthusiasm of front line workers to be nurtured and tapped in addressing the complex problems that dominate patient safety and healthcare quality.
And—for both countries—that’s progress.
Robert Wachter, MD, is widely regarded as a leading figure in the modern patient safety movement. Together with Dr. Lee Goldman, he coined the term “hospitalist” in an influential 1996 essay in The New England Journal of Medicine. His posts appear semi-regularly on THCB and on his own blog, Wachter’s World.