Tag: Peter Pronovost

Cost, Value & Tools

Just as a pro golfer swears by a certain brand of clubs or a marathon runner by a chosen make of shoes, surgeons can form strong loyalties to the tools of their craft. Preferences for these items — such as artificial hips and knees, surgical screws, stents, pacemakers and other implants — develop over time, perhaps out of habit or carried over from training.

Of course, surgeons should have what they need to be at the top of their trade. But the downside of too much variation is that it can drive up the costs of procedures for hospitals, insurers and even patients. When a hospital carries seven brands of the same type of product instead of one or two, it’s not as likely to get volume discounts. Moreover, if hospitals within a health system negotiate independently of one another, they may pay drastically different prices for the exact same item.

Carrying many brands of a given item may also increase risks for error and patient harm. Staff members need to be trained and competent in a variety of tools; the greater the number of tools, the greater the risk for error.

These physician preference items are no small contributor to health care costs. Around the year 2020, medical supplies are expected to eclipse labor as the biggest expense for hospitals, according to the Association for Healthcare Resource and Materials Management. Higher costs for physician preference items are major drivers of this increase.

Continue reading…

Potential Bias in U.S. News Patient Safety Scores

Hospitals can get overwhelmed by the array of ratings, rankings and scorecards that gauge the quality of care that they provide. Yet when those reports come out, we still scrutinize them, seeking to understand how to improve. This work is only worthwhile, of course, when these rankings are based on valid measures.

Certainly, few rankings receive as much attention as U.S. News & World Report’s annual Best Hospitals list. This year, as we pored over the data, we made a startling discovery: As a whole, Maryland hospitals performed significantly worse on a patient safety metric that counts toward 10 percent of a hospital’s overall score. Just three percent of the state’s hospitals received the highest U.S. News score in patient safety — 5 out of 5 — compared with 12 percent of the remaining U.S. hospitals. Similarly, nearly 68 percent of Maryland hospitals, including The Johns Hopkins Hospital, received the worst possible mark — 1 out of 5 — while nationally just 21 percent did. This has been the trend for several years.
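A gap that size (3 percent versus 12 percent earning the top score) can be checked with a standard two-proportion z-test. The sketch below is illustrative only; the hospital counts are hypothetical stand-ins, not the actual U.S. News data:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for the difference between two sample proportions,
    using the pooled estimate under the null hypothesis of no difference."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts chosen only to mirror the 3% vs. 12% split:
# 2 of 66 Maryland hospitals vs. 480 of 4,000 other U.S. hospitals.
z = two_proportion_z(2, 66, 480, 4000)
print(round(z, 2))  # about -2.24, beyond the conventional 1.96 cutoff
```

With these made-up counts the difference would clear the usual 0.05 significance threshold, but a real analysis would use the actual hospital counts and, given the small Maryland denominator, might prefer Fisher's exact test.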

Continue reading…

Did CDC laxness on one infection help spread another?

BY MICHAEL MILLENSON

There’s an infection that afflicts thousands of Americans yearly, kills an estimated one in five of those who contract it, and costs tens of thousands of dollars per person to treat. Though there’s a proven way to dramatically reduce or even eliminate it, the Centers for Disease Control and Prevention (CDC) inexplicably seems in no hurry to do so.

Unlike Ebola, this infection isn’t transmitted from person to person, with the health care system desperately racing to keep up. Instead, it’s caused by the health care system when clinicians don’t follow established anti-infection protocols – very much like what happened when Texas Health Presbyterian Hospital encountered its first Ebola patient.  That hospital’s failure flashes a warning sign to all of us.

The culprit in this case is called CLABSI, short for “central-line associated bloodstream infection.” A central line is a catheter placed into a patient’s torso to make it easier to infuse critical medications or draw blood. Because the lines are inserted deep into patients already weakened by illness, an infection can be catastrophic.

CLABSIs are deadlier than typhoid fever or malaria. Last year alone they affected more than 10,000 adults, according to hospital reports to the CDC, and nearly 1,700 children, according to an analysis of hospital discharge records. The infections also cost an average of nearly $46,000 per patient to treat, adding up to billions of dollars yearly.

At one time, CLABSIs were thought to be largely unavoidable. But in 2001, Dr. Peter Pronovost, a critical care medicine specialist at Johns Hopkins, simplified existing guidelines into an easy five-step checklist with items like “wash hands” and “clean patient’s skin with an antibacterial agent.” Hopkins’ CLABSI rate plunged.

Continue reading…

Hospital-Acquired Infections: How Do We Reach Zero?

This week, the U.S. Centers for Disease Control and Prevention issued two reports that are simultaneously scary and encouraging.

First, the scary news: A national survey conducted in 2011 found that one in every 25 U.S. hospital patients experienced a healthcare-associated infection. That’s 648,000 patients with a combined 722,000 infections.

About 75,000 of those patients died during their hospitalizations, although it’s unknown how many of those deaths resulted from the infections, the CDC researchers reported in the New England Journal of Medicine.

On the bright side, those numbers are less than half the number of hospital-acquired infections that a national survey estimated in 2007. And a second report issued this week found significant decreases in several infection types that have seen the most focused prevention efforts on a national scale.

Noteworthy was a 44 percent decrease in central line-associated bloodstream infections (CLABSI) between 2008 and 2012, as well as a 20 percent reduction in infections related to 10 surgical procedures over the same time period.

These infections were once thought to be inevitable, resulting from patients who were too old, too sick or just plain unlucky. We now know that we can put a significant dent in these events, and even achieve zero infections among the most vulnerable patients.

At Johns Hopkins, we created a program that combated CLABSI in intensive care units through a multi-pronged approach—implementing a simple checklist of evidence-based measures while changing culture and caregivers’ attitudes through an approach called the Comprehensive Unit-based Safety Program (CUSP). The success was replicated on a larger scale across 103 Michigan ICUs and then later across most U.S. states, with impressive results.

These and similar successes have changed caregivers’ beliefs about what is possible, and inspired more efforts to reach zero infections.

What will it take to attain this goal—or at least get much closer?

Continue reading…

A Powerful Idea From the Nuclear Industry

Where health care has fallen short in significantly improving quality, our peers in other high-risk industries have thrived. Perhaps we can adapt and learn from their lessons.

For example, health care can learn much from the nuclear power industry, which has markedly improved its safety track record over the last two decades since peer-review programs were implemented. Created in the wake of two nuclear crises, these programs may provide a powerful model for health care organizations.

Following the famous Three Mile Island accident, a partial nuclear meltdown near Harrisburg, Pennsylvania, in spring 1979, the Institute of Nuclear Power Operations (INPO) was formed by the CEOs of the nuclear power companies. That organization established a peer-to-peer assessment program to share best practices, safety hazards, problems and actions that improved safety and operational performance. In the U.S., no severe nuclear accidents have occurred since then.

A more devastating nuclear incident in Chernobyl, Ukraine, in 1986 spurred the creation of the World Association of Nuclear Operators (WANO), which serves a similar purpose but on an international scale. Since WANO’s inception, no severe nuclear accidents occurred until the accident in Fukushima, Japan, caused by a devastating earthquake and tsunami in March 2011.

These programs have succeeded because their purpose and approach are very different from the review processes of regulatory agencies. Instead of a punitive process that monitors compliance with minimum standards, peer-to-peer evaluations are thorough, confidential and—importantly—voluntary. They are viewed as mutually beneficial and help advance industry best practices, which are shared widely. The goal is to learn and improve rather than judge and shame. The reviews are done by experts using validated tools, and they are ruthlessly candid yet confidential.

Peer-to-peer review has not been widely used in health care. A couple of notable exceptions are the Northern New England Cardiovascular Disease Study Group, which used organizational peer-to-peer review to improve the care of cardiac surgery patients, and the National Health Service in the UK, which used it to improve the care of patients with lung disease. While provider-level reviews are more common in health care organizations, they lack the scale needed to achieve system-wide improvements.

At the Armstrong Institute, we have been pilot testing peer-to-peer review and early results are encouraging. We have evaluated specific outcomes, like blood stream infections; specific areas, like the operating room; and whole quality and safety programs.

Continue reading…

An SEC for Health Care?

If you have ever tried to choose a physician or hospital based on publicly available performance measures, you may have felt overwhelmed and confused by what you found online. The Centers for Medicare and Medicaid Services, the Agency for Healthcare Research and Quality, the Joint Commission, the Leapfrog Group, and the National Committee for Quality Assurance, as well as most states and for-profit companies such as Healthgrades and U.S. News and World Report, all offer various measures, ratings, rankings and report cards. Hospitals are even generating their own measures and posting their performance on their websites, typically without validation of their methodology or data.

The value and validity of these measures vary greatly, though their accuracy is rarely publicly reported. Even when methodologies are transparent, clinicians, insurers, government agencies and others frequently disagree on whether a measure accurately indicates the quality of care. Some companies’ methods are proprietary and, unlike many other publicly available measures, have not been reviewed by the National Quality Forum, a public-private organization that endorses quality measures.

Depending on where you look, you often get a different story about the quality of care at a given institution. For example, none of the 17 hospitals listed in U.S. News and World Report’s “Best Hospitals Honor Roll” were identified by the Joint Commission as top performers in its 2010 list of institutions that received a composite score of at least 95 percent on key process measures. In a recent policy paper, Robert Berenson, a fellow at the Urban Institute, Harlan Krumholz, of the Robert Wood Johnson Foundation, and I called for dramatic change in measurement. (Thanks to The Health Care Blog for highlighting this analysis recently.)

We made several recommendations, including focusing more on measuring outcomes such as mortality and infections rather than processes (e.g., whether patients received the recommended treatment) or structures of care (e.g., whether ICUs are staffed around the clock with critical care specialists). We urged that measures be at the organization level rather than the clinician level, to reflect the fact that safety and quality are as much products of care delivery systems as of individual clinicians. We also proposed investments in the “basic science” of measurement so that we better understand how to design good measures. You can read these and other recommendations in the analysis.

Continue reading…

Seven Policy Recommendations for Healthcare’s New Era

There is a consensus that measuring performance can be instrumental in improving value in U.S. health care. In particular clinical areas, such as cardiac and intensive care, measurement has been associated with important improvements in providers’ use of evidence-based strategies and patients’ health outcomes over the past two decades. Perhaps most important, measures have altered the culture of health care delivery for the better, with a growing acceptance that clinical practice can and should be objectively assessed.

Nevertheless, as we argue in the full-length version of this paper, substantial shortcomings in the quality of U.S. health care persist. Furthermore, the growth of performance measurement has been accompanied by increasing concerns about the scientific rigor, transparency, and limitations of available measure sets, and how measures should be used to provide proper incentives to improve performance.

The challenge is to recognize current limitations in how measures are used in order to build a much stronger infrastructure to support the goals of increased accountability, more informed patient choice, and quality improvement. In the following paper, we offer seven policy recommendations for achieving the potential of performance measurement.

1. Decisively move from measuring processes to outcomes.

There is growing interest in relying more on outcome measures and less on process measures, since outcome measures better reflect what patients and providers care about. Yet establishing valid outcome measures poses substantial challenges—including the need to risk-adjust results to account for patients’ baseline health status and risk factors, assure data validity, recognize surveillance bias, and use sufficiently large sample sizes to permit correct inferences about performance.
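Risk adjustment is commonly summarized as an observed-to-expected (O/E) ratio: a statistical model estimates each patient's baseline probability of the outcome, and the provider's observed event count is divided by the sum of those probabilities. A minimal sketch, with entirely made-up logistic-model coefficients and variables chosen for illustration:

```python
import math

def predicted_risk(age, severity, b0=-5.0, b_age=0.04, b_sev=0.8):
    """Logistic model for a patient's baseline probability of the outcome.
    The coefficients here are invented for illustration, not fitted."""
    logit = b0 + b_age * age + b_sev * severity
    return 1.0 / (1.0 + math.exp(-logit))

def o_to_e_ratio(patients, observed_events):
    """Observed-to-expected ratio: > 1 suggests worse-than-predicted
    outcomes for this case mix, < 1 suggests better."""
    expected = sum(predicted_risk(p["age"], p["severity"]) for p in patients)
    return observed_events / expected

# Two hypothetical patients, one observed adverse event.
patients = [{"age": 70, "severity": 2}, {"age": 50, "severity": 1}]
ratio = o_to_e_ratio(patients, observed_events=1)  # about 2.2
```

In practice the model would be fitted on large datasets, and the precision of the ratio depends heavily on sample size—which is exactly why sufficiently large samples matter for correct inferences about performance.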

Read more.

2. Use quality measures strategically, adopting other quality improvement approaches where measures fall short.

While working to develop a broad set of outcome measures that can be the basis for attaining the goals of public accountability and information for consumer choice, Medicare should ensure that the use of performance measures supports quality improvement efforts to address important deficiencies in how care is provided, not only to Medicare beneficiaries but to all Americans. CMS’ current focus on reducing preventable rehospitalizations within 30 days of discharge represents a timely, strategic use of performance measurement to address an evident problem where there are demonstrated approaches to achieve successful improvement [6].

Read more.

Continue reading…

7. Task a single entity with defining standards for measuring and reporting quality and cost data.

There is a plethora of health care quality data being pushed out to the public, yet no rules to assure the accuracy of what is being presented publicly. The health care industry lacks standards for how valid a quality measure should be before it is used in public reporting or pay-for-performance initiatives, although some standards have been proposed.

The NQF does a good job of reviewing and approving proposed measures presented to it, but lacks the authority to establish definitive quantitative standards that would apply broadly to purveyors of performance measures. However, as discussed earlier, many information brokers publicly report provider performance without transparency and without meeting basic validity standards. Indeed, even CMS, which helps support NQF financially, has adopted measures for the Physician Quality Reporting System that have not undergone NQF review and approval. Congress is now considering “SGR repeal,” or sustainable growth rate legislation, that would have CMS work directly with specialty societies to develop measures and measurement standards, presumably without requiring NQF review and approval [30].

Without industry standards, payers, policy makers, and providers often become embroiled in a tug-of-war, with payers and policy makers asserting that existing measures are good enough and providers arguing they are not. Most often, neither side has data on how good the contested measures actually are. Most importantly, the public lacks valid information about quality, especially outcomes, and costs.

Indeed, most quality measurement efforts struggle to find measures that are scientifically sound yet feasible to implement with the limited resources available. Unfortunately, too often feasibility trumps sound science. In the absence of valid measures, bias in estimating the quality of care provided will likely increase in proportion to the risks and rewards associated with performance. The result is that the focus of health care organizations may change from improving care to “looking good” to attract business. Further, conscientious efforts to reduce measurement burden have significantly compromised the validity of many quality measures, making some nearly meaningless, or even misleading. Unfortunately, measurement bias often remains invisible because of limited reporting of data collection methods that produce the published results. In short, the measurement of quality in health care is neither standardized nor consistently accurate and reliable.

Continue reading…

6. Invest in the basic science of measurement development and applications.

The unfortunate reality is that there is no body of expertise with responsibility for addressing the science of performance measurement. The National Quality Forum (NQF) comes closest, and while it addresses some scientific issues when deciding whether to endorse a proposed measure, NQF is not mandated to explore broader issues to advance the science of measure development, nor does it have the financial support or structure to do so.

An infrastructure is needed to gain national consensus on: what to measure, how to define the measures, how to collect the data and survey for events, the accuracy of EHRs as a source of performance data, the cost-effectiveness of various measures, how to reduce the costs of data collection, how to define accuracy thresholds for measures, and how to prioritize the measures collected (informed by the relative value of the information and the costs of collecting it).

Despite this broad research agenda, there is little research funding to advance the basic science of performance measurement. Given the anticipated broad use of measures throughout the health system, funding could come from a public/private partnership modeled after the Patient-Centered Outcomes Research Institute or from a federally funded initiative, perhaps centered at AHRQ. Given budgetary constraints, finding the funding to support the science of measurement will be a challenge. Yet the costs of misapplying measures and making incorrect judgments about performance are substantial.

Continue reading…

5. Use measurement to promote the concept of the rapid-learning health care system.

Initiatives to promote performance measurement need to be accompanied by support to improve care. Quality measure data should not only be technically correct, but should be organized such that their dissemination is a resource to aid in quality improvement activities. As such, quality measurement should be viewed as just one component of a learning health care system that also includes advancing the science of quality improvement, building providers’ capacity to improve care, transparently reporting performance, and creating formal accountability systems.

There are several strategies to make quality measure data more actionable for quality improvement purposes. For example, for publicly reported outcome measures, CMS provides hospitals with lists of the patients included in each calculation. Because mortality may occur after discharge and readmissions may occur at other hospitals, this information often goes beyond what hospitals have available on their own. These data give providers the ability to investigate the care provided to individual patients, which in turn can support a variety of quality improvement efforts.

Continue reading…