
How are hospitals supposed to reduce readmissions? Part III

By KIP SULLIVAN, JD

The Medicare Payment Advisory Commission (MedPAC) and other proponents of the Hospital Readmissions Reduction Program (HRRP) justified their support for the HRRP with the claim that research had already demonstrated how hospitals could reduce readmissions for all Medicare fee-for-service patients, not just for groups of carefully selected patients. In this three-part series, I am reviewing the evidence for that claim.

We saw in Part I and Part II that the research MedPAC cited in its 2007 report to Congress (the report Congress relied on in authorizing the HRRP) contained no studies supporting that claim. We saw that the few studies MedPAC relied on that claimed to examine a successful intervention studied interventions administered to carefully selected patient populations. These populations were severely restricted in two ways: the patients had to be discharged with one of a handful of diagnoses (heart failure, for example), and they had to have characteristics that raised the probability the intervention would work (for example, they had to agree to a home visit, not be admitted from a nursing home, and be able to consent to the intervention).

In this final installment, I review the research cited by the Yale New Haven Health Services Corporation (hereafter the “Yale group”) in their 2011 report to CMS in which they recommended that CMS apply readmission penalties to all Medicare patients regardless of diagnosis and regardless of the patient’s interest in or ability to respond to the intervention. MedPAC at least limited its recommendation (a) to patients discharged with one of seven conditions/procedures and (b) to patients readmitted with diagnoses “related to” the index admission. The Yale group threw even those modest restrictions out the window.

The Yale group recommended what they called a “hospital-wide (all-condition) readmission measure.” Under this measure, penalties would apply to all patients regardless of the condition for which they were admitted and regardless of whether the readmission was related to the index admission (with the exception of planned admissions). “Any readmission is eligible to be counted as an outcome except those that are considered planned,” they stated. (p. 10) [1] The National Quality Forum (NQF) adopted the Yale group’s recommendation almost verbatim shortly after the Yale group presented their recommendation to CMS.

In their 2007 report, MedPAC offered these examples of related and unrelated readmissions: “Admission for angina following discharge for PTCA [angioplasty]” would be an example of a related readmission, whereas “[a]dmission for appendectomy following discharge for pneumonia” would not. (p. 109) Congress also endorsed the “related” requirement (see Section 3025 of the Affordable Care Act, the section that authorized CMS to establish the HRRP). But the Yale group dispensed with the “related” requirement with an astonishing excuse: They said they just couldn’t find a way to measure “relatedness.” “[T]here is no reliable way to determine whether a readmission is related to the previous hospitalization …,” they declared. (p. 17) Rather than conclude their “hospital-wide” readmission measure was a bad idea, they plowed ahead on the basis of this rationalization: “Our guiding principle for defining the eligible population was that the measure should capture as many unplanned readmissions as possible across a maximum number of acute care hospitals.” (p. 17) Thus, to take one of MedPAC’s examples of an unrelated admission, the Yale group decided hospitals should be punished for an admission for an appendectomy within 30 days after discharge for pneumonia. [2]

Having declared that the relatedness criterion was out the window, the Yale group went on to report that the algorithm they recommended “captures 95 percent of eligible Medicare admissions and 88 percent of readmissions following those admissions….” (p. 32)

Thus, the Yale group set for themselves a much tougher challenge than even that which the MedPAC commissioners set for themselves in their 2007 report to Congress. Whereas MedPAC “only” had to cite research demonstrating methods of reducing readmissions for all patients (that is, populations without severe exclusions such as “must live near the hospital”) with one of only seven discharge diagnoses/procedures, the Yale group had to cite research demonstrating methods of reducing readmissions for all patients (regardless of whether they lived near the hospital etc.) and regardless of their diagnosis at discharge.

You will not be surprised when I tell you the Yale group didn’t come remotely close to meeting their burden of proof. They did not cite a single study showing hospitals how to reduce readmissions as the Yale group defined “readmission.”

Research supported using a scalpel, not a chainsaw

The Yale group claimed that numerous “randomized controlled trials” justified their extravagant recommendation to CMS. They cited the 16 studies listed in the table below. (p. 8) [3] I have divided those studies into those that did not examine an intervention and those that did. I have divided the latter group into those that found that the intervention reduced readmissions and those that did not. As I did in Part II, I have written “heart failure” in red to call your attention to how much of the research focused on that disease.   


Studies cited by the Yale group’s 2011 report to CMS*  

Studies that did not examine an intervention (3)

Krumholz et al., 2002: Reported a correlation between readmissions of heart failure patients and number of vaguely defined “strategies” (such as “partnering with physicians”).

Conley et al., 2003: “One-year risk of readmission for patients treated with SGAs [second-generation antipsychotics] were at least comparable to the one-year risk for patients receiving fluphenazine decanoate and lower than the risk for patients treated with haloperidol decanoate.”

Weiss et al., 2010: Correlation between nurse assessments of “discharge readiness” and patient assessments is low.

Studies that examined an intervention that did not reduce readmissions (4)

van Walraven et al., 2002: Found a non-significant odds ratio of 0.74 for readmission of all types of patients if patients’ family doctors received the discharge summary.

Garasen et al., 2007: Differences in readmissions at 60 days for patients admitted for “an acute illness or an acute exacerbation of a known chronic disease” not statistically significant.

Mistiaen et al., 2007:  Meta-analysis of 11 “systematic reviews.” “Based on above two reviews, it appears that readmissions in heart failure patients can be reduced by some kind of intervention.” “Our conclusions on the basis of these fifteen reviews, is that there is only limited evidence for the positive impact of discharge interventions.”

Koehler et al., 2009: This was a literature review of randomized trials conducted since 1990 on “adult patients admitted to the hospital for a medical or surgical cause.” “In general, these assessments should be regarded as hypothesis-generating and the inferences made based on subgroup analyses must be viewed as tentative…. [T]here was evidence of publication bias.”

Studies that examined an intervention that reduced readmissions (9)

Naylor et al., 1994; Naylor et al., 1999: The 1999 study examined patients admitted with one of five diagnoses (including heart failure) or for one of four procedures. “By 24 weeks after the index hospital discharge, 37 percent of the control group had been re-hospitalized compared with 20 percent of the intervention group.”

Stauffer et al., 2011: This applied the “transitional care program” examined by Naylor et al. (see above) but limited the sample to heart failure patients.

Coleman et al., 2004: Examined the “Care Transitions Intervention.” Subjects were “Community‐dwelling adults aged 65 and older admitted to the study hospital with one of nine selected conditions,” including heart failure. “The adjusted odds ratio comparing rehospitalization of intervention subjects with that of controls was 0.52 … at 90 days.”

Voss et al., 2011: Authors studied Coleman et al.’s Care Transitions Intervention. “30-day readmissions were fewer for participants who received coaching….”

Phillips et al., 2004:  Meta-analysis of 18 RCTs. “Comprehensive discharge planning plus post-discharge support significantly reduced readmission rates for older people with CHF [heart failure].”

Jovicic et al., 2006: This meta-analysis of six studies concluded, “Self-management programs targeted for patients with heart failure decrease overall hospital readmissions and readmissions for heart failure.” Three of these studies showed no reduction in readmission rates. All three of those that did (Cline, Koeling and Krumholz) used significant exclusion criteria.

Courtney et al., 2009: This was a study of emergency visits to hospitals and primary care doctors, not hospital readmissions only, among patients who were admitted for an “acute” condition, “aged 65 and older and with at least one risk factor for readmission (multiple comorbidities, impaired functionality, aged ≥75, recent multiple admissions, poor social support, history of depression).” “The intervention group required significantly fewer emergency hospital readmissions … and emergency GP visits….”

Jack et al., 2009:  Patient participation was severely limited (see discussion in Part I). Change in readmissions was not significant, but change in readmissions plus ER visits was.

*All quotations are from the cited articles. Complete citations are available at pp. 34-35 of the Yale group’s 2011 report to CMS, endnotes 20-35.


We see that only nine of the 16 studies reported that the intervention reduced readmissions. Of those nine, not one studied patient samples that met the “hospital-wide” inclusion criteria the Yale group endorsed – no restrictions on patients’ ability to give consent, speak English, etc., and no restrictions on the type of diagnosis at discharge. Of the patient populations examined in these 16 studies, the populations studied by Jack et al. and Voss et al. were subject to the fewest restrictions, but even their patient populations were very restricted compared with the “hospital-wide” standard endorsed by the Yale group. The Jack and Voss studies examined patient populations unrestricted by diagnosis, but both studies utilized draconian inclusion criteria that limited patients to those who were more likely to benefit from the intervention. I discussed the severe inclusion criteria utilized by Jack et al. in Part I (patients had to have a phone and could not be included if they had been living in a nursing home, for example). I comment here on the Voss study.

Like Jack et al., Voss et al. chose a patient sample relatively unrestricted by diagnosis (they examined “all general medicine FFS Medicare beneficiaries, regardless of diagnosis”), but they severely restricted this population with other criteria designed to maximize the probability that patients would understand and respond to coaching. They excluded patients who were discharged to or admitted from a nursing home, were discharged to a hospice, had “limited English proficiency or inadequate cognitive function unless a caregiver agreed to receive the intervention as a proxy,” refused to consent to coaching and a home visit, or were for some reason unable to “complete the home visit” (even though they consented to it). As the authors put it, their “final intervention group represents 13.6 percent of the eligible approached population….” Inclusion criteria that exclude 86 percent of the admitted population may accurately be described as draconian.

Voss et al. reported that the intervention they examined (the “Care Transitions Intervention” [CTI] developed by Coleman et al.) was successful – it produced a 36-percent reduction in readmissions. But they made a statement one rarely encounters in the research on readmissions: They emphasized that they applied the CTI to a carefully selected group of patients, and they warned that the CTI probably would not have reduced readmissions if it had been applied to the entire population of admitted patients. “Because the CTI depends on activating individuals to advocate for their own health, it is less likely to benefit those who are not or cannot currently be activated to a certain level of readiness to act,” they noted, “and identifying such individuals is an important consideration for resource allocation….” But under the HRRP, hospitals do not have the option “of identifying such individuals” and administering the intervention to just those patients. Under the HRRP, hospitals are subject to punishment for all unplanned admissions that occur within 30 days after discharge.

The Yale group failed to offer a warning like the one Voss et al. offered. There is no excuse for that omission. It is obvious what the Yale group’s motivation was: They decided on their own say-so that the HRRP should apply to 95 percent of all admissions rather than select groups of patients, and achieving that goal would not be possible if hospitals were given the option to decide for themselves which patients should receive the intervention services.

To sum up: The Yale group endorsed the equivalent of a chainsaw when the research supported the equivalent of a scalpel.

The free-lunch syndrome

The inexplicable failure of MedPAC and the Yale group to warn their readers that the research they cited did not support their recommendation was compounded by their utter indifference to the cost of the interventions they claimed research had already proven. MedPAC at least acknowledged the issue in their 2007 report to Congress; they said the cost of interventions was “beyond the scope” of their report. [4] The Yale group didn’t even acknowledge the issue. Apparently, the commission and the Yale group thought either that all those allegedly proven interventions cost nothing, or that hospitals had figured out how to create money out of thin air.

Even if it were true that proven methods of reducing readmissions are available to hospitals, that fact alone would not support a recommendation that hospitals be punished for “excess readmissions.” We would need to know as well how much it would cost hospitals to deploy the allegedly proven methods, and whether those costs were more than the savings hospitals would reap from using those methods. What little research existed on this issue in 2007 (when MedPAC made their report) and 2011 (when the Yale group made theirs) was inconclusive, and that is still the case today. In a 2017 literature review of 44 studies on this issue, Nuckols et al. concluded, “[I]t remains unclear whether effective interventions tend to produce net savings or losses to the health system.” [5]

To illustrate how murky this research is, let me compare two studies of the same intervention cited by the Yale group – the care transition program described by Naylor et al. in their 1999 article and the same program studied by Stauffer et al. in their 2011 article. Naylor et al. and Stauffer et al. both applied the intervention to small pools of carefully selected patients (small compared with the virtually unrestricted populations recommended by the Yale group). But they limited their patient pools in different ways. Naylor et al. restricted their patients both by diagnosis/procedure (they had to have been discharged with one of nine diagnoses/procedures) and by susceptibility to coaching (they had to live near one of two University of Pennsylvania hospitals and have a phone, for example). Stauffer et al. applied the same intervention to just heart failure patients, but they made no attempt to limit participating patients to those susceptible to coaching. (Stauffer et al. explicitly designed their patient sample to match the expected population that would be covered by the HRRP. It was the only study cited by the Yale group that did that.)

But the financial outcomes of these two studies were starkly different. Naylor et al. reported impressive net savings – about $3,000 per patient over 24 weeks – whereas Stauffer et al. reported no net savings. As Stauffer et al. put it, “[T]he intervention did not save money from the hospital perspective.”

The readmission-penalty experiment has gone on long enough

The research cited by MedPAC and the Yale group was sufficient to support a recommendation that Congress and private payers pay hospitals for several of the interventions described in that research, notably Coleman et al.’s CTI and Naylor et al.’s transitional care program. The research definitely did not indicate that hospitals could reduce all-cause, hospital-wide readmissions (the Yale group’s assumption) or even “related” readmissions of patients with one of seven diagnoses/procedures (MedPAC’s assumption) even if they spent great wads of money. Moreover, the research did not indicate that hospitals could achieve net savings even if they applied their intervention only to select patient groups, never mind all patients.

Given this state of knowledge about readmissions, the assumption made by MedPAC, the Yale group, NQF, and their many cheerleaders that punishing hospitals for “excess readmissions” was the solution was inexcusable. The creation of the HRRP and penalty programs like it on the basis of this reckless assumption has placed hospitals, especially safety-net hospitals, in a lose-lose situation. If they make no attempt to reduce readmissions, they are very likely to be punished. If they do take a stab at reducing readmissions based on the murky research available to them, they run a high risk of spending more money on interventions than they will save in the form of reduced penalties.

It is human nature to respond to inescapable lose-lose situations like this by gaming – at least some of the time. Evidence that has accumulated since the HRRP began in 2012 suggests hospitals are gaming the HRRP in several ways, including making greater use of observation stays and emergency rooms, and possibly by upcoding and documenting more often that a patient left against medical advice. Some of these gaming strategies may be harming some patients.

Should we wait for that happy day when research proves conclusively that readmission-penalty programs are harming patients before pulling the plug on them? My answer is definitely not. The HRRP and private-sector programs like it should have been treated the way we treat new, unproven drugs. The burden lies on those who propose to expose the populace to new drugs to demonstrate that they are safe and effective before they are placed on the market. We should apply the same standard to new, unproven health policy schemes. The HRRP and private-sector programs like it should be terminated immediately.

[1] The Yale group also exempted admissions that end with the patient leaving against medical advice and admissions for “conditions with very high post-discharge mortality” (primarily cancer).

[2] The Yale group took a similarly cavalier attitude toward risk adjustment of readmission rates. They refused to recommend adding morbidity data from Medicare’s Part B files to the diagnoses listed by hospitals (which would have substantially improved the accuracy of risk adjustment) on the ground that it would be “technically cumbersome.” (p. 20)

[3] Here is an excerpt from the Yale group’s report to CMS in which they claimed research supported their “hospital-wide all-condition readmission measure.” “Furthermore, randomized controlled trials have shown that improvement in the following areas can directly reduce readmission rates: quality of care during the initial admission; improvement in communication with patients, their caregivers and their clinicians; patient education; pre-discharge assessment; and coordination of care after discharge. [Sixteen studies cited.] Evidence that hospitals have been able to reduce readmission rates through these quality-of-care initiatives illustrates the degree to which hospital practices can affect readmission rates. Successful randomized trials have reduced 30-day readmission rates by 20-40%…. Given that studies have shown readmissions within 30-days to be related to quality of care, and that interventions have been able to reduce 30-day readmission rates, it is reasonable to consider an all-condition 30-day readmission rate as a quality measure.” (p. 8) The Yale group developed the “hospital-wide readmission measure” in consultation with the National Quality Forum (NQF). The NQF’s explanation of this “quality measure,” published just a few months after the Yale group sent their recommendation to CMS, incorporates much of the Yale group’s report verbatim. It also cites MedPAC’s 2007 report to Congress as well as most of the studies the Yale group cited.

[4] Here is how MedPAC dodged the question of where hospitals were supposed to find the resources to hire extra staff to carry out the interventions MedPAC claimed research had already documented: “A related issue that is beyond the scope of this chapter is the lack of funding for care management services…. Perhaps once experience is gained in how much hospitals can improve and what resources are needed to achieve improvement, policymakers can consider the need for any explicit financing for care management services as a complement to a change in readmission payment policy.” (p. 115, Chapter 5, MedPAC’s 2007 report to Congress)   

[5] I have not read most of the 44 studies Nuckols et al. reviewed, but based on my reading of the readmissions literature, including the studies I discussed in this series, I’m confident that none of those 44 studies analyzed interventions applied to “readmissions” as the HRRP defines them, much less as the Yale group defines them. I’m confident all 44 studies analyzed interventions applied to carefully defined groups of patients. If my assumption is accurate, that would mean that those studies Nuckols et al. reviewed that did report net savings overestimated net savings, and those that reported net losses underestimated those losses.

Kip Sullivan is a member of the advisory board of Health Care for All Minnesota and a member of the Minnesota chapter of Physicians for a National Health Program.