There is a plethora of health care quality data being pushed out to the public, yet no rules assure the accuracy of what is presented. The health care industry lacks standards for how valid a quality measure should be before it is used in public reporting or pay-for-performance initiatives, although some standards have been proposed.
The NQF does a good job of reviewing and approving proposed measures presented to it, but it lacks the authority to establish definitive quantitative standards that would apply broadly to purveyors of performance measures. However, as discussed earlier, many information brokers publicly report provider performance without transparency and without meeting basic validity standards. Indeed, even CMS, which helps support the NQF financially, has adopted measures for the Physician Quality Reporting System that have not undergone NQF review and approval. Congress is now considering legislation to repeal the sustainable growth rate (SGR) formula that would have CMS work directly with specialty societies to develop measures and measurement standards, presumably without requiring NQF review and approval [30].
Without industry standards, payers, policy-makers, and providers often become embroiled in a tug-of-war, with payers and policy-makers asserting that existing measures are good enough and providers arguing that they are not. Most often, neither side has data on how good the contested measures actually are. Most importantly, the public lacks valid information about quality, especially outcomes, and costs.
Indeed, most quality measurement efforts struggle to find measures that are scientifically sound yet feasible to implement with the limited resources available. Too often, feasibility trumps sound science. In the absence of valid measures, bias in estimating the quality of care provided will likely increase in proportion to the risks and rewards associated with performance. The result is that the focus of health care organizations may shift from improving care to “looking good” to attract business. Further, conscientious efforts to reduce measurement burden have significantly compromised the validity of many quality measures, making some nearly meaningless or even misleading. Unfortunately, measurement bias often remains invisible because the data collection methods behind published results are rarely reported. In short, the measurement of quality in health care is neither standardized nor consistently accurate and reliable.
In sum, while the number of performance measures is growing, the health care field lacks an entity to create the rules for reporting quality and cost data. As a result, the great variation in performance measure specifications is slowing efforts to advance quality, and at times creating conflict over opposing findings.
The field of quality measurement could advance significantly if providers and policy-makers agreed on validity thresholds and transparently reported the validity of their quality measure data. Before the Securities and Exchange Commission (SEC) was created in the aftermath of the Wall Street Crash of 1929, the financial information reported by one business could not be compared with that of another; there were no standard rules for reporting performance. Congress established the SEC as an independent, nonpartisan government entity to, among other things, help ensure standards in the disclosure of financial information, make financial performance transparent, audit businesses, ensure compliance with rules, and apply penalties for transgressions.
Policy-makers will need to consider whether such an entity should be housed at AHRQ; should be a public-private partnership, such as the NQF; or should be a separate, new government entity. Such a commission could promote standardization, transparency, and auditing of the reporting of quality and cost measures. Consistent with First Amendment guarantees of free speech, we would not give such an entity regulatory authority to require adherence to standards. Rather, we would anticipate that organizations would voluntarily seek to comply with the applicable standards for reporting performance measures. Under this model, the entity would set the rules for the development of measures and the transparent reporting of performance on those measures, analyze progress (with input from clinicians, patients, employers, and insurers), and audit publicly reported quality measure data. Private sector information brokers could then conduct secondary analyses of the reports, much as happens in the financial industry through companies like Bloomberg. This SEC-like model would thus ensure that all publicly reported quality measure data are generated from a common basis in fact and allow apples-to-apples comparisons across provider organizations.
Robert A. Berenson, MD, is an institute fellow at the Urban Institute.
Peter J. Pronovost, MD, PhD, is the director of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins, as well as Johns Hopkins Medicine’s senior vice president for patient safety and quality.
Harlan M. Krumholz, MD, is the director of the Yale-New Haven Hospital Center for Outcomes Research and Evaluation, director of the Robert Wood Johnson Foundation Clinical Scholars program at Yale University, and the Harold H. Hines, Jr. Professor of Cardiology, Investigative Medicine, and Public Health.
The authors thank Lawrence Casalino, MD, PhD, chief of the Division of Outcomes and Effectiveness Research and an associate professor at Weill Cornell Medical College, and Andrea Ducas, MPH, and Anne Weiss, MPP, of the Robert Wood Johnson Foundation for their helpful comments on this paper. This research was funded by the Robert Wood Johnson Foundation, where the report was originally published.
30. Overview of the SGR Repeal and Reform Proposal. Washington, DC: Majority Staff of the U.S. House of Representatives’ Ways and Means Committee and Energy and Commerce Committee. http://waysandmeans.house.gov/uploadedfiles/sgr_reform_short_summary_2013.pdf (accessed April 2013).