By JEFF GOLDSMITH
How the Search for Perfect Markets has Damaged Health Policy
Sometimes ideas in healthcare are so powerful that they haunt us for generations, even though their link to the real world we all live in is tenuous. “Moral hazard” is one of those ideas. In 1963, future Nobel Laureate economist Kenneth Arrow wrote an influential essay about the applicability of market principles to medicine entitled “Uncertainty and the Welfare Economics of Medical Care”.
One problem Arrow mentioned in this essay was “moral hazard”: the enhancement of demand for something people used to buy for themselves once it is financed through third-party insurance. Arrow described two varieties of moral hazard: the patient version, where insurance lowers both the final cost of a product and the inhibitions against using it, raising demand, and the physician version, where insurance pays for something the physician controls by virtue of a steep asymmetry of knowledge between physician and patient, and more care is provided than is actually needed. The physician-patient relationship is “ground zero” in the health system.
Moral hazard was only one of several factors Arrow felt would make it difficult to apply rational economic principles to medicine. The highly variable and uniquely threatening character of illness was a more important factor, as was the limited scope of market forces, because government provision of care for large numbers of poor folk was required.
One key to the durability of Arrow’s thesis was timing: it was published just two years before the enactment of Medicare and Medicaid in 1965, which dramatically expanded the government’s role in financing healthcare for the elderly and the categorically needy. In 1960, US health spending was just 5% of GDP, and a remarkable 48% of health spending was paid out of pocket by individual patients.
After 1966, when the programs took effect, health spending took off like the proverbial scalded dog. For the next seven years, Medicare spending rose nearly 29% per year, and explosive growth in health spending rose to the top of the federal policy stack. By 2003, health spending had reached 15% of GDP! Arrow’s moral hazard thesis quickly morphed into a “blame the patient” narrative that became a central tenet of the emerging field of health economics, as well as of the conservative critique of the US health cost problem.
Fuel was added to the fire by Joseph Newhouse’s RAND Health Insurance Experiment in the 1980s, which found that patients who bore a significant portion of the cost of their care used less care and were apparently no sicker at the end of the eight-year study period. An important and widely ignored coda to the RAND study was that patients with higher cost shares were incapable of distinguishing between useful and useless medical care, and thus stinted on life-saving medications, diminishing their longer-term health prospects. A substantial body of consumer research has since demonstrated that patients are in fact terrible at making “rational” economic choices regarding their health benefits.
The RAND study provided justification for ending so-called first-dollar health coverage and, later, for the rise of high-deductible health plans. Today more than half of all Americans have high-deductible health coverage. Not surprisingly, half of all Americans also report forgoing care because they do not have the money to pay their share of the cost!
However, a different moral hazard narrative took hold in liberal/progressive circles, one that blamed the physician, rather than the patient, for the health cost crisis.