Better Living Through Research Design

What if policymakers, science reporters and even scientists can’t distinguish between weak and trustworthy research studies that underlie our health care decisions?

Many studies of healthcare treatments and policies do not prove cause-and-effect relationships because they suffer from faulty research designs. The result is a pattern of mistakes and corrections: early studies of new treatments tend to show dramatic positive health effects, which diminish or disappear as more rigorous studies are conducted.

Indeed, when experts on research evidence conduct systematic reviews, they commonly exclude 50% to 75% of the studies they examine because those studies do not meet the basic research design standards required to yield trustworthy conclusions.

In many such studies, researchers try to statistically ‘adjust for’ irreconcilable differences between intervention and control groups. Yet it is these very differences that often create the reported, but invalid, effects of the treatments or policies being studied.
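
To make this concrete, here is a minimal simulation (ours, not from the article; every number in it is invented for illustration) of how an unmeasured difference between treated and untreated patients can manufacture a treatment “effect” that statistical adjustment only partially removes, while randomization eliminates it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured illness severity drives BOTH who gets treated and who
# has a bad outcome; the treatment's true effect is exactly zero.
severity = rng.normal(0.0, 1.0, n)
treated = rng.random(n) < 1 / (1 + np.exp(-2 * severity))
bad_outcome = rng.random(n) < 1 / (1 + np.exp(-(severity - 1)))

# Researchers observe only a noisy proxy of severity (say, a
# comorbidity score), so any "adjustment" is necessarily incomplete.
observed_score = severity + rng.normal(0.0, 1.0, n)

# Naive comparison: the do-nothing treatment looks harmful.
print("Unadjusted risk difference: %.3f"
      % (bad_outcome[treated].mean() - bad_outcome[~treated].mean()))

# Adjusted comparison within quartiles of the observed score:
# the spurious effect shrinks but does not vanish.
strata = np.digitize(observed_score,
                     np.quantile(observed_score, [0.25, 0.5, 0.75]))
diffs = [bad_outcome[(strata == s) & treated].mean()
         - bad_outcome[(strata == s) & ~treated].mean()
         for s in np.unique(strata)]
print("Adjusted risk difference:   %.3f" % np.mean(diffs))

# Randomization breaks the link between severity and treatment,
# so the estimate is (correctly) near zero.
randomized = rng.random(n) < 0.5
print("Randomized risk difference: %.3f"
      % (bad_outcome[randomized].mean() - bad_outcome[~randomized].mean()))
```

In this sketch the treatment does nothing, yet the unadjusted comparison suggests harm; adjusting for the noisy severity score shrinks the bias without eliminating it, and only the randomized comparison recovers the true null effect.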

In this accessible and graph-filled article published recently by the US Centers for Disease Control and Prevention, we describe five case examples of how some of the most common biases and flawed study designs impact research on important health policies and interventions, such as comparative effectiveness of medical treatments, cost-containment policies, and health information technology.

Each case is followed by examples of weak study designs that cannot control for bias, stronger designs that can, and the unintended clinical and policy consequences that may result from the seemingly dramatic reporting of poorly designed studies.

Flawed studies have dictated national flu immunization policies, influenced a generation of clinicians to falsely believe that specific sedatives could cause hip fractures in the elderly, overstated the health benefits and cost-savings of electronic health records, and grossly exaggerated the mortality benefits of hospital safety programs, resulting in trillions of dollars spent on interventions with few demonstrated health benefits.

As Ross Koppel and I recently stated in an op-ed in US News and World Report, “Our worry is that as a research community, we are losing our humility and our caution in the face of declining research funding, the need to publish and the need to show useful findings. Perhaps it’s becoming harder to admit that our so-called big data findings are not as powerful as we wish, or are, at best, uninterpretable.”

In our article we provide a simple ranking of the ability of most research designs to control for common biases. This ranking should help the public, policy makers, news media, and research trainees discriminate between biased and credible findings in healthcare studies.
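
As a loose illustration only (this is our sketch of one conventional ordering, not the article’s actual ranking, which readers should consult directly), such a hierarchy might be expressed like this:

```python
# A rough sketch, NOT the ranking published in the article: one
# conventional ordering of designs, strongest control of bias first.
DESIGN_HIERARCHY = [
    "randomized controlled trial",
    "interrupted time series with comparison series",
    "interrupted time series",
    "controlled cohort (concurrent comparison group)",
    "uncontrolled pre-post comparison",
    "cross-sectional comparison",
]

def survives_minimal_screen(design: str) -> bool:
    """Hypothetical screen: would a rigorous systematic review
    typically keep a study that used this design? (The cutoff here
    is an assumption for illustration, not the article's rule.)"""
    cutoff = DESIGN_HIERARCHY.index(
        "controlled cohort (concurrent comparison group)")
    return DESIGN_HIERARCHY.index(design) <= cutoff

print(survives_minimal_screen("uncontrolled pre-post comparison"))  # False
```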

Editors of medical journals can also benefit from this design hierarchy by refusing to publish research based on study designs that fail even the minimal criteria for inclusion in a self-respecting systematic review of the evidence, a standard they too often fail to enforce at present.

It is time to embrace the quality rather than the quantity of the studies published daily in our field. We are slowly losing credibility with the public, who find it difficult to believe many clinical and health policy studies, especially when those studies appear to contradict other recent ones.

It is not too late to remedy this misdirection. We hope that our simple illustrations of common biases and strong research designs will be helpful to the broad population of users of health research.

Stephen B. Soumerai is Professor of Population Medicine at Harvard Medical School and Harvard Pilgrim Health Care Institute. He also co-chairs the Statistics and Evaluative Sciences concentration within Harvard University’s health policy Ph.D. program.

1 reply

  1. Excellent piece. Do you think the quality of scientific research has actually declined or could this be more related to greater exposure of poor quality research owing to increased scrutiny and availability of information?