Medical Practice

Implementation May Be a Science, But, Alas, Medicine Remains an Art

By KIM BELLARD

I’ve been working in healthcare for over forty (!) years now, in one form or another, but it wasn’t until this past week that I heard of implementation science.  Which, in a way, is sort of the problem healthcare has. 

Granted, I’m not a doctor or other clinician, but everyone working in healthcare should be aware of, and thinking a lot about, “the scientific study of methods to promote the systematic uptake of research findings and other EBPs into routine practice, and, hence, to improve the quality and effectiveness of health services” (Bauer, et al.). 

It took a JAMA article, by Rita Rubin, to alert me to this intriguing science: It Takes an Average of 17 Years for Evidence to Change Practice—the Burgeoning Field of Implementation Science Seeks to Speed Things Up.

It turns out that implementation science is nothing new. There has been a journal devoted to it (cleverly named Implementation Science) since 2006, along with the newer Implementation Science Communications. Both focus on articles that illustrate “methods to promote the uptake of research findings into routine healthcare in clinical, organizational, or policy contexts.” 

Brian Mittman, Ph.D., has stated that the aims of implementation science are:

  • “To generate reliable strategies for improving health-related processes and outcomes and to facilitate the widespread adoption of these strategies.
  • To produce insights and generalizable knowledge regarding implementation processes, barriers, facilitators, and strategies.
  • To develop, test, and refine implementation theories and hypotheses, methods, and measures.”

Dr. Mittman distinguished it from quality improvement largely because QI focuses primarily on local problems, whereas “the goal of implementation science is to develop generalizable knowledge.” 

Ms. Rubin’s headline highlights the problem healthcare has: it can take an alarmingly long time for empirical research findings to be incorporated into standard medical practice.  There is some dispute about whether the 17-year figure is accurate, but it is widely accepted that, whatever the actual number is, it is much too long.  Even then, Ms. Rubin reminds us, it is further estimated that only 1 in 5 interventions ever makes it to routine clinical care.  

She quotes University of Washington gastroenterologist Rachel Issaka, MD, MAS: “implementation science is really trying to close that gap between what we know and what we do.”  Or, rather, between what is known by some and what most do. “The hope of implementation science is that we can synthesize what works for whom and for where and for what disease and close that 17-year gap,” Nathalie Moise, MD, MS, director of implementation science research at Columbia University told JAMA.

It is worth noting that implementation science focuses both on getting clinicians to start using newly proven treatments and on getting them to stop using longstanding treatments that have subsequently been shown to be of little or no value (“deimplementation”). 

There are implementation science departments or programs at Brown, Duke, Johns Hopkins, Northwestern, Penn, UCSF, UNC, University of Michigan, University of Washington, and Wake Forest, to name a few. Some are in the school of medicine, some in the school of public health. 

With such widespread training in the field, you’d think we’d be doing better at closing that gap – or, as Ms. Rubin labels it, that “chasm” – between what we should do and what we do.  But here we are still, and, as Ms. Rubin points out, COVID proved the point. 

“COVID-19 has shown the world that ‘knowing what to do’ does not ensure ‘doing what we know,’” wrote implementation science pioneer Enola Proctor, PhD, a professor emerita of social work, and infectious disease specialist Elvin Geng, MD, MPH, director of the Center for Dissemination and Implementation at the Institute for Public Health, both at Washington University in St. Louis, in a 2021 Science editorial.

Few would argue that clinicians are actively ignoring best practices. It’s more about how they were trained, how others around them practice, and what they’re used to and comfortable with, all hugely compounded by the sheer mass of medical knowledge.  Medical knowledge is estimated to double every couple of months, and that doubling time keeps getting shorter; it was estimated at two years only five years ago.  No one (no human, anyway) can keep up.

Other limitations are that studies may not have had diverse enough populations, or that there are socioeconomic barriers to the desired care.  Ms. Rubin cites the simple lack of a ride home after a colonoscopy as a reason some patients decline getting one. “I do think that White, high socioeconomic clinicians just have no clue that there are people out there who lack transportation options,” Dr. Issaka notes. That’s only one of a million (a billion?) blind spots that our health care system has about the people using it.

One has to wonder what kind of industry healthcare is that it needs a science to study how to implement practices that are proven to be more effective for its customers. Most other industries focus on this as a matter of course, as a matter of survival, but not healthcare.

Much of this, I fear, is our historical view that physicians are as much, if not more, “artists” as scientists. We defer to their judgment. We lack the mechanisms to ensure that they’re practicing similarly to other physicians in their community, much less to those in other communities, and still less in line with best practices and the most recent evidence. That’s a big reason why healthcare needs implementation science, and why making it actually succeed has been slow going.

Big Data and AI give us the tools to change this.

Using Big Data, we have the ability to collect and analyze what happens to patients. We can know what treatments physicians are ordering, and whether those treatments conform to best practices. Best of all, it should allow us to evaluate effectiveness on much bigger populations, in more widely diverse situations, and on much faster time frames.

Using AI, individual clinicians will be able to better keep up with existing medical knowledge. That’s an impossible task for any human now, but one that AI is already starting to show it can handle. Most current AI models are trained on fixed data sets, which can’t include the most current research, but those data sets are still much better than a clinician’s memory, and in the near future AI should be able to find current findings in real time. I love that there is implementation science, and I wish its practitioners great success, but I long for the day when healthcare has its principles baked into its everyday practice. 

Kim is a former marketing exec at a major Blues plan, editor of the late & lamented Tincture.io, and now regular THCB contributor.