The Health Care Blog – https://thehealthcareblog.com
Everything you always wanted to know about the Health Care system. But were afraid to ask.

The Latest AI Craze: Ambient Scribing
https://thehealthcareblog.com/blog/2024/03/18/the-latest-ai-craze-ambient-scribing/
Mon, 18 Mar 2024 21:14:52 +0000

By MATTHEW HOLT

Okay, I can’t do it any longer. As much as I tried to resist, it is time to write about ambient scribing. But I’m going to do it in a slightly odd way.

If you have met me, you know that I have a strange English-American accent and speak in a garbled manner. Yet I’m using the built-in voice recognition that Google supplies to write this story now.

Side note: I dictated this whole thing on my phone while watching my kids’ water polo game, which had a fair amount of background noise. And I think you’ll be modestly amused by how terrible the original transcript was. But then I put that entire mess of a text into ChatGPT and told it to fix the mistakes. It did an incredible job, and the output required surprisingly little editing.

Now, it’s not perfect, but it’s a lot better than it used to be, and that is due to a couple of things. One is the vast improvement in acoustic recording, and the second is the combination of Natural Language Processing and artificial intelligence.

Which brings us to ambient listening now. It’s very common in all the applications we use in business, like Zoom, and in features like transcript creation for YouTube videos. Of course, we have had something similar in the medical business for many years, particularly in radiology voice recognition. It has only been in the last few years that transcribing the toughest job of all–the clinical encounter–has gotten easier.

The problem is that doctors and other professionals are forced to write up the notes and history of all that has happened with their patients. The introduction of electronic medical records made this a major pain point. Doctors used to take notes mostly in shorthand, leaving the abstraction of these notes for coding and billing purposes to be done by some poor sap in the basement of the hospital.

Alternatively in the past, doctors used to dictate and then send tapes or voice files off to parts unknown, but then would have to get those notes back and put them into the record. Since the 2010s, when most American health care moved towards using  electronic records, most clinicians have had to type their notes. And this was a big problem for many of them. It has led to a lot of grumpy doctors not only typing in the exam room and ignoring their patients, but also having to type up their notes later in the day. And of course, that’s a major contributor to burnout.

To some extent, the issue of having to type has been mitigated by medical scribes–actual human beings wandering around behind doctors pushing a laptop on wheels and typing up everything that was said by doctors and their patients. And there have been other experiments. Augmedix started off using Google Glass, allowing scribes in remote locations like Bangladesh to listen and type directly into the EMR.

But the real breakthrough has been in the last few years. Companies like Suki, Abridge, and the late Robin started to promise doctors that they could capture the ambient conversation and turn it into proper SOAP notes. The biggest splash was made by the biggest dictation company, Nuance, which in the middle of this transformation got bought by one of the tech titans, Microsoft. Six years ago, they had a demonstration at HIMSS showing that ambient scribing technology was viable. I attended it, and I’m pretty sure that it was faked. Five years ago, I also used Abridge’s tool to try to capture a conversation I had with my doctor — at that time, they were offering a consumer-facing tool – and it was pretty dreadful.

Fast forward to today, and there are a bunch of companies with what seem to be really very good products.

Nuance’s DAX is in relatively wide use. Abridge has refocused itself on clinicians and has excellent reviews (you can see my interview and demo with CEO Shiv Rao here), and Nabla has just published a really compelling review of its first big rollout with Kaiser Permanente Northern California, in the NEJM no less. (FD: I am an advisor to Nabla, although not involved in its KP work.) And others like DeepScribe, Ambience, Augmedix and even newcomers Innovaccer and Sunoh.ai seem to be good options.

If you take a look at the results of the NEJM-published study that was done in Northern California using Nabla’s tool, you’ll see that clinicians adopted it very quickly, with high marks both for its accuracy and for its ability to deliver a SOAP note and patient summary very quickly. And it has returned a lot of time to the clinician’s day. (Worth noting that independent practice Carbon Health has built its own in-house ambient scribe and used it on 500K visits so far.)

The big gorilla on the EMR side, Epic, has integrated to some extent with Nuance and Abridge, but many of the other companies are both working to integrate with Epic and are inside other EMR competitors – for instance, NextGen is private-labeling Nabla. At the moment, for basically everyone, integration really just means getting the note summary into the notes section of the EMR.

But there is definitely more to come. For many years, NLP companies like Apixio, Talix, Health Equity and more (all seemingly bought by Edifecs) have been working on EMR notes to aid coders in billing, and it’s an easy leap to assume that will happen more and more with ambient scribing. And of course, the same thing is going to be true for clinical decision support and pretty soon integration with orders and workflow. In other words, when a doctor says to a patient, “We are going to start you on this new drug,” not only will it appear in the SOAP note, but the prescription or the lab order will just be magically done.

But is it reasonable to suppose that we are just paving the cowpath here? Ambient scribing is just making the physician office visit data more accessible. It’s not making it go away, which is what we should be trying to do. But I can’t blame the ambient scribing companies for that. And as I have (at length!) pointed out, we are still stuck in a fee-for-transaction system in which the health services operators in this country make money by doing stuff, writing it up, and charging for it. That is not going away anytime soon.

But given that’s where we are, I think we can still see how the ambient scribing battle will play out. 

Nuance’s DAX has the advantage of a huge client base, but frankly, Nuance has not been an innovative company. One former employee told me that they have never invented anything. And indeed, the DAX system was massively enhanced by the tech Nuance acquired when purchasing a company called Saykara in 2021, some years after that unconvincing demo back at HIMSS 2018.

So innovation matters, but the other issue is the cost of ambient scribing, which in some cases is nearing the cost of a real scribe. Nuance’s DAX, Suki, and even new entries like Sunoh seem to be around the $400 to $600 a month per physician level. Sunoh is offered by eClinicalWorks and has some co-ownership with that EMR vendor. What’s amazing is that at the price quoted at HIMSS of $1.25 per encounter, the ambient scribing tool would cost a busy family practice doc seeing 25 patients a day about as much as the EMR subscription, around $600 a month.
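The arithmetic behind that comparison is easy to check. A quick back-of-envelope sketch (assuming roughly 20 clinic days a month, a number the post doesn’t give):

```python
# Back-of-envelope: per-encounter ambient scribe pricing vs. a flat monthly EMR fee.
# CLINIC_DAYS_PER_MONTH is an assumption, not from the post.
PRICE_PER_ENCOUNTER = 1.25    # dollars, the price quoted at HIMSS
PATIENTS_PER_DAY = 25         # a busy family practice doc
CLINIC_DAYS_PER_MONTH = 20    # assumed: ~5 days a week, 4 weeks

monthly_cost = PRICE_PER_ENCOUNTER * PATIENTS_PER_DAY * CLINIC_DAYS_PER_MONTH
print(f"${monthly_cost:.2f} per month")  # prints $625.00 per month
```

At those assumptions the scribe alone runs about $625 a month, right in the neighborhood of the roughly $600 EMR subscription cited above.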

Abridge has been quoted at roughly $250 a month, and Nabla seems to be considerably less expensive, around $120. But realistically, the whole market will have to compress to about that level because the switching costs are going to be trivial. Right now, with most of them requiring a copy and paste into the EMR, the switching cost is almost zero.

Which then leads to some more technical issues. How good will these systems become? (Noting that they are already very good, according to reviews on the Elion site.) And what will happen to the way they store data? Most of them currently move the data back to their cloud for processing. But this may not be acceptable for health systems that like to keep data within their firewalls. For what it’s worth, Nabla, being from the EU and very conscious of GDPR, has been pushing the fact that its processing stays on the physician’s local machine – although I’m not sure how much difference that makes in the market.

The other technical issue is the reliance on the big LLMs from OpenAI, Google, etc., compared to companies that are using their own LLMs. Again, this may just remain a technical issue that no one cares much about. On the other hand, accuracy and lack of anonymization will continue to be big issues if more generic LLMs are used. Now that the fascination with the initial ChatGPT-type LLMs is wearing off, there’s going to be a lot more concern about how AI is used in health care as a whole–particularly its tendency to “hallucinate” or get stuff wrong. That will obviously impact ambient scribing, even if its mistakes may not be as serious as errors in patient diagnosis or treatment suggestions.

So it’s too early to know exactly how this plays out, but not by much. In some ways, it’s very refreshing to see the speed at which this new technology is being adopted. As it is, the number of American doctors using ambient scribing is probably below 10%. But it’s highly likely that number goes up to 70%+ in very short order.

The problem it is fixing for doctors has been around for thousands of years, and has been particularly acute for the last twenty or so. It’s almost as if the doctor struggling to type up their notes in Epic–written up so eloquently by Bob Wachter in his book “The Digital Doctor”–is going to be a historical artifact that lasted for fifteen years or so. Maybe it will be talked about nostalgically, like those of us who reminisce about having to get online with dial-up modems.

I’m pretty sure that the winners will be apparent in a couple of years, and that somebody, possibly Microsoft, or possibly the investors in big rounds at 2021-style valuations for Abridge or Ambience, may come to regret what happened. Alternatively, one of them may be a monopoly winner that soon starts printing money.

I suspect, though, that ambient scribing will essentially become a close-to-free product across all types of business, and that clinical care will not be much of an exception. That suggests that a company like Anthropic or OpenAI, with close connections to the tech titans Amazon and Microsoft, will end up becoming more of a feature for those giants. My guess is that they will be delivering that product for free, probably also into much of clinical care, including ambient scribing. Of course, Epic may decide that it wants to do the same thing, which may leave its partners, including Microsoft, in the lurch.

It’s reasonable to expect that all aspects of life, including education, general business, consumer activity, and more, will find note-taking, summaries, and decision support a natural part of the next round of computing. For instance, anyone who has had a conversation with their contractor when renovating a house would probably love to have the notes, to-dos and agreements automatically recorded. It’ll be a whole new way of “keeping people honest”. Same thing for health care, I suspect.

But to be fair, we are not there yet. As I said at the top, my dictation tool took down this whole piece during a Sunday water polo game, the original transcript was a mess, and it took ChatGPT to turn it into something that needed only light editing.

AI is getting very smart at working on incomplete information, and health care (as well as clinicians and patients) will benefit.

Matthew Holt is the publisher of The Health Care Blog and once upon a time ran the Health 2.0 Conference.

2024 Prediction: Society Will Arrive at an Inflection Point in AI Advancement
https://thehealthcareblog.com/blog/2023/12/27/2024-prediction-society-will-arrive-at-an-inflection-point-in-ai-advancement/
Wed, 27 Dec 2023 05:26:00 +0000

By MIKE MAGEE

For my parents, March, 1965 was a banner month. First, that was the month that NASA launched the Gemini program, unleashing “transformative capabilities and cutting-edge technologies that paved the way for not only Apollo, but the achievements of the space shuttle, building the International Space Station and setting the stage for human exploration of Mars.” It also was the last month that either of them took a puff of their favored cigarette brand – L&M’s.

They are long gone, but the words “Gemini” and the L’s and the M’s have taken on new meaning and relevance now six decades later.

The name Gemini reemerged with great fanfare on December 6, 2023, when Google chair, Sundar Pichai, introduced “Gemini: our largest and most capable AI model.” Embedded in the announcement were the L’s and the M’s, as we see here: “From natural image, audio and video understanding to mathematical reasoning, Gemini’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.”

Google’s announcement also offered a head-to-head comparison with GPT-4 (Generative Pre-trained Transformer 4). It is the product of OpenAI, which began as a non-profit initiative, and was released on March 14, 2023. Microsoft’s AI search engine, Bing, helpfully informs us that, “OpenAI is a research organization that aims to create artificial general intelligence (AGI) that can benefit all of humanity…They have created models such as Generative Pretrained Transformers (GPT) which can understand and generate text or code, and DALL-E, which can generate and edit images given a text description.”

While “Bing” goes all the way back to a Steve Ballmer announcement on May 28, 2009, it was 14 years later, on February 7, 2023, that the company announced a major overhaul that, one month on, would allow Microsoft to broadcast that Bing (by leveraging an agreement with OpenAI) now had more than 100 million users.

Which brings us back to the other LLM (large language model) – GPT-4, which the Gemini announcement explores in a head-to-head comparison with its new offering. Google embraces text, image, video, and audio comparisons, and declares Gemini superior to GPT-4.

Mark Minevich, a “highly regarded and trusted Digital Cognitive Strategist,” writing this month in Forbes, seems to agree with this, writing, “Google rocked the technology world with the unveiling of Gemini – an artificial intelligence system representing their most significant leap in AI capabilities. Hailed as a potential game-changer across industries, Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on yet goes far beyond predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically.”

Expect to hear the word “multimodality” repeatedly in 2024 and with emphasis.

But academics will be quick to remind that the origins can be traced all the way back to 1952 scholarly debates about “discourse analysis”, at a time when my Mom and Dad were still puffing on their L&M’s. Language and communication experts at the time recognized “a major shift from analyzing language, or mono-mode, to dealing with multi-mode meaning making practices such as: music, body language, facial expressions, images, architecture, and a great variety of communicative modes.”

Minevich believes that “With Gemini’s launch, society has arrived at an inflection point with AI advancement.” Powerhouse consulting group, BCG (Boston Consulting Group), definitely agrees. They’ve upgraded their L&M’s, with a new acronym, LMM, standing for “large multimodal model.” Leonid Zhukov, Ph.D, director of the BCG Global AI Institute, believes “LMMs have the potential to become the brains of autonomous agents—which don’t just sense but also act on their environment—in the next 3 to 5 years. This could pave the way for fully automated workflows.”

BCG predicts an explosion of activity among its corporate clients focused on labor productivity, personalized customer experiences, and accelerated (especially) scientific R&D. But they also see high volume consumer engagement generating content, new ideas, efficiency gains, and tailored personal experiences.

This seems to be BCG talk for “You ain’t seen nothing yet.” In 2024, they say all eyes are on “autonomous agents.” As they describe what’s coming next: “Autonomous agents are, in effect, dynamic systems that can both sense and act on their environment. In other words, with stand-alone LLMs, you have access to a powerful brain; autonomous agents add arms and legs.”

This kind of talk is making a whole bunch of people nervous. Most have already heard Elon Musk’s famous 2023 quote, “Mark my words, AI is far more dangerous than nukes. I am really quite close to the cutting edge in AI, and it scares the hell out of me.”  BCG acknowledges as much, saying, “Using AI, which generates as much hope as it does horror, therefore poses a conundrum for business… Maintaining human control is central to responsible AI; the risks of AI failures are greatest when timely human intervention isn’t possible. It also demands tempering business performance with safety, security, and fairness… scientists usually focus on the technical challenge of building goodness and fairness into AI, which, logically, is impossible to accomplish unless all humans are good and fair.”

Expect in 2024 to see, once again, the worn-out phrase “Three Pillars.” This time it will be attached to LMM AI, and it will advocate for three forms of “license to operate”:

  1. Legal license – “regulatory permits and statutory obligations.”
  2. Economic license – ROI to shareholders and executives.
  3. Social license – a social contract delivering transparency, equity and justice to society.

BCG suggests that trust will be the core challenge, and that technology is tricky. We’ve been there before. The 1964 Surgeon General’s report knocked the socks off tobacco company execs who thought high-tech filters would shield them from liability. The government report burst that bubble by stating, “Cigarette smoking is a health hazard of sufficient importance in the United States to warrant appropriate remedial action.” Then came Gemini 6A’s first launch attempt, on December 12, 1965. It was scrubbed when its fuel igniter failed.

Generative-AI-driven LMMs will “likely be transformative,” but will clearly have their ups and downs as well. As BCG cautions, “Trust is critical for social acceptance, especially in cases where AI can act independent of human supervision and have an impact on human lives.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex.

Holograms to the Rescue
https://thehealthcareblog.com/blog/2021/05/25/holograms-to-the-rescue/
Tue, 25 May 2021 14:32:19 +0000

By KIM BELLARD

Google is getting much (deserved) publicity for its Project Starline, announced at last week’s I/O conference.  Project Starline is a new 3D video chat capability that promises to make your Zoom experience seem even more tedious.  That’s great, but I’m expecting much more from holograms – or even better technologies.  Fortunately, there are several such candidates.

For anyone who has been excited about advances in telehealth, you haven’t seen anything yet.

If you missed Google’s announcement, Project Starline was described thusly:

Imagine looking through a sort of magic window, and through that window, you see another person, life-size and in three dimensions. You can talk naturally, gesture and make eye contact.

Google says: “We believe this is where person-to-person communication technology can and should go,” because: “The effect is the feeling of a person sitting just across from you, like they are right there.” 

Sounds pretty cool.  The thing, though, is that you’re still looking at the images through a screen.  Google can call it a “magic window” if it wants, but there’s still a screen between you and what you’re seeing.

Not so with Optical Trap Displays (OTDs).  These were pioneered by the BYU holography research group three years ago, and, in their latest advance, they’ve created – what else? – floating lightsabers that emit actual beams:

Optical trap displays are not, strictly speaking, holograms.  They use a laser beam to trap a particle in the air and then push it around, leaving a luminated, floating path.  As the researchers describe it, it’s like “a 3D printer for light.”

The authors explain:

The particle moves through every point in the image several times a second, creating an image by persistence of vision.  The higher the resolution and the refresh rate of the system, the more convincing this effect can be made, where the user will not be able to perceive updates to the imagery displayed to them, and at sufficient resolution will have difficulty distinguishing display image points from real-world image points.

Lead researcher Dan Smalley notes:

Most 3D displays require you to look at a screen, but our technology allows us to create images floating in space — and they’re physical; not some mirage.  This technology can make it possible to create vibrant animated content that orbits around or crawls on or explodes out of everyday physical objects.

Co-author Wesley Rogers adds: “We can play some fancy tricks with motion parallax and we can make the display look a lot bigger than it physically is.  This methodology would allow us to create the illusion of a much deeper display, up to a theoretically infinite size.”

Indeed, their paper in Nature speculates: “This result leads us to contemplate the possibility of immersive OTD environments that not only include real images capable of wrapping around physical objects (or the user themselves), but that also provide simulated virtual windows into expansive exterior spaces.”

I don’t know what all of that means, but it sounds awfully impressive.

The BYU researchers believe: “Unlike OTDs, holograms are extremely computationally intensive and their computational complexity scales rapidly with display size.  Neither is true for OTD displays.”  They need to meet Liang Shi, a Ph.D. student at MIT who is leading a team developing “tensor holography.” 

Before anyone with mathemaphobia freaks out about the “tensor,” let’s just say that it is a way to produce holograms almost instantly. 

The work was published in Nature last March.  The technique uses deep neural networks to generate 3D holograms in near real time. I’ll skip the technical details of how this all works, but you can watch their video:

Their approach doesn’t require supercomputers or long calculations, instead allowing neural networks to teach themselves how to generate the holograms. Amazingly, the “compact tensor network” requires less than 1 MB of memory.  The images can be calculated from a multi-camera setup or LiDAR sensor, which are becoming standard on smartphones.

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” Mr. Shi says.

Joel Kollin, a Microsoft researcher who was not involved in the research, told MIT News that the research “shows that true 3D holographic displays are practical with only moderate computational requirements.” 

All of these efforts are already thinking about healthcare.  Google is currently testing Project Starline in a few of its offices, but is betting big on its future.  It has explicitly picked healthcare as one of the first industries it is working with, aiming for trial demos later this year.

The BYU researchers see medicine as a good use for OTDs, helping doctors plan complicated surgeries: “a high-resolution MRI with an optical-trap display could show, in three dimensions, the specific issues they are likely to encounter. Like a real-life game of Operation, surgical teams will be able to plan how to navigate delicate aspects of their upcoming procedures.”

The MIT researchers believe the approach offers much promise for VR, volumetric 3D printing, microscopy, visualization of medical data, and the design of surfaces with unique optical properties. 

If you don’t know what “volumetric 3D printing” is (and I didn’t), it’s been described as like an MRI in reverse: “the form of the object is projected to form the model instead of scanning the object.”  It could revolutionize 3D printing, and, for healthcare specifically, “Being able to 3D print from all spatial dimensions at the same time could be instrumental in producing complex organs…This would enable better and more functional vascularity and multi-cellular-material structures.”

As for “visualization of medical data,” for example, surgeons at The Ohio State University Wexner Medical Center are already using “mixed reality 3D holograms” to assist in shoulder surgery.  Holograms have also been used for cardiac, liver, and spine surgeries, among others, as well as in imaging.    

2020 was, in essence, a coming out party for video conferencing in general and for telehealth in particular.  The capabilities had been around, but it wasn’t until we were locked down and reluctant to be around others that we started to experience its possibilities.  Still, though, we should be thinking of it as version 1.0.

Versions 2.0 and beyond are going to be more realistic, more interactive, and less constrained by screens.  They might be holograms, tensor holograms, optical trap displays, or other technologies I’m not aware of.  I just hope it doesn’t take another pandemic for us to realize their potential.

Kim is a former emarketing exec at a major Blues plan, editor of the late & lamented Tincture.io, and now regular THCB contributor.

Health in 2 Point 00, Episode 106 | More Post-JPM Deals, & a Google/Cerner catfight?
https://thehealthcareblog.com/blog/2020/01/23/health-in-2-point-00-episode-106-more-post-jpm-deals-a-google-cerner-catfight/
Thu, 23 Jan 2020 19:54:30 +0000

Today on Health in 2 Point 00, everybody’s getting $20 million! There are so many deals to cover. AI chatbot symptom checker Buoy gets $20 million, Clew gets $20 million, diabetes management company Oviva gets $21 million, Covera gets $23.5 million for diagnostic improvement in radiology, and Zipari gets $22.5 million for its work on engagement in health plans. Another $20 million goes to Kaizen (yet another nonemergency medical transportation company), and Color raises $75 million for personal genetics testing. In other news, Google and Cerner—the catfight begins just in time so we don’t have to talk too much about interoperability at HIMSS. And if you were also waiting with bated breath to learn where Mona Siddiqui ended up, tune in for the gossip on this episode of Health in 2 Point 00. —Matthew Holt

Patient-Directed Uses vs. The Platform
https://thehealthcareblog.com/blog/2019/12/18/patient-directed-uses-vs-the-platform/
Wed, 18 Dec 2019 14:34:21 +0000

By ADRIAN GROPPER, MD

This piece is part of the series “The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?” which explores whether it’s possible to advance interoperability while maintaining privacy. Check out other pieces in the series here.

It’s 2023. Alice, a patient at Ascension Seton Medical Center Austin, decides to get a second opinion at Mayo Clinic. She’s heard great things about Mayo’s collaboration with Google that everyone calls “The Platform”. Alice is worried, and hoping Mayo’s version of Dr. Google says something more than Ascension’s version of Dr. Google. Is her Ascension doctor also using The Platform?

Alice makes an appointment in the breast cancer practice using the Mayo patient portal. Mayo asks permission to access her health records. Alice is offered two choices: one uses HIPAA without her consent, and the other is under her control. Her choices are:

  • Enter her demographics and insurance info and have The Platform use HIPAA surveillance to gather her records wherever Mayo can find them, or
  • Alice copies her Mayo Clinic ID and enters it into the patient portal of any hospital, lab, or payer to request her records be sent directly to Mayo.

Alice feels vulnerable. What other information will The Platform gather using their HIPAA surveillance power? She recalls a 2020 law that expanded HIPAA to allow access to her behavioral health records at Austin Rehab.

Alice prefers to avoid HIPAA surprises and picks the patient-directed choice. She enters her Mayo Clinic ID into Ascension’s patient portal. Unfortunately, Ascension is using the CARIN Alliance code of conduct and best practices. Ascension tells Alice that they will not honor her request to send records directly to Mayo. Ascension tells Alice that she must use the Apple Health platform or some other intermediary app to get her records if she wants control.  

Disappointed, Alice tells Ascension to email her records to her Gmail address. In a 2021 settlement with the Federal Trade Commission, Facebook and Google agreed that they will not use data in their messaging services for any other purposes, including “platforms”. Unfortunately, this constraint does not apply to smaller data brokers.

Alice gets her records from Ascension the old-fashioned way, by plain Gmail under the government interpretation of her right of access. The rules even say that Alice can request direct transmission of her records in an insecure manner such as plain email if she chooses. But Alice can’t send them directly to Mayo because Mayo, also following CARIN Alliance guidelines, insists that Alice install an app on her phone or sign up for some other platform. 

Alice wonders how we got from clear Federal regulations for patient-directed access to anywhere, to a situation where she’s forced to wait days for her records, receive them by email, and then mail them to Mayo. Alice wonders.

It’s December 2019. 

This post is about the relationship between two related health records technologies: patient-directed uses of data and platforms for uses of patient data. As physicians and patients, we’re now familiar with the first generation of platforms for patient data called electronic health records or EHR. To understand why CARIN matters, the only thing about EHRs that you need to keep in mind is that neither physicians nor patients get to choose the EHR. The hospitals do. The hospitals now have bigger things in mind, but first they have to get past the frustration that drove the massively bipartisan 21st Century Cures Act in 2016. The hospitals and big tech vendors are preparing for artificial intelligence and machine learning “platforms”. Patient consent and transparency of business deals between hospitals and tech stand in their way.

A platform is something everything else is built on. The platform operator decides who can do what, and uses that power for profit. We’re familiar with Google and Apple as the platforms for mobile apps. Google and Apple decide. A platform for use of health data will have the inside track on machine learning and artificial intelligence for us as patients and doctors. The more data, the better. What will be the relationship between the hospital-controlled platform of today’s EHRs and tomorrow’s AI-enabled platforms? Will patients choose a doctor, a hospital, or just send health records to the AI directly? Will US health AI compete with Chinese AI, given that the Chinese AI has access to a lot more kinds of data from a lot more places? The practices that will control much of tomorrow’s digital health are being worked out, mostly behind closed doors, by lobbyists, today.

Three years on, the nation still awaits regulations on “information blocking” based on the Cures Act. Even so, the American Health Information Management Association (AHIMA), American Medical Association (AMA), American Medical Informatics Association (AMIA), College of Healthcare Information Management Executives (CHIME), Federation of American Hospitals (FAH), Medical Group Management Association (MGMA), and Premier Inc. are sending letters to House and Senate committees hoping for a further delay of the regulations.

Access to vast amounts of patient data for machine learning is also driving efforts to weaken HIPAA’s already weak privacy provisions. Here’s a very nice summary by Kirk Nahra. Are we headed for parity with Chinese surveillance practices? 

For their part, our leading health IT academics propose “… strengthening the federal role in protecting health data under patient-mediated data exchange…” Where is this data we’re protecting? In hospital EHRs, of course. We’re led to believe that hospitals are the safe place for our data and patient-directed uses need to be “balanced” by the risk of bypassing the hospitals and their EHRs. Which brings us back to CARIN Alliance as the self-appointed spokes-lobby for patient-directed health information exchange.

According to CARIN, “Consumer-directed exchange occurs when a consumer or an authorized caregiver invokes their HIPAA Individual Right of Access (45 CFR § 164.524) and requests their digital health information from a HIPAA covered entity (CE) via an application or other third-party data steward.” (emphasis added) A third-party data steward is a fancy name for a platform. But do you or your doctor need a platform to manage uses of your data?

HIPAA does not say that the individual right of access has to involve a third party data steward. We are familiar with our right to ask one hospital to send health records directly to another hospital, or to a lawyer, or anywhere else using mail or fax. But CARIN limits the patient’s HIPAA right of access dramatically: “All of the data exchange is based on the foundation of a consumer who invokes their individual right of access or consent to request their own health information. This type of data exchange does not involve any covered entity to covered entity data exchange.” (emphasis added)
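In practice, the patient-designated app the access right contemplates is just an HTTP client talking to a provider’s FHIR endpoint. A minimal sketch of that request (the base URL, patient ID, and token below are placeholders for illustration, not real Ascension or Mayo services):

```python
# Hypothetical sketch: a patient-designated app exercising the HIPAA
# individual right of access against a provider's FHIR R4 endpoint.
# The base URL, patient ID, and bearer token are placeholders.

def build_patient_record_request(base_url: str, patient_id: str, token: str):
    """Return the (url, headers) pair for a FHIR read of the patient's record."""
    url = f"{base_url.rstrip('/')}/Patient/{patient_id}"
    headers = {
        # The token would be obtained via a SMART on FHIR / OAuth 2.0 flow
        "Authorization": f"Bearer {token}",
        "Accept": "application/fhir+json",
    }
    return url, headers

url, headers = build_patient_record_request(
    "https://fhir.example-hospital.org/r4", "alice-123", "example-token")
print(url)  # https://fhir.example-hospital.org/r4/Patient/alice-123
```

Nothing in this flow requires a third-party data steward in the middle; the app acts directly on the patient’s behalf.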

By restricting the meaning of patient-directed access beyond what the law allows, everybody in CARIN gets something they want. The hospitals get to keep more control over doctors and patients while also using patient data without consent for machine learning and artificial intelligence in secret business deals. The technology vendors get to expand their role as data brokers. And government gets to outsource some of its responsibility for equity, access, and patient safety to private industry. To promote these interests, the CARIN version of patient-directed access reduces physicians’ and patients’ control over data uses far more than the law requires.

The CARIN model for digital health and machine learning is simple: support as much unconsented use and sale of patient data by hospitals and EHR vendors as possible, while limiting consented use to platform providers like Amazon, Google, IBM, Microsoft, Oracle, and Salesforce, along with CARIN board member Apple.

CARIN seems to be a miracle of consensus. They have mobilized the White House and HHS to their cause. Respected public interest organizations like The Commonwealth Fund are lending their name to these policies. Is it time for this patient advocate to join the party?

Some of what CARIN is advocating by championing the expansion of the FHIR interface standards is worthwhile. But before I sign on, what I want CARIN to do is:

  • Remove the scope limitation on hospital-to-hospital patient-directed sharing.
  • Suspend work on the Code of Conduct – here’s why.
  • Separate work on FHIR data itself from work on access authorization to FHIR data.
  • Do all work in an open forum with open remote access, open minutes, and an email list for discussion between meetings. Participation in the HEART Workgroup (co-chaired by ONC and also designed to promote patient-directed uses) would be part of this.

Digital health is our future. Will it look like The Mayo Platform with Google and Google’s proprietary artificial intelligence behind the curtain? Will digital health be controlled by proprietary and often opaque Google or Apple or Facebook app store policies?

The CARIN / CMS Connectathon and CARIN Community meeting are taking place this week. Wouldn’t it be a dream if they engaged in a public conversation about these policies from Alice’s perspective? And for my friends Chris and John at Mayo: what can they do to earn Alice’s trust in their Platform by giving her and her doctors unprecedented transparency and control?

Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country.

]]>
Concrete Problems: Experts Caution on Construction of Digital Health Superhighway https://thehealthcareblog.com/blog/2019/11/27/concrete-problems-experts-caution-on-construction-of-digital-health-superhighway/ https://thehealthcareblog.com/blog/2019/11/27/concrete-problems-experts-caution-on-construction-of-digital-health-superhighway/#comments Wed, 27 Nov 2019 13:00:00 +0000 https://thehealthcareblog.com/?p=97101 Continue reading...]]>

By MICHAEL MILLENSON

If you’re used to health tech meetings filled with go-go entrepreneurs and the investors who love them, a conference of academic technology experts can be jarring.

Speakers repeatedly pointed to portions of the digital health superhighway that sorely need more concrete – in this case, concrete knowledge. One researcher even used the word “humility.”

The gathering was the annual symposium of the American Medical Informatics Association (AMIA). AMIA’s founders were pioneers. Witness the physician featured in a Wall Street Journal story detailing his use of “advanced machines [in] helping diagnose illness” – way back in 1959.

That history should provide a sobering perspective on the distinction between inevitable and imminent (a difference at least as important to investors as intellectuals), even on hot-button topics such as new data uses involving the electronic health record (EHR). 

I’ve been one of the optimists. Earlier this year, my colleague Adrian Gropper and I wrote about pending federal regulations requiring providers to give patients access to their medical record in a format usable by mobile apps. This, we said, could “decisively disrupt medicine’s clinical and economic power structure.”

Indeed, the regulations provide “a base on which innovation can happen,” declared Elise Sweeney Anthony, executive director of the policy office of the Office of the National Coordinator for Health Information Technology, at one session.

But a base is only that. While Apple has already unveiled an app allowing people to see their health record on their iPhone, as yet there’s no “transformative business model” propelling hospitals to reach out to patients, said Julia Adler-Milstein, director of the Center for Clinical Informatics and Improvement Research at the University of California, San Francisco. Nor is there any indication from her research that many patients are interested.

“It’s still early days,” she added. 

Similarly, Fitbit and Google announced their intent to combine patient-generated health data with clinical information in the EHR well before Fitbit agreed to Google’s $2.1 billion takeover bid. However, researchers studying the implementation requirements for this type of integration see far more than a bit that doesn’t yet fit. 

One challenge for any app using patient-reported health data is standardizing symptom descriptions in a way that patients will understand yet that still yields clinically useful results. Not to mention concerns about data validity. (See: “Want to cheat your Fitbit? Try a puppy or power drill.”)

“It’s appropriate to have humility,” said Robert S. Rudin, a senior information scientist at RAND. He added, in language virtually identical to Adler-Milstein’s, “This is still early days.”

A major symposium theme was “proactive health care,” or using patients’ health data to prevent or ameliorate illness. One focus was screening patients for the hodgepodge of food, housing and other non-medical issues known as “social determinants of health” (SDOH). The process seems straightforward: ask patients about their circumstances, load the answers into a database and apply algorithmic analysis. Out pops guidance for addressing the social and economic factors that account for 40 percent of each individual’s health outcomes, compared to the 20 percent from clinical care.
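The screen-store-analyze pipeline described above can be sketched in a few lines. To be clear, the domains and the flagging rule here are illustrative assumptions, not a validated screening instrument:

```python
# Illustrative sketch of an SDOH screening step: collect yes/no answers
# and flag patients for follow-up. The four domains and the "any positive
# answer" rule are made-up placeholders, not a validated instrument.

SDOH_DOMAINS = ["food_insecurity", "housing_instability",
                "transportation", "utilities"]

def screen(answers: dict) -> dict:
    """Collect positive screens and flag the patient if any domain is positive."""
    positives = [d for d in SDOH_DOMAINS if answers.get(d, False)]
    return {"positive_domains": positives, "needs_followup": bool(positives)}

result = screen({"food_insecurity": True, "housing_instability": False})
print(result["needs_followup"])  # True
```

The simplicity is exactly the point the researchers were making: the code is trivial, but whether the questions are valid and whether acting on the flags improves outcomes is not.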

Once again, however, important elements remain unresolved. Are the questions valid? Can one trust patients’ recall? Does the whole process even improve outcomes? One recent analysis even warned that some “efforts could worsen health and widen health inequities.” 

“I’m not sure we’ve worked out these basic issues,” said Jessica Ancker, an associate professor in Weill Cornell Medicine’s division of health informatics.

Of course, academics have their biases (“Further research is needed”), just as entrepreneurs have theirs (“It’s not a bug, it’s a feature”). Not to mention humorist James Boren’s memorable advice to bureaucrats. As I’ve previously suggested, assembling a group of regulators, innovators and evidence-makers to talk candidly with each other might significantly accelerate digital health innovation.

For example, the Google and Ascension Health execs who launched the much-criticized “Project Nightingale” could have benefited from a blunt warning about big data from Lamiece Hassan, a health data research fellow at England’s University of Manchester.

“People have expectations about what information to share and how that information flows,” she said. “Just because the data are accessible doesn’t make it ethical.”

Michael L. Millenson is president of Health Quality Advisors LLC and adjunct associate professor of medicine at Northwestern University Feinberg School of Medicine. This article originally appeared on Forbes here.

]]>
What Google Isn’t Saying About Your Health Records https://thehealthcareblog.com/blog/2019/11/14/what-google-isnt-saying-about-your-health-records/ Thu, 14 Nov 2019 19:41:40 +0000 https://thehealthcareblog.com/?p=97033 Continue reading...]]>

By ADRIAN GROPPER, MD

Google’s semi-secret deal with Ascension is testing the limits of HIPAA as society grapples with the future impact of machine learning and artificial intelligence.

Glenn Cohen points out that HIPAA may not be keeping up with our methods of consent by patients and society on the ways personal data is used. Is prior consent, particularly consent from vulnerable patients seeking care, a good way to regulate secret commercial deals with their caregivers? The answer to a question is strongly influenced by how you ask it.

Here’s a short review of the current scandal and related ones. It also links to a recent deal between Mayo and Google, also semi-secret. A scholarly investigative journalism report of the Google AI scandal with a London NHS Foundation Trust in 2016 might be summarized as: the core issue is not consent; it is a conflict of interest at the very foundation of the information governance process. The foxes are guarding the patient data henhouse. When the secrecy of a deal is broken, a scandal ensues.

The parts of the Google-Ascension deal that are secret are likely designed to misdirect attention away from the intellectual property value of the business relationship.

HIPAA grants the hospital, the “covered entity,” the right to delegate certain functions to “business associates” that are presumed to be outsourced services that might otherwise be done by the covered entity itself. A good example of that would be a transcription service that converts a doctor’s dictation into text and just sends it back to the hospital. The assumption there, and core to the provider-centered HIPAA privacy model, is that the transcription service does not use the content of the patient record they are transcribing for their own purposes, such as selling the data to a third party. Sounds reasonable, but HIPAA is ancient in modern network and artificial intelligence computing terms.

Over more than two decades, the practices of business associates justified under HIPAA have drifted to seriously undermine the privacy interests of individual patients as well as society as a whole. One drift, about ten years ago, treats health information exchanges as HIPAA business associates. Now, a business associate can use the patient data in a way that was not entirely under the control of the covered entity or obvious to the patient. A recent example is the dispute between Surescripts as the HIPAA business associate and Amazon PillPack pharmacy as a HIPAA covered entity. The privacy issue in this case is that the patient has an open consented relationship with PillPack as their pharmacy but has no knowledge of how or why Surescripts is using their data for their own business reasons. Surescripts business practices are now under federal investigation, but their use of patient data without consent continues and the privacy aspect of this scandal will play out in the courts.

The next stage of HIPAA drift is machine learning for the benefit of the business associate, Google, in this case. This benefit might be monetized by selling trade secret medical advice to various hospitals and their patients. The privacy impact in this case is not to the individual patient of an Ascension hospital but to society as a whole. Until now, medicine has not been licensed as a trade secret. The advent of proprietary silos of medical science branded Mayo or Google is new and its impact on health care is hard to predict. What we do know is that patients, when asked, are reluctant to let their personal data be used for profit. Ascension is a nonprofit entity but Google is not. HIPAA is now being used to avoid informed consent for corporate data uses well beyond the patient’s relationship with their Ascension hospital. The public misdirection is driven by conflict of interest since all parties to the secret deal benefit and neither physicians nor patients are consulted.

The Ascension-Google deal bundles simple HIPAA business associate services like cloud computer hosting with less obvious machine learning technology that Google can sell outside of the Ascension relationship. Is Ascension getting a discount on the cloud hosting because of their contribution of patient data to Google’s future business? How much will Google charge a non-Ascension doctor or me as a patient for their medical record summary service? Will Google merge the machine learning from Ascension patients with the machine learning from Mayo patients? One thing seems sure, we’re expected to trust Google to not be evil because we’re certainly not being asked.

Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country. This post originally appeared on Bill of Health here.

]]>
Barbarians at the Gate https://thehealthcareblog.com/blog/2019/09/05/barbarians-at-the-gate/ https://thehealthcareblog.com/blog/2019/09/05/barbarians-at-the-gate/#comments Thu, 05 Sep 2019 12:58:01 +0000 https://thehealthcareblog.com/?p=96751 Continue reading...]]>

By ADRIAN GROPPER, MD

US healthcare is exceptional among rich economies. Exceptional in cost. Exceptional in disparities. Exceptional in the political power hospitals and other incumbents have amassed over decades of runaway healthcare exceptionalism. 

The latest front in healthcare exceptionalism is over who profits from patient records. Parallel articles in the NYTimes and THCB frame the issue as “barbarians at the gate” when the real issue is an obsolete health IT infrastructure and how ill-suited it is for the coming age of BigData and machine learning. Just check out the breathless announcement of “frictionless exchange” by Microsoft, AWS, Google, IBM, Salesforce and Oracle. Facebook already offers frictionless exchange. Frictionless exchange has come to mean that one data broker, like Facebook, adds value by aggregating personal data from many sources and then uses machine learning to find a customer, like Cambridge Analytica, that will use the predictive model to manipulate your behavior. How will the six data brokers in the announcement be different from Facebook?

The NYTimes article and the THCB post imply that we will know the barbarians when we see them and then rush to talk about the solutions. Aside from calls for new laws in Washington (weaken behavioral health privacy protections, preempt state privacy laws, reduce surprise medical bills, allow a national patient ID, treat data brokers as HIPAA covered entities, and maybe more) our leaders have to work with regulations (OCR, information blocking, etc…), standards (FHIR, OAuth, UMA), and best practices (Argonaut, SMART, CARIN Alliance, Patient Privacy Rights, etc…). I’m not going to discuss new laws in this post and will focus on practices under existing law.

Patient-directed access to health data is the future. This was made clear at the recent ONC Interoperability Forum as opened by Don Rucker and closed with a panel about the future. CARIN Alliance and Patient Privacy Rights are working to define patient-directed access in what might or might not be different ways. CARIN and PPR have no obvious differences when it comes to the data models and semantics associated with a patient-directed interface (API). PPR appreciates HL7 and CARIN efforts on the data models and semantics for both clinics and payers.

Consider the ongoing news about the data broker called Surescripts and the data processor called Amazon PillPack. The FTC is looking into whether Surescripts used its dominant data broker position illegally in restraint of trade. Surescripts, in a somewhat separate action, is claiming that barbarian PillPack is using patient consent to break down the gate it erected for its business purposes. From my patient perspective, does Surescripts have a right to aggregate my prescription history and then refuse me the ability to share that data with PillPack without special effort? 

The possible differences between CARIN and PPR pertain to how the barbarian is labeled and who maintains the registry or registries of the barbarians. The open questions for CARIN, PPR, and other would-be arbiters of barbary fall into four related categories:

1 – Labels Only

2 – Registries Only

  • For deployment efficiency, the apps and services may be listed in controlled registries. The app could be registered by the developer of the app or by the operator (including a physician) that wants to use the app. This option is relevant because apps might have options the operator can choose that would change the criteria for a particular registry. Will registries support submissions by developers, operators, or both?
  • Aside from labels, patients tend to infer reputation on the basis of metrics like the number of users and the number of reviews for an app. Do the registries list software operators along with the software vendors in order to promote transparency and competition?
  • Do the registries allow for public comment with or without moderation?

3 – Labels and Registries Combined

  • What should be the number of registries and would they require one or more of the available labels?
  • A typical app store policy is a low bar to enable maximum competition and reduce disputes over exclusion. Consumer rating bureaus, on the other hand, tend to issue stars or checkmarks in a handful of categories in order to reward excellence. Is our label and registry design aimed at establishing a low bar (“You must be this high to be a barbarian”) or promoting a “race to the top” (such as 0-5 stars in a few defined categories)?
  • To improve fairness and transparency, should the orgs that define labels be separate from the orgs that operate registries?

4 – “Without special effort”

  • Opening the gate to their own records is an established right for both the patient subject and any barbarian designated by the patient. Making this work “without special effort” requires implementation of standard dynamic client registration features that current gatekeepers have chosen to ignore. Should regulators mandate support for dynamic client registration, for any and all barbarians, as long as the app is only able to access the records of the individual patient exercising their right of access?
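Dynamic client registration is a standardized OAuth 2.0 mechanism (RFC 7591): the app POSTs its metadata to the authorization server’s registration endpoint and receives a client ID back, no manual gatekeeping required. A sketch of the registration payload such an app would send (the app name and redirect URI are hypothetical; the endpoint and response handling are omitted):

```python
import json

# Hypothetical sketch of the JSON body a patient-designated app would POST
# to an authorization server's registration endpoint under OAuth 2.0
# dynamic client registration (RFC 7591). The app name and redirect URI
# are placeholders; a real server would respond with a client_id.

def build_registration_payload(app_name: str, redirect_uri: str) -> str:
    """Build the RFC 7591 client-metadata document as a JSON string."""
    return json.dumps({
        "client_name": app_name,
        "redirect_uris": [redirect_uri],
        "grant_types": ["authorization_code"],
        "token_endpoint_auth_method": "none",  # public client, e.g. a mobile app
    })

payload = build_registration_payload("AliceRecordsApp", "https://app.example/cb")
print("client_name" in payload)  # True
```

The point of the bullet stands: the protocol machinery already exists; what is missing is gatekeepers’ willingness to turn it on.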

It seems that the definition of a barbarian is anyone who aims to get patient records under the current laws and the explicit direction of the patient. The opposite of barbarians, whoever they may be within the gates of HIPAA, are able to get patient records without consent or accounting for disclosures by asserting “Treatment, Payment, or Operations” as well as the pretense of de-identification. Meanwhile, these HIPAA non-barbarians are able to sell off the machine learning and other medical science teachings as “trade secret intellectual property” in the form of computer decision support and other for-profit algorithms. This hospital-led privatization of open medicine will contribute to the next round of US healthcare exceptionalism. 

And as for the patients, no worries; we’ll just tell them it’s about patient safety.

Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country.

]]>
Insights from a Verily Venture Investor on Health Data & Dollars https://thehealthcareblog.com/blog/2019/03/21/insights-from-a-verily-venture-investor-on-health-data-dollars/ Thu, 21 Mar 2019 21:48:44 +0000 https://thehealthcareblog.com/?p=96082 Continue reading...]]> By JESSICA DaMASSA, WTF Health

Google’s Verily has a $1 billion investment fund and a nearly limitless talent pool of data scientists and engineers at the ready. So, how are they planning to invest in a better future for health?

Luba Greenwood, Strategic Business Development & Corporate Ventures for Verily, told me how the tech giant is thinking about the big data opportunity in healthcare – and, more importantly, what they see as their role in helping scale it in unprecedented ways.

So, where should other health tech investors place their bets, then? Luba’s previous successes investing in digital health and health technology while at Roche (FlatIron, MySugr, etc.) give her a unique perspective on the ‘state-of-play’ in healthcare investment…but has the game changed now that she’s in another league at Verily? Listen in to find out.

Filmed at the Together.Health Spring Summit at HIMSS 2019 in Orlando, Florida, February 2019.

Get a glimpse of the future of healthcare by meeting the people who are going to change it. Find more WTF Health interviews here or check out www.wtf.health

]]>
How is Google’s Verily Thinking About Data, Investing, & Healthcare? | Luba Greenwood, Verily https://thehealthcareblog.com/blog/2019/02/25/how-is-googles-verily-thinking-about-data-investing-healthcare-luba-greenwood-verily/ Mon, 25 Feb 2019 19:37:41 +0000 https://thehealthcareblog.com/?p=96934 Continue reading...]]> By JESSICA DAMASSA, WTF HEALTH

Google’s Verily has a $1 billion investment fund and a nearly limitless talent pool of data scientists and engineers at the ready. So, how are they planning to invest in a better future for health? Luba Greenwood, Strategic Business Development & Corporate Ventures for Verily, explains how the tech giant is thinking about the big data opportunity in healthcare – and, more importantly, what they see as their role in helping scale it in unprecedented ways. Where should other health tech investors place their bets, then? Luba’s previous successes investing in digital health and health technology while at Roche give her a unique perspective on the ‘state-of-play’ in healthcare investment…but has the game changed now that she’s in another league at Verily? Listen in to find out!

Filmed at the Together.Health Spring Summit at HIMSS 2019 in Orlando, Florida, February 2019.

Jessica DaMassa is the host of the WTF Health show & stars in Health in 2 Point 00 with Matthew Holt.

Get a glimpse of the future of healthcare by meeting the people who are going to change it. Find more WTF Health interviews here or check out www.wtf.health

]]>