BY KIM BELLARD
I was at the barbershop the other day and overheard one barber talking with his senior citizen customer about when – not if – robot AIs would become barbers. I kid you not.
Now, I don’t usually expect to hear conversations about technology at the barbershop, but it illustrates that we are, I think, at the point with AI that we were with the Internet in the late ’90s/early ’00s: people’s lives were just starting to change because of it, new companies were jumping in with ideas about how to use it, and existing companies knew they were going to have to figure out ways to incorporate it if they wanted to survive. Lots of missteps and false starts, but clearly a tidal wave that could be ignored only at one’s own risk. So now it is with AI.
I’ve been pleased that healthcare has been paying attention, probably sooner than it acknowledged the Internet. Every day, it seems, there are new developments about how various kinds of AI are showing usefulness, or potential usefulness, in healthcare, in a wide variety of ways. There’s lots of informed discussion about how it will be best used and where the limits will be, but as a long-time observer of our healthcare system, I think we’re not talking enough about two crucial questions. Namely:
- Who will get paid?
- Who will get sued?
Now, let me clarify that these questions are clearer in some cases than in others. For example, when AI assists in drug discovery, pharma can produce more drugs and make more money; when it assists health insurers with claims processing or prior authorizations, the result is administrative savings that go straight to the bottom line. No, the tricky part is using AI in actual health care delivery, such as in a doctor’s office or a hospital.
Payment
There has been some cautious optimism that AI can help with diagnosis and suggested treatments. It can analyze more data, read and understand more studies, and apply more uniform logic in making such decisions. It has shown its value, for example, in diagnosing dementia, heart attacks, lung cancer, and pancreatic cancer. Earlier and more accurate diagnoses should lead to better outcomes for patients.
The trouble is, in our health care system, no one gets paid – at least, to any great extent – for better outcomes or even for earlier diagnoses. Arguably, if those result in less care, some health care professional or institution is going to get less money. Like it or not, when it comes to payment, our healthcare system is built around doing more, not doing better.
Well, maybe those quicker, more accurate diagnoses will lead to physicians being able to see more patients, increasing their throughput and thus revenue. Again, though, no one that I know of is advocating that doctors see more patients; there’s pretty widespread agreement that doctors already see too many patients, which has adversely impacted the doctor-patient relationship.
So when a physician or health care organization evaluates how to apply AI and does a cost/benefit analysis, it’s a little hard to see where the economic benefit comes in.
Well, wait; what about helping physicians with all the paperwork, all that “pajama time” they spend on administrative tasks? Well, yes, there is some evidence that AI can help with this, but again, as Rod Tidwell told Jerry Maguire: show me the money. Giving physicians back some of their personal time might help reduce burnout and improve their quality of life – both laudable goals – but that doesn’t directly lead to more revenue. A good use of AI, but who gets paid for implementing it?
Payment will really become an issue when – as with barbers, not “if” – AIs start seeing patients directly. A single instance of an AI could see thousands, perhaps millions, of patients simultaneously, delivering those earlier, more accurate diagnoses. Perhaps they’ll just triage, but it will radically change the health care landscape. But who will get paid for those visits, and how much?
Would the AI itself get the payment (which leads to a whole rabbit hole of personhood and licensure questions), the healthcare organization that (presumably) deployed it, or even the AI developer? In any event, if we base AI payment on what a human doctor might receive, we’d be grossly overpaying; at best the “costs” are the marginal costs of an almost infinitesimal amount of the AI’s time.
For all those reasons and more, we’ll need a new paradigm for payment.
Liability
Let’s concede right away that our current liability system in healthcare is terrible. It doesn’t identify most errors or incompetence, doesn’t compensate most patients injured by the care they receive, doesn’t punish most of the healthcare professionals and institutions giving harmful care, and probably over-rewards some/many of the few patients it does help. Now throw AI into that mix.
As long as human doctors retain final say about care, even if assisted by AI, they’re probably going to be stuck with any resulting liability. That will quickly become problematic as it gets harder for them to understand why an AI makes a given recommendation (the infamous “black box” problem).
They will quickly seek to push the liability onto the AI developers, much as they might for other software or for medical equipment, but that line will be hard to draw as the AI “learns” from its instantiation in a particular healthcare practice or organization. Neither that organization nor the AI developer is going to be keen to accept the liability.
In the world I ultimately expect, where AI acts on its own, at least to some extent, one would expect the AI to bear liability for its actions, but that presumes the AI has assets and is an entity that can be sued, neither of which is likely to be true anytime soon.
So, if anything, as it stands AI is likely to further muddy an already muddled healthcare liability system. Boy, that should speed adoption, right?
For all those reasons and more, we’ll need a new paradigm for liability.
———-
Healthcare is supposed to be about caring for people, making their lives better by improving their health (or, at least, reducing their suffering). Most healthcare professionals and institutions pay at least lip service to this, but the hard truth of it is that, especially in the U.S., healthcare is a business. As such, AI is going to face slow going in healthcare until we grapple with key business issues like payment and liability.
AI is going to be ready for healthcare long before healthcare is going to be ready for AI.
Kim is a former emarketing exec at a major Blues plan, editor of the late & lamented Tincture.io, and now a regular THCB contributor.