Categories

Category: Health Tech

THCB Gang Episode 125, Thursday June 8

Joining Matthew Holt (@boltyboy) on #THCBGang on Thursday June 8 at 1PM PT / 4PM ET are privacy expert Deven McGraw (@healthprivacy); Queen of employer benefits Jennifer Benz (@Jenbenz); THCB regular writer and ponderer of odd juxtapositions Kim Bellard (@kimbbellard); and policy expert consultant/author Rosemarie Day (@Rosemarie_Day1).

You can see the video below & if you’d rather listen than watch, the audio is preserved as a weekly podcast available on our iTunes & Spotify channels.

Interview with Dr Pamela Tenaerts, Medable

Pam Tenaerts is the Chief Scientific Officer of Medable, which went from being a small company building software that helped clinical researchers design their own experiments to being the big dog in remote clinical trials during the pandemic. Medable has raised over $500m in the past 3 years. Pam has a stellar research background, and this interview covers the gamut of how clinical trials work, which companies are involved, how remote (or hybrid) trials actually work, and what the likely outcome for clinical research will be. If you have any interest in understanding the state of play in pharma R&D, this is compulsory viewing–Matthew Holt

Matthew’s health care tidbits: Hedge Funds that Do Health Care on the Side

Each time I send out the THCB Reader, our newsletter that summarizes the best of THCB (Sign up here!) I include a brief tidbits section. Then I had the brainwave to add them to the blog. They’re short and usually not too sweet! –Matthew Holt

Lots of news about bad behavior in health care this week, with serious questions raised about patient & staff safety at home care company Papa, and Grail misinforming 400 people that they had cancer. But the prize for tone deafness this week goes to another very well funded health care provider system being heartless to its poorest patients.

This week it’s Allina, a Minnesota “nice” system which actually modified its Epic system so that clinicians literally could not book appointments or provide care to patients who owed Allina money. Clinicians on the sharp end of this were so appalled that they went on the record about their own employer to NY Times reporter Sarah Kliff. The most egregious example was a doctor unable to write a prescription for a kid with scabies (an infectious parasitic disease) who was sharing a bed with two other kids!

Of course, Allina is also on the low end of charity care provision (below 1% of revenues). In contrast, ten employees make more than $1m a year and another ten make more than $500,000.

We all know about egregious private equity funds investing in payday loans and other scummy outfits that prey on the poor. Turns out that if you let a non-profit hospital become beholden to its financial, rather than moral, north star, it starts to behave in a similar manner. Allina, of course, had a smidge under $4bn in its “investment reserve” at the end of 2021. It’s by no means special. UPMC has over $7bn in its reserves (unclear if this includes the investments it has made in startups), while Ascension has a formal private equity fund that controversially paid its former CEOs over $10m as part of its $18bn reserves.

Somehow, hedge funds that provide a little health care service on the side don’t leave the best taste in the mouth as a model for how we should be organizing this health care system.

THCB Gang Episode 124, Thursday June 1

Joining Matthew Holt (@boltyboy) on #THCBGang on Thursday June 1 at 1PM PT / 4PM ET were double trouble futurists Jeff Goldsmith and Ian Morrison (@seccurve), and delivery & platform expert Vince Kuraitis (@VinceKuraitis). Lots of discussion about Kaiser and Geisinger, and what this means for the future model of care delivery. Do incentives or professionalism matter more?

The video is below. If you’d rather listen to the episode, the audio is preserved from Friday as a weekly podcast available on our iTunes & Spotify channels.

Designing (Healthcare) via Roblox

BY KIM BELLARD

Here’s a question: what medical schools are incorporating Roblox into their curriculum?  

Interested readers can get back to me, but in the meantime I’m guessing none. At best, very few. And instead of “medical schools,” feel free to insert any kind of “healthcare institution/organization” that is interested in educating or training – which is to say, all of them. By way of contrast, I was intrigued by the collaboration between Roblox and The Parsons School of Design.

Perhaps you don’t know about Roblox, a creator platform whose vision is “to reimagine the way people come together to create, play, explore, learn, and connect with one another.”  As their website says: “We don’t make Roblox.  You do.” It claims to have almost 10 million developers using its platform, hosting some 50 million “experiences.”  

I first wrote about it in 2021, astonished that over half of American children used it, with some 37 million unique daily users. Today it has over 66 million unique daily users – some 214 million monthly active users. The vast majority of the users – as much as 80% – are under 16, a fact Roblox is acutely aware of and is seeking to change.

Continue reading…

The New Rules of Healthcare Platforms: APIs Enable the Platforming of Healthcare

BY VINCE KURAITIS, BRENDAN KEELER, and JODY RANCK

Recent regulations have mandated the use of HL7 FHIR APIs (application programming interfaces) to share health data. The regs apply to healthcare providers, payers, and technology developers who participate in federal programs. Many incumbent healthcare organizations are viewing these mandates as a compliance burden. That’s short-sighted. We recommend a more opportunistic POV.

APIs facilitate the sharing of health data across different devices and platforms. By adopting APIs, healthcare organizations can transform themselves from traditional service providers into powerful platforms that can connect patients, providers, and other stakeholders in new and innovative ways.
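
To make the mechanics concrete, here is a minimal sketch of what reading data through a FHIR REST API looks like. It is written in Python using the `requests` library; the base URL points at the public HAPI FHIR test sandbox, and the patient ID and LOINC code are placeholders chosen for illustration – none of these are specified by the regulations discussed here. A production integration with an EHR or payer endpoint would also need SMART on FHIR (OAuth2) authorization, which is omitted for brevity.

```python
import requests

# Public HAPI FHIR R4 test sandbox -- an illustrative assumption, not an
# endpoint mandated by the rules discussed above. A real EHR or payer
# endpoint would also require a SMART on FHIR (OAuth2) access token.
FHIR_BASE = "https://hapi.fhir.org/baseR4"


def get_patient(patient_id: str) -> dict:
    """Fetch a single Patient resource as FHIR JSON."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def search_observations(patient_id: str, loinc_code: str) -> list[dict]:
    """Search a patient's Observations for a given LOINC code (e.g. 8867-4, heart rate)."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": loinc_code},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # FHIR searches return a Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]


if __name__ == "__main__":
    # "example" is a placeholder ID; any patient ID on the sandbox works.
    patient = get_patient("example")
    print(patient.get("resourceType"), patient.get("id"))
```

The same resource types (Patient, Observation, Condition, and so on) and the same RESTful verbs work against any certified FHIR R4 endpoint, which is what makes platform-style connections between patients, providers, and other stakeholders possible.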

This blog post is the fourth in the series on The New Rules of Healthcare Platforms. In this essay, we explore the many benefits of API adoption for healthcare organizations and the key considerations that must be taken into account when implementing APIs:

  • Healthcare’s Data Inflection Point
  • APIs Enable Platform Business Models
  • Barriers, Challenges, Reality Check

Healthcare’s Data Inflection Point

Compared to other industries, healthcare generates a disproportionately large amount of data. According to RBC Capital Markets, “30% of the world’s data volume is being generated by the healthcare industry. By 2025, the compound annual growth rate of data for healthcare will reach 36%. That’s 6% faster than manufacturing, 10% faster than financial services, and 11% faster than media & entertainment.”

Over the past 15 years, new regulations have driven digitization, data interoperability, and data sharing. The goal of these regulations has been to liberate patient data that has previously been unstructured and trapped in data silos. Venture capitalist Kahini Shah summarized these regulatory efforts in her article entitled Healthcare Data APIs – An Upcoming Multi-Billion Dollar Market?:

Recent regulation is forcing digitization, aggregation and transmission of medical records. Congress passed the HITECH Act in 2009, prompting the adoption of electronic health records. Before that, medical records were paper-based. Healthcare data is incredibly siloed: every American sees an average of 19 providers in their lifetime. Connecting these disparate electronic systems and having them exchange information is called interoperability. In 2020, HHS and CMS implemented two rules that mandate patient access to their medical records and interoperability. These transformative rules give patients the right to access their data when they need it and make it available via APIs. The interoperability rules state that there is no blocking – EHRs must allow data to be shared easily across different systems owned by different vendors.

Continue reading…

Can AI Part The Red Sea?

BY MIKE MAGEE

A few weeks ago New York Times columnist Tom Friedman wrote, “We Are Opening The Lid On Two Giant Pandora’s Boxes.” He was referring to 1) artificial intelligence (AI), which most agree has the potential to go horribly wrong unless carefully regulated, and 2) global warming leading to water-mediated flooding, drought, and vast human and planetary destruction.

Friedman argues that we must accept the risk of pursuing one (rapid fire progress in AI) to potentially uncover a solution to the other. But positioning science as savior quite misses the point that it is human behavior (a combination of greed and willful ignorance), rather than lack of scientific acumen, that has placed our planet and her inhabitants at risk.

The short- and long-term effects of fossil fuels and the carbonization of our environment were well understood before Al Gore took “An Inconvenient Truth” on the road in 2006. So were the confounding factors, including population growth, urbanization, and surface water degradation.

When I first published “Healthy Waters,” the global population was 6.5 billion, with 49% urban, mostly situated on coastal plains. It is now 8 billion, with 57% urban, and is slated to reach 8.5 billion by 2030, with 63% urban. Some 552 cities around the globe now contain populations exceeding 1 million citizens.

Under ideal circumstances, this urban migration could serve our human populations with jobs, clean air and water, transportation, housing and education, health care, safety and security. Without investment, however, it could be a death trap.

Continue reading…

AI is Bright, But Can Also Be Dark

BY KIM BELLARD

If you’ve been following artificial intelligence (AI) lately – and you should be – then you may have started thinking about how it’s going to change the world. In terms of its potential impact on society, it’s been compared to the introduction of the Internet, the invention of the printing press, even the first use of the wheel. Maybe you’ve played with it, maybe you know enough to worry about what it might mean for your job, but one thing you shouldn’t ignore: like any technology, it can be used for both good and bad.  

If you thought cyberattacks/cybercrimes were bad when done by humans or simple bots, just wait to see what AI can do. And, as Ryan Heath wrote in Axios, “AI can also weaponize modern medicine against the same people it sets out to cure.”

We may need DarkBERT, and the Dark Web, to help protect us.

A new study showed how AI can create much more effective, cheaper spear phishing campaigns, and its author notes that the campaigns can also use “convincing voice clones of individuals.” He adds: “By engaging in natural language dialog with targets, AI agents can lull victims into a false sense of trust and familiarity prior to launching attacks.”

Continue reading…

Asking Bard And ChatGPT To Find The Best Medical Care, I Got Truth And Truthiness

BY MICHAEL MILLENSON

If you ask ChatGPT how many procedures a certain surgeon does or what a specific hospital’s infection rate is, the OpenAI and Microsoft chatbot inevitably replies with some version of “I don’t do that.”

But depending upon how you ask, Google’s Bard provides a very different response, even recommending a “consultation” with particular clinicians.

Bard told me how many knee replacement surgeries were performed by major Chicago hospitals in 2021, their infection rates and the national average. It even told me which Chicago surgeon does the most knee surgeries and his infection rate. When I asked about heart bypass surgery, Bard provided both the mortality rate for some local hospitals and the national average for comparison. While sometimes Bard cited itself as the information source, beginning its response with, “According to my knowledge,” other times it referenced well-known and respected organizations.

There was just one problem. As Google itself warns, “Bard is experimental…so double-check information in Bard’s responses.” When I followed that advice, truth began to blend indistinguishably with “truthiness” – comedian Stephen Colbert’s memorable term to describe information that’s seen as true not because of supporting facts, but because it “feels” true.

Continue reading…