Tag: AI

Would You Picket Over AI?

By KIM BELLARD

I’m paying close attention to the strike by the Writers Guild of America (WGA), which represents “Hollywood” writers. Oh, sure, I’m worried about the impact on my viewing habits, and I know the strike is really, as usual, about money, but what got my attention is that it’s the first strike I’m aware of in which the impact of AI on their jobs is one of the key issues.

It may or may not be the first time, but it’s certainly not going to be the last.

The WGA included this in their demands: “Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.” I.e., if something – a script, treatment, outline, or even story idea – warrants a writing credit, it must come from a writer.  A human writer, that is.

John August, a screenwriter who is on the WGA negotiating committee, explained to The New York Times: “A terrible case of like, ‘Oh, I read through your scripts, I didn’t like the scene, so I had ChatGPT rewrite the scene’ — that’s the nightmare scenario.”

The studios, represented by the Alliance of Motion Picture and Television Producers (AMPTP), agree there is an issue: “AI raises hard, important creative and legal questions for everyone.” The AMPTP wants both sides to continue studying the issue, but noted that under the current agreement only a human can be considered a writer.

Still, though, we’ve all seen examples of AI generating remarkably plausible content. “If you have a connection to the internet, you have consumed AI-generated content,” Jonathan Greenglass, a tech investor, told The Washington Post. “It’s already here.” It’s easy to imagine some producer feeding an AI a bunch of scripts from prior installments to come up with the next Star Wars, Marvel universe, or Fast and Furious release. Would you really know the difference?

Sure, maybe AI won’t produce a Citizen Kane or The Godfather. As Alissa Wilkinson wrote in Vox, though: “But here is the thing: Cheap imitations of good things are what power the entertainment industry. Audiences have shown themselves more than happy to gobble up the same dreck over and over.”

Continue reading…

Can we trust ChatGPT to get the basics right?

by MATTHEW HOLT

Eric Topol has a piece today in his excellent newsletter Ground Truths about AI in medicine. He refers to the paper he and colleagues wrote in Nature about Generalist Medical Artificial Intelligence (GMAI, the medical version of GAI). It’s more on the latest in LLMs (Large Language Models). They differ from previous AI, which was essentially focused on one problem at a time – and in medicine that mostly meant radiology. Now you can feed in different types of information and get lots of different answers out.

Eric & colleagues concluded their paper with this statement: “Ultimately, GMAI promises unprecedented possibilities for healthcare, supporting clinicians amid a range of essential tasks, overcoming communication barriers, making high-quality care more widely accessible, and reducing the administrative burden on clinicians to allow them to spend more time with patients.” But he does note that “there are striking liabilities and challenges that have to be dealt with. The ‘hallucinations’ (aka fabrications or BS) are a major issue, along with bias, misinformation, lack of validation in prospective clinical trials, privacy and security and deep concerns about regulatory issues.”

What he’s saying is that there are unexplained errors in LLMs, and that we therefore need a human in the loop to make sure the AI isn’t getting stuff wrong. I had a striking example of this myself, on a topic that required nothing more than simple calculation over a well-published set of facts: I asked ChatGPT (version 3, not 4) about the historical performance of the stock market. Apparently ChatGPT can pass the medical exams required to become a doctor, but had it answered a clinical question with the level of accuracy it showed here, I would be extremely concerned!
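
Part of what makes this example so telling is that historical market performance is pure arithmetic over published index values, so the claim is easy to verify. Here is a minimal sketch of the kind of check a human in the loop can run – the index values are hypothetical, chosen only for illustration, not the actual numbers from the video below:

```python
# A minimal human-in-the-loop check for an LLM's claim about long-run market
# returns. The index values are hypothetical, for illustration only.
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate implied by a start value, an end value,
    and the number of years between them."""
    return (end_value / start_value) ** (1 / years) - 1

# e.g., an index that rose from 1,000 to 4,000 over 20 years
print(f"{cagr(1_000, 4_000, 20):.2%} per year")  # ~7.18% annualized
```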

The brief video of my use of ChatGPT for stock market “research” is below:

THCB Spotlight: Glen Tullman, Transcarent & Aneesh Chopra, CareJourney

No THCB Gang today because my kid is in the hospital (minor planned surgery), so instead I am reposting this great interview from last week.

I just got to interview Glen Tullman, CEO of Transcarent (and formerly CEO of Livongo & Allscripts), & Aneesh Chopra, CEO of CareJourney (and formerly CTO of the US). The trigger for the interview is a new partnership between the two companies, but the conversation was really about what’s happening with health care in the US, including how the customer experience needs to change, what level of data and information is available about providers and how that is changing, how AI is going to change data analytics, and what is actually happening with Medicare Advantage. This is a fascinating discussion with two real leaders in health and health tech – Matthew Holt

AI: Not Ready, Not Set – Go!

By KIM BELLARD

I feel like I’ve written about AI a lot lately, but there’s so much happening in the field. I can’t keep up with the various leading entrants or their impressive successes, but three essays on the implications of what we’re seeing struck me: Bill Gates’ The Age of AI Has Begun, Thomas Friedman’s Our New Promethean Moment, and You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills by Yuval Harari, Tristan Harris, and Aza Raskin.  All three essays speculate that we’re at one of the big technological turning points in human history.

We’re not ready.

The subtitle of Mr. Gates’ piece states: “Artificial intelligence is as revolutionary as mobile phones and the Internet.” Similarly, Mr. Friedman recounts what former Microsoft executive Craig Mundie recently told him: “You need to understand, this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”    

Mr. Gates elaborates:

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Mr. Friedman is similarly awed:

This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

Professor Harari and colleagues are more worried than awed, warning: “A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts.”  Transformational isn’t always beneficial.

Continue reading…

Searching For The Next Search

By KIM BELLARD

I didn’t write about ChatGPT when it was first introduced a month ago because, well, it seemed like everyone else was. I didn’t play with it to see what it could do.  I didn’t want it to write any poems. I didn’t have any AP tests I wanted it to pass. And, for all you know, I’m not using it to write this. But when The New York Times reports that Google sees ChatGPT as a “Code Red” for its search business, that got my attention.

A few months ago I wrote about how Google saw TikTok as an existential threat to its business, with estimates that 40% of young people used it for searches. It was a different kind of search, mind you, with video results instead of links, but that’s what made it scary: it didn’t just incrementally improve “traditional” search, as Google had done to Lycos or AltaVista; it potentially changed what “search” was.

TikTok may well still do that (although it is facing existential issues of its own), but ChatGPT could pose an even greater threat. Why get a bunch of search results that you still have to investigate when you could just ask ChatGPT to tell you exactly what you want to know?

Look, I like Google as much as anyone, but the prospect that its massive dominance of the search engine market could, in the near future, suddenly come to an end gives me hope for healthcare.  If Google isn’t safe in search, no company is safe in any industry, healthcare included.

Continue reading…

OK, trying more AI to write this

by MATTHEW HOLT

In the far-off halcyon days of last month, I used an AI system from Lex.page to write something about digital health. In the past few days the world has gone nuts about a new AI author called ChatGPT, so I thought I would compare and contrast.

Last month Lex.page was asked about Digital Health and it said:

Digital Health is one of the most rapidly growing and important sectors in health care today. It is a broad term that encompasses the use of technology to improve health and health care.

Today I asked ChatGPT the same thing. In fact I just wrote “Digital Health” as the Q, and this was the A:

Digital health, also known as telehealth, is the use of digital technologies to deliver healthcare services remotely. This includes the use of video conferencing, mobile health apps, remote patient monitoring, and other digital tools to provide healthcare services to patients. Digital health can be used to improve access to primary care, reduce costs, and improve patient outcomes.

OK, pretty much a longer version of the same thing but with more details. So then I went BACK to Lex.page to ask it again, though this time I had to phrase it as “What is Digital Health?”

Digital health is a term used to describe the use of technology to improve healthcare. It covers everything from diagnostic tools and monitoring devices to electronic medical records and healthcare analytics. It’s an exciting and rapidly evolving area that has the potential to revolutionize the delivery of healthcare.

So in a month Lex.page has gotten a lot better!

Last month Lex.page said in response to my question/statement, “The biggest challenges for primary care are”

-Access to care

-Cost of care

-Quality of care

So it seems that digital health can help with all of those things. By providing tools that help patients better self-manage their health, digital health can help to improve access to care. By providing tools that help to lower the cost of care, digital health can help to make care more affordable. And by providing tools that help to improve the quality of care, digital health can help to make

I put the same sentence into ChatGPT, and at the bottom it told me: “Warning: Your text ends in a trailing space, which causes worse performance due to how the API splits text into tokens.” (More on that warning at the end of this post.) But I asked it anyway and it gave me:

1. Accessibility: Ensuring that everyone has access to quality primary care services.

I surmised that it had more answers, so I kept hitting submit! It got all the way to answer 12 with no new question!

2. Cost: Finding ways to make primary care services more affordable and accessible to all.

3. Workforce: Recruiting and retaining a sufficient number of primary care providers to meet the needs of the population.

4. Technology: Incorporating new technologies into primary care to improve efficiency and patient outcomes.

5. Coordination: Improving the coordination of care between primary care providers and other healthcare professionals.

6. Quality: Ensuring that primary care services are of the highest quality and meet the needs of patients.

7. Education: Educating patients about the importance of preventive care and the value of primary care services.

8. Mental Health: Integrating mental health services into primary care to better address the needs of patients.

9. Population Health: Developing strategies to improve the health of the population as a whole.

10. Data: Utilizing data to inform decision-making and improve patient outcomes.

11. Payment: Developing new payment models that incentivize quality care and reduce costs.

12. Social Determinants of Health: Addressing the social and environmental factors that influence health outcomes.

OK, so those are all separate statements, but it’s pretty damn impressive. How is Lex.page doing? Well, I asked its chatbot the same statement and it now answered:

The biggest challenges for primary care are likely to be related to access and cost. In many areas of the world, access to quality primary care is limited by geography, poverty, or both. Cost is also an issue, especially in countries with expensive healthcare systems. Additionally, many primary care providers struggle to effectively manage chronic conditions, such as diabetes and hypertension, due to a lack of resources, funding, and staff. Finally, the rapid advancements in technology have presented a challenge in terms of staying up to date with the latest medical developments.

That’s not only pretty smart but it’s quite an advance on the answer it gave me to the same vague question/statement just 31 days ago.

We are seeing a technology emerge right before our eyes.

Presumably the next step is to set it to read some of my favorite pieces on THCB and just have it write for me! Then I will be happily replaced by a robot!
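
A footnote on that trailing-space warning: BPE tokenizers of the kind behind GPT-3 normally attach a leading space to the word that follows it, so a prompt ending in a bare space ends on a standalone space token the model rarely saw in training. A quick sketch using the open-source tiktoken tokenizer – my illustration, not part of the original experiment – shows the difference:

```python
# Illustration (assumes the open-source `tiktoken` package): why a trailing
# space gets its own, unusual token. BPE tokenizers normally glue a leading
# space onto the *next* word rather than leaving it on its own.
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # the GPT-2/GPT-3-era BPE vocabulary
print(enc.encode("The biggest challenges for primary care are"))
print(enc.encode("The biggest challenges for primary care are "))
# The second list ends with a lone space token (id 220), which is the
# out-of-distribution ending the API's warning is complaining about.
```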

Go Ahead, AI—Surprise Us

By KIM BELLARD

Last week I was on a fun podcast with a bunch of people who were, as usual, smarter than me, and, in particular, more knowledgeable about one of my favorite topics – artificial intelligence (A.I.), particularly for healthcare. With the WHO releasing its “first global report” on A.I. – Ethics & Governance of Artificial Intelligence for Health – and with no shortage of other experts weighing in recently, it seemed like a good time to revisit the topic.

My prediction: it’s not going to work out quite like we expect, and it probably shouldn’t. 

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” Dr. Tedros Adhanom Ghebreyesus, WHO Director-General, said in a statement. He’s right on both counts.

WHO’s proposed six principles are:

  • Protecting human autonomy
  • Promoting human well-being and safety and the public interest
  • Ensuring transparency, explainability and intelligibility 
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity 
  • Promoting AI that is responsive and sustainable

All valid points, but, as we’re already learning, easier to propose than to ensure.  Just ask Timnit Gebru.  When it comes to using new technologies, we’re not so good about thinking through their implications, much less ensuring that everyone benefits.  We’re more of a “let the genie out of the bottle and see what happens” kind of species, and I hope our future AI overlords don’t laugh too much about that. 

As Stacey Higginbotham asks in IEEE Spectrum, “how do we know if a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins?…we have no idea how to make it work for society’s goals, rather than a company’s, or an individual’s.”   She further notes that “we haven’t even established what those benefits should be.”

Continue reading…

Docs are ROCs: a simple fix for a “methodologically indefensible” practice in medical AI studies

By LUKE OAKDEN-RAYNER

Anyone who has read my blog or tweets before has probably seen that I have issues with some of the common methods used to analyse the performance of medical machine learning models. In particular, the most commonly reported metrics we use (sensitivity, specificity, F1, accuracy and so on) all systematically underestimate human performance in head-to-head comparisons against AI models.

This makes AI look better than it is, and may be partially responsible for the “implementation gap” that everyone is so concerned about.

I’ve just posted a preprint on arXiv titled “Docs are ROCs: A simple off-the-shelf approach for estimating average human performance in diagnostic studies”, which provides what I think is a solid solution to this problem, and I thought I would explain it in some detail here.

Disclaimer: not peer reviewed, content subject to change 


A (con)vexing problem

When we compare machine learning models to humans, we have a bit of a problem. Which humans?

In medical tasks, we typically take the doctor who currently does the task (for example, a radiologist identifying cancer on a CT scan) as proxy for the standard of clinical practice. But doctors aren’t a monolithic group who all give the same answers. Inter-reader variability typically ranges from 15% to 50%, depending on the task. Thus, we usually take as many doctors as we can find and then try to summarise their performance (this is called a multi-reader multicase study, MRMC for short).

Since the metrics we care most about in medicine are sensitivity and specificity, many papers have reported the averages of these values. In fact, a recent systematic review showed that over 70% of medical AI studies that compared humans to AI models reported these values. This makes a lot of sense. We want to know how the average doctor performs at the task, so the average performance on these metrics should be great, right?
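
Not quite. The usual explanation is geometric: each reader sits at a different operating point on the ROC curve, and because ROC curves are concave, averaging the readers’ (sensitivity, specificity) pairs yields a point strictly inside the curve. A minimal simulation – my sketch with assumed parameters, not code from the paper – shows the effect:

```python
# Sketch: readers who share one underlying ROC curve but operate at different
# thresholds. Averaging their (FPR, TPR) pairs lands *inside* the curve, so
# the "average doctor" looks worse than the curve every reader sits on.
import numpy as np
from scipy.stats import norm

a, b = 1.5, 1.0  # assumed binormal ROC parameters: TPR = Phi(a + b*Phi^-1(FPR))

def tpr_at(fpr):
    return norm.cdf(a + b * norm.ppf(fpr))

# Five hypothetical readers at different false positive rates (thresholds)
reader_fpr = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
reader_tpr = tpr_at(reader_fpr)

mean_fpr, mean_tpr = reader_fpr.mean(), reader_tpr.mean()
print(f"Averaged operating point: TPR={mean_tpr:.3f} at FPR={mean_fpr:.3f}")
print(f"Shared curve at that FPR: TPR={tpr_at(mean_fpr):.3f}")
# ~0.71 vs ~0.79: the naive average understates the performance level that
# every individual reader actually achieves on the common curve.
```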

Continue reading…

Will AI-Based Automation Replace Basic Primary Care? Should It?

By KEN TERRY

In a recent podcast about the future of telehealth, Lyle Berkowitz, MD, a technology consultant, entrepreneur, and professor at Northwestern University’s Feinberg School of Medicine, confidently predicted that, because of telehealth and clinical automation, “In 10-20 years, we won’t need primary care physicians [for routine care]. The remaining PCPs will specialize in caring for complicated patients. Other than that, if people need care, they’ll go to NPs or PAs or receive automated care with the help of AI.”

Berkowitz isn’t the first to make this kind of prediction. Back in 2013, when mobile health was just starting to take hold, a trio of experts from the Scripps Translational Science Institute—Eric Topol, MD, Steven R. Steinhubl, MD, and Evan D. Muse, MD—wrote a JAMA Commentary arguing that, because of mHealth, physicians would eventually see patients far less often for minor acute problems and follow-up visits than they did then.

Many acute conditions diagnosed and treated in ambulatory care offices, they argued, could be addressed through novel technologies. For example, otitis media might be diagnosed using a smartphone-based otoscope, and urinary tract infections might be assessed using at-home urinalysis. Remote monitoring with digital blood pressure cuffs could be used to improve blood pressure control, so that patients would only have to visit their physicians occasionally.

Continue reading…

Trying to Make AI Less Squirrelly

By KIM BELLARD

You may have missed it, but the Association for the Advancement of Artificial Intelligence (AAAI) just announced its first annual Squirrel AI award winner: Regina Barzilay, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).   In fact, if you’re like me, you may have missed that there was a Squirrel AI award.  But there is, and it’s kind of a big deal, especially for healthcare – as Professor Barzilay’s work illustrates. 

The Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity (Squirrel AI is a China-based, AI-powered “adaptive education provider”) “recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects.” The award carries a prize of $1,000,000, about the same as a Nobel Prize.

Yolanda Gil, a past president of AAAI, explained the rationale for the new award: “What we wanted to do with the award is to put out to the public that if we treat AI with fear, then we may not pursue the benefits that AI is having for people.”

Dr. Barzilay has impressive credentials, including a MacArthur Fellowship.   Her expertise is in natural language processing (NLP) and machine learning, and she focused her interests on healthcare following a breast cancer diagnosis.  “It was the end of 2014, January 2015, I just came back with a totally new vision about the goals of my research and technology development,” she told The Wall Street Journal. “And from there, I was trying to do something tangible, to change the diagnostics and treatment of breast cancer.”

Continue reading…