Tag: Kim Bellard

Would You Picket Over AI?

By KIM BELLARD

I’m paying close attention to the strike by the Writers Guild of America (WGA), which represents “Hollywood” writers. Oh, sure, I’m worried about the impact on my viewing habits, and I know the strike is really, as usual, about money, but what got my attention is that it’s the first strike I’m aware of in which the impact of AI on the members’ jobs is one of the key issues.

It may or may not be the first time, but it’s certainly not going to be the last.

The WGA included this in their demands: “Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.” I.e., if something – a script, treatment, outline, or even story idea – warrants a writing credit, it must come from a writer.  A human writer, that is.

John August, a screenwriter on the WGA negotiating committee, explained to The New York Times: “A terrible case of like, ‘Oh, I read through your scripts, I didn’t like the scene, so I had ChatGPT rewrite the scene’ — that’s the nightmare scenario.”

The studios, as represented by the Alliance of Motion Picture and Television Producers (AMPTP), agree there is an issue: “AI raises hard, important creative and legal questions for everyone.” It wants both sides to continue to study the issue, but noted that under the current agreement only a human can be considered a writer.

Still, though, we’ve all seen examples of AI generating remarkably plausible content. “If you have a connection to the internet, you have consumed AI-generated content,” Jonathan Greenglass, a tech investor, told The Washington Post. “It’s already here.” It’s easy to imagine some producer feeding an AI a bunch of scripts from prior installments to come up with the next Star Wars, Marvel universe, or Fast and Furious release. Would you really know the difference?

Sure, maybe AI won’t produce a Citizen Kane or The Godfather. As Alissa Wilkinson wrote in Vox: “But here is the thing: Cheap imitations of good things are what power the entertainment industry. Audiences have shown themselves more than happy to gobble up the same dreck over and over.”

Continue reading…

A Life Well Lived, Fights Well Fought

By KIM BELLARD

I first became aware of Casey Quinlan in 2017, when she published an article in Tincture, which I was helping to edit.  In it, she discussed how she’d had her medical history and advance directive tattooed on her chest, out of frustration with the lack of health information exchange in healthcare.  As she said, “ALL. THOSE. FUCKING. FORMS. ON. CLIPBOARDS.”

Well, I thought: she sounds like an interesting person. 

I started following her on Twitter, enjoying her outspokenness and agreeing with many of her points of view. Then, early in the pandemic, Matthew Holt started the THCB Gang podcast, and I got to participate in many episodes with her as a co-panelist. It was sometimes hard to get a word in edgewise, but when she was on, we always knew it was going to be an extra-lively session. And the stories she could tell…

I never met Casey IRL.  I never worked with her. I never even had a one-on-one conversation with her, unless you count Twitter replies.  There are large parts of her life that I don’t know anything about.  But, boy, the force of her personality, the strength of her will, the sharpness of her intellect, and the fearlessness of her spirit were always clear. 

She fought her cancer as fiercely as she lived her life generally.  We knew the end was inevitable, but it nonetheless was hard to imagine.  There have been outpourings of support on Twitter, on CaringBridge, and elsewhere. I have to mention in particular the efforts of Jan Oldenburg, who was there with her near the end and also took on the various bureaucracies on Casey’s behalf when Casey was no longer able to. 

Casey’s passing is a loss to her friends, her followers, and the patient community at large.  And to those of us who got to know her even a little bit. 

Worms Aren’t So Dumb

By KIM BELLARD

Chances are, you’ve read about AI lately. Maybe you’ve even tried DALL-E or ChatGPT, or even GPT-4. Perhaps you can use the term Large Language Model (LLM) with some degree of confidence. But chances are also good that you haven’t heard of “liquid neural networks,” and don’t get the worm reference above.

That’s the thing about artificial intelligence: it’s evolving faster than we are. Whatever you think you know is probably already out of date.

Liquid neural networks were first introduced in 2020. The authors wrote: “We introduce a new class of time-continuous recurrent neural network models.” They based the networks on the brain of a tiny roundworm, Caenorhabditis elegans. The goal was networks that could change “on the fly,” adapting to unfamiliar circumstances.
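
To make the “time-continuous” part concrete, here is a toy, single-neuron sketch in Python of the liquid time-constant idea. It is not the authors’ implementation: the Euler integration, the sigmoid gate, and every parameter value are invented for illustration. The key point is that the input itself modulates the neuron’s effective time constant, which is what lets the dynamics change on the fly.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def ltc_step(x, I, dt=0.05, tau=1.0, A=1.0, w=2.0, b=0.0):
        # One Euler step of a single liquid time-constant neuron:
        #   dx/dt = -(1/tau + f(I)) * x + f(I) * A
        # The bounded gate f depends on the input, so the effective
        # time constant shifts as the input shifts.
        f = sigmoid(w * I + b)
        dx = -(1.0 / tau + f) * x + f * A
        return x + dt * dx

    x = 0.0
    for t in range(200):
        x = ltc_step(x, I=np.sin(0.1 * t))  # toy time-varying input
    print(f"state after rollout: {x:.3f}")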

Researchers at MIT’s CSAIL have shown some significant progress.  A new paper in Science Robotics discussed how they created “robust flight navigation agents” using liquid neural networks to autonomously pilot drones. They claim that these networks are “causal and adapt to changing conditions,” and that their “experiments showed that this level of robustness in decision-making is exclusive to liquid networks.”  

Continue reading…

I Have Some Silly Questions

By KIM BELLARD

Last year I used some of Alfred North Whitehead’s pithy quotations to talk about healthcare, starting with the provocative “It is the business of the future to be dangerous.”  I want to revisit another of his quotations that I’d like to spend more time on: “The silly question is the first intimation of some totally new development.” 

I can’t promise that I even have intimations of what the totally new developments are going to be, but if any industry lends itself to asking “silly” questions, it is healthcare. Hopefully I can at least spark some thought and discussion.

In no particular order:

Why do we prefer to spend money on care once people are no longer healthy rather than on keeping them healthy?

The U.S. healthcare system is well known for being exorbitantly expensive while delivering rather mediocre results. Everyone laments it, but we keep throwing more money into the system that produces those results.

We’d be smarter to invest in upstream spending. Like making sure people get enough to eat, with foods that are good for us. We’d rather spend money on diabetes or obesity drugs than address the root causes of each disease. Or like making sure the water we drink, the air we breathe, and the things we eat aren’t polluted (how many toxins or microplastics have you ingested today?). Not to mention reducing poverty, improving education, or fixing social media.

We know the kinds of things we should do, we say we want to do them, but we lack the political will to achieve them and the infrastructure to ensure them.  So we end up paying for our neglect through our ever-more expensive healthcare system.  That’s silly.

Why is everything in healthcare so expensive? 

Continue reading…

Implementation May Be a Science, But, Alas, Medicine Remains an Art

By KIM BELLARD

I’ve been working in healthcare for over forty (!) years now, in one form or another, but it wasn’t until this past week that I heard of implementation science.  Which, in a way, is sort of the problem healthcare has. 

Granted, I’m not a doctor or other clinician, but everyone working in healthcare should be aware of, and thinking a lot about, “the scientific study of methods to promote the systematic uptake of research findings and other EBPs into routine practice, and, hence, to improve the quality and effectiveness of health services” (Bauer et al.).

It took a JAMA article, by Rita Rubin, to alert me to this intriguing science: It Takes an Average of 17 Years for Evidence to Change Practice—the Burgeoning Field of Implementation Science Seeks to Speed Things Up.

It turns out that implementation science is nothing new. There has been a journal devoted to it (cleverly named Implementation Science) since 2006, along with the newer Implementation Science Communications. Both focus on articles that illustrate “methods to promote the uptake of research findings into routine healthcare in clinical, organizational, or policy contexts.”

Brian Mittman, Ph.D., has stated that the aims of implementation science are:

  • “To generate reliable strategies for improving health-related processes and outcomes and to facilitate the widespread adoption of these strategies.
  • To produce insights and generalizable knowledge regarding implementation processes, barriers, facilitators, and strategies.
  • To develop, test, and refine implementation theories and hypotheses, methods, and measures.”

Dr. Mittman distinguished it from quality improvement largely on the grounds that QI focuses primarily on local problems, whereas “the goal of implementation science is to develop generalizable knowledge.”

Ms. Rubin’s headline highlights the problem healthcare has: it can take an alarmingly long time for empirical research findings to be incorporated into standard medical practice. There is some dispute about whether the 17-year figure is accurate, but it is widely accepted that, whatever the actual number is, it is much too long. Even then, Ms. Rubin reminds us, it is further estimated that only 1 in 5 interventions ever makes it to routine clinical care.

Continue reading…

I Have No Mouth, Yet Still I Scream

By KIM BELLARD

In light of the recent open letter from AI leaders calling for a moratorium on AI development, I’m declaring a temporary moratorium on writing about it too, although I doubt either one will last long (and this week’s title is, if you hadn’t noticed, an homage to Harlan Ellison’s classic dystopian AI short story). Instead, this week I want to write about plants. Specifically, the new research that suggests that plants can, in their own way, scream.

Bear with me.

To be fair, the researchers don’t use the word “scream”; they talk about “ultrasonic airborne sounds,” but just about every account of the research I saw used the more provocative term. It has long been known that plants are far from passive, responding to stimuli in their environment with changes in color, smell, and shape, but these researchers “show that stressed plants emit airborne sounds that can be recorded from a distance and classified.” Moreover, they posit: “These informative sounds may also be detectable by other organisms.”

It should make you wonder what your houseplant is saying about you when you forget to water it or get a cat.  

They basically tortured – what else would you call it? – plants with a variety of stresses, then used machine learning (damn – I guess I am writing about AI after all) to classify, with up to 70% accuracy, different categories of responses, such as too much water versus too little.  Even plants that have been cut, and thus are dying, can still produce the sounds, at least for short periods.  They speculate that other plants, as well as insects, may be able to “hear” and respond to the sounds.
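
For a sense of what that classification step involves, here is a minimal Python sketch. Everything in it (the fabricated “acoustic features,” the stress labels, and the choice of a random forest) is invented for illustration; the study’s actual recording pipeline and models are more involved.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def fake_features(stress):
        # Stand-in for features (e.g., spectral statistics) extracted
        # from an ultrasonic recording; entirely simulated here.
        base = {"drought": 0.8, "cut": 0.4, "control": 0.0}[stress]
        return base + rng.normal(0, 0.5, size=8)

    labels = ["drought", "cut", "control"] * 100
    X = np.array([fake_features(s) for s in labels])
    y = np.array(labels)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean accuracy: {scores.mean():.2f}")  # cf. the ~70% reported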

Continue reading…

AI: Not Ready, Not Set – Go!

By KIM BELLARD

I feel like I’ve written about AI a lot lately, but there’s so much happening in the field. I can’t keep up with the various leading entrants or their impressive successes, but three essays on the implications of what we’re seeing struck me: Bill Gates’ The Age of AI Has Begun, Thomas Friedman’s Our New Promethean Moment, and You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills by Yuval Harari, Tristan Harris, and Aza Raskin.  All three essays speculate that we’re at one of the big technological turning points in human history.

We’re not ready.

The subtitle of Mr. Gates’ piece states: “Artificial intelligence is as revolutionary as mobile phones and the Internet.” Similarly, Mr. Friedman recounts what former Microsoft executive Craig Mundie recently told him: “You need to understand, this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”    

Mr. Gates elaborates:

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Mr. Friedman is similarly awed:

This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

Professor Harari and colleagues are more worried than awed, warning: “A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts.”  Transformational isn’t always beneficial.

Continue reading…

We’re Disrupting Disruption

By KIM BELLARD

The Sunday Times featured an op-ed by Mark Britnell, a professor at the UCL Global Business School for Health, with the headline Our creaking NHS can’t beat its admin chaos without a tech revolution. Substitute “U.S. healthcare system” for “NHS” and the headline would still work, as would most of the content.

I wouldn’t hold my breath about that tech revolution. In fact, if you’re waiting for disruptive innovation in healthcare, or more generally, you may be in for a long wait.

A new study in Nature argues that science is becoming less disruptive. That seems counterintuitive; it often feels like we’re living in a golden age of scientific discoveries and technological innovations. But the authors are firm in their finding: “we report a marked decline in disruptive science and technology over time.” 

The authors looked at data from 45 million scientific papers and 3.9 million patents, going back six decades. Their primary method of analysis is something called a CD Index, which looks at how papers influence subsequent citations. Essentially, the more disruptive a paper is, the more subsequent work cites the paper itself rather than the earlier work it built on.
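
For the curious, here is a small Python sketch of the idea behind the measure, as I understand it; the paper IDs are made up and this simplifies the published methodology. A later paper that cites the focal work without its predecessors pushes the score toward +1 (disruptive); one that cites both pushes it toward -1 (consolidating).

    def cd_index(cites_focal, cites_refs):
        # cites_focal: papers citing the focal paper
        # cites_refs:  papers citing the focal paper's references
        citing = cites_focal | cites_refs
        if not citing:
            return 0.0
        total = 0
        for p in citing:
            f = 1 if p in cites_focal else 0  # cites the focal work
            b = 1 if p in cites_refs else 0   # cites its predecessors
            total += -2 * f * b + f           # +1 focal only, -1 both
        return total / len(citing)

    # Toy example: A-C cite only the focal paper; D cites both; E cites
    # only the references. The score leans disruptive.
    print(cd_index({"A", "B", "C", "D"}, {"D", "E"}))  # 0.4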

The results are surprising, and disturbing. “Across fields, we find that science and technology are becoming less disruptive,” the authors found, “…relative to earlier eras, recent papers and patents do less to push science and technology in new directions.” The declines appeared in all the fields studied (life sciences and biomedicine, physical sciences, technology, and social sciences), although rates of decline varied slightly.  

The authors also looked at how language changed, such as the introduction of new words and the use of words that connote creation or discovery versus words like “improve” or “enhance.” The results were consistent with the CD Index results.

“Overall,” they say, “our results suggest that slowing rates of disruption may reflect a fundamental shift in the nature of science and technology.”

“The data suggest something is changing,” co-author Russell Funk, a sociologist at the University of Minnesota in Minneapolis, told Nature. “You don’t have quite the same intensity of breakthrough discoveries you once had.”

Continue reading…

Searching For The Next Search

By KIM BELLARD

I didn’t write about ChatGPT when it was first introduced a month ago because, well, it seemed like everyone else was. I didn’t play with it to see what it could do.  I didn’t want it to write any poems. I didn’t have any AP tests I wanted it to pass. And, for all you know, I’m not using it to write this. But when The New York Times reports that Google sees ChatGPT as a “Code Red” for its search business, that got my attention.

A few months ago I wrote about how Google saw TikTok as an existential threat to its business, estimating that 40% of young people used it for searches. It was a different kind of search, mind you, with video results instead of links, but that’s what made it scary – because it didn’t just incrementally improve “traditional” search, as Google had done to Lycos or AltaVista, it potentially changed what “search” was.

TikTok may well still do that (although it is facing existential issues of its own), but ChatGPT could pose an even greater threat. Why get a bunch of search results that you still have to investigate when you could just ask ChatGPT to tell you exactly what you want to know?

Look, I like Google as much as anyone, but the prospect that its massive dominance of the search engine market could, in the near future, suddenly come to an end gives me hope for healthcare.  If Google isn’t safe in search, no company is safe in any industry, healthcare included.

Continue reading…

Netflix for Drugs?

By KIM BELLARD

A relative — obviously overestimating my healthcare expertise — asked my thoughts on The New York Times article Can a Federally Funded ‘Netflix Model’ Fix the Broken Market for Antibiotics? I had previously skimmed the article and was vaguely aware of the Pasteur Act that it discusses, but, honestly, my immediate reaction to the article was, gosh, that may not be a great analogy: do people realize what a tough year Netflix has had?

I have to admit that I tend to stay away from writing about Big Pharma and prescription drugs, mainly because, in a US healthcare system that seems to pride itself on being opaque, frustrating, and yet outrageously expensive, the prescription drug industry takes the cake. It’s too much of a mess.

But a “Netflix model” for drug development? Consider me intrigued.

It’s easy to understand why market forces might not do well with rare diseases that need an “orphan drug,” but the “subscription model” approach that the Pasteur Act seeks to address is something that most of us need: antibiotics. Antibiotic resistance has made many of our front-line antibiotics less effective, but discovering new antibiotics is a slow, expensive process, and many pharmaceutical companies are reluctant to take the risk. The Pasteur Act would essentially pay for their development in return for “free” use of subsequently invented drugs.

Continue reading…