Category: Health Tech

The Times They Are A-Changing….Fast

By KIM BELLARD

If you have been following my Twitter – oops, I mean “X” – feed lately, you may have noticed that I’ve been emphasizing The Coming Wave, the new book from Mustafa Suleyman (with Michael Bhaskar). If you have not yet read it, or at least ordered it, I urge you to do so, because, frankly, our lives are not going to be the same, at all.  And we’re woefully unprepared.

One thing I especially appreciated is that, although he made his reputation in artificial intelligence, Mr. Suleyman doesn’t only focus on AI. He also discusses synthetic biology, quantum computing, robotics, and new energy technologies as ones that stand to radically change our lives.  What they have in common is that they have hugely asymmetric impacts, they display hyper-evolution, they are often omni-use, and they increasingly demonstrate autonomy. 

In other words, these technologies can do things we didn’t know they could do, have impacts we didn’t expect (and may not want), and may decide what to do on their own.  

To build an AI, at least for the near future, one needs a significant amount of computing power, specialized chips, and a large amount of data; with synthetic biology, the technology is getting to the point where someone can set up a lab in their garage and experiment away. AI can spread rapidly, but it needs a connected device; engineered organisms can get anywhere there is air or water.

“A pandemic virus synthesized anywhere will spread everywhere,” MIT’s Kevin Esvelt told Axios.

I’ve been fascinated with synthetic biology for some time now, and yet I still think we’re not paying enough attention. “For me, the most exciting thing about synthetic biology is finding or seeing unique ways that living organisms can solve a problem,” David Riglar, Sir Henry Dale research fellow at Imperial College London, told The Scientist. “This offers us opportunities to do things that would otherwise be impossible with non-living alternatives.”

Jim Collins, Termeer professor of medical engineering and science at Massachusetts Institute of Technology (MIT), added: “By approaching biology as an engineering discipline, we are now beginning to create programmable medicines and diagnostic tools with the ability to sense and dynamically respond to information in our bodies.”

For example, researchers just reported on a smart pill — the size of a blueberry! — that can be used to automatically detect key biological molecules in the gut that suggest problems, and wirelessly transmit the information in real time. 

Continue reading…

Shiv Rao, CEO demos Abridge

Abridge has been trying to document the clinical encounter automatically since 2018. There’s been quite a lot of fuss about them in recent weeks. They announced becoming the first “Pal” on the Epic “Partners & Pals” program, and also that their AI-based encounter capture technology was now being used at several hospitals. And they showed up in a NY Times article about tech being used for clinical documentation. But of course they’re not the only company trying to turn the messy speech in a clinician/patient encounter into a buttoned-up clinical note. Suki, Augmedix & Robin all come to mind, while the elephant is Nuance, which has itself been swallowed by the whale that is Microsoft.

But having used their consumer version a few years back and been a little disappointed, I wanted to see what all the fuss was about. CEO Shiv Rao was a real sport and took me through a clinical example with him as the doc and me as a (slightly) fictionalized patient. He also patiently explained where the company was coming from and what their roadmap was. But they are all in on AI–no offshore typists making corrections in near real time here.

And you’ll for sure want to see the demo. (If you want to skip the chat, the demo runs from about 8:00 to 16:50.) And I think you’ll be very impressed indeed. I know I was. I can’t imagine a doctor not wanting this, and I suspect those armies of scribes will soon be able to go back to real work! — Matthew Holt

Smells like AI Spirit

By KIM BELLARD

There are so many exciting developments in artificial intelligence (AI) these days that one almost becomes numb to them. Then along comes something that makes me think, hmm, I didn’t see that coming.

For example, AI can now smell.

Strictly speaking, that’s not quite true, at least not in the way humans and other creatures smell.  There’s no olfactory organ, like our nose or a snake’s tongue. What AI has been trained to do is to look at a molecular structure and predict what it would smell like.

If you’re wondering (as I certainly did when I heard AI could smell), AI has also started to crack taste, with food and beverage companies already using AI to help develop new flavors, among other things. AI can even reportedly “taste wine” with 95% accuracy. It seems human senses really aren’t as human-only as we’d thought.

The new research comes from the Monell Chemical Senses Center and Osmo, a Google spin-off. It’s a logical pairing since Monell’s mission is “to improve health and well-being by advancing the scientific understanding of taste, smell, and related senses,” and Osmo seeks to give “computers a sense of smell.” More importantly, Osmo’s goal in doing that is: “Digitizing smell to give everyone a shot at a better life.”

Osmo CEO Alex Wiltschko, PhD says: “Computers have been able to digitize vision and hearing, but not smell – our deepest and oldest sense.” It’s easy to understand how vision and hearing can be translated into electrical and, ultimately, digital signals; we’ve been doing that for some time. Smell (and taste) seem somehow different; they seem chemical, not electrical, much less digital. But the Osmo team believes: “In this new era, computers will generate smells like we generate images and sounds today.”

I’m not sure I can yet imagine what that would be like.

The research team used an industry dataset of 5,000 known odorants, and matched molecular structures to perceived scents, creating what Osmo calls the Principal Odor Map (POM). This model was then used to train the AI. Once trained, the AI outperformed humans in identifying new odors.

The model depends on the correlation between the molecules and the smells perceived by the study’s panelists, who were trained to recognize 55 odors. “Our confidence in this model can only be as good as our confidence in the data we used to test it,” said co-first author Emily Mayhew, PhD. Senior co-author Joel Mainland, PhD, admitted: “The tricky thing about talking about how the model is doing is we have no objective truth.”
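
For readers who want to picture what “matching molecular structures to perceived scents” looks like in practice, here is a minimal sketch of the general idea: molecular structure in, odor descriptors out. It is an illustration only, not the study’s actual pipeline; it swaps in a Morgan-fingerprint-plus-random-forest model (via RDKit and scikit-learn) for whatever the team really trained, and the molecules and odor labels here are invented for the demo.

```python
# Toy sketch (not the study's actual model): predict odor descriptors
# from molecular structure. Molecules and labels below are illustrative.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

ODOR_LABELS = ["fruity", "floral", "minty"]  # the real panel rated ~55 descriptors

# (SMILES string, multi-hot odor labels) -- hypothetical training examples
train = [
    ("CC(=O)OCC", [1, 0, 0]),             # ethyl acetate: fruity
    ("CC(C)CCO", [0, 1, 0]),              # isoamyl alcohol: tagged floral here for illustration
    ("CC1CCC(C(C)C)C(=O)C1", [0, 0, 1]),  # a menthone-like ring: minty
]

def featurize(smiles: str) -> np.ndarray:
    """Convert a SMILES string into a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

X = np.stack([featurize(smiles) for smiles, _ in train])
y = np.array([labels for _, labels in train])

# Multi-label classification: each odor descriptor is predicted independently.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a molecule the model has never seen (ethyl propanoate, also illustrative).
probe = featurize("CCOC(=O)CC").reshape(1, -1)
print(dict(zip(ODOR_LABELS, model.predict(probe)[0])))
```

The real study’s edge comes from far more data and a richer learned representation; the point of the sketch is simply the shape of the problem, which is also why the quality of the panelists’ labels matters so much.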

The study resulted in a different way to think about smell. The Monell Center says:

The team surmises that the model map may be organized based on metabolism, which would be a fundamental shift in how scientists think about odors. In other words, odors that are close to each other on the map, or perceptually similar, are also more likely to be metabolically related. Sensory scientists currently organize molecules the way a chemist would, for example, asking does it have an ester or an aromatic ring?

“Our brains don’t organize odors in this way,” said Dr. Mainland. “Instead, this map suggests that our brains may organize odors according to the nutrients from which they derive.”

“This paper is a milestone in predicting scent from chemical structure of odorants,” Michael Schmuker, a professor of neural computation at the University of Hertfordshire who was not involved in the study, told IEEE Spectrum.  It might, he says, lead to possibilities like sharing smells over the Internet. 

Think about that. 

“We hope this map will be useful to researchers in chemistry, olfactory neuroscience, and psychophysics as a new tool for investigating the nature of olfactory sensation,” said Dr. Mainland. He further noted: “The most surprising result, however, is that the model succeeded at olfactory tasks it was not trained to do. The eye-opener was that we never trained it to learn odor strength, but it could nonetheless make accurate predictions.”

Next up on the team’s agenda is to see if the AI can learn to recognize mixtures of odors, which exponentially increases the number of resulting smells. Osmo also wants to see if AI can predict smells from chemical sensor readings, rather than from molecular structures that have already been digitized. And, “can we digitize a scent in one place and time, and then faithfully replicate it in another?”

That’s a very ambitious agenda.

Dr. Wiltschko claims: “Our model performs over 3x better than the standard scent ingredient discovery process used by major fragrance houses, and is fully automated.” One can imagine how this would be useful to those houses. Osmo wants to work with the fragrance industry to create safer products: “If we can make the fragrances we use every day safer and more potent (so we use less of them), we’ll help the health of everyone, and also the environment.”

When I first read about the study, I immediately thought of how dogs can detect cancers by smell, and how exciting it might be if AI could improve on that. Frankly, I’m not much interested in designing better fragrances; if we’re going to spend money on training AI to recognize molecules, I’d rather it be spent on designing new drugs than new fragrances.

Fortunately, Osmo has much the same idea. Dr. Wiltschko writes:

If we can build on our insights to develop systems capable of replicating what our nose, or what a dog’s nose can do (smell diseases!), we can spot disease early, prevent food waste, capture powerful memories, and more. If computers could do these kinds of things, people would live longer lives – full stop. Digitizing scent could catalyze the transformation of scent from something people see as ephemeral to enduring.   

Now, that’s the kind of innovation that I’m hoping for.

Skeptics will say, well, AI isn’t really smelling anything, it’s just acting as though it does. That is, there’s no perception, just prediction. One could make the same argument about AI taste, or vision, or hearing, not to mention thinking itself. But at some point, as the saying goes, if it looks like a duck, swims like a duck, and quacks like a duck, it’s probably a duck. At some point in the not-so-distant future, AI is going to have senses similar to and perhaps much better than our own.

As Dr. Wiltschko hopes: “If computers could do these kinds of things, people would live longer lives – full stop.”

Kim is a former emarketing exec at a major Blues plan, editor of the late & lamented Tincture.io, and now regular THCB contributor.

THCB Gang Episode 133, Thursday August 17

Joining Matthew Holt (@boltyboy) on #THCBGang on Thursday August 17 at 1pm PST 4pm EST are futurist Jeff Goldsmith; medical historian Mike Magee (@drmikemagee); policy expert consultant/author Rosemarie Day (@Rosemarie_Day1); and patient safety expert and all around wit Michael Millenson (@mlmillenson).

You can see the video below & if you’d rather listen than watch, the audio is preserved as a weekly podcast available on our iTunes & Spotify channels.

THCB 20th Birthday Classic: As I’ve always suspected, Health Care = Communism + Frappuccinos

By MATTHEW HOLT

Our 20th birthday continues with a few classics coming out. Back in 2005 I was really cutting a lyrical rug, and would never miss a chance to put that Cambridge training in Marxism to use. This essay about whether health care should be a public or private good has always been one of my favorites, even if I’m not sure Starbucks is still making Frappuccinos. And 18 years later the basic point of this essay remains true, even if many of you will not have a clue what Vioxx was or who Halliburton was, or why they mattered back then!

Those of you who think I’m an unreconstructed commie will correctly suspect that I’ve always discussed Marxism in my health care talks. You’d be amazed at how many audiences of hospital administrators in the Midwest know nothing about the integral essentials of Marx’s theory of history. And I really enjoy bringing the light to them, especially when I manage to reference Mongolia 1919, managed care and Communism in the same bullet point.

While I’ve always been very proud of that one (er… maybe you had to be there, but you could always hire me to come tell it!), even if I am jesting, there’s a really loose use of the concept of Marxism in this 2005 piece (reprinted in 2009) called A Prescription for Marxism, in Foreign Policy, from the (apparently) libertarian-leaning Harvard professor Kenneth Rogoff. He opens with this little nugget:

“Karl Marx may have suffered a second death at the end of the last century, but look for a spirited comeback in this one. The next great battle between socialism and capitalism will be waged over human health and life expectancy. As rich countries grow richer, and as healthcare technology continues to improve, people will spend ever growing shares of their income on living longer and healthier lives.”

Actually, he’s right that there will be a backlash against the (allegedly) market-based capitalism — which has actually been closer to all-out mercantilist booty capitalism — that we’ve seen over the last couple of decades. History tends to be reactive and societies go through long periods of reaction to what’s been seen before. In fact the 1980-20?? (10-15?) period of “conservatism” is a reaction to the 1930-1980 period of social corporatism seen in most of the western world. And any period in which the inequality of wealth and income in one society continues to grow at the current rate will eventually invite a reaction–you can ask Louis XVI of France about that.

But when Rogoff is talking about Marxism in health care what he really means is that, because health care by definition will consume more and more of our societal resources, the arguments about the creation and distribution of health care products and services will look more like the arguments seen in the debates about how the government used to allocate resources for “guns versus butter” in the 1950s. These days we are supposed to believe that government blindly accepts letting “the market” rule, even if for vast swaths of the economy the government clearly rules the market, which in turn means that those corporations with political influence set the rules and the budgets (quick now, it begins with an H…).

Continue reading…

What Robotaxis Mean for Healthcare

BY KIM BELLARD

You may have seen that last week the California Public Utilities Commission (CPUC) gave approval for two companies to operate self-driving taxicabs (“robotaxis”) in San Francisco, available 24/7 and able to charge fares.  Think Uber or Lyft but without drivers. 

It has seemed inevitable for several years now, yet we’re not really ready.  It reminds me, of course, of how the future is coming fast for healthcare too, especially around artificial intelligence, and we’re not really ready for that either.

The two companies, Cruise (owned by GM) and Waymo (owned by Alphabet), have been testing the service for some time, under certain restrictions, and this approval loosens (but does not completely remove) the restrictions. The approval was not without controversy; indeed, the San Francisco police and fire departments, among others, opposed it. “They are failing to regulate a dangerous, nascent industry,” said Justin Kloczko, a tech and privacy advocate for consumer protection non-profit Consumer Watchdog.

The companies brag about their record of no fatalities, but the San Francisco Municipal Transportation Agency has collected almost 600 “incidents” involving autonomous vehicles, even with what they believe is very incomplete reporting.  “While we do not yet have the data to judge AVs against the standard human drivers are setting,” CPUC Commissioner John Reynolds admitted, “I do believe in the potential of this technology to increase safety on the roadway.”

I’m willing to stipulate that autonomous vehicle technology is not quite there yet, especially when mostly surrounded by human-driven vehicles, but I also have great confidence that we’ll get there quickly, and that it will radically change not just our driving but also our desire for owning vehicles. 

One of the most thoughtful discussions I’ve seen on the topic is from David Zipper in The Atlantic. He posits:

A century ago, the U.S. began rearranging its cities to accommodate the most futuristic vehicles of the era, privately owned automobiles—making decisions that have undermined urban life ever since. Robotaxis could prove equally transformative, which makes proceeding with caution all the more necessary.

Continue reading…

Happy 20th Birthday THCB

Hard to believe, but 20 years ago (Aug 12, 2003) I started writing THCB! Somehow, 20 years later, it’s still here. Lots of changes over the years. Hundreds of people have written for THCB, thousands have been interviewed on it, and we’ve made a little dent in the world of health care.

Next week we will run some new articles, new interviews and re-run a selection of the greatest hits….

THCB Spotlight: Dexcare CEO, Derek Streat

According to their press release, “Dexcare is a care-access platform to manage the logistics of digital-care delivery. The platform enables healthcare systems to forecast and predict demand and manage how and where care is merchandized to consumers – throughout the digital ecosystem”. What does that mean? How does it compare to a bunch of other digital health companies trying to manage consumer operations inside providers? And having been incubated not that long ago at Providence, how has this demand generation and management service grown so fast? And why has Iconiq Growth just pushed another $75m worth of chips onto the poker table in front of them?

Derek Streat has been around digital health for a while, having founded and sold an early Health 2.0 favorite, Medify. I took him through his market and what Dexcare does in a lot of detail, so hopefully you’ll find this look very educational, not only about Dexcare but also about the consumer market environment health systems are operating in. – Matthew Holt

THCB Gang Episode 132, Thursday July 27

Joining Matthew Holt (@boltyboy) on #THCBGang on Thursday July 27 at 1pm PST 4pm EST are Olympic rower for 2 countries and DiME CEO Jennifer Goldsack (@GoldsackJen); patient advocate Robin Farmanfarmaian (@Robinff3); Kim Bellard (@kimbbellard); and medical historian Mike Magee (@drmikemagee).

You can see the video below & if you’d rather listen than watch, the audio is preserved as a weekly podcast available on our iTunes & Spotify channels.