
Tag: Artificial intelligence

AI is Bright, But Can Also Be Dark

BY KIM BELLARD

If you’ve been following artificial intelligence (AI) lately – and you should be – then you may have started thinking about how it’s going to change the world. In terms of its potential impact on society, it’s been compared to the introduction of the Internet, the invention of the printing press, even the first use of the wheel. Maybe you’ve played with it, maybe you know enough to worry about what it might mean for your job, but one thing you shouldn’t ignore: like any technology, it can be used for both good and bad.  

If you thought cyberattacks/cybercrimes were bad when done by humans or simple bots, just wait until you see what AI can do.  And, as Ryan Heath wrote in Axios, “AI can also weaponize modern medicine against the same people it sets out to cure.”

We may need DarkBERT, and the Dark Web, to help protect us.

A new study showed how AI can create much more effective, cheaper spear phishing campaigns, and the author notes that these campaigns can also use “convincing voice clones of individuals”: “By engaging in natural language dialog with targets, AI agents can lull victims into a false sense of trust and familiarity prior to launching attacks.”

Continue reading…

OI May Be The Next AI

In the past few months, artificial intelligence (AI) has suddenly seemed to come of age, with “generative AI” showing that AI was capable of being creative in ways that we thought were uniquely human.  Whether it is writing, taking tests, creating art, inventing things, making convincing deepfake videos, or conducting searches on your behalf, AI is proving its potential.  Even healthcare has figured out a surprising number of uses.

It’s fun to speculate about which AI – ChatGPT, Bard, DeepMind, Sydney, etc. – will prove “best,” but it turns out that “AI” as we’ve known it may become outdated.  Welcome to “organoid intelligence” (OI).

————

I’d been vaguely aware of researchers working with lab-grown brain cells, but I was caught off-guard when Johns Hopkins University researchers announced organoid intelligence (a term they coined) as “the new frontier in biocomputing and intelligence-in-a-dish.”  Their goal: 

…we present a collaborative program to implement the vision of a multidisciplinary field of OI. This aims to establish OI as a form of genuine biological computing that harnesses brain organoids using scientific and bioengineering advances in an ethically responsible manner.

Continue reading…

AI are (going to be) people too

BY KIM BELLARD

My heart says I should write about Uvalde, but my head says, not yet; there are others more able to do that.  I’ll reserve my sorrow, my outrage, and any hopes I still have for the next election cycle.  

Instead, I’m turning to a topic that has long fascinated me: when and how are we going to recognize when artificial intelligence (AI) becomes, if not human, then a “person”?  Maybe even a doctor.

Continue reading…

DALL-E, Draw an AI Doctor

BY KIM BELLARD

I can’t believe I somehow missed when OpenAI introduced DALL-E in January 2021 – a neural network that could “generate images from text descriptions” — so I’m sure not going to miss now that OpenAI has unveiled DALL-E 2.  As they describe it, “DALL-E 2 is a new AI system that can create realistic images and art from a description in natural language.”  The name, by the way, is a playful combination of the animated robot WALL-E and the idiosyncratic artist Salvador Dalí.

This is not your father’s AI.  If you think it’s just about art, think again.  If you think it doesn’t matter for healthcare, well, you’ve been warned.

Here are further descriptions of what OpenAI is claiming:

“DALL·E 2 can create original, realistic images and art from a text description. It can combine concepts, attributes, and styles.

DALL·E 2 can make realistic edits to existing images from a natural language caption. It can add and remove elements while taking shadows, reflections, and textures into account.

DALL·E 2 can take an image and create different variations of it inspired by the original.”

Here’s their video:

I’ll leave it to others to explain exactly how it does all that, aside from saying it uses a process called diffusion, “which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.”  The end result is that, relative to DALL-E, DALL-E 2 “generates more realistic and accurate images with 4x greater resolution.”  
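The denoising idea behind diffusion can be sketched in a few lines. This is a toy illustration only – a real diffusion model learns its denoising step from millions of images, whereas here the “model” simply nudges random noise toward a known target, and every name and number below is made up for illustration:

```python
import random

def toy_reverse_diffusion(target, steps=100, strength=0.1):
    """Toy sketch of diffusion sampling: start from pure noise and
    repeatedly remove a little of it, moving toward structure.
    (Real models predict the denoising direction; here we cheat and
    use a known target so the loop is self-contained.)"""
    sample = [random.gauss(0, 1) for _ in target]  # pattern of random dots
    for _ in range(steps):
        # each step alters the pattern slightly toward the "image"
        sample = [s + strength * (t - s) for s, t in zip(sample, target)]
    return sample

target = [0.0, 0.5, 1.0, 0.5, 0.0]  # stand-in for a row of pixel values
out = toy_reverse_diffusion(target)
```

After enough steps the sample is indistinguishable from the target – the same gradual noise-to-image trajectory the quote describes, just without a learned model steering it.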

Continue reading…

Health Care Organizations Must Prioritize Cybersecurity Before Undergoing Digital Transformation

By TRAVIS GOOD

The health care industry is rapidly embracing new technologies. Covid-19 changed the way many industries operate, and healthcare is one industry that was particularly affected by the pandemic. Many health care organizations were already undergoing digital transformations, but Covid exponentially sped up those processes. Health care providers and health-tech companies were forced to adapt to the new normal and change the way they operate. Here are 3 major ways health care has changed in recent times. 

1. Increased popularity of telehealth services:

Covid made telehealth appointments a necessity, but even in a post-Covid world virtual visits are likely to remain a core component of modern healthcare. According to McKinsey, telehealth utilization was 78 times higher in April 2020 than in February 2020, and in 2021 it remained nearly 40 times pre-pandemic levels.

Research shows that both patients and physicians are fans of telehealth. Many patients prefer the convenience of being able to speak to their doctor from home and physicians feel that offering telemedicine allows them to operate more efficiently. Phone and video-based medical appointments became mainstream in 2020, and they are unlikely to go away anytime soon. 

2. More wearable medical devices with connected ecosystems:

The number of wearable medical devices in use has skyrocketed over the past 5 years. The wearable medical device market is expected to reach $23 billion in 2023, a major increase from $8 billion in 2017. Gadgets like heart rate sensors, oxygen meters, and exercise trackers are all becoming increasingly popular. Many popular consumer products such as cell phones and smartwatches ship with built-in medical tracking technology.

Continue reading…

It’s complicated. A deep dive into the Viz/Medicare AI reimbursement model.

By LUKE OAKDEN-RAYNER

In the last post I wrote about the recent decision by CMS to reimburse a Viz.AI stroke detection model through Medicare/Medicaid. I briefly explained how this funding model will work, but it is so darn complicated that it deserves a much deeper look.

To get more info, I went to the primary source. Dr Chris Mansi, the co-founder and CEO of Viz.ai, was kind enough to talk to me about the CMS decision. He was also remarkably open and transparent about the process and the implications as they see them, which has helped me clear up a whole bunch of stuff in my mind. High fives all around!

So let’s dig in. This decision might form the basis of AI reimbursement in the future. It is a huge deal, and there are implications.


Uncharted territory

The first thing to understand is that Viz.ai charges a subscription to use their model. The cost is not what was included as “an example” in the CMS documents (25k/yr per hospital), and I have seen some discussion on Twitter that it is more than this per annum, but the actual cost is pretty irrelevant to this discussion.

For the purpose of this piece, I’ll pretend that the cost is the 25k/yr in the CMS document, just for simplicity. It is order-of-magnitude right, and that is what matters.

A subscription is not the only way that AI can be sold (I have seen other companies who charge per use as well) but it is a fairly common approach. Importantly though, it is unusual for a medical technology. Here is what CMS had to say:

Continue reading…

CT scanning is just awful for diagnosing Covid-19

By LUKE OAKDEN-RAYNER, MBBS

I got asked the other day to comment for Wired on the role of AI in Covid-19 detection, in particular for use with CT scanning. Since I didn’t know exactly what resources they had on the ground in China, I could only make some generic, vaguely negative statements. I thought it would be worthwhile to expand on those ideas here, so I am writing two blog posts on the topic: one on CT scanning for Covid-19, and one on using AI on those CT scans.

As background, the pro-AI argument goes like this:

  1. CT screening detects 97% of Covid-19 cases; viral PCR only detects 70%!
  2. A radiologist takes 5-10 minutes to read a CT chest scan. AI can do it in a second or two.
  3. If you use CT for screening, there will be so many studies that radiologists will be overwhelmed.

In this first post, I will explain why CT, with or without AI, is not worthwhile for Covid-19 screening and diagnosis, and why that 97% sensitivity report is unfounded and unbelievable.
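The arithmetic behind that skepticism is worth spelling out. As a sketch – the 97% sensitivity is the claim quoted above, while the 95% specificity and 1% prevalence are assumed numbers for illustration, not figures from the post – Bayes’ rule shows what happens when a sensitive test is used for screening at low prevalence:

```python
def positive_predictive_value(sens, spec, prevalence):
    """Bayes' rule: the fraction of positive tests that are true positives."""
    true_pos = sens * prevalence              # sick people correctly flagged
    false_pos = (1 - spec) * (1 - prevalence)  # healthy people wrongly flagged
    return true_pos / (true_pos + false_pos)

# 97% sensitivity comes from the claim above; the specificity and
# prevalence are illustrative assumptions, not figures from the post.
ppv = positive_predictive_value(sens=0.97, spec=0.95, prevalence=0.01)
```

With these inputs, only about one in six positive scans would come from an actual case – the false positives from the 99% of healthy people swamp the true positives, which is one reason a headline sensitivity number alone can’t justify screening.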

Next post, I will address the use of AI for this task specifically.

Continue reading…

Can AI diagnose COVID-19 on CT scans? Can humans?


By VASANTH VENUGOPAL MD and VIDUR MAHAJAN MBBS, MBA

What can Artificial Intelligence (AI) do?

AI can, simply put, do two things – one, it can do what humans can do. These are tasks like looking at CCTV cameras, detecting people’s faces, or, in this case, reading CT scans and identifying ‘findings’ of pneumonia that radiologists can otherwise also find – just that this happens automatically and fast. Two, AI can do things that humans can’t do – like telling you the exact time it would take you to go from point A to point B (i.e. Google Maps), or, like in this case, diagnosing COVID-19 pneumonia on a CT scan.

Pneumonia on CT scans?

Pneumonia, an infection of the lungs, is a killer disease. According to WHO statistics from 2015, Community Acquired Pneumonia (CAP) is the deadliest communicable disease and the third leading cause of mortality worldwide, causing 3.2 million deaths every year.

Pneumonias can be classified in many ways, including by the type of infectious agent (etiology), the source of infection, and the pattern of lung involvement. From an etiological perspective, the most common causative agents of pneumonia are bacteria (typical, like Pneumococcus and H. influenzae, and atypical, like Legionella and Mycoplasma), viruses (influenza, respiratory syncytial virus, parainfluenza, and adenoviruses), and fungi (Histoplasma and Pneumocystis carinii).

Continue reading…

Artificial Intelligence vs. Tuberculosis – Part 2

By SAURABH JHA, MD

This is part two of a three-part series. Catch up on Part One here.

Clever Hans

Preetham Srinivas, the head of the chest radiograph project in Qure.ai, summoned Bhargava Reddy, Manoj Tadepalli, and Tarun Raj to the meeting room.

“Get ready for an all-nighter, boys,” said Preetham.

Qure’s scientists began investigating the algorithm’s mysteriously high performance on chest radiographs from a new hospital. To recap, the algorithm had an area under the receiver operating characteristic curve (AUC) of 1 – that’s 100% on a multiple-choice test.

“Someone leaked the paper to AI,” laughed Manoj.

“It’s an engineering college joke,” explained Bhargava. “It means that you saw the questions before the exam. It happens sometimes in India when rich people buy the exam papers.”

Just because you know the questions doesn’t mean you know the answers. And AI wasn’t rich enough to buy the AUC.
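A data leak like this is easy to reproduce in miniature. The sketch below is entirely hypothetical – the data and the “scanner artifact” feature are invented for illustration, not Qure.ai’s – but it shows how a model keying on a hospital-specific artifact that happens to track the diagnosis scores a perfect AUC without reading a single lung:

```python
def auc(scores, labels):
    """Mann-Whitney formulation of AUC: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical setup: every abnormal film came from a hospital whose
# scanner leaves a telltale artifact. A model that detects only the
# artifact separates the classes perfectly.
labels = [0, 0, 0, 1, 1, 1]
artifact_scores = [0.1, 0.2, 0.1, 0.9, 0.8, 0.95]  # tracks the scanner, not disease
print(auc(artifact_scores, labels))  # 1.0
```

Since every “diseased” score exceeds every “healthy” one, the AUC is exactly 1 – the model aced the exam because it saw the questions, not because it knew the answers.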

The four lads were school friends from Andhra Pradesh. They had all studied computer science at the Indian Institute of Technology (IIT), a freaky improbability given that only a hundred out of a million aspiring youths are selected for this most coveted discipline in India’s most coveted institute. They had revised for exams together, pulling all-nighters – in working together, they worked harder and made work more fun.

Continue reading…

Detecting Heart Conditions Faster: The Case for Biomarkers-PLUS-AI | Dean Loizou, Prevencio

BY JESSICA DAMASSA

Can artificial intelligence help prevent cardiovascular diseases? Biotech startup Prevencio has developed a proprietary panel of biomarkers that uses blood proteins and sophisticated AI algorithms to detect cardiovascular conditions like coronary and peripheral artery disease, aortic stenosis, risk for stroke, and more. Dean Loizou, Prevencio’s VP of Business Development, breaks down the process step-by-step and explains exactly how Prevencio reports its clinically viable scores to doctors. How does the AI fit into all this? We get to that too, plus the details around this startup’s plans for raising a B-round on the heels of its work with Bayer.

Filmed at Bayer G4A Signing Day in Berlin, Germany, October 2019.