Mike Magee – The Health Care Blog https://thehealthcareblog.com – Everything you always wanted to know about the Health Care system. But were afraid to ask.

Will Artificial Intelligence (AI) Trigger Universal Health Care in America? What do expert Academics say? https://thehealthcareblog.com/blog/2024/04/18/will-artificial-intelligence-ai-trigger-universal-health-care-in-america-what-do-expert-academics-say/ Thu, 18 Apr 2024 07:14:00 +0000

By MIKE MAGEE

In his book, “The Age of Diminished Expectations” (MIT Press/1994), Nobel Prize winner, Paul Krugman, famously wrote, “Productivity isn’t everything, but in the long run it is almost everything.”

A year earlier, psychologist Karl E. Weick from the University of Michigan coined the term “sensemaking” based on his belief that the human mind was in fact the engine of productivity, and functioned like a biological computer which “receives input, processes the information, and delivers an output.”

But comparing the human brain to a computer was not exactly a compliment back then. For example, in 1994, Krugman’s MIT colleague, economist Erik Brynjolfsson, coined the term “Productivity Paradox,” stating, “An important question that has been debated for almost a decade is whether computers contribute to productivity growth.”

Now three decades later, both Krugman (via MIT to Princeton to CUNY) and Brynjolfsson (via Harvard to MIT to the Stanford Institute for Human-Centered AI) remain at the center of the generative AI debate, as they serve together as research associates at the National Bureau of Economic Research (NBER) and attempt to “make sense” of our most recent scientific and technologic breakthroughs.

Not surprisingly, Medical AI (mAI) has been front and center. In November 2023, Brynjolfsson teamed up with fellow West Coaster, Robert M. Wachter, on a JAMA Opinion piece titled “Will Generative Artificial Intelligence Deliver on Its Promise in Health Care?”

Dr. Wachter, the Chair of Medicine at UC San Francisco, coined his own ground-breaking term in 1996 – “hospitalist.” Considered the father of the field, he has long had an interest in the interface between computers and institutions of health care. 

In his 2015 New York Times bestseller, “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age,” he wrote, “We need to recognize that computers in healthcare don’t simply replace my doctor’s scrawl with Helvetica 12. Instead, they transform the work, the people who do it, and their relationships with each other and with patients.”

What Brynjolfsson and Wachter share in common is a sense of humility and realism when it comes to the history of systemic underperformance at the intersection of technology and health care.

They begin their 2023 JAMA commentary this way, “History has shown that general purpose technologies often fail to deliver their promised benefits for many years (‘the productivity paradox of information technology’). Health care has several attributes that make the successful deployment of new technologies even more difficult than in other industries; these have challenged prior efforts to implement AI and electronic health records.”

And yet, they are optimistic this time around.

Why? Primarily because of the speed and self-corrective capabilities of generative AI. As they conclude, “genAI is capable of delivering meaningful improvements in health care more rapidly than was the case with previous technologies.”

Still, the “productivity paradox” is a steep hill to climb. Historically it has been a byproduct of flaws in early versions of new technology, and of the status quo resistance embedded in the “processes, structure, and culture” of corporate hierarchy. When it comes to preserving both power and profit, change is a threat.

As Brynjolfsson and Wachter put it diplomatically, “Humans, unfortunately, are generally unable to appreciate or implement the profound changes in organizational structure, leadership, workforce, and workflow needed to take full advantage of new technologies…overcoming the productivity paradox requires complementary innovations in the way work is performed, sometimes referred to as ‘reimagining the work.’”

How far and how fast could mAI push health care transformation in America? Three factors that favor rapid transformation this time around are improved readiness, ease of use, and opportunity for out-performance.

Readiness comes in the form of knowledge gained from the mistakes and corrective steps associated with EHR over the past two decades. A scaffolding infrastructure already exists, along with a level of adoption by physicians and nurses and patients, and the institutions where they congregate.

Ease of use is primarily a function of mAI being localized to software rather than requiring expensive, regulation-laden hardware devices. The new tools are “remarkably easy to use,” “require relatively little expertise,” and are “dispassionate and self-correcting” in near real-time when they err.

Opportunity to out-perform in a system that is remarkably inefficient, inequitable, often inaccessible and ineffective, has been obvious for some time. Minorities, women, infants, rural populations, the uninsured and under-insured, and the poor and disabled are all glaringly under-served.

Unlike the power elite of America’s Medical Industrial Complex, mAI is open-minded and not inherently resistant to change.

Multimodal, large language, self learning mAI is limited by only one thing – data. And we are literally the source of that data. Access to us – each of us and all of us – is what is missing.

What would you, as one of the 333 million citizens of the U.S., expect to offer in return for universal health insurance and reliable access to high quality basic health care services?

Would you be willing to provide full and complete de-identified access to all of your vital signs, lab results, diagnoses, external and internal images, treatment schedules, follow-up exams, clinical notes, and genomics?

Here’s what mAI might conclude in response to our collective data:

  1. It is far less expensive to pay for universal coverage than pay for the emergent care of the uninsured.
  2. Prior algorithms have been riddled with bias and inequity.
  3. Unacceptable variance in outcomes, especially for women and infants, plague some geographic regions of the nation.
  4. The manning table for non-clinical healthcare workers is unnecessarily large, and could easily be cut in half by simplifying and automating customer service interfaces and billing standards.
  5. Direct to Consumer marketing of pharmaceuticals and medical devices is wasteful, confusing, and no longer necessary or beneficial.
  6. Most health prevention and maintenance may now be personalized, community-based, and home-centered.
  7. Abundant new discoveries, and their value to society, will largely be able to be validated as worthy of investment (or not) in real time.
  8. Fraudulent and ineffective practices and therapies, and opaque profit sharing and kickbacks, are now able to be exposed and addressed.
  9. Medical education will now be continuous and require increasingly curious and nimble leaders comfortable with machine learning techniques.
  10. U.S. performance by multiple measures, against other developed nations, will be visible in real time to all.

The collective impact on the nation’s economy will be positive and measurable. As Paul Krugman wrote thirty years ago, “A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker.”

As it turns out, health data for health coverage makes “good sense” and would be a pretty good bargain for all Americans.

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex (Grove/2020).

Disability Activist: Take Great Care When Seeing Bias Toward Disabled Citizens https://thehealthcareblog.com/blog/2024/03/20/disability-activist-take-great-care-when-seeing-bias-toward-disabled-citizens/ Wed, 20 Mar 2024 07:56:00 +0000

By RANDY SOUDERS

During the years I served as Chairman of the Board for Jean Kennedy Smith’s Arts and Disability program, Very Special Arts (VSA at the Kennedy Center), I had the opportunity to meet a wide range of remarkable and courageous disabled Americans. Among the lasting friendships is one with painter and visual artist Randy Souders, who was rendered quadriplegic at the age of 17 in a 1972 accident. His concerns of late have been heightened by Trump and MAGA Republicans. I share his communication with his permission here in the hope that tech designers and others will be alert to the fact that great care is required at this point, lest history repeat. — Mike Magee MD

When I was injured at the age of 17 the world was still quite closed for people like me. That was a year before passage of Section 504 of the Rehabilitation Act of 1973. As I recall, that law was the first to mandate access to public places that received federal funds. A year later Jean Kennedy Smith founded VSA (Very Special Arts), which has provided important arts opportunities to literally millions of people with disabilities around the globe. It was a very different world back then, and artistic achievement was an important way people such as myself could prove their worth to a society that still saw little evidence of it.

It’s unbelievable to think there are serious threats to roll back many of those hard won gains in the name of deregulation and profitability. Disability is costly and people with disabilities are still woefully underemployed. So when a billionaire presidential candidate repeatedly mocks people with disabilities, how long till the “useless/ unworthy” excuses rise again? The old term describing a person with a disability as an “invalid” has another meaning. The adjective use is defined as “Not valid; not true, correct, acceptable or appropriate.”

Few today are aware that the first victims of the Holocaust were mentally, physically and neurologically disabled people. They were systematically murdered by several Nazi programs specifically targeting them. The Nazi regime was aided in its crimes by perverted “medical doctors and other experts” who were often seen wearing white lab coats in order to visually reinforce their propaganda.

Branded as “useless eaters” and existing as “lives not worthy of life,” people with disabilities were declared an unbearable burden both to German society and the state. As Holocaust historians have documented, “From 1939 to 1941 the Nazis carried out a campaign of euthanasia known as the T4 program (an abbreviation of Tiergartenstrasse 4 which itself was a shortened version of Zentral Dienststelle-T4: Central Office T4) the address from which the program was coordinated.”

These most vulnerable of humans were reportedly the first victims of mass extermination by poison gas and by cheaper carbon monoxide from automobile exhaust fumes. But first “a panel of medical experts were required to give their approval for the euthanasia/ ‘mercy-killing’ of each person.”

In the end an estimated quarter million people with disabilities were killed in gas chambers disguised as shower rooms. This model for killing disabled people was later applied to the industrialized murder within Nazi concentration and death camps such as Auschwitz-Birkenau.

Much has been written on this topic but few seem to know the chronology and diabolical history of how these “beneficial cleansings” of undesirables often start. The Nazis enlisted medical doctors to provide them with a veneer of moral justification for their atrocities.

Throughout history, authoritarian political despots have also worked diligently to silence dissent and co-opt religion in order to assist in their mutual quests for total control and dominance of others.

And theocrats are convinced their particular splinter of a schism is the ultimate authority on earth as well as the entire universe. Stoning, beheadings and the hanging of transgressors and non-believers are arbitrarily justified by interpretations of their particular holy book.

There is much to fear when politicians exploit the religious beliefs of medical professionals in order to pass laws denying the rights of others to control their own bodies. This blatant pandering for votes by promising to deliver on religious wedge issues creates a positive feedback loop resulting in politicians being deified by their religious influencers. This is aided by a campaign of rationalization absolving them of their obvious failings. Such a campaign of apologetics by religious leaders is active and widespread in America as I type.

Examples include “God doesn’t call the qualified…He qualifies the called” (Exodus Chapter 4) and “God calls imperfect men to do His perfect will.” Is there even a red line where such “imperfect men” become an existential threat? Apparently not. I’m sure most citizens of the Third Reich didn’t think so until everything imploded.

The current Republican candidate for President is on the record as being a believer in the “racehorse theory” – the idea that selective breeding can improve a country’s performance, which American eugenicists and German Nazis used in the last century to buttress their goals of racial purity. On September 18, 2020 he told a mostly white crowd of supporters in Bemidji, Minn. “You have good genes. A lot of it is about the genes, isn’t it? Don’t you believe? The racehorse theory. You think we’re so different? You have good genes in Minnesota.”

This is one of many such statements he has made claiming that genetics account for his personal superiority and that of his family. The New York Times reports “Mr. Trump was talking publicly about his belief that genetics determined a person’s success in life as early as 1988, when he told Oprah Winfrey that a person had ‘to have the right genes’ in order to achieve great fortune.”

These statements, combined with those “about undocumented immigrants poisoning the blood” of America, should equate to a 100-alarm fire.

Randy Souders is a Professional Artist, an Arts & Disability Advocate and has been Quadriplegic since 1972

The ‘Barbie Speech’ – How Much Has Really Changed For Women in America? https://thehealthcareblog.com/blog/2024/03/11/the-barbie-speech-how-much-has-really-changed-for-women-in-america/ Mon, 11 Mar 2024 16:47:01 +0000

By MIKE MAGEE

In our world where up is down, and black is white, there is a left and a right – it’s the middle we appear to be missing. Does it exist, or was it make-believe all along?

Into this existential despair enters Britt Cagle Grant, the 47-year-old federal judge of the U.S. Court of Appeals for the Eleventh Circuit. The Stanford Law graduate, blessed by the Federalist Society and Leonard Leo, and a former clerk of Hon. Brett Kavanaugh, was nominated by Donald Trump and confirmed by the Senate on July 31, 2018.

Now six years later, her words in rejecting DeSantis’s “Stop Woke Act” (otherwise known as the “Individual Freedom Measure”) are particularly crushing to her supporters: “By limiting its restrictions to a list of ideas designated as offensive, …it penalizes certain viewpoints — the greatest First Amendment sin. Banning speech on a wide variety of political topics is bad; banning speech on a wide variety of political viewpoints is worse.”

In 2022, before his Presidential run, DeSantis used the bill as the leading edge of a divisive campaign based on white nationalist victimization, stating, “No one should be instructed to feel as if they are not equal or shamed because of their race. In Florida, we will not let the far-left woke agenda take over our schools and workplaces.”

Ron and Casey DeSantis mirror in many ways the fictional Barbie and Ken – soon to be featured in the 2024 Academy Awards. The comparison of Ron to Ken needs little explanation. And Casey is equally well-credentialed. The former host of PGA Tour Today met her husband on the golf course, and was married at Disney World. Beautiful and smart as a whip, she graduated with a degree in Economics from the College of Charleston where she competed on the Equestrian Team.

With this most recent turn of events, the DeSantis family seems to be following the plot line (with its twists and turns) of Barbie – this year’s favorite for Picture of the Year. And in the aftermath of that film you will find a female disrupter at least as prominent as Judge Grant.

I am speaking of the brilliant actress, America Ferrera, who played a 39-year-old mother and Mattel employee, and delivered what one film critic describes as “the ‘Barbie’ monologue we all talked about.” You can find the two-minute speech in its entirety here, and it is well worth a listen. Ferrera herself described the big speech this way: “funny and subversive and delightfully weird.”

When I first heard the speech (as a husband, father of a grown daughter, grandfather of six granddaughters, and brother of six sisters), I cried at one specific line – “It’s too hard.” – which comes in the next-to-last paragraph.

Here is “The Speech”:

“It is literally impossible to be a woman. You are so beautiful, and so smart, and it kills me that you don’t think you’re good enough. Like, we have to always be extraordinary, but somehow we’re always doing it wrong.

You have to be thin, but not too thin. And you can never say you want to be thin. You have to say you want to be healthy, but also you have to be thin. You have to have money, but you can’t ask for money because that’s crass. You have to be a boss, but you can’t be mean. You have to lead, but you can’t squash other people’s ideas. You’re supposed to love being a mother, but don’t talk about your kids all the damn time. You have to be a career woman but also always be looking out for other people.

You have to answer for men’s bad behavior, which is insane, but if you point that out, you’re accused of complaining. You’re supposed to stay pretty for men, but not so pretty that you tempt them too much or that you threaten other women because you’re supposed to be a part of the sisterhood.

But always stand out and always be grateful. But never forget that the system is rigged. So find a way to acknowledge that but also always be grateful.

You have to never get old, never be rude, never show off, never be selfish, never fall down, never fail, never show fear, never get out of line. It’s too hard! It’s too contradictory and nobody gives you a medal or says thank you! And it turns out in fact that not only are you doing everything wrong, but also everything is your fault.

I’m just so tired of watching myself and every single other woman tie herself into knots so that people will like us. And if all of that is also true for a doll just representing women, then I don’t even know.”

But in our binary world, is it enough to agree with Barbie when she suggests that “Naming the problem can break the spell”?

Or must we recite again a litany of facts that document the harm done – 1 in 5 women victims of rape or attempted rape; epidemic (41%) domestic abuse and violence; unequal pay; forced birth enacted by male super-dominated Red State legislatures; absurd maternal/fetal mortality rates; no paid maternity leave; no universal preschool; a Congress that is 72% male; and I could go on. But I and many others have been this way before, in search of the right facts, the right message, to find the elusive “middle ground.”

Judge Grant’s appearance this week drew me back to March 24, 2005, when another federal judge from the Eleventh Circuit ruled for sanity in a Florida case, opposing both the Governor (Jeb Bush) and the President (George W. Bush). That judge allowed Terri Schiavo’s feeding tube to be removed at a Pinellas Park hospice, where she died peacefully on March 31, 2005.

Terri had struggled with a hidden eating disorder (a condition shared by 9% of Americans), which went undiscovered when she sought evaluation for infertility. On February 25, 1990, she collapsed in the lobby of their apartment in St. Petersburg, Florida. She was resuscitated but from that day forward remained in a “permanent vegetative state.” An epic 15-year “culture war” ensued before final judicial relief was grudgingly earned.

Shouting from street side the day she died was Randall Terry, leader of Operation Rescue, who somehow believed that Schiavo had not suffered enough, and that what our country needed was a heavy dose of “traditional masculinity,” defined by the American Psychological Association in 2018 as a blend of “stoicism, competitiveness, dominance and aggression—and on the whole, harmful.”

Is the middle really missing? As our fictional Barbie said, let’s believe that “naming the problem can break the spell,” and that others like Judge Britt Cagle Grant might unexpectedly come along. Otherwise, we are likely to witness other Terri Schiavos, destined to die because being a woman in Trump’s America is “just too hard.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex (Grove/2020).

Are AI Clinical Protocols A Dobb-ist Trojan Horse? https://thehealthcareblog.com/blog/2024/03/01/are-ai-clinical-protocols-a-dobb-ist-trojan-horse/ Fri, 01 Mar 2024 06:26:41 +0000

By MIKE MAGEE

For most loyalist Americans at the turn of the 20th century, Justice John Marshall Harlan’s decision in Jacobson v. Massachusetts (1905) was a “slam dunk.” In it, he elected to force a reluctant Methodist minister in Massachusetts to undergo smallpox vaccination during a regional epidemic or pay a fine.

Justice Harlan wrote at the time: “Real liberty for all could not exist under the operation of a principle which recognizes the right of each individual person to use his own, whether in respect of his person or his property, regardless of the injury that may be done to others.”

What could possibly go wrong here? Of course, citizens had not fully considered the “unintended consequences,” let alone the presence of President Wilson and others focused on “strengthening the American stock.”

This involved a two-prong attack on “the enemy without” and “the enemy within.”

The Immigration Act of 1924, signed by President Calvin Coolidge, was the culmination of an attack on “the enemy without.” Quotas for immigration were set according to the 1890 Census, which had the effect of advantaging the selective influx of Anglo-Saxons over Eastern Europeans and Italians. Asians (except Filipinos, who were then U.S. nationals) were banned.

As for “the enemy within,” rooters for the cause of weeding out “undesirable human traits” from the American populace had the firm support of premier academics from almost every elite university across the nation. This came in the form of new departments focused on advancing the “Eugenics Movement,” an excessively discriminatory, quasi-academic approach based on the work of Francis Galton, cousin of Charles Darwin.

Isolationists and Segregationists picked up the thread and ran with it, focusing on vulnerable members of the community labeled as paupers, mentally disabled, dwarfs, promiscuous or criminal.

In a strategy eerily reminiscent of that employed by Mississippi Pro-Life advocates in Dobbs v. Jackson Women’s Health Organization in 2021, Dr. Albert Priddy, activist director of the Virginia State Colony for Epileptics and Feebleminded, teamed up with radical Virginia state senator Aubrey Strode to hand pick and literally make a “federal case” out of a young institutionalized teen resident named Carrie Buck.

Their goal was to force the nation’s highest courts to sanction state sponsored mandated sterilization.

In a strange twist of fate, the Dobbs name was central to this case as well.

That is because Carrie Buck was under the care of foster parents, John and Alice Dobbs, after Carrie’s mother, Emma, was declared mentally incompetent. At the age of 17, Carrie, after having been removed from school after the 6th grade to work as a domestic for the Dobbs, was raped by their nephew and gave birth to a daughter, Vivian. This led to her mandated institutionalization, and subsequent official labeling as an “imbecile.”

In his majority decision supporting Dr. Priddy in Buck v. Bell, Supreme Court Justice Oliver Wendell Holmes leaned heavily on precedent. Reflecting his extreme bias, he wrote: “The principle that supports compulsory vaccination is broad enough to cover the cutting of Fallopian tubes (Jacobson v. Massachusetts 197 US 11). Three generations of imbeciles are enough.”

Carrie Buck lived to age 76, had no mental illness, and read the Charlottesville, VA newspaper every day, cover to cover. There is no evidence that her mother Emma was mentally incompetent. Her daughter Vivian was an honor student, who died in the custody of John and Alice Dobbs at the age of 8.

The deeply embedded, prejudicial idea that inferiority is a biological construct – used to justify indentured servitude and the enslavement of Africans – traces back to our very beginnings as a nation. Our third president, Thomas Jefferson, was not shy in declaring that his enslaved Africans were biologically distinguishable from land-holding whites. Anticipating Eugenic activists a century later, the President claimed that his enslaved Africans’ suitability for brutal labor was based on their greater physiologic tolerance for plantation-level heat exposure, and lesser (required) kidney output.

Helen Burstin MD, CEO of the Council of Medical Specialty Societies, drew a direct line from those early days to the present day practice of medicine anchored in opaque decision support computerized algorithms. “It is mind-blowing in some ways how deeply embedded in history some of this misinformation is,” she said. She was talking about risk-prediction tools that are commercial and proprietary, and utilized for opaque oversight of “roughly 200 million U.S. citizens per year.” Originally designed for health insurance prior approval systems and managed care decisions, they now provide underpinning for new AI super-charged personalized medicine decision support systems.

Documented misinformed and racially constructed clinical guidelines have been uncovered and rewritten over the past few years. They include obstetrical guidelines that disadvantaged black mothers seeking vaginal birth over Caesarian Section, and limitations on treatment of black children with fever and acute urinary tract infection, as just two examples. Other studies uncovered reinforcement of myths that “black people have higher pain thresholds,” greater strength, and resistance to disease – all in support of their original usefulness as slave laborers.

Can’t we just make a fresh start on clinical guidelines? Sadly, it is not that easy. As James Baldwin famously wrote, “People are trapped in history and history is trapped in them.” The explosion of technologic advance in health care has the potential to trap the bad with the good, as vast databases are fed into hungry machines indiscriminately.

Computing power, genomic databases, EMRs, natural language processing, machine-based learning, generative AI, and massive multimodal downloads bury our historic biases and errors under multi-layered camouflage. Modern day Dobb-ists have now targeted vulnerable women and children using carefully constructed legal cases and running them all the way up to the Supreme Court. This strategy was joined with a second (MAGA Republican take-overs of state legislatures) to ban abortion, explore contraceptive restrictions, and eliminate fertility therapy. It is one more simple step to require encodement of these restrictions on medical freedom and autonomy into binding clinical protocols.

In an age where local bureaucrats are determined to “play doctor”, and modern day jurists are determined to provide cover for a third wave of protocol encoded Dobb-ists, “the enemy without” runs the risk of becoming “the enemy within.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex (Grove/2020).

The 7 Decade History of ChatGPT https://thehealthcareblog.com/blog/2024/02/19/the-7-decade-history-of-chatgpt/ Mon, 19 Feb 2024 07:06:00 +0000

By MIKE MAGEE

Over the past year, the general popularization of AI or Artificial Intelligence has captured the world’s imagination. Of course, academicians often emphasize historical context. But entrepreneurs tend to agree with Thomas Jefferson who said, “I like dreams of the future better than the history of the past.”

This particular dream however is all about language, its standing and significance in human society. Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”

Well before ChatGPT became a household phrase, there was LDT, or the laryngeal descent theory. It professed that humans’ unique capacity for speech was the result of a voice box, or larynx, that is lower in the throat than in other primates. This permitted the “throat shape, and motor control” to produce vowels that are the cornerstone of human speech. Speech – and therefore language arrival – was pegged to anatomical evolutionary changes dated at between 200,000 and 300,000 years ago.

That theory, as it turns out, had very little scientific evidence. And in 2019, a landmark study set about pushing the date of primate vocalization back to at least 3 to 5 million years ago. As scientists summarized it in three points: “First, even among primates, laryngeal descent is not uniquely human. Second, laryngeal descent is not required to produce contrasting formant patterns in vocalizations. Third, living nonhuman primates produce vocalizations with contrasting formant patterns.”

Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology. If you want to study speech science, you better have a working knowledge of “phonetics, anatomy, acoustics and human development” say the  experts. You could add to this “syntax, lexicon, gesture, phonological representations, syllabic organization, speech perception, and neuromuscular control.”

Professor Paul Pettitt, who makes a living at the University of Oxford interpreting ancient rock paintings in Africa and beyond, sees the birth of civilization in multimodal language terms. He says, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.” Google CEO Sundar Pichai maintains a similarly expansive view when it comes to language. In his December 6, 2023, introduction of the company’s ground-breaking LLM (large language model), Gemini (a competitor of ChatGPT), he described the new product as “our largest and most capable AI model with natural image, audio and video understanding and mathematical reasoning.”

Digital Cognitive Strategist, Mark Minevich, echoed Google’s view that the torch of human language had now gone well beyond text alone and had been passed to machines. His review: “Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on, yet goes far beyond, predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically.”

GPT what???

O.K. Let’s take a step back, and give us all a chance to catch-up.

What we call AI or “artificial intelligence” is a 70-year-old concept that used to be called “deep learning.” This was the brain construct of University of Chicago research scientists Warren McCulloch and Walter Pitts, who developed the concept of “neural nets” in 1944, modeling the theoretical machine learner after human brains – multiple overlapping transit fibers, joined at synaptic nodes, which with adequate stimulus could allow gathered information to pass on to the next fiber down the line.

On the strength of that concept, the two moved to MIT in 1952 and launched the Cognitive Science Department, uniting computer scientists and neuroscientists. In the meantime, Frank Rosenblatt, a Cornell psychologist, invented the “first trainable neural network” in 1957, which he futuristically termed the “Perceptron.” It included a data input layer, a sandwich layer that could adjust information packets with “weights” and “firing thresholds,” and a third output layer to allow data that met the threshold criteria to pass down the line.
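
To make the Perceptron’s three-layer logic concrete, here is a minimal sketch in Python; the inputs, weights, and firing threshold are invented for illustration and are not drawn from Rosenblatt’s actual machine.

```python
# A toy Perceptron-style pass: inputs are weighted, summed, and compared to a
# firing threshold before anything reaches the output layer. Values are
# illustrative only.

def perceptron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0   # "fires" only past the threshold

print(perceptron([1, 0], [0.6, 0.9], 1.0))  # 0 - below threshold, no output
print(perceptron([1, 1], [0.6, 0.9], 1.0))  # 1 - threshold met, signal passes on
```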

Back at MIT, the Cognitive Science Department was in the process of being hijacked in 1969 by mathematicians Marvin Minsky and Seymour Papert, and became the MIT Artificial Intelligence Laboratory. They summarily trashed Rosenblatt’s Perceptron machine believing it to be underpowered and inefficient in delivering the most basic computations. By 1980, the department was ready to deliver a “never mind,” as computing power grew and algorithms for encoding thresholds and weights at neural nodes became efficient and practical.

The computing leap, experts now agree, came “courtesy of the computer-game industry” whose “graphics processing unit” (GPU), which housed thousands of processing cores on a single chip, was effectively the neural net that McCulloch and Pitts had envisioned. By 1977, Atari had developed game cartridges and microprocessor-based hardware, with a successful television interface.

With the launch of the Internet, and the commercial explosion of desk top computing, language – that is the fuel for human interactions worldwide – grew exponentially in importance. More specifically, the greatest demand was for language that could link humans to machines in a natural way.

With the explosive growth of text data, the focus initially was on Natural Language Processing (NLP), “an interdisciplinary subfield of computer science and linguistics primarily concerned with giving computers the ability to support and manipulate human language.” Training software initially used annotated or referenced texts to address or answer specific questions or tasks precisely. But the usefulness and accuracy of these systems for inquiries outside of their pre-determined training were limited, and inefficiency undermined their usage.

But computing power had now advanced far beyond what Warren McCulloch and Walter Pitts could have possibly imagined in 1944, while the concept of “neural nets” couldn’t be more relevant. IBM describes the modern day version this way:

“Neural networks …are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another… Artificial neural networks are comprised of node layers, containing an input layer, one or more hidden layers, and an output layer…Once an input layer is determined, weights are assigned. These weights help determine the importance of any given variable, with larger ones contributing more significantly to the output compared to other inputs. All inputs are then multiplied by their respective weights and then summed. Afterward, the output is passed through an activation function, which determines the output. If that output exceeds a given threshold, it “fires” (or activates) the node, passing data to the next layer in the network… it’s worth noting that the “deep” in deep learning is just referring to the depth of layers in a neural network. A neural network that consists of more than three layers—which would be inclusive of the inputs and the output—can be considered a deep learning algorithm. A neural network that only has two or three layers is just a basic neural network.”
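
To make that description concrete, here is a minimal sketch of such a forward pass in Python; the layer sizes and random weights are invented purely for illustration, and NumPy is assumed as the only dependency.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Activation function: decides whether a node "fires"
    return np.maximum(0, x)

def forward(x, layers):
    for weights, bias in layers:
        x = relu(weights @ x + bias)   # multiply by weights, sum, then activate
    return x

# An input layer of 4 features, two hidden layers, and an output layer -
# more than three layers in total, so "deep" by IBM's definition above.
layers = [
    (rng.normal(size=(8, 4)), np.zeros(8)),
    (rng.normal(size=(8, 8)), np.zeros(8)),
    (rng.normal(size=(1, 8)), np.zeros(1)),
]
print(forward(rng.normal(size=4), layers))
```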

The bottom line is that the automated system responds to an internal logic. The computer’s “next choice” is determined by how well it fits in with the prior choices. And it doesn’t matter where the words or “coins” come from. Feed it data, and it will “train” itself; and by following the rules or algorithms embedded in the middle decision layers or screens, it will “transform” the acquired knowledge into “generated” language that both human and machine understand.

In late 2015, a group of tech entrepreneurs including Elon Musk and Reid Hoffman, believing AI could go astray if restricted or weaponized, formed a non-profit called OpenAI. Within a few years they released the deep learning products that would become ChatGPT. This solution was born out of the marriage of Natural Language Processing and deep learning neural nets, with a stated goal of “enabling humans to interact with machines in a more natural way.”

The GPT stood for “Generative Pre-trained Transformer.” Built into the software was the ability to “consider the context of the entire sentence when generating the next word” – a tactic known as “auto-regressive.” As a “self-supervised learning model,” GPT is able to learn by itself from ingesting huge amounts of anonymous text; transform it by passing it through a variety of intermediary weighted screens that jury the content; and allow passage (and survival) of data that is validated. The resultant output? Language that convincingly mimics human text.
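
As a rough illustration of that “auto-regressive” loop, the toy sketch below picks one word at a time while looking back at everything generated so far. The tiny vocabulary and the scoring function are stand-ins of my own invention – a real GPT model scores candidates with a trained transformer network.

```python
import random

random.seed(0)

VOCAB = ["the", "patient", "doctor", "reads", "chart", "carefully"]

def score(context, candidate):
    # Stand-in for a trained model: a real GPT computes these probabilities
    # with a transformer; here we only discourage immediate repetition.
    penalty = 0.1 if context and context[-1] == candidate else 1.0
    return random.random() * penalty

def generate(prompt, new_tokens=5):
    tokens = prompt.split()
    for _ in range(new_tokens):
        # The whole sequence so far is the context for the next choice
        tokens.append(max(VOCAB, key=lambda w: score(tokens, w)))
    return " ".join(tokens)

print(generate("the doctor"))
```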

Leadership in Microsoft was impressed, and in 2019 ponied up $1 billion to jointly participate in development of the product and serve as their exclusive Cloud provider.

The first model in the series, GPT-1, was introduced by OpenAI in 2018; ChatGPT itself was not formally released to the public until November 30, 2022.

GPT-1 was trained on an enormous BooksCorpus dataset. Its design included an input and output layer, with 12 successive transformer layers sandwiched in between. It was so effective in Natural Language Processing that minimal fine-tuning was required on the back end.

OpenAI next released version two, GPT-2, which was 10 times the size of its predecessor, with 1.5 billion parameters and the capacity to translate and summarize. GPT-3 followed. It had now grown to 175 billion parameters, 100 times the size of GPT-2, and was trained by ingesting a corpus of roughly 500 billion tokens of content (including that of my own book – CODE BLUE). It could now generate long passages on verbal demand, do basic math, write code, and do (what the inventors describe as) “clever tasks.” An intermediate GPT-3.5 absorbed Wikipedia entries, social media posts and news releases.

On March 14, 2023, GPT-4 went big, now with multimodal capabilities spanning text, speech, images, and physical interactions with the environment. This represents an exponential convergence of multiple technologies including databases, AI, Cloud Computing, 5G networks, personal Edge Computing, and more.

The New York Times headline announced it as “Exciting and Scary.” Their technology columnist wrote, “What we see emerging are machines that know how to reason, are adept at all human languages, and are able to perceive and interact with the physical environment.” He was not alone in his concerns. The Atlantic, at about the same time, ran an editorial titled, “AI is about to make social media (much) more toxic.”

Leonid Zhukov, Ph.D., director of the Boston Consulting Group’s (BCG) Global AI, believes offerings like ChatGPT-4 and Gemini have the potential to become the brains of autonomous agents—which don’t just sense but also act on their environment—in the next 3 to 5 years. This, he says, “could pave the way for fully automated workflows.”

Were he alive, Leonardo da Vinci would likely be unconcerned. Five hundred years ago, he wrote nonchalantly, “It had long come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex (Grove/2020).

The 2024 Word of the Year: Missense https://thehealthcareblog.com/blog/2024/02/12/the-2024-word-of-the-year-missense/ Tue, 13 Feb 2024 01:37:37 +0000

By MIKE MAGEE

Not surprisingly, my nominee for “word of the year” involves AI, and specifically “the language of human biology.”

As Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute and coiner of the term “friendly AI,” stated in Forbes:

“Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement – wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.”

Perhaps the simplest way to begin is to say that “missense” is a form of misspeak or expressing oneself in words “incorrectly or imperfectly.” But in the case of “missense”, the language is not made of words, where (for example) the meaning of a sentence would be disrupted by misspelling or choosing the wrong word.

With “missense”, we’re talking about a different language – the language of DNA and proteins. Specifically, the focus is on how the four base units or nucleotides that provide the skeleton of a strand of DNA communicate instructions for each of the 20 different amino acids in the form of 3-letter codes or “codons.”

In this protein language, there are four nucleotides. Each “nucleotide” (adenine, guanine, cytosine, thymine) is a 3-part molecule which includes a nitrogenous base, a 5-carbon sugar and a phosphate group. The four nucleotides’ unique chemical structures are designed to create two “base-pairs.” Adenine links to Thymine through two hydrogen bonds, and Cytosine links to Guanine through three hydrogen bonds. A-T and C-G bonds effectively “reach across” two strands of DNA to connect them in the familiar “double-helix” structure. The strands gain length by using the sugar and phosphate molecules on the top and bottom of each nucleotide to join to one another.
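
A minimal sketch of that pairing rule, with an arbitrary four-letter sequence chosen purely for illustration:

```python
# Each base "reaches across" to its partner: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    return "".join(PAIR[base] for base in strand)

print(complement("ATGC"))   # TACG
```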

The A’s and T’s and C’s and G’s are the starting points of a code. A string of three, for example A-T-G, is called a “codon,” which in this case stands for one of the 20 amino acids common to all life forms, Methionine. There are 64 different codons – 61 direct the chain addition of one of the 20 amino acids (some have duplicates), and the remaining 3 codons serve as “stop codons” to end a protein chain.

Messenger RNA (mRNA) carries a mirror image of the coded nucleotide base string from the cell nucleus to ribosomes out in the cytoplasm of the cell. Codons then call up each amino acid, which, when linked together, form the protein. The protein’s structure is defined by the specific amino acids included and their order of appearance. Protein chains fold spontaneously, and in the process form a 3-dimensional structure that affects their biologic functions.
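
To make the codon logic concrete, here is a minimal sketch that reads a DNA string three letters at a time. It uses only a small, hand-picked slice of the 64-codon table, so it is illustrative rather than complete.

```python
# A fragment of the standard genetic code: 61 codons specify amino acids,
# 3 serve as "stop" signals.
CODON_TABLE = {
    "ATG": "Met",   # methionine - also the usual "start" signal
    "GAG": "Glu",   # glutamic acid
    "GTG": "Val",   # valine
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):            # read in 3-letter codons
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":                   # stop codons end the chain
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGAGTAA"))   # ['Met', 'Glu']
```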

A mistake in a single letter of a codon can result in a mistaken message or “missense.” In 2018, Alphabet (Google’s parent company) released AlphaFold, an artificial intelligence system able to predict protein structure from genetic sequence databases, with the promise of accelerating drug discovery. Five years later, the company released AlphaMissense, which mines AlphaFold databases to learn this new “protein language,” much as the large language model (LLM) product ChatGPT learned human language. The ultimate goal: to predict where “disease-causing mutations are likely to occur.”

A work in progress, AlphaMissense has already created a catalogue of possible human missense mutations, declaring 57% to have no harmful effect, and 32% possibly linked to (still to be determined) human pathology. The company has open-sourced much of its database, and hopes it will accelerate the “analyses of the effects of DNA mutations and…the research into rare diseases.”

The numbers are not small. Believe it or not, AI says the 46-chromosome human genome theoretically harbors 71 million possible missense events waiting to happen. Up to now, they’ve identified only 4 million. For humans today, the average genome includes only 9000 of these mistakes, most of which have no bearing on life or limb.

But occasionally they do. Take for example Sickle Cell Anemia. The painful and life-limiting condition is the result of a single codon mistake (GTG instead of GAG) on the nucleotide chain coded to create the protein hemoglobin. That tiny error causes the 6th amino acid in the evolving hemoglobin chain, glutamic acid, to be substituted with the amino acid valine. Knowing this, investigators have now used the gene-editing tool CRISPR (a winner of the Nobel Prize in Chemistry in 2020) to correct the mistake through autologous stem cell therapy.
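
A toy illustration of that single-letter mistake: the two strands below are invented stand-ins (not the real beta-globin gene), built so that only the 6th codon differs – GAG versus GTG.

```python
CODONS = {"GAG": "Glu", "GTG": "Val"}

def first_missense(normal, variant):
    # Walk both strands codon by codon and report the first difference
    for i in range(0, min(len(normal), len(variant)) - 2, 3):
        a, b = normal[i:i + 3], variant[i:i + 3]
        if a != b:
            return i // 3 + 1, CODONS.get(a, "?"), CODONS.get(b, "?")
    return None

normal  = "AAA" * 5 + "GAG"   # toy sequence: codon 6 is GAG (glutamic acid)
variant = "AAA" * 5 + "GTG"   # same sequence with the sickle-cell GTG swap
print(first_missense(normal, variant))   # (6, 'Glu', 'Val')
```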

As Michigan State University physicist Stephen Hsu said, “The goal here is, you give me a change to a protein, and instead of predicting the protein shape, I tell you: Is this bad for the human that has it? Most of these flips, we just have no idea whether they cause sickness.”

Patrick Malone, a physician researcher at KdT ventures, sees AI on the march. He says, this is “an example of one of the most important recent methodological developments in AI. The concept is that the fine-tuned AI is able to leverage prior learning. The pre-training framework is especially useful in computational biology, where we are often limited by access to data at sufficient scale.”

AlphaMissense creators believe their predictions may:

“Illuminate the molecular effects of variants on protein function.”

“Contribute to the identification of pathogenic missense mutations and previously unknown disease-causing genes.”

“Increase the diagnostic yield of rare genetic diseases.”

And of course, this cautionary note: The growing capacity to define and create life carries with it the potential to alter life. Which is to say, what we create will eventually change who we are, and how we behave toward each other.

Mike Magee MD is a Medical Historian and a regular THCB contributor. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex (Grove/2020)

Can Generative AI Improve Health Care Relationships? https://thehealthcareblog.com/blog/2024/01/30/can-generative-ai-improve-health-care-relationships/ Tue, 30 Jan 2024 17:20:36 +0000

By MIKE MAGEE

“What exactly does it mean to augment clinical judgement…?”

That’s the question that Stanford Law professor Michelle Mello asked in the second paragraph of a May 2023 article in JAMA exploring the medical legal boundaries of large language model (LLM) generative AI.

This cogent question triggered unease among the nation’s academic and clinical medical leaders who live in constant fear of being financially (and more important, psychically) assaulted for harming patients who have entrusted themselves to their care.

That prescient article came out just one month before news leaked about a revolutionary new generative AI offering from Google called Gemini. And that lit a fire.

Mark Minevich, a “highly regarded and trusted Digital Cognitive Strategist,” writing in a December issue of Forbes, was knee-deep in the issue: “Hailed as a potential game-changer across industries, Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on, yet goes far beyond, predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically.”

Health professionals have been negotiating this space (information exchange with their patients) for roughly a half century now. Health consumerism emerged as a force in the late seventies. Within a decade, the patient-physician relationship was rapidly evolving, not just in the United States, but across most democratic societies.

That previous “doctor says – patient does” relationship moved rapidly toward a mutual partnership fueled by health information empowerment. The best patient was now an educated patient. Paternalism must give way to partnership. Teams over individuals, and mutual decision making. Emancipation led to empowerment, which meant information engagement.

In the early days of information exchange, patients literally would appear with clippings from magazines and newspapers (and occasionally the National Enquirer) and present them to their doctors with the open ended question, “What do you think of this?”

But by 2006, when I presented a mega trend analysis to the AMA President’s Forum, the transformative power of the Internet, a globally distributed information system with extraordinary reach and penetration armed now with the capacity to encourage and facilitate personalized research, was fully evident.

Coincident with these new emerging technologies, long hospital length of stays (and with them in-house specialty consults with chart summary reports) were now infrequently-used methods of medical staff continuous education. Instead, “reputable clinical practice guidelines represented evidence-based practice” and these were incorporated into a vast array of “physician-assist” products making smart phones indispensable to the day-to-day provision of care.

At the same time, a several-decade struggle to define policy around patient privacy and fund the development of medical records ensued, eventually spawning bureaucratic HIPAA regulations in its wake.

The emergence of generative AI, and new products like Gemini, whose endpoints are remarkably unclear and disputed even among the specialized coding engineers who are unleashing the force, have created a reality where (at best) health professionals are struggling just to keep up with their most motivated (and often most complexly ill) patients. Needless to say, the Covid-based health crisis and the human isolation it provoked have only made matters worse.

Like clinical practice guidelines, ChatGPT is already finding its “day in court.” Lawyers for both the prosecution and defense will ask “whether a reasonable physician would have followed (or departed from) the guideline in the circumstances, and about the reliability of the guideline” – whether it exists on paper or smart phone, and whether generated by ChatGPT or Gemini.

Large language models (LLMs), like humans, do make mistakes. These factually incorrect offerings have charmingly been labeled “hallucinations.” But in reality, for health professionals they can feel like an “LSD trip gone bad.” This is because the information is derived from a range of opaque sources, currently non-transparent, with high variability in accuracy.

This is quite different from a physician-directed standard Google search where the professional is opening only trusted sources. Instead, Gemini might be equally weighing a NEJM source with the modern day version of the National Enquirer. Generative AI outputs also have been shown to vary depending on the day and the syntax of the language inquiry.

Supporters of these new technologic applications admit that these tools are currently problematic but expect machine-driven improvement in generative AI to be rapid. They also have the ability to be tailored for individual patients in decision-support and diagnostic settings, and offer real time treatment advice. Finally, they self-update information in real time, eliminating the troubling lags that accompanied original treatment guidelines.

One thing that is certain is that the field is attracting outsized funding. Experts like Mello predict that specialized applications will flourish. As she writes, “The problem of nontransparent and indiscriminate information sourcing is tractable, and market innovations are already emerging as companies develop LLM products specifically for clinical settings. These models focus on narrower tasks than systems like ChatGPT, making validation easier to perform. Specialized systems can vet LLM outputs against source articles for hallucination, train on electronic health records, or integrate traditional elements of clinical decision support software.”
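
As a crude sketch of the kind of vetting against source articles that Mello describes, the toy function below flags generated sentences that share almost no words with a trusted source. Real clinical systems would rely on far more sophisticated retrieval and verification; this only illustrates the idea, and the example sentences are invented.

```python
def flag_unsupported(generated_sentences, source_text, min_overlap=3):
    source_words = set(source_text.lower().split())
    flagged = []
    for sentence in generated_sentences:
        overlap = len(set(sentence.lower().split()) & source_words)
        if overlap < min_overlap:          # too little grounding in the source
            flagged.append(sentence)       # candidate "hallucination"
    return flagged

source = "Metformin is a first line therapy for type 2 diabetes in most adults"
output = [
    "Metformin is a first line therapy for type 2 diabetes",
    "It also cures seasonal allergies in children",
]
print(flag_unsupported(output, source))   # flags the second, unsupported claim
```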

One serious question remains. In the six-country study I conducted in 2002 (which has yet to be repeated), patients and physicians agreed that the patient-physician relationship was three things – compassion, understanding, and partnership. LLM generative AI products would clearly appear to have a role in informing the last two components. What their impact will be on compassion, which has generally been associated with face to face and flesh to flesh contact, remains to be seen.

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex (Grove/2020).

25th Amendment Still Not the Right Response to a Mentally Ill Trump https://thehealthcareblog.com/blog/2024/01/08/25th-amendment-still-not-the-right-response-to-a-mentally-ill-trump/ Mon, 08 Jan 2024 08:48:00 +0000

By MIKE MAGEE

On May 16, 2017, New York Times conservative columnist Ross Douthat wrote “The 25th Amendment Solution for Removing Trump.”

That column was the starting point for a Spring course I taught on the 25th Amendment at the President’s College in Hartford, CT. I will not summarize the entire course here, but would like to emphasize four points:

  1. The American public was adequately warned (now 7 years ago) of the risk that Trump represented to our nation and our democracy.
  2. Douthat’s piece triggered a journalistic debate which I summarize below with four slides drawn from my lectures.
  3. Had Pence and the cabinet chosen to activate the 25th Amendment, as it is written, Trump would have had the right to appeal “his inability”, forcing the Congress to decide whether there was cause to remove the President.
  4. Judging from the later impeachment of Trump in the House, but failure to convict in the Senate, it is unlikely a courageous Pence and Cabinet would have been backed by their own party.

Let’s look at four archived slides from the 2017 lecture, and then discuss our current options in the case of 2024 Trump against Democracy. 

Slide 1. Ross Douthat

Slide 2. Jamal Greene (in response)

Slide 3. Dahlia Lithwick (in response in SLATE)

Slide 4. The 25th Amendment

In 2017, Scott Bomboy, editor-in-chief of the National Constitution Center, wrote:

“Section 4 is the most controversial part of the 25th Amendment: It allows the Vice President and either the Cabinet, or a body approved ‘by law’ formed by Congress, to jointly agree that ‘the President is unable to discharge the powers and duties of his office.’ This clause was designed to deal with a situation where an incapacitated President couldn’t tell Congress that the Vice President needed to act as President.”

“It also allows the President to protest such a decision, and for two-thirds of Congress to decide in the end if the President is unable to serve due to a condition perceived by the Vice President, and either the Cabinet or a body approved by Congress. So the Cabinet, on its own, can’t block a President from using his or her powers if the President objects in writing. Congress would settle that dispute and the Vice President is the key actor in the process.” What might have been (but was not) would have played out this way according to Constitutional scholars:

“… scholars Brian C. Kalt and David Pozen explain the problematic process if the Vice President and the Cabinet agree the President can’t serve.”

  1. “If this group declares a President ‘unable to discharge the powers and duties of his office,’ the Vice President immediately becomes Acting President.
  2. If and when the President pronounces himself able, the deciding group has four days to disagree.
  3. If it does not, the President retakes his powers.
  4. But if it does, the Vice President keeps control while Congress quickly meets and makes a decision…
  5. The Vice President continues acting as President only if two-thirds majorities of both chambers agree that the President is unable to serve.”

Had our leaders followed Ross Douthat’s advice seven years ago, it is highly unlikely that a 2/3rds majority of both chambers of Congress would have had their back. Instead, they went for Impeachment and failed, as Republicans chose rather to let voters decide. And they did, in 2020.

Few likely envisioned that a mentally deranged (now former) President Trump would launch a January 6th insurrection, embolden white nationalist militias across the nation, and follow through on threats to run for and win a 2nd term in 2024 – intending then to free his followers from jail and fill their cells with those who attempted to hold him accountable for his historic misdeeds.

The 25th Amendment is no more a solution today than it was in 2017. Instead, citizens loyal to our form of government rely in 2024 on two protective backstops:

  1. Our third pillar of government – the courts (most especially the Supreme Court).
  2. The voter, whose second day of reckoning fast approaches.

Some believe we are once again engaged in a great Civil War. In its summary of the Gettysburg Address, National Geographic states that “Despite (or perhaps because of) its brevity, since (Abraham Lincoln’s) speech was delivered, it has come to be recognized as one of the most powerful statements in the English language and, in fact, one of the most important expressions of freedom and liberty in any language.”

The last paragraph of that two-minute speech, delivered now 160 years and two months ago, reminds us that Americans died on “the battlefield” on January 6, 2021 defending our democratic government, and that Lincoln’s words are today more relevant than ever.

As described by historians, Lincoln made it clear that the stakes could not have been higher, well before the Dobbs decision and the appropriation of Hitler’s words by Trump. “Lincoln tied the current struggle to the days of the signing of the Declaration of Independence, speaking of the principles that the nation was conceived in: liberty and the proposition that all men are created equal. Moreover, he tied both to the abolition of slavery—a new birth of freedom—and the maintenance of representative government.”

As they were spoken, November 19, 1863, here are Lincoln’s final words, ones that deserve a most careful reading: “It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they here gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom, and that government of the people, by the people, for the people, shall not perish from the earth.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex.

]]>
2024 Prediction: Society Will Arrive at an Inflection Point in AI Advancement https://thehealthcareblog.com/blog/2023/12/27/2024-prediction-society-will-arrive-at-an-inflection-point-in-ai-advancement/ Wed, 27 Dec 2023 05:26:00 +0000 https://thehealthcareblog.com/?p=107752 Continue reading...]]> By MIKE MAGEE

For my parents, March 1965 was a banner month. First, that was the month NASA launched its first crewed Gemini mission, part of a program that unleashed “transformative capabilities and cutting-edge technologies that paved the way for not only Apollo, but the achievements of the space shuttle, building the International Space Station and setting the stage for human exploration of Mars.” It also was the last month that either of them took a puff of their favored cigarette brand – L&M’s.

They are long gone, but the words “Gemini” and the L’s and the M’s have taken on new meaning and relevance now six decades later.

The name Gemini reemerged with great fanfare on December 6, 2023, when Google CEO Sundar Pichai introduced “Gemini: our largest and most capable AI model.” Embedded in the announcement were the L’s and the M’s, as we see here: “From natural image, audio and video understanding to mathematical reasoning, Gemini’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.”

Google’s announcement also offered a head-to-head comparison with GPT-4 (Generative Pretrained Transformer 4). GPT-4 is the product of OpenAI, an initiative founded as a non-profit, and was released on March 14, 2023. Microsoft’s AI search engine, Bing, helpfully informs us that “OpenAI is a research organization that aims to create artificial general intelligence (AGI) that can benefit all of humanity…They have created models such as Generative Pretrained Transformers (GPT) which can understand and generate text or code, and DALL-E, which can generate and edit images given a text description.”

While “Bing” goes all the way back to a Steve Ballmer announcement on May 28, 2009, it was 14 years later, on February 7, 2023, that the company announced a major overhaul that, one month later, would allow Microsoft to broadcast that Bing (by leveraging an agreement with OpenAI) now had more than 100 million users.

Which brings us back to the other LLM (large language model) – GPT-4, which the Gemini announcement explores in a head-to-head comparison with its new offering. Google embraces text, image, video, and audio comparisons, and declares Gemini superior to GPT-4.

Mark Minevich, a “highly regarded and trusted Digital Cognitive Strategist,” writing this month in Forbes, seems to agree: “Google rocked the technology world with the unveiling of Gemini – an artificial intelligence system representing their most significant leap in AI capabilities. Hailed as a potential game-changer across industries, Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on yet goes far beyond predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically.”

Expect to hear the word “multimodality” repeatedly in 2024 and with emphasis.
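For readers who want a concrete sense of what “multimodality” means in practice, here is a minimal, purely illustrative Python sketch. The class names and prompts are hypothetical stand-ins, not any vendor’s actual API: a text-only LLM takes a single string, while a large multimodal model accepts a mixed sequence of text, image, and audio parts in one request.

```python
from dataclasses import dataclass
from typing import List, Union

# Hypothetical building blocks for a multimodal prompt (illustration only).
@dataclass
class TextPart:
    text: str

@dataclass
class ImagePart:
    path: str   # e.g., a chest X-ray or a chart

@dataclass
class AudioPart:
    path: str   # e.g., a dictated clinical note

# A text-only LLM prompt is just a string.
llm_prompt = "Summarize this radiology report in plain language."

# A large multimodal model (LMM) prompt mixes modalities in a single request.
lmm_prompt: List[Union[TextPart, ImagePart, AudioPart]] = [
    TextPart("Compare the dictated note with the image and flag any discrepancies."),
    ImagePart("chest_xray.png"),
    AudioPart("dictated_note.wav"),
]

def describe(prompt) -> str:
    """Report how many parts of each modality a prompt contains."""
    if isinstance(prompt, str):
        return "text-only prompt (1 part)"
    counts: dict = {}
    for part in prompt:
        name = type(part).__name__
        counts[name] = counts.get(name, 0) + 1
    return ", ".join(f"{n} x {name}" for name, n in counts.items())

print(describe(llm_prompt))   # text-only prompt (1 part)
print(describe(lmm_prompt))   # 1 x TextPart, 1 x ImagePart, 1 x AudioPart
```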

But academics will be quick to remind us that the origins can be traced all the way back to 1952 scholarly debates about “discourse analysis,” at a time when my Mom and Dad were still puffing on their L&M’s. Language and communication experts at the time recognized “a major shift from analyzing language, or mono-mode, to dealing with multi-mode meaning making practices such as: music, body language, facial expressions, images, architecture, and a great variety of communicative modes.”

Minevich believes that “With Gemini’s launch, society has arrived at an inflection point with AI advancement.” The powerhouse consulting group BCG (Boston Consulting Group) definitely agrees. They’ve upgraded their L&M’s with a new acronym, LMM, standing for “large multimodal model.” Leonid Zhukov, Ph.D., director of the BCG Global AI Institute, believes “LMMs have the potential to become the brains of autonomous agents—which don’t just sense but also act on their environment—in the next 3 to 5 years. This could pave the way for fully automated workflows.”

BCG predicts an explosion of activity among its corporate clients focused on labor productivity, personalized customer experiences, and accelerated (especially) scientific R&D. But they also see high volume consumer engagement generating content, new ideas, efficiency gains, and tailored personal experiences.

This seems to be BCG talk for “You ain’t seen nothing yet.” In 2024, they say all eyes are on “autonomous agents.” As they describe what’s coming next: “Autonomous agents are, in effect, dynamic systems that can both sense and act on their environment. In other words, with stand-alone LLMs, you have access to a powerful brain; autonomous agents add arms and legs.”
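BCG’s “brain plus arms and legs” description maps onto a simple software pattern: a language model chooses the next action, a tool executes that action against the environment, and the observation is fed back into the next prompt. The sketch below is a conceptual illustration under that assumption only; call_llm() and the single tool are hypothetical stand-ins, not any real product’s API.

```python
# Minimal sense-act loop for an "autonomous agent" built around an LLM.
# call_llm() is a hypothetical placeholder for whatever model endpoint is used.

def call_llm(prompt: str) -> str:
    """Placeholder 'brain': a real system would call a hosted LLM here."""
    # Hard-coded decision so the sketch runs without any external service.
    return "ACTION: check_lab_queue" if "start" in prompt else "ACTION: stop"

# The "arms and legs": tools that sense or act on the environment.
def check_lab_queue() -> str:
    return "3 pending lab results"

TOOLS = {"check_lab_queue": check_lab_queue}

def run_agent(goal: str, max_steps: int = 5) -> None:
    context = f"start goal={goal}"
    for step in range(max_steps):
        decision = call_llm(context)                      # brain: pick an action
        action = decision.removeprefix("ACTION: ").strip()
        if action == "stop" or action not in TOOLS:
            print(f"step {step}: stopping")
            break
        observation = TOOLS[action]()                     # arms/legs: act, then sense
        print(f"step {step}: {action} -> {observation}")
        context = f"goal={goal}; last observation={observation}"

run_agent("triage today's lab results")
```

Whether a human stays in that loop, and at which step, is precisely the “responsible AI” question BCG raises next.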

This kind of talk is making a whole bunch of people nervous. Most have already heard Elon Musk’s famous 2018 quote, “Mark my words, AI is far more dangerous than nukes. I am really quite close to the cutting edge in AI, and it scares the hell out of me.” BCG acknowledges as much, saying, “Using AI, which generates as much hope as it does horror, therefore poses a conundrum for business… Maintaining human control is central to responsible AI; the risks of AI failures are greatest when timely human intervention isn’t possible. It also demands tempering business performance with safety, security, and fairness… scientists usually focus on the technical challenge of building goodness and fairness into AI, which, logically, is impossible to accomplish unless all humans are good and fair.”

Expect in 2024 to see once again the worn-out phrase “Three Pillars.” This time it will be attached to LMM AI, and it will advocate for three forms of “license” to operate:

  1. Legal license – “regulatory permits and statutory obligations.”
  2. Economic license – ROI to shareholders and executives.
  3. Social license – a social contract delivering transparency, equity and justice to society.

BCG suggests that trust will be the core challenge, and that technology is tricky. We’ve been there before. The 1964 Surgeon General’s report knocked the socks off tobacco company execs who thought high-tech filters would shield them from liability. But the government report burst that bubble by stating, “Cigarette smoking is a health hazard of sufficient importance in the United States to warrant appropriate remedial action.” Then came Gemini 6A’s first launch attempt on December 12, 1965. It was aborted when its fuel igniter failed.

Generative AI-driven LMMs will “likely be transformative,” but they will clearly have their ups and downs as well. As BCG cautions, “Trust is critical for social acceptance, especially in cases where AI can act independent of human supervision and have an impact on human lives.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex.

]]>
A Speech For The Ages – 83 Years Ago This Christmas https://thehealthcareblog.com/blog/2023/12/22/a-speech-for-the-ages-83-years-ago-this-christmas/ Fri, 22 Dec 2023 07:27:00 +0000 https://thehealthcareblog.com/?p=107746 Continue reading...]]>

By MIKE MAGEE

On the evening of December 29, 1940, with election to his third term as President secured, FDR delivered these words as part of his sixteenth “Fireside Chat”: “There can be no appeasement with ruthlessness…No man can tame a tiger into a kitten by stroking it.”

Millions of Americans and millions of Britons were tuned in that evening, as President Roosevelt made clear where he stood while carefully avoiding over-stepping his authority in a nation still in the grips of a combative and isolationist opposition party.

That very evening, the German Luftwaffe launched its largest raid yet on the financial district of London. Its “fire starter” group, KGr 100, initiated the attack with incendiary bombs that triggered fifteen hundred fires and a conflagration that some labeled the Second Great Fire of London. Less than a year later, on the eve of another Christmas, we would be drawn into the war with the bombing of Pearl Harbor.

Now, 83 Christmases later, with warnings of “poisoning the blood of our people,” we find ourselves contending with our own Hitler here at home. Trump is busy igniting white supremacist fires, using the same vocabulary and challenging the boundaries of decency, safety, and civility. What has the rest of the civilized world learned in the meantime?

First, appeasement does not work. It expands the vulnerability of a majority suffering the “tyranny of the minority.”

Second, the radicalized minority will utilize any weapon available, without constraint, to maintain and expand their power.

Third, the battle to save and preserve democracy in these modern times is never fully won. We remain in the early years of this deadly serious conflict, awakened from a self-induced slumber on January 6, 2021.

Hitler was no more an “evil genius” than is Trump. But both exploited historic and cultural biases and grievances, leveraging and magnifying them with deliberate lies and media manipulation. Cultures made sick by racism, systemic inequality, hopelessness, patriarchy, and violence clearly can be harnessed for great harm. But it doesn’t take a “genius.” Churchill never called Hitler a “genius.” Most often he referred to him only as “that bad man.”

The spectacle of Kevin McCarthy’s emergence as Speaker of the House, followed by Mike Johnson’s, and the contrasting address by House Minority Leader Hakeem Jeffries as he handed over the gavel, represent just one more skirmish in this “War for Democracy.”

If our goal is a “healthier” America – one marked by compassion, understanding, and partnership; one where fear and worry are counteracted by touch and comfort; one where linkages between individuals, families, communities, and societies are constructed to last – all signals confirm that the time is now to fight with vigor.

As Churchill vowed on his first day as Prime Minister, “I have nothing to offer but blood, toil, tears, and sweat.” At about the same time, FDR offered this encouragement, “We have no excuse for defeatism. We have every good reason for hope — hope for peace, yes, and hope for the defense of our civilization and for the building of a better civilization in the future.”

The re-emergence of white supremacists and nationalists, theocratic and patriarchal censorship, and especially post-Dobbs attacks on women’s freedom and autonomy are real and substantial threats to our form of government. They are indeed minority views, but no more so than those of the minority in 1940 that allowed a small group of “bad men” to turn a relatively small nation of 70 million people into a force that very nearly conquered the world.

Following the December 7, 1941 attack on Pearl Harbor, Churchill packed his bags and headed directly to a British battleship for the 10-day voyage in rough seas (filled with German U-boats) to Norfolk, VA. Hours after arrival he was aboard a U.S. Navy plane for the 140-mile trip to the White House, which he entered in a double-breasted peacoat and a naval cap, chomping on a cigar. He would remain the guest of the Roosevelts for the next three weeks, heading home on January 14, 1942.

On Christmas Eve, he joined the President on the South Portico of the White House for the lighting of the White House Christmas tree. Here is what Churchill said to the President’s guests and 15,000 onlookers: “Let the children have their night of fun and laughter. Let the gifts of Father Christmas delight their play. Let us share to the full in their unstinted pleasures before we turn again to the stern tasks and formidable year that lie before us. Resolve! – that by our sacrifice and daring, these same children shall not be robbed of their inheritance or denied their right to live in a free and decent world.”

He spent the following day working on a speech to be delivered to a Joint Meeting of Congress on December 26, 1941, the kind of pep talk all good and decent people of America could benefit from today. As we ourselves have learned since January 6, 2021, Churchill was right to warn us against complacency and to caution that “many disappointments and unpleasant surprises await us.”

He was clear and concise when he warned that day that Hitler and his Nazis (whom Trump so openly admires) possessed powers that “are enormous; they are bitter; they are ruthless.” But these “wicked men…know they will be called to terrible account…Now, we are the masters of our fate…The task which has been set is not above our strength. Its pangs and trials are not beyond our endurance.”

“Trump will be defeated,” he would say were he with us today. “You may be sure of that!” But we must be up to the task – brave, organized, and strategic. Now is the time, and as a Times of London editorial reminded readers in 1942, as Churchill set foot once again on home soil after his American visit, timing is everything. “His visit to the United States has marked a turning-point of the war. No praise can be too high for the far-sightedness and promptness of the decision to make it.”

Mike Magee MD is a Medical Historian, a regular THCB contributor, and the author of CODE BLUE: Inside America’s Medical Industrial Complex.

]]>