Many say AI, including ChatGPT, isn’t good enough to be your doctor. We need a moonshot to change that.

[Header image: “Chatbot in Healthcare,” from https://free-vectors.net/healthcare/chatbot-in-healthcare, licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)]

by Jonathan A. Handler, MD, FACEP, FAMIA

Too Many Lack Access to Healthcare

I know of someone, let’s call him “Xavier,” who had been in the hospital after a fall. There, it was discovered he had congestive heart failure, so the care team ordered an echocardiogram (“echo,” or heart ultrasound). It showed his heart function had been cut in half since his last echo, which had been normal. Sometime in the few years prior, he’d suffered heart damage, perhaps from a heart attack. They did a stress test, and it was abnormal. They recommended outpatient follow-up with a cardiologist in two weeks, because the recent fall meant they couldn’t immediately perform heart procedures that would require blood thinners. A health system in the area said their next available appointment for a cardiologist to see a new patient was in…

6 months!

Yup, 6 months! A few days later, Xavier had some chest pain. Abnormal stress test + bad heart + chest pain = go to ER. At the ER, they said they’d give Xavier an “urgent” cardiology follow-up referral. How urgent is “urgent”? Two more weeks! Yes, that’s much better than 6 months, but in what universe is 2 weeks an “urgent” follow-up? Especially for someone with bad heart + abnormal stress test?

These types of experiences repeat themselves over and over for so many people. Research says the national average wait time for a new patient to see a physician in 2022 was 26 days, up from 21 days in 2004. If you live in Portland, Oregon, it’s over 45 days. The need for outpatient healthcare services has outstripped supply so badly that outpatient healthcare seems to have utterly failed for many.

Primary care providers (PCPs) agree. Of those surveyed, 61% say their field is “crumbling” (up substantially from 46% last year), 80% say their field is undersized to meet patient needs, and only 19% say their practice is fully staffed.

They Say AI Won’t Replace Doctors

Many are excited about the potential of the ChatGPT AI for use in healthcare. However, an article from January 2023 implied that ChatGPT can’t replace doctors (and thereby solve the doctor supply problem), in part because we will always depend on human clinicians for their empathy. It says, “But will ChatGPT have a charming bedside manner (possibly a doctor’s greatest asset)? Some things just can’t be taught.”

Is empathy always a doctor’s greatest asset? I know of someone, let’s call him “Zachary,” who had been in the hospital for pneumonia. The hospitalist said he should get a chest x-ray 2 weeks after discharge to make sure the pneumonia was totally gone. Several days after discharge, Zachary developed signs of a possible urinary tract infection. Luckily, he has a “concierge” doctor, meaning Zachary pays extra, out of pocket, for extra service. That doctor has provided thoughtful care and empathetic responses at some important times, but other responses have seemed less charming. A family member, who is also a clinician, called and asked about a urinalysis and the follow-up chest x-ray. The concierge doctor was annoyed by the request. Then, weeks after hip surgery, Zachary developed increased swelling of his leg. The family member clinician called the concierge doctor and asked what he thought. The concierge doctor’s response? “So, now you’re going to want an ultrasound to make sure he doesn’t have a blood clot? I told you before, that’s not how this works. I’m not a vending machine.” Whoops! Maybe not the most charming and empathetic response. He ordered the test anyway, and fortunately, it showed no clot.

I hear stories like this all the time. Many clinicians are overworked and fielding demands from stressed-out patients and family members. As a result, the clinician does not always act as their best self, and the patient does not always get a thoughtful and empathetic response. Meanwhile, a study of ChatGPT showed that its responses to questions on a Reddit forum were typically not only higher in quality than doctors’ responses, but also much more empathetic. Apparently, a quality response and a charming bedside manner can be taught to an AI, and the AI may be able to deliver them more consistently in some cases.

Despite its successes, I keep reading article after article after article after article suggesting that ChatGPT isn’t good enough to replace doctors. I think they are trying to reassure people, but those articles scare me. I mean, if those articles are right (and I think they are — here’s one study to support this), then how are we going to solve our healthcare access issues?

Addressing the Supply and Demand Mismatch

Some say we need to reduce our demand — that our society’s expectations of healthcare are unreasonable and we can’t afford to meet them. Is that true? A recent study showed that simply providing guideline-recommended primary care to a typical number of patients requires a primary care provider to do over 26 hours per day of work. Since there aren’t 26 hours per day, we can only conclude that not all patients get all guideline-recommended care. The Washington Post cites other work suggesting that 20% of patients with serious diagnoses may be initially misdiagnosed, and that most patients will have at least one incorrect or late diagnosis during their lifetime. It seems hard to argue that the problem is patients demanding too much when the data suggests that healthcare struggles with the basics.

Others say that AI will make doctors much more efficient, significantly alleviating or even solving the problem. I call baloney! Even if speech recognition and generative AI completely eliminate all documentation and inbox-management overhead, that would still leave PCPs with 23.5 hours a day of work just to provide guideline-recommended care to their current load of patients (based on the numbers in that article).

Others say documentation might comprise as much as 35% of a physician’s workload. Even if we can completely eliminate 35% of the work, that still leaves 17 hours a day of work just to do the basics for a typical PCP’s patient panel. How is someone working 17 hours a day going to see even more patients?

The article says that we can reduce a PCP’s workload to 9.3 hours a day by radically overhauling how we provide and pay for primary care using a team-based approach. That means dietitians, nurses, social workers, pharmacists, and other folks picking up most of a PCP’s work. If the AI then handles all of the remaining 2.6 hours of documentation and inbox management, the doctor would be left with 6.7 hours a day of work. With all that in place, a PCP working 8 hours a day would have 16% of the day available for expanding patient capacity. That would be a big improvement, but it requires huge and probably very expensive changes in healthcare processes, practice, provider supply, reimbursement, regulation, and more. If we go forward with it, it will likely take a long time to implement. And… then we still need to figure out how to deal with an inadequate supply of specialists.
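For anyone who wants to check the math, here’s a quick back-of-the-envelope sketch in Python that recomputes the scenarios above. It uses only the rounded figures quoted in this post; the variable names and the 8-hour workday are my own assumptions, and the underlying study’s unrounded numbers may differ slightly.

```python
# Back-of-the-envelope check of the workload arithmetic above.
# All figures are the rounded numbers quoted in this post, not the
# underlying study's raw data.

WORKDAY = 8.0  # assumed standard workday, in hours

# Scenario 1: solo PCP; AI eliminates all documentation/inbox work.
solo_total = 26.0      # hours/day for all guideline-recommended care ("over 26")
solo_remaining = 23.5  # hours/day left after removing documentation/inbox
print(f"Implied documentation/inbox load: {solo_total - solo_remaining:.1f} h/day")

# Scenario 2: documentation is 35% of the workload and is fully eliminated.
print(f"Work left after cutting 35%: {solo_total * (1 - 0.35):.1f} h/day")  # ~17

# Scenario 3: team-based care, plus AI handling documentation/inbox.
team_total = 9.3      # hours/day for the PCP under team-based care
team_doc_inbox = 2.6  # hours/day of documentation/inbox under that model
team_remaining = team_total - team_doc_inbox  # 6.7 h/day
freed = WORKDAY - team_remaining              # 1.3 h/day freed up
print(f"PCP work remaining: {team_remaining:.1f} h/day")
print(f"Share of an 8-hour day freed for new patients: {freed / WORKDAY:.0%}")
```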

The overall shortages understate the problem, because the lack of supply in certain areas, such as rural areas, is even more severe and likely to worsen. For example, in 2030, non-metro areas are projected to have only 37% of the urologists needed to meet demand.

Maybe we can just train up more providers? Unfortunately, training providers takes many years, and more providers could mean more healthcare spending at a time when many say we already can’t afford our healthcare. Even if we can train enough providers, will they go to the places where shortages may be greatest, such as lower-income and rural communities?

These shortages are why so many have unacceptably long waits for care, and patients are suffering as a result.

We Need AI Doctors, and We Need Them Fast

We urgently need something better than “you can have reasonable access to care, quality of care, or doctors who aren’t burning out from overwork. Pick one of those three, and by the way, any of your choices will cost a fortune.” But that seems to be our situation. In fact, we need “all of the above at a fraction of the price.” Those with great access to great human doctors may not want or need AI doctors. Many others must suffer for months while waiting for their appointment with a human doctor. Those folks may wish to replace their human doctor with a great AI doctor who can see them right away. For that reason, it’s time to stop worrying about AI becoming good enough to replace doctors and start demanding AI that can.

To that end, I propose a government-sponsored, multi-disciplinary “moonshot” program to create AI doctors that are as good as, or better than, our human doctors for as much of our healthcare as possible. The cost of an AI doctor visit would be a small fraction of a human doctor visit. That would enable great care for all, including those who are underinsured, economically disadvantaged, geographically remote, or otherwise unable to readily access great care today. If truly successful, the AI doctors would be even better than our human doctors in both quality of care and empathy. Even some with easy access to great human doctors might choose the AI doctors in some cases. In addition to providing care, a great AI doctor could help assess the quality of care provided by human doctors at every visit, something our healthcare system doesn’t do consistently today.

To achieve this goal of creating AI doctors at least as good as human doctors, payors, regulators, healthcare systems, medical informaticists, scientists, clinicians, patients, and other stakeholders must all come together to design and build an expert AI physician solution. It will be difficult, expensive, and complex. That’s why we need a government-sponsored “moonshot” program to make it work. In the 1960s, we took a literal “moonshot” to win the “Space Race” and put the first humans on the Moon. In 2020, we began a pharmaceutical race to make a life-saving vaccine against a deadly virus causing a worldwide pandemic. Now, we are in a demographic and economic race, with an aging population that could overwhelm our health system and rapidly rising healthcare costs that could bankrupt the country. If we don’t take this “moonshot,” we may lose the race by simply failing to enter it. We got to the Moon. We got safe and effective COVID vaccines. I hope (and think) we can do this too. At least we should try.

All opinions expressed here are entirely those of the author(s) and do not necessarily represent the opinions or positions of their employers, affiliates, or anyone else. The author(s) reserve the right to change his/her/their minds at any time.
