I’m an ER doctor: Here’s what AI startups get wrong about “ChatGPT for telehealth”

Inflect Health
4 min read · Oct 18, 2023


by Josh Tamayo-Sarver, MD, PhD

The explosion of telehealth might seem like a huge opportunity for AI. But it is also a huge risk. (Originally published in Fast Company.)

As a practicing ER doctor, I’m quite excited by the potential of LLMs like ChatGPT for health care. I already use it as a communication tool for some of my patients, and sometimes feed ChatGPT detailed prompts of patient symptoms to generate possible diagnoses I haven’t yet considered.
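
For readers curious what that looks like in practice, here is a minimal sketch of such a brainstorming prompt using the OpenAI Python client. The model name, system prompt, and symptom summary are illustrative assumptions rather than my actual workflow, and any real use would involve only de-identified information.

```python
# Minimal sketch (illustrative only): asking an LLM to brainstorm a
# differential diagnosis from a de-identified symptom summary.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Fabricated, de-identified symptom summary
symptoms = (
    "54-year-old with sudden-onset severe back pain and light-headedness; "
    "blood pressure 90/60, heart rate 110."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting a physician. List plausible diagnoses "
                "the physician may not have considered, most dangerous "
                "first. This is brainstorming support, not medical advice."
            ),
        },
        {"role": "user", "content": symptoms},
    ],
)

print(response.choices[0].message.content)
```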

As a part-time advisor to healthtech startups, however, I’m increasingly concerned that many in Silicon Valley are misjudging the capabilities of LLMs as they relate to the practice of medicine. Especially telehealth.

The omnibus bill President Biden signed in December 2022 extends telehealth flexibilities on patient appointments and insurance coverage through December 31, 2024. Assuming insurers continue following suit until then, we have about a year to deliver on the promise of telehealth.

This might seem like a glowing opportunity for a program like ChatGPT. Thanks to the pandemic, virtual care is now standard practice, with roughly 1 in 4 patient appointments conducted remotely. So why not integrate an LLM into the remote treatment process, acting as a virtual nurse practitioner that communicates with patients through text? Indeed, a number of startups are already roughly based on that use case.

Having treated thousands of people remotely since the COVID era began, I can explain in vivid detail why this is a bad idea.

All too often, I’ve cared for patients who treat their telehealth appointment like an Uber Eats delivery, expecting a specific treatment with immediate, transactional results. And it’s only getting worse: patients are actively pushing for more virtual care options, and health systems are responding.

It’s completely understandable that patients prefer a remote appointment. However, its extreme convenience for them, if not for us physicians and our staff, has also contributed to a rampant devaluation of the doctor-patient relationship, if not outright abuse of it.

To cite just a few recent telehealth experiences:

  • I’ve treated several telehealth patients who were visibly having a heart attack during the call (sweating, struggling to breathe, complaining of chest pain) but still hoped I could somehow care for them remotely.
  • A parent once set up a telehealth appointment with me to examine her screaming 2-year-old with severe burns from pulling a pot of boiling water onto her head, and was irate when I activated 911 EMS on her behalf.
  • I can’t count the number of telehealth appointments where patients hope that our lack of an established relationship will enable them to convince me, for instance, that their dog really did eat all their fentanyl patches and they just need a refill.

Now imagine the abuse that’s sure to follow, were a program like ChatGPT injected into patient scenarios like these.

The overwhelming majority of my patients make telehealth appointments to care for genuine needs. The core problem is that when I see patients only remotely, it’s easy to address their stated need but often difficult to identify the root of the problem and move from transaction to healing.

Many of the LLM-related startups that pitch me miss this distinction. If an app uses a GPT-powered chatbot to, say, order a pizza, the consumer’s request is unambiguous: “I want a large pepperoni pizza with green pepper on one half” means exactly that.

In contrast, if a telehealth patient tells me they need pain medicine for their sudden-onset back pain because it is making them light-headed, they may actually need emergent vascular surgery for a ruptured abdominal aortic aneurysm. Or they could need a urologist for their kidney stone. Or they could need an oncologist for their pathological spinal fracture from cancer. Or they could need counseling for their opioid addiction. Or they could just need pain medicine.

Beyond the challenges of inserting LLMs into the healthcare relationship, I worry that the transactional nature of telehealth is already eroding physicians’ fealty to the Hippocratic Oath and its promise to treat the whole patient, not just the specific ailment. Will this millennia-old Oath survive in a future where a human doctor’s job is mainly double-checking diagnoses delivered by AI?

We are racing against time to get these questions right — again, billing flexibilities for telehealth visits are set to expire at the end of 2024.

I’m confident that’s enough time to iterate on the best ways to integrate AI into the process. One very promising possibility is an LLM-powered chatbot that interacts with patients before their appointment begins. Properly designed and trained, it could be an invaluable time-saver, collecting pertinent patient information while flagging potential concerns for the clinician.
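
As a rough illustration of what “flagging potential concerns” might look like in code, here is a toy Python sketch of a pre-visit intake screen. The keyword list, field names, and escalation rule are my own assumptions for illustration; a real system would need clinically validated criteria and a human reviewing every intake.

```python
# Toy sketch (illustrative assumptions throughout): collect a pre-visit
# intake and flag red-flag symptoms for human review before the appointment.

RED_FLAGS = {
    "chest pain", "trouble breathing", "shortness of breath",
    "severe burn", "unresponsive", "suicidal",
}

def screen_intake(chief_complaint: str, history: str) -> dict:
    """Bundle the intake answers with an escalation flag for clinicians."""
    text = f"{chief_complaint} {history}".lower()
    flags = sorted(f for f in RED_FLAGS if f in text)
    return {
        "chief_complaint": chief_complaint,
        "history": history,
        "red_flags": flags,
        # Anything flagged is routed to a human immediately, never handled
        # by the bot itself.
        "escalate_to_clinician": bool(flags),
    }

intake = screen_intake(
    chief_complaint="Back pain since this morning",
    history="Feeling light-headed when I stand up",
)
print(intake["red_flags"], intake["escalate_to_clinician"])
```

The design point is that the bot only gathers and routes information; every treatment decision stays with a human clinician.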

If we can prove LLM-powered telehealth is better for patients and more cost-effective for providers, we are likely to see calls in Congress to extend those telehealth billing flexibilities.

For the sake of my fellow physicians, I dearly hope that happens. After three years of trying to discern what our patients really need through blurry video calls, we desperately need a better way.

Dr. Josh Tamayo-Sarver works clinically in the emergency department of his local community and is a vice president of innovation at Inflect Health, an innovation incubator for health tech. He holds five healthtech-related U.S. patents, including two related to machine learning models.
