I’m an ER doctor. Here’s how I’m already using ChatGPT to help treat patients.
There are many futuristic possibilities for how AI might transform healthcare. But it can help doctors do a better job in small, safe ways — right now.
by Josh Tamayo-Sarver, MD, PhD
With explosive interest in ChatGPT and other large language model programs, many AI evangelists are touting their use as a diagnostic tool in healthcare — or even broadly declaring them to be the future of medicine. As with many high-tech innovations, proclamations like this can often overshadow small but crucial applications that are immediately within our grasp.
I say that as a working ER doctor who is using ChatGPT to help treat patients right now.
But not for diagnosis. As I recently wrote and expanded on for Inflect’s podcast, ChatGPT works fairly well as a diagnostic assistant — but only if you feed it perfect information, and the actual patient has a classic presentation, which is rarely the case. (And you don’t mind a 50% success rate that often misses life-threatening conditions.)
A more immediate utility of ChatGPT in medicine was dramatized for me at around 3 a.m. during a recent night shift, while I was treating a 96-year-old woman with dementia who had difficulty breathing because of fluid in her lungs. Adding to this challenge, her three children (all in their seventies) were also in the ER room.
Understandably concerned, they hovered around my staff and me, to the point where their constant barrage of requests was actually slowing down treatment. To compound the problem, they were absolutely certain what their mother needed.
“Nurse, our mother needs IV fluids to treat her dehydration. Please hurry!” All three of them repeated variations of that request to us for over an hour.
Small problem: Administering IV fluids would have made her much worse or even killed her.
“I know that you are concerned about your mom,” I tried explaining to them. “But she cannot breathe right now because she has pulmonary edema, which is fluid in her lungs. If I hydrate her with IV fluids, it will make her pulmonary edema worse and she might die. Once we have the fluid out of her lungs and she’s breathing better, then we can worry about her being dehydrated.”
“But whenever she is sick, she just needs an IV because of dehydration,” the patient’s son insisted, adamant. “Why don’t you just give her some IV fluid? She will be better in no time.”
I tried to rephrase my explanation in several different ways, but judging by their blank expressions, none of them resonated. This is actually a common situation in the ER. People do not wake up planning on an emergency that brings them to me in the dead of night, and they are often in a decompensated emotional state.
To make matters worse, several other patients were in more immediate need of my attention.
Desperate for a solution, I went down the hall to my computer and fired up ChatGPT-4, typing in:
“Explain why you would not give IV fluids to someone with severe pulmonary edema and respiratory distress even though you might be concerned that the patient is dehydrated. Explain it in simple and compassionate terms so that a confused person who cares about their mother can understand.”
Within seconds, ChatGPT produced this:
I truly understand how much you care for your mother, and it’s natural to feel concerned about her well-being.
In cases of severe pulmonary edema and respiratory distress, it might seem like giving IV fluids would help with hydration, but it could actually make her condition worse.
When someone has pulmonary edema, their lungs are filled with excess fluid, which makes breathing difficult. If we were to give your mother IV fluids, it could potentially increase the fluid in her lungs, making it even harder for her to breathe and causing her more discomfort.
Our primary goal is to ensure your mother’s comfort and well-being. The medical team is dedicated to providing the best possible care for her, and they will closely monitor her hydration levels while using other treatments that are more appropriate for her specific condition.
Please know that your mother’s health and comfort are our top priorities, and we are here to support both of you during this difficult time. If you have any questions or concerns, don’t hesitate to reach out to the medical team.
We are all in this together, and we’ll do everything we can to help your mother recover.
I printed this response out and read it to the concerned family. As I recited ChatGPT’s words, their agitated expressions immediately melted into calm agreeability.
Even better, I handed the printed copy to the nurse. While the patient was waiting for an ICU bed, the family would frequently return to the nurse to ask about IV fluids (some members may have been struggling with memory impairment), and he’d read the script to them. It always received a warm reception.
Just as key, the family quietly stepped back to let my staff focus on their work. I could then devote my time to the other patients who were even sicker, and manage the chaos for the rest of the night shift.
Since this incident, I’ve taken to using ChatGPT to help empathically explain specific medical scenarios to patients and their loved ones. It’s become an invaluable resource for the frequent situations where my ER ward is too busy or short-staffed to explain complex medical diagnoses in a way that is accurate but easy to understand.
In doing so, I’ve come to realize that dealing with ChatGPT is like working with an incredibly brilliant, hard-working — and occasionally hungover — intern. That’s become my mental model for considering the usefulness of ChatGPT.
Now, for any potential application, I think, “Would a dedicated but occasionally hungover intern working on this make things easier for me and my staff — or would the work required to manage them end up being more effort than just doing it without their involvement?”
Seen from that perspective, ChatGPT or a hungover intern can still, for instance:
- Take down my patient’s history
- Create long-form written communication for patients and staff
- Explain highly technical information to patients simply with empathy and compassion
In each case, the output from the hungover intern/ChatGPT needs to be carefully checked before it’s used. But in these scenarios, reviewing existing work is usually much faster than starting from scratch. As for my actual human (and hopefully not hungover) interns, ChatGPT would then free them up to focus on the activity needed most: caring for patients.
As an advocate for expanding the use of AI in healthcare, I hope the industry focuses on more immediately achievable applications like this, rather than on long-term prospects in which artificial intelligence completely transforms medicine.
Many, if not most, of these futuristic scenarios overlook the practical challenges of implementation. HIPAA rules around patient privacy alone may take many years, or even decades, to satisfy before we could even contemplate using programs like ChatGPT directly in a clinical setting.
Indirect uses of ChatGPT, however, are often ready for implementation now, and are desperately needed — especially with physician burnout levels at an all-time high. Already constrained by the few minutes we are allotted to see each patient in person, we need solutions that expand and enrich the doctor-patient relationship.
These time constraints lead directly to a common complaint: “Doctor speak,” in which highly technical explanations for symptoms and diagnoses overwhelm or even confuse patients, rather than make them feel cared for. All too often, physicians and their staff are too overwhelmed themselves to explain a medical situation in patient, empathic, plain language.
According to the CDC, emergency rooms across the U.S. see over 131 million visits per year. Assuming, for the sake of argument, that using ChatGPT to eliminate “doctor speak” were scaled to ERs across the country, and it saved an average of five minutes per patient, that would translate to well over 10 million hours of time saved per year — hours that physicians and their staff could then devote to more hands-on care for more patients.
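For what it’s worth, the back-of-envelope arithmetic behind that estimate is simple. Here is a minimal sketch, using the CDC visit count above and my assumed five minutes saved per patient:

```python
# Back-of-envelope estimate, using the figures from the paragraph above.
annual_er_visits = 131_000_000      # CDC estimate of yearly U.S. ER visits
minutes_saved_per_visit = 5         # assumed average time saved per patient

hours_saved_per_year = annual_er_visits * minutes_saved_per_visit / 60
print(f"{hours_saved_per_year:,.0f} hours saved per year")  # ~10,916,667
```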
There are still workflow challenges with these approaches. Currently, anything entered into ChatGPT should be treated as if it were published to the web; to stay in line with HIPAA, any details that could link the text to a specific patient would have to be removed or altered before it is typed in.
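As a purely illustrative sketch — not a tool I use on real patient data, and nowhere near a full HIPAA de-identification pipeline — a first-pass scrub of obvious identifiers before any text reaches a chatbot might look something like this:

```python
import re

# Illustrative sketch only: a crude first pass at scrubbing obvious
# identifiers before any text is shared with an outside tool. Real HIPAA
# de-identification is far more involved and needs vetted software and
# compliance review.

PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(note: str) -> str:
    """Replace obvious identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label.upper()} REMOVED]", note)
    return note

print(scrub("96 y/o female, MRN 4471203, seen 3/14/23, daughter at 555-867-5309."))
# -> 96 y/o female, [MRN REMOVED], seen [DATE REMOVED], daughter at [PHONE REMOVED].
```

Anything along these lines would, of course, need review by privacy and compliance experts before it ever touched a real record.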
Additionally, systems that put guardrails on ChatGPT’s direct interaction with patients (for medical history taking, for example) would need to be developed so that the hungover-intern aspect does not start producing catastrophic advice. But these are not insurmountable challenges, and the payoff is real.
I am a little embarrassed to admit that I have learned better ways of explaining things to my own patients from ChatGPT’s suggested responses. But I’m also greatly appreciative of the increased human connection I feel from a patient who understands what I am doing for them, and why.
There is a lot of hype about ChatGPT and other large language models taking away physicians’ jobs because of their massive knowledge base. They won’t. But in a curious irony, my ER staff and I are able to devote far more time to the human equation of healthcare, thanks to artificial intelligence.
Originally published in FastCompany
Dr. Josh Tamayo-Sarver works clinically in the emergency department of his local community and is a vice president of innovation at Inflect Health, an innovation incubator for health tech.