Learning History to Shape the Future: A Cautionary Tale for AI in Healthcare

4 min read · May 9, 2025

Joshua Tamayo-Sarver, MD, PhD, FACEP, FAMIA

It was a late-night shift in the emergency department when I first encountered the paradox of history. A patient arrived with symptoms that seemed eerily familiar — a textbook case of a condition I had seen before. As I reached for the treatment protocol, I hesitated. The textbook approach had failed this patient once before, and I remembered the lesson vividly. Medicine, like history, is a field where learning from past mistakes is not just beneficial but essential.

This memory resurfaced recently as I reflected on the role of artificial intelligence (AI) in healthcare, particularly large language models (LLMs). These systems are designed to learn from historical data, but unlike humans, their goal isn't to avoid repeating mistakes; it's to replicate patterns. This fundamental difference raises critical questions about how we integrate AI into healthcare and ensure it serves as a tool for progress rather than a mirror of past flaws.

The Dual Nature of LLMs: Promise and Paradox

The Promise of Pattern Recognition
LLMs excel at identifying complex patterns in data, whether it’s analyzing language or interpreting medical imaging. This capability has transformative potential in healthcare:

  • Enhancing diagnostic accuracy by identifying subtle indicators in patient data.
  • Streamlining treatment protocols through predictive analytics.
  • Automating administrative tasks to reduce clinician burnout.

For instance, AI-driven tools have already demonstrated success in detecting early-stage cancers or predicting patient deterioration based on electronic health records. These advancements highlight the immense promise of LLMs to improve outcomes and efficiency in healthcare.

The Paradox of Historical Learning
Yet this strength is also a source of concern. Unlike humans, who study history to learn lessons and avoid repeating mistakes, LLMs are designed to replicate the patterns they observe in historical data. This creates a paradox:

  • Human learning focuses on understanding causality and striving for improvement.
  • LLM learning aims to perfectly predict and replicate past patterns, regardless of ethical or practical implications.

This distinction is not merely theoretical; it has real-world consequences. For example, an LLM trained on historical medical records might inadvertently perpetuate biases or outdated practices embedded in those records. If unchecked, these systems risk reinforcing systemic inequalities rather than addressing them.

The Risks of Perpetuating Past Mistakes

The healthcare industry has long grappled with issues such as racial bias in treatment recommendations and disparities in access to care. These issues are often reflected in historical data, which serves as the foundation for training AI systems. Without careful oversight, LLMs could:

  • Replicate discriminatory practices found in historical datasets.
  • Reinforce outdated treatment protocols that have since been debunked.
  • Amplify systemic inequalities by favoring majority populations over marginalized groups.

For instance, studies have shown that algorithms used in healthcare can exhibit racial bias due to flawed training data. Similarly, AI systems trained on older medical guidelines might recommend treatments that are no longer considered best practice. These risks underscore the importance of curating training data and implementing safeguards to prevent harm.
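To make the mechanism concrete, here is a minimal, purely synthetic sketch (the groups, numbers, and "referral" outcome are all hypothetical, not drawn from any real dataset or study): a model that simply learns the most common historical outcome for each patient group will faithfully reproduce whatever disparity the historical records contain.

```python
# Toy illustration: a model that learns patterns from historical data
# reproduces any bias embedded in that data. All records are synthetic.
from collections import Counter, defaultdict

# Historical records as (patient_group, received_referral) pairs.
# Suppose group "B" was historically under-referred despite identical need.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train_majority_model(records):
    """Learn the most common outcome per group -- pure pattern replication,
    with no notion of whether the historical pattern was correct or fair."""
    by_group = defaultdict(Counter)
    for group, outcome in records:
        by_group[group][outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_model(history)
print(model)  # {'A': True, 'B': False} -- the historical disparity, replicated
```

A real LLM is vastly more sophisticated, but the failure mode is the same in kind: optimizing for fidelity to past data preserves the past's inequities unless the data is curated or the outputs are audited.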

Building an Ethically Sound Future

The integration of AI into healthcare is not just a technical challenge — it’s an ethical one. As we embrace these technologies, we must remain vigilant about their limitations and potential pitfalls. The goal should be to leverage AI’s capabilities to enhance healthcare delivery while actively steering it away from replicating the less desirable aspects of our medical history.

This requires more than just technical expertise; it demands a philosophical shift in how we approach innovation. We must view AI not as an infallible solution but as a tool that requires human guidance and oversight. By acknowledging its strengths and weaknesses, we can create systems that truly augment care rather than perpetuate past mistakes.

A Call to Action: Learn From History Without Repeating It

As someone who has spent years navigating both the clinical trenches and the frontier of healthcare innovation, I believe this moment calls for bold action. We cannot afford to be passive observers as AI reshapes our industry; we must actively shape its trajectory. Most importantly, however AI is making the sausage, we must be vigilant in assessing its outputs, verifying, as we would for any diagnostic test or medical device, that it does what it says it does and is clear about its limitations.

So here’s my challenge:

  • If you’re a healthcare leader, demand accountability from AI developers for the outputs of their tools.
  • If you’re an ethicist or researcher, push for interdisciplinary collaboration that bridges gaps between technology and humanity.
  • If you’re a clinician, advocate for tools that prioritize patient-centered care and make their strengths and limitations easy to understand, just as you would demand of any lab test or imaging study you consider using.

Together, we can ensure that AI learns from history not to repeat it but to improve upon it — a goal that aligns with the very essence of medicine itself.

Dr. Joshua Tamayo-Sarver, MD, PhD, FACEP, FAMIA, develops and deploys technology solutions in the healthcare ecosystem as a clinician, business leader, software engineer, statistician, and social justice researcher. As the Vice President of Innovation at Inflect Health and Vituity, his unique formula of skills has helped develop over 35 solutions and scale multiple new healthcare products, including the first AI occult sepsis tool with FDA breakthrough designation. Dr. Tamayo-Sarver oversees corporate venture, internal incubation, and advisory services for AI-driven healthcare solutions, blending consumerism and clinical quality to fit the delicate balance of patient desire, user experience and quality medical care. A Harvard graduate, he holds degrees in biochemistry, epidemiology, and biostatistics, as well as a medical degree from Case Western Reserve University. He is a Mentor in the Emergence Program at Stanford University.

Follow him on LinkedIn — Joshua Tamayo-Sarver, MD, PhD, FACEP, FAMIA

Written by Inflect Health

Healthcare. Optimized and accessible for all.
