
From Chaos to Clarity: Solving Interoperability Challenges at the Front End

4 min read · May 9, 2025


Joshua Tamayo-Sarver, MD, PhD, FACEP, FAMIA

Fifteen years ago, I embarked on a journey into the crowded and complex world of computer-assisted coding and quality metric extraction. At the time, I was optimistic — perhaps naively so — that technology could solve the inefficiencies plaguing healthcare documentation. But as the years passed, I realized just how daunting the task was. Despite the brilliance of the minds working in this space, including many far smarter than me, the problem persisted: healthcare data was messy, inconsistent, and maddeningly heterogeneous.

One recent experience brought this challenge into sharp focus. It revealed not only why we’ve struggled for so long but also why our approach to solving interoperability issues might be fundamentally flawed — and how we might finally get it right.

The Problem with Data Normalization

Imagine trying to extract a simple metric from an electronic health record (EHR): whether or not a patient smokes. On paper, this sounds straightforward. In practice, it’s anything but. Within just one EHR system, different clinicians document smoking status in wildly different ways:

  • “No tobacco”
  • “No smoking”
  • “No tob.”
  • “Doesn’t smoke”
  • “No cigarettes”
  • “No inhaled tobacco”

And that’s before you account for variations across multiple EHR systems, specialties, or hospital networks. Even within a single clinician’s workflow, smoking status might be documented differently depending on the day or context. Sometimes it’s buried in one part of the chart; other times it’s recorded elsewhere entirely.

To address this inconsistency, we’ve spent years refining algorithms to account for all these variations. But here’s the rub: as we broadened our definitions to capture more data, accuracy suffered. Sensitivity and specificity — the holy grail of coding — dropped precipitously. The algorithm became bloated and unwieldy, missing key nuances while introducing false positives.

This is the paradox of data normalization in healthcare: the more we try to account for heterogeneity on the back end, the harder it becomes to maintain accuracy and reliability.
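The trade-off above can be made concrete with a toy sketch. The regex rules below are hypothetical, invented for illustration rather than taken from any real coding system: a narrow rule has high precision but misses phrasings like "No tob.", while a broadened rule recovers those variants but starts firing on notes about active smokers.

```python
import re

# Illustrative sketch only: a simplified back-end classifier of the kind the
# article describes. Real computer-assisted coding systems are far more complex.

# Narrow rule: high precision, but misses many phrasings.
NARROW = re.compile(r"\b(no smoking|no tobacco|doesn't smoke)\b", re.IGNORECASE)

# Broadened rule: catches more variants ("No tob.", "denies cigarettes"), but
# begins to fire on notes it should not.
BROAD = re.compile(r"\b(no|not|denies|doesn'?t)\b.*\b(smok|tob|cig)\w*",
                   re.IGNORECASE)

def flags_nonsmoker(note: str, rule: re.Pattern) -> bool:
    """Return True if the rule classifies this note as documenting a non-smoker."""
    return rule.search(note) is not None

notes = [
    "No tobacco",
    "No tob.",
    "Doesn't smoke",
    "No chest pain; smokes a pack a day",  # an active smoker
]

for note in notes:
    print(f"{note!r}: narrow={flags_nonsmoker(note, NARROW)}, "
          f"broad={flags_nonsmoker(note, BROAD)}")
```

The last note is the problem case: the broadened rule sees "No ... smokes" and flags an active smoker as a non-smoker. Sensitivity went up; specificity went down.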

A Serendipitous Solution

Recently, I stumbled upon an unexpected breakthrough while working on an AI-driven documentation system. Unlike traditional systems that focus on cleaning up messy data after it’s been entered, this system took a different approach: it created a clear and concise table summarizing all relevant patient information for clinicians to review and correct in real time.

This wasn’t just a convenience feature; it was transformative. By presenting data in a standardized format during documentation itself, the system inadvertently solved two major problems:

  1. Data Normalization: The structured table homogenized and normalized disparate inputs into a consistent format.
  2. Metric Extraction: With clean, standardized data readily available, coding and quality metric extraction became significantly easier and more accurate.

In essence, we shifted the focus from fixing messy data on the back end to preventing messiness at the front end. And that made all the difference.
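The front-end approach can be sketched the same way, under the assumption that the review table stores each field as one value from a fixed vocabulary rather than as free text. The names below (`SmokingStatus`, `PatientSummaryRow`) are hypothetical, not taken from the system described:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of front-end capture: the clinician confirms a value
# from a fixed vocabulary in the review table, so the chart stores a single
# canonical code instead of free text.

class SmokingStatus(Enum):
    NEVER = "never"
    FORMER = "former"
    CURRENT = "current"
    UNKNOWN = "unknown"

@dataclass
class PatientSummaryRow:
    field: str
    value: SmokingStatus

def extract_smoking_metric(row: PatientSummaryRow) -> str:
    # Metric extraction becomes a lookup, not a text-mining problem.
    return row.value.value

row = PatientSummaryRow(field="smoking_status", value=SmokingStatus.NEVER)
print(extract_smoking_metric(row))  # "never"
```

With data captured this way, the dozens of regex variations on the back end simply have nothing left to do.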

Why Front-End Interoperability Matters

For decades, healthcare has approached interoperability as a back-end problem: how do we clean up messy data after it’s been collected? This mindset has led to countless initiatives aimed at improving data curation and infrastructure, which have yielded incremental progress but failed to deliver transformative change.

What we’ve overlooked is that the most opportune moment to address interoperability isn’t after data has been collected; it’s during data entry itself. By providing tools that make documentation intuitive and standardized from the outset, we can create datasets that are inherently interoperable — no post-hoc cleaning required.

This shift has profound implications:

  • Improved Accuracy: Standardized documentation reduces errors caused by inconsistent or incomplete data.
  • Enhanced Efficiency: Front-end systems streamline workflows, saving time for clinicians and reducing administrative burden.
  • Scalable Solutions: Clean data at the source enables seamless integration across EHR systems and hospital networks.

Lessons Learned

Reflecting on this journey has taught me several key lessons about tackling interoperability challenges:

  1. Stop Fighting Data Chaos on the Back End
    The heterogeneity of healthcare data isn’t going away anytime soon. Instead of trying to clean up messy datasets after they’ve been created, focus on preventing messiness during documentation itself.
  2. Leverage AI for Real-Time Standardization
    AI-driven tools can help clinicians document patient information in a structured format without adding cognitive load or disrupting workflows.
  3. Collaborate Across Silos
    Companies working on data infrastructure must partner closely with those developing front-end documentation tools. True interoperability requires alignment across both ends of the spectrum.
  4. Prioritize Clinician Experience
    Tools that make documentation easier and more intuitive for clinicians are more likely to succeed — and their adoption will drive better data quality across the board.

A Call to Action

Healthcare leaders, innovators, and technologists: it’s time to rethink our approach to interoperability. The answers won’t come from yet another back-end solution or incremental improvement in data curation algorithms. They’ll come from empowering clinicians with tools that make standardized documentation effortless — and even enjoyable — at the point of care.


Dr. Joshua Tamayo-Sarver, MD, PhD, FACEP, FAMIA, develops and deploys technology solutions in the healthcare ecosystem as a clinician, business leader, software engineer, statistician, and social justice researcher. As the Vice President of Innovation at Inflect Health and Vituity, his unique blend of skills has helped develop over 35 solutions and scale multiple new healthcare products, including the first AI occult sepsis tool with FDA breakthrough designation. Dr. Tamayo-Sarver oversees corporate venture, internal incubation, and advisory services for AI-driven healthcare solutions, blending consumerism and clinical quality to fit the delicate balance of patient desire, user experience, and quality medical care. A Harvard graduate, he holds degrees in biochemistry, epidemiology, and biostatistics, as well as a medical degree from Case Western Reserve University. He is a Mentor in the Emergence Program at Stanford University.

Follow him on LinkedIn — Joshua Tamayo-Sarver, MD, PhD, FACEP, FAMIA


Written by Inflect Health

Healthcare. Optimized and accessible for all.
