How to integrate AI agents with EHR systems: Epic, Cerner, and custom integrations

Most healthcare AI projects do not fail because the model is wrong. They fail because the integration is broken. Connecting an AI agent to a live EHR (reading structured patient data, writing results back to the chart, operating within HIPAA boundaries) is a different engineering problem from building the agent itself. This article covers the three integration scenarios that matter in practice: Epic’s Showroom ecosystem, Oracle Health’s Ignite API layer, and legacy or custom EHR environments. For each, we explain what the integration surface looks like, where the compliance obligations apply, and what separates a working prototype from a production-grade system.
AI agent EHR integration is where most healthcare AI projects fail to find their footing. The story is pretty typical: a model performs well in testing, the use case is well defined, then the team attempts to connect the agent to a live Epic or Cerner instance only to discover that EHR systems were built for human-initiated workflows, not autonomous agent loops.
The pressure to solve this is growing. According to Qventus’s Beyond the Pilot report published April 2026, 74% of health system technology leaders now cite dependence on their EHR vendor’s AI roadmap as their top obstacle. Only 22% say they would wait for an EHR feature rather than build with a third-party tool, down from 52% just one year earlier. Health systems are moving, and the choice of integration architecture has become critical.
This article covers three integration scenarios in practical depth: Epic (Showroom ecosystem, per-instance connection model, SMART on FHIR), Oracle Health (Ignite APIs, FHIR R4-native, DSTU-2 deprecated), and custom or legacy EHRs (HL7 v2, middleware, bespoke APIs). For each, we address the engineering decisions and the compliance requirements that govern them.
In our experience, use case selection and transformation road mapping are separate work, each addressed through our TriStorm methodology.
How EHR integration is handled today, without AI agents
Before designing an AI agent EHR integration, it is worth being precise about what the current state actually looks like, because most integration challenges we have to tackle are inherited from it.
Today, most integration between third-party systems and EHRs runs via point-to-point HL7 v2 messaging or FHIR R4 APIs. These are pipelines designed for human-initiated transactions: a clinician orders a test, a nurse posts a note, a scheduler confirms an appointment. Rules-based RPA tools automate some of this, but they operate on rigid pre-defined logic with no reasoning capability. When the input deviates from the expected pattern, the workflow stops and a human intervenes.
The operational cost accumulates quietly. Administrative staff reconcile records across systems manually. Clinicians copy data between applications that do not communicate. Pre-visit prep (retrieving a patient’s recent encounters, active medications, and outstanding results) is done by a person, often under time pressure, immediately before the appointment.
These are precisely the tasks AI agents are built to absorb. An agent that can retrieve patient context automatically, reason over it, and surface a structured summary before the clinician enters the room delivers measurable time savings. At Vstorm, we have seen this in practice: a multi-channel pre-appointment AI agent we deployed for a US healthcare provider serving more than 100,000 members produced more than five hours of savings per doctor per week and a 20% increase in patient engagement. You can read the full case study here.
The prerequisite for any of that is a reliable integration layer. An agent that cannot read structured patient data or write results back to the chart has no operational value.
“I think that the cost of waiting for Epic or Oracle, or any of them, is you might lose out. There’s a late-mover disadvantage.”
– Matthew Anderson, MD, CMIO at HonorHealth, quoted in Qventus’s 2026 Beyond the Pilot report
The integration foundation: FHIR, HL7, and SMART on FHIR
All three integration scenarios in this article sit on the same technical foundation. Understanding it before examining individual EHR platforms saves time and prevents architectural errors.
FHIR R4 is the current regulatory floor. The 21st Century Cures Act mandates FHIR-based API access for patient data. ONC’s HTI-1 Final Rule requires USCDI v3 support via FHIR APIs. As of 2026, approximately 70% of hospitals use standardised APIs, most built on FHIR R4. The question at the design stage is not whether to implement FHIR API AI agent connectivity but how to do so correctly given the specific EHR environment.
HL7 v2 remains present in many legacy environments. An autonomous AI agent can monitor HL7 v2 feeds, transform messages to FHIR R4, and write structured data back to EHR systems, but this requires a transformation layer (commonly Mirth Connect, Azure Health Data Services, or AWS HealthLake) sitting between the EHR and the agent. Without this layer, the agent cannot consume the data reliably.
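To make the transformation layer concrete, here is a minimal sketch of the core operation it performs: parsing a pipe-delimited HL7 v2 lab result and emitting a FHIR R4 Observation. The sample message and field positions follow common ORU^R01 conventions, but this is an illustration of the pattern, not a complete mapping; production middleware such as Mirth Connect handles far more segment types and edge cases.

```python
# Sketch of the HL7 v2 -> FHIR R4 transformation step a middleware layer
# performs. Field positions follow common ORU^R01 conventions and are
# illustrative, not a complete mapping.

def parse_segments(hl7_message: str) -> dict:
    """Index HL7 v2 segments by their three-letter segment type."""
    segments = {}
    for line in hl7_message.strip().splitlines():
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

def obx_to_fhir_observation(hl7_message: str) -> dict:
    """Map the PID and OBX segments of a lab result to a FHIR Observation."""
    seg = parse_segments(hl7_message)
    pid, obx = seg["PID"], seg["OBX"]
    code, text = obx[3].split("^")[:2]          # OBX-3: observation identifier
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": code, "display": text}]},
        "subject": {"reference": f"Patient/{pid[3]}"},   # PID-3: patient ID
        "valueQuantity": {"value": float(obx[5]),        # OBX-5: value
                          "unit": obx[6]},               # OBX-6: units
    }

msg = ("MSH|^~\\&|LAB|HOSP|EHR|HOSP|202601151200||ORU^R01|1234|P|2.5\n"
       "PID|1||12345||Doe^Jane\n"
       "OBX|1|NM|2345-7^Glucose||98|mg/dL|70-110|N|||F")

obs = obx_to_fhir_observation(msg)
```

The structural point is that the agent never sees raw HL7 v2; it consumes the normalised FHIR resource on the far side of this layer.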
SMART on FHIR is the OAuth 2.0-based authorisation framework that governs how third-party applications authenticate against an EHR and define their data access scope. It is supported by Epic, Oracle Health, Athenahealth, and every ONC-certified EHR in the United States. A single SMART on FHIR app can, in principle, deploy across multiple EHR platforms. In practice, Epic and Oracle Health implement the specification differently (different scope naming conventions, throttling behaviour, and write-back constraints), requiring per-platform adaptation.
Key engineering decisions at this layer include: read-only versus read-write scope selection, token expiry and refresh handling, rate limit management, and FHIR resource type mapping across Patient, Encounter, Observation, MedicationRequest, Appointment, and DocumentReference resources. These decisions must be made before development begins, because they directly determine compliance scope.
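Token expiry and refresh handling in particular is worth getting right early, because it is a recurring silent failure mode (discussed later in this article). A minimal sketch of the lifecycle logic follows; the refresh callable is injected, because the actual token endpoint, grant type, and response shape vary per EHR vendor and are an assumption here.

```python
# Sketch of OAuth token lifecycle handling for a SMART on FHIR client.
# The refresh callable is injected so the network call (and its response
# shape) remain an assumption; real token endpoints vary per EHR vendor.
import time
from typing import Callable

class TokenManager:
    """Caches an access token and refreshes it before expiry."""

    def __init__(self, refresh_fn: Callable[[], dict], skew_seconds: int = 60):
        self._refresh_fn = refresh_fn   # e.g. POST to the vendor's token endpoint
        self._skew = skew_seconds       # refresh this early to avoid mid-task expiry
        self._token = None
        self._expires_at = 0.0

    def get_token(self) -> str:
        if self._token is None or time.time() >= self._expires_at - self._skew:
            payload = self._refresh_fn()  # {"access_token": ..., "expires_in": ...}
            self._token = payload["access_token"]
            self._expires_at = time.time() + payload["expires_in"]
        return self._token

# Stubbed refresh for illustration; a real client would call the EHR's
# OAuth token endpoint with its refresh_token or client credentials.
calls = {"n": 0}
def fake_refresh() -> dict:
    calls["n"] += 1
    return {"access_token": f"tok-{calls['n']}", "expires_in": 3600}

tm = TokenManager(fake_refresh)
first = tm.get_token()    # triggers one refresh
second = tm.get_token()   # still valid, served from cache
```

The skew window is the design choice that matters: refreshing slightly before expiry prevents the agent from losing its token in the middle of a multi-step workflow.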
Integrating AI agents with Epic
Epic holds 42.3% of US acute care hospitals and 54.9% of acute care beds as of 2024, per the KLAS US Acute Care EHR Market Share 2025 report. For most mid-market and enterprise health systems, AI agent EHR integration means integrating with Epic first.
The developer ecosystem. Epic replaced its App Orchard with Showroom in 2024. The current structure has three tiers: Connection Hub (basic directory listing at $500 per year), Toolbox (recommended integration patterns), and Workshop (deep co-development partnerships). Developers register at open.epic.com and receive a client ID and credentials. Being listed in Connection Hub does not grant write access. Production deployments with write-back capability require per-site approval from the individual Epic customer organisation; there is no single certification that grants cross-customer write access.
What the API exposes. Epic provides approximately 450 FHIR R4 endpoints across 55 resource types, covering patient demographics, scheduling via the Appointment resource, clinical notes via DocumentReference, lab results via Observation, medications via MedicationRequest, and clinical conditions. CDS Hooks allow agents to surface recommendations at specific workflow moments, for example, when a clinician opens a patient record, but CDS Hooks implementation requires coordination with the Epic instance administrator at each site.
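To illustrate the shape of a CDS Hooks integration, here is a minimal response a service might return for the patient-view hook. The card fields follow the CDS Hooks specification; the summary text, source label, and card id are hypothetical.

```python
# Minimal CDS Hooks response for the patient-view hook. Card fields follow
# the CDS Hooks specification; the text content here is hypothetical.
import json

def build_cds_response(summary: str, detail: str, source_label: str) -> dict:
    return {
        "cards": [{
            "uuid": "example-card-1",
            "summary": summary,        # short text shown inline in the EHR
            "detail": detail,          # markdown body shown on expansion
            "indicator": "info",       # info | warning | critical
            "source": {"label": source_label},
        }]
    }

resp = build_cds_response(
    summary="Pre-visit summary available",
    detail="Agent-generated summary of recent encounters and active medications.",
    source_label="Pre-visit Agent",
)
payload = json.dumps(resp)
```

The EHR calls the service when the hook fires (for example, on opening a chart) and renders the returned cards, which is why site-level coordination with the Epic administrator is required: the hook subscription lives in their instance configuration, not in your code.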
The per-instance reality. Epic operates a federated model. Each healthcare organisation manages its own Epic instance. A third-party AI agent cannot connect once and reach all Epic customers. Each deployment requires a separate connection, per-instance scope negotiation, and ID mapping configuration. For organisations deploying across multiple health systems, this multiplies integration effort in proportion to the number of sites.
On-premise versus cloud. Cloud-hosted Epic (Epic on Azure) exposes APIs more directly. On-premise installations often require an interface engine, Mirth Connect or Rhapsody, as an intermediary, adding latency and configuration complexity that must be accounted for in the architecture.
Integrating AI agents with Oracle Cerner
Oracle Health holds 22.9% of US acute care hospitals per the same KLAS 2025 report. The platform presents a more complex integration picture than Epic, because two distinct EHR generations are active simultaneously.
FHIR R4 is now the only active standard. Oracle deprecated its DSTU-2 FHIR APIs at the end of 2025. R4 is now the primary integration standard across the Millennium platform, delivered via Oracle’s Ignite APIs. Write scopes confirmed in the FHIR specification include Appointment.write, Condition.write, AllergyIntolerance.write, and Communication.write, among others, accessible via SMART on FHIR with OAuth 2.0.
The new-generation EHR. Oracle launched its next-generation EHR in August 2025, built on Oracle Cloud Infrastructure with FHIR natively embedded and a voice-first interface. The Clinical AI Agent is embedded directly into clinical workflows across more than 30 medical specialties. Oracle reported in its own press release a nearly 30% reduction in physician documentation time. This is a figure worth noting, though it originates from Oracle’s own communications rather than independent third-party research.
Scope implications for custom agents. Organisations on Oracle’s new-generation EHR shift the integration question from building a connection to extending or customising the native agent layer. Organisations still running Cerner Millennium face a conventional SMART on FHIR integration via Ignite APIs, with the same per-tenant provisioning requirements as Epic.
The ambulatory gap. Oracle’s new-generation EHR is currently available for ambulatory providers only, with acute care functionality planned for 2026. Health systems running both ambulatory and inpatient workflows on Oracle may require two distinct integration architectures in the interim, one for the new platform and one for Millennium.
Custom and legacy EHR integrations
A significant portion of mid-market health systems, such as specialty practices, regional hospital groups, and behavioural health organisations, run EHRs built on older architectures: Meditech, Allscripts, NextGen, or fully bespoke systems. Many of these platforms have limited or no native FHIR support.
The middleware approach. An integration engine sits between the EHR and the AI agent, normalising HL7 v2 messages into FHIR-compatible formats the agent can consume. Mirth Connect, Azure Health Data Services, and AWS HealthLake are the most common choices. This is the most practical path for legacy environments. It is not a trivial undertaking: making legacy systems FHIR-capable through middleware can be disruptive and costly, and smaller providers often struggle to establish a clear ROI model before beginning.
When a custom API is the only option. For EHRs with no standard interface, agents must interact via database read replicas, exported flat files such as CSV or JSON, or vendor-specific APIs. Each approach introduces data freshness and reliability trade-offs. A read replica may lag the live system by minutes or hours. A flat file export may run on a schedule that makes real-time agent action impossible. These constraints must be explicitly modelled before any agent goes near production.
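One practical way to model the freshness constraint explicitly is a gate that refuses to act on a stale export. The sketch below assumes a file-based export; the path and staleness threshold are placeholders, and the escalation handler is hypothetical.

```python
# Sketch of an explicit freshness gate for a flat-file integration.
# The threshold is an assumption; the point is that the data-freshness
# constraint is modelled in code rather than discovered in production.
import os
import tempfile
import time

def is_fresh(path: str, max_age_seconds: int) -> bool:
    """Return True only if the export file was modified recently enough to act on."""
    age = time.time() - os.path.getmtime(path)
    return age <= max_age_seconds

# Demonstrate with a file created just now: it is trivially fresh.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"patient_id,name\n")
    sample_path = f.name
fresh_now = is_fresh(sample_path, max_age_seconds=2 * 3600)
os.unlink(sample_path)

# In an agent loop (hypothetical handler):
# if not is_fresh("/exports/patients.csv", max_age_seconds=2 * 3600):
#     escalate_to_human()   # route to a person instead of acting on stale data
```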
Observability is non-negotiable. In legacy environments, the EHR will not log agent interactions automatically. Purpose-built observability at the agent layer, capturing inputs, outputs, decision points, and write-back confirmations, is the only way to meet HIPAA audit trail requirements in these environments. This is not optional infrastructure; it is a compliance requirement that must be designed in from the start, not retrofitted after deployment.
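At minimum, this means emitting one structured record per agent interaction with ePHI. The field names below are our own assumption, shaped so that who acted, what they touched, when, and with what outcome are all reconstructable from the trail, as HIPAA audit requirements demand.

```python
# Sketch of agent-layer audit logging: one structured JSON line per
# interaction with ePHI. Field names are an assumption; the requirement
# is that actor, action, resource, time, and outcome are reconstructable.
import json
from datetime import datetime, timezone

def audit_event(actor: str, action: str, resource: str, outcome: str,
                detail: str = "") -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # agent identity or service account
        "action": action,      # e.g. "read", "write", "summarise"
        "resource": resource,  # FHIR resource reference, never raw PHI
        "outcome": outcome,    # "success" | "rejected" | "error"
        "detail": detail,
    }
    return json.dumps(record)

line = audit_event(
    actor="previsit-agent",
    action="write",
    resource="DocumentReference/abc-123",
    outcome="success",
    detail="Pre-visit summary posted; pending clinician review",
)
```

Note that the log records a resource reference, not patient data: the audit trail itself should not become an unmanaged copy of ePHI.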
Ready to see how agentic AI transforms business workflows?
Meet directly with our founders and PhD AI engineers. We will demonstrate real implementations from 30+ agentic projects and show you the practical steps to integrate them into your specific workflows—no hypotheticals, just proven approaches.
HIPAA compliance requirements for AI agent EHR integrations
HIPAA compliant AI agents require more than selecting the right EHR API. The compliance obligation extends across every system the agent touches.
What HIPAA requires at the integration layer. The Security Rule mandates encryption in transit (TLS 1.2+) and at rest (AES-256), role-based access controls, multi-factor authentication, and comprehensive audit logging of every agent interaction with electronic protected health information (ePHI). A documented security risk assessment must be completed before any new integration goes live, and updated whenever the integration changes significantly.
Business Associate Agreements. Any third-party system that processes ePHI, including the LLM inference endpoint, requires a signed BAA. The following major providers offer HIPAA-eligible services with BAA coverage as of 2026: Microsoft Azure OpenAI (text endpoints only), Anthropic (Enterprise API and HIPAA-ready Enterprise plan, available via a sales-assisted process, consumer products are not covered), Google Cloud Vertex AI, AWS Bedrock, and OpenAI (Enterprise and API customers only, not consumer ChatGPT). Standard developer accounts with these providers do not automatically qualify.
A BAA is not compliance. Signing a BAA establishes the contractual foundation for compliance. The developer organisation remains responsible for access controls, prompt logging, PHI handling in system prompts, data residency, and audit trail implementation. This distinction is the source of most compliance failures in healthcare AI projects: the team assumes the cloud contract covers the application layer.
SMART on FHIR scoping as a compliance control. Limiting the agent’s token scope to only the FHIR resources it requires, the principle of least privilege, reduces the breach surface and simplifies audit scope. This is an engineering decision that must be made at architecture stage, not after go-live.
EU AI Act dimension. For health systems operating in the EU or UK, clinical decision support agents fall under high-risk AI classification. This adds conformity assessment obligations, transparency requirements, and mandatory human oversight provisions as a second compliance layer alongside HIPAA-equivalent data protection obligations.
What a production-ready EHR integration actually looks like
A working prototype and a production-grade system solve different problems. The prototype demonstrates that the agent can retrieve data and generate output. The production system must do this reliably, at scale, within a compliance boundary, and with a human in the loop for clinical write-back actions.
A concrete architecture. An agent receives a trigger, for example, a new appointment confirmed in Epic. It retrieves patient context via FHIR: demographics, recent encounters, active medications, outstanding results. It runs reasoning over that context, generates a structured pre-visit summary, and posts it back to Epic as a DocumentReference resource. The full interaction (input, reasoning steps, output, and write-back confirmation) is logged to an observability platform. The clinician reviews the summary before the appointment begins. Nothing is committed to the live chart without a human approval step.
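That flow can be sketched end to end. The FHIR fetch and the reasoning step are stubbed (the latter would be an LLM call in a real deployment); the DocumentReference structure follows FHIR R4, but the identifiers and summariser are hypothetical.

```python
# End-to-end sketch of the pre-visit flow. Fetch and reasoning are stubbed;
# the DocumentReference shape follows FHIR R4, identifiers are hypothetical.
import base64

def fetch_context(patient_id: str) -> dict:
    """Stub for FHIR reads: Patient, Encounter, MedicationRequest, Observation."""
    return {"patient_id": patient_id,
            "medications": ["metformin 500 mg"],
            "recent_encounters": ["2026-01-10 annual physical"]}

def summarise(context: dict) -> str:
    """Stub for the reasoning step (an LLM call in a real deployment)."""
    meds = ", ".join(context["medications"])
    return f"Active medications: {meds}. Last visit: {context['recent_encounters'][0]}."

def build_document_reference(patient_id: str, summary: str) -> dict:
    """Wrap the summary as a FHIR DocumentReference awaiting human review."""
    encoded = base64.b64encode(summary.encode()).decode()
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "docStatus": "preliminary",   # not final until a clinician approves
        "subject": {"reference": f"Patient/{patient_id}"},
        "content": [{"attachment": {"contentType": "text/plain",
                                    "data": encoded}}],
    }

ctx = fetch_context("12345")
doc = build_document_reference("12345", summarise(ctx))
```

The `docStatus` of `preliminary` is the human-in-the-loop control expressed in data: the resource exists in the chart workflow but is explicitly marked as not yet clinician-approved.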
The three failure modes that kill EHR agent projects before go-live:
Write-back failures occur when the EHR rejects malformed FHIR resources. The most common causes are incorrect resource structure, missing required fields, or a scope mismatch between what the token authorises and what the agent attempts to write. These failures are often silent: the agent does not receive a clear error, and the write simply does not persist.
Token expiry handling errors occur when the agent loses its OAuth access token mid-task and has no refresh logic in place. The agent breaks silently at an unpredictable point in the workflow. In production environments processing hundreds of interactions per day, this failure mode creates inconsistent outputs that are difficult to diagnose.
Data mapping errors occur when source fields in the EHR do not align with the target FHIR resource schemas the agent expects. A field that exists in one Epic instance may not exist in another. A Cerner Millennium environment may expose a resource differently than Oracle’s new-generation EHR. These mismatches produce structurally invalid payloads that the EHR rejects without detailed error messaging.
All three failure modes are solvable, but only if the integration layer is built with explicit error handling, retry logic, and end-to-end observability from the start.
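The shape of that explicit error handling can be sketched as a bounded retry with backoff, where every attempt and outcome is recorded rather than failing silently. The writer function is injected; in a real deployment it would POST a FHIR resource and inspect the HTTP status and OperationOutcome.

```python
# Sketch of explicit error handling around a write-back: bounded retries
# with exponential backoff, returning a log of every attempt instead of
# failing silently. The writer is injected; a real one would POST a FHIR
# resource and inspect the HTTP response.
import time
from typing import Callable

def write_with_retry(write_fn: Callable[[], bool], attempts: int = 3,
                     backoff_seconds: float = 0.01) -> tuple[bool, list[str]]:
    """Return (success, per-attempt log) so failures are always visible."""
    log = []
    for i in range(1, attempts + 1):
        try:
            if write_fn():
                log.append(f"attempt {i}: success")
                return True, log
            log.append(f"attempt {i}: rejected by EHR")
        except Exception as exc:
            log.append(f"attempt {i}: error ({exc})")
        time.sleep(backoff_seconds * 2 ** (i - 1))   # exponential backoff
    return False, log

# Stub writer that fails twice, then succeeds, to exercise the retry path.
state = {"calls": 0}
def flaky_write() -> bool:
    state["calls"] += 1
    return state["calls"] >= 3

ok, attempt_log = write_with_retry(flaky_write)
```

Returning the attempt log alongside the result is the design choice that connects this to observability: a rejected write produces an auditable record, not a silent gap in the chart.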
How to choose the right integration approach
The right integration approach is determined by three variables: the EHR vendor and version, the agent’s required data access scope (read-only versus read-write), and the deployment model (cloud versus on-premise). The table below summarises the key decision dimensions across the main integration scenarios.
| Platform | Integration standard | Write-back scope | Certification / access pathway | Coverage |
| --- | --- | --- | --- | --- |
| Epic | FHIR R4, SMART on FHIR, CDS Hooks | Available; requires per-site customer approval and ID mapping | open.epic.com registration; Connection Hub listing ($500/year); per-instance provisioning | Inpatient and ambulatory |
| Oracle Cerner Millennium | FHIR R4 via Ignite APIs (DSTU-2 deprecated end of 2025) | Available; confirmed write scopes include Appointment, Condition, AllergyIntolerance | Oracle Health developer registration; per-tenant provisioning | Inpatient and ambulatory |
| Oracle new-generation EHR | FHIR R4 native on Oracle Cloud Infrastructure | Via native Clinical AI Agent layer or SMART on FHIR | Extend or customise native agent; separate third-party integration path | Ambulatory only as of 2026; acute care planned |
| Legacy / custom EHR | HL7 v2 via middleware to FHIR R4, or vendor-specific API | Depends on EHR: read replica, flat file export, or custom API | No standard pathway; architecture designed per environment | Varies by platform |
Selecting the integration approach is an architecture decision, not a vendor selection. It requires a clear understanding of the organisation’s data model, security posture, existing infrastructure, and operational workflows, before a line of code is written. Attempting to retrofit the integration architecture after development begins is one of the consistent patterns we see in EHR AI projects that stall before production.
For organisations at the beginning of this process, the Technology Consulting layer of our TriStorm methodology translates the operational roadmap into a deployable integration blueprint (including stack selection, integration point definition, and data and security requirements) before engineering begins.
Conclusion
The three integration paths covered in this article (Epic’s Showroom ecosystem, Oracle Health’s Ignite APIs, and legacy EHR middleware architectures) share the same compliance floor: FHIR R4, SMART on FHIR authorisation, HIPAA-mandated safeguards, and human-in-the-loop governance for any clinical write-back action.
The integration architecture itself is a solvable engineering problem. What kills EHR AI projects in practice is a combination of gaps discovered late: BAA obligations identified after the LLM provider is already selected, per-instance connection complexity underestimated at scoping, observability absent from the original design, and write-back failure modes that only surface under production conditions.
The distance between a working prototype and a HIPAA compliant AI agent operating reliably inside a live EHR is where most healthcare AI projects stall. Closing that gap requires the right integration architecture, compliance design from day one, and an engineering team that has navigated the failure modes before, not for the first time on your project.