
Last Updated: March 9, 2026
Healthcare AI hits three walls before it reaches production: FDA clearance requirements, HIPAA constraints on training data, and EHR integration friction with Epic and Cerner. Implementing AI in healthcare is harder than in other verticals. Not because the models are worse, but because the governance layer is thicker. Organizations that treat these as pure engineering problems stall at the pilot stage.
What FDA clearance requirements apply to healthcare AI software?
Any AI software that meets the FDA’s definition of Software as a Medical Device (SaMD) requires 510(k) clearance or De Novo authorization before clinical use, regardless of whether it makes a diagnosis directly.
The FDA has cleared over 1,250 AI-enabled medical devices as of July 2025. Of those, 671 are in radiology. That concentration isn’t accidental. Radiology was the first specialty to produce large, structured, labeled datasets at scale. Other specialties are catching up, but the regulatory backlog is real.
In January 2025, the FDA issued draft guidance on lifecycle management for AI-enabled device software. The December 2024 guidance on Predetermined Change Control Plans (PCCP) lets manufacturers pre-specify how models may change post-market without resubmission. But most health systems need to verify clearance status before deploying third-party tools like Aidoc or Viz.ai. Aidoc, deployed in 900+ hospitals, received FDA clearance for rib fracture CADt in February 2025. Clinical studies attribute a 26% reduction in CT turnaround time to its use. A custom-built in-house model carries no such clearance by default.
Can you use patient data to train AI models under HIPAA?
Protected health information (PHI) can be used for AI model training under HIPAA’s “healthcare operations” provisions without patient authorization, but de-identification must meet Safe Harbor or Expert Determination standards.
Safe Harbor requires removing all 18 identifier categories the rule enumerates: names, geographic subdivisions smaller than a state, dates more specific than the year, medical record numbers, and so on. Stripping that much often degrades the clinical richness that makes training data valuable. Expert Determination instead requires a qualified expert to certify that the re-identification risk is very small. Both paths slow development cycles.
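The mechanics of Safe Harbor are straightforward to illustrate. Below is a minimal sketch of the approach, not a compliant pipeline: the field names are hypothetical, only a subset of the 18 categories is shown, and a real implementation must map every category to your actual schema.

```python
import re

# Subset of the 18 Safe Harbor identifier categories, keyed by record field.
# Field names are hypothetical; a real pipeline maps them to your schema.
IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "mrn", "health_plan_id", "device_serial", "ip_address",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with Safe Harbor identifiers removed.

    Dates are reduced to year only, and ZIP codes are truncated to the
    first three digits (Safe Harbor's small-population '000' caveat is
    omitted here for brevity).
    """
    clean = {}
    for field, value in record.items():
        if field in IDENTIFIER_FIELDS:
            continue  # drop direct identifiers entirely
        if field.endswith("_date") and isinstance(value, str):
            # Keep only the year, per Safe Harbor's date rule
            match = re.match(r"(\d{4})", value)
            clean[field] = match.group(1) if match else None
        elif field == "zip" and isinstance(value, str):
            clean[field] = value[:3]  # 3-digit ZIP prefix
        else:
            clean[field] = value
    return clean

record = {
    "name": "Jane Doe",
    "mrn": "12345678",
    "admit_date": "2025-03-14",
    "zip": "02139",
    "diagnosis_code": "I21.9",
}
print(deidentify(record))  # identifiers dropped, date -> year, ZIP truncated
```

The tension the article describes is visible even in this toy: the fields that survive (diagnosis code, year, ZIP prefix) are far less clinically rich than the record you started with.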
A January 2025 HHS proposed rule would bring ePHI used in AI training under the HIPAA Security Rule. Security and legal teams should treat that rule as coming, even without finalization. Tempus, which went public in 2024, built its cancer genomics dataset around compliant data partnerships with health systems. That model works at scale. But it took years to build.
If your organization is preparing data infrastructure for AI, the groundwork matters. See AI data readiness: what enterprises need to fix before scaling AI models for the broader enterprise framing.
Why is EHR integration the hardest part of healthcare AI deployment?
EHR integration is the hardest part of healthcare AI deployment because Epic and Cerner control data access through proprietary ecosystems, and 70% of hospitals cite integration as their top AI adoption barrier.
HL7 FHIR was supposed to solve this. It helps, but 84% of hospitals using FHIR APIs still report seamless data exchange as a challenge. The reasons: inconsistent implementation and security concerns. Epic’s App Orchard marketplace offers a path for vetted vendors, but it’s a closed ecosystem. In 2025, Particle Health and CureIS Healthcare filed antitrust claims against Epic over its data practices. Those cases are ongoing.
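FHIR's promise is a uniform resource model regardless of which EHR sits behind the API. A minimal sketch of consuming one FHIR R4 Patient resource is below; the id and name values are invented, and in production the JSON would arrive over an OAuth-protected REST call such as `GET /Patient/{id}` against the Epic or Cerner FHIR endpoint rather than from a string.

```python
import json

# A minimal FHIR R4 Patient resource, shaped like what an Epic- or
# Cerner-hosted FHIR API returns. All values here are invented.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example-123",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1984-07-02"
}
"""

def display_name(patient: dict) -> str:
    """Build a display name from a FHIR Patient's first HumanName entry."""
    name = patient["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")])

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"
print(display_name(patient))  # Jane Doe
```

The standard specifies the shape; the "inconsistent implementation" complaint above is about which optional fields each EHR actually populates, which is why code like this still needs per-vendor defensive handling.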
Nuance DAX, Microsoft’s ambient clinical documentation AI, plugged directly into Epic workflows. Peer-reviewed cohort studies show it cuts documentation time by roughly 50%, saving up to 7 minutes per encounter. That worked because Microsoft had the leverage to build a deep EHR partnership. Most vendors don’t. If your AI tool needs a custom integration build, budget 3-6 months of engineering time before you see clinical value.
What healthcare AI use cases have actually reached production?
Radiology triage, ambient clinical documentation, and precision medicine are the three use cases where healthcare AI has the most validated, FDA-cleared production deployments at enterprise scale.
| Use Case | Vendor Example | FDA Status | EHR Integration |
|---|---|---|---|
| Radiology triage | Aidoc, Viz.ai | Cleared (510(k) / De Novo) | PACS integration, some FHIR |
| Pathology diagnosis | PathAI (AISight Dx) | FDA-cleared for primary diagnosis | Health system collaborations |
| Clinical documentation | Nuance DAX | Not SaMD-regulated | Deep Epic integration |
| Precision medicine / genomics | Tempus | Companion diagnostics clearance | Custom data partnerships |
Only 18% of healthcare organizations are ready to deploy AI in care delivery, according to Menlo Ventures’ 2025 report. Yet 85% have explored it. The gap is governance, not technology. HL7 launched an AI Office in July 2025 and hired its first Chief AI Officer to address interoperability. But standards take time. Your deployment timeline should not assume they are solved.
For the governance framework that sits above all three of these walls, see how to build an AI governance framework for production deployment.
What to do next
Before committing to a healthcare AI vendor or building in-house, confirm three things. First, the tool’s FDA clearance status and SaMD classification. Second, the HIPAA data use agreement terms for model training. Third, the specific EHR integration pathway: App Orchard, FHIR API, or custom build. All three affect your go-live date more than the model itself.
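The three checks above can be expressed as a simple gating structure for vendor evaluations. This is an illustrative sketch only; the field names and integration-path labels are made up, and real assessments involve legal and compliance sign-off, not a boolean.

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    """Pre-deployment facts to confirm. Field names are illustrative."""
    fda_cleared: bool           # 510(k)/De Novo verified, or tool is not SaMD
    samd_class_documented: bool # SaMD classification on file
    dua_covers_training: bool   # HIPAA data use agreement permits model training
    integration_path: str       # "app_orchard", "fhir_api", or "custom"

def ready_to_deploy(a: VendorAssessment) -> bool:
    """All three walls must be cleared before scheduling go-live."""
    regulatory_ok = a.fda_cleared and a.samd_class_documented
    privacy_ok = a.dua_covers_training
    integration_ok = a.integration_path in {"app_orchard", "fhir_api", "custom"}
    return regulatory_ok and privacy_ok and integration_ok

# A vendor with clearance, a training-compatible DUA, and a FHIR path passes:
print(ready_to_deploy(VendorAssessment(True, True, True, "fhir_api")))  # True
```

Note that a "custom" integration path passes the gate but, per the estimate above, should add 3-6 months to the go-live date.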
Read next: What it actually takes to move AI from proof of concept to production