Overview

The most pressing governance challenge facing health systems is no longer simply securing data — it is governing artificial intelligence that has moved from passive analytical support into active clinical and operational execution, according to Sunil Dadlani, chief information and digital transformation officer at Atlantic Health System in Morristown, New Jersey.

Dadlani contends that AI accountability frameworks are on a trajectory to become as foundational to compliance obligations as HIPAA itself, driven by the speed at which agentic and autonomous AI is being embedded in care delivery. Where earlier AI tools surfaced recommendations for clinicians to act upon, newer deployments are initiating actions directly within workflows.

The shift carries significant implications for covered entities and business associates, who face an expanding surface of regulatory risk at the intersection of existing HIPAA privacy and security rules and emerging AI-specific accountability requirements.

Key developments

Agentic AI is redefining the threat and compliance surface. AI systems that execute tasks — scheduling, documentation, clinical decision support, prior authorization — introduce new categories of PHI exposure that static risk analyses and legacy HIPAA security frameworks were not designed to address.

Regulatory convergence is accelerating. Federal and state-level AI governance activity, combined with HHS's ongoing HIPAA Security Rule modernization rulemaking, is creating pressure for covered entities to treat AI accountability not as an emerging best practice but as a near-term compliance requirement.

Governance gaps in procurement are a primary vulnerability. Health systems and independent practices frequently evaluate AI tools through a clinical or operational lens without subjecting them to the same business associate scrutiny, risk analysis rigor, and data-flow mapping required of other covered functions that handle PHI.

Workforce accountability structures have not kept pace. As AI systems assume roles previously held by staff, the human accountability chains that underpin HIPAA's administrative safeguard requirements become harder to trace — a structural problem that AI governance frameworks are designed to address.

Industry impact

The regulatory backdrop matters here. HHS's proposed updates to the HIPAA Security Rule, published in early 2025, signal that the agency views current technical safeguard standards as insufficient for modern threat environments — including those created by AI adoption. While the final rule has not yet been published, the proposed changes would impose more prescriptive requirements around risk analysis, access controls, and audit capability.

Separately, the Biden-era executive order on AI and subsequent federal agency guidance documents established accountability expectations — including documentation, explainability, and human oversight requirements — that overlap substantially with HIPAA's existing administrative and technical safeguard categories. Covered entities that treat these frameworks as independent compliance silos risk duplicating effort while still leaving gaps.

Healthcare data breaches remain disproportionately costly. IBM's 2024 Cost of a Data Breach Report placed the average cost of a healthcare breach at $9.77 million — the highest of any industry for the fourteenth consecutive year — a figure that reflects both regulatory exposure and operational disruption.

What this means for independent practices

Independent practices that treat AI governance as a future-compliance concern rather than a present-day operational necessity are likely to face compounding exposure. The pattern in HIPAA enforcement history has been consistent: regulatory expectations solidify around prevailing technology adoption, and OCR investigations tend to follow the breach data. Practices that build AI accountability into their compliance programs now — before enforcement guidance is finalized — are better positioned to demonstrate good-faith compliance posture if scrutiny arrives.

What would have prevented this

AI-specific risk analysis protocols: Standard HIPAA risk analyses were designed around human-operated systems. Extending the analysis framework to cover AI decision logic, training data provenance, and autonomous action scope closes a structural documentation gap.
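One way to operationalize that extension is to make the AI-specific fields first-class entries in the risk-analysis record, so a blank field surfaces as a documented gap rather than a silent omission. The sketch below is illustrative only: the field names and the `scheduling-agent` example are hypothetical, not drawn from any regulatory template.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an AI-system entry in a HIPAA risk analysis,
# extended with the fields named above (decision logic, training data
# provenance, autonomous action scope). Names are illustrative.
@dataclass
class AIRiskAnalysisEntry:
    system_name: str
    phi_categories: list           # which categories of PHI the system touches
    decision_logic: str            # e.g. "rules-based", "LLM with tool calls"
    training_data_provenance: str  # where the model's training data came from
    autonomous_action_scope: list  # actions it may take without human review
    open_gaps: list = field(default_factory=list)

    def has_documented_provenance(self) -> bool:
        # A blank provenance field is itself a documentation gap.
        return bool(self.training_data_provenance.strip())

entry = AIRiskAnalysisEntry(
    system_name="scheduling-agent",
    phi_categories=["appointments", "contact info"],
    decision_logic="LLM with tool calls",
    training_data_provenance="",
    autonomous_action_scope=["create appointment", "send reminder"],
)
if not entry.has_documented_provenance():
    entry.open_gaps.append("training data provenance undocumented")
```

The point of the structure is that the review can fail closed: any AI system whose entry carries open gaps is flagged before deployment rather than after an incident.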

Vendor due diligence and business associate oversight: Treating AI vendors with PHI access under the same contractual and monitoring standards applied to any other business associate — including right-to-audit clauses and breach notification obligations — limits downstream liability.

Role-based access controls (RBAC) applied to AI system permissions: Restricting the data an AI system can access to only what is necessary for its defined function limits the blast radius of a misconfiguration or compromise, consistent with HIPAA's minimum necessary standard.
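A minimal sketch of that minimum-necessary filtering might map each AI role to the record fields its function requires and strip everything else before the system sees the data. The role names and field lists here are hypothetical.

```python
# Illustrative sketch: scope an AI agent's data access to the minimum
# necessary for its defined function. Roles and fields are hypothetical.
ROLE_PERMITTED_FIELDS = {
    "scheduling-agent": {"patient_id", "name", "phone", "appointment_slot"},
    "coding-assistant": {"patient_id", "encounter_notes", "diagnosis_codes"},
}

def filter_to_minimum_necessary(role: str, record: dict) -> dict:
    """Return only the record fields the given role is permitted to see."""
    allowed = ROLE_PERMITTED_FIELDS.get(role, set())  # unknown role sees nothing
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "patient_id": "pt-123",
    "name": "A. Doe",
    "phone": "555-0100",
    "diagnosis_codes": ["E11.9"],
    "appointment_slot": "2025-06-01T09:00",
}
visible = filter_to_minimum_necessary("scheduling-agent", record)
# diagnosis_codes is excluded: clinical data is not needed for scheduling
```

Defaulting an unrecognized role to the empty set means a misconfigured or newly added agent gets no PHI at all until someone explicitly grants it a scope.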

Audit logging with anomaly detection: Maintaining detailed logs of AI system actions on PHI, with automated alerting for out-of-pattern behavior, provides both the investigative record HIPAA requires and early warning of unauthorized or erroneous access.
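The anomaly-detection half of that control can start very simply: compare each AI actor's access volume in a window against its historical baseline and alert on large deviations. The baseline figures, multiplier, and actor name below are hypothetical placeholders, not recommended thresholds.

```python
from collections import Counter

# Illustrative sketch: flag an AI actor whose PHI accesses in one window
# far exceed its historical baseline. Numbers are hypothetical.
BASELINE_ACCESSES_PER_HOUR = {"scheduling-agent": 50}
ANOMALY_MULTIPLIER = 3  # alert at 3x the baseline rate

def detect_anomalies(access_log: list) -> list:
    """access_log: list of (actor, patient_id) events in one window.
    Returns (actor, count) pairs that exceed the anomaly threshold."""
    counts = Counter(actor for actor, _ in access_log)
    alerts = []
    for actor, n in counts.items():
        baseline = BASELINE_ACCESSES_PER_HOUR.get(actor, 0)
        if baseline and n > baseline * ANOMALY_MULTIPLIER:
            alerts.append((actor, n))
    return alerts

# An agent that suddenly touches 200 records in an hour gets flagged.
log = [("scheduling-agent", f"pt-{i}") for i in range(200)]
alerts = detect_anomalies(log)
```

Real deployments would use per-actor statistical baselines and feed a SIEM rather than a fixed multiplier, but the log-then-alert shape is the same.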

Human-in-the-loop oversight requirements for high-stakes functions: Establishing mandatory human review checkpoints for AI outputs that affect clinical decisions, billing, or patient communication creates an accountability structure that maps directly onto HIPAA's administrative safeguard requirements.
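In code, that checkpoint is a routing decision: outputs in designated high-stakes categories go to a review queue instead of executing directly. The category names and callback shape below are an illustrative sketch, not a prescribed design.

```python
# Illustrative sketch: hold AI outputs in high-stakes categories for
# human sign-off; auto-execute the rest. Categories are hypothetical.
HIGH_STAKES = {"clinical_decision", "billing", "patient_communication"}

def route_ai_output(category: str, payload: dict, execute, queue_for_review):
    """Dispatch an AI output: queue high-stakes actions, execute the rest."""
    if category in HIGH_STAKES:
        return queue_for_review(category, payload)
    return execute(category, payload)

review_queue, executed = [], []

# A billing action is held for human review...
route_ai_output("billing", {"claim": "C-1"},
                execute=lambda c, p: executed.append((c, p)),
                queue_for_review=lambda c, p: review_queue.append((c, p)))

# ...while a low-stakes operational action runs immediately.
route_ai_output("room_assignment", {"room": "4B"},
                execute=lambda c, p: executed.append((c, p)),
                queue_for_review=lambda c, p: review_queue.append((c, p)))
```

Keeping the high-stakes set as explicit configuration gives compliance staff a single auditable place to see which AI actions require a human checkpoint.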

Read the original at Healthcare IT News