
Last Updated: April 30, 2026
Eighty percent of enterprise AI projects never reach production. The obstacle is rarely the model. It’s the absence of a control structure that regulators, auditors, and boards can actually examine.
An enterprise AI governance framework is the answer to that control structure problem. For regulated industries, the window to build one proactively is closing fast.
US federal and state regulators have moved steadily since 2023. The Federal Reserve’s SR 11-7 model risk guidance now applies squarely to AI systems in banking. The NAIC issued its Model AI Bulletin in December 2023. The Colorado AI Act, New York DFS Circular Letter No. 7, and Texas TRAIGA each set specific obligations for AI use in high-stakes decisions. The EU AI Act is phasing into force for companies with EU operations. India’s DPDP Act, the UAE PDPL, Singapore’s PDPA, and Canada’s AIDA direction extend those expectations globally.
Eighty-eight percent of enterprises use AI today, yet only 39% report measurable financial results. The gap sits squarely in governance and process, not in the quality of the underlying models.
What’s in this article
- What is an enterprise AI governance framework?
- Why does enterprise AI need governance now?
- What controls belong in an AI governance framework?
- How do AI governance frameworks map to regulations?
- Where does human-in-the-loop fit in the governance framework?
- How does AI governance scale to agentic systems?
- What does AI governance look like in regulated industries?
- What to do next
- Frequently Asked Questions
What is an enterprise AI governance framework?
An enterprise AI governance framework is a set of named controls, role assignments, and regulation mappings that span the full AI lifecycle from data sourcing through incident response.
An enterprise AI governance framework defines who owns each AI control, which regulation each control addresses, and what evidence auditors can inspect. It covers five lifecycle stages: data governance, model governance, deployment governance, monitoring governance, and incident response. Without this structure, AI programs accumulate untracked risk at each stage.
The word “framework” gets overused in AI governance writing. Here it means something specific: named controls with owners, mapped to named regulations, covering every stage where a model touches business decisions or personal data. Not a set of aspirational principles on a slide deck.
The 10/20/70 rule captures why this matters. Roughly 10% of AI program effort goes into the model itself, 20% into infrastructure, and 70% into the people, process, and governance work that determines whether the model actually runs safely in production. Most governance programs invert this ratio. They over-invest in model selection and under-invest in the control layer that keeps it auditable.
Why does enterprise AI need governance now?
Enterprise AI needs governance now because US federal banking regulators, state insurance commissioners, and state legislatures have issued specific, enforceable obligations, and enforcement timelines are active.
The NIST AI RMF 1.0, published in January 2023, and its 2024 Generative AI Profile gave US enterprises a structured risk vocabulary. Federal banking regulators followed. OCC Bulletins 2013-29 and 2023-17, combined with SR 11-7, require banks to apply model risk management discipline to AI systems used in credit, fraud, and AML decisions. HIPAA and HITECH apply to any AI system that processes protected health information, regardless of the model’s purpose.
At the state level, the pace accelerated through 2024 and into 2025. Colorado’s AI Act targets high-risk consequential decisions. New York DFS Circular Letter No. 7 and Part 500 set specific expectations for insurers and financial services firms using AI. Texas TRAIGA and Utah’s AI Policy Act extended similar frameworks. California’s CCPA/CPRA imposes data rights obligations on AI systems that process consumer data at scale.
For enterprises with EU exposure, the EU AI Act’s prohibited-use and high-risk-system provisions carry real operational weight, alongside GDPR’s existing automated-decision-making rules. DORA adds ICT third-party risk requirements for financial entities. India’s DPDP Act, UAE PDPL, UAE DIFC Data Protection Law, Singapore MAS FEAT criteria and PDPA, and Canada’s AIDA direction extend similar obligations to regions where US enterprises commonly operate.
Companies operating across 40 or more jurisdictions routinely discover that their AI programs weren’t built to satisfy all of these frameworks simultaneously. Building a governance framework retroactively, under regulatory pressure, costs significantly more than building one correctly during deployment.
What controls belong in an AI governance framework?
An AI governance framework needs 15 named controls grouped across five lifecycle categories: data governance, model governance, deployment governance, monitoring governance, and incident response.
The table below names each control and its primary governance purpose. This is a reference structure, not a compliance checklist. Specific obligations vary by jurisdiction, industry, and risk tier.
| Category | Control | Primary governance purpose |
|---|---|---|
| Data governance | Data lineage tracking | Documents training data provenance for regulatory audit |
| | Bias and fairness assessment | Detects discriminatory patterns before training and post-deployment |
| | Data access controls | Restricts PII and PHI access to authorized model pipelines |
| Model governance | Model inventory and tiering | Classifies each model by risk level to prioritize oversight resources |
| | Model documentation (model card) | Records purpose, training data, performance benchmarks, and known limitations |
| | Independent model validation | SR 11-7 requires validation by a function independent of model development |
| | Explainability requirements | Defines minimum explanation standards for consequential decisions (FCRA, ECOA) |
| Deployment governance | Human-in-the-loop (HITL) review | Requires human sign-off on specified decision types before action is taken |
| | Use-case approval gate | Risk and compliance sign-off before any new AI use case reaches production |
| | Third-party AI due diligence | Extends model risk management to vendor AI (DORA, OCC 2013-29) |
| Monitoring governance | Model performance monitoring | Tracks drift, accuracy, and fairness metrics against approved thresholds |
| | Automated alert and escalation | Triggers human review when performance metrics breach defined bounds |
| | Audit log integrity | Maintains tamper-evident records of model decisions and inputs |
| Incident response | AI incident classification | Defines severity tiers for AI failures (wrong output vs. safety event) |
| | Rollback and model suspension | Establishes the process and authority to suspend a model during an incident |
This 15-control structure is the operational backbone of an enterprise AI governance program. Each control needs an owner, a review cadence, and a way to produce evidence on demand.
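To make "owner, review cadence, evidence" concrete, here is a minimal sketch of what a machine-readable registry entry for one control could look like in Python. The field names, the example owner role, and the one-year cadence are illustrative assumptions, not prescriptions from any regulation.

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceControl:
    """One named control in an enterprise AI governance registry."""
    name: str                  # e.g. "Independent model validation"
    category: str              # one of the five lifecycle categories
    owner: str                 # an accountable role, not a team alias
    regulations: list[str]     # named regulations this control addresses
    review_cadence_days: int   # how often evidence must be refreshed
    evidence: list[str] = field(default_factory=list)  # audit artifacts

# Illustrative entry; the role title and cadence are assumptions.
independent_validation = GovernanceControl(
    name="Independent model validation",
    category="Model governance",
    owner="Head of Model Risk (independent of model development)",
    regulations=["SR 11-7", "OCC 2023-17"],
    review_cadence_days=365,
    evidence=["validation report", "challenger model results"],
)

def overdue_controls(registry, days_since_review):
    """Flag controls whose evidence is older than their review cadence."""
    return [
        c for c in registry
        if days_since_review.get(c.name, 0) > c.review_cadence_days
    ]
```

A registry in this shape is what lets a governance team answer "who owns this control, and when was it last evidenced?" on demand, rather than reconstructing the answer from documents during an exam.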
How do AI governance frameworks map to regulations?
Each AI governance control maps to one or more named regulations, with US frameworks carrying the highest immediate compliance weight for most enterprises.
| Framework / Regulation | Primary jurisdiction | Key governance areas addressed |
|---|---|---|
| NIST AI RMF 1.0 + Gen AI Profile | US (voluntary; widely adopted) | Risk identification, measurement, management, governance across full AI lifecycle |
| SR 11-7 + OCC 2013-29 / 2023-17 | US banking / federal | Model inventory, independent validation, ongoing monitoring, vendor oversight |
| HIPAA / HITECH | US healthcare / federal | PHI access controls, minimum necessary principle, breach notification |
| NAIC Model AI Bulletin (Dec 2023) | US insurance (state-level adoption) | Insurer accountability for third-party AI, explainability, adverse-action disclosure |
| Colorado AI Act / NY DFS Circular No. 7 / Texas TRAIGA | US state | High-risk decision disclosures, algorithmic impact assessments, HITL obligations |
| SOX / GLBA Safeguards Rule / FCRA | US federal | Financial reporting integrity, data security, adverse-action notice accuracy |
| EU AI Act | EU (applies to US firms with EU operations) | High-risk system registration, conformity assessments, transparency requirements |
| GDPR / DORA | EU | Automated decision-making rights (GDPR Art. 22); ICT third-party risk (DORA) |
| India DPDP Act 2023 / RBI AI guidance | India | Data principal rights, consent requirements, RBI model risk expectations |
| UAE PDPL / DIFC Data Protection Law | UAE / DIFC | Data subject rights, cross-border transfer controls, AI accountability |
| Singapore PDPA + MAS FEAT | Singapore | Fairness, ethics, accountability, transparency criteria for financial AI |
| Canada PIPEDA + AIDA direction | Canada | High-impact AI system obligations, transparency, human oversight |
| ISO/IEC 42001:2023 | International | AI management system certification standard, cross-jurisdictional anchor |
A few practical notes. NIST AI RMF is voluntary, but US agencies increasingly reference it in enforcement guidance, so treating it as a de facto baseline is sensible. Specific article or clause requirements vary by jurisdiction and are best confirmed with legal counsel. ISO/IEC 42001 is the most useful cross-jurisdictional anchor because its structure maps to both NIST and EU AI Act requirements.
Where does human-in-the-loop fit in the governance framework?
Human-in-the-loop (HITL) is a deployment-governance control, not a separate framework. It defines which decision types require human review before a model’s output triggers action.
Automation bias is the specific failure mode a well-designed HITL control must guard against. It occurs when a human reviewer defers uncritically to the model’s recommendation, defeating the control’s purpose. Multiple US frameworks point to this risk. The NAIC Model AI Bulletin requires insurers to maintain human accountability for adverse underwriting decisions. FCRA adverse-action rules require accurate, human-verifiable explanations for credit denials. The Colorado AI Act sets HITL-adjacent disclosure and review requirements for consequential automated decisions.
EU AI Act high-risk system rules, India’s DPDP accountability obligations, Singapore’s MAS FEAT criteria, and Canada’s AIDA direction address automation bias in parallel ways across their respective jurisdictions.
Designing HITL correctly means specifying the decision types that need review, the minimum review criteria (what the reviewer must evaluate, not just acknowledge), escalation paths when the reviewer disagrees with the model, and audit log requirements that prove review actually occurred. A checkbox labeled “approved” with no documented rationale doesn’t satisfy SR 11-7’s independent validation expectations or the NAIC’s accountability requirements.
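As one possible illustration, here is a minimal Python sketch of a review gate that enforces those requirements: it rejects sign-offs that carry no documented rationale and routes reviewer disagreement to an escalation path. The decision types, field names, and `escalate` hook are hypothetical placeholders for whatever your policy and case-management system define.

```python
from datetime import datetime, timezone

# Decision types that must pass human review before any action fires.
# This set is an illustrative assumption; yours comes from policy.
HITL_REQUIRED = {"credit_denial", "claim_denial", "underwriting_adverse"}

def record_review(audit_log, decision_type, model_output,
                  reviewer, verdict, rationale):
    """Append a review entry; reject sign-offs with no documented rationale."""
    if decision_type in HITL_REQUIRED and not rationale.strip():
        # A bare "approved" checkbox is exactly the automation-bias
        # failure mode this control exists to prevent.
        raise ValueError("HITL review requires a documented rationale")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_type": decision_type,
        "model_output": model_output,
        "reviewer": reviewer,
        "verdict": verdict,        # "approve", "override", or "escalate"
        "rationale": rationale,
    }
    audit_log.append(entry)
    if verdict == "override":
        # Disagreement routes to a defined escalation path, not silence.
        escalate(entry)
    return entry

def escalate(entry):
    """Placeholder hook; wire this to your case-management system."""
    print(f"Escalated: {entry['decision_type']} overridden by {entry['reviewer']}")
```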
How does AI governance scale to agentic systems?
AI governance scales to agentic systems by extending four controls: agent-level permission scopes, action-by-action audit trails, explicit boundary definitions, and incident response procedures for autonomous failure modes.
Standard model governance assumes a human submits a query and a model returns a response. Agentic AI breaks that assumption. An agent can browse the web, write and execute code, send emails, call external APIs, and trigger downstream workflows, all without a human approving each step. The governance gap isn’t theoretical. An agent with access to a customer database and an email API can act at scale before any human notices a problem.
The four agentic governance controls extend the standard framework:
- Permission scopes: Each agent gets explicit, minimal access rights. Access is scoped to the task, not to the full data environment. This is the agentic equivalent of the principle of least privilege in ISO/IEC 27001 (see the sketch after this list).
- Action-by-action audit logs: Every external action an agent takes, not just the final output, is logged with a timestamp, triggering prompt, and the authorization chain that permitted the action.
- Boundary definitions: Specific action categories (financial transactions above a threshold, communications to external parties, schema modifications) either require HITL approval or are blocked outright.
- Incident response for autonomous failure: An agentic incident is not the same as a standard software bug. Response procedures cover agent suspension, action rollback where possible, affected-party notification, and audit trail preservation for regulatory review.
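Here is a minimal sketch of how the first three controls might compose in code, assuming a simple action-string vocabulary: each agent action is checked against its scope and the boundary lists, then written to a hash-chained log so the trail is tamper-evident. The agent names, scopes, and action categories are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

# Least-privilege scopes per agent; names are illustrative.
AGENT_SCOPES = {
    "invoice-agent": {"read:invoices", "send:internal_email"},
}

# Boundary definitions: blocked outright, or held for HITL approval.
BLOCKED = {"modify:schema"}
NEEDS_HUMAN = {"send:external_email", "payment:over_threshold"}

def authorize(agent_id, action, action_log, human_approved=False):
    """Check an agent action against scopes and boundaries, then log it."""
    if action in BLOCKED:
        allowed = False
    elif action in NEEDS_HUMAN and not human_approved:
        allowed = False  # held for human-in-the-loop review
    else:
        allowed = action in AGENT_SCOPES.get(agent_id, set())
    # Hash-chain each entry so the trail is tamper-evident: altering
    # any past record breaks every subsequent hash.
    prev_hash = action_log[-1]["hash"] if action_log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "allowed": allowed,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    action_log.append(entry)
    return allowed
```

The hash chain is the same pattern named in the audit log integrity control above, applied at per-action granularity rather than per-decision.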
NIST AI RMF’s Generative AI Profile addresses some of these patterns. DORA’s ICT incident reporting requirements apply when an agentic failure meets the materiality threshold. State AI laws are still catching up to agentic architectures, but the underlying accountability principle is the same: the deploying organization bears responsibility for the agent’s actions.
What does AI governance look like in regulated industries?
AI governance in regulated industries applies the same 15-control structure but weights different controls by sector, based on the specific regulatory obligations and failure modes each industry faces.
Banking, financial services, and insurance (BFSI). SR 11-7 and OCC 2013-29 make model inventory, independent validation, and ongoing monitoring the highest-priority controls. NAIC obligations add insurer-specific accountability requirements. Basel III and CCAR stress-testing rules apply when AI models feed risk calculations. FCRA and ECOA set explanation requirements for adverse decisions. A BFSI enterprise operating across 40 jurisdictions needs a compliance automation layer on top of the control framework, or manual tracking becomes the bottleneck.
Healthcare. HIPAA, HITECH, and 42 CFR Part 2 dominate. Any AI system that touches protected health information needs data access, data lineage, and breach-notification controls built into the deployment architecture, not added later. AI-enabled prior authorization tools need HITL controls that satisfy both HIPAA’s minimum-necessary principle and CMS program integrity requirements. One healthcare enterprise that automated prior authorization processing cut processing time from five days to 48 hours, but only after redesigning its data access controls to meet HIPAA’s minimum-necessary scope.
Gaming and hospitality. Title 31 BSA and FinCEN requirements apply to AI used in AML and suspicious-activity reporting. Responsible gambling AI tools face state-level gaming commission oversight. The NAIC Model AI Bulletin applies to any insurance product the gaming operator offers. Player analytics tools that influence marketing decisions also face FTC Section 5 scrutiny under the unfair or deceptive acts or practices standard.
Manufacturing. ISO/IEC 42001 and ISO/IEC 27001 are the most common anchors. AI systems in quality control, predictive maintenance, or supply chain optimization face fewer direct AI-specific regulations than BFSI or healthcare, but product liability exposure for AI-driven defects is an active legal risk. Model documentation and audit log controls are the most important starting points for manufacturing governance programs.
What to do next
Start with a governance gap assessment. Map your current AI use cases against the 15-control framework above. Note which controls exist, which are partially in place, and which are absent. That gap map becomes the input to a prioritized build plan.
The most common findings: model inventory and use-case approval gates are missing entirely, while monitoring controls exist only for production-critical systems. HITL review is documented in policy but not enforced in process. Incident response procedures treat AI failures as standard software incidents rather than model-specific events.
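As a sketch of what a machine-readable gap map can look like: score each of the 15 controls as present, partial, or absent, then sort absent controls to the top of the build plan. The statuses below are hypothetical, not benchmark findings.

```python
# Status per control: "present", "partial", or "absent".
# Example statuses are hypothetical illustrations.
gap_map = {
    "Model inventory and tiering": "absent",
    "Use-case approval gate": "absent",
    "Human-in-the-loop (HITL) review": "partial",
    "Model performance monitoring": "partial",
    "AI incident classification": "present",
    # ...remaining controls omitted for brevity
}

def build_plan(gap_map):
    """Order the build plan: absent controls first, then partial ones."""
    priority = {"absent": 0, "partial": 1, "present": 2}
    return sorted(gap_map, key=lambda control: priority[gap_map[control]])

for control in build_plan(gap_map):
    print(f"{gap_map[control]:>8}  {control}")
```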
Three concrete next steps:
- Take the 10-category AI Readiness Assessment to score your governance program and get a gap diagnosis.
- Download the Enterprise AI Governance Reference Framework whitepaper for a detailed implementation guide with control specifications and a regulation mapping appendix.
- Book time with Scadea’s AI governance team to walk through the gap assessment results.
Frequently Asked Questions
What is the difference between an AI governance framework and AI ethics principles?
AI ethics principles are aspirational statements: fairness, transparency, accountability. An AI governance framework is operational. It’s named controls, role owners, regulation mappings, and audit evidence. Ethics principles may inform the framework’s design, but they’re not a substitute for it. A framework without operational controls isn’t a governance program.
Which US regulation requires AI governance most urgently for banks?
SR 11-7, the Federal Reserve’s model risk management guidance (issued in parallel by the OCC as Bulletin 2011-12), is the most directly enforceable framework for US banking organizations. It requires model inventory, independent validation, and ongoing performance monitoring for all models used in material business decisions. OCC Bulletin 2023-17 reinforced its application to AI and machine learning models specifically. Banks under SR 11-7 scope that haven’t applied it to AI models are exposed to supervisory criticism.
Does NIST AI RMF compliance satisfy EU AI Act requirements?
NIST AI RMF and the EU AI Act share structural similarities but aren’t interchangeable. NIST AI RMF is a voluntary risk management framework with no enforcement mechanism. The EU AI Act is binding regulation with conformity assessment requirements, incident reporting obligations, and prohibited-use provisions. An enterprise using NIST AI RMF as its governance base will have a head start on EU AI Act alignment, but specific EU Act obligations (registration, technical documentation, post-market monitoring) need additional work. ISO/IEC 42001 is the more direct cross-jurisdictional anchor.
What is human-in-the-loop (HITL) and when is it legally required?
Human-in-the-loop is a deployment governance control that requires a qualified human to review a model’s output before it triggers a consequential action. No single law universally mandates it, but multiple US regulations address related obligations. FCRA requires accurate, human-verifiable adverse-action notices for credit decisions. The Colorado AI Act requires disclosures and human review rights for high-risk consequential decisions. NAIC guidance requires insurer accountability for AI-driven underwriting decisions. For enterprises with EU exposure, the EU AI Act requires human oversight for high-risk AI systems, and GDPR Article 22 restricts fully automated decisions with legal or similarly significant effects.
How many AI controls does a typical enterprise governance program need?
A baseline enterprise AI governance program covers 15 controls across five lifecycle categories: data governance (3 controls), model governance (4 controls), deployment governance (3 controls), monitoring governance (3 controls), and incident response (2 controls). Not every control applies at equal weight across all use cases. Risk-tiering the model inventory lets governance teams focus the most intensive controls on the highest-stakes applications.
What is the NAIC Model AI Bulletin and who does it apply to?
The NAIC Model AI Bulletin, issued in December 2023, is guidance adopted by state insurance commissioners that sets expectations for insurers using AI in underwriting, claims, and rating decisions. It applies to licensed insurers and extends to third-party AI vendors used by those insurers. Key obligations include maintaining accountability for AI outcomes (even when the model is vendor-supplied), ensuring explainability for adverse decisions, and conducting ongoing monitoring. State adoption and enforcement vary; insurers should check the adoption status in each state where they operate.
How does AI governance apply to third-party AI vendors?
Third-party AI vendor governance is a named control in the deployment governance category. US frameworks are explicit: SR 11-7 applies model risk management requirements to vendor models used in material decisions. OCC 2013-29 extends third-party risk management to AI service providers. NAIC’s Model AI Bulletin holds the insurer accountable for vendor AI outcomes. DORA extends ICT third-party risk requirements to AI vendors used by EU financial entities. “The vendor is responsible” isn’t a defensible position with regulators. The deploying enterprise owns the risk.
What is ISO/IEC 42001 and how does it relate to AI governance?
ISO/IEC 42001:2023 is an international standard for AI management systems. It defines requirements for establishing, implementing, maintaining, and improving an AI management system within an organization. For enterprises operating across multiple jurisdictions, it serves as a cross-border governance anchor because its structure maps to both NIST AI RMF and EU AI Act requirements. Certification against ISO/IEC 42001 can simplify regulatory evidence packages in India, UAE, Singapore, and Canada, where regulators reference international standards in their guidance.