Definition: AI governance in APAC refers to the policies, controls, and oversight structures organisations use to deploy artificial intelligence responsibly within and across Asia-Pacific regulatory jurisdictions. In 2025 and 2026, this means navigating three interlocking regimes: Singapore’s Monetary Authority (MAS) AI Risk Management Guidelines, IMDA’s Model AI Governance Framework for Generative AI, and the EU AI Act’s extraterritorial reach into any APAC enterprise whose AI outputs touch European users.

The Regulatory Pressure Is Already at Your Door

Three major AI regulatory frameworks (MAS, IMDA, and the EU AI Act) now apply simultaneously to many APAC enterprises, creating compliance obligations that cannot be managed in silos. The pressure is not theoretical.

According to McKinsey’s State of AI 2025, 78% of organisations now regularly use AI, up from 55% in 2023. That same report found organisations now manage an average of four AI-related risks, double the figure from 2022. The adoption curve has outrun governance maturity everywhere, including APAC.

Meanwhile, Gartner (February 2026) projects that by 2030, fragmented AI regulation will cover 75% of global economies. Organisations with dedicated AI governance platforms are already 3.4 times more likely to achieve high effectiveness than peers without them. The gap between policy and practice is where legal exposure lives.

“The gap between AI adoption and governance maturity is not a technology problem; it is a risk management problem with a regulatory deadline attached.”

AI governance in APAC is complicated by the fact that no single framework covers every risk. Singapore’s MAS focuses on financial institutions. IMDA’s Model AI Governance Framework targets the broader enterprise and technology sector. The EU AI Act reaches any company whose AI outputs affect European users, regardless of where the headquarters sit. Executives who treat these as separate workstreams will pay twice: once in duplicated effort and once in missed controls.

MAS AI Risk Management: What Singapore’s Central Bank Now Requires

MAS requires all financial institutions to maintain an AI inventory, conduct risk-materiality assessments, establish board-level oversight, and implement lifecycle controls covering data governance, fairness, and third-party API risk. This is the clearest board-level mandate any APAC regulator has issued.

In December 2024, MAS published its AI Model Risk Management information paper following a thematic review of major banks. In November 2025, MAS issued a formal consultation on AI Risk Management Guidelines that covers every financial institution in Singapore, including branches and subsidiaries of foreign parents. These are not aspirational principles. They are supervisory expectations.

The Guidelines organise obligations into three areas. First, board and senior management must own AI risk governance: establishing frameworks, maintaining AI inventories, conducting risk-materiality assessments, and fostering an appropriate risk culture. Where AI risk is material, MAS proposes a dedicated cross-functional committee. Second, FIs must implement robust identification processes, maintain accurate AI inventories, and assess risk across impact, complexity, and reliance dimensions. Third, lifecycle controls must cover data management, fairness, explainability, human oversight, third-party risk, evaluation, testing, and change management throughout the AI system’s life.
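The inventory and risk-materiality assessment described above can be sketched as a simple data structure. This is a minimal illustration, not anything MAS prescribes: the 1-to-3 scoring scale, the `max()` aggregation rule, and all names are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical scoring scheme: MAS proposes assessing AI risk materiality
# across impact, complexity, and reliance, but does not prescribe a formula.
# The 1-3 scale and max() aggregation below are illustrative assumptions.

@dataclass
class AISystemRecord:
    name: str
    owner: str       # named accountable owner, per inventory expectations
    impact: int      # 1 (low) .. 3 (high): harm if the system errs
    complexity: int  # 1 .. 3: model opacity, data pipeline complexity
    reliance: int    # 1 .. 3: degree of autonomous decision-making

    def materiality(self) -> str:
        # Conservative aggregation: the highest single dimension drives the
        # rating, so a simple but heavily relied-upon model is still flagged.
        score = max(self.impact, self.complexity, self.reliance)
        return {1: "low", 2: "medium", 3: "high"}[score]

credit_model = AISystemRecord("retail-credit-scoring", "Head of Retail Risk",
                              impact=3, complexity=2, reliance=3)
print(credit_model.materiality())  # high -> dedicated cross-functional committee
```

A "high" result under this kind of scheme is what would trigger the dedicated cross-functional committee MAS proposes for material AI risk.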

“Where AI risk exposure is material, MAS expects a dedicated cross-functional committee, not a line item in an existing risk register.”

The MAS FEAT principles (Fairness, Ethics, Accountability, and Transparency) underpin all of this. But FEAT is now scaffolding, not the building. The 2024-2025 Guidelines add operational requirements on top: input validation, API authentication, network segmentation, role-based access, multi-factor authentication for privileged accounts, and separation of duties between model training and testing teams. In March 2026, MAS concluded Project MindForge Phase 2, publishing an AI Risk Management Toolkit developed with 24 institutions. It translates the Guidelines into an operationalisation handbook. CROs should treat it as a readiness checklist.

IMDA’s Model AI Governance Framework: The Nine Dimensions Every Enterprise Must Know

IMDA’s GenAI framework organises enterprise AI governance across nine dimensions from accountability and data quality to safety alignment and AI for public good, providing a testable, voluntary baseline aligned with OECD AI Principles and interoperable with EU and UK assurance models.

In May 2024, IMDA and the AI Verify Foundation published the Model AI Governance Framework for Generative AI, developed with input from over 70 global organisations including OpenAI, Google, Microsoft, and Anthropic. Its nine governance dimensions are: Accountability, Data, Trusted Development and Deployment, Incident Reporting, Testing and Assurance, Security, Content Provenance, Safety and Alignment Research and Development, and AI for Public Good.

Some dimensions apply to every organisation deploying GenAI: Accountability, Data, and Trusted Development and Deployment are non-negotiable baselines. Others, such as Safety and Alignment R&D and AI for Public Good, are opportunities to contribute to a trusted ecosystem rather than hard compliance obligations. IMDA’s framework is explicit that the goal is a global minimum governance standard, not a prescriptive checklist.

The practical tool attached to the framework is AI Verify, an open-source testing toolkit that validates AI system performance against the framework’s dimensions through standardised tests. In February 2025, IMDA launched the Global AI Assurance Pilot to codify testing norms and build international interoperability. AI Verify test results are now being aligned with OECD AI Principles (2024) and the GPAI Code of Practice, which means passing AI Verify brings an organisation meaningfully closer to EU AI Act conformity evidence.

The EU AI Act’s Extraterritorial Reach: Why APAC Cannot Ignore Brussels

The EU AI Act applies to any organisation, regardless of headquarters, whose AI system outputs reach EU users, meaning APAC enterprises with European customers, partners, or subsidiaries face binding obligations, including fines up to EUR 35 million or 7% of global turnover.

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024, and its phased implementation is now well underway. Prohibited AI practices became enforceable in February 2025. GPAI governance rules took effect in August 2025. High-risk AI system requirements become fully enforceable in August 2026. The full timeline gives organisations a short runway, and APAC boards that have not yet mapped their AI portfolios against EU risk tiers are already behind.

“Like GDPR before it, the EU AI Act is extraterritorial by design. If your AI system’s output touches a European user, Brussels has jurisdiction, regardless of where your servers sit.”

This extraterritorial reach, often called the Brussels Effect, means a Singaporean fintech firm processing EU customer credit applications, a Malaysian logistics operator routing EU freight with AI, or a Tokyo-headquartered bank using AI risk models for EU entities is in scope. The compliance cost is real: a 2024 European Commission impact assessment estimated the average compliance cost for a single AI product at EUR 29,277 against a baseline development cost of EUR 170,000. That is approximately 17% overhead, manageable when planned, punishing when retrofitted.
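The overhead figure cited above follows directly from the two Commission estimates:

```python
# Quick check of the compliance-overhead figure derived from the 2024
# European Commission impact assessment cited in the text.
compliance_cost = 29_277  # EUR, estimated compliance cost per AI product
baseline_cost = 170_000   # EUR, baseline development cost per AI product

overhead = compliance_cost / baseline_cost
print(f"{overhead:.1%}")  # ~17.2% of baseline development cost
```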

The Act’s risk classification system divides AI into four tiers. Unacceptable-risk systems, such as social scoring and prohibited biometric uses, have been banned outright since February 2025. High-risk systems covering credit scoring, employment, education, critical infrastructure, and law enforcement face the full compliance burden: Article 9 risk management systems, Article 12 audit logging, Article 14 human oversight, and Article 43 conformity assessments. Limited-risk systems face transparency obligations only. Minimal-risk systems have no specific obligations.
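The four-tier structure can be sketched as a lookup, though a real scoping exercise requires legal review of the Act’s annexes rather than a table. The use-case names below are illustrative assumptions; the tier obligations follow the article text.

```python
# Illustrative mapping of the EU AI Act's four risk tiers. The use-case
# labels are hypothetical examples; classifying a real system requires
# legal analysis of Annexes I-III, not a dictionary lookup.

PROHIBITED = {"social_scoring", "prohibited_biometrics"}
HIGH_RISK = {"credit_scoring", "employment", "education",
             "critical_infrastructure", "law_enforcement"}
LIMITED_RISK = {"chatbot", "deepfake_generation"}  # transparency duties only

def classify(use_case: str) -> str:
    """Return the risk tier and headline obligations for a use case."""
    if use_case in PROHIBITED:
        return "unacceptable: banned since February 2025"
    if use_case in HIGH_RISK:
        return "high: Articles 9, 12, 14 and 43 apply"
    if use_case in LIMITED_RISK:
        return "limited: transparency obligations only"
    return "minimal: no specific obligations"

print(classify("credit_scoring"))  # high: Articles 9, 12, 14 and 43 apply
```

The point of even a crude mapping like this is that it forces every system in the AI inventory to be assigned a tier, which is the starting condition for the 90-day actions discussed later.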

For APAC CROs and legal teams, the most important insight is that the EU AI Act does not care whether an APAC enterprise has read it. Enforcement has been ramping up since mid-2025, and national authorities in EU member states can already accept injunction applications today, without waiting for the full penalty regime to take effect.

Framework Comparison: MAS vs IMDA vs EU AI Act

| Framework | Key Strength | Best Used When | Penalty / Enforcement |
| --- | --- | --- | --- |
| MAS AI Risk Management Guidelines | Sector-specific depth for financial services; board-level accountability; integrates with MAS supervisory cycle | Deploying AI in regulated Singapore financial services; building board AI risk reporting | Supervisory action; licence conditions (no fixed fine schedule published) |
| IMDA Model AI Governance Framework (GenAI) | Voluntary, testable via AI Verify; nine-dimension structure mapped to OECD principles; interoperable with EU and UK standards | Any APAC enterprise seeking a globally credible, structured governance baseline outside the financial sector | Voluntary; drives commercial trust and procurement eligibility; no regulatory penalty |
| EU AI Act (Regulation 2024/1689) | Binding, risk-tiered global baseline; extraterritorial reach; largest enterprise penalty exposure of any AI law worldwide | Any organisation whose AI outputs touch EU users; APAC exporters of AI-enabled products or services to Europe | Up to EUR 35M or 7% of global turnover (prohibited practices); EUR 15M or 3% (other violations) |

Note: ISO/IEC 42001 (AI Management System standard) and the NIST AI Risk Management Framework sit beneath all three as voluntary international references that reduce evidence-gathering effort when demonstrating conformity.

Implementation Guidance: Three Actions for CROs in the Next 90 Days

The three highest-priority actions are: (1) complete an AI system inventory mapped to risk tiers; (2) establish a cross-functional AI governance committee with board visibility; (3) run AI Verify or equivalent testing against IMDA’s nine dimensions. These three actions simultaneously advance compliance across all three frameworks.

Action 1: Build and publish an AI inventory. McKinsey (2025) found that only 28% of CEOs take direct responsibility for AI governance oversight. A published AI inventory with named owners and risk-tier classifications changes that. It forces the conversation about what is high-risk under EU AI Act Article 6, what is material risk under MAS Guidelines, and what is untested under IMDA’s nine dimensions.

Action 2: Convene a cross-functional AI governance committee. MAS Guidelines propose this explicitly where AI risk is material. But the same committee structure satisfies EU AI Act Article 17 quality management obligations and IMDA’s Accountability dimension. Legal, compliance, technology, data, and business representatives must all have seats. A committee with no legal seat will miss extraterritorial exposure. A committee with no technology seat will miss model drift and third-party API risk.

Action 3: Validate with AI Verify. IMDA’s AI Verify toolkit is open-source and produces standardised test evidence against the nine governance dimensions. In 2025, IMDA aligned AI Verify outputs with OECD AI Principles and the GPAI Code of Practice. That alignment means AI Verify evidence now carries weight as conformity documentation in EU AI Act Article 9 risk management records. One testing exercise, two regulatory uses.

FAQ: AI Governance in APAC

Does the EU AI Act apply to my company if I am based in Singapore?

Yes, if your AI system outputs affect people in the EU. The Act applies to providers and deployers regardless of where they are headquartered, as long as the AI output reaches EU users. This extraterritorial scope, mirroring GDPR, means a Singapore bank with EU customers, a Malaysian logistics firm routing EU freight, or any APAC SaaS company with EU subscribers must assess their AI portfolio against EU risk tiers immediately.

What does MAS require that IMDA does not?

MAS is prescriptive and supervisory; IMDA is voluntary and testable. MAS requires mandatory board-level governance, AI inventories with risk-materiality assessments, cross-functional AI committees for material risk, and specific lifecycle controls on data, fairness, explainability, and third-party APIs for all Singapore financial institutions. IMDA’s framework is a voluntary best-practice standard with an open-source testing toolkit, powerful for building commercial trust but not backed by regulatory enforcement.

What is the first step to comply with all three frameworks at once?

Build a unified AI inventory. Every framework (MAS Guidelines, IMDA’s Accountability dimension, and EU AI Act Article 9) begins with knowing what AI systems are running, who owns them, and what risk tier they occupy. A well-structured AI inventory, enriched with risk classifications and data lineage, becomes the single source of truth from which board reports, audit logs, and conformity assessments all draw. Build it once; use it for all three regimes.

How does AI Verify help with EU AI Act compliance?

AI Verify generates standardised test evidence across IMDA’s nine governance dimensions, which are now aligned with OECD AI Principles and the GPAI Code of Practice, both of which inform EU AI Act conformity guidance. While AI Verify is not a formal EU conformity assessment body, its structured test outputs serve as documented evidence for EU AI Act Article 9 risk management systems and support the technical documentation requirements under Article 11. Many APAC organisations use AI Verify as the first layer of their EU compliance evidence package.

What are the penalties for non-compliance with the EU AI Act for an APAC company?

Penalties reach up to EUR 35 million or 7% of global annual turnover for violations of prohibited AI practices, whichever is higher. Other violations carry fines up to EUR 15 million or 3% of global turnover. Misstatements to regulators carry EUR 7.5 million or 1.5% fines. Crucially, enforcement ramps from August 2025 for GPAI providers and August 2026 for most high-risk AI deployers. National courts in EU member states can already grant injunctions today without waiting for the enforcement ramp.

Governance Is Now a Competitive Asset

Three insights stand above all others in this regulatory moment.

First, the regulatory perimeter is larger than most APAC boards realise. MAS governs Singapore financial institutions directly. IMDA provides the governance vocabulary for the broader enterprise sector. The EU AI Act governs any APAC enterprise whose AI output touches EU users, and that category includes far more organisations than have yet acknowledged it.

Second, the compliance workload is manageable if approached as a unified architecture. The AI inventory, risk-tier classification, human-oversight controls, and audit logging that MAS requires are the same controls EU AI Act Articles 9, 12, and 14 demand and IMDA’s nine dimensions assume. Build one control set, map it to all three regimes, and the duplication cost disappears.
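The "build one control set, map it to all three regimes" idea can be made concrete as a control library with per-regime mappings. The control names and mappings below are illustrative assumptions; the regime references follow the article text.

```python
# Hypothetical unified control library. Each control is built once and
# mapped to the evidence it produces under each of the three regimes.
# Control names and mappings are assumptions for illustration.

CONTROLS = {
    "ai_inventory": {
        "MAS": "AI inventory with risk-materiality assessment",
        "EU": "Article 9 risk management records",
        "IMDA": "Accountability dimension",
    },
    "audit_logging": {
        "MAS": "lifecycle controls (evaluation, testing, change management)",
        "EU": "Article 12 audit logging",
        "IMDA": "Testing and Assurance dimension",
    },
    "human_oversight": {
        "MAS": "lifecycle controls (human oversight)",
        "EU": "Article 14 human oversight",
        "IMDA": "Trusted Development and Deployment dimension",
    },
}

def coverage(regime: str) -> list[str]:
    """Controls that produce compliance evidence for a given regime."""
    return [name for name, mapping in CONTROLS.items() if regime in mapping]

print(coverage("EU"))  # every control doubles as EU AI Act evidence
```

In this sketch, each control built once yields evidence under all three regimes, which is exactly where the duplication cost disappears.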

Third, governance done properly is a market signal, not a drag on innovation. IMDA’s AI Verify already functions as a trust credential in enterprise procurement. MAS-compliant AI governance attracts institutional capital in Singapore’s financial markets. EU AI Act conformity is becoming table stakes for APAC companies with European partnerships or listings.

The question for every CRO and board member reading this is not whether to build an AI governance framework. That decision was made the moment your organisation deployed its first AI system in production. The question is whether your framework is designed for the regulatory environment of 2026 or for the one that existed before MAS, IMDA, and Brussels each raised the stakes.

About the Author: Shivi
