Japan Passes Landmark AI Law: A New Governance Model for the Age of Artificial Intelligence
On May 28, 2025, Japan passed its first national AI legislation, introducing a governance framework built on structured collaboration, public transparency, and adaptive policy: a distinct alternative to enforcement-based regulatory regimes.
A New Approach to AI Governance
Japan's new "AI Promotion Act," passed with cross-party support, establishes a Cabinet-level AI Strategy Headquarters responsible for coordinating national AI policy. The law introduces a new approach to AI governance, emphasizing adaptive policy development, public transparency, and structured cooperation with stakeholders, without relying on punitive enforcement, the framework positions them as active partners in building responsible AI systems.
Clarity Around High-Risk AI Systems
Japan's Ministry of Internal Affairs and Communications, together with the Cabinet Secretariat, has issued detailed guidelines to identify and manage high-risk AI systems. Classification depends on the following four criteria (an illustrative sketch of this screening logic follows the list):
Scope of Deployment: Systems used across ministries or deployed in public-facing services are flagged as higher risk due to their broad potential impact.
Function and Criticality: Systems performing tasks that affect fundamental rights, public safety, or service eligibility (e.g., social services, healthcare, or law enforcement) are categorized as high-risk.
Data Sensitivity: Any AI tool trained on or processing personal, confidential, or protected data is automatically subject to higher scrutiny.
Human Oversight: AI systems that operate without meaningful human review, especially in decision-making contexts, are elevated to high-risk status.
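To make the interplay of the four criteria concrete, here is a minimal, purely illustrative sketch of how an organization might encode the screen internally. Every name and the all-factors OR rule are assumptions for illustration; the guidelines describe the criteria but do not prescribe any implementation.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Hypothetical labels mirroring the four published criteria.
    broad_deployment: bool    # used across ministries or in public-facing services
    critical_function: bool   # affects rights, safety, or service eligibility
    sensitive_data: bool      # trained on or processing personal/protected data
    human_oversight: bool     # meaningful human review of decisions

def is_high_risk(profile: AISystemProfile) -> bool:
    """Illustrative screen: any single triggering factor elevates the system.

    The guidelines treat sensitive data and absent human oversight as
    automatic triggers, and broad deployment or critical function as flags
    for elevated risk. Combining them with a simple OR is one plausible
    reading, not the official classification rule.
    """
    return (
        profile.broad_deployment
        or profile.critical_function
        or profile.sensitive_data
        or not profile.human_oversight
    )

# Example: a public-facing benefits-eligibility chatbot handling personal data.
chatbot = AISystemProfile(
    broad_deployment=True,
    critical_function=True,
    sensitive_data=True,
    human_oversight=False,
)
print(is_high_risk(chatbot))  # True
```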
What Is Required of High-Risk AI Systems?
Under the new framework, entities deploying high-risk AI systems must do the following (a hypothetical record-keeping sketch follows the list):
Designate an internal lead responsible for AI governance (often a Chief AI Officer in government bodies).
Submit usage reports and risk assessments to the Advanced AI Utilization Advisory Board.
Implement technical safeguards to mitigate known risks, including bias, hallucination, and privacy violations.
Maintain traceability and documentation of model training and system updates.
Establish channels for human oversight, feedback, and error correction.
In the event of significant incidents, cooperate with official investigations and disclose findings publicly.
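As a companion sketch, the obligations above could be tracked as a simple compliance record. Again, every field and method name here is hypothetical; the Act specifies the obligations, not any data format or reporting schema.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskComplianceRecord:
    # Hypothetical fields mirroring the obligations listed above.
    governance_lead: str                              # e.g., a Chief AI Officer
    risk_assessment_submitted: bool = False           # filed with the advisory board
    technical_safeguards: list[str] = field(default_factory=list)
    training_documented: bool = False                 # traceability of training/updates
    oversight_channels: list[str] = field(default_factory=list)
    incident_disclosures: list[str] = field(default_factory=list)

    def outstanding_obligations(self) -> list[str]:
        """Return obligations not yet evidenced in this record."""
        gaps = []
        if not self.risk_assessment_submitted:
            gaps.append("submit usage report and risk assessment")
        if not self.technical_safeguards:
            gaps.append("implement safeguards against bias, hallucination, privacy violations")
        if not self.training_documented:
            gaps.append("document model training and system updates")
        if not self.oversight_channels:
            gaps.append("establish human oversight and feedback channels")
        return gaps
```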
These expectations are not enforced through penalties but through structured state coordination, institutional responsibility, and reputational accountability. Ministries and private entities alike are expected to comply through voluntary commitment to national objectives, with the added pressure of transparency and public disclosure.
Stakeholders and Oversight
The AI Promotion Act applies to a broad range of stakeholders: national ministries, local governments, universities, research institutes, and private companies. It encourages multi-sector engagement in advancing safe, innovative, and accountable AI.
The AI Strategy Headquarters will monitor implementation across sectors, issue updated guidance, and convene expert boards for continuous evaluation of emerging risks.
Global Context
Unlike the EU’s AI Act, which formalizes mandatory compliance for high-risk systems through risk tiers and certifications, or the U.S. model, which relies on agency-specific rules, Japan offers a third model: coordinated, adaptive governance that depends on alignment, not enforcement.
This model aims to create a flexible, real-time regulatory environment, one that can evolve with the speed of innovation, while preserving fundamental rights and societal resilience.
Final Thought
As AI systems become more embedded in essential services and democratic processes, governments face a critical choice: rigid enforcement or structured collaboration. Japan is testing whether a trust-driven model can succeed in steering AI development toward public good without stifling innovation.