The EU GPAI Code of Practice: Advancing Trust Through Accountability
On 10 July 2025, the European Commission published the final version of the General-Purpose AI (GPAI) Code of Practice, a milestone development in the EU’s evolving AI governance regime. Though voluntary in nature, this Code provides critical scaffolding for compliance with the AI Act.
The Code arrives just weeks before the AI Act’s GPAI obligations formally begin to apply, and it offers a structured, comprehensive approach for aligning with Articles 53, 54, and 55. In the absence of detailed implementing guidelines, it is the most concrete expression to date of how the EU expects AI developers to operationalise transparency, safety, and accountability.
General-Purpose AI: Definition and Relevance for Compliance
Under the AI Act, a general-purpose AI model is defined as “an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market or put into service” (Article 3(63)).
These models are not developed for narrow or fixed applications. Instead, they serve as broad infrastructure, powering diverse use cases in law, education, healthcare, finance, creative industries, and enterprise services. Their integration into downstream systems is now widespread, including across high-risk AI use cases where GPAI serves as the underlying engine.
As their distribution increases, so does the regulatory urgency to establish clarity and control. Whether an organisation develops, integrates, or relies upon GPAI, understanding the compliance obligations that apply to these models is no longer optional. The EU’s AI Act introduces both general and additional obligations for GPAI depending on whether a model qualifies as having systemic risk. The new GPAI Code of Practice serves as the operational guide for meeting these obligations.
Legal Obligations: A Two-Tiered Framework
Providers of GPAI models must meet a core set of obligations under Article 53 of the AI Act. These include:
Maintaining up-to-date technical documentation
Sharing documentation with the EU AI Office, national regulators (upon request), and downstream providers (proactively)
Publishing a summary of the datasets used to train the model
Implementing a policy that ensures compliance with EU copyright and intellectual property law
For providers whose GPAI models present systemic risk, additional obligations apply under Article 55. These include:
Mandatory notification to the AI Office
Regular adversarial testing (e.g. red teaming)
Proactive risk mitigation measures
Incident tracking and reporting mechanisms
Cybersecurity safeguards aligned with the model’s risk exposure
These obligations apply as of 2 August 2025.
The GPAI Code of Practice: Purpose and Legal Standing
The GPAI Code of Practice is a voluntary instrument. It does not create new legal obligations beyond what is established in the AI Act. Instead, it provides structured guidance on how to operationalise those obligations, turning legal text into concrete governance practices.
The Code was developed through an expert-led, multi-stakeholder process and is expected to be formally endorsed by the European Commission via an implementing act. Once adopted, it will serve as a recognised tool for demonstrating compliance with the AI Act’s GPAI provisions.
In a key statement accompanying the Code’s release, the Commission clarified that organisations signing the Code before 2 August 2025 will benefit from a more flexible enforcement posture for one year: from August 2025 through August 2026, signatories will be presumed to be acting in good faith, even if full compliance has not yet been achieved.
This flexibility will not apply to non-signatories. Signing the Code is therefore not only an act of due diligence but also a pragmatic legal shield.
Several prominent AI developers, including OpenAI, Anthropic, Mistral AI, and Microsoft, have committed to signing the EU’s GPAI Code of Practice, signalling their intent to align early with the AI Act’s compliance framework. Their decision reflects a strategic move to benefit from the regulatory flexibility offered to signatories during the first year of enforcement. In contrast, Meta has declined to sign the Code, citing concerns about legal uncertainty and potential constraints on innovation.
The EU GPAI Code of Practice is divided into three chapters: Transparency, Copyright, and Safety & Security.
Transparency
The first chapter focuses on the documentation and information-sharing requirements outlined in Article 53. Central to this effort is the Model Documentation Form, a structured template that consolidates 42 required attributes across eight categories. These include model objectives, training data origin, compute intensity, environmental impact, and input/output modalities.
The documentation must be maintained internally, submitted to the AI Office or national regulators upon request, and proactively shared with downstream providers that incorporate the GPAI model into their own systems.
In addition to accessibility, the Code emphasises the integrity, accuracy, and confidentiality of the disclosed information. Ensuring that technical disclosures are secure and reliable is presented not only as a compliance duty, but as a prerequisite for trust in the AI supply chain.
Copyright
This chapter supports implementation of the AI Act’s Article 53(1)(c) requirement for a copyright compliance policy. It provides concrete expectations around how training data is sourced and used, drawing on established EU copyright directives.
Organisations adopting this chapter commit to:
Developing a documented copyright policy
Ensuring that only lawfully accessible data is used for training
Respecting signals such as opt-out headers and robots.txt files during data scraping
Preventing unauthorised reproduction of copyrighted content in outputs
Setting up a redress mechanism for affected rights holders
Although the creation of a copyright policy is mandatory under the AI Act, the Code offers a coherent and legally informed method of implementing it. While publication of this policy is encouraged, it is not strictly required.
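One of the commitments above, respecting robots.txt opt-out signals during data scraping, can be checked programmatically before any page is fetched. The sketch below is purely illustrative and not part of the Code itself; it uses Python's standard-library `urllib.robotparser`, and the crawler name `ExampleAIBot` and the sample robots.txt are hypothetical.

```python
from urllib.robotparser import RobotFileParser


def allowed_to_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt permits user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)


# Hypothetical robots.txt that opts the site out of one AI crawler
# while remaining open to all other agents.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

print(allowed_to_crawl(robots_txt, "ExampleAIBot", "https://example.com/articles/1"))  # False
print(allowed_to_crawl(robots_txt, "OtherBot", "https://example.com/articles/1"))      # True
```

In a real training-data pipeline this check would sit in front of every fetch, alongside handling of machine-readable opt-out headers; the point here is simply that the commitment is straightforward to operationalise.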
Safety and Security
The most demanding chapter, focused on models with systemic risk, outlines ten commitments that collectively form a full lifecycle risk governance framework.
These commitments include the adoption of a dedicated safety and security policy, identification and analysis of systemic risks, determination of acceptable and unacceptable risks, implementation of risk mitigations throughout the model lifecycle, and clear assignment of internal responsibility.
Further obligations include reporting incidents, maintaining evidence of compliance, and ensuring that risk control measures are responsive to emerging threats. The guidance is ambitious in scope but reflects the level of responsibility attached to models with broad influence and impact.
Further guidance is expected in the coming months, including interpretive materials from the AI Office, sector-specific templates, and clarification on registration and oversight procedures. For now, the message from Brussels is clear: compliance is expected to begin as scheduled, and delays are not under consideration.
Our View
The publication of the GPAI Code of Practice marks a turning point in AI law. The AI Act is now being translated into real processes, tangible documentation, and measurable accountability.
For organisations responsible for GPAI models, the time to act is now. Adopting the Code strengthens your compliance posture and sends a clear message to regulators, clients, and partners: you’re serious about responsible AI. The margin for delay is gone.
At Art25 Consulting, we help you move fast and get it right. From readiness assessments and documentation design to policy alignment and deployment oversight, we turn regulation into action, and risk into advantage.
The complete EU GPAI Code of Practice and related materials can be accessed directly via the official EU portal.
If you need expert support in adopting the Code of Practice and fulfilling your obligations under the AI Act, contact us directly for tailored guidance and implementation support.