Regulatory Frameworks · Gateway Page

EU AI Act

A structured gateway to Regulation (EU) 2024/1689 — the world's first comprehensive legal framework for artificial intelligence. The Act entered into force on 1 August 2024, with obligations applying progressively through 2027.

Instrument · Regulation (EU) 2024/1689
Entry into force · 1 August 2024
Full application · 2 August 2026
Authority · European AI Office
03 · Key Highlights

Five essentials for understanding the Regulation.

Drawn from the official text and framed for quick reference. Each point anchors to a detailed section below.

01

First horizontal AI law worldwide.

The first comprehensive legal framework dedicated to artificial intelligence, applicable across all sectors and Member States.

Regulation (EU) 2024/1689
02

Risk-based architecture.

Four tiers determine obligations. Classification is use- and context-based; the same system can sit in different categories.

Articles 5, 6, 50, 95
03

Governance for general-purpose AI.

Documentation, transparency and copyright duties on GPAI providers, with additional obligations for systemic-risk models.

Chapter V · Articles 51–56
04

Phased application 2024–2027.

Prohibitions from Feb 2025, GPAI obligations from Aug 2025, full application Aug 2026, Annex I products Aug 2027.

Article 113
05

Penalties up to 7% of turnover.

Breach of Article 5 prohibitions carries fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.

Article 99
04 · How to Navigate

Two audiences. One unified page.

The structure serves legal and executive readers in parallel. Use the path that fits how you need to work today.

For legal & compliance professionals

Work with the official text.

Every expandable panel contains wording drawn directly from Regulation (EU) 2024/1689, with article-level citations.

Start with the risk classification and prohibited practices, then open the Legal framework section for definitions and penalties.

For executives & decision-makers

Start with summaries and implications.

Each section leads with a concise summary. Detailed legal text is one click away when your team needs to verify a specific obligation.

Focus on the risk tiers, phased timeline, and fines & penalties — the structural shape of what the Act requires.

05 · Risk Framework
Based on official source

Four categories. Different obligations.

The Regulation distinguishes AI systems by the risk they pose. Classification is use- and context-based. Select any tier to reveal the official wording.

Source Regulation (EU) 2024/1689 · Articles 5, 6, 50, 95 and Annex III. Verified against the Official Journal L, 12.7.2024. Open full legal text
01 · Article 5 · Unacceptable risk

Prohibited AI practices.

Banned in the EU. AI practices considered a clear threat to safety, livelihoods, or fundamental rights. Exhaustively listed in Article 5.

Article 5(1) · Prohibited AI practices

The following AI practices shall be prohibited. See the full list of eight prohibited practices in the Prohibited AI practices section below.

Source: Regulation (EU) 2024/1689, Article 5 · Official Journal L, 12.7.2024, page 51/144 · © European Union.
02 · Article 6 · Annex III · High risk

High-risk AI systems.

Permitted, subject to strict obligations. Systems with significant potential to affect health, safety or fundamental rights. Classification arises from product-safety integration or Annex III use.

Article 6(1) · Classification rules for high-risk AI systems

Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

Source: Regulation (EU) 2024/1689, Article 6(1) · Official Journal L, 12.7.2024, page 53/144 · © European Union.
03 · Article 50 · Limited risk

Transparency obligations.

Permitted, subject to transparency. AI systems that interact with natural persons, perform biometric categorisation or emotion recognition, or generate synthetic content — including deepfakes.

Article 50(1) · Direct interaction disclosure

Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.

Source: Regulation (EU) 2024/1689, Article 50(1) · Official Journal L, 12.7.2024, page 82/144 · © European Union.
04 · Article 95 · Minimal risk

Voluntary application.

No specific obligations. All other AI systems. The Regulation encourages voluntary application of the requirements for high-risk systems, and the adoption of codes of conduct.

Article 95(1) · Codes of conduct for voluntary application

The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements set out in Chapter III, Section 2 taking into account the available technical solutions and industry best practices allowing for the application of such requirements.

Source: Regulation (EU) 2024/1689, Article 95(1) · Official Journal L, 12.7.2024, page 113/144 · © European Union.
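As a quick orientation aid, the four tiers and their governing articles can be held in a small lookup table. A minimal sketch: the tier names and status summaries are our own shorthand; only the article references come from the Regulation.

```python
# Minimal sketch: the four risk tiers of Regulation (EU) 2024/1689 as a
# lookup table. Tier names and status summaries are our own shorthand;
# only the article references come from the Regulation itself.
RISK_TIERS = {
    "unacceptable": {"article": "Article 5",            "status": "prohibited"},
    "high":         {"article": "Article 6 + Annex III", "status": "permitted, strict obligations"},
    "limited":      {"article": "Article 50",           "status": "permitted, transparency duties"},
    "minimal":      {"article": "Article 95",           "status": "no specific obligations, voluntary codes"},
}

def tier_summary(tier: str) -> str:
    """Return a one-line summary for a given risk tier."""
    entry = RISK_TIERS[tier]
    return f"{tier}: {entry['status']} ({entry['article']})"

print(tier_summary("high"))
# high: permitted, strict obligations (Article 6 + Annex III)
```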
06 · Prohibited AI Practices
Official source · Article 5(1)

Eight categories of prohibited AI.

Article 5(1) prohibits specific AI practices altogether. The repeated legal stem is shown once below, followed by the eight categories as a structured legal map.

Shared legal stem · applies to each item

"The placing on the market, the putting into service or the use of an AI system that "

The following practices are prohibited
a Manipulation

Subliminal, manipulative or deceptive techniques.

… deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.

b Exploitation of vulnerabilities

Exploiting age, disability or social/economic vulnerability.

… exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.

c Social scoring

Evaluation or classification based on social behaviour.

… for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both: detrimental treatment in contexts unrelated to those in which the data was originally generated, or detrimental treatment that is unjustified or disproportionate to the behaviour or its gravity.

d Predictive policing (profiling)

Risk assessments of criminal offence based solely on profiling.

… for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. This prohibition does not apply to AI systems used to support human assessment already based on objective and verifiable facts directly linked to a criminal activity.

e Facial recognition databases

Untargeted scraping to build or expand databases.

… creates or expands facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

f Emotion recognition

Inferring emotions at work and in education.

… to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.

g Biometric categorisation

Inferring race, politics, religion, sexual orientation.

… uses biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This does not cover labelling or filtering of lawfully acquired biometric datasets in the area of law enforcement.

h Real-time biometric identification

Real-time remote biometric ID in public spaces.

… uses 'real-time' remote biometric identification systems in publicly accessible spaces for law enforcement, unless and in so far as such use is strictly necessary for: the targeted search for victims of abduction, trafficking or sexual exploitation and missing persons; the prevention of a specific, substantial and imminent threat or terrorist attack; or the localisation of suspects of specified serious criminal offences.

Source Each item is a faithful presentation of Article 5(1), points (a)–(h) — repeated stem removed, legal wording within each item preserved. Official Journal L, 12.7.2024, pages 51–52/144. Open full legal text
07 · High-Risk Systems
Based on Article 6 · Annex III

Two routes into high-risk classification.

A system is high-risk when it is either a product-safety component under Annex I, or when it falls within one of the Annex III use-case domains.

Route A · Annex I

Product-safety systems.

AI embedded in products already covered by EU harmonisation legislation — including medical devices, machinery, toys, aviation, lifts, cableways and pressure equipment.

Classification arises when the AI is a safety component and the product requires third-party conformity assessment.

Article 6(1) · Product-safety high-risk classification

Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

Source: Regulation (EU) 2024/1689, Article 6(1) · Official Journal L, 12.7.2024, page 53/144 · © European Union.
Route B · Annex III

Use-case systems.

Stand-alone AI systems deployed in high-impact domains — biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice.

The Commission may update Annex III to add or modify use-cases over time.

Annex III · High-risk AI systems referred to in Article 6(2)

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:

1. Biometrics, in so far as their use is permitted under relevant Union or national law.

2. Critical infrastructure — as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.

3. Education and vocational training — admissions, evaluation of learning outcomes, assessment of appropriate level of education, monitoring during tests.

4. Employment, workers' management and access to self-employment — recruitment, promotion, termination, task allocation, performance monitoring.

5. Access to and enjoyment of essential private and public services — public assistance benefits, creditworthiness, insurance pricing, emergency call triage.

6. Law enforcement — victim risk assessment, polygraphs, reliability of evidence, profiling in the course of investigations.

7. Migration, asylum and border control — polygraph-like tools, risk assessments, examination of applications.

8. Administration of justice and democratic processes — assistance to judicial authorities, influence on election outcomes.

Source: Regulation (EU) 2024/1689, Annex III · Official Journal L, 12.7.2024, pages 126–127/144 · © European Union. Items adapted to a structured presentation; legal scope preserved.
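Read together, the two routes amount to a simple decision rule: Route A is the conjunction of the two Article 6(1) conditions, and Route B is membership in one of the eight Annex III areas. The sketch below illustrates that structure only; it is not a compliance tool, and it omits the Article 6(3) derogations and all legal nuance. Domain names paraphrase Annex III.

```python
# Illustrative sketch of the two routes into high-risk classification.
# A simplification for orientation only: it omits the Article 6(3)
# derogations and all legal nuance. Domain names paraphrase Annex III.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def is_high_risk(
    is_safety_component_or_annex_i_product: bool,  # Article 6(1)(a)
    requires_third_party_assessment: bool,         # Article 6(1)(b)
    annex_iii_area: str | None = None,             # Article 6(2) / Annex III
) -> bool:
    # Route A: both Article 6(1) conditions must be fulfilled (a conjunction).
    route_a = (
        is_safety_component_or_annex_i_product
        and requires_third_party_assessment
    )
    # Route B: the system falls within one of the Annex III areas.
    route_b = annex_iii_area in ANNEX_III_AREAS
    return route_a or route_b

# Example: a CV-screening tool used in recruitment (Annex III, point 4).
print(is_high_risk(False, False, annex_iii_area="employment"))  # True
```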
08 · Phased Application
Based on Article 113

Obligations apply progressively.

The Regulation entered into force on 1 August 2024. Different parts of the Act become applicable on different dates, as set out in Article 113.

Milestone 01
1 Aug 2024

Entry into force

The Regulation enters into force on the twentieth day following its publication in the Official Journal of the European Union. The compliance clock starts.

Milestone 02
2 Feb 2025

Prohibitions apply

Chapters I and II shall apply from 2 February 2025. The prohibitions of AI practices under Article 5 take effect across the Union.

Milestone 03
2 Aug 2025

Governance and GPAI obligations

Chapter III Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August 2025, with the exception of Article 101. This covers notifying authorities, general-purpose AI models, governance, penalties, and confidentiality.

Milestone 04
2 Aug 2026

Full application

The Regulation shall apply from 2 August 2026. The remaining provisions become applicable, including obligations for high-risk AI systems listed in Annex III.

Milestone 05
2 Aug 2027

Annex I high-risk systems

Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027. Obligations for AI systems embedded in products regulated under Annex I harmonisation legislation become applicable.

Source Regulation (EU) 2024/1689, Article 113 — Entry into force and application. Official Journal L, 12.7.2024, page 123/144. Open full legal text
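For orientation, the milestones reduce to a small date table, and the entry-into-force date follows arithmetically from the publication date (12 July 2024 plus twenty days). A minimal sketch using the dates from Article 113 as quoted above:

```python
from datetime import date, timedelta

# Entry into force: the twentieth day following publication in the
# Official Journal (published 12 July 2024).
publication = date(2024, 7, 12)
entry_into_force = publication + timedelta(days=20)
assert entry_into_force == date(2024, 8, 1)

# Application milestones per Article 113, as quoted in the timeline above.
MILESTONES = {
    date(2025, 2, 2): "Prohibitions apply (Chapters I and II)",
    date(2025, 8, 2): "Governance and GPAI obligations apply",
    date(2026, 8, 2): "Full application",
    date(2027, 8, 2): "Annex I high-risk systems (Article 6(1))",
}

def applicable_by(today: date) -> list[str]:
    """List the milestones already applicable on a given date."""
    return [label for d, label in MILESTONES.items() if d <= today]

print(applicable_by(date(2025, 9, 1)))
# ['Prohibitions apply (Chapters I and II)', 'Governance and GPAI obligations apply']
```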
09 · Fines & Penalties
Based on Article 99

Three tiers of administrative fines.

Non-compliance carries administrative fines indexed to the severity of the breach. Figures below are drawn verbatim from Article 99.

Tier 01 · Highest

Administrative fine of up to €35,000,000
or
7% of total worldwide annual turnover

Non-compliance with the prohibition of the AI practices referred to in Article 5 — whichever is higher.

Article 99(3)
Tier 02 · Operator obligations

Administrative fine of up to €15,000,000
or
3% of total worldwide annual turnover

Non-compliance with obligations of providers, authorised representatives, importers, distributors, deployers, notified bodies, or transparency obligations under Article 50 — whichever is higher.

Article 99(4)
Tier 03 · Misleading info

Administrative fine of up to €7,500,000
or
1% of total worldwide annual turnover

Supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request — whichever is higher.

Article 99(5)
Source Regulation (EU) 2024/1689, Article 99(3)–(5). For SMEs, including start-ups, each fine shall be up to the percentages or amounts listed, whichever is lower. Official Journal L, 12.7.2024, pages 115–116/144. Open full legal text
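The "whichever is higher" rule, and the SME "whichever is lower" carve-out, are straightforward arithmetic. A minimal sketch of the fine ceilings; the turnover figure in the example is illustrative only.

```python
# Minimal sketch of the Article 99 fine ceilings. Amounts and percentages
# are from Article 99(3)-(5); the SME rule paraphrases Article 99(6).
# The turnover figure in the example is illustrative only.
TIERS = {
    "article_5_prohibitions": (35_000_000, 0.07),  # Article 99(3)
    "operator_obligations":   (15_000_000, 0.03),  # Article 99(4)
    "misleading_information": (7_500_000,  0.01),  # Article 99(5)
}

def max_fine(tier: str, worldwide_annual_turnover: float, is_sme: bool = False) -> float:
    """Ceiling of the administrative fine for a given breach tier."""
    fixed, pct = TIERS[tier]
    pct_amount = pct * worldwide_annual_turnover
    # Default rule: whichever is higher; for SMEs: whichever is lower.
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

# A company with EUR 1 billion turnover breaching Article 5:
print(max_fine("article_5_prohibitions", 1_000_000_000))  # 70000000.0 (7% > EUR 35m)
```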
11 · Glossary
Official definitions · Article 3

Definitions from the Regulation.

Article 3 of Regulation (EU) 2024/1689 provides the legal definitions used across the Act. Each term below is reproduced verbatim. Where helpful, a plain-language note is provided separately.

Source Regulation (EU) 2024/1689, Article 3 — Definitions. Official Journal L, 12.7.2024, pages 46–50/144. All definitions verbatim. Open full glossary
Article 3(1)

AI system

The core definition governing the scope of the Regulation.

Article 3(1) · Definition

'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Source: Regulation (EU) 2024/1689, Article 3(1) · © European Union.
Plain-language note

The definition is deliberately broad and technology-neutral. It covers machine learning systems, logic-based systems, and systems that combine both — provided they generate outputs influencing environments from inputs they receive.

Article 3(3)

Provider

The entity that develops or commissions an AI system and places it on the market.

Article 3(3) · Definition

'provider' means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

Source: Regulation (EU) 2024/1689, Article 3(3) · © European Union.
Article 3(4)

Deployer

The entity that uses an AI system under its authority (outside of personal non-professional use).

Article 3(4) · Definition

'deployer' means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

Source: Regulation (EU) 2024/1689, Article 3(4) · © European Union.
Article 3(9)

Placing on the market

The first making available of an AI system on the Union market.

Article 3(9) · Definition

'placing on the market' means the first making available of an AI system or a general-purpose AI model on the Union market.

Source: Regulation (EU) 2024/1689, Article 3(9) · © European Union.
Article 3(11)

Putting into service

The supply of an AI system for first use in the Union.

Article 3(11) · Definition

'putting into service' means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.

Source: Regulation (EU) 2024/1689, Article 3(11) · © European Union.
Article 3(2)

Risk

Probability of occurrence combined with severity.

Article 3(2) · Definition

'risk' means the combination of the probability of an occurrence of harm and the severity of that harm.

Source: Regulation (EU) 2024/1689, Article 3(2) · © European Union.
Article 3(8)

Operator

Collective term for everyone across the AI value chain.

Article 3(8) · Definition

'operator' means a provider, product manufacturer, deployer, authorised representative, importer or distributor.

Source: Regulation (EU) 2024/1689, Article 3(8) · © European Union.
Article 6 · Classification

High-risk AI system

A system classified as high-risk under either route of Article 6.

Plain-language note

A high-risk AI system is one that is either: (A) a safety component of, or itself, a product covered by EU harmonisation legislation in Annex I requiring third-party conformity assessment; or (B) used in one of the eight domains listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). Full legal definition is in Article 6.

Article 3(63)

General-purpose AI model

A model trained at scale, displaying generality, and capable of integration into many downstream systems.

Plain-language note

A general-purpose AI model is one that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market. Article 53 sets out the obligations on GPAI providers.

Article 51 · GPAI with systemic risk

Systemic risk

A classification that triggers additional obligations on general-purpose AI providers.

Article 51(2) · Threshold

A general-purpose AI model shall be presumed to have high impact capabilities when the cumulative amount of computation used for its training measured in floating point operations is greater than 10²⁵.

Source: Regulation (EU) 2024/1689, Article 51(2) · Official Journal L, 12.7.2024, page 83/144 · © European Union.
Plain-language note

A GPAI model with systemic risk triggers additional obligations including model evaluation, adversarial testing, serious incident reporting to the European Commission, and cybersecurity protection measures.
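The Article 51(2) presumption reduces to a single numeric comparison on training compute. A minimal sketch; note that classification as a systemic-risk GPAI model can also follow other routes under Article 51, which this ignores.

```python
# Minimal sketch of the Article 51(2) presumption: a GPAI model is presumed
# to have high-impact capabilities when the cumulative amount of computation
# used for its training exceeds 1e25 floating point operations. This captures
# only the numeric presumption; Article 51 provides other classification routes.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_high_impact(training_flops: float) -> bool:
    """Apply the Article 51(2) training-compute presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_high_impact(3e25))  # True
print(presumed_high_impact(5e24))  # False
```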

12 · Explore Further

Go deeper into the EU AI Act.

This page is an entry point. Continue into the full legal text, emerging research, or the developments shaping application across the Union — all on dedicated ART25 pages.

Primary reference · Available

Legal text

The full text of Regulation (EU) 2024/1689, structured for navigation — every article, every annex, on an internal ART25 page.

Open full legal text
Analysis · In preparation

Research

Curated academic and institutional research examining interpretation, scope, and operational implications of the Regulation.

Expected soon
Updates · In preparation

Developments

Implementing acts, delegated acts, guidance from the European AI Office, and milestones from the national supervisory authorities.

Expected soon

Make the AI Act a governance advantage.

We work with leadership teams to design AI governance postures that go beyond compliance — cleaner AI estates, sharper accountability, and the trust of regulators and customers.