EU AI Act
A structured gateway to Regulation (EU) 2024/1689 — the world's first comprehensive legal framework for artificial intelligence. The Act entered into force on 1 August 2024, with obligations applying progressively through 2027.
Obligations apply in stages
The EU AI Act entered into force on 1 August 2024, with obligations applying progressively depending on the type of requirement and AI system.
Fines & penalties
Up to €35M or 7% of turnover
Breach of prohibited practices carries the highest penalty tier. Three tiers of administrative fines apply, indexed to severity.
Risk-based architecture
Four categories of risk
Unacceptable, high, limited, and minimal risk. Classification is use-based and determines the obligations that apply.
What you'll find here.
A guided view of the EU AI Act — from risk classification and obligations to the full legal text. Select a topic to jump directly to that section.
Five essentials for understanding the Regulation.
Drawn from the official text and framed for quick reference. Each point anchors to a detailed section below.
First horizontal AI law worldwide.
The first comprehensive legal framework dedicated to artificial intelligence, applicable across all sectors and Member States.
Risk-based architecture.
Four tiers determine obligations. Classification is use- and context-based; the same system can sit in different categories.
Governance for general-purpose AI.
Documentation, transparency and copyright duties on GPAI providers, with additional obligations for systemic-risk models.
Phased application 2024 – 2027.
Prohibitions from Feb 2025, GPAI obligations from Aug 2025, full application Aug 2026, Annex I products Aug 2027.
Penalties up to 7% of turnover.
Breach of Article 5 prohibitions carries fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.
Two audiences. One unified page.
The structure serves legal and executive readers in parallel. Use the path that fits how you need to work today.
Work with the official text.
Every expandable panel contains wording drawn directly from Regulation (EU) 2024/1689, with article-level citations.
Start with the risk classification and prohibited practices, then open the Legal framework section for definitions and penalties.
Start with summaries and implications.
Each section leads with a concise summary. Detailed legal text is one click away when your team needs to verify a specific obligation.
Focus on the risk tiers, phased timeline, and fines & penalties — the structural shape of what the Act requires.
Four categories. Different obligations.
The Regulation distinguishes AI systems by the risk they pose. Classification is use- and context-based. Select any tier to reveal the official wording.
Prohibited AI practices.
Banned in the EU. AI practices considered a clear threat to safety, livelihoods, or fundamental rights. Exhaustively listed in Article 5.
The following AI practices shall be prohibited. See the full list of eight prohibited practices in the Prohibited AI practices section below.
High-risk AI systems.
Permitted, subject to strict obligations. Systems with significant potential to affect health, safety or fundamental rights. Classification arises from product-safety integration or Annex III use.
Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
Transparency obligations.
Permitted, subject to transparency. AI systems that interact with natural persons, perform biometric categorisation or emotion recognition, or generate synthetic content — including deepfakes.
Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.
Voluntary application.
No specific obligations. All other AI systems. The Regulation encourages voluntary application of the requirements for high-risk systems, and the adoption of codes of conduct.
The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements set out in Chapter III, Section 2 taking into account the available technical solutions and industry best practices allowing for the application of such requirements.
Eight categories of prohibited AI.
Article 5(1) prohibits specific AI practices altogether. The repeated legal stem is shown once below, followed by the eight categories as a structured legal map.
"The placing on the market, the putting into service or the use of an AI system that …"
Subliminal, manipulative or deceptive techniques.
… deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.
Exploiting age, disability or social/economic vulnerability.
… exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.
Evaluation or classification based on social behaviour.
… for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both: detrimental treatment in contexts unrelated to those in which the data was originally generated, or detrimental treatment that is unjustified or disproportionate to the behaviour or its gravity.
Risk assessments of criminal offence based solely on profiling.
… for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. This prohibition does not apply to AI systems used to support human assessment already based on objective and verifiable facts directly linked to a criminal activity.
Untargeted scraping to build or expand databases.
… creates or expands facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
Inferring emotions at work and in education.
… to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.
Inferring race, politics, religion, sexual orientation.
… uses biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This does not cover labelling or filtering of lawfully acquired biometric datasets in the area of law enforcement.
Real-time remote biometric ID in public spaces.
… uses 'real-time' remote biometric identification systems in publicly accessible spaces for law enforcement, unless and in so far as such use is strictly necessary for: the targeted search for victims of abduction, trafficking or sexual exploitation and missing persons; the prevention of a specific, substantial and imminent threat or terrorist attack; or the localisation of suspects of specified serious criminal offences.
Two routes into high-risk classification.
A system is high-risk either when it is a safety component of a product covered by Annex I harmonisation legislation, or when it falls within one of the Annex III use-case domains.
Product-safety systems.
AI embedded in products already covered by EU harmonisation legislation — including medical devices, machinery, toys, aviation, lifts, cableways and pressure equipment.
Classification arises when the AI is a safety component and the product requires third-party conformity assessment.
Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
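The two cumulative conditions of Article 6(1) can be read as a simple boolean test: condition (a) and condition (b) must both hold. As an illustrative sketch only (not legal advice; the parameter names are our own shorthand, not terms from the Regulation):

```python
def is_high_risk_art6_1(is_safety_component_or_product: bool,
                        covered_by_annex_i: bool,
                        requires_third_party_conformity: bool) -> bool:
    """Sketch of Article 6(1): both conditions (a) and (b) must be fulfilled."""
    condition_a = is_safety_component_or_product and covered_by_annex_i
    condition_b = requires_third_party_conformity
    return condition_a and condition_b

# e.g. an AI safety component in a medical device subject to notified-body assessment
print(is_high_risk_art6_1(True, True, True))   # True
# an Annex I product whose conformity assessment is self-declared only
print(is_high_risk_art6_1(True, True, False))  # False
```

The point the sketch makes is that Annex I coverage alone is not enough: without the third-party conformity-assessment requirement in point (b), Article 6(1) does not classify the system as high-risk.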
Use-case systems.
Stand-alone AI systems deployed in high-impact domains — biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice.
The Commission may update Annex III to add or modify use-cases over time.
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:
1. Biometrics, in so far as their use is permitted under relevant Union or national law.
2. Critical infrastructure — as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
3. Education and vocational training — admissions, evaluation of learning outcomes, assessment of appropriate level of education, monitoring during tests.
4. Employment, workers' management and access to self-employment — recruitment, promotion, termination, task allocation, performance monitoring.
5. Access to and enjoyment of essential private and public services — public assistance benefits, creditworthiness, insurance pricing, emergency call triage.
6. Law enforcement — victim risk assessment, polygraphs, reliability of evidence, profiling in the course of investigations.
7. Migration, asylum and border control — polygraph-like tools, risk assessments, examination of applications.
8. Administration of justice and democratic processes — assistance to judicial authorities, influence on election outcomes.
Obligations apply progressively.
The Regulation entered into force on 1 August 2024. Different parts of the Act become applicable on different dates, as set out in Article 113.
Entry into force
The Regulation enters into force on the twentieth day following its publication in the Official Journal of the European Union. The compliance clock starts.
Prohibitions apply
Chapters I and II shall apply from 2 February 2025. The prohibitions of AI practices under Article 5 take effect across the Union.
Governance and GPAI obligations
Chapter III Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August 2025, with the exception of Article 101. This covers notifying authorities, general-purpose AI models, governance, penalties, and confidentiality.
Full application
The Regulation shall apply from 2 August 2026. The remaining provisions become applicable, including obligations for high-risk AI systems listed in Annex III.
Annex I high-risk systems
Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027. Obligations for AI systems embedded in products regulated under Annex I harmonisation legislation become applicable.
Three tiers of administrative fines.
Non-compliance carries administrative fines indexed to the severity of the breach. Figures below are drawn verbatim from Article 99.
Article 99(3): administrative fine of up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher, for non-compliance with the prohibition of the AI practices referred to in Article 5.
Article 99(4): administrative fine of up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher, for non-compliance with obligations of providers, authorised representatives, importers, distributors, deployers, notified bodies, or transparency obligations under Article 50.
Article 99(5): administrative fine of up to €7,500,000 or 1% of total worldwide annual turnover, whichever is higher, for the supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request.
Key provisions in structured reference.
A working reference to the Regulation's purpose, core definitions, general-purpose AI regime, and penalty structure.
Content sourced from official European Commission materials. Quotations are reproduced directly from Regulation (EU) 2024/1689 with minor formatting only.
Purpose of the Regulation.
Improving the functioning of the internal market while ensuring a high level of protection of health, safety, and fundamental rights.
The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
Definition of an AI system.
The legal definition used throughout the Regulation. Technology-neutral by design.
'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Obligations for providers of GPAI.
Dedicated rules covering technical documentation, transparency to downstream providers, and copyright policy.
Providers of general-purpose AI models shall:
(a) draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities;
(b) draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems;
(c) put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790;
(d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.
The obligations in (a) and (b) do not apply to open-source GPAI models whose parameters, weights, architecture and usage information are publicly available — with the exception of models classified as posing systemic risk.
Administrative fines.
Three tiers of maximum administrative fines tied to the severity of the obligation breached.
Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to EUR 35 000 000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Non-compliance with any of the following provisions related to operators or notified bodies, other than those laid down in Article 5, shall be subject to administrative fines of up to EUR 15 000 000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request shall be subject to administrative fines of up to EUR 7 500 000 or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
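The "whichever is higher" rule in each tier is plain arithmetic: compare the fixed ceiling with the turnover-based ceiling and take the larger. A minimal sketch (amounts in euros, percentage as a whole number; illustrative only, not legal advice):

```python
def max_fine(fixed_ceiling: int, turnover_pct: int, annual_turnover: int) -> float:
    """Maximum administrative fine under Article 99: the higher of the fixed
    amount and the percentage of total worldwide annual turnover."""
    return max(fixed_ceiling, annual_turnover * turnover_pct / 100)

# Article 5 breach by an undertaking with €1bn turnover:
# 7% of €1,000,000,000 = €70,000,000, which exceeds the €35,000,000 floor
print(max_fine(35_000_000, 7, 1_000_000_000))  # 70000000.0
# For a smaller undertaking (€100m turnover), the fixed ceiling governs
print(max_fine(35_000_000, 7, 100_000_000))    # 35000000.0
```

This is why the turnover percentage, not the headline euro figure, is the binding ceiling for large undertakings.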
Phased application schedule.
The dates at which different chapters of the Regulation become applicable.
This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.
It shall apply from 2 August 2026. However:
(a) Chapters I and II shall apply from 2 February 2025;
(b) Chapter III Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August 2025, with the exception of Article 101;
(c) Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027.
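The schedule above amounts to a lookup from provision group to application date. A hypothetical summary table (the dates are from Article 113; the group labels are our own shorthand):

```python
from datetime import date

# Dates from Article 113 of Regulation (EU) 2024/1689; labels are informal
APPLICATION_DATES = {
    "prohibitions (Chapters I and II)": date(2025, 2, 2),
    "GPAI, governance, penalties (Ch. III s.4, V, VII, XII, Art. 78)": date(2025, 8, 2),
    "general application": date(2026, 8, 2),
    "Annex I high-risk systems (Article 6(1))": date(2027, 8, 2),
}

def applicable_on(provision: str, when: date) -> bool:
    """Whether a provision group is already applicable on a given date."""
    return when >= APPLICATION_DATES[provision]

print(applicable_on("prohibitions (Chapters I and II)", date(2025, 6, 1)))  # True
print(applicable_on("general application", date(2025, 6, 1)))               # False
```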
Definitions from the Regulation.
Article 3 of Regulation (EU) 2024/1689 provides the legal definitions used across the Act. Each term below is reproduced verbatim. Where helpful, a plain-language note is provided separately.
AI system
The core definition governing the scope of the Regulation.
'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The definition is deliberately broad and technology-neutral. It covers machine learning systems, logic-based systems, and systems that combine both — provided they generate outputs influencing environments from inputs they receive.
Provider
The entity that develops or commissions an AI system and places it on the market.
'provider' means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Deployer
The entity that uses an AI system under its authority (outside of personal non-professional use).
'deployer' means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
Placing on the market
The first making available of an AI system on the Union market.
'placing on the market' means the first making available of an AI system or a general-purpose AI model on the Union market.
Putting into service
The supply of an AI system for first use in the Union.
'putting into service' means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.
Risk
Probability of occurrence combined with severity.
'risk' means the combination of the probability of an occurrence of harm and the severity of that harm.
Operator
Collective term for everyone across the AI value chain.
'operator' means a provider, product manufacturer, deployer, authorised representative, importer or distributor.
High-risk AI system
A system classified as high-risk under either route of Article 6.
A high-risk AI system is one that is either: (A) a safety component of, or itself, a product covered by EU harmonisation legislation in Annex I requiring third-party conformity assessment; or (B) used in one of the eight domains listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). Full legal definition is in Article 6.
General-purpose AI model
A model trained at scale, displaying generality, and capable of integration into many downstream systems.
A general-purpose AI model is one that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market. Article 53 sets out the obligations on GPAI providers.
Systemic risk
A classification that triggers additional obligations on general-purpose AI providers.
A general-purpose AI model shall be presumed to have high impact capabilities when the cumulative amount of computation used for its training measured in floating point operations is greater than 10²⁵.
A GPAI model with systemic risk triggers additional obligations including model evaluation, adversarial testing, serious incident reporting to the European Commission, and cybersecurity protection measures.
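The compute presumption is a strict threshold comparison: cumulative training compute greater than 10²⁵ floating-point operations triggers the presumption of high-impact capabilities. A minimal sketch (illustrative only; the presumption is rebuttable and the Commission may adjust the threshold):

```python
FLOP_THRESHOLD = 1e25  # cumulative training compute, per the Regulation

def presumed_high_impact(training_flops: float) -> bool:
    """Presumption of high-impact capabilities for a GPAI model,
    based on the strictly-greater-than compute threshold."""
    return training_flops > FLOP_THRESHOLD

print(presumed_high_impact(3e25))  # True
print(presumed_high_impact(5e24))  # False
print(presumed_high_impact(1e25))  # False: threshold is strictly "greater than"
```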
Go deeper into the EU AI Act.
This page is an entry point. Continue into the full legal text, emerging research, or the developments shaping application across the Union — all on dedicated ART25 pages.
Legal text
The full text of Regulation (EU) 2024/1689, structured for navigation — every article, every annex, on an internal ART25 page.
Research
Curated academic and institutional research examining interpretation, scope, and operational implications of the Regulation.
Developments
Implementing acts, delegated acts, guidance from the European AI Office, and milestones from the national supervisory authorities.
Make the AI Act a governance advantage.
We work with leadership teams to design AI governance postures that go beyond compliance — cleaner AI estates, sharper accountability, and the trust of regulators and customers.
