The European Artificial Intelligence Act (AI Act), the world’s first comprehensive regulation on artificial intelligence, is now in effect. The Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights. Its primary goal is to establish a harmonised internal market for AI in the EU, fostering the adoption of the technology and creating a supportive environment for innovation and investment.
The AI Act introduces a forward-looking definition of AI, based on a product safety and risk-based approach in the EU. It distinguishes four levels of risk:
Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. Due to their minimal risk to citizens’ rights and safety, these systems face no obligations under the AI Act. Companies can voluntarily adopt additional codes of conduct.
Specific transparency risk: AI systems like chatbots must disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated (a sketch of one possible marking approach follows this list).
High risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity (illustrated after this list), detailed documentation, clear user information, and a high level of robustness, accuracy, and cybersecurity. Importantly, these high-risk AI systems will also be subject to human oversight, ensuring that critical decision-making processes are not solely reliant on AI. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Such high-risk AI systems include, for example, AI systems used for recruitment, to assess whether somebody is entitled to a loan, or to run autonomous robots.
Unacceptable risk: AI systems considered a clear threat to people’s fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, such as emotion recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
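The Act does not prescribe a particular marking technique for synthetic content. As a minimal sketch of one possible approach, assuming a provider chooses to embed a provenance label in image metadata, marking and detection might look like this in Python (the `ai_generated` key and schema are illustrative assumptions, not requirements of the Act):

```python
# Illustrative sketch only: the AI Act requires machine-readable marking of
# synthetic content, but this particular technique and schema are assumptions.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Embed a hypothetical provenance label in a PNG's metadata."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical key
    metadata.add_text("generator", generator)  # e.g. model name and version
    image.save(out_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Detect the label, as a downstream checker might."""
    with Image.open(path) as image:
        return image.text.get("ai_generated") == "true"
```

Plain metadata of this kind does not survive re-encoding, which is why providers are likely to converge on shared provenance standards or watermarking so that markers remain detectable by third parties.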
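Similarly, the Act leaves the technical form of the high-risk logging obligation to providers and harmonised standards. As a rough sketch under those assumptions, with hypothetical field names, a high-risk system might append each automated decision to a JSON-lines audit trail together with the responsible human reviewer:

```python
# Rough sketch of decision logging for a high-risk AI system; the field
# names and JSON-lines format are illustrative, not prescribed by the AI Act.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="decisions.log", level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_decision(system_id: str, inputs: dict, output: str,
                 confidence: float, reviewer: str | None) -> None:
    """Append one automated decision to the audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,            # what the model saw
        "output": output,            # what it decided
        "confidence": confidence,
        "human_reviewer": reviewer,  # supports the human-oversight duty
    }
    logger.info(json.dumps(record))
```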
To complement this system, the AI Act also introduces rules for so-called general-purpose AI models. These highly capable AI models are designed to perform a wide variety of tasks, like generating human-like text. General-purpose AI models are increasingly used as components of AI applications. The AI Act will ensure transparency along the value chain and address possible systemic risks of the most capable models.
Application and enforcement of the AI rules
Member States have until 2 August 2025 to designate national competent authorities who will oversee the application of the rules for AI systems and carry out market surveillance activities. The Commission’s AI Office will be the key implementation body for the AI Act at the EU level and the enforcer of the rules for general-purpose AI models.
Three advisory bodies will support the implementation of the rules. The European Artificial Intelligence Board will ensure a uniform application of the AI Act across EU Member States. It will act as the main body for cooperation between the Commission and the Member States. A scientific panel of independent experts will offer technical advice and input on enforcement. This panel can issue alerts to the AI Office about risks associated with general-purpose AI models. The AI Office can also receive guidance from an advisory forum composed of diverse stakeholders.
Companies that do not comply with the rules will be fined: up to 7% of global annual turnover for violations of banned AI applications, up to 3% for violations of other obligations, and up to 1.5% for supplying incorrect information.
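These ceilings scale with company size. As a quick worked example, the percentages above applied to a hypothetical firm with €2 billion in global annual turnover (the rates come from the Act; the turnover figure is invented):

```python
# Illustrative arithmetic only: applies the percentage ceilings quoted above
# to a hypothetical company's global annual turnover.
FINE_CAPS = {
    "banned AI applications": 0.07,   # up to 7%
    "other obligations":      0.03,   # up to 3%
    "incorrect information":  0.015,  # up to 1.5%
}

turnover_eur = 2_000_000_000  # hypothetical global annual turnover

for violation, rate in FINE_CAPS.items():
    print(f"{violation}: up to €{turnover_eur * rate:,.0f}")
# banned AI applications: up to €140,000,000
# other obligations: up to €60,000,000
# incorrect information: up to €30,000,000
```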
Next steps
The majority of the AI Act’s rules will start applying on 2 August 2026. However, the prohibitions on AI systems deemed to present an unacceptable risk will apply six months after entry into force, while the rules for general-purpose AI models will apply after 12 months.
To bridge the transitional period before full implementation, the Commission has launched the AI Pact. This initiative invites AI developers to voluntarily adopt critical obligations of the AI Act ahead of the legal deadlines.
The Commission is also developing guidelines to define and detail how the AI Act should be implemented, and facilitating co-regulatory instruments such as standards and codes of practice. The Commission has opened a call for expressions of interest to participate in drawing up the first general-purpose AI Code of Practice, along with a multi-stakeholder consultation giving all stakeholders the opportunity to have their say on the first Code of Practice under the AI Act.