AI Act: Europe defines a classification of risks associated with AI
Returning from the World AI Cannes Festival, held at the Palais des Festivals in Cannes from February 13 to 15, 2025, we can see that the landscape of products built on artificial intelligence (AI) continues to expand significantly. During previous editions, attention focused on ChatGPT, a conversational AI that still arouses keen interest today. More recently, its Chinese competitor, DeepSeek, has emerged on the market. Although DeepSeek shows comparable performance, gray areas remain regarding how it collects and uses users' personal data. We have also written an article detailing our position on whether or not DeepSeek should be integrated into our own AI solutions.

At the same time, other models have emerged that emphasize strict respect for user data. Mistral AI, a French start-up, stands out by positioning itself as one of the solutions most compliant with the General Data Protection Regulation (GDPR). Despite this reputation, however, Mistral AI is currently the subject of a complaint for an alleged personal data breach, calling its privacy practices into question.

Faced with the rapid proliferation of this software and often opaque data protection policies, Europe reacted by adopting the AI Act last year. This legislative framework aims to regulate the use of AI systems that interact with citizens of the European Union, thus ensuring the protection of their fundamental rights. Although we are not legal experts, we would like to provide an overview of the new legal obligations related to the use of AI systems, to inform you of the current implications of this legislation.
1. A new European regulation on AI

In June 2024, the European Parliament adopted Regulation (EU) 2024/1689 (yes, another one!), establishing harmonized rules for artificial intelligence (AI) within the European Union. This legislative framework aims to regulate the development, marketing and use of AI systems while protecting the fundamental rights of European citizens. One of the main objectives of this legislation is to ensure that AI systems respect the Union's values, including the protection of fundamental rights, democracy and the rule of law.

The regulation establishes a classification of AI systems according to their level of risk: minimal, limited, high and unacceptable. Systems with unacceptable risk, such as certain facial recognition applications in public spaces, are prohibited, while high-risk systems are subject to strict requirements before being placed on the market.

[Figure: the AI risk pyramid]

2. Minimal AI risks

AI systems classified as presenting minimal risk include video games that use AI to animate virtual characters, or spam filters that automatically sort emails. These applications do not pose a significant threat to the security or fundamental rights of citizens and, as such, are not subject to specific regulations under the AI Act. Companies and developers can therefore deploy them without any particular restrictions within the European Union.

3. Limited-risk AI

This category includes automated conversational agents (chatbots, such as dialogg.ai) as well as deepfakes, or "hyperfakes". Deepfakes leverage artificial intelligence to generate or alter videos, images or audio files so realistically that they can be extremely difficult or impossible to detect. Due to the risks of manipulation and disinformation, these technologies are subject to strict transparency obligations.
Regulations require companies and developers to clearly inform users when they interact with an AI, so that they can make an informed decision about whether or not to continue that interaction. For example, deepfakes must be explicitly labeled as artificially generated, and chatbots must clearly signal that they are not human.

4. High-risk AI

High-risk AI systems are those that can have a significant impact on people's safety or fundamental rights. Notable examples include:

- AI-driven robot-assisted surgery applications, where an error could endanger patients' lives.
- Automated sorting of CVs in recruitment processes, which can introduce discriminatory bias.
- Credit scoring, which can restrict citizens' access to financial loans.
- Automated examination of visa applications, which directly influences individuals' freedom of movement.

To be authorized, these systems must meet strict requirements for quality, transparency and human oversight to reduce risks. Companies developing or using these technologies must rigorously assess their impact and monitor them constantly to prevent any abuse or malfunction.

5. AI presenting unacceptable risks

AI systems and models with unacceptable risk are strictly prohibited within the European Union and cannot be commercialized under any circumstances. These technologies are considered a direct threat to the fundamental rights of citizens, public safety or democracy. Among these prohibited systems are:

- AIs designed to manipulate human behavior in deceptive or coercive ways.
- Technologies that abolish free will, such as those that unconsciously influence critical decisions (voting, consent, purchases, etc.).
- Certain biometric mass surveillance systems that could infringe on privacy and individual freedoms.

By prohibiting these uses, the AI Act aims to preserve the fundamental values of the EU, guaranteeing that AI remains a tool serving humans and not a means of controlling them.
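The four risk tiers described above can be summed up in a small sketch. This is purely illustrative and not part of the regulation itself: the tier names, the example mappings and the `may_be_marketed` helper are our own simplification of the Act's scheme, using the examples cited in this article.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative simplification of the AI Act's four risk tiers."""
    MINIMAL = "minimal"            # e.g. spam filters, AI in video games
    LIMITED = "limited"            # e.g. chatbots, deepfakes (transparency duties)
    HIGH = "high"                  # e.g. CV screening, credit scoring (strict requirements)
    UNACCEPTABLE = "unacceptable"  # e.g. manipulative AI (prohibited outright)


# Hypothetical mapping of the examples mentioned in this article.
EXAMPLE_SYSTEMS = {
    "spam filter": RiskTier.MINIMAL,
    "video game NPC": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "CV screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "visa application review": RiskTier.HIGH,
    "behavioral manipulation": RiskTier.UNACCEPTABLE,
}


def may_be_marketed(tier: RiskTier) -> bool:
    """Unacceptable-risk systems cannot be placed on the EU market at all."""
    return tier is not RiskTier.UNACCEPTABLE
```

In this simplified view, everything except the unacceptable tier can reach the market, but the obligations that come with doing so grow with each tier, as the next section details.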
6. Corporate Obligations

Depending on the level of risk associated with an AI system, the European rules establish new obligations.

a) Transparency and accountability requirements

Companies will have to provide transparent information about the AI they use. They will also have to ensure copyright compliance and clearly flag manipulated content (such as deepfakes), thus ensuring responsible and informed use of the technology. AI systems with unacceptable risks are outright banned within the European Union (with a few exceptions).

b) For high-risk AIs

High-risk AI systems, such as ChatGPT or DeepSeek, must:

- comply with strict obligations, particularly with regard to risk assessment;
- provide increased transparency;
- benefit from human supervision, thus guaranteeing the security and fundamental rights of EU citizens.

These conditions will be assessed before such systems are placed on the market and throughout their life cycle. These high-risk technologies can only be placed on the market if they are subject to a prior conformity assessment.