AI Risk Assessment
The AI Act employs a risk-based regulatory approach: AI systems must be assessed and classified by risk to determine which practices are prohibited and which obligations apply. This assessment is distinct from a general DPIA and warrants dedicated coverage.
Overview
Legal Framework
The AI Act's regulatory approach rests on a risk-based framework, articulated in Recital 26. This recital mandates a tiered system in which legal obligations are calibrated to the level of risk an AI system poses. The law requires providers and deployers to systematically assess and classify their AI systems to determine whether they fall into the prohibited, high-risk, or limited-risk categories. This classification dictates subsequent compliance steps, such as conformity assessments for high-risk systems or transparency obligations for certain limited-risk systems. Recital 76 further underscores that this risk assessment must specifically consider cybersecurity threats, including AI-specific attacks such as data poisoning and adversarial attacks, as integral to evaluating systemic risks.
Practical Application
The AI Act's risk classification is a distinct legal obligation separate from a General Data Protection Regulation (GDPR) Data Protection Impact Assessment (DPIA). While a DPIA under Article 35 GDPR focuses on risks to the rights and freedoms of natural persons from data processing, the AI Act's risk assessment has a broader scope. It evaluates risks to health, safety, and fundamental rights that may arise from the AI system's functionality and use, irrespective of whether personal data is processed. The commentary from Tekst & Commentaar on the GDPR highlights that processing of special category data inherently carries significant risks to fundamental rights, a principle that informs the AI Act's stricter scrutiny of high-risk AI systems that process such data. Organizations must therefore conduct a dedicated AI risk assessment that maps the system's intended purpose and technical characteristics against the classification criteria in Annexes I, II, and III of the AI Act.
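The mapping exercise described above can be prototyped as an internal triage aid. Below is a minimal sketch in Python, assuming a simplified keyword match against an abridged and purely illustrative list of Annex III areas; the names `AISystemProfile` and `classify` are hypothetical, and the output is a provisional tier for internal triage, not a legal determination under Article 6.

```python
# Minimal sketch of an internal AI Act risk-classification checklist.
# The area keywords below are illustrative and abridged; the authoritative
# criteria are Article 5, Article 6 and the Annexes to the AI Act.
from dataclasses import dataclass

# Illustrative, non-exhaustive keywords loosely based on Annex III high-risk areas.
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

@dataclass
class AISystemProfile:
    intended_purpose: str
    is_prohibited_practice: bool = False       # e.g. social scoring (Article 5)
    is_safety_component: bool = False          # Annex I harmonised-product route
    interacts_with_natural_persons: bool = False
    processes_special_category_data: bool = False

def classify(profile: AISystemProfile) -> str:
    """Return a provisional risk tier for internal triage only."""
    if profile.is_prohibited_practice:
        return "prohibited"
    purpose = profile.intended_purpose.lower()
    if profile.is_safety_component or any(area in purpose for area in ANNEX_III_AREAS):
        # Special category data does not change the tier, but flags the
        # heightened scrutiny discussed in the text above.
        flag = " (special category data: heightened scrutiny)" if profile.processes_special_category_data else ""
        return "high-risk" + flag
    if profile.interacts_with_natural_persons:
        return "limited-risk"   # transparency obligations
    return "minimal-risk"

# Example: a CV-screening tool used for employment decisions.
profile = AISystemProfile(
    intended_purpose="CV screening for employment and worker management",
    interacts_with_natural_persons=True,
    processes_special_category_data=True,
)
print(classify(profile))  # high-risk (special category data: heightened scrutiny)
```

A triage output like this should always be reviewed against the legal texts themselves, since keyword matching cannot capture the intended-purpose analysis that the AI Act requires.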
Key Considerations
- Conduct a standalone AI risk assessment prior to market placement or deployment, using the AI Act's annexes as a classification guide. Do not rely solely on a GDPR DPIA, as the scopes and legal triggers differ.
- Integrate AI-specific cybersecurity threat analysis (e.g., resilience against model evasion or data poisoning) into the risk assessment process, as required by Recital 76; a first-pass robustness probe is sketched after this list.
- For AI systems involving special category personal data, apply heightened scrutiny; the GDPR commentary confirms such processing is inherently high-risk, which will strongly influence the AI system's final risk classification under the AI Act.
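The cybersecurity item above can be given a concrete, if crude, starting point. The sketch below is a first-pass evasion-robustness probe only and does not cover data poisoning; `model.predict` is a hypothetical interface standing in for whatever prediction API the system exposes, and the random-noise perturbation is an assumption chosen for simplicity.

```python
# Minimal sketch of an evasion-robustness probe for the cybersecurity part of
# the risk assessment. `model.predict` is a hypothetical interface: any object
# with a predict() method mapping a batch of feature vectors to labels will do.
import numpy as np

def evasion_robustness_rate(model, X: np.ndarray, epsilon: float = 0.05,
                            n_trials: int = 20, seed: int = 0) -> float:
    """Fraction of inputs whose predicted label stays unchanged under small
    random perturbations of magnitude `epsilon` (a crude evasion proxy)."""
    rng = np.random.default_rng(seed)
    baseline = np.asarray(model.predict(X))
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        perturbed = np.asarray(model.predict(X + noise))
        stable &= (perturbed == baseline)
    return float(stable.mean())
```

A low stability rate would be recorded as a finding in the risk assessment and would normally prompt stronger, targeted testing (for example gradient-based attacks) and a review of training-data provenance to address poisoning risks; this probe is not a substitute for dedicated adversarial testing.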
Laws (68)
Recital 155
Recital 157
Recital 158
Recital 159
Recital 160
Recital 165
Recital 166
Recital 171
Recital 173
Recital 174
Recital 177
Recital 178
Article 6
Classification rules for high-risk AI systems
Article 16
Obligations of providers of high-risk AI systems
EU DATABASE FOR HIGH-RISK AI SYSTEMS
HIGH-RISK AI SYSTEMS
Recital 75
Recital 76
Recital 77
Case Law (1)
Guidance (3)
Guidelines 4/2019 on Article 25 Data Protection by Design and by Default Version 2.0 Adopted on 20 October 2020
Guidelines on data protection by design and by default
Guidelines 1/2018 on certification and identifying certification criteria in accordance with Articles 42 and 43 of the Regulation
Guidelines on certification and identifying certification criteria
Guidelines 1/2018 on certification and identifying certification criteria in accordance with Articles 42 and 43 of the Regulation (Dutch-language version)
Certification guidelines
News (11)
A call to EU legislators: protect rights and reject the call to delete transparency safeguard in AI Act
We, the undersigned organisations and individuals, urge you in the strongest possible terms to reject the deletion of the Article 49(2) transparency safeguard for high-risk AI systems that is proposed in the AI Omnibus. This transparency safeguard ensures that providers of AI systems cannot circumvent the core obligations of the AI Act. (Source: Access Now.)
In short:
EU News
'This briefing analyses the establishment of the European Anti-Money Laundering Authority (AMLA) as a cornerstone of the EU’s 2024 Anti-Money Laundering/Countering the Financing of Terrorism (AML/CFT) legislative reform. As AMLA formally began its operations in the summer of 2025, a key question ...
The AI Act isn’t enough: closing the dangerous loopholes that enable rights violations
While the EU's AI Act aims to regulate high-risk AI systems, it is undermined by major loopholes that allow their unchecked use in the context of national security and law enforcement. These exemptions risk enabling, among other things, mass surveillance of protests and discriminatory migration practices. To prevent this, EDRi affiliate Danes je nov dan has published recommendations for Slovenia to adopt stricter national safeguards and transparent oversight mechanisms. (Source: EDRi.)
What Happened to the Risk-Based Approach to Data Transfers?
The GDPR incorporates the risk-based approach (RBA) for all obligations of the controller. Where the transfer rules are stated as obligations of the controller (rather than as absolute principles), the RBA of Article 24 therefore applies. Contrary to what the DPAs assume, this is not contradicted by the ECJ in Schrems II nor by the EDPB recommendations on additional measures following the Schrems II judgment, according to Lokke Moerel, Professor of Global ICT Law at Tilburg University and a Dutch Cyber Security
Danish SA Declares Use of Google Analytics Unlawful Without Supplementary Measures
The Danish Data Protection Agency has looked into the tool Google Analytics and its settings, and the terms under which the tool is provided. On the basis of this review, the Danish Data Protection Agency concludes that the tool cannot, without more, be used lawfully. Lawful use requires the implementation of supplementary measures in addition to the settings provided by Google.
Irish Data Protection Commissioner Fines Instagram EUR 405M for Children's Privacy Violations
> The fine is the result of an investigation that began in 2020 and focused on the company’s processing of children’s personal data. Based on press reports, the investigation focused on children between the ages of 13 and 17 who were allowed to operate business or creator Instagram accounts. As a result, children’s phone numbers and email addresses were publicly accessible.
CNIL Proposes 60 Million Euros Fine Against French AdTech Company For Non-Compliance with GDPR
> The proposed fine follows complaints filed by privacy NGO ‘Privacy International’ against Criteo. […] Under the CNIL’s sanction procedure, Criteo has the right to respond to the report, both with respect to the alleged infringements and the proposed sanction.