GPAI Systemic Risk
This topic covers the classification and identification of general-purpose AI (GPAI) models that present systemic risk, a distinct regulatory category under the AI Act that requires dedicated treatment separate from the classification of high-risk AI systems.
Overview
Legal Framework
The classification of general-purpose AI (GPAI) models presenting systemic risk is governed by the AI Act's dedicated framework for such models (see in particular Article 51 AI Act). Recital 111 AI Act establishes the requirement to develop a methodology for this classification, defining systemic risk as stemming from "particularly high capabilities." A GPAI model is deemed to present systemic risk if it has high-impact capabilities, to be determined on the basis of appropriate technical tools and methodologies. Recital 112 AI Act mandates a specific classification procedure: a GPAI model meeting the applicable high-impact capability threshold shall be classified as a GPAI model with systemic risk. It also imposes a notification obligation on the provider to inform the European Commission and the AI Office within two weeks of meeting the threshold or becoming aware that the model meets it.
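The compute-based presumption behind this classification (Article 51(2) AI Act presumes high-impact capabilities where cumulative training compute exceeds 10^25 floating-point operations) can be sketched as a simple internal check. The class and function names below are illustrative, not prescribed by the Act, and the threshold may be adjusted by the Commission through delegated acts:

```python
from dataclasses import dataclass

# Article 51(2) AI Act: a GPAI model is presumed to have high-impact
# capabilities when the cumulative compute used for its training,
# measured in floating-point operations (FLOPs), is greater than 10^25.
# The Commission may adjust this threshold, so it is kept as a
# configurable parameter rather than buried in the logic.
PRESUMPTION_THRESHOLD_FLOPS = 1e25


@dataclass
class GpaiModel:
    """Minimal illustrative record of a GPAI model's training compute."""
    name: str
    training_compute_flops: float


def presumed_systemic_risk(model: GpaiModel,
                           threshold: float = PRESUMPTION_THRESHOLD_FLOPS) -> bool:
    """Return True if the model meets the compute-based presumption of
    high-impact capabilities under Article 51(2) AI Act."""
    return model.training_compute_flops > threshold
```

Note that the presumption is only one route into the classification; the Commission can also designate models based on other criteria, so a negative result from a check like this does not by itself rule out classification.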
Practical Application
The legal commentary emphasizes that the classification triggers a distinct and stringent regulatory regime, separate from the standard high-risk AI system rules. The methodology for identifying high-impact capabilities, which is central to the classification, will be further specified through delegated acts. In practice, the obligation rests on the GPAI provider to perform an internal assessment against the forthcoming technical criteria. The two-week notification deadline is strict and begins from the moment the provider has knowledge that the threshold is met, creating a significant compliance imperative for continuous monitoring of model capabilities and performance. Failure to notify can lead to enforcement action by the AI Office.
Key Considerations
- Proactive Capability Assessment: Providers of powerful GPAI models must implement internal governance to continuously evaluate whether their models meet the evolving technical criteria for "high-impact capabilities" that trigger systemic risk classification.
- Strict Notification Timeline: Organizations must have a compliance process ready to execute the mandatory notification to the Commission and AI Office within the strict two-week window from the date the classification threshold is met or identified.
- Regime Distinction: Classification as a GPAI model with systemic risk subjects the provider to a separate set of obligations (e.g., model evaluation, risk assessment, incident reporting) under Chapter V of the AI Act, which are more extensive than those for non-systemic-risk GPAI models.
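The two-week notification window described above can be tracked with a small compliance helper. This is an illustrative sketch, assuming the clock runs from the day the provider becomes aware the threshold is met; it is a calendar aid, not a legal computation of the deadline:

```python
from datetime import date, timedelta

# As noted above, the provider must notify the Commission and the AI
# Office within two weeks of meeting the high-impact capability
# threshold or becoming aware that it is met.
NOTIFICATION_WINDOW = timedelta(weeks=2)


def notification_deadline(awareness_date: date) -> date:
    """Latest date for notifying the Commission and the AI Office,
    counting two weeks from the awareness date."""
    return awareness_date + NOTIFICATION_WINDOW


def is_overdue(awareness_date: date, today: date) -> bool:
    """True once the two-week notification window has lapsed."""
    return today > notification_deadline(awareness_date)
```

A helper like this would typically feed an internal escalation workflow, so that capability-monitoring results automatically start the notification clock.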
Laws (65)
Article 94
Procedural rights of economic operators of the general-purpose AI model
Article 101
Fines for providers of general-purpose AI models
Article 111
AI systems already placed on the market or put into service and general-purpose AI models already placed on the market
Recital 97
Recital 104
Recital 120
Recital 136
Recital 163
Recital 164
Recital 173
Article 51
Classification of general-purpose AI models as general-purpose AI models with systemic risk
Article 55
Obligations of providers of general-purpose AI models with systemic risk
Article 90
Alerts of systemic risks by the scientific panel
Recital 110
Recital 111
Recital 112
Recital 113
Recital 114
Recital 115
Guidance (5)
Guidelines 02/2021 on virtual voice assistants
Guidelines on virtual voice assistants
A virtual voice assistant (VVA) is a service that understands voice commands and executes them or mediates with other IT systems if needed. VVAs are currently available on most smartphones and tablets, traditional computers, and, in the latest years, even standalone devices like smart speakers. VVAs act as interface between users and their computing devices and online services such as search engines or online shops. Due to their role, VVAs have access to a huge amount of personal...
Guidelines 01/2020 on the processing of personal data in the context of connected vehicles and mobility related applications
Guidelines on connected vehicles
Guidelines 8/2020 on the targeting of social media users
Guidelines on the targeting of social media users
Guidelines 02/2021 on virtual voice assistants
Guidelines on virtual voice assistants
A virtual voice assistant (VVA) is a service that understands voice commands and executes them, or acts as an intermediary to other IT systems where needed. Today a VVA is available as an option on most smartphones, tablets and conventional computers, and in recent years even on standalone devices such as smart speakers. A VVA functions as an interface between the user and their device or an online service such as a search engine...
Guidelines 1/2020 on processing personal data in the context of connected vehicles and mobility related applications
Guidelines on processing of personal data through video devices
News (8)
EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects
We recently introduced a policy governing large language model (LLM) assisted contributions to EFF's open-source projects. At EFF, we strive to produce high quality software tools, rather than simply generating more lines of code in less time. We now explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human. LLMs excel at producing code that looks mostly human generated, but can often have underlying bugs that can be...
Artificial Insecurity: access and availability in the age of AI
In the third part of our blog series on the dodgy digital security practices underlying advanced AI tools, we look at how the availability of systems is impacted by the proliferation of large language models.
Artificial Insecurity: threats to information integrity
In the second part of our series on the dodgy digital security practices underlying advanced AI tools, we examine how LLMs threaten information integrity.
Artificial Insecurity: how AI tools compromise confidentiality
In the first part of our blog series on the dodgy digital security practices underlying advanced AI tools, we unpack how LLMs can jeopardize the confidentiality of people’s data.
Berlin Group adopts papers on LLMs and on data sharing
Artificial intelligence: the action plan of the CNIL
In brief: the CNIL has been working for several years to anticipate and respond to the issues raised by AI. In 2023 it is extending its action on augmented cameras and intends to expand its work to generative AI, large language models and derived applications (especially chatbots). Its action plan is structured around four strands: understanding the functioning of AI systems and their impact on people; enabling and guiding the development of privacy-friendly AI; federate and...
Is the AI Act caging ChatGPT and other General Purpose Artificial Intelligence systems?
> The growth of generative artificial intelligence systems has led EU lawmakers to focus on General Purpose AI in drafting the AI Act, which will set the framework governing artificial intelligence in the European Union. As previously reported, the EU Parliament has already broadened the definition of artificial intelligence for the purposes of the AI Act…
Overview of EU Strategy for Data: Digital Services Act
> The Digital Services Act was published in the Official Journal of the European Union Oct. 27. The DSA, which harmonizes conditions for the provision of intermediary services and increases transparency requirements for online intermediaries, will enter into force Nov. 16. In the latest installment of a multipart series, the IAPP Research and Insights team provides privacy professionals with an overview of the DSA, including the law's objectives, key requirements and enforcement.