
High-Risk AI Classification

This topic covers the rules for classifying AI systems as 'high-risk' under the EU AI Act, a distinct regulatory concept from general AI risk assessment that warrants its own treatment.


Overview

Legal Framework

The classification of an AI system as 'high-risk' is governed by Article 6 of the AI Act, which establishes a two-pronged legal test. First, an AI system must be intended for use as a safety component of a product, or be itself a product, covered by the Union harmonisation legislation listed in Annex I (e.g., machinery, medical devices, toys). Second, that product must be required to undergo a third-party conformity assessment under that sectoral legislation. Alternatively, an AI system is classified as high-risk if it falls within one of the specific use-cases listed in Annex III, which covers critical areas like biometrics, critical infrastructure, education, employment, and law enforcement. Recital 26 establishes the core rationale for this risk-based approach: rules must be tailored to the intensity and scope of the risks AI systems generate, with high-risk systems subject to a full suite of ex-ante conformity requirements.
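The two-pronged Article 6 test described above can be sketched as a simple decision function. This is an illustrative model only: the field names are invented for clarity and are not terminology from the Act itself.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical model of the facts relevant to the Article 6 test."""
    safety_component_of_annex_i_product: bool  # Art. 6(1)(a): safety component of / is an Annex I product
    third_party_assessment_required: bool      # Art. 6(1)(b): sectoral law mandates third-party conformity assessment
    annex_iii_use_case: bool                   # Art. 6(2): falls within a use case listed in Annex III

def is_high_risk(system: AISystem) -> bool:
    # Route 1 (Annex I): BOTH prongs must hold -- the system must be
    # (a component of) an Annex I product AND that product must require
    # third-party conformity assessment.
    annex_i_route = (system.safety_component_of_annex_i_product
                     and system.third_party_assessment_required)
    # Route 2 (Annex III): a listed use case suffices on its own.
    return annex_i_route or system.annex_iii_use_case
```

Note that a system whose product allows manufacturer self-declared conformity fails the second prong, so it is not high-risk via the Annex I route, exactly as the Practical Application section below explains.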

Practical Application

The primary interpretive guidance comes from the AI Act's annexes and from the European Commission's guidelines and delegated acts, which will further specify the classification criteria. The classification is objective and based on the system's intended purpose as declared by the provider, irrespective of its potential for misuse. For systems under Annex I, the key question is whether the product's governing legislation mandates a third-party assessment; if conformity can be declared by the manufacturer alone, the AI system is not high-risk via this route. For Annex III systems, classification follows from meeting the use-case description, subject to the Article 6(3) derogation: a provider may assess that the system does not pose a significant risk of harm (for example, because it performs only a narrow procedural task), provided the system does not perform profiling of natural persons and the assessment is documented. Authorities will scrutinize a provider's stated intended purpose against these annexes. If a system not listed in the annexes performs a similar function to a listed high-risk system, the Commission may, via a delegated act, update Annex III to include it, but providers cannot self-classify systems as high-risk on this basis.
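The Annex III route turns on whether the declared intended purpose falls within one of the eight listed areas. A minimal sketch of that check follows; the area labels are abbreviated paraphrases for illustration, and the authoritative wording is in Annex III of the Act itself.

```python
# Abbreviated, illustrative labels for the eight Annex III areas.
# Consult Annex III of the AI Act for the binding descriptions.
ANNEX_III_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def annex_iii_high_risk(intended_purpose_area: str) -> bool:
    """Annex III route (Art. 6(2)): high-risk if the declared intended
    purpose falls within one of the listed areas."""
    return intended_purpose_area.lower() in ANNEX_III_AREAS
```

In practice the real assessment compares the full intended-purpose statement against the detailed use-case descriptions, not just an area label; this sketch only captures the top-level gate.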

Key Considerations

  • Determine the Regulatory Pathway: First, ascertain if your AI system is a safety component of, or is itself, a product under Annex I legislation. If not, assess directly against the eight areas of Annex III.
  • Document the Intended Purpose: Rigorously document the system's intended purpose, as this is the legal anchor for classification. Marketing materials and technical documentation must align.
  • Monitor Annex Updates: Annex III is a living list. Providers must monitor delegated acts from the Commission that may add new high-risk use-cases, which could trigger a reclassification of existing systems.


News (3)

A call to EU legislators: protect rights and reject the call to delete transparency safeguard in AI Act

We, the undersigned organisations and individuals, urge you in the strongest possible terms to reject the deletion of the Article 49(2) transparency safeguard for high-risk AI systems that is proposed in the AI Omnibus. This transparency safeguard ensures that providers of AI systems cannot circumvent the core obligations of the AI Act. (Access Now)

The AI Act isn’t enough: closing the dangerous loopholes that enable rights violations

While the EU's AI Act aims to regulate high-risk AI systems, it is undermined by major loopholes that allow their unchecked use in the context of national security and law enforcement. These exemptions risk enabling, among others, mass surveillance of protests and discriminatory migration practices. To prevent this, EDRi affiliate Danes je nov dan has published recommendations for Slovenia to adopt stricter national safeguards and transparent oversight mechanisms.

Is the AI Act caging ChatGPT and other General Purpose Artificial Intelligence systems?

The growth of generative artificial intelligence systems has led EU lawmakers to focus on General Purpose AI in drafting the AI Act, which will set the framework governing artificial intelligence in the European Union. As previously reported, the EU Parliament has already broadened the definition of artificial intelligence for the purposes of the AI Act… (GamingTechLaw)