
Prohibited AI Practices

This topic addresses prohibited AI practices under the AI Act: a distinct regulatory concept with its own restrictions, enforcement mechanisms, and compliance requirements.

Keywords: prohibited AI, banned AI practices, forbidden AI systems, Article 5 AI Act, high-risk AI, unacceptable risk

Overview

Legal Framework

The AI Act establishes a specific regime for prohibited AI practices. The primary legal basis for these prohibitions is found in the operative articles of the AI Act (Articles 5 et seq.), which are given context by Recitals 9 and 46. Recital 46 clarifies that certain AI practices are considered to pose an unacceptable risk to safety, livelihoods, and fundamental rights, thus warranting an outright ban rather than mere risk mitigation. Recital 9 anchors the Act within the New Legislative Framework, indicating that these prohibitions are harmonised rules directly applicable across the EU.

The law explicitly prohibits AI practices that contravene Union values. These include, but are not limited to:

  • AI systems deploying subliminal techniques or exploiting vulnerabilities to materially distort a person's behaviour in a manner that causes harm;
  • 'social scoring' by public authorities leading to detrimental treatment; and
  • the use of 'real-time' remote biometric identification in publicly accessible spaces for law enforcement, with narrowly defined exceptions.

Practical Application

The prohibited practices are defined as having an intrinsically harmful purpose or effect. Unlike the regime for high-risk AI systems, which focuses on compliance with mandatory requirements to manage risk, the rules on prohibited practices are absolute bans. Enforcement will be the responsibility of national market surveillance authorities, who can order the withdrawal or recall of non-compliant systems. The interpretation of key terms like "subliminal techniques," "material distortion," and "detrimental treatment" will be crucial in enforcement and will likely be shaped by guidance from the European AI Office and subsequent case law. For instance, the prohibition on exploiting vulnerabilities will require an assessment of the specific circumstances of the user group (e.g., children, persons with disabilities) and the context of use.

Key Considerations

  • Conduct a Purpose Audit: Scrutinise the intended purpose and design of your AI system. A practice is prohibited based on its objective functionality and intended use, regardless of its technical safety. Systems designed for subliminal manipulation or social scoring are unlawful per se.
  • Assess the Operational Context: For practices like remote biometric identification, the legality hinges entirely on the specific operational context (e.g., "real-time" use in "publicly accessible spaces" for a predefined law enforcement purpose). Any deployment outside the exhaustively listed exceptions in the Act is prohibited.
  • Proactive Design Compliance: Given the absolute nature of the bans, compliance must be engineered into the system from the initial design phase. Relying on post-hoc mitigations is not sufficient for practices falling under Article 5.
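The screening steps above can be sketched as a minimal triage aid. Everything in this example is a hypothetical illustration: the category names, the `purpose_audit` function, and the context flags are shorthand invented here, not legal definitions from the Act, and such a script cannot substitute for legal review of an actual system.

```python
from dataclasses import dataclass, field

# Hypothetical shorthand for Article 5 practice categories; the real
# provisions are worded far more precisely and require legal analysis.
PROHIBITED_PER_SE = {
    "subliminal_manipulation",     # subliminal techniques distorting behaviour
    "vulnerability_exploitation",  # exploiting vulnerabilities causing harm
    "social_scoring",              # social scoring leading to detrimental treatment
}

CONTEXT_DEPENDENT = {
    # 'real-time' remote biometric identification: banned in publicly
    # accessible spaces for law enforcement, save exhaustively listed exceptions
    "realtime_remote_biometric_id",
}

@dataclass
class PurposeAuditResult:
    system_name: str
    flags: list = field(default_factory=list)

def purpose_audit(system_name, intended_purposes, context=None):
    """Flag intended purposes that fall under, or near, the Article 5 bans."""
    context = context or {}
    result = PurposeAuditResult(system_name)
    for purpose in intended_purposes:
        if purpose in PROHIBITED_PER_SE:
            # prohibited by objective functionality, regardless of technical safety
            result.flags.append((purpose, "prohibited per se"))
        elif purpose in CONTEXT_DEPENDENT:
            # legality hinges on the operational context (space, timing, actor)
            if context.get("public_space") and context.get("law_enforcement"):
                result.flags.append((purpose, "prohibited absent listed exception"))
            else:
                result.flags.append((purpose, "context-dependent: legal review needed"))
    return result

audit = purpose_audit(
    "city-cctv-analytics",
    ["realtime_remote_biometric_id"],
    context={"public_space": True, "law_enforcement": True},
)
print(audit.flags)
```

The point of the sketch is the structural distinction drawn in the text: per-se bans turn only on intended purpose, while the biometric identification ban also requires assessing the operational context before any conclusion can be reached.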


News (5)

San Jose Can Protect Immigrants by Ending Flock Surveillance System

(This appeared as an op-ed published February 12, 2026 in the San Jose Spotlight, written by Huy Tran (SIREN), Jeffrey Wang (CAIR-SFBA), and Jennifer Pinsof.) As ICE and other federal agencies continue their assault on civil liberties, local leaders are stepping up to protect their communities. This includes pushing back against automated license plate readers, or ALPRs, which are tools of mass surveillance that can be weaponized against immigrants, political dissidents and other targets. In rec

A call to EU legislators: protect rights and reject the call to delete transparency safeguard in AI Act

We, the undersigned organisations and individuals, urge you in the strongest possible terms to reject the deletion of the Article 49(2) transparency safeguard for high-risk AI systems that is proposed in the AI Omnibus. This transparency safeguard ensures that providers of AI systems cannot circumvent the core obligations of the AI Act. The post A call to EU legislators: protect rights and reject the call to delete transparency safeguard in AI Act appeared first on Access Now.

The AI Act isn’t enough: closing the dangerous loopholes that enable rights violations

While the EU's AI Act aims to regulate high-risk AI systems, it is undermined by major loopholes that allow their unchecked use in the context of national security and law enforcement. These exemptions risk enabling, among others, mass surveillance of protests and discriminatory migration practices. To prevent this, EDRi affiliate Danes je nov dan has published recommendations for Slovenia to adopt stricter national safeguards and transparent oversight mechanisms.


Is the AI Act caging ChatGPT and other General Purpose Artificial Intelligence systems?

The growth of generative artificial intelligence systems has led EU lawmakers to focus on General Purpose AI in drafting the AI Act, which will set the framework governing artificial intelligence in the European Union. As previously reported, the EU Parliament has already broadened the definition of artificial intelligence for the purposes of the AI Act… The post Is the AI Act caging ChatGPT and other General Purpose Artificial Intelligence systems? appeared first on GamingTechLaw.