
Monitoring Actions under AI Act

This topic addresses "monitoring actions" as a distinct subject under the AI Act, encompassing systematic oversight procedures, compliance verification, and market surveillance activities that are not fully captured by more general monitoring topics.

Keywords: monitoring actions, systematic monitoring, compliance monitoring, performance monitoring, market surveillance monitoring, ongoing monitoring, monitoring procedures, monitoring requirements

Overview

Legal Framework

The AI Act and Digital Services Act (DSA) establish distinct but related principles for monitoring. AI Act Recital 32 characterizes the use of real-time remote biometric identification by law enforcement as "particularly intrusive," creating a risk of a "feeling of constant surveillance." This implies a need for strict, purpose-bound monitoring of such high-risk AI systems to prevent fundamental rights violations. Conversely, DSA Recital 30 establishes a general prohibition against imposing de jure or de facto general monitoring obligations on providers of intermediary services. This prohibition is a cornerstone of the DSA's liability framework, though it does not preclude monitoring obligations in specific cases or pursuant to national authorities' orders.

Practical Application

These provisions create a bifurcated regime. For providers of high-risk AI systems under the AI Act, particularly in sensitive domains like law enforcement, the rationale of Recital 32 supports proactive, systematic monitoring for compliance with strict requirements and fundamental rights safeguards. For providers of intermediary services under the DSA, the default position from Recital 30 is a shield against obligations to generally monitor all user content to identify illegal material. Monitoring, if required, must be targeted and specific, such as acting on a valid court order or a specific notice about illegal content. The European Court of Justice's jurisprudence on the e-Commerce Directive's analogous provision (Article 15) informs this interpretation, consistently ruling against generalized monitoring obligations.

Key Considerations

  • Distinguish Between Regimes: An organization must first determine if its activity falls under the AI Act's high-risk/systemic rules or the DSA's intermediary service rules, as the permissible scope of monitoring actions differs fundamentally.
  • Implement Targeted, Not General, Monitoring (DSA Context): For DSA-covered services, any compliance monitoring system must be designed to avoid constituting a de facto general monitoring obligation. Systems should be triggered by specific, legally grounded requests or identifiable risks.
  • Justify High-Risk AI Monitoring (AI Act Context): For AI Act high-risk systems, especially in sensitive applications, documented monitoring procedures must be proportionate, focused on compliance and risk mitigation, and designed to prevent the "constant surveillance" effect cautioned against in Recital 32.
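The triage logic in the considerations above can be sketched in code. This is an illustrative compliance-screening sketch, not legal advice: the `Regime` categories, `MonitoringRequest` fields, and permissibility rules are simplified assumptions layered on the recitals discussed above, and any real assessment requires case-by-case legal analysis.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Regime(Enum):
    """Simplified regime classification from the 'Distinguish Between Regimes' step."""
    AI_ACT_HIGH_RISK = auto()   # high-risk AI system under the AI Act
    DSA_INTERMEDIARY = auto()   # intermediary service under the DSA
    OUT_OF_SCOPE = auto()

@dataclass
class MonitoringRequest:
    regime: Regime
    has_specific_legal_basis: bool  # e.g. a court order or a specific notice
    is_general_scan: bool           # would monitor all content/users indiscriminately

def monitoring_plausibly_permitted(req: MonitoringRequest) -> bool:
    """Rough first-pass screen mirroring the bifurcated regime described above.

    AI Act high-risk context: proactive compliance monitoring is expected,
    but must stay proportionate and purpose-bound (cf. Recital 32's caution
    against a 'constant surveillance' effect).

    DSA context: no general monitoring obligation (Recital 30); only
    targeted monitoring with a specific legal basis passes the screen.
    """
    if req.regime is Regime.AI_ACT_HIGH_RISK:
        # Targeted compliance monitoring is supported; indiscriminate
        # surveillance without a specific basis is flagged for review.
        return req.has_specific_legal_basis or not req.is_general_scan
    if req.regime is Regime.DSA_INTERMEDIARY:
        # Must be both targeted and legally grounded.
        return req.has_specific_legal_basis and not req.is_general_scan
    return False

# Example: a DSA intermediary acting on a court order, scoped to specific content.
request = MonitoringRequest(Regime.DSA_INTERMEDIARY,
                            has_specific_legal_basis=True,
                            is_general_scan=False)
print(monitoring_plausibly_permitted(request))  # True
```

The design choice to treat the two regimes in separate branches reflects the bifurcation described in the Practical Application section: the DSA branch is restrictive by default, while the AI Act branch presumes monitoring is required but constrained.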
