Post-Market Monitoring for AI Systems
Risk management systems require ongoing post-market monitoring to identify and respond to risks that emerge during real-world deployment. This is a distinct and critical component that warrants its own topic.
Overview
Legal Framework
Article 72 of the AI Act establishes the core obligation for providers of high-risk AI systems to implement a post-market monitoring (PMM) system. This system must actively and systematically collect, document, and analyze data on the AI system's performance throughout its lifecycle. The legal rationale, as noted in Recital 114, is to ensure that providers can identify and mitigate risks that emerge during real-world use and were not foreseeable at the time of the conformity assessment. The PMM system forms an integral part of the provider’s quality management system.
Practical Application
The T&C commentary interprets this as a dynamic, ongoing duty of vigilance. Providers must establish a documented PMM plan proportionate to the nature and risks of the AI system. This involves setting up mechanisms to gather feedback from users, monitoring the system's outputs for anomalies or deviations from expected performance, and tracking incidents and misuse. The collected data must be analyzed to determine whether the system continues to meet the safety and compliance requirements. If a new risk, or a significant change to an existing risk, is identified, the provider must immediately take the necessary corrective action, which may include updating the system, informing users, or, in serious cases, withdrawing the product.
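As an illustration only, the collect-document-analyze loop described above could be sketched in code. Every name and threshold here (PMMMonitor, baseline_accuracy, max_drift) is a hypothetical assumption for the sketch, not a term drawn from the AI Act or any harmonised standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a PMM loop: collect performance data,
# document it, analyze it for deviations, and surface findings
# that may call for corrective action. Names are illustrative.

@dataclass
class PerformanceSample:
    timestamp: datetime
    accuracy: float          # observed task accuracy for the period
    incident_reported: bool  # deployer/end-user incident flag

@dataclass
class PMMMonitor:
    baseline_accuracy: float     # performance declared at conformity assessment
    max_drift: float = 0.05     # tolerated deviation before escalation
    samples: list = field(default_factory=list)

    def record(self, sample: PerformanceSample) -> None:
        """Systematically collect and document performance data."""
        self.samples.append(sample)

    def analyze(self) -> list:
        """Return findings that may require corrective action."""
        findings = []
        for s in self.samples:
            if self.baseline_accuracy - s.accuracy > self.max_drift:
                findings.append(f"{s.timestamp.date()}: accuracy drift beyond tolerance")
            if s.incident_reported:
                findings.append(f"{s.timestamp.date()}: incident reported, investigate")
        return findings

# Example use: a sample well below baseline triggers a drift finding.
monitor = PMMMonitor(baseline_accuracy=0.92)
monitor.record(PerformanceSample(datetime.now(timezone.utc), 0.84, False))
```

In practice a real PMM system would feed such findings into the quality management system and, where warranted, into the corrective-action and incident-reporting processes; this sketch only shows the data-collection and analysis skeleton.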
Key Considerations
- Establish formal channels for receiving and processing feedback from deployers and end-users, as this is a critical source of post-market data.
- The PMM plan and its outputs must be used to update the system’s technical documentation and, where relevant, the EU declaration of conformity, ensuring an auditable trail of ongoing compliance.
- For general-purpose AI models with systemic risk, the obligations are enhanced, requiring specific model evaluation and adversarial testing protocols as part of PMM.
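A minimal sketch of the formal feedback channel mentioned in the first consideration, assuming a simple record-and-triage design. FeedbackRecord, triage, and the field names are illustrative assumptions, not requirements of the Act:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical feedback-channel sketch: capture deployer/end-user
# reports and separate items needing escalation from routine analysis.

@dataclass
class FeedbackRecord:
    received: date
    source: str              # e.g. "deployer" or "end-user"
    description: str
    serious_incident: bool   # may trigger separate reporting duties if True

def triage(records: list) -> tuple:
    """Split feedback into items to escalate vs. items for routine analysis."""
    escalate = [r for r in records if r.serious_incident]
    routine = [r for r in records if not r.serious_incident]
    return escalate, routine
```

The point of the structure is auditability: each record, and the outcome of its triage, can be retained as part of the trail that feeds updates to the technical documentation.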