
Artificial intelligence: the action plan of the CNIL


Key points:

  • The CNIL has been undertaking work for several years to anticipate and respond to the issues raised by AI.
  • In 2023, it is extending its action on augmented cameras and expanding its work to generative AI, large language models and the applications derived from them (chatbots in particular).
  • Its action plan is structured around four strands:
    • understanding how AI systems work and their impact on people;
    • enabling and guiding the development of privacy-friendly AI;
    • federating and supporting innovative players in the AI ecosystem in France and Europe;
    • auditing and controlling AI systems to protect people.
  • This work will also make it possible to prepare for the entry into application of the draft European AI Regulation, which is currently under discussion.

The protection of personal data, a fundamental challenge in the development of AI

The development of AI is accompanied by challenges in the field of data protection and individual freedoms that the CNIL has been working to address for several years. Since the publication in 2017 of its report on the ethical challenges of algorithms and artificial intelligence, the CNIL has repeatedly taken positions on the issues raised by the new tools this technology brings about.

In particular, generative artificial intelligence (see box below) has been developing rapidly for several months, whether in the field of text and conversation, via large language models (LLMs) such as GPT-3, BLOOM or Megatron NLG and derived chatbots (ChatGPT or Bard), but also in image generation (DALL-E, Midjourney, Stable Diffusion, etc.) or speech (VALL-E).

These foundation models, and the technological building blocks that rely on them, already appear to have many applications across a variety of sectors. Nevertheless, the understanding of how they work, their possibilities and their limitations, as well as the legal, ethical and technical issues surrounding their development and use, remain largely under debate.

Considering that the protection of personal data is a major challenge in the design and use of these tools, the CNIL is publishing its action plan on artificial intelligence, which aims, among other things, to frame the development of generative AI.

What is generative AI?

Generative artificial intelligence refers to systems capable of creating text, images or other content (music, video, voice, etc.) from a human user’s instruction. These systems can produce new content from their training data. Their performance now approaches some content produced by people, thanks to the large amount of data used for their training. However, they require the user to specify their queries clearly in order to obtain the expected results. Real know-how is therefore developing around the composition of user queries (prompt engineering).

For example, the image below, entitled ‘Space Opera Theatre’, was generated by user Jason M. Allen using the Midjourney tool on the basis of a textual instruction describing his expectations (theatrical decor, togas, pictorial inspirations, etc.).

Generative AI: Space Opera Theatre - Jason M. Allen (2022)

Credit: Jason M. Allen (2022), CC0 license
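The composition of a query can be illustrated with a small sketch. The `build_prompt` helper below is purely hypothetical (it does not call any real generation API); it only shows how a user’s intent might be assembled into a single textual instruction of the kind given to tools like Midjourney.

```python
def build_prompt(subject, details, style):
    """Assemble a user's intent into one textual instruction (a 'prompt')."""
    # Order and phrasing matter: generative tools respond to the exact wording.
    parts = [subject] + list(details) + [f"in the style of {style}"]
    return ", ".join(parts)

# A prompt loosely inspired by the 'Space Opera Theatre' example above.
prompt = build_prompt(
    "a vast theatre stage",
    ["ornate decor", "figures in togas"],
    "classical painting",
)
print(prompt)
# a vast theatre stage, ornate decor, figures in togas, in the style of classical painting
```

In practice, prompt engineering also involves iterating on the instruction and observing how the tool responds, which is the know-how the box above refers to.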

A four-pronged action plan

For several years, the CNIL has been undertaking work aimed at anticipating and responding to the challenges posed by artificial intelligence, its different variations (classification, prediction, content generation, etc.) and its various use cases. Its new artificial intelligence service will be dedicated to these issues and will support the other CNIL services, which also encounter uses of these algorithms in many contexts.

Faced with the challenges to the protection of freedoms, the acceleration of AI development and recent news around generative AI, the regulation of artificial intelligence is a main focus of the CNIL’s action.

This regulation is structured around four objectives:

  • Understanding how AI systems work and their impact on people
  • Enabling and guiding the development of AI that respects personal data
  • Federating and supporting innovative players in the AI ecosystem in France and Europe
  • Auditing and controlling AI systems to protect people

1. Understanding how AI systems work and their impact on people

The innovative techniques used for the design and operation of AI tools raise new questions about data protection, in particular:

  • the fairness and transparency of the data processing underlying the operation of these tools;
  • the protection of publicly available data on the web against the use of scraping (harvesting) of data for the design of tools;
  • the protection of data transmitted by users when they use these tools, ranging from their collection (via an interface) to their possible re-use and processing through machine learning algorithms;
  • the consequences for individuals’ rights over their data, both with regard to data collected for the training of models and data that may be produced by those systems, such as content created in the case of generative AI;
  • the protection against bias and discrimination that may occur;
  • the unprecedented security challenges of these tools.
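On the scraping point specifically, one conventional signal a site can give is its robots.txt file (the Robots Exclusion Protocol). As a hedged illustration, Python’s standard `urllib.robotparser` module can check whether a crawler is allowed to fetch a URL; the rules and URLs below are invented for the example, and robots.txt is a convention rather than an access control or a legal basis.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for an example site: everything under
# /private/ is declared off-limits to all crawlers.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = RobotFileParser()
parser.parse(rules)

# A scraper collecting training data could consult these rules before fetching.
print(parser.can_fetch("*", "https://example.com/private/profile"))  # False
print(parser.can_fetch("*", "https://example.com/articles/news"))    # True
```

The data-protection questions the CNIL raises about scraping go well beyond such technical conventions, covering the lawfulness of collection and re-use of personal data found online.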

These aspects will be one of the priority areas of work for the Artificial Intelligence Service and the CNIL Digital Innovation Laboratory (LINC).