France has launched an ambitious national AI strategy called "AI for Humanity" with the goal of making the country a world leader in artificial intelligence. The strategy, initiated in 2018, focuses on four key objectives: reinforcing the AI ecosystem to attract top talent, developing an open data policy (especially in areas of French excellence like healthcare), creating a regulatory and financial framework to foster "AI champions", and promoting AI regulation and ethics to ensure high standards and public acceptability.
President Emmanuel Macron has emphasised the societal impacts of AI, stating that it will "raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences." The strategy builds on recommendations from the influential Villani report, "For a Meaningful Artificial Intelligence".
In November 2022, the government launched a second phase of the strategy running until 2025, with increased focus on trusted AI and generative AI technologies. However, some civil society groups have criticised the strategy's emphasis on ethical guidelines rather than binding regulations.
France has taken several steps to develop AI governance structures. The National Pilot Committee for Digital Ethics was created in 2019 to provide guidance on ethical issues related to AI and digital technologies. In 2023, the French data protection authority (CNIL) established a dedicated AI Department to address AI risks and prepare for upcoming EU regulations. The CNIL has published reports on the ethical issues raised by AI and partnered with the Défenseur des droits (the national ombudsman) to examine algorithmic discrimination. France is subject to EU data protection law, including the GDPR, and has updated its national legislation accordingly.
However, there are concerns about potential conflicts of interest, as a new government committee on generative AI includes members from major tech companies.
As an EU member state, France will be subject to the EU AI Act once it comes into force. Key aspects include a risk-based approach with stricter rules for high-risk AI systems, prohibitions on certain AI practices deemed unacceptable, new transparency requirements for AI systems interacting with humans, and a special regime for general-purpose AI models.
France has advocated for some controversial positions during negotiations, including pushing for lighter regulation of general-purpose AI systems to support European AI companies. This stance has drawn criticism after revelations that some "European champions" have significant Silicon Valley backing.
The country will also need to apply the EU's Digital Services Act, which regulates online platforms and includes provisions related to algorithmic transparency and content moderation.
France has a complex relationship with facial recognition technology (FRT). FRT is used on a voluntary, opt-in basis in some contexts, such as automated passport control at airports. A controversial digital ID program using facial recognition was delayed after legal challenges. In 2020, a court ruled against the use of facial recognition in high schools. The CNIL has issued guidance limiting facial recognition use by public authorities.
However, France recently passed a law allowing algorithmic video surveillance at the 2024 Paris Olympics, drawing criticism from human rights groups. The government maintains facial recognition will not be used, but concerns persist about expanded surveillance powers.
France has taken an active role in international AI initiatives. It has endorsed the OECD AI Principles and G20 AI Principles, co-founded the Global Partnership on Artificial Intelligence, and participates in EU and Council of Europe efforts on AI governance. France endorsed the Bletchley Declaration at the 2023 AI Safety Summit and will host the next AI Safety Summit in 2024. President Macron has called for using forums like the G7 and OECD to develop global AI regulations.
This country report is our interpretation and summary of the "CAIDP Artificial Intelligence & Democratic Values Index 2023". The full report is available at https://www.caidp.org/reports/aidv-2023/