

France: Discriminatory algorithm used by the social security agency must be stopped

16 Oct 2024

The French authorities must immediately stop the use of a discriminatory risk-scoring algorithm employed by the French Social Security Agency's National Family Allowance Fund (CNAF) to detect overpayments and errors in benefit payments, Amnesty International said today. On 15 October, Amnesty International and fourteen other coalition partners, led by La Quadrature du Net (LQDN), submitted a complaint to the Council of State, the highest administrative court in France, demanding that the risk-scoring algorithmic system used by CNAF be stopped.

"From the outset, the risk-scoring system used by CNAF treats individuals who experience marginalization – those with disabilities, lone parents who are mostly women, and those living in poverty – with suspicion. This system operates in direct opposition to human rights standards, violating the right to equality and non-discrimination and the right to privacy," said Agnès Callamard, Secretary General at Amnesty International.

In 2023, LQDN obtained access to versions of the algorithm's source code – the set of instructions written by programmers to create the software – thereby exposing the discriminatory nature of the system.

Since 2010, CNAF has used a risk-scoring algorithm to identify people who are potentially committing benefits fraud by receiving overpayments. The algorithm assigns a risk score between zero and one to every recipient of family and housing benefits: the closer the score is to one, the higher the probability of being flagged for investigation. Overall, 32 million people in France live in households that receive a benefit from CNAF. Their sensitive personal data, as well as that of their family members, is processed periodically and a risk score is assigned.

The criteria that increase a person's risk score include parameters which discriminate against vulnerable households: being on a low income, being unemployed, living in a disadvantaged neighbourhood, spending a significant portion of income on rent, and working while having a disability. The details of those flagged because of a high risk score are compiled into a list for further investigation by a fraud investigator.

"While authorities herald the rollout of algorithmic technologies in social protection systems as a way to increase efficiency and detect fraud and errors, in practice these systems flatten the realities of people's lives. They work as extensive data-mining tools that stigmatize marginalized groups and invade their privacy," said Agnès Callamard.

Amnesty International did not investigate specific cases of people flagged by the CNAF system. However, our investigations in the Netherlands and Serbia suggest that using AI-powered systems and automation in the public sector enables mass surveillance: the amount of data collected is disproportionate to the purported aim of the system. Moreover, Amnesty International's evidence has also exposed how many of these systems have been largely ineffective at doing what they purport to do, whether identifying fraud or errors in the benefits system.
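To make the mechanics described above concrete, the sketch below shows, in broad strokes, how a risk-scoring system of this kind typically works: household attributes are weighted, combined, and converted into a score between zero and one, and households above a threshold are placed on an investigation list. This is not CNAF's code; the feature names, weights, and threshold are hypothetical and chosen purely for illustration.

```python
# Purely illustrative sketch of a logistic risk-scoring model of the kind
# described above. NOT CNAF's code: feature names, weights and the threshold
# are hypothetical, used only to show how household attributes can be mapped
# to a score between zero and one.
import math
from dataclasses import dataclass


@dataclass
class Household:
    low_income: bool
    unemployed: bool
    disadvantaged_neighbourhood: bool
    high_rent_share: bool          # large share of income spent on rent
    working_with_disability: bool


# Hypothetical weights: any positive weight pushes the household's score
# closer to one, i.e. closer to being flagged for investigation.
WEIGHTS = {
    "low_income": 0.9,
    "unemployed": 0.7,
    "disadvantaged_neighbourhood": 0.5,
    "high_rent_share": 0.6,
    "working_with_disability": 0.8,
}
BIAS = -2.0        # baseline log-odds before any criterion applies
THRESHOLD = 0.5    # scores above this go on the investigation list


def risk_score(h: Household) -> float:
    """Sum the weights of the criteria a household meets, then squash into (0, 1)."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if getattr(h, name))
    return 1.0 / (1.0 + math.exp(-z))  # logistic function


def flag_for_investigation(households: list[Household]) -> list[tuple[Household, float]]:
    """Return households whose score exceeds the threshold, highest scores first."""
    scored = [(h, risk_score(h)) for h in households]
    return sorted((s for s in scored if s[1] >= THRESHOLD), key=lambda s: -s[1])


if __name__ == "__main__":
    examples = [
        Household(True, True, True, True, False),      # several criteria met
        Household(False, False, False, False, False),  # no criteria met
    ]
    for h, score in flag_for_investigation(examples):
        print(f"flagged: score={score:.2f} {h}")
```

Even in this toy version, the design choice is visible: because every weighted criterion corresponds to a marker of vulnerability (low income, unemployment, disability), the households most likely to cross the threshold are, by construction, the most marginalized ones.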