Algorithms Policed Welfare Systems For Years. Now They’re Under Fire for Bias

A coalition of human rights groups has today launched legal action against the French government over its use of algorithms to detect miscalculated welfare payments, alleging that they discriminate against disabled people and single mothers.

The algorithm, used since the 2010s, violates both European privacy rules and French anti-discrimination laws, argue the 15 groups involved in the case, including digital rights group La Quadrature du Net, Amnesty International, and Collectif Changer de Cap, a French group that campaigns against inequality.

“This is the first time that a public algorithm has been the subject of a legal challenge in France,” says Valérie Pras of Collectif Changer de Cap, adding she wants these types of algorithms to be banned. “Other social organizations in France use scoring algorithms to target the poor. If we succeed in getting [this] algorithm banned, the same will apply to the others.”

The French welfare agency, the CNAF, analyzes the personal data of more than 30 million people—those claiming government support as well as the people they live with and their family members—according to the litigation, filed with France’s top administrative court on October 15.

Using their personal information, the algorithm gives each person a score between 0 and 1 based on how likely it estimates they are to be receiving payments they are not entitled to, whether through fraud or by mistake.

France is one of many countries using algorithms to search for errors or fraud in its welfare system. Last year, WIRED’s three-part investigation with Lighthouse Reports into fraud-detection algorithms in European welfare systems focused on their use in the Netherlands, Denmark, and Serbia.

People with higher risk scores can then be subject to what welfare recipients across Europe have described as stressful and intrusive investigations, which can also involve their welfare payments being suspended.

“The processing, implemented by the CNAF, constitutes massive surveillance and a disproportionate attack on the right to privacy,” the legal documents on the French algorithm read. “The effects of this algorithmic processing particularly affect the most precarious people.”

The CNAF has not publicly shared the source code of the model it is currently using to detect welfare payments made in error. But based on analysis of older versions of the algorithm, suspected to be in use until 2020, La Quadrature du Net claims the model discriminates against marginalized groups by scoring people who have disabilities, for example, as higher risk than others.

“People receiving a social allowance reserved for people with disabilities [the Allocation Adulte Handicapé, or AAH] are directly targeted by a variable in the algorithm,” says Bastien Le Querrec, legal expert at La Quadrature du Net. “The risk score for people receiving AAH and who are working is increased.”

Because it also scores single-parent families higher than two-parent families, the groups argue it indirectly discriminates against single mothers, who are statistically more likely to be sole caregivers. “In the criteria for the 2014 version of the algorithm, the score for beneficiaries who have been divorced for less than 18 months is higher,” says Le Querrec.
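The CNAF has not published the current model’s code or weights, but a scoring system of the kind the groups describe can be pictured as a simple weighted model over personal attributes. The sketch below is purely illustrative: the feature names, weights, and logistic form are assumptions made for the sake of explanation, not the CNAF’s actual algorithm.

```python
# Illustrative sketch only: invented feature names and weights, NOT the CNAF model.
import math

# Hypothetical weights: positive values push the score toward 1 ("higher risk").
EXAMPLE_WEIGHTS = {
    "receives_aah_while_working": 0.9,   # disability allowance plus employment
    "single_parent_household": 0.7,
    "divorced_less_than_18_months": 0.5,
    "low_household_income": 0.4,
}
EXAMPLE_BIAS = -2.0


def risk_score(features: dict[str, bool]) -> float:
    """Map binary personal attributes to a score between 0 and 1 (logistic form)."""
    z = EXAMPLE_BIAS + sum(
        weight for name, weight in EXAMPLE_WEIGHTS.items() if features.get(name)
    )
    return 1 / (1 + math.exp(-z))


# A claimant matching several flagged criteria scores markedly higher than one
# matching none, so they are more likely to be singled out for investigation.
print(risk_score({"receives_aah_while_working": True,
                  "single_parent_household": True,
                  "divorced_less_than_18_months": True}))  # ≈ 0.52
print(risk_score({}))                                      # ≈ 0.12
```

In a model like this, any variable tied to a characteristic such as disability or family status feeds directly into the score, which is the mechanism the complainants say translates into discriminatory targeting.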

Changer de Cap says it has been approached by both single mothers and disabled people looking for help after being subjected to investigation.

The CNAF agency, which is in charge of distributing financial aid including housing, disability, and child benefits, did not immediately respond to a request for comment or to WIRED’s question about whether the algorithm currently in use had significantly changed since the 2014 version.

As in France, human rights groups in other European countries argue that welfare fraud-detection algorithms subject the lowest-income members of society to intense surveillance, often with profound consequences.

When tens of thousands of people in the Netherlands—many of them from the country’s Ghanaian community—were falsely accused of defrauding the child benefits system, they weren’t just ordered to repay the money the algorithm claimed they had stolen. Many of them say they were also left with spiraling debt and destroyed credit ratings.

The problem isn’t the way these algorithms are designed, but their use in the welfare system, says Soizic Pénicaud, a lecturer in AI policy at Sciences Po Paris, who previously worked for the French government on the transparency of public sector algorithms. “Using algorithms in the context of social policy comes with way more risks than it comes with benefits,” she says. “I haven’t seen any example in Europe or in the world in which these systems have been used with positive results.”

The case has ramifications beyond France. Welfare algorithms are expected to be an early test of how the EU’s new AI rules will be enforced once they take effect in February 2025. From then, “social scoring”—the use of AI systems to evaluate people’s behavior and then subject some of them to detrimental treatment—will be banned across the bloc.

“Many of these welfare systems that do this fraud detection may, in my opinion, be social scoring in practice,” says Matthias Spielkamp, cofounder of the nonprofit AlgorithmWatch. Yet public sector representatives are likely to disagree with that definition—and arguments about how to define these systems are likely to end up in court. “I think this is a very hard question,” says Spielkamp.
