The creation and adoption of surveillance systems based on Artificial Intelligence often seems to be far outpacing the speed at which cities and countries can legislate any kind of control over them. So it’s always an encouraging sign when a new, well-considered decision is rendered that puts the public good and human rights first.
One such decision is a Dutch court’s ruling that a welfare surveillance system violates human rights.
A Dutch court has ordered the immediate halt of an automated surveillance system for detecting welfare fraud because it violates human rights … The case was seen as an important legal challenge to the controversial but growing use by governments around the world of artificial intelligence (AI) and risk modelling in administering welfare benefits and other core services.
It’s especially encouraging since disenfranchised and minority populations usually bear the brunt of surveillance, with little recourse for correction and/or without the means to pursue legal options.
Deployed primarily in low-income neighbourhoods, it gathers government data previously held in separate silos, such as employment, personal debt and benefit records, and education and housing histories, then analyses it using a secret algorithm to identify which individuals might be at higher risk of committing benefit fraud.
One hopes the decision will have repercussions far outside the Netherlands.
Alston predicted the judgment would be “a wake-up call for politicians and others, not just in the Netherlands”. The special rapporteur presented a report to the UN general assembly in October on the emergence of the “digital welfare state” in countries around the globe, warning of the need “to alter course significantly and rapidly to avoid stumbling, zombie-like, into a digital welfare dystopia”.
Image credit: Rotterdam at night by Joël de Vriend.
Tags: netherlands, surveillance

from kottke.org https://ift.tt/379MzEh