Guest post: AI surveillance in prisons is a terrible idea, both technologically and ethically

University of Washington professors Rachael Tatman and Emily M. Bender. (UW Photos)

Editor’s Note: This is a guest post by University of Washington professors Emily M. Bender and Rachael Tatman on the use of AI in prisons.

As the Thomson Reuters Foundation reported on August 9, a panel of the United States House of Representatives asked the Department of Justice to explore the use of so-called “artificial intelligence” (AI) technology to monitor the telephone communications of incarcerated people, for the avowed purpose of preventing violent crime and suicide.

This is not a hypothetical exercise: LEO Technologies, a company “built for cops by cops”, already offers as a service the automated monitoring of phone calls between people in prison and their loved ones.

As linguists who study the development and application of speech recognition and other language technologies, including how they work (or fail to work) across different language varieties, we want to state clearly and firmly that this is a terrible idea, both technologically and ethically.

We oppose large-scale surveillance by any means, especially when it is used against vulnerable populations without their consent or ability to opt out. Even if it could be shown that such surveillance is in the best interest of incarcerated people and the communities they belong to – which we do not believe it can be – attempting to automate this process compounds the potential harms.

The main supposed benefit of the technology for incarcerated people, suicide prevention, is not achievable using a “keyword and phrase based” approach (which is how LEO Technologies describes its product). Even Facebook’s suicide prevention program, which has itself faced careful scrutiny from legal and ethical experts, found keywords to be an ineffective approach because they do not take context into account. In addition, humans frequently treat the output of computer programs as “objective” and therefore make decisions based on faulty information without realizing it is wrong.
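To make that failure mode concrete, here is a minimal sketch of keyword-and-phrase flagging. The keyword list and example transcripts are our own invented illustrations, not anything taken from LEO Technologies’ actual product:

```python
# A minimal, invented illustration of keyword-and-phrase flagging.
# Nothing here is drawn from any real monitoring product.

KEYWORDS = {"kill", "hurt", "end it all"}

def flag_call(transcript: str) -> bool:
    """Flag a call if any keyword appears, with no regard for context."""
    text = transcript.lower()
    return any(keyword in text for keyword in KEYWORDS)

calls = [
    "that movie was so funny it about killed me",  # harmless idiom
    "don't hurt yourself laughing",                # harmless banter
    "i don't see the point of going on",           # real distress, no keyword
]

for call in calls:
    print(flag_call(call), "-", call)
# The two harmless calls are flagged (True) and the genuinely worrying
# one is missed (False), because keyword matching carries no context.
```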

And even if the ability to prevent suicide were concrete and demonstrable, which it is not, this surveillance carries enormous potential for harm.

Automated transcription is a key part of these product offerings. The effectiveness of speech recognition systems depends on a close match between their training data and the input they receive in their deployment context, and for most modern speech recognition systems this means that the further a person’s speech is from the newscaster standard, the worse the system will be at transcribing it correctly.

Not only will such systems undoubtedly produce unreliable (while appearing quite objective) information, they will also fail most often for the very people the US justice system already fails most often.

A 2020 study, which included the Amazon service LEO Technologies uses for voice transcription, corroborated earlier findings that the word error rate for speakers of African American English was roughly double that for white speakers. Since African Americans are imprisoned at five times the rate of white Americans, these tools are profoundly unfit for this application and stand to widen already unacceptable racial disparities.
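For readers unfamiliar with the metric behind those findings: word error rate (WER) is the number of word substitutions, insertions, and deletions needed to turn a system’s transcript into the correct reference transcript, divided by the length of the reference. Below is a minimal sketch of the standard calculation, with invented example sentences:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with a standard dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Invented example of a transcript drifting from what was actually said.
reference  = "he finna go to the store after work"
hypothesis = "he fitting to go to the store after work"
print(f"WER: {word_error_rate(reference, hypothesis):.2f}")  # WER: 0.25
```

A doubled WER means twice as many of a person’s words are misrendered before any human or automated decision-maker ever reads the transcript.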

This surveillance, which sweeps in not only incarcerated people but also everyone they talk to, is an unnecessary violation of privacy. Adding so-called “AI” only makes matters worse: machines cannot accurately transcribe even warm, heartfelt home language, while lending a false sheen of “objectivity” to inaccurate transcripts. Should people whose relatives are incarcerated bear the burden of defending themselves against accusations based on erroneous transcripts of what they said? This invasion of privacy is all the more galling given that incarcerated people and their families often have to pay outrageous rates for phone calls in the first place.

We urge Congress and the DOJ to abandon this path and avoid incorporating automated prediction into our legal system. LEO Technologies claims to “shift the law enforcement paradigm from reactive to predictive,” a paradigm that seems at odds with a legal system where guilt must be proven.

And, finally, we urge everyone concerned to remain deeply skeptical of “AI” applications. This is particularly true when they have real impacts on people’s lives, and even more so when the people affected are, like incarcerated people, especially vulnerable.

