[community] Fwd: Please sign against race science
Jutta Treviranus
jtreviranus at ocadu.ca
Sun Jun 21 22:19:55 UTC 2020
This is an issue we have spoken out against in the past, but it appears that publishers continue to support this form of pseudoscience.
If anyone has questions about the harms caused by this form of AI, I would be happy to explain.
best,
Jutta
Begin forwarded message:
From: Meredith Whittaker <meredith at ainowinstitute.org>
Subject: Please sign against race science
Date: June 21, 2020 at 5:08:48 PM EDT
Dear all,
I'm working with an academic coalition calling on publishers to stop platforming race science dressed up as machine learning. The coalition has drafted a letter of dissent regarding this kind of research, which has reemerged in recent years in the AI research field and is based on repeatedly debunked race science.
If you would like to sign on to this letter<https://docs.google.com/document/d/1whaEKj1jAVpcam8AgGgJ9Bvgm1-BjaJecvr_1GVjr6g/edit?usp=sharing>, please fill out this form<https://docs.google.com/forms/d/e/1FAIpQLSdEYVIGq5040cim6b9VcgUbQKW_-W7BBj_qYascoLnFIgkMYw/viewform> by Monday, June 22nd. Feel free to distribute it among your networks.
However, please refrain from publishing this on social media until Tuesday, June 23rd, after we formally send the letter to Springer. At that point, we would love your help amplifying.
Our goal is to bolster abolitionist work within academia by holding publishing bodies and universities accountable for legitimizing research that serves the carceral state. Time and time again, scholars and activists have shown that this research perpetuates anti-Black racism and violence. We demand that all publishers stop publishing debunked and racist pseudoscience. Our first letter goes to Springer University Press.
On May 5th, 2020, Harrisburg University published a press release<https://web.archive.org/web/20200506013352/https://harrisburgu.edu/hu-facial-recognition-software-identifies-potential-criminals/> promoting an article titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” which will be published in an edited anthology by Springer Press later this year. In the press release, researchers claimed to “predict if someone is a criminal based solely on a picture of their face,” with “80 percent accuracy and with no racial bias.” This study isn't unique. Over the last several years there has been a steady stream of peer-reviewed publications that use machine learning to make similar neo-phrenological claims.
In light of the current global movement against racialized police brutality, several industry leaders have announced temporary moratoria on facial recognition sales to police (Amazon<https://blog.aboutamazon.com/policy/we-are-implementing-a-one-year-moratorium-on-police-use-of-rekognition>, Microsoft<https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/>, and IBM<https://www.ibm.com/blogs/policy/facial-recognition-susset-racial-justice-reforms/>). These announcements are a start, but there is much more work to be done to dismantle the tech-to-prison pipeline, especially in the academy.
We understand this is a deeply taxing time, especially for those at the front lines of the fight for racial justice. We hope this effort builds from and highlights the importance of this vital work.
In Solidarity,
Meredith
--
Meredith Whittaker
Co-director, AI Now Institute
Research Professor, NYU
Google Open Research Founder