- [Algorithms of Oppression: How Search Engines Reinforce Racism](https://www.amazon.com/Algorithms-Oppression-Search-Engines-Reinforce/dp/1479837245)
- https://github.com/linkedin/LiFT
- "The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning workflows."
- See the 2020 LinkedIn announcement article, below.
-
2021-05-14 Image classification algorithms at Apple, Google still push racist tropes
-
LinkedIn open-sources toolkit to measure AI model fairness
- https://venturebeat.com/2020/08/25/linkedin-open-sources-toolkit-to-measure-ai-model-fairness/
- "designed to enable the measurement of fairness in AI and machine learning workflows. The company says LiFT can be deployed during training and scoring to measure biases in training data sets, and to evaluate notions of fairness for models while detecting differences in their performance across subgroups."
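As a hypothetical illustration of the per-subgroup measurement the quote above describes (plain Scala, not the LiFT API; the `Scored` type and `parityGap` function are invented for this sketch), the following computes a demographic parity gap: the spread in positive-prediction rates across groups.

```scala
// Hypothetical sketch (not the LiFT API): measuring a demographic parity
// gap, i.e. the spread in positive-prediction rates across subgroups.
object ParityGapSketch {
  // One scored example: subgroup label plus the model's binary output.
  final case class Scored(group: String, predictedPositive: Boolean)

  // Fraction of examples the model scored positive.
  def positiveRate(rows: Seq[Scored]): Double =
    rows.count(_.predictedPositive).toDouble / rows.size

  // Max minus min positive rate over all groups; 0.0 means parity.
  def parityGap(rows: Seq[Scored]): Double = {
    val rates = rows.groupBy(_.group).values.map(positiveRate)
    rates.max - rates.min
  }

  def main(args: Array[String]): Unit = {
    val scored = Seq(
      Scored("A", predictedPositive = true),
      Scored("A", predictedPositive = true),
      Scored("A", predictedPositive = false),
      Scored("B", predictedPositive = true),
      Scored("B", predictedPositive = false),
      Scored("B", predictedPositive = false)
    )
    println(f"demographic parity gap = ${parityGap(scored)}%.3f") // 0.333
  }
}
```

Per its description, LiFT runs this kind of subgroup comparison at Spark scale and supports further fairness notions; the sketch shows only the core idea.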
-
Ball, Patrick
- Human Rights Data Analysis Group (HRDAG)
- https://hrdag.org/
- https://twitter.com/vm_wylbur
-
Gohdes, Anita
- Professor, Hertie School
- http://www.anitagohdes.net/
- https://twitter.com/ARGohdes
-
Lum, Kristian, PhD
- Lead Statistician at the Human Rights Data Analysis Group (HRDAG)
- https://hrdag.org/people/kristian-lum-phd/
- "Kristian’s research primarily focuses on examining the uses of machine learning in the criminal justice system and has concretely demonstrated the potential for machine learning-based predictive policing models to reinforce and, in some cases, amplify historical racial biases in law enforcement"
- https://twitter.com/KLdivergence
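The amplification dynamic quoted above can be shown with a toy feedback loop (all numbers and the dispatch rule here are invented for illustration; this is not HRDAG code): two districts have identical true incident rates, but patrols go wherever recorded history is highest, and only patrolled incidents enter the record.

```scala
// Toy feedback-loop simulation (invented for illustration, not HRDAG code).
// Two districts with EQUAL true incident rates. Police are dispatched where
// the *recorded* history is largest; recording only happens where police
// patrol, so an initial gap in the data grows instead of washing out.
object FeedbackLoopSketch {
  def main(args: Array[String]): Unit = {
    val trueRate = 10.0                // identical underlying incidents/day
    val recorded = Array(60.0, 40.0)   // district B starts under-recorded
    for (_ <- 1 to 20) {
      // Greedy dispatch: patrol wherever the record is highest...
      val target = if (recorded(0) >= recorded(1)) 0 else 1
      // ...and only patrolled incidents enter the record.
      recorded(target) += trueRate
    }
    println(f"records after 20 days: A=${recorded(0)}%.0f, B=${recorded(1)}%.0f")
    // Prints A=260, B=40: a gap that began in the data, not in the
    // underlying behavior, has been amplified by the predict-then-record loop.
  }
}
```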
-
Venkatasubramanian, Suresh, PhD
- School of Computing, University of Utah
- https://algorithmicfairness.wordpress.com/
- https://twitter.com/geomblog
-
Vishnoi, Nisheeth
- Professor of Computer Science, Yale University
- http://www.cs.yale.edu/homes/vishnoi/Home.html
- https://twitter.com/NisheethVishnoi
-
AlgorithmWatch
- https://algorithmwatch.org/
-
Human Rights Data Analysis Group
- https://hrdag.org/
-
Safety & Justice Challenge
-
Machine Bias (ProPublica)
- https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- https://www.documentcloud.org/documents/2840784-Practitioner-s-Guide-to-COMPAS-Core.html#document/p30/a296482
- https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say
- https://www.propublica.org/article/propublica-responds-to-companys-critique-of-machine-bias-story
- https://www.propublica.org/article/technical-response-to-northpointe
- https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
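The "mathematically inevitable" piece above hinges on the fact that a score can be calibrated within each group and still produce unequal false positive and false negative rates when base rates differ. A minimal sketch of the per-group error-rate comparison at the heart of the COMPAS debate (made-up rows, not ProPublica's data; `Row` and `rates` are invented names):

```scala
// Sketch of the per-group error-rate comparison (illustrative made-up data,
// not the COMPAS dataset): compute false positive and false negative rates
// separately for each group and look for a gap.
object ErrorRatesByGroup {
  // One defendant: group, model flagged high-risk?, actually reoffended?
  final case class Row(group: String, flagged: Boolean, reoffended: Boolean)

  // (FPR, FNR); assumes each group has both outcomes present.
  def rates(rows: Seq[Row]): (Double, Double) = {
    val fp = rows.count(r => r.flagged && !r.reoffended)
    val fn = rows.count(r => !r.flagged && r.reoffended)
    val negatives = rows.count(!_.reoffended)
    val positives = rows.count(_.reoffended)
    (fp.toDouble / negatives, fn.toDouble / positives)
  }

  def main(args: Array[String]): Unit = {
    val rows = Seq(
      Row("A", flagged = true,  reoffended = false),
      Row("A", flagged = true,  reoffended = true),
      Row("A", flagged = false, reoffended = true),
      Row("A", flagged = false, reoffended = false),
      Row("B", flagged = true,  reoffended = true),
      Row("B", flagged = false, reoffended = false),
      Row("B", flagged = false, reoffended = false),
      Row("B", flagged = false, reoffended = true)
    )
    for ((g, rs) <- rows.groupBy(_.group).toSeq.sortBy(_._1)) {
      val (fpr, fnr) = rates(rs)
      println(f"group $g: FPR=$fpr%.2f  FNR=$fnr%.2f")
    }
  }
}
```

Here group A is flagged incorrectly at a higher rate than group B (FPR 0.50 vs 0.00) even though both share the same FNR: the kind of disparity ProPublica reported between Black and white defendants.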
-
Algorithms, Platforms, and Ethnic Bias: An Integrative Essay
-
Pretrial Risk Assessment Tools
-
Algorithmic Justice League
- https://medium.com/mit-media-lab/the-algorithmic-justice-league-3cc4131c5148