stephen-andrew-lynch/ethics-lists

A big honking list of all of the codes of ethics/principles that are focused on data or engineering

An Ethics-List Update

This started as an attempt to keep an up-to-date list of all of the AI ethics-related resources on the web, but it fell by the wayside when I left LinkedIn to pursue other professional interests in late 2021. I am going to take a second run at updating the list in early 2025.


2021 "Understanding Responsible AI" Note

Starting in 2019, I sent journalists a primer email, an "understanding responsible AI 101" note, and got tons of positive feedback from folks who were trying to grok what was new and happening in the field. I eventually published a public version on LinkedIn in 2021. You can see the full post on LinkedIn here.

I've also recreated the post below, for your reading pleasure.

A Fast, Easy & Comprehensive introduction to Responsible AI for us nontechnical folks (j/k, you can only choose two)

By Stephen Lynch, August 17, 2021

I've spent five years working with the LinkedIn Engineering Data org, and for all five of those years I've been privileged to sit in on conversations about what can broadly be termed responsible AI or (more broadly still) responsible design: making sure our use of data and AI first avoids causing harm, and second helps make the world a fairer place.

Explaining this topic to other Comms professionals or reporters is complicated, because talking about AI is itself complicated… and then on top of that you're adding a discussion of LinkedIn's products, general notions of fairness/equity, discussions of intersectional harms, academic research in the field, etc. It's all really complicated!

To help interested folks bootstrap their understanding of fairness in AI, I've developed an email over the years that gives a quick overview of some of the AI fairness "greatest hits" in a format that lets you first skim my (very cursory) summaries, and then dig in where you're interested.

Disclaimer: I am not a responsible AI expert. I am just informed enough to write an article like this, but definitely ignorant enough to omit some key research and concepts. Please do not rely on me as your sole source of information on this subject!

But first (because I am a PR person after all), here's a quick survey of some of the work that's happened here at LinkedIn in this field:

Let's dive into some essential academic reading and primary sources:

First up, here are some of the relatively famous papers and studies in the field, all of which are worth taking the time to read in full, when you're ready.

  • Buolamwini & Gebru (2017-2018)--Gender Shades study: What is (or should be) one of the most widely known examples of algorithmic bias in production systems. Watching talks by Joy Buolamwini and Timnit Gebru about how facial recognition systems got to the point where they were much worse at identifying the faces of Black women is also very illuminating WRT topics of AI fairness in general. gendershades.org

  • Crawford & Calo (2016)--"There is a blind spot in AI research": Evaluating AI outside of its social context may seem like a simple idea, but it was one that was sorely lacking from the discussion of responsible AI prior to 2016: Nature

  • Angwin, Larson, Mattu & Kirchner, ProPublica's "Machine Bias" (2016) analysis of the COMPAS recidivism algorithm: Conversations about fairness in AI are inextricably linked to the way these systems are used in the real world, and this is an example of systemic problems in a much higher-stakes domain than 99.9% of AI systems: ProPublica

  • Karen Hao, MIT Technology Review (2019)--"This is how AI bias really happens—and why it's so hard to fix": The shortest and perhaps most comprehensive explanation of the different considerations that need to be addressed (and they all need to be addressed) when you're discussing issues of algorithmic bias. MIT Technology Review

There are also lots of very accessible papers that discuss the ways that many of the problems in fairness, AI bias, etc. are products of some fundamental assumptions made by computer scientists (rather than of outright nefarious motives or willful negligence).

  • Lipton & Steinheart (2018)--Troubling Trends in Machine Learning Scholarship: A very engaging description of some of the problems in the AI community. The section on "suitcase words" is probably required reading for any PR person or reporter who is interested in this subject (and something I grappled with in writing this post :)). arXiv

  • Hutchinson & Mitchell--50 Years of Test (Un)fairness: Lessons for Machine Learning: Another great, very readable survey of AI fairness issues. I also heartily recommend watching or listening to any talk by Margaret Mitchell, if you find this subject to be interesting. m-mitchell.com

  • Gebru, Wallach & Crawford--Datasheets for Datasets: There are lots of problems in AI, but what about solutions? Some of them aren't based on algorithms at all. Much like some of LinkedIn's internal efforts, Gebru et al. recommend including better & more meaningful metadata with AI training datasets (a tiny sketch of the idea follows this list). arXiv

  • Hardt, Price & Srebro--Equality of Opportunity in Supervised Learning / Pratik Gajane & Mykola Pechenizkiy--On Formalizing Fairness in Prediction with Machine Learning: What does "fair" even mean? It turns out that it can mean different things in different contexts, and if you want an AI system to be fair… then you really need to define it well (see the second sketch after this list). NIPS & arXiv

  • Keyes, Hutson & Durbin--A Mulching Proposal: A cautionary tale for anyone who thinks AI fairness can be treated as a PR or purely academic exercise; A Modest Proposal for the ICML set. ironholds.org
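
As promised above, here's a tiny sketch of the Datasheets for Datasets idea: pair every training dataset with structured metadata about where it came from and how it should (and shouldn't) be used. The class and field names below loosely paraphrase a few of the paper's question categories; they're my own illustrative assumptions, not an official schema or API from the paper.

```python
# A minimal sketch of the "datasheet" idea: structured metadata that travels
# with a dataset. Everything here is illustrative, not an official schema.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    motivation: str            # why was the dataset created?
    composition: str           # what do the instances represent?
    collection_process: str    # how was the data collected and sampled?
    recommended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

# A hypothetical datasheet for a face-image dataset.
sheet = Datasheet(
    name="example-face-images",
    motivation="Benchmark gender classification accuracy.",
    composition="Cropped face images with self-reported labels.",
    collection_process="Scraped from public web profiles.",
    recommended_uses=["benchmarking"],
    known_limitations=["skewed toward lighter-skinned subjects"],
)
print(sheet.known_limitations)  # surfaces caveats before anyone trains on it
```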

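And here's the second sketch: the "equality of opportunity" criterion from Hardt, Price & Srebro, which asks that a classifier's true positive rate be the same across groups. The helper function and the toy data below are my own illustrative assumptions; only the criterion itself comes from the paper.

```python
# Equality of opportunity (Hardt, Price & Srebro, 2016): among people whose
# true label is positive, each group should be predicted positive at the
# same rate. This helper just measures per-group true positive rates.
from collections import defaultdict

def true_positive_rates(y_true, y_pred, groups):
    """Return {group: true positive rate} for a binary classifier."""
    tp = defaultdict(int)   # correct positive predictions per group
    pos = defaultdict(int)  # actual positives per group
    for yt, yp, g in zip(y_true, y_pred, groups):
        if yt == 1:
            pos[g] += 1
            tp[g] += int(yp == 1)
    return {g: tp[g] / pos[g] for g in pos}

# Toy example: the model catches 2 of 3 positives in group "a"
# but only 1 of 3 in group "b" -- an equality-of-opportunity gap.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(true_positive_rates(y_true, y_pred, groups))  # roughly {'a': 0.67, 'b': 0.33}
```

None of this makes a system "fair" by itself; the papers' point is that you have to pick a definition like this one deliberately, because different fairness definitions generally can't all be satisfied at once.
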
Next, here are some non-academic reports/projects on applications of responsible AI in policy and the real-world implications of some of these technologies:

  • European Commission--The Age of Artificial Intelligence: Towards a European Strategy for Human-Centric Machines: ec.europa.eu

  • White House--Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights: obamawhitehouse.archives.gov

  • RAND--"Pitfalls of Predictive Policing": rand.org

Finally, if you've read this far, check out this talk by our own Guillaume Saint-Jacques on key considerations in responsible AI research at LinkedIn:

HBS Digital Initiative: Approaches to Fairness, Equity, and Inequality at LinkedIn | Guillaume Saint-Jacques
