
"Who is Accountable? The Data, Model or Regulations? A Review of Bias, Fairness, and Safety towards Responsible AI"

🌟 Overview

Welcome to the official repository for our Responsible AI survey paper. Here you will find a comprehensive list of the latest papers in the Responsible AI domain.

What is Responsible AI?

Responsible Artificial Intelligence (RAI) encompasses the principles, practices, and frameworks governing the design, deployment, and operation of AI systems to ensure they function in accordance with ethical standards, maintain transparency in their decision-making processes, demonstrate clear accountability mechanisms, and fundamentally align with societal values and human welfare objectives.
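
The fairness dimension of this definition can be made concrete with a single metric. The sketch below is a minimal, self-contained illustration of the demographic parity difference, one common group-fairness check; the predictions and group labels are hypothetical placeholders and do not come from any paper listed here.

```python
# Minimal sketch: demographic parity difference between two groups.
# All predictions and group labels below are hypothetical.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups "A" and "B"."""
    rates = {}
    for g in ("A", "B"):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates["A"] - rates["B"])

# Hypothetical binary decisions (1 = favorable outcome) and group membership.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(y_pred, group))  # |0.75 - 0.25| = 0.50
```

A value of 0 means both groups receive favorable predictions at the same rate; practical audits combine several such metrics, as the fairness surveys listed below discuss.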

This repository provides an overview of RAI papers in the following areas:

  • Policy and Governance
  • Explainable AI (XAI)
  • Privacy
  • Security
  • Safety
  • Ethical Implications
  • AI Welfare and Rights

1. Existing RAI Surveys

  • [2024/10] Responsible AI in the Global Context: Maturity Model and Survey Anka Reuel, Patrick Connolly [paper]
  • [2024/4] Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering Qinghua Lu, Liming Zhu [paper]
  • [2024/2] Toward Responsible Artificial Intelligence in Long-Term Care: A Scoping Review on Practical Approaches Dirk R M Lukkien, MSc, Henk Herman Nap, PhD [paper]
  • [2023/10] Responsible AI (RAI) Games and Ensembles Yash Gupta, Runtian Zhai, Arun Suggala, Pradeep Ravikumar [paper]

2. RAI Papers

  • Focus! Rating XAI Methods and Finding Biases Anna Arias-Duart, Ferran Parés [paper]
  • Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez [paper]
  • A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? Subrato Bharati, M. Rubaiyat Hossain Mondal [paper]
  • Fairness in Machine Learning: A Survey Simon Caton, Christian Haas [paper]
  • Efficient data representation by selecting prototypes with importance weights Gurumoorthy, K. S., Dhurandhar [paper]
  • TED: Teaching AI to explain its decisions Hind, M., Wei, D., Campbell [paper]
  • A unified approach to interpreting model predictions Lundberg, S. M., & Lee [paper]
  • Leveraging latent features for local explanations Luss, R., Chen, P. Y. [paper]
  • Contrastive Explanations Method with Monotonic Attribute Functions Luss et al. [paper]
  • Boolean Decision Rules via Column Generation Dash et al. [paper]
  • Socially Responsible AI Algorithms: Issues, Purposes, and Challenges L Cheng, KR Varshney [paper]
  • Explainable AI (XAI): Core ideas, techniques, and solutions R Dwivedi, D Dave [paper]
  • Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective Mousa Al-kfairy, Dheya Mustafa [paper]
  • Data cards: Purposeful and transparent dataset documentation for responsible AI Pushkarna, M., Zaldivar, A., & Kjartansson, O. [paper]
  • Healthsheet: Development of a transparency artifact for health datasets Rostamzadeh, N., Mincu, D., Roy, S., Smart, A., Wilcox, L., Pushkarna, M., & Heller, K. [paper]
  • Fairness through Experimentation: Inequality in A/B testing as an approach to responsible design Saint-Jacques, G., Sepehri, A., Li, N., & Perisic, I. [paper]
  • The energy and carbon footprint of training end-to-end speech recognizers Parcollet, T., & Ravanelli, M. [paper]
  • Carbon emissions and large neural network training Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L.M., Rothchild, D., So, D., Texier, M., and Dean, J. [paper]
  • Hidden technical debt in machine learning systems Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D. & Dennison, D. [paper]
  • Machine learning: The high interest credit card of technical debt Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., & Young, M. [paper]
  • Energy and policy considerations for deep learning in NLP Strubell, E., Ganesh, A., & McCallum, A. [paper]
  • Sustainable AI: AI for sustainability and the sustainability of AI van Wynsberghe, A. [paper]
  • Green Algorithms: Quantifying the carbon emissions of computation Lannelongue, L. et al. [paper]
  • Trust in Machine Learning Varshney, K. [paper]
  • Interpretable AI Thampi, A. [paper]
  • AI Fairness Mahoney, T., Varshney, K.R., Hind, M. [paper]
  • Practical Fairness Nielsen, A. [paper]
  • Hands-On Explainable AI (XAI) with Python Rothman, D. [paper]
  • AI and the Law Kilroy, K. [paper]
  • Responsible Machine Learning Hall, P., Gill, N., Cox, B. [paper]
  • Privacy-Preserving Machine Learning [paper]
  • Human-In-The-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI [paper]
  • Interpretable Machine Learning With Python: Learn to Build Interpretable High-Performance Models With Hands-On Real-World Examples [paper]
  • Responsible AI Hall, P., Chowdhury, R. [paper]
  • Quantifying the carbon emissions of machine learning Lacoste, A., Luccioni, A., Schmidt, V., & Dandres, T. [paper] (see the back-of-the-envelope sketch after this list)
  • Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models Li, P., Yang, J., Islam, M. A., & Ren, S. [paper]
  • Sustainable AI: Environmental implications, challenges, and opportunities Wu, C.-J., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., Chang, G., Aga, F., Huang, J., Bai, C., Gschwind, M., Gupta, A., Ott, M., Melnikov, A., Candido, S., Brooks, D., Chauhan, G., Lee, B., Lee, H.-H., & Hazelwood, K. [paper]
  • A Survey on Knowledge Graphs: Representation, Acquisition, and Applications IEEE Transactions on Neural Networks and Learning Systems [paper]
  • Chain-of-Thought Prompting Elicits Reasoning in Large Language Models NeurIPS [paper]
  • A Nutritional Label for Rankings SIGMOD’18 [paper]
  • Graph of Thoughts: Solving Elaborate Problems with Large Language Models arXiv, 2024 [paper]
  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models NeurIPS [paper]
  • Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models ICML 2024 [paper]
  • A Unified Approach to Interpreting Model Predictions NIPS 2017 [paper]
  • Sparse Autoencoders Find Highly Interpretable Features in Language Models arXiv, 2023 [paper]
  • Why Should I Trust You? Explaining the Predictions of Any Classifier KDD 2016 [paper]
  • Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation ICML 2024 [paper]
  • Evolving Generative AI: Entangling the Accountability Relationship Marc T. J. Elliott, Deepak P [paper]
  • Ted-spad: Temporal distinctiveness for self-supervised privacy-preservation for video anomaly detection J Fioresi, IR Dave, M Shah [paper]
  • FAIR Enough: Develop and Assess a FAIR-Compliant Dataset for Large Language Model Training? S Raza, S Ghuge, C Ding, E Dolatabadi, D Pandya [paper]
  • Exploring Memorization and Copyright Violation in Frontier LLMs: A Study of the New York Times v. OpenAI 2023 Lawsuit J Freeman, C Rippe, E Debenedetti, M Andriushchenko [paper]
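
Several of the sustainability entries above (e.g., Lacoste et al., Strubell et al.) estimate training emissions from energy use and grid carbon intensity. The back-of-the-envelope sketch below shows only that basic arithmetic; every constant is an assumed placeholder, not a figure from any cited paper.

```python
# Rough, illustrative estimate of training-run carbon emissions.
# All numbers are hypothetical placeholders, not measurements from any
# of the works cited above.

gpu_power_kw = 0.3      # average draw of one GPU, in kW (assumed)
num_gpus = 8            # cluster size (assumed)
hours = 72              # total training time (assumed)
pue = 1.5               # data-center Power Usage Effectiveness (assumed)
carbon_intensity = 0.4  # kg CO2e per kWh of grid electricity (assumed)

energy_kwh = gpu_power_kw * num_gpus * hours * pue
emissions_kg = energy_kwh * carbon_intensity
print(f"Energy: {energy_kwh:.0f} kWh, emissions: {emissions_kg:.0f} kg CO2e")
```

Real estimates depend heavily on hardware, utilization, and the local grid; the cited papers discuss how to measure each factor rather than assume it.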

3. Datasets

  • AI Risk Database [Link]
  • AI Risk Repository [Link]
  • ARC AGI [Link]
  • Common Corpus [Link]
  • An ImageNet replacement for self-supervised pretraining without humans [Link]
  • Huggingface Data Sets [Link]
  • The Stack [Link]

4. Contributing Authors

  • SHAINA RAZA∗, Vector Institute, Canada
  • RIZWAN QURESHI∗, Center for Research in Computer Vision, The University of Central Florida, USA
  • ANAM ZAHID, Department of Computer Science, Information Technology University of the Punjab, Pakistan
  • JOE FIORESI, Center for Research in Computer Vision, The University of Central Florida, USA
  • MAGED SHOMAN, University of Tennessee; Oak Ridge National Lab, USA
  • FERHAT SADAK, Department of Mechanical Engineering, Bartin University, Türkiye
  • MUHAMMAD SAEED, Saarland University, Germany
  • RANJAN SAPKOTA, Washington State University, USA
  • ADITYA JAIN, University of Texas at Austin, USA
  • ANAS ZAFAR, Fast School of Computing, National University of Computer and Emerging Sciences, Pakistan
  • MUNEEB UL HASSAN, School of Information Technology, Deakin University, Australia
  • FAHAD SHAHBAZ KHAN, Mohammed Bin Zayed University of Artificial Intelligence, UAE
  • MUBARAK SHAH, Center for Research in Computer Vision, The University of Central Florida, USA

∗Equal contributors
