This repository contains research and code to guide the responsible and ethical development of large language models (LLMs). It is a living resource that will be actively updated. It includes:
- Research Papers: Latest papers on safety, ethics, and alignment for LLMs.
- Coding Guidelines: Best practices for building guardrails into LLM systems.
- Conversational Guidelines: Recommendations for safer LLM chatbots.
- Governance Frameworks: Proposed oversight and control methods for LLMs.
- Contributing Guide: Instructions for contributing.
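As a rough illustration of the guardrail pattern the coding guidelines cover, here is a minimal sketch of a pre-generation input check. The function names and the keyword blocklist are hypothetical examples, not part of this repository; production guardrails typically use trained classifiers and policy engines rather than keyword lists.

```python
import re

# Hypothetical blocklist for illustration only; real guardrails rely on
# classifiers, policies, and review processes rather than keyword matching.
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to build a bomb\b", re.IGNORECASE),
    re.compile(r"\bcredit card numbers?\b", re.IGNORECASE),
]

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt passes this (toy) guardrail."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

def guarded_generate(prompt: str, generate) -> str:
    """Wrap an LLM call `generate` with a check before generation runs."""
    if not check_prompt(prompt):
        return "Sorry, I can't help with that request."
    return generate(prompt)
```

The key design point is that the check runs before any model call, so disallowed requests are refused cheaply and deterministically; the same wrapper shape also accommodates post-generation output checks.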
This repository aims to promote the beneficial development of LLMs aligned with human values. We believe that groundbreaking AI such as LLMs demands great responsibility. Our goal is to aggregate knowledge and code that help build LLMs that are safe, ethical, and reliable.
We welcome contributions from researchers and developers! Please see the contributing guide for details on how to submit research papers, code samples, or other relevant content.
Together we can build AI that benefits all of humanity. We encourage open collaboration towards this goal.