Update Moderation Policy to handle LLM "contributions" #942
Conversation
Love the policy update, nobody wants slop.
LGTM
LGTM with the suggested edits.
Co-authored-by: Jordan Harband <[email protected]>
Pinging @nodejs/tsc because of this at the bottom of the current policy:

> 72 hours have elapsed, merging
It seems likely that the volume of LLM-generated PRs and comments is going to increase, and a lot of it is just noise tbh, bringing no value to the contributor or the project, and wasting reviewers' time. Since those do not exactly fit the "bot" or "spam" categories, I suggest we create a new category for them, slightly more forgiving since it's probably fair to assume there's an actual human behind them.
I don't know if it needs to be explicitly stated, but using an LLM is not the problem per se, as long as the contribution has value and the contributor is able to back it up – so basically "if you act like a bot, don't complain about being treated as one".