
Dataset type conversion utilities #6

Open
9 tasks done
qgallouedec opened this issue Jan 6, 2025 · 1 comment
Labels
🐛 bug Something isn't working 📚 documentation Improvements or additions to documentation

Comments

@qgallouedec

System Info

A few things in the dataset type conversions at https://huggingface.co/docs/trl/main/en/dataset_formats#utilities-for-converting-dataset-types are not quite correct:

  1. When converting a preference dataset to an unpaired preference dataset with unpair_preference_dataset(), we turn a relative ranking into absolute labels. In a preference dataset, the "chosen" and "rejected" completions are only ranked relative to each other: both can be good, or both bad, with one merely slightly better or worse. See the example below.
    So a preference dataset should not be converted to an unpaired preference dataset without consulting absolute ratings, e.g. from a reward model.
    Suggestion: At least add a warning to the documentation and conversion code, or even remove the utility.
  2. When converting from unpaired preference or stepwise supervision to any unlabeled type such as language modeling or prompt-completion, only the good (label=True) examples should be used, just as the conversion from a preference dataset uses only the chosen completions.
    Suggestion: This can easily be fixed in the example conversion code.
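Suggestion 2 could be sketched like this (a pure-Python stand-in working on plain row dicts; the function name `unpaired_to_prompt_completion` is illustrative, not TRL's actual conversion code):

```python
# Hypothetical sketch: converting unpaired preference rows to
# prompt-completion should keep only the label=True examples
# (column names "prompt", "completion", "label" as in TRL's
# dataset format docs).
def unpaired_to_prompt_completion(rows):
    """Drop label=False rows, then drop the label column."""
    return [
        {"prompt": r["prompt"], "completion": r["completion"]}
        for r in rows
        if r["label"]
    ]

rows = [
    {"prompt": "The sky is", "completion": " blue.", "label": True},
    {"prompt": "The sky is", "completion": " above.", "label": False},
]
print(unpaired_to_prompt_completion(rows))
# [{'prompt': 'The sky is', 'completion': ' blue.'}]
```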

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder
  • My own task or dataset (give details below)

Reproduction

from datasets import Dataset
from trl import unpair_preference_dataset

dataset_dict = {
    "prompt": ["The sky is", "The sun is"],
    "chosen": [" blue.", " in our solar system"],
    "rejected": [" above.", " in the sky."],
}
dataset = Dataset.from_dict(dataset_dict)
dataset = unpair_preference_dataset(dataset)
dataset[1]

outputs, e.g.:

{'prompt': 'The sky is', 'completion': ' above.', 'label': False}

Expected behavior

{'prompt': 'The sky is', 'completion': ' blue.', 'label': True}
{'prompt': 'The sky is', 'completion': ' above.', 'label': True}
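The behavior described for suggestion 1 could be sketched as follows. This is a hypothetical illustration, not TRL API: `unpair_with_absolute_labels`, the `score` callable (a stand-in for a reward model), and the threshold are all assumptions.

```python
# Hypothetical sketch: unpair a preference dataset using absolute
# scores (e.g. from a reward model) rather than labeling every
# "chosen" completion True and every "rejected" one False.
def unpair_with_absolute_labels(rows, score, threshold):
    out = []
    for row in rows:
        for completion in (row["chosen"], row["rejected"]):
            out.append({
                "prompt": row["prompt"],
                "completion": completion,
                "label": score(row["prompt"], completion) >= threshold,
            })
    return out

# Toy stand-in for a reward model's absolute scores.
toy_scores = {" blue.": 0.9, " above.": 0.6}
score = lambda prompt, completion: toy_scores[completion]

rows = [{"prompt": "The sky is", "chosen": " blue.", "rejected": " above."}]
print(unpair_with_absolute_labels(rows, score, threshold=0.5))
# Both completions end up label=True, since both are acceptable:
# [{'prompt': 'The sky is', 'completion': ' blue.', 'label': True},
#  {'prompt': 'The sky is', 'completion': ' above.', 'label': True}]
```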

Checklist

  • I have checked that my issue isn't already filed (see open issues)
  • I have included my system information
  • Any code provided is minimal, complete, and reproducible (more on MREs)
  • Any code provided is properly formatted in code blocks, (no screenshot, more on code blocks)
  • Any traceback provided is complete
@github-actions github-actions bot added 🐛 bug Something isn't working 📚 documentation Improvements or additions to documentation labels Jan 6, 2025
@August-murr
Owner

Oops, I didn't realize this fork doesn't have the same labels as TRL.
I'll add them and we'll try again.
