
v0.0.4

Pre-release
@alekseymalakhov11 released this 15 Oct 11:44

Turbo-Alignment v0.0.4 Release Notes 🚀

What's New 😎

  • 🚀 Performance Optimizations

    • Streamlined the processing of textual data by introducing Liger Kernels for Gemma2, significantly improving both computation time and memory usage.
    • Switched the RM Trainer to a single concatenated forward pass instead of two, yielding a more efficient training cycle, especially with FSDP or DeepSpeed.
  • 🔄 Training Strategy Enhancements

    • Added a precomputed margin field to the pair-preference dataset, facilitating algorithms like SLiC-HF and adding support for DPO with margin (see the sketch after this list).
    • Included a new feature enabling RM-Sampling to utilize multiple GPUs, accelerating inference.
  • ✌️ New Losses And Metrics For Preference Optimization

    • Added APO-Zero and APO-Down losses, enriching the toolbox for preference optimization.
    • Added ASFT loss, an effective approach that better aligns LLMs by optimizing the absolute likelihood of each response.
    • Integrated the compute_flips metric into DPOTrainer, providing more nuanced insight into model performance.
  • 🔠 More Flexible Settings For SpecialTokensSetter

    • Introduced SpecialTokensSetting to better control all new tokens added to the tokenizer and the model's embedding layer.
  • 📀 Enhanced Dataset Handling

    • Added the ability to use not only bot but also assistant replies in datasets.
    • Implemented functionality to skip system prompts in chat datasets.
  • 🧹 New Logging Features

    • Added the ability to use ClearML logging.
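
For the precomputed-margin item above, here is a minimal PyTorch sketch of one common DPO-with-margin formulation: a per-pair margin read from the dataset shifts the preference logit before the sigmoid. The function name and signature are illustrative assumptions, not Turbo-Alignment's actual API.

import torch
import torch.nn.functional as F

def dpo_loss_with_margin(
    policy_chosen_logps: torch.Tensor,
    policy_rejected_logps: torch.Tensor,
    ref_chosen_logps: torch.Tensor,
    ref_rejected_logps: torch.Tensor,
    margin: torch.Tensor,  # hypothetical: per-pair precomputed margin from the dataset
    beta: float = 0.1,
) -> torch.Tensor:
    # Log-ratios of policy vs. reference probabilities for each response.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Subtracting the margin demands that the chosen response beat the
    # rejected one by at least that amount, as in SLiC-HF-style objectives.
    logits = beta * (chosen_logratios - rejected_logratios) - margin
    return -F.logsigmoid(logits).mean()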

Documentation and Tutorials 📚

  • 📘 Updated README, Docs, and Tutorials
    • Updated the README, documentation, and tutorials to provide clearer guidance to users, including a newly added citation section for academic referencing.

Improvements and Fixes 🛠️

  • ⚙️ Dependency Updates

    • Updated the versions of transformers, accelerate, and vllm to support modern architectures like Llama 3.1 and Gemma2.
    • Enhanced project management with an updated poetry version, simplifying dependency resolution and packaging.
    • Removed AllenAI dependencies for a more streamlined package with fewer third-party requirements.
  • 🧠 Corrected ORPO Loss

    • Added the missing NLL loss term to ORPOLoss (see the sketch after this list).
  • πŸ™ vLLM Inference With Adapters

    • Added the ability to use PEFT models with vLLM (see the sketch after this list).
  • 🥉 Fixed DeepSpeed Stage 3 Problems

    • Added the ability to train AutoModelForSequenceClassification with DeepSpeed Stage 3.
  • 🐞 Tokenization Bugs

    • Addressed an error that caused vLLM to incorrectly use two tokens instead of one.
    • Implemented a fix for the keep_end truncation strategy in the chat dataset, ensuring text samples are correctly truncated.
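
For the ORPO fix above, here is a minimal sketch of the ORPO objective showing where the NLL term enters, assuming the standard formulation (an SFT loss on the chosen response plus an odds-ratio term over length-normalized log-probabilities); the names are illustrative, not the library's internals.

import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, lam: float = 0.1):
    # chosen_logps / rejected_logps: mean per-token log-probabilities of
    # each response under the policy, so exp(.) lies in (0, 1).
    nll = -chosen_logps.mean()  # the SFT/NLL term that the fix restores
    # log odds(y) = log p - log(1 - p), computed stably in log space.
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
    odds_ratio_term = -F.logsigmoid(log_odds_chosen - log_odds_rejected).mean()
    return nll + lam * odds_ratio_term

And for the vLLM-with-adapters item, a sketch of the underlying vLLM LoRA API that serving PEFT adapters builds on; the model id and adapter path are placeholders.

from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Load the base model with LoRA adapter support enabled.
llm = LLM(model="meta-llama/Meta-Llama-3.1-8B-Instruct", enable_lora=True)

outputs = llm.generate(
    ["What is preference optimization?"],
    SamplingParams(max_tokens=128),
    # Point vLLM at a trained PEFT/LoRA adapter directory; the adapter
    # name and integer id are arbitrary identifiers.
    lora_request=LoRARequest("my_adapter", 1, "/path/to/peft_adapter"),
)
print(outputs[0].outputs[0].text)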

Full Changelog 📝

You can view the complete list of changes in this release by visiting the changelog on GitHub: Full Changelog.

We hope you enjoy these updates! As always, we welcome your feedback and contributions to make Turbo-Alignment even better.

Don't forget to star ⭐️ the repo if you find it useful, and watch it for future updates.

Thank you for supporting Turbo-Alignment! 🙌


Need help or have questions? Reach out to us on GitHub Issues, and we'll be there to support you.


Installation

Upgrade to the latest Turbo-Alignment release with:

pip install turbo-alignment==0.0.4
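
To confirm the upgrade took effect, a quick check with the Python standard library (the distribution name matches the pip package name):

from importlib.metadata import version

print(version("turbo-alignment"))  # expected output: 0.0.4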

— Turbo-Alignment Team 🤫