Turbo-Alignment v0.0.4 Release Notes (Pre-release)
What's New
Performance Optimizations
- Streamlined text processing by introducing Liger Kernels for Gemma2, significantly improving both computation time and memory usage.
- Switched the RM Trainer to a single concatenated forward pass instead of two separate ones, giving a more efficient training cycle, especially with FSDP or DeepSpeed (see the sketch below).
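To illustrate the idea, here is a minimal sketch of a concatenated forward pass for reward-model training; the function, tensor names, and the Bradley-Terry loss are illustrative assumptions, not Turbo-Alignment's actual trainer code:

```python
import torch
import torch.nn.functional as F

def concatenated_rm_forward(model, chosen_ids, rejected_ids, chosen_mask, rejected_mask):
    # Stack chosen and rejected examples into one batch so the model
    # (and FSDP/DeepSpeed parameter gathering) runs only once.
    input_ids = torch.cat([chosen_ids, rejected_ids], dim=0)
    attention_mask = torch.cat([chosen_mask, rejected_mask], dim=0)
    rewards = model(input_ids=input_ids, attention_mask=attention_mask).logits.squeeze(-1)
    chosen_rewards, rejected_rewards = rewards.chunk(2, dim=0)
    # Standard Bradley-Terry pairwise loss over the split rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

With sharded training (FSDP, DeepSpeed ZeRO), each forward pass triggers a round of parameter gathering, so halving the number of passes is where most of the savings come from.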
Training Strategy Enhancements
- Added a precomputed margin field to the pair-preference dataset, making it easier to apply algorithms such as SLiC-HF, with added support for DPO with margin (a sketch of the margin-aware loss follows this list).
- Added support for running RM-Sampling on multiple GPUs, accelerating inference.
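As a sketch of how a precomputed margin enters the DPO objective (following the common margin/offset formulation; function and argument names are illustrative, not the library's API):

```python
import torch
import torch.nn.functional as F

def dpo_loss_with_margin(policy_chosen_logps, policy_rejected_logps,
                         ref_chosen_logps, ref_rejected_logps,
                         margin, beta=0.1):
    # Implicit rewards under the DPO parameterization.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # The precomputed per-pair margin raises the bar: pairs with a larger
    # margin must be separated by a larger implicit-reward gap.
    return -F.logsigmoid(chosen_rewards - rejected_rewards - margin).mean()
```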
New Losses and Metrics for Preference Optimization
- Added APO-Zero and APO-Down losses, enriching the toolbox for preference optimization (a sketch of APO-Zero follows this list).
- Added the ASFT loss, an effective approach that better aligns LLMs by optimizing the absolute likelihood of each response.
- Integrated the compute_flips metric into DPOTrainer, providing more nuanced insight into model performance.
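For orientation, a minimal sketch of the APO-Zero objective as described in the Anchored Preference Optimization paper; the log-ratios are policy-minus-reference log-probabilities, and all names here are illustrative assumptions:

```python
import torch

def apo_zero_loss(chosen_logratios, rejected_logratios, beta=0.1):
    # APO-Zero anchors each side independently against the reference model:
    # push chosen responses up and rejected responses down, rather than
    # only separating them from each other as in DPO.
    loss_chosen = 1 - torch.sigmoid(beta * chosen_logratios)
    loss_rejected = torch.sigmoid(beta * rejected_logratios)
    return (loss_chosen + loss_rejected).mean()
```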
More Flexible Settings for SpecialTokensSetter
- Introduced SpecialTokensSetting to better control the new tokens added to the tokenizer and to the model's embedding layer (the snippet below shows the underlying Hugging Face mechanics).
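The underlying mechanics rely on standard Hugging Face calls; a minimal sketch (the model name and token strings are illustrative, and this is not Turbo-Alignment's own API):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

# Register new special tokens with the tokenizer...
num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<bot>", "<sep>"]})

# ...and grow the embedding layer so the new token ids have rows to look up.
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```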
Enhanced Dataset Handling
- Added the ability to use not only bot replies but also assistant replies in datasets.
- Implemented functionality to skip system prompts in chat datasets (a hypothetical record layout is sketched after this list).
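To make the two changes concrete, here is a hypothetical chat-dataset record; the field names are illustrative, not Turbo-Alignment's exact schema:

```python
record = {
    "id": "0",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},  # can now be skipped
        {"role": "user", "content": "Hi!"},
        {"role": "assistant", "content": "Hello! How can I help?"},  # assistant role now accepted
    ],
}
```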
New Logging Features
- Added support for ClearML logging (a generic ClearML snippet follows).
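On the ClearML side, experiment tracking starts from the library's standard entry point; this is generic ClearML usage with illustrative names, not Turbo-Alignment configuration:

```python
from clearml import Task

# Creates (or resumes) an experiment in the ClearML web UI;
# scalars logged during training are attached to this task.
task = Task.init(project_name="turbo-alignment", task_name="dpo-run")
```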
Documentation and Tutorials
Updated README, Docs, and Tutorials
- Updated the README, documentation, and tutorials to provide clearer guidance, including a newly added citation section for academic referencing.
Improvements and Fixes
Dependencies Updates
- Updated the versions of transformers, accelerate, and vllm to support modern architectures such as Llama 3.1 and Gemma 2.
- Updated the Poetry version used for project management, simplifying dependency resolution and packaging.
- Removed the AllenAI dependencies for a leaner package with fewer third-party requirements.
Corrected ORPO Loss
- Added the previously missing NLL loss term to ORPOLoss (see the sketch below).
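For context, ORPO combines a plain NLL (SFT) term on the chosen response with an odds-ratio term; a minimal sketch under that formulation, with illustrative names and length-normalized log-probabilities as in the ORPO paper:

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, lam=0.1):
    # log odds(p) = log p - log(1 - p), computed from log-probabilities.
    def log_odds(logp):
        return logp - torch.log1p(-torch.exp(logp))

    log_odds_ratio = log_odds(chosen_logps) - log_odds(rejected_logps)
    nll = -chosen_logps.mean()                    # the previously missing SFT term
    odds_ratio = -F.logsigmoid(log_odds_ratio).mean()
    return nll + lam * odds_ratio
```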
vLLM Inference with Adapters
- Added the ability to run PEFT (adapter) models with vLLM (see the example below).
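In plain vLLM, serving a LoRA adapter looks roughly like this; the base model and adapter path are illustrative, and this is generic vLLM usage rather than Turbo-Alignment's inference entry point:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="google/gemma-2-2b", enable_lora=True)
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(
    ["Write a haiku about alignment."],
    params,
    # (adapter name, unique integer id, path to the PEFT adapter directory)
    lora_request=LoRARequest("my-adapter", 1, "/path/to/adapter"),
)
print(outputs[0].outputs[0].text)
```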
Fixed DeepSpeed Stage 3 Problems
- Added the ability to train AutoModelForSequenceClassification models with DeepSpeed Stage 3 (a generic ZeRO-3 config fragment follows).
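For reference, enabling ZeRO Stage 3 usually comes down to the DeepSpeed config; this is a generic fragment (e.g. passed via TrainingArguments(deepspeed=ds_config) in transformers), not Turbo-Alignment's shipped config:

```python
ds_config = {
    "zero_optimization": {
        "stage": 3,  # shard parameters, gradients, and optimizer states
        "stage3_gather_16bit_weights_on_model_save": True,  # reassemble weights at save time
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
}
```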
Tokenization Bugs
- Fixed an error that caused vLLM to use two tokens where one was expected.
- Fixed the keep_end truncation strategy in the chat dataset, ensuring text samples are correctly truncated (see the sketch below).
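A keep_end strategy keeps the most recent tokens of a conversation; a minimal sketch of the intended behavior, not the library's implementation:

```python
def truncate_keep_end(token_ids: list[int], max_length: int) -> list[int]:
    # Keep the last max_length tokens so the most recent turns survive.
    if len(token_ids) <= max_length:
        return token_ids
    return token_ids[-max_length:]
```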
Full Changelog
The complete list of changes in this release is available on GitHub: Full Changelog.
We hope you enjoy these updates! As always, we welcome your feedback and contributions to make Turbo-Alignment even better.
Don't forget to star the repo if you find it useful, and watch it for future updates.
Thank you for supporting Turbo-Alignment!
Need help or have questions? Reach out to us on GitHub Issues, and we'll be there to support you.
Installation
Upgrade to the latest Turbo-Alignment release with:
pip install turbo-alignment==0.0.4
– Turbo-Alignment Team