Added adaptation code, conversion code, and scripts for Qwen models. #19

Open · wants to merge 1 commit into main
Conversation

ZHAOTING

This PR enables pre-training and continual pre-training of Qwen models.

We used this code to continually pre-train Qwen-14B on 66B tokens of Japanese data, producing Nekomata-14B. The work is part of the AWS LLM development support program in Japan, so we would like to release the code in addition to the already released model weights.

Changes include:

  • Code for converting model weights between HF and NeMo checkpoints (see the conversion sketch after this list).
  • A new config option that enables the QKV bias independently of other bias terms (see the projection sketch after this list).
  • Minor model code changes that reflect the above option.
  • Passing trust_remote_code=True when loading AutoTokenizer for the Qwen tokenizer (see the tokenizer sketch after this list).
  • Adding transformers>=4.32.0 and tiktoken to requirements.txt.
  • Training config files and scripts.
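As an illustration of the conversion step, the sketch below loads an HF Qwen checkpoint and renames parameters before saving. The target parameter names and the output path are hypothetical, and the actual script in this PR also has to handle QKV layout and the NeMo checkpoint format rather than a plain torch.save:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the HF Qwen checkpoint (custom modelling code, hence trust_remote_code).
hf_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B", trust_remote_code=True)
hf_state = hf_model.state_dict()

# Illustrative prefix remapping only; the NeMo-side name below is hypothetical,
# and the real script also has to fuse/split QKV weights and handle the
# embedding and output layers.
RENAME = {
    "transformer.wte.weight": "model.embedding.word_embeddings.weight",
}

nemo_state = {RENAME.get(name, name): tensor for name, tensor in hf_state.items()}
torch.save(nemo_state, "qwen_weights_for_nemo.pt")
```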
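The dedicated QKV-bias switch is needed because Qwen attaches bias terms to the query/key/value projections only, so a single global bias flag cannot describe it. A minimal sketch of the idea; the option name add_qkv_bias and the module layout are illustrative assumptions, not the exact code in this PR:

```python
import torch.nn as nn


class QKVProjection(nn.Module):
    """Fused QKV projection whose bias can be toggled independently of other linears."""

    def __init__(self, hidden_size: int, bias: bool, add_qkv_bias: bool):
        super().__init__()
        # Qwen-style models use bias on QKV only, so the effective flag is
        # "global bias OR the dedicated QKV-bias option".
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size, bias=bias or add_qkv_bias)
        # The output projection follows only the global bias setting.
        self.out = nn.Linear(hidden_size, hidden_size, bias=bias)


# Qwen-like configuration: no global bias, but bias on the QKV projection.
proj = QKVProjection(hidden_size=5120, bias=False, add_qkv_bias=True)
```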
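Because the Qwen tokenizer is a custom, tiktoken-based implementation shipped inside the model repository, it can only be loaded with remote code enabled. A minimal tokenizer sketch, assuming the Qwen/Qwen-14B Hub repository:

```python
from transformers import AutoTokenizer

# Qwen's tokenizer is distributed with the model repo as custom code,
# so trust_remote_code=True is required to load it.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-14B", trust_remote_code=True)

ids = tokenizer("こんにちは、世界")["input_ids"]
print(ids)
```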

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
