Add Llama 3.1 Instruct Chat template, Ensure Correctness of Llama 3, Small Refactor of Chat Template registration for shareGPT #1903
+41
−45
Description
- Add the Llama 3.1 Instruct chat template, which differs from the existing Llama 3 template. To use it, specify `chat_template: llama31` in the config.
- Ensure correctness of the Llama 3 chat template by removing the default "You are a helpful assistant" system message.
- Refactor some redundancy out of the chat template registration.
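For context, enabling the new template in an axolotl-style YAML config might look like the sketch below; everything other than the `chat_template` key is illustrative and not taken from this PR:

```yaml
# Illustrative config fragment; only chat_template: llama31 is from this PR.
chat_template: llama31
datasets:
  - path: ./data/conversations.jsonl  # hypothetical dataset path
    type: sharegpt
```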
Motivation and Context
Important for those who want to fine-tune the instruct versions, as well as anyone who just wants to have the same prompt template as instruct versions, even if they are fine-tuning from base.
How has this been tested?
Preprocessing combinations of the following two variables in a multi-turn setting:

- `llama3` vs. `llama31`

With the debug flag enabled, verified that the loss mask is correct and that the tokens match the output of `tokenizer.apply_chat_template`.
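To illustrate what that token comparison is checking, here is a minimal sketch (not axolotl's implementation) of the turn structure shared by the Llama 3 and Llama 3.1 instruct formats, using Meta's published special tokens. The function names are hypothetical:

```python
def render_turn(role: str, content: str) -> str:
    # Each turn is wrapped in header tokens and terminated with <|eot_id|>.
    return f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"


def render_conversation(messages: list) -> str:
    # Note: no default "You are a helpful assistant" system turn is injected;
    # a system message appears only if the caller supplies one, which is the
    # behavior this PR enforces for the Llama 3 template.
    return "<|begin_of_text|>" + "".join(
        render_turn(m["role"], m["content"]) for m in messages
    )
```

A debug run would compare strings like these, token by token, against what `tokenizer.apply_chat_template` produces for the same messages.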
Screenshots (if appropriate)
Example Llama 3.1 tokenized output: