[1/n] torchtune <> llama-stack integration skeleton #540
Conversation
added some initial comments
Review threads were opened on the following files:

- llama_stack/providers/inline/post_training/meta_reference/__init__.py (outdated)
- llama_stack/providers/inline/post_training/meta_reference/config.py (outdated)
- ...stack/providers/inline/post_training/meta_reference/recipes/lora_finetuning_single_device.py (outdated)
- llama_stack/providers/inline/post_training/meta_reference/utils.py (outdated)
- llama_stack/providers/inline/post_training/torchtune/recipes/lora_finetuning_single_device.py
The review thread below is anchored to this diff excerpt:

```python
EXPECTED_DATASET_SCHEMA: Dict[str, List[Dict[str, ParamType]]] = {
    "alpaca": [
```
A few questions:

- what do these three options mean?
- what does `instruction` mean? does it mean `system_prompt`?
- do you think we can use the types we have in the rest of our system -- for example, how is a dialog represented? We should be able to re-use the `UserMessage`, `SystemMessage` types we have in the rest of the system. Evals uses some of them (see the sketch below).
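For reference, a minimal sketch of the dialog types the question refers to, assuming they are importable from `llama_stack.apis.inference` as elsewhere in the stack:

```python
# A minimal sketch, assuming UserMessage/SystemMessage live in
# llama_stack.apis.inference as they do elsewhere in the stack.
from llama_stack.apis.inference import SystemMessage, UserMessage

dialog = [
    SystemMessage(content="You are a helpful assistant."),
    UserMessage(content="Give three tips for staying healthy."),
]
```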
> what do these three options mean?

The 3 options are the 3 eligible alpaca dataset schemas. The 'input' and 'text' columns are optional for the alpaca dataset schema (see https://github.com/pytorch/torchtune/blob/9cfa28835246a4c1ac4449e703eae8f49227db55/torchtune/data/_messages.py#L696 and https://huggingface.co/datasets/tatsu-lab/alpaca?row=0).
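To make the three variants concrete, here is a sketch of how the `EXPECTED_DATASET_SCHEMA` entry could enumerate them; the use of the stack's `StringType`/`ParamType` is an assumption, and the exact variant set is illustrative:

```python
# Illustrative sketch: three eligible alpaca schemas, since 'input' and
# 'text' are optional columns. Types assumed from the stack's type system.
from typing import Dict, List

from llama_stack.apis.common.type_system import ParamType, StringType

EXPECTED_DATASET_SCHEMA: Dict[str, List[Dict[str, ParamType]]] = {
    "alpaca": [
        {
            "instruction": StringType(),
            "input": StringType(),
            "output": StringType(),
            "text": StringType(),
        },
        {
            "instruction": StringType(),
            "input": StringType(),
            "output": StringType(),
        },
        {
            "instruction": StringType(),
            "output": StringType(),
        },
    ],
}
```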
> what does instruction mean? does it mean system_prompt?

`instruction` is different from `system_prompt` here. In the alpaca dataset, 'instruction' pairs with 'input' to form the user prompt (example: https://github.com/pytorch/torchtune/blob/9cfa28835246a4c1ac4449e703eae8f49227db55/torchtune/data/_messages.py#L696).
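For instance, a typical alpaca-style row looks like this (values are illustrative): 'instruction' and 'input' are concatenated into the user prompt, and 'output' is the assistant target.

```python
# Illustrative alpaca-style row (values made up for this example).
sample = {
    "instruction": "Classify the following sentence as positive or negative.",
    "input": "I loved the movie.",
    "output": "Positive",
}
```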
> do you think we can use the types we have in the rest of our system -- for example, how is a dialog represented? We should be able to re-use the UserMessage, SystemMessage types we have in the rest of the system. Evals uses some of them.

torchtune has its own `Message` definition in its data transforms (https://github.com/pytorch/torchtune/blob/9cfa28835246a4c1ac4449e703eae8f49227db55/torchtune/data/_messages.py#L724). I lean toward directly importing the torchtune data transforms into the stack and reusing its `Message` type. For dataset schema validation, I follow how eval does it:

```python
async def validate_eval_input_dataset_schema(self, dataset_id: str) -> None:
```
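A minimal sketch of what the analogous post-training validator might look like, modeled on the eval pattern above; the method name and the `datasets_api` attribute are assumptions, not the PR's actual code:

```python
# Hypothetical sketch modeled on the eval validator; names are assumptions.
# (Lives as a method on the provider impl, which holds a datasets_api handle.)
async def validate_post_training_dataset_schema(self, dataset_id: str) -> None:
    dataset_def = await self.datasets_api.get_dataset(dataset_id=dataset_id)
    if not dataset_def.dataset_schema:
        raise ValueError(f"Dataset {dataset_id} does not have a schema defined.")
    # Accept any of the eligible alpaca schema variants.
    if dataset_def.dataset_schema not in EXPECTED_DATASET_SCHEMA["alpaca"]:
        raise ValueError(
            f"Dataset {dataset_id} does not match any expected alpaca schema."
        )
```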
let's get this in!!!!
Context
This is the first of a series of PRs that integrate torchtune with llama-stack as the meta reference post-training implementation. For the MVP, we focus on single-device LoRA SFT.

Though this PR is still WIP, we want early feedback on the high-level design of the skeleton while we continue working on the details.
Scope
To limit the scope of this PR, we focus on the skeleton of the implementation.
What is included?

What is not included?
Testing
e2e test
Although we haven't added detailed testing or a numerical parity check with torchtune yet, we ran a simple E2E test from client to server:

```
llama stack build --template experimental-post-training --image-type conda
llama stack run experimental-post-training
llama-stack-client --endpoint http://devgpu018.nha2.facebook.com:5000 post_training supervised_fine_tune
```
[server and client output screenshots]
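For reference, a hedged sketch of what the equivalent call could look like through the Python client; the parameter names and config shapes below are assumptions, not the PR's finalized API:

```python
# Hypothetical sketch; parameter names/shapes are assumptions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://devgpu018.nha2.facebook.com:5000")

job = client.post_training.supervised_fine_tune(
    job_uuid="sft-job-0",
    model="Llama3.2-3B-Instruct",
    algorithm_config={"type": "LoRA", "rank": 8, "alpha": 16},
    training_config={
        "n_epochs": 1,
        "data_config": {"dataset_id": "alpaca", "batch_size": 2, "shuffle": False},
    },
    hyperparam_search_config={},
    logger_config={},
)
print(job)
```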
parity check

torchtune dataloader output and llama-stack post-training dataloader output are the same.
torchtune LoRA SFT and llama-stack post-training LoRA SFT on the alpaca dataset with the llama3.2 3B instruct model match numerically.
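A sketch of how such a numerical parity check might be structured (this harness is an assumption, not the PR's actual test code): run both recipes with the same seed and compare per-step losses.

```python
# Hypothetical parity harness; not the PR's actual test code.
import math

def assert_loss_parity(torchtune_losses, llama_stack_losses, atol=1e-5):
    """Compare per-step training losses from the two recipes."""
    assert len(torchtune_losses) == len(llama_stack_losses), "step count differs"
    for step, (a, b) in enumerate(zip(torchtune_losses, llama_stack_losses)):
        assert math.isclose(a, b, abs_tol=atol), (
            f"loss mismatch at step {step}: {a} vs {b}"
        )
```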
unit test