Add multi-card support for image_to_text examples #1312
Conversation
Signed-off-by: yuanwu <[email protected]>
@yuanwu2017 can you update the README with information for the multi-card run?
I have tried

```bash
QUANT_CONFIG=./quantization_config/maxabs_measure.json python ../gaudi_spawn.py --world_size 2 --use_deepspeed run_pipeline.py \
    --model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf \
    --image_path "https://llava-vl.github.io/static/images/view.jpg" \
    --use_hpu_graphs \
    --bf16
```

and the command works both with and without your changes. The same goes for:

```bash
python3 ../gaudi_spawn.py --world_size 2 --use_deepspeed run_pipeline.py \
    --model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf \
    --use_hpu_graphs \
    --bf16
```

Can you comment on the PR?
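For the README update requested above, a possible snippet is sketched below. This is only an illustration assembled from the commands quoted in this thread: the `--world_size`, model name, and `QUANT_CONFIG` path are assumptions taken from those commands, and the snippet requires Gaudi (HPU) hardware with DeepSpeed installed, so it is not runnable elsewhere.

```bash
# Multi-card run of the image-to-text example (2 HPUs via DeepSpeed).
# gaudi_spawn.py launches one process per card; --world_size sets the card count.
python3 ../gaudi_spawn.py --world_size 2 --use_deepspeed run_pipeline.py \
    --model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf \
    --use_hpu_graphs \
    --bf16

# Optional FP8 measurement pass: QUANT_CONFIG points at the measurement
# config shipped with the example (path assumed from this thread).
QUANT_CONFIG=./quantization_config/maxabs_measure.json \
python3 ../gaudi_spawn.py --world_size 2 --use_deepspeed run_pipeline.py \
    --model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf \
    --use_hpu_graphs \
    --bf16
```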
Co-authored-by: Yaser Afshar <[email protected]>
OK. I will update it later.
@yuanwu2017 would you please address the comment so we can wrap up this PR? We want to include it in the 1.18 release. Thanks.
Signed-off-by: yuanwu <[email protected]>
Signed-off-by: yuanwu <[email protected]>
@yuanwu2017 it seems we already have multi-card examples now. Please check and close the PR. Thanks.
What does this PR do?
Fixes # (issue)
Before submitting