
Does llamafactory currently support model evaluation on Ascend 910? #5434

Open
1 task done
yiyayieryo opened this issue Sep 13, 2024 · 3 comments
Labels: npu (This problem is related to NPU devices), pending (This problem is yet to be addressed)

Comments

@yiyayieryo

Reminder

  • I have read the README and searched the existing issues.

System Info

llamafactory: 0.9.0, platform: Ascend 910B

Reproduction

ASCEND_RT_VISIBLE_DEVICES=4 llamafactory-cli eval /app/examples/lora_single_gpu/llama3_lora_eval.yaml
Running the model evaluation command above does not invoke the Ascend 910 NPU.

Expected behavior

No response

Others

No response

@github-actions github-actions bot added pending This problem is yet to be addressed npu This problem is related to NPU devices labels Sep 13, 2024
@codemayq
Collaborator

What is the program's behavior — is it running only on the CPU, without using the NPU?

@yiyayieryo
Author

> What is the program's behavior — is it running only on the CPU, without using the NPU?

Yes. I am just running the sample from the examples directory with llamafactory-cli eval.
When I checked NPU usage during the run, none of the NPUs were being used, and it ran very slowly.

@codemayq
Copy link
Collaborator

What if you run plain inference instead of eval? In theory the two should behave the same. You need to first verify that the NPU-related configuration itself is working; otherwise everything may be running on the CPU only.
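The suggestion above — verifying the NPU stack itself before investigating eval — can be sketched with a small Python check. This is a minimal diagnostic sketch, not part of LLaMA-Factory: it only probes whether the `torch` and `torch_npu` packages (`torch_npu` is Huawei's Ascend adapter for PyTorch) are importable, which is a prerequisite for any NPU execution; the helper name `npu_stack_status` is hypothetical.

```python
import importlib.util


def npu_stack_status():
    """Report which parts of the PyTorch/Ascend stack are importable.

    Hypothetical diagnostic helper: torch_npu is Huawei's Ascend adapter
    for PyTorch. If it is missing, the run can only fall back to CPU.
    """
    return {mod: importlib.util.find_spec(mod) is not None
            for mod in ("torch", "torch_npu")}


status = npu_stack_status()
print(status)
if not status["torch_npu"]:
    print("torch_npu not found: workloads will run on CPU only")
```

When both packages are present, `torch.npu.is_available()` (a namespace registered by importing `torch_npu`) gives the definitive answer, and the `npu-smi info` tool on the host shows live device utilization, as the reporter checked above.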
