llamafactory: 0.9.0, platform: Ascend 910B

ASCEND_RT_VISIBLE_DEVICES=4 llamafactory-cli eval /app/examples/lora_single_gpu/llama3_lora_eval.yaml

Running the model evaluation command above does not use the Ascend 910B NPU.
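For context, a minimal sketch (not LLaMA-Factory code) of what `ASCEND_RT_VISIBLE_DEVICES` does, assuming it behaves like `CUDA_VISIBLE_DEVICES` in restricting which physical NPUs a process can see; the helper name `visible_npu_ids` is hypothetical:

```python
def visible_npu_ids(env_value: str) -> list[int]:
    """Parse an ASCEND_RT_VISIBLE_DEVICES-style value into physical device IDs."""
    return [int(tok) for tok in env_value.split(",") if tok.strip()]

# With ASCEND_RT_VISIBLE_DEVICES=4, only physical NPU 4 is exposed,
# and the process addresses it as logical device 0.
print(visible_npu_ids("4"))        # [4]
print(visible_npu_ids("0,1,2,3"))  # [0, 1, 2, 3]
```

So a single-device run with `ASCEND_RT_VISIBLE_DEVICES=4` should show utilization on physical NPU 4 when checked from outside the process.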
What is the observed behavior? Is it running on the CPU only, without using the NPU?
Yes. I am running the example from the repo; while `llamafactory-cli eval` was running I checked NPU utilization and none of the NPUs were in use, and the run is very slow.
Then try plain inference instead of eval; in theory the two should behave the same. You need to first verify that the NPU-related configuration itself is working correctly, otherwise the job may be running on the CPU only.
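One way to check the NPU configuration the maintainer refers to: a sketch assuming `torch_npu` (Huawei's PyTorch NPU plugin, which registers the `npu` device with PyTorch) is installed in the same environment; the helper name `npu_status` is hypothetical:

```python
def npu_status() -> str:
    """Report whether PyTorch can see an Ascend NPU in this environment."""
    try:
        import torch
        import torch_npu  # noqa: F401  (registers the "npu" device type with torch)
    except ImportError:
        return "torch/torch_npu not installed"
    if not torch.npu.is_available():
        return "NPU not available (check CANN drivers and environment)"
    return f"{torch.npu.device_count()} NPU(s) visible"

print(npu_status())
```

If this does not report at least one visible NPU, `llamafactory-cli eval` (or plain inference) will silently fall back to the CPU.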