
GPU memory usage in multi-GPU training #6437

Closed
Moon-404 opened this issue Dec 25, 2024 · 1 comment
Labels
duplicate (This issue or pull request already exists), solved (This problem has been already solved)

Comments

@Moon-404

Reminder

  • I have read the README and searched the existing issues.

System Info

  • llamafactory version: 0.9.2.dev0
  • Platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35
  • Python version: 3.10.15
  • PyTorch version: 2.5.1 (GPU)
  • Transformers version: 4.45.2
  • Datasets version: 2.19.1
  • Accelerate version: 1.0.1
  • PEFT version: 0.12.0
  • TRL version: 0.9.6
  • GPU type: NVIDIA GeForce RTX 4090
  • DeepSpeed version: 0.15.4

Reproduction

llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml

Expected behavior

When training with LoRA on multiple GPUs, I noticed that the model is not sharded across the GPUs; instead, each GPU holds a full copy of the model and trains in a data-parallel fashion.

Does llama-factory have a feature to shard the model across multiple GPUs for training?
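For reference, a minimal sketch of one way sharding is typically enabled in LLaMA-Factory: point the training YAML at a DeepSpeed ZeRO-3 config so that parameters, gradients, and optimizer states are partitioned across GPUs instead of each GPU holding a full replica. The deepspeed key and the examples/deepspeed/ds_z3_config.json path below are assumptions based on the repository's example layout and may differ in your checkout:

    # Sketch: add to examples/train_lora/llama3_lora_sft.yaml (config path is assumed)
    deepspeed: examples/deepspeed/ds_z3_config.json  # ZeRO stage 3 shards model states across GPUs

    # Then launch as before:
    # llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml

Without such a config, multi-GPU LoRA training runs as plain data parallelism, which matches the per-GPU memory usage described above.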

Others

No response

@github-actions bot added the pending label Dec 25, 2024
@hiyouga added the duplicate and solved labels and removed the pending label Dec 25, 2024
@hiyouga closed this as completed Dec 25, 2024