
The paper mentions that in the PPO pipeline the other models can be frozen while the reward model is trained first until the value loss reaches 0 — how exactly is that training carried out? #52

Open
HCHCXY opened this issue Mar 7, 2024 · 1 comment

Comments

@HCHCXY

HCHCXY commented Mar 7, 2024

No description provided.

@refrain-wbh
Contributor

The reward model does not participate in training — do you mean the critic model (i.e. the value model)?
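For illustration, the warm-up the question describes — freeze everything else and fit only the critic (value model) on the returns until the value loss is near zero — can be sketched as follows. This is a minimal toy sketch, not the repo's actual code: the linear value head, learning rate, and synthetic returns are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "states" and their returns R. In a real PPO warm-up these would come
# from rollouts of the frozen actor, scored by the frozen reward model.
states = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
returns = states @ true_w  # pretend returns are linear in the state features

# Critic: a linear value head V(s) = s @ w (hypothetical, for illustration).
w = np.zeros(8)
lr = 0.05
losses = []
for _ in range(500):
    v = states @ w
    err = v - returns
    losses.append(float(np.mean(err ** 2)))   # value loss: mean((V(s) - R)^2)
    grad = 2 * states.T @ err / len(states)   # gradient of the MSE w.r.t. w
    w -= lr * grad                            # update ONLY the critic's weights

print(f"value loss: {losses[0]:.4f} -> {losses[-1]:.2e}")
```

Only the critic's parameters receive gradient updates; the actor and reward model are held fixed, so training it to (near-)zero value loss does not change the policy or the reward signal.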
