Can't train the reward model with a batch
```python
seq, prompt_mask, labels = next(train_loader)
loss = reward_model(seq, prompt_mask = prompt_mask, labels = labels)
accelerator.backward(loss / GRADIENT_ACCUMULATE_EVERY)
```
I set this up, but the code throws an error. Checking the source, I found this:
```python
if self.binned_output:
    return F.mse_loss(pred, labels)
return F.cross_entropy(pred, labels)
```
`cross_entropy` does not seem to support multiple training samples at once. I changed it to `mse_loss`, but I still get an error.
How do I compute the loss over multiple training samples, e.g. with a batch size of 8?
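For reference, `F.cross_entropy` does accept a batch dimension out of the box; errors like this usually come from the label tensor's shape or dtype rather than the batch size. A minimal sketch (the shapes here are illustrative assumptions, not taken from this repository):

```python
import torch
import torch.nn.functional as F

# Assumed shapes for illustration: logits of shape (batch, num_classes)
# and integer class indices of shape (batch,). With these shapes,
# F.cross_entropy handles a batch of 8 samples directly.
batch_size, num_classes = 8, 5
pred = torch.randn(batch_size, num_classes)            # raw logits per sample
labels = torch.randint(0, num_classes, (batch_size,))  # one class id per sample

loss = F.cross_entropy(pred, labels)  # scalar, averaged over the batch
print(loss.shape)  # torch.Size([])
```

If `labels` is one-hot encoded or floating point, `F.cross_entropy` will raise an error (in older PyTorch versions, it requires a 1-D `LongTensor` of class indices), which may be what is happening here.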
The reward model doesn't need training.
Are you serious?
Then how do you explain the README example?