Adding reverse and symmetric KLD losses #2094
base: main
Conversation
- Adding KLD losses based on [link](https://github.com/jongwooko/distillm/blob/17c0f98bc263b1861a02d5df578c84aea652ee65/distillm/losses.py)
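For orientation, here is a minimal sketch of the math being added, following the general shape of the distillm reference; the function names, masking convention (ignore index of -100), and normalization shown here are assumptions for illustration, not the PR's actual code:

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # assumed ignore-index convention


def _masked_mean(per_token_loss: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Average only over tokens whose labels are not the ignore index.
    mask = (labels != IGNORE_INDEX).float()
    return (per_token_loss * mask).sum() / mask.sum().clamp(min=1)


def forward_kl(student_logits, teacher_logits, labels):
    # KL(teacher || student): the expectation is taken under the teacher distribution.
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    s_logp = F.log_softmax(student_logits, dim=-1)
    per_token = (t_logp.exp() * (t_logp - s_logp)).sum(dim=-1)
    return _masked_mean(per_token, labels)


def reverse_kl(student_logits, teacher_logits, labels):
    # KL(student || teacher): the expectation is taken under the student distribution.
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    per_token = (s_logp.exp() * (s_logp - t_logp)).sum(dim=-1)
    return _masked_mean(per_token, labels)


def symmetric_kl(student_logits, teacher_logits, labels, sym_kd_ratio=0.5):
    # Convex combination: a ratio of 1.0 recovers forward KL, 0.0 recovers reverse KL.
    return sym_kd_ratio * forward_kl(student_logits, teacher_logits, labels) + (
        1 - sym_kd_ratio
    ) * reverse_kl(student_logits, teacher_logits, labels)
```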
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2094
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure as of commit 3097e7c with merge base 32e265d.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @insop! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
@ebsmothers, @lindawangg, PTAL.
Thanks @insop for the PR! I left a few comments but no major concerns. One thing you'll need to fix is the failing linter job -- if you haven't already you can set up and run pre-commit on all your modified files by following this section of our contributing guide (assuming you already performed a dev install). If you have any trouble do let me know and we can help out.
```
    Implementation of https://github.com/jongwooko/distillm/blob/17c0f98bc263b1861a02d5df578c84aea652ee65/distillm/losses.py

    Args:
        sym_kd_ratio (float): Ratio of symmetric KL divergence loss.
```
nit: let's be a bit more explicit, e.g. "When set to 1 this loss reduces to forward KL divergence, when set to 0 this loss reduces to reverse KL divergence". Also, separately, it'd be good to do a value check that `0 <= sym_kd_ratio <= 1` on init.
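To make the suggestion concrete, a sketch of what such an init-time check might look like; the class name and `sym_kd_ratio` come from the diff, while `num_output_chunks` and `ignore_index` are assumptions modeled on the existing chunked losses:

```python
import torch


class SymmetricKLWithChunkedOutputLoss(torch.nn.Module):
    def __init__(
        self,
        num_output_chunks: int = 8,
        ignore_index: int = -100,
        sym_kd_ratio: float = 0.5,
    ):
        super().__init__()
        # Fail fast on invalid mixing ratios instead of silently training with them.
        if not 0.0 <= sym_kd_ratio <= 1.0:
            raise ValueError(f"sym_kd_ratio must be in [0, 1], got {sym_kd_ratio}")
        self.num_output_chunks = num_output_chunks
        self.ignore_index = ignore_index
        # 1.0 -> pure forward KL, 0.0 -> pure reverse KL
        self.sym_kd_ratio = sym_kd_ratio
```

Raising at construction time keeps a bad config from being discovered only after a training run has started.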
```
@@ -138,3 +237,164 @@ def forward(
        )

        return total_fkl_loss / torch.sum(mask.view(-1), dim=0)


class ReverseKLWithChunkedOutputLoss(torch.nn.Module):
```
Not necessary for this PR, but as we are starting to have a proliferation of chunked loss implementations, I wonder whether it'd be worth investing in a general utility to wrap an arbitrary loss with a chunking operation. @felipemello1
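One possible shape for such a utility is sketched below; the name `apply_chunked` and the assumption that the wrapped loss returns an unnormalized per-chunk sum when called with `normalize=False` are my own illustration, not existing torchtune code:

```python
from typing import Callable, List

import torch


def apply_chunked(
    loss_fn: Callable[..., torch.Tensor],
    chunked_inputs: List[List[torch.Tensor]],
    labels: torch.Tensor,
    ignore_index: int = -100,
) -> torch.Tensor:
    """Sum an unnormalized per-chunk loss over all chunks, then normalize once.

    ``chunked_inputs`` is a list of chunk lists (e.g. [student_chunks, teacher_chunks]),
    each already split into the same number of chunks along the sequence dimension.
    """
    num_chunks = len(chunked_inputs[0])
    label_chunks = labels.chunk(num_chunks, dim=1)
    total = 0.0
    for i, label_chunk in enumerate(label_chunks):
        per_chunk = [chunks[i] for chunks in chunked_inputs]
        # Assumes loss_fn(..., normalize=False) returns an unnormalized sum per chunk.
        total = total + loss_fn(*per_chunk, label_chunk, normalize=False)
    # Normalize by the number of non-ignored tokens across the full sequence.
    return total / (labels != ignore_index).sum().clamp(min=1)
```

Normalizing once over the whole sequence (rather than per chunk) matches the pattern visible in the quoted diff below, where per-chunk losses are accumulated with `normalize=False` and divided by the mask sum at the end.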
```python
                student_chunk, teacher_chunk, label_chunk, normalize=False
            )

        return total_rkl_loss / torch.sum(mask.view(-1), dim=0)
```
I don't think our existing chunked forward KL loss even does this, but I wonder why we don't do the same check that `torch.sum(mask.view(-1), dim=0) != 0` that we're doing in the unchunked version? Couldn't we still potentially get division by zero here?
We didn't add this check for the chunked forward KL either, mainly because if all the labels are ignore index, there's probably something wrong with the training data. Wondering if we should add this check in the training module to cover all the loss cases, maybe in a separate diff?
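For reference, one way the guard being discussed could look, written as a small standalone helper; this is an illustrative sketch, not the check used by the unchunked loss in the codebase:

```python
import torch


def masked_normalize(total_loss: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Divide an accumulated loss by the number of unmasked tokens, guarding against zero."""
    token_count = torch.sum(mask.view(-1), dim=0)
    # clamp avoids division by zero if every label is the ignore index,
    # which (as noted above) most likely signals a training-data problem.
    return total_loss / token_count.clamp(min=1)
```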
```python
    elif isinstance(loss, ReverseKLWithChunkedOutputLoss):
        loss.rkl_loss = torch.compile(loss.rkl_loss, backend=backend)
    elif isinstance(loss, SymmetricKLWithChunkedOutputLoss):
        loss.sym_kl_loss = torch.compile(loss.sym_kl_loss, backend=backend)
```
Kinda related to my comment above, we should also consider just defining a protocol `ChunkedLoss` or something having a method e.g. `def base_loss(self, *args, **kwargs)` and having all of `CEWithChunkedOutputLoss`, `ForwardKLWithChunkedOutputLoss`, `ReverseKLWithChunkedOutputLoss`, `SymmetricKLWithChunkedOutputLoss` inherit from that. Then L76-L85 basically just become

```python
if isinstance(loss, ChunkedLoss):
    loss.base_loss = torch.compile(loss.base_loss, backend=backend)
```

Again, not a blocker for this particular PR.
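A rough sketch of that protocol idea is below; the `runtime_checkable` check and the `compile_base_loss` wrapper name are my own illustration rather than existing torchtune code:

```python
from typing import Protocol, runtime_checkable

import torch


@runtime_checkable
class ChunkedLoss(Protocol):
    """Any chunked loss that exposes its inner per-chunk loss under one name."""

    def base_loss(self, *args, **kwargs) -> torch.Tensor:
        ...


def compile_base_loss(loss: torch.nn.Module, backend: str = "inductor") -> torch.nn.Module:
    # With a shared attribute name, the per-class elif branches collapse to one check.
    if isinstance(loss, ChunkedLoss):
        loss.base_loss = torch.compile(loss.base_loss, backend=backend)
    else:
        loss = torch.compile(loss, backend=backend)
    return loss
```

Keeping the inner loss under a single attribute name means new chunked losses get compiled for free without touching the training module.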
```
@@ -114,3 +114,201 @@ def test_forward_kl_loss_expected(self):
        # assert
        assert_expected(chunked_loss, expected_loss, rtol=1e-2, atol=1e-2)
        assert_expected(standard_loss, expected_loss, rtol=1e-2, atol=1e-2)


class TestReverseKLWithChunkedOutputLoss:
```
Thanks for adding these unit tests!
```python
            dtype=torch.bfloat16,
        )
        labels = torch.tensor([[0, 3, 3, 1], [1, 1, 1, 1]])
        expected_loss = torch.tensor(0.6775, dtype=torch.float32)
```
Just to verify: did you set this value based on the reference implementation in distillm?
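As an aside, one way to pin such a constant independently of the chunked implementation is to evaluate a plain, unchunked reverse KL on the same fixed inputs. The sketch below uses random placeholder tensors rather than the test's fixed-init fixtures, so the printed value will not be 0.6775; it only illustrates the cross-check:

```python
import torch
import torch.nn.functional as F

# Placeholder inputs; the actual test builds deterministic tensors.
student_logits = torch.randn(2, 4, 4, dtype=torch.bfloat16)
teacher_logits = torch.randn(2, 4, 4, dtype=torch.bfloat16)
labels = torch.tensor([[0, 3, 3, 1], [1, 1, 1, 1]])

s_logp = F.log_softmax(student_logits.float(), dim=-1)
t_logp = F.log_softmax(teacher_logits.float(), dim=-1)
per_token_rkl = (s_logp.exp() * (s_logp - t_logp)).sum(dim=-1)
mask = (labels != -100).float()  # assumes -100 as the ignore index
print((per_token_rkl * mask).sum() / mask.sum())  # compare against the hard-coded constant
```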
Thank you for the review and comments, @ebsmothers.
@insop do you have any results from training with these losses that you could add to the test plan?
Context

What is the purpose of this PR? Is it to
- add a new feature
- fix a bug
- update tests and/or documentation
- other (please add here)

Please link to any issues this PR addresses.
Changelog
What are the changes made in this PR?
Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.
- run pre-commit hooks and linters (make sure you've first installed via `pre-commit install`)
- run unit tests via `pytest tests`
- run recipe tests via `pytest tests -m integration_test`
UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it. Here is a docstring example and a tutorial example.
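For the new losses, such a dummy example might look like the following sketch; the import path, constructor arguments, and forward signature (lists of logit chunks plus labels) are assumptions modeled on the existing chunked-loss classes, not taken from the PR:

```python
import torch

# Import path assumed; the new losses would presumably live alongside
# ForwardKLWithChunkedOutputLoss.
from torchtune.modules.loss import SymmetricKLWithChunkedOutputLoss

loss_fn = SymmetricKLWithChunkedOutputLoss(num_output_chunks=8, sym_kd_ratio=0.5)

batch_size, seq_len, vocab_size = 2, 256, 32000
student_logits = torch.randn(batch_size, seq_len, vocab_size)
teacher_logits = torch.randn(batch_size, seq_len, vocab_size)
labels = torch.randint(0, vocab_size, (batch_size, seq_len))

# Chunked losses expect the logits pre-split along the sequence dimension.
student_chunks = list(student_logits.chunk(loss_fn.num_output_chunks, dim=1))
teacher_chunks = list(teacher_logits.chunk(loss_fn.num_output_chunks, dim=1))
loss = loss_fn(student_chunks, teacher_chunks, labels)
```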