Adding reverse and symmetric KLD losses #2094

Open
wants to merge 1 commit into base: main

Conversation

@insop insop commented Nov 30, 2024

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.

Changelog

What are the changes made in this PR?

  • Adding reverse and symmetric KLD losses (a minimal sketch of the reverse KL objective is included below)
  • Adding KLD losses based on the linked reference implementation (distillm; see the review thread below)
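
For illustration, here is a minimal sketch of the token-level reverse KL objective these losses build on (the symmetric variant blends this with the existing forward KL via a sym_kd_ratio weight); the function name and shapes are illustrative only, not the exact torchtune API:

import torch
import torch.nn.functional as F

def reverse_kl_loss(
    student_logits: torch.Tensor,  # [batch, seq_len, vocab]
    teacher_logits: torch.Tensor,  # [batch, seq_len, vocab]
    labels: torch.Tensor,          # [batch, seq_len]; ignore_index marks prompt/padding tokens
    ignore_index: int = -100,
) -> torch.Tensor:
    # Reverse KL is KL(student || teacher) = sum_x q(x) * (log q(x) - log p(x)),
    # where q is the student distribution and p is the teacher distribution.
    student_logprobs = F.log_softmax(student_logits, dim=-1)
    teacher_logprobs = F.log_softmax(teacher_logits, dim=-1)
    student_probs = student_logprobs.exp()

    per_token_kl = (student_probs * (student_logprobs - teacher_logprobs)).sum(-1)

    # Average over non-ignored tokens only.
    mask = (labels != ignore_index).float()
    return (per_token_kl * mask).sum() / mask.sum()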

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.

  • I did not change any public API
  • I have added an example to docs or docstrings

pytorch-bot bot commented Nov 30, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2094

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 3097e7c with merge base 32e265d:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @insop!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@insop insop changed the title from "Adding reverse and symmetric KLD loss" to "Adding reverse and symmetric KLD losses" Nov 30, 2024
@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@insop insop marked this pull request as ready for review November 30, 2024 04:29
@facebook-github-bot facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Nov 30, 2024
@insop insop marked this pull request as draft November 30, 2024 04:40
@insop insop marked this pull request as ready for review November 30, 2024 04:40
@insop (Author) commented Nov 30, 2024

@ebsmothers, @lindawangg, PTAL.
Thank you.

@ebsmothers ebsmothers (Contributor) left a comment

Thanks @insop for the PR! I left a few comments but no major concerns. One thing you'll need to fix is the failing linter job -- if you haven't already you can set up and run pre-commit on all your modified files by following this section of our contributing guide (assuming you already performed a dev install). If you have any trouble do let me know and we can help out.

Implementation of https://github.com/jongwooko/distillm/blob/17c0f98bc263b1861a02d5df578c84aea652ee65/distillm/losses.py

Args:
sym_kd_ratio (float): Ratio of symmetric KL divergence loss.

Contributor:

nit: let's be a bit more explicit, e.g. "When set to 1 this loss reduces to forward KL divergence; when set to 0 it reduces to reverse KL divergence". Separately, it'd also be good to add a value check that 0 <= sym_kd_ratio <= 1 on init.
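
For concreteness, a rough sketch of what that docstring wording and init-time check could look like (the class and argument names follow this PR; the num_output_chunks default and error message are placeholders):

import torch

class SymmetricKLWithChunkedOutputLoss(torch.nn.Module):
    """
    Args:
        sym_kd_ratio (float): Weight on the forward KL term. When set to 1 this loss
            reduces to forward KL divergence; when set to 0 it reduces to reverse KL
            divergence. Must satisfy 0 <= sym_kd_ratio <= 1.
    """

    def __init__(self, num_output_chunks: int = 8, sym_kd_ratio: float = 0.5):
        super().__init__()
        if not 0.0 <= sym_kd_ratio <= 1.0:
            raise ValueError(f"sym_kd_ratio must be in [0, 1], got {sym_kd_ratio}")
        self.num_output_chunks = num_output_chunks
        self.sym_kd_ratio = sym_kd_ratio

    # In forward(), the combined loss would then be:
    #   loss = self.sym_kd_ratio * fkl_loss + (1.0 - self.sym_kd_ratio) * rkl_loss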

@@ -138,3 +237,164 @@ def forward(
)

return total_fkl_loss / torch.sum(mask.view(-1), dim=0)

class ReverseKLWithChunkedOutputLoss(torch.nn.Module):

Contributor:

Not necessary for this PR, but as we are starting to have a proliferation of chunked loss implementations, I wonder whether it'd be worth investing in a general utility to wrap an arbitrary loss with a chunking operation. @felipemello1
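
For context, such a utility could look roughly like this (the function name, ignore_index default, and normalize keyword are assumptions modeled on the per-chunk pattern quoted below):

from typing import Callable, List

import torch

def chunked_loss(
    base_loss: Callable[..., torch.Tensor],
    student_chunks: List[torch.Tensor],
    teacher_chunks: List[torch.Tensor],
    labels: torch.Tensor,
    ignore_index: int = -100,
) -> torch.Tensor:
    # Apply an arbitrary per-chunk loss to pre-chunked logits, then normalize once
    # by the total number of non-ignored tokens, mirroring the pattern in this PR.
    label_chunks = labels.chunk(len(student_chunks), dim=1)
    total = 0.0
    for student_chunk, teacher_chunk, label_chunk in zip(
        student_chunks, teacher_chunks, label_chunks
    ):
        total = total + base_loss(student_chunk, teacher_chunk, label_chunk, normalize=False)
    mask = labels != ignore_index
    return total / torch.sum(mask.view(-1), dim=0)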

student_chunk, teacher_chunk, label_chunk, normalize=False
)

return total_rkl_loss / torch.sum(mask.view(-1), dim=0)

Contributor:

I don't think our existing chunked forward KL loss even does this, but I wonder why we don't do the same check that torch.sum(mask.view(-1), dim=0) != 0 that we're doing in the unchunked version? Couldn't we still potentially get division by zero here?

Contributor:

We didn't add this check for the chunked forward KL either, mainly because if all the labels are ignore index, there's probably something wrong with the training data. Wondering if we should add this check in the training module to cover all the loss cases, maybe in a separate diff?
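
For reference, the guard under discussion (whether it lives in each loss or once in the training module) could be as small as the following; the helper name is hypothetical:

import torch

def normalize_by_token_count(total_loss: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # mask is 1/True for tokens whose label is not the ignore index.
    token_count = torch.sum(mask.view(-1), dim=0)
    if token_count == 0:
        # Every label in the batch is the ignore index. That most likely signals a
        # problem with the training data, so return zero (or raise) rather than
        # dividing by zero and propagating NaN/inf into the loss.
        return torch.zeros_like(total_loss)
    return total_loss / token_count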

Comment on lines +82 to +85
elif isinstance(loss, ReverseKLWithChunkedOutputLoss):
loss.rkl_loss = torch.compile(loss.rkl_loss, backend=backend)
elif isinstance(loss, SymmetricKLWithChunkedOutputLoss):
loss.sym_kl_loss = torch.compile(loss.sym_kl_loss, backend=backend)

Contributor:

Kinda related to my comment above: we should also consider defining a protocol ChunkedLoss (or something similar) with a method like def base_loss(self, *args, **kwargs), and having all of CEWithChunkedOutputLoss, ForwardKLWithChunkedOutputLoss, ReverseKLWithChunkedOutputLoss, and SymmetricKLWithChunkedOutputLoss inherit from that. Then L76-L85 basically just become:

if isinstance(loss, ChunkedLoss):
	loss.base_loss = torch.compile(loss.base_loss, backend=backend)

Again, not a blocker for this particular PR.
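
Sketched out, that might look like the following; the ChunkedLoss name and base_loss method come from the suggestion above, while the ABC wiring and the compile_loss helper name are assumptions about how it could be plugged in:

from abc import ABC, abstractmethod

import torch

class ChunkedLoss(torch.nn.Module, ABC):
    # Common parent for losses computed over chunked logits, so recipe code can
    # compile the per-chunk kernel without special-casing each loss class.

    @abstractmethod
    def base_loss(self, *args, **kwargs) -> torch.Tensor:
        # Per-chunk loss; subclasses would alias their existing implementations
        # (e.g. fkl_loss, rkl_loss, sym_kl_loss) to this method.
        ...

def compile_loss(loss: torch.nn.Module, backend: str = "inductor") -> torch.nn.Module:
    # The per-class isinstance chain then collapses to a single check.
    if isinstance(loss, ChunkedLoss):
        loss.base_loss = torch.compile(loss.base_loss, backend=backend)
    return loss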

@@ -114,3 +114,201 @@ def test_forward_kl_loss_expected(self):
# assert
assert_expected(chunked_loss, expected_loss, rtol=1e-2, atol=1e-2)
assert_expected(standard_loss, expected_loss, rtol=1e-2, atol=1e-2)

class TestReverseKLWithChunkedOutputLoss:

Contributor:

Thanks for adding these unit tests!

dtype=torch.bfloat16,
)
labels = torch.tensor([[0, 3, 3, 1], [1, 1, 1, 1]])
expected_loss = torch.tensor(0.6775, dtype=torch.float32)

Contributor:

Just to verify: did you set this value based on the reference implementation in distillm?

@insop (Author) commented Dec 1, 2024

Thank you for the review and comments, @ebsmothers.
Ack; I will follow up on the comments soon.

@lindawangg (Contributor) commented:

@insop do you have any results from training with these losses that you could add to the test plan?

Labels: CLA Signed
4 participants