
Adding reverse and symmetric KLD losses #2094

Open · insop wants to merge 6 commits into main
Conversation

Contributor

insop commented Nov 30, 2024

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.

Changelog

What are the changes made in this PR?

  • Adding reverse and symmetric KLD losses (sketched below)
  • Adding KLD losses based on link
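For context, here is a minimal sketch of the three objectives on unchunked logits. This is not the PR's implementation (the PR adds chunked-output variants such as `ReverseKLWithChunkedOutputLoss`); the function names, the mask-based normalization, and the `lam` weighting parameter are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def forward_kl(student_logits, teacher_logits, mask):
    # Forward KLD, KL(teacher || student): the standard "mass-covering" KD loss.
    t_prob = F.softmax(teacher_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    s_logp = F.log_softmax(student_logits, dim=-1)
    kl = (t_prob * (t_logp - s_logp)).sum(dim=-1)  # [batch, seq]
    return (kl * mask).sum() / mask.sum()          # average over unmasked tokens

def reverse_kl(student_logits, teacher_logits, mask):
    # Reverse KLD, KL(student || teacher): "mode-seeking"; one of the losses this PR adds.
    s_prob = F.softmax(student_logits, dim=-1)
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    kl = (s_prob * (s_logp - t_logp)).sum(dim=-1)
    return (kl * mask).sum() / mask.sum()

def symmetric_kl(student_logits, teacher_logits, mask, lam=0.5):
    # Symmetric KLD: a weighted combination of both directions.
    return lam * forward_kl(student_logits, teacher_logits, mask) + (
        1.0 - lam
    ) * reverse_kl(student_logits, teacher_logits, mask)
```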

Test plan

| Model | #Params | Method | HumanEval base | HumanEval plus | MBPP base | MBPP plus |
|---|---|---|---|---|---|---|
| Llama3.2 | 8B | Teacher (Llama3.1) | 39.0 | 34.1 | 64.3 | 53.2 |
| | 1B | Base model | 18.3 | 15.9 | 35.4 | 29.0 |
| | | FT w/o KD | 22.0 | 17.7 | 37.6 | 31.7 |
| | | Forward KLD | 22.6 | 19.5 | 41.3 | 33.1 |
| | | Reverse KLD | 20.7 | 17.7 | 39.2 | 33.6 |
| | | Symmetric KLD | 23.2 | 20.1 | 41.0 | 33.9 |

Please make sure to do each of the following if applicable to your PR. If you're unsure about any of these, just ask and we will happily help. We also have a contributing page for guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.

  • I did not change any public API
  • I have added an example to docs or docstrings
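For illustration, here is a dummy usage example of one of the new losses. The class name comes from this PR's diff; the constructor arguments and the chunked-list calling convention are assumed to mirror the existing `ForwardKLWithChunkedOutputLoss`:

```python
import torch
from torchtune.modules.loss import ReverseKLWithChunkedOutputLoss

# Assumed to mirror ForwardKLWithChunkedOutputLoss: logits are passed as a
# list of chunks split along the sequence dimension.
loss_fn = ReverseKLWithChunkedOutputLoss(num_output_chunks=8, ignore_index=-100)

batch, seq_len, vocab = 2, 256, 512  # small dummy sizes
student_chunks = list(torch.randn(batch, seq_len, vocab).chunk(8, dim=1))
teacher_chunks = list(torch.randn(batch, seq_len, vocab).chunk(8, dim=1))
labels = torch.randint(0, vocab, (batch, seq_len))

loss = loss_fn(student_chunks, teacher_chunks, labels)
```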


pytorch-bot bot commented Nov 30, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2094

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @insop!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

insop changed the title from "Adding reverse and symmetric KLD loss" to "Adding reverse and symmetric KLD losses" on Nov 30, 2024
@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

insop marked this pull request as ready for review on November 30, 2024 04:29
facebook-github-bot added the CLA Signed label on Nov 30, 2024
insop marked this pull request as draft on November 30, 2024 04:40
insop marked this pull request as ready for review on November 30, 2024 04:40
Contributor Author

insop commented Nov 30, 2024

@ebsmothers, @lindawangg, PTAL.
Thank you.

Contributor

ebsmothers left a comment


Thanks @insop for the PR! I left a few comments but no major concerns. One thing you'll need to fix is the failing linter job -- if you haven't already you can set up and run pre-commit on all your modified files by following this section of our contributing guide (assuming you already performed a dev install). If you have any trouble do let me know and we can help out.

torchtune/modules/loss/kd_losses.py (resolved)
@@ -138,3 +237,164 @@ def forward(
)

return total_fkl_loss / torch.sum(mask.view(-1), dim=0)

class ReverseKLWithChunkedOutputLoss(torch.nn.Module):
Contributor

Not necessary for this PR, but as we are starting to have a proliferation of chunked loss implementations, I wonder whether it'd be worth investing in a general utility to wrap an arbitrary loss with a chunking operation. cc @felipemello1
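To illustrate the idea, here is a hypothetical sketch of such a utility. It is not part of this PR; the class name, the sequence-dimension chunking, and the assumption that the wrapped loss returns an unnormalized masked sum over its chunk are all illustrative:

```python
import torch

class ChunkedLossWrapper(torch.nn.Module):
    """Hypothetical utility: apply an arbitrary token-level KD loss chunk by
    chunk so the full [batch, seq, vocab] softmax is never materialized at once."""

    def __init__(self, loss_fn: torch.nn.Module, num_chunks: int = 8):
        super().__init__()
        # Assumption: loss_fn returns an unnormalized masked *sum* over tokens.
        self.loss_fn = loss_fn
        self.num_chunks = num_chunks

    def forward(self, student_logits, teacher_logits, mask):
        total = torch.zeros((), device=student_logits.device)
        for s, t, m in zip(
            student_logits.chunk(self.num_chunks, dim=1),
            teacher_logits.chunk(self.num_chunks, dim=1),
            mask.chunk(self.num_chunks, dim=1),
        ):
            total = total + self.loss_fn(s, t, m)
        # Normalize once over all unmasked tokens rather than per chunk.
        return total / mask.sum()
```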

torchtune/modules/loss/kd_losses.py (resolved)
torchtune/training/_compile.py (resolved)
Contributor Author

insop commented Dec 1, 2024

Thank you for the review and comments, @ebsmothers.
Ack; I will follow up on the comments soon.

@lindawangg
Contributor

@insop do you have any results training with the losses that you could add to the test plan?

Contributor Author

insop commented Jan 3, 2025

> Thanks @insop for the PR! I left a few comments but no major concerns. One thing you'll need to fix is the failing linter job -- if you haven't already you can set up and run pre-commit on all your modified files by following this section of our contributing guide (assuming you already performed a dev install). If you have any trouble do let me know and we can help out.

My apologies for the long delay. This addresses the pre-commit checks and the review comments.

Contributor Author

insop commented Jan 3, 2025

> @insop do you have any results training with the losses that you could add to the test plan?

@lindawangg

My apologies for the long delay. Here is one example of running a small-scale test using Llama 3.1 8B as a teacher and Llama 3.2 1B as a student with the Code Alpaca dataset.

I'm not entirely sure about 'adding to the test plan' in this case. Could you please clarify your suggestion?

| Model | #Params | Method | HumanEval base | HumanEval plus | MBPP base | MBPP plus |
|---|---|---|---|---|---|---|
| Llama3.2 | 8B | Teacher (Llama3.1) | 39.0 | 34.1 | 64.3 | 53.2 |
| | 1B | Base model | 18.3 | 15.9 | 35.4 | 29.0 |
| | | FT w/o KD | 22.0 | 17.7 | 37.6 | 31.7 |
| | | Forward KLD | 22.6 | 19.5 | 41.3 | 33.1 |
| | | Reverse KLD | 20.7 | 17.7 | 39.2 | 33.6 |
| | | Symmetric KLD | 23.2 | 20.1 | 41.0 | 33.9 |

insop requested review from ebsmothers and lindawangg on January 3, 2025 00:33
Contributor Author

insop commented Jan 10, 2025

Let me know if you have any comments.

Contributor

ebsmothers commented Jan 13, 2025

@insop sorry for the delayed response. Re "adding to the test plan": I think it's just referring to updating the PR summary to show your results. Let me quickly update it for you based on your previous comment. Also, where possible it's helpful to provide repro commands as part of the test plan to make it easier for others to verify. (Don't worry about it in this case; since your results are several weeks old now, there is no need to go and dig them up.) Otherwise it appears that your linter job is still failing; lmk if you need any help with this.

@codecov-commenter

Codecov Report

Attention: Patch coverage is 95.52239% with 6 lines in your changes missing coverage. Please review.

Project coverage is 67.04%. Comparing base (213f386) to head (a4c818c).
Report is 13 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| torchtune/training/_compile.py | 0.00% | 4 Missing ⚠️ |
| torchtune/modules/loss/kd_losses.py | 96.66% | 2 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2094      +/-   ##
==========================================
+ Coverage   65.41%   67.04%   +1.62%     
==========================================
  Files         344      352       +8     
  Lines       20658    20698      +40     
==========================================
+ Hits        13514    13877     +363     
+ Misses       7144     6821     -323     

☔ View full report in Codecov by Sentry.

Contributor Author

insop commented Jan 14, 2025

Thank you @ebsmothers, I have fixed the lint issue.

I used the default config files with argument overrides for my training; I will put the exact commands together for others to use shortly.
I will update the PR based on your comment about my config in a day or two.
