Bias tensors #1259
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1259. Note: links to docs will display an error until the docs builds have completed.
✅ No failures as of commit bbea338 with merge base 766bee9. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Branch: GraniteCodeSupport Signed-off-by: Gabe Goodhart <[email protected]>
Thanks for the review/merge on #1255! This PR is now ready for review.
Current tests are run through .github/workflows, with some scripts in .ci (including a script that can be used to ensure that code in …). Or were you looking for "unit tests" of subcomponents with a Python driver? If you are looking for Python-level unit tests, I don't think we have any right now, but that doesn't mean we can't have any. If you want to make a proposal, you might discuss with @byjlw and @Jack-Khuu, and @lessw2020 for distributed inference.
Dependencies
This PR is part of a sequence in support of adding Granite Code. It depends on merging the following PRs:
Issues
Closes #1250
Description
This PR adds support for models that have bias tensors for the attention and FFN modules alongside the primary weight tensors.
Changes
- Include bias tensors in the `weight_map` in HF checkpoint conversion
- Merge `wqkv` tensors for bias as well as weights in HF checkpoint conversion
- Extend `TransformerArgs` to allow models to indicate the presence of `attention_bias` and `feed_forward_bias` tensors
- Set the `Attention` and `FeedForward` modules' tensors' `bias` arguments based on the config args
Testing
In conjunction with my other changes for Granite Code, I've been able to validate that this logic produces the expected token sequence.
NOTE: If there's any preferred way to include unit tests along with the PR, please let me know and I can get them added! I don't see a familiar unit test structure in the project at this point, so I've been relying on local ad-hoc testing.
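As a rough illustration of the `wqkv` merge described in the Changes above: HF checkpoints store separate q/k/v projection tensors, while the fused `wqkv` layout expects them concatenated, and with this PR the same concatenation applies to `.bias` as well as `.weight`. The helper name and state-dict keys below are hypothetical, not the actual conversion code in this PR.

```python
import torch

def merge_wqkv(state_dict, prefix, suffix):
    # Hypothetical helper: pop the separate q/k/v projection tensors and
    # concatenate them along the output dimension into one fused tensor.
    # Calling it once with suffix="weight" and once with suffix="bias"
    # mirrors the PR's point that biases need the same merge as weights.
    parts = [
        state_dict.pop(f"{prefix}.{name}.{suffix}")
        for name in ("q_proj", "k_proj", "v_proj")
    ]
    return torch.cat(parts, dim=0)

# Toy state dict with illustrative (not real) shapes and key names
sd = {
    "layer0.q_proj.weight": torch.randn(8, 16),
    "layer0.k_proj.weight": torch.randn(8, 16),
    "layer0.v_proj.weight": torch.randn(8, 16),
    "layer0.q_proj.bias": torch.randn(8),
    "layer0.k_proj.bias": torch.randn(8),
    "layer0.v_proj.bias": torch.randn(8),
}
wqkv_weight = merge_wqkv(sd, "layer0", "weight")
wqkv_bias = merge_wqkv(sd, "layer0", "bias")
print(wqkv_weight.shape, wqkv_bias.shape)  # torch.Size([24, 16]) torch.Size([24])
```

The real conversion works on actual HF key names and model dimensions; the sketch only shows that the bias merge is structurally identical to the weight merge.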
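The config-driven bias wiring described in the Changes could be sketched as follows. This is a minimal mock, not the real torchchat `TransformerArgs` or `FeedForward`: the field names `attention_bias` and `feed_forward_bias` come from the PR description, but the class bodies, dimensions, and layer names here are invented for illustration.

```python
from dataclasses import dataclass

import torch.nn as nn

@dataclass
class TransformerArgs:
    # Hypothetical subset of the model config; the two bias flags mirror
    # the ones this PR adds, defaulting to False so existing models
    # (which have no bias tensors) are unaffected.
    dim: int = 64
    hidden_dim: int = 128
    attention_bias: bool = False
    feed_forward_bias: bool = False

class FeedForward(nn.Module):
    def __init__(self, config: TransformerArgs):
        super().__init__()
        # bias= is driven by the config flag rather than hard-coded,
        # so checkpoints with bias tensors have parameters to load into.
        self.w1 = nn.Linear(config.dim, config.hidden_dim,
                            bias=config.feed_forward_bias)
        self.w2 = nn.Linear(config.hidden_dim, config.dim,
                            bias=config.feed_forward_bias)

cfg = TransformerArgs(feed_forward_bias=True)
ff = FeedForward(cfg)
print(ff.w1.bias is not None)  # True: bias parameters are created
```

An `Attention` module would gate its projection layers on `attention_bias` in the same way; with both flags left at their defaults, no bias parameters are created and existing checkpoints load unchanged.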