Issues: pytorch/torchchat
x86 CPU: BF16 should improve decoding performance relative to FP32 on x86, even without hardware BF16
Labels: actionable (items in the backlog waiting for an appropriate impl/fix), enhancement (new feature or request), performance
#1253 opened Oct 2, 2024 by swolchok

llama-3.2-11b-vision: size mismatch for encoder.clip.token_pos_embedding.global_token_positional_embedding
Labels: bug (something isn't working), Llama 3.2 - Multimodal (issues related to Multimodal Llama 3.2)
#1229 opened Sep 29, 2024 by openconcerto

Llama 3.2 MM Multiturn Browser: Second message errors out
Labels: Browser (UI components and behavior), bug, Llama 3.2 - Multimodal
#1224 opened Sep 28, 2024 by Jack-Khuu

CLI chat mode doesn't work on 11b model
Labels: Known Gaps (known gaps/issues/bugs in torchchat), Llama 3.2 - Multimodal
#1223 opened Sep 27, 2024 by Gasoonjia

A lot of duplicate code between generate.py and openai_api.py
#1214 opened Sep 26, 2024 by byjlw

Distributed inference runtime error
Labels: Distributed (all things distributed)
#1207 opened Sep 25, 2024 by guijuzhang

convert_hf_checkpoint relies only on model_name to resolve TransformerArgs
Labels: actionable, good first issue (good for newcomers)
#1179 opened Sep 23, 2024 by Jack-Khuu

Issue running on iOS
Labels: bug, Mobile - iOS (issues related to the iOS workflow)
#1167 opened Sep 20, 2024 by raghukiran1224

[distributed][perf] Ensure that all decoding ops happen on GPU with no CPU sync
Labels: Distributed, performance
#1147 opened Sep 15, 2024 by lessw2020

[Distributed] Did not find tokenizer at {tokenizer_path}
Labels: Distributed
#1146 opened Sep 14, 2024 by kwen2501

Failures when using a PyTorch local build vs. binaries
Labels: bug, enhancement
#1134 opened Sep 11, 2024 by angelayi

int4_weight_only in CUDA compile: RuntimeError: _apply(): Couldn't swap Linear.weight
Labels: bug, Compile / AOTI (AOT Inductor and torch.compile), Quantization (quantization or torchao)
#1125 opened Sep 10, 2024 by Jack-Khuu

Should be able to output debug artifacts when exporting to a .pte file
Labels: enhancement, ExecuTorch (ExecuTorch installation, export, or build; mobile uses separate tags)
#1101 opened Sep 3, 2024 by byjlw

Slow eval performance for .pte models
Labels: actionable, ExecuTorch, performance
#1066 opened Aug 27, 2024 by vmpuri