Issues: pytorch/torchchat
Llama3.2 vision model AOTI integration
  #1497, opened Feb 21, 2025 by larryliu0820
  Labels: Compile / AOTI (Issues related to AOT Inductor and torch compile); enhancement (New feature or request); triaged (This issue has been looked at by a team member, triaged, and prioritized into an appropriate module)
Misaligned AOTI input; potential perf gains by fixing?
  #1424, opened Dec 14, 2024 by Jack-Khuu
  Labels: actionable (Items in the backlog waiting for an appropriate impl/fix); bug (Something isn't working); Compile / AOTI (Issues related to AOT Inductor and torch compile); triaged (This issue has been looked at by a team member, triaged, and prioritized into an appropriate module)
Eval fails on CUDA with AOTI exported model
  #1311, opened Oct 17, 2024 by mikekgfb
  Labels: bug (Something isn't working); Compile / AOTI (Issues related to AOT Inductor and torch compile); Evaluation/Benchmarking (Issues related to evaluation and benchmarking); triaged (This issue has been looked at by a team member, triaged, and prioritized into an appropriate module)
RuntimeError: CUDA error: named symbol not found
  #1298, opened Oct 14, 2024 by mikekgfb
  Labels: Compile / AOTI (Issues related to AOT Inductor and torch compile)
export to AOTI using cuda doesn't work using WSL
  #1293, opened Oct 10, 2024 by byjlw
  Labels: Compile / AOTI (Issues related to AOT Inductor and torch compile)