Issues: bitsandbytes-foundation/bitsandbytes

Issues list

Wrong result 8bit blockwise quantization over float16
Labels: bug, high priority, Low Risk, x64 CPU
#1540 · opened Feb 25, 2025 by ZiyueXu77 · milestone: v0.47.0
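A minimal round-trip sketch for this report, assuming the CPU backend supports blockwise quantization (the x64 CPU label suggests it does); `quantize_blockwise`/`dequantize_blockwise` are the public `bitsandbytes.functional` entry points:

```python
# Round-trip a float16 tensor through 8-bit blockwise quantization on CPU.
import torch
import bitsandbytes.functional as F

x = torch.randn(4096, dtype=torch.float16)       # CPU tensor, per the x64 CPU label
q, state = F.quantize_blockwise(x)               # packed uint8 values + QuantState
x_hat = F.dequantize_blockwise(q, state)         # should closely approximate x
print((x.float() - x_hat.float()).abs().max())   # a large error reproduces the report
```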
torch.version.cuda.split error
Labels: bug, high priority, ROCm
#1513 · opened Feb 13, 2025 by Lier0 · milestone: v0.46.0
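On ROCm builds `torch.version.cuda` is `None` (the version string lives in `torch.version.hip` instead), so an unconditional `.split(".")` raises `AttributeError`. A defensive parse, as a sketch:

```python
# torch.version.cuda is None under ROCm, so guard before splitting.
import torch

cuda_ver = torch.version.cuda          # None on ROCm -> None.split(".") would raise
if cuda_ver is not None:
    major, minor = map(int, cuda_ver.split(".")[:2])
    print("CUDA", major, minor)
else:
    print("ROCm build, HIP version:", torch.version.hip)
```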
Cannot load pre-quantized Janus Pro 7B
Labels: bug, medium priority, Medium risk
#1498 · opened Jan 30, 2025 by neilmehta24
LoRA + DeepSpeed ZeRO-3 finetuning using 8-bit quantization of base weights results in increased loss
Labels: bug, contributions-welcome
#1451 · opened Dec 12, 2024 by winglian
FSDP2 integration: torch.chunk(Params4bit) not returning Params4bit subclass
Labels: bug, FSDP, help wanted, high priority
#1424 · opened Nov 21, 2024 by mreso
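A quick way to observe the reported behavior, sketched; constructing `Params4bit` directly from a plain tensor is an assumption about its constructor:

```python
# torch.chunk on a Parameter subclass can return plain Tensors, dropping the
# Params4bit type (and its quantization metadata) that FSDP2 sharding relies on.
import torch
import bitsandbytes as bnb

p = bnb.nn.Params4bit(torch.randn(8, 8))   # assumed: wraps an unquantized tensor
chunks = torch.chunk(p, 2, dim=0)
print(type(p).__name__, [type(c).__name__ for c in chunks])
# Bug present if this prints: Params4bit ['Tensor', 'Tensor']
```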
AdEMA NaN when loading from state_dict
Labels: bug, Optimizers
#1382 · opened Oct 2, 2024 by darius-lam
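A hedged checkpoint-and-restore sketch; `bnb.optim.AdEMAMix` is an assumption based on the title (bitsandbytes ships an AdEMAMix optimizer):

```python
# Step once, checkpoint the optimizer, restore into a fresh instance, step again,
# then check whether any parameter went NaN after the restore.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(16, 16).cuda()
opt = bnb.optim.AdEMAMix(model.parameters(), lr=1e-3)

model(torch.randn(4, 16, device="cuda")).sum().backward()
opt.step()

state = opt.state_dict()                          # checkpoint
opt2 = bnb.optim.AdEMAMix(model.parameters(), lr=1e-3)
opt2.load_state_dict(state)                       # restore

opt2.zero_grad()
model(torch.randn(4, 16, device="cuda")).sum().backward()
opt2.step()
print(any(p.isnan().any().item() for p in model.parameters()))  # True reproduces the bug
```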
Bug when using optimizer LAMB 32bits
Labels: bug, contributions-welcome, low priority, Optimizers
#1350 · opened Sep 5, 2024 by FrsECM
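A minimal driver for the 32-bit LAMB path, as a sketch; `optim_bits=32` selects full-precision optimizer state in `bnb.optim.LAMB`:

```python
# Exercise the 32-bit LAMB code path that the report concerns.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(32, 32).cuda()
opt = bnb.optim.LAMB(model.parameters(), lr=1e-3, optim_bits=32)  # 32-bit state

model(torch.randn(8, 32, device="cuda")).sum().backward()
opt.step()   # the failure described in the issue occurs around this path
```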
quantize_4bit/dequantize_4bit gives wrong output on non-contiguous tensor
Labels: bug, low priority, question
#1342 · opened Aug 30, 2024 by chenqianfzh
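A sketch comparing a non-contiguous view against its contiguous copy through the 4-bit round trip; any mismatch would reproduce the report:

```python
# A transposed view is non-contiguous; its round trip should match the
# round trip of a contiguous copy holding the same values.
import torch
import bitsandbytes.functional as F

base = torch.randn(64, 64, device="cuda", dtype=torch.float16)
nc = base.t()                                   # non-contiguous view

q1, s1 = F.quantize_4bit(nc)                    # non-contiguous input
q2, s2 = F.quantize_4bit(nc.contiguous())       # contiguous copy, same values
d1 = F.dequantize_4bit(q1, s1)
d2 = F.dequantize_4bit(q2, s2)
print(torch.equal(d1, d2))                      # False reproduces the bug
```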
Linear8bitLt cannot be moved back to CPU
Labels: bug, low priority
#1332 · opened Aug 24, 2024 by Nerogar
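The round trip in question, sketched with the public `Linear8bitLt` module:

```python
# Moving to CUDA quantizes the weights to int8; the report is that the
# reverse move back to CPU fails or leaves the module in a broken state.
import bitsandbytes as bnb

layer = bnb.nn.Linear8bitLt(16, 16, has_fp16_weights=False)
layer = layer.cuda()         # int8 quantization happens on this transfer
layer = layer.cpu()          # reported to fail / misbehave
print(layer.weight.device)
```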
out kwarg in matmul_4bit() is not working
Labels: bug, contributions-welcome
#1235 · opened Jun 1, 2024 by chenqianfzh
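A sketch of the `out=` usage in question, following the `weight.t()` convention that `Linear4bit` uses internally; whether `out` is actually written to is the point of the report:

```python
# Pass a preallocated output buffer and check whether the result landed in it.
import torch
import bitsandbytes as bnb
import bitsandbytes.functional as F

A = torch.randn(1, 64, device="cuda", dtype=torch.float16)
W = torch.randn(64, 64, device="cuda", dtype=torch.float16)
qW, state = F.quantize_4bit(W)                   # packed 4-bit weight + state

out = torch.empty(1, 64, device="cuda", dtype=torch.float16)
res = bnb.matmul_4bit(A, qW.t(), quant_state=state, out=out)
print(res.data_ptr() == out.data_ptr())          # False means out= was ignored
```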
Typo? I think you mean __getitem__()
Labels: bug, contributions-welcome, Low Risk
#1138 · opened Mar 19, 2024 by BruceDai003
Saving a 4-bit Llama model to TorchScript fails
Labels: bug, huggingface-related, medium priority
#1009 · opened Feb 1, 2024 by dcy0577