There is currently a bug in the `patch_compiling_bitsandbytes()` function that prevents models from loading properly for LoRA training. Below is the working fix I tested on my fork, which allowed training to start:
```python
# Also disable compiling on bitsandbytes
def patch_compiling_bitsandbytes():
    # All Unsloth Zoo code licensed under LGPLv3
    os.environ["UNSLOTH_PATCHED"] = "1"
    import bitsandbytes
    if Version(bitsandbytes.__version__) >= Version("0.46.0"):
        if os.environ.get("UNSLOTH_ENABLE_LOGGING", "0") == "1":
            print("Unsloth: Bitsandbytes >= 0.46.0 supports torch.compile - enabling.")
    else:
        # Disable dynamo on Linear4bit, Linear8bit and other future modules
        if os.environ.get("UNSLOTH_ENABLE_LOGGING", "0") == "1":
            print("Unsloth: Bitsandbytes < 0.46.0 does not support torch.compile - disabling.")
        # FIX: Create a namespace dict to store imports
        namespace = {}
        for x in ["bitsandbytes.nn.modules", "peft.tuners.lora.bnb",]:
            try:
                # Import the module into namespace
                exec(f"import {x}", namespace)
                # Get the module from namespace
                module = eval(x, namespace)
                layers = dir(module)
                for fx in layers:
                    try:
                        layer = getattr(module, fx)
                    except:
                        continue
                    if not hasattr(layer, "forward"):
                        continue
                    if hasattr(layer.forward, "__wrapped__"):
                        continue
                    # Disable dynamo on the forward method
                    layer.forward = torch._disable_dynamo(layer.forward)
            except ImportError:
                # Raise a helpful message if peft is not installed
                if "peft" in x:
                    raise ImportError("Unsloth: Please install peft via `pip install peft`")
                continue
    return
pass
```
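The core of the fix is passing an explicit namespace dict to `exec()` so the imported module can later be retrieved with `eval()`, rather than relying on the enclosing function's scope. Here is a minimal, self-contained sketch of that pattern (the `dynamic_import` helper is hypothetical, for illustration only — it is not part of Unsloth):

```python
def dynamic_import(dotted_name):
    """Import a dotted module path and return the module object.

    exec() binds the top-level package name into the supplied dict,
    and eval() then resolves the full dotted path against that dict.
    """
    namespace = {}
    exec(f"import {dotted_name}", namespace)  # binds e.g. "json" into namespace
    return eval(dotted_name, namespace)       # resolves e.g. "json.decoder"

mod = dynamic_import("json.decoder")
print(mod.__name__)  # json.decoder
```

Note that the standard-library equivalent of this pattern is `importlib.import_module(dotted_name)`, which avoids `exec`/`eval` entirely; the namespace-dict version above just mirrors the structure of the patched Unsloth code.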