src/transformers/modeling_utils.py (+6 lines)
@@ -4327,6 +4327,12 @@ def from_pretrained(
"You cannot combine Quantization and loading a model from a GGUF file, try again by making sure you did not passed a `quantization_config` or that you did not load a quantized model from the Hub."