Description
Hi,
When I try to run "FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.ipynb" in Google Colab, I came across a problem.
The code is:

    model_name = "THUDM/chatglm2-6b"
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModel.from_pretrained(
        model_name,
        quantization_config=q_config,
        trust_remote_code=True,
        device='cuda'
    )
and the error is:

    ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate`
    and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes`
    or `pip install bitsandbytes`

The next line in the notebook is:

    model = prepare_model_for_int8_training(model, use_gradient_checkpointing=True)
Lastly, I ran the code in Google Colab Pro, and I am sure both packages are installed.
Please help me solve this problem, thank you so much!
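For reference, here is a minimal sketch of how one could confirm what the Colab runtime actually sees (the package names are taken from the error message; note that after a `pip install` in Colab, a runtime restart is often needed before already-imported libraries pick up the new packages):

```python
# Sketch: check whether the packages named in the error message are
# visible in the *current* runtime (not just installed on disk).
import importlib.metadata


def installed_version(pkg: str):
    """Return the installed version string of `pkg`, or None if missing."""
    try:
        return importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        return None


for pkg in ("accelerate", "bitsandbytes"):
    print(pkg, installed_version(pkg) or "NOT visible in this runtime")
```

If both packages report a version here but the ImportError persists, restarting the runtime (Runtime → Restart runtime) and re-running the cells is a common fix in Colab.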