How much memory is needed for the GPU to run FinGPT-Forecaster? #149
Unanswered · jiahuiLeee asked this question in Q&A
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-chat-hf',
    token=access_token,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.float16,
    offload_folder="offload/"
)
model = PeftModel.from_pretrained(
    base_model,
    'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora',
    offload_folder="offload/"
)
model = model.eval()
```
When I run `python FinGPT/fingpt/FinGPT_Forecaster/app.py`, I get this error:
```
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.68it/s]
/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/transformers/utils/hub.py:373: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
WARNING:root:Some parameters are on the meta device device because they were offloaded to the disk and cpu.
Traceback (most recent call last):
  File "/home/ljh/Fin4LLM/FinGPT/fingpt/FinGPT_Forecaster/app.py", line 31, in <module>
    model = PeftModel.from_pretrained(
  File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/peft/peft_model.py", line 278, in from_pretrained
    model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)
  File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/peft/peft_model.py", line 587, in load_adapter
    dispatch_model(
  File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/accelerate/big_modeling.py", line 378, in dispatch_model
    offload_state_dict(offload_dir, disk_state_dict)
  File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/accelerate/utils/offload.py", line 98, in offload_state_dict
    index = offload_weight(parameter, name, save_dir, index=index)
  File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/accelerate/utils/offload.py", line 32, in offload_weight
    array = weight.cpu().numpy()
NotImplementedError: Cannot copy out of meta tensor; no data!
```
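For context on the memory question itself, the fp16 weights alone set a floor on the required VRAM. Here is a back-of-envelope sketch (my own assumption of ~7e9 parameters at 2 bytes each; activations, the KV cache, the LoRA adapter, and CUDA context all add overhead on top of this, so these are not official figures):

```python
# Rough floor on GPU memory for Llama-2-7b loaded with torch_dtype=torch.float16.
# Assumption: ~7e9 parameters, 2 bytes per parameter in fp16.
# Not counted: activations, KV cache, LoRA adapter weights, CUDA context.

def fp16_weight_gib(n_params: float) -> float:
    """Memory in GiB needed just to hold the weights in fp16 (2 bytes each)."""
    return n_params * 2 / 2**30

print(f"fp16 weights alone: {fp16_weight_gib(7e9):.1f} GiB")  # ~13.0 GiB
```

If the GPU has less than that, `device_map="auto"` spills layers to CPU/disk, leaving some parameters on the meta device (as the warning in the log above says), which appears to be what the subsequent PEFT dispatch trips over.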