I want to train llama-13b in the SFT stage. 7B trains fine on 8×24 GB GPUs (3090), but 13B goes OOM. I have tried all of the memory-reduction options in DeepSpeed-Chat, and every run still OOMs.
I want to try passing `load_in_8bit=True` when loading the model, but that raises an error.
How should I modify the code? A sketch of what I tried is below.
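For reference, this is roughly the 8-bit load I attempted, outside of DeepSpeed-Chat's own model loader (a minimal sketch, assuming `transformers` with `bitsandbytes` and `accelerate` installed; `path/to/llama-13b-hf` is a placeholder for my local checkpoint, not a real path):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-13b-hf"  # placeholder for the local 13B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,   # quantize weights to int8 via bitsandbytes
    device_map="auto",   # transformers requires a device_map for 8-bit loading
)
```

My guess is that the int8-quantized weights produced by bitsandbytes are not compatible with DeepSpeed ZeRO parameter sharding, but I am not sure whether that is the actual cause of the error.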