I have a script which pulls a dataset and a model for the Llama2-70b workload to my server. After the user enters the HF token, the model is expected to download to a specified location, not merely to the cache. The dataset does this successfully by setting an 'outdirname' parameter, but the model does not. Based on past experience I attempted the following steps, but in all cases the model remains only in the cache (invocations sketched after the list):
- Setting the '--to' flag
- Setting the '--outdirname' flag
- Setting the environment variables LLAMA2_CHECKPOINT_PATH and CM_ML_MODEL_PATH
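
For reference, the invocations I tried look roughly like this (the `--tags` values are my guess based on the script's directory name, and the target path is just an example):

```bash
# Flag attempts -- in both cases the checkpoint only ends up in the cache:
cm run script --tags=get,ml-model,llama2 --to=/data/models/llama2-70b
cm run script --tags=get,ml-model,llama2 --outdirname=/data/models/llama2-70b

# Environment-variable attempt -- same result:
LLAMA2_CHECKPOINT_PATH=/data/models/llama2-70b \
CM_ML_MODEL_PATH=/data/models/llama2-70b \
  cm run script --tags=get,ml-model,llama2
```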
This is the script I'm using to pull the model: https://github.com/mlcommons/cm4mlops/tree/mlperf-inference/script/get-ml-model-llama2
Is there any way to direct the final output location?