Can't set output location for model download #658

@keithachorn-intel

Description

I have a script that pulls a dataset and model for the Llama2-70b workload onto my server. After the user enters the HF token, the expectation is that the model downloads to a specified location, not merely to the cache. The dataset does this successfully via the 'outdirname' parameter, but the model does not. Based on past experience I attempted the following steps, but in every case the model remains only in the cache:

  • Setting the '--to' flag
  • Setting the '--outdirname' flag
  • Setting these environment variables: LLAMA2_CHECKPOINT_PATH and CM_ML_MODEL_PATH
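For reference, the attempts above amount to something like the following sketch. The paths are placeholders, and the exact script tags are an assumption on my part, not confirmed syntax:

```shell
# Placeholder target directory; the real script lives at
# mlcommons/cm4mlops/script/get-ml-model-llama2.
export LLAMA2_CHECKPOINT_PATH=/data/models/llama2-70b
export CM_ML_MODEL_PATH=/data/models/llama2-70b

# Attempted flag-based override (guarded so the sketch is harmless
# on machines without the cm CLI installed):
if command -v cm >/dev/null 2>&1; then
  cm run script --tags=get,ml-model,llama2 --outdirname=/data/models/llama2-70b
fi
```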

This is the script I'm trying to pull with: https://github.com/mlcommons/cm4mlops/tree/mlperf-inference/script/get-ml-model-llama2

Is there any way to direct the final output location?
