
Leverage the HF cache for models #992

Open
@byjlw

Description

🚀 The feature, motivation and pitch

torchchat currently downloads models through the HF Hub, which has its own model cache, and then copies them into torchchat's own model directory, so you end up with two copies of the same model.

We should leverage the HF Hub cache, but not force users to use that location if they're bringing their own models. A rough sketch of one possible approach is below.
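
A minimal sketch of what this could look like (not torchchat's actual code; the `resolve_model` helper and the `models_dir` layout are illustrative assumptions): `huggingface_hub.snapshot_download` already stores files in the shared HF cache and returns the snapshot path, so torchchat could reference that path directly instead of copying the weights, while still honoring a user-supplied model directory.

```python
# Sketch only: illustrative helper, not torchchat's actual implementation.
from pathlib import Path
from huggingface_hub import snapshot_download

def resolve_model(repo_id: str, models_dir: Path) -> Path:
    """Return a local path for repo_id, reusing the HF Hub cache when possible."""
    local = models_dir / repo_id.replace("/", "--")  # hypothetical user-managed layout
    if local.exists():
        # The user supplied their own copy of the model; don't touch the HF cache.
        return local
    # snapshot_download stores (or reuses) the files in the shared HF cache
    # (~/.cache/huggingface/hub by default, overridable via HF_HOME / HF_HUB_CACHE)
    # and returns the snapshot path, so no second copy of the weights is made.
    return Path(snapshot_download(repo_id=repo_id))
```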

Alternatives

No response

Additional context

From r/localllama
"One annoying thing is that it uses huggingface_hub for downloading but doesn't use the HF cache - it uses it's own .torchtune folder to store models so you just end up having double of full models (grr). Just use the defaul HF cache location.”

RFC (Optional)

No response

Metadata


Labels

actionable (Items in the backlog waiting for an appropriate impl/fix), enhancement (New feature or request)
