
Support for Ollama or Hugging Face models #52

@amoroccoire

Description

What would you like to see?

I would like to be able to configure the LLM and the embedding model to run locally; such models are generally available through Ollama or Hugging Face.
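
For illustration, here is a minimal sketch of what such a local setup could look like. It assumes an Ollama server running on its default port (11434) and uses Ollama's documented REST endpoints; the model names are only examples, and none of this reflects this project's actual API:

```python
# Minimal sketch of local chat + embeddings against an Ollama server.
# Assumptions: Ollama is running locally on its default port, and the
# example models ("llama3", "nomic-embed-text") have been pulled.
import requests

OLLAMA_URL = "http://localhost:11434"

def local_chat(prompt: str, model: str = "llama3") -> str:
    """Send a single-turn chat request to the local Ollama server."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

def local_embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Get an embedding vector from the local Ollama server."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": model, "prompt": text},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

if __name__ == "__main__":
    print(local_chat("Why run models locally?"))
    print(len(local_embed("local embeddings")))
```

A Hugging Face path (e.g. loading models with transformers or sentence-transformers) would serve the same purpose for fully offline deployments.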

Why is it useful? (optional)

It is useful when customer or company requirements dictate that everything must run locally.

What problem does this solve?

Using the cloud is not a problem in itself; it becomes one when the use case does not require it.

Examples or links? (optional)

https://github.com/HKUDS/LightRAG.git


Metadata

Labels: enhancement (New feature or request)
