
Proposal: Add support for a local embedding model #5

@tieckit

Hi,

I’d like to contribute by adding support for serving a local embedding model via a vLLM container; a rough sketch of what that could look like follows the model list below.

Specifically, I’m thinking of adding support for small Korean embedding models that can run on low-spec GPUs, such as:

  • dragonkue/multilingual-e5-small-ko-v2

  • dragonkue/BGE-m3-ko
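
To make this concrete, here is a minimal sketch of how a served model could be queried once the vLLM container is up. It assumes vLLM's OpenAI-compatible server (the `vllm/vllm-openai` image) and a recent release where embedding serving is enabled with `--task embed`; the port, the API-key placeholder, and the `query:`/`passage:` prefixes (the usual E5-family convention) are my assumptions, not settled design.

```python
# Assumed server launch (flag names vary across vLLM versions):
#   docker run --gpus all -p 8000:8000 vllm/vllm-openai:latest \
#       --model dragonkue/multilingual-e5-small-ko-v2 --task embed
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; no real API key is
# needed for a local server, so a placeholder is passed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# E5-family models typically expect "query:" / "passage:" prefixes.
response = client.embeddings.create(
    model="dragonkue/multilingual-e5-small-ko-v2",
    input=["query: 안녕하세요", "passage: 로컬 임베딩 모델 테스트 문장입니다."],
)

for item in response.data:
    print(len(item.embedding))  # embedding dimensionality per input
```

Happy to adjust any of this (endpoint path, model naming, prefix handling) to whatever the project prefers.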

Would this be a welcome contribution? Please let me know if there are any guidelines or preferences I should follow.

Thanks!

Labels: enhancement (New feature or request)
