
[NEW] Vector database related support #950

Open
@indranilr

Description

The problem/use-case that the feature addresses

Enable Valkey to be used with LLM applications for semantic LLM response caching, semantic conversation caching, and LLM semantic routing.

Description of the feature

  • Introduce support for vector data types and similarity search queries (see the index/query sketch after this list)
    • Support the following index methods and engines (methods: HNSW, FLAT; engines: NMSLIB, Faiss)
  • Vector range search (e.g. find all vectors within a given radius of a query vector)
  • Support hybrid search (combined lexical and semantic search)
  • Document ranking using TF-IDF, with optional user-provided weights (sketched below)
  • Support for a JSON-based representation of vectors (example below)
  • APIs for an LLM semantic cache and chat session history management (session history example below)
  • Introduce a companion Python client library for using the vector database functionality from LLM chains, with integrations for LangChain, Haystack, and LlamaIndex
  • Provide default embedding models or allow custom embedding/re-ranking models, with the ability to integrate with HCP-hosted embedding/re-ranking models through configuration
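
To make the first few items concrete, here is a rough sketch of index creation, a KNN query, a range query, and a hybrid query from Python. The command syntax is borrowed from the RediSearch FT.CREATE / FT.SEARCH dialect purely as a reference point, and the valkey-py client usage is an assumption; none of this is an agreed Valkey API.

```python
# Illustrative only: the command names and arguments below follow the
# RediSearch FT.* dialect and are NOT an agreed Valkey API; the valkey-py
# client is assumed to expose redis-py's execute_command() interface.
import numpy as np
import valkey

client = valkey.Valkey(host="localhost", port=6379)

# Create an index over hash keys prefixed "doc:" with a text field and a
# 768-dimensional FLOAT32 vector field indexed with HNSW + cosine distance.
client.execute_command(
    "FT.CREATE", "doc_idx", "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA",
    "content", "TEXT",
    "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32", "DIM", "768", "DISTANCE_METRIC", "COSINE",
)

# Store one document; the vector is packed as a float32 byte blob.
vec = np.random.rand(768).astype(np.float32)
client.hset("doc:1", mapping={"content": "hello world", "embedding": vec.tobytes()})

query_vec = np.random.rand(768).astype(np.float32)

# KNN similarity query: top 5 nearest neighbours of the query vector.
knn = client.execute_command(
    "FT.SEARCH", "doc_idx", "*=>[KNN 5 @embedding $vec AS score]",
    "PARAMS", "2", "vec", query_vec.tobytes(),
    "SORTBY", "score", "DIALECT", "2",
)

# Vector range search: every document within a distance of 0.25.
in_range = client.execute_command(
    "FT.SEARCH", "doc_idx",
    "@embedding:[VECTOR_RANGE 0.25 $vec]=>{$YIELD_DISTANCE_AS: score}",
    "PARAMS", "2", "vec", query_vec.tobytes(), "DIALECT", "2",
)

# Hybrid search: lexical filter on the text field, then rank by similarity.
hybrid = client.execute_command(
    "FT.SEARCH", "doc_idx",
    "(@content:hello)=>[KNN 5 @embedding $vec AS score]",
    "PARAMS", "2", "vec", query_vec.tobytes(),
    "SORTBY", "score", "DIALECT", "2",
)
```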
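
Regarding TF-IDF ranking with optional weights, the scoring itself is compact. The helper below is illustrative only (the function name and weight format are not a proposed API); it shows one way user-supplied per-term weights could bias the standard tf × idf score.

```python
# Minimal sketch of TF-IDF document scoring with optional per-term weights.
import math
from collections import Counter

def tfidf_scores(query_terms, documents, term_weights=None):
    """Score each document (a list of tokens) against the query terms."""
    n_docs = len(documents)
    term_weights = term_weights or {}
    # Document frequency: how many documents contain each query term.
    df = {t: sum(1 for doc in documents if t in doc) for t in query_terms}
    scores = []
    for doc in documents:
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(n_docs / df[t])
            # User-provided weight defaults to 1.0 when absent.
            score += term_weights.get(t, 1.0) * tf[t] * idf
        scores.append(score)
    return scores

docs = [["valkey", "vector", "search"], ["valkey", "cache"], ["vector", "index", "hnsw"]]
print(tfidf_scores(["vector", "search"], docs, term_weights={"search": 2.0}))
```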
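
For the JSON-based representation, the idea is that a document stores its embedding as a plain JSON array alongside its other fields, and the index covers a JSON path such as $.embedding. The JSON.SET command below comes from a JSON module (e.g. valkey-json / RedisJSON); treating that path as an indexed vector field is the requested feature, not existing behaviour.

```python
import json
import valkey

client = valkey.Valkey(host="localhost", port=6379)

# A document whose embedding is a plain JSON array of floats
# (small dimension here only for readability).
doc = {
    "content": "hello world",
    "metadata": {"source": "docs", "lang": "en"},
    "embedding": [0.12, 0.34, 0.56, 0.78],
}

# JSON.SET is provided by a JSON module; indexing $.embedding as a vector
# field is part of this feature request, not existing behaviour.
client.execute_command("JSON.SET", "doc:json:1", "$", json.dumps(doc))
```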
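
Chat session history management can already be approximated with core list commands; the request, as I read it, is a higher-level API that wraps patterns like the one below (key naming is just an example) and adds semantic lookups over past turns.

```python
import json
import valkey

client = valkey.Valkey(host="localhost", port=6379)
session_key = "llm:session:42:history"  # example key naming, not a proposed convention

# Append conversation turns and cap the history length.
client.rpush(session_key, json.dumps({"role": "user", "content": "What is Valkey?"}))
client.rpush(session_key, json.dumps({"role": "assistant", "content": "An open source key/value store."}))
client.ltrim(session_key, -50, -1)   # keep only the most recent 50 turns
client.expire(session_key, 3600)     # expire idle sessions after an hour

# Reload the history when the conversation resumes.
history = [json.loads(m) for m in client.lrange(session_key, 0, -1)]
```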

Alternatives you've considered

Refer to the related discussion below.

https://github.com/orgs/valkey-io/discussions/371

Additional information

Consider a port of RedisVL: https://www.redisvl.com/index.html
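
For context on what such a port would cover, the heart of a semantic LLM cache is: embed the prompt, find a previously stored prompt whose embedding is within a similarity threshold, and return its cached response on a hit. The sketch below keeps everything client-side with a brute-force cosine comparison purely to show the flow; in the proposed feature the lookup would be a server-side vector similarity query, and embed() stands in for whatever embedding model is configured.

```python
# Conceptual sketch of a semantic cache flow (client-side, brute force);
# the real feature would back this with a server-side vector index.
import numpy as np

class NaiveSemanticCache:
    def __init__(self, embed, threshold=0.9):
        self.embed = embed            # callable: text -> 1-D numpy array
        self.threshold = threshold    # minimum cosine similarity for a hit
        self.entries = []             # list of (embedding, prompt, response)

    def check(self, prompt):
        q = self.embed(prompt)
        for emb, _, response in self.entries:
            sim = float(np.dot(q, emb) / (np.linalg.norm(q) * np.linalg.norm(emb)))
            if sim >= self.threshold:
                return response       # cache hit: reuse the earlier LLM answer
        return None                   # cache miss: caller invokes the LLM

    def store(self, prompt, response):
        self.entries.append((self.embed(prompt), prompt, response))
```

A server-backed version would replace the loop in check() with a single KNN query against a prompt-embedding index and store each response alongside its prompt with a TTL.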


Labels

client-changes-needed: Client changes may be required for this feature
