
Create a LiteLLM layer for handling requests to different inference backends #9

@debabratamishra

Description

Currently the application supports only the Ollama backend. Instead, calls should be routed through LiteLLM, which will enable interaction with multiple inference backends.
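
A minimal sketch of what such a routing layer could look like, assuming the `litellm` Python package; the model names, the `api_base` URL, and the `chat` helper are illustrative, not the final design:

```python
# Minimal sketch of a LiteLLM routing layer. Assumes `pip install litellm`;
# the model names and api_base URL below are illustrative.
from litellm import completion

def chat(model: str, prompt: str, **kwargs) -> str:
    """Route a chat request through LiteLLM.

    LiteLLM dispatches on the provider prefix in the model string,
    e.g. "ollama/llama3" for a local Ollama server or
    "openai/gpt-4o-mini" for the OpenAI API.
    """
    response = completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        **kwargs,
    )
    return response.choices[0].message.content

# The existing Ollama behaviour, expressed through LiteLLM:
print(chat("ollama/llama3", "Hello!", api_base="http://localhost:11434"))

# The same call path then works unchanged for other backends:
# print(chat("openai/gpt-4o-mini", "Hello!"))
```

Because LiteLLM normalizes every backend to the OpenAI response format, the rest of the application would only ever see one response shape regardless of which backend served the request.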
Important resources to look at:

Metadata

Labels

enhancement (New feature or request)
