Hi team,
I’m exploring the use of LiteLLM in a GenAI application that requires text-to-video generation from models like:
• Azure OpenAI Sora
• Amazon Nova Reel (via AWS Bedrock)
These models generate short videos based on natural language prompts.
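For context, here is roughly what driving one of these models directly looks like today. This is a minimal sketch based on my reading of the Bedrock async-invoke flow for Nova Reel; the model ID, bucket, region, and prompt are placeholders and the exact request shape may differ:

```python
import time
import boto3

# Bedrock video models run as async jobs that write their output to S3.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

job = client.start_async_invoke(
    modelId="amazon.nova-reel-v1:0",  # placeholder model ID
    modelInput={
        "taskType": "TEXT_VIDEO",
        "textToVideoParams": {"text": "A drone shot over a foggy coastline"},
        "videoGenerationConfig": {"durationSeconds": 6, "fps": 24, "dimension": "1280x720"},
    },
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-video-bucket/outputs/"}},
)

# Poll until the job finishes, then fetch the video from S3 ourselves.
while True:
    status = client.get_async_invoke(invocationArn=job["invocationArn"])
    if status["status"] != "InProgress":
        break
    time.sleep(10)

print(status["status"], status.get("outputDataConfig"))
```

None of this maps onto a chat/completions-style request/response, which is why I'm unsure how it would fit LiteLLM's current abstractions.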
I understand that LiteLLM currently supports chat/completions/image models, but I couldn’t find documentation or examples for video generation models.
My questions:
1. Is there any current way to use video generation models through LiteLLM (e.g., via custom proxying or non-chat model support)? A rough sketch of the kind of thing I have in mind follows this list.
2. If not, are there any recommended alternatives or workarounds to use these models alongside LiteLLM?
3. Is support for non-chat media generation models like video on the roadmap?
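To make question 1 concrete, the stopgap I was imagining is registering a custom handler so video models at least route through LiteLLM's interface. This is purely illustrative and probably an abuse of the completion contract; I don't know whether CustomLLM is intended for non-chat outputs, and the Bedrock call inside is the same placeholder flow as the sketch above:

```python
import litellm
from litellm import CustomLLM


class BedrockVideoLLM(CustomLLM):
    """Hack: expose a text-to-video model behind LiteLLM's completion() interface."""

    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # In a real handler: pull the prompt out of the incoming messages,
        # call start_async_invoke (see the boto3 sketch above), and return a
        # reference to the resulting video.
        invocation_arn = "arn:aws:bedrock:us-east-1:000000000000:async-invoke/placeholder"
        # Smuggling the job reference back as assistant text is clearly a hack;
        # a chat-style response has no structured "video" field.
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "placeholder"}],
            mock_response=f"video job started: {invocation_arn}",
        )


litellm.custom_provider_map = [
    {"provider": "bedrock-video", "custom_handler": BedrockVideoLLM()}
]

resp = litellm.completion(
    model="bedrock-video/nova-reel",
    messages=[{"role": "user", "content": "A drone shot over a foggy coastline"}],
)
print(resp.choices[0].message.content)
```

If there's a more idiomatic way to do this (or a pass-through mechanism better suited to async, non-chat outputs), that would effectively answer questions 1 and 2.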
I’m trying to avoid building separate handling for these APIs if LiteLLM can route or abstract them. Any guidance would be appreciated!