PraisonAI + LiteLLM / Ollama / Azure — what's your setup? #1387
Dhivya-Bharathy started this conversation in General
PraisonAI agents often sit behind LiteLLM for model routing, Ollama for local inference, or Azure OpenAI for hosted enterprise workloads. The "best" combination depends heavily on your constraints — cost ceiling, latency requirements, data-residency rules, team size, and whether you need the models to run on your own hardware.
There's no single right answer, so let's compare notes.
Reply with your setup and the reasoning behind it. The goal is a practical reference other builders can search later, not a hype thread.
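To make the comparison concrete, here is a minimal sketch of how the three backends differ from a configuration standpoint. LiteLLM addresses providers with prefixed model strings (e.g. `ollama/...`, `azure/...`), so switching backends is often just a string swap. The helper function and model names below are hypothetical examples for illustration, not part of PraisonAI or a recommendation:

```python
def pick_model(local_only: bool, enterprise: bool) -> str:
    """Return a LiteLLM-style model string matching the given constraints.

    Hypothetical helper; the deployment names are placeholders.
    """
    if local_only:
        # Ollama runs inference on your own hardware, so data stays local.
        return "ollama/llama3"
    if enterprise:
        # Azure OpenAI deployments are addressed as azure/<deployment-name>.
        return "azure/my-gpt4o-deployment"
    # Otherwise, route to a hosted provider through LiteLLM's default naming.
    return "gpt-4o-mini"

print(pick_model(local_only=True, enterprise=False))
print(pick_model(local_only=False, enterprise=True))
```

The point is that the routing decision (cost, residency, latency) can live in one place while the agent code stays unchanged.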
Repo: MervinPraison/PraisonAI