Releases: BerriAI/litellm
v1.69.0-stable
What's Changed
- [Docs] v1.69.0-stable by @ishaan-jaff in #10731
- Litellm emails smtp fixes by @ishaan-jaff in #10730
- [Docs] Email notifs by @ishaan-jaff in #10733
- Litellm staging 05 10 2025 - openai pdf url support + sagemaker chat content length error fix by @krrishdholakia in #10724
Full Changelog: v1.69.0-nightly...v1.69.0-stable
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.69.0-stable
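Once the container is up, you can smoke-test the proxy with a plain curl call. A minimal sketch, assuming you also set a master key (e.g. -e LITELLM_MASTER_KEY=sk-1234) and configured at least one model; the key and model name below are placeholders, not part of this release note:
# hypothetical smoke test against a locally running proxy
curl -s http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'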
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 240.0 | 264.33108534405653 | 6.12787888551344 | 0.0 | 1834 | 0 | 216.09041499999648 | 1326.1799069999824 |
Aggregated | Passed ✅ | 240.0 | 264.33108534405653 | 6.12787888551344 | 0.0 | 1834 | 0 | 216.09041499999648 | 1326.1799069999824 |
v1.69.0-nightly
What's Changed
- build: update model in test by @krrishdholakia in #10706
- fix: support for python 3.11- (re datetime UTC) (#10471) by @ishaan-jaff in #10701
- [FIX] Update token fields in schema.prisma to use BigInt for improved… by @husnain7766 in #10697
- [Refactor] Use pip package for enterprise/ folder by @ishaan-jaff in #10709
- [Feat] Add streaming support for using bedrock invoke models with /v1/messages by @ishaan-jaff in #10710
- Add `--version` flag to `litellm-proxy` CLI by @msabramo in #10704 (see the CLI sketch after this list)
- Add management client docs by @msabramo in #10703
- fix(caching_handler.py): fix embedding str caching result by @krrishdholakia in #10700
- Azure LLM: fix passing through of azure_ad_token_provider parameter by @claralp in #10694
- set correct context window length for all gemini 2.5 variants by @mollux in #10690
- Fix log table bugs (after filtering logic was added) by @NANDINI-star in #10712
- fix(router.py): write file to all deployments by @krrishdholakia in #10708
- Litellm Unified File ID output file id support by @krrishdholakia in #10713
- complete unified batch id support - replace model in jsonl to be deployment model name by @krrishdholakia in #10719
- [UI] Bug Fix - Allow Copying Request / Response on Logs Page by @ishaan-jaff in #10720
- [UI] QA Logs page - Fix bug where log did not remain in focus + text overflow on error logs by @ishaan-jaff in #10725
- Add target model name validation by @krrishdholakia in #10722
- [Bug fix] - allow using credentials for /moderations by @ishaan-jaff in #10723
- [DB] Add index for session_id on LiteLLM_SpendLogs by @ishaan-jaff in #10727
- [QA Bug fix] fix: ensure model info does not get overwritten when editing a model on UI by @ishaan-jaff in #10726
- Mutable default arguments on embeddings/completion headers parameters breaks watsonx by @terylt in #10728
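For the new `litellm-proxy` `--version` flag (#10704), usage should be as simple as the sketch below; the exact output format is an assumption, not taken from the release notes:
# print the installed litellm-proxy version (flag added in #10704)
litellm-proxy --version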
New Contributors
- @husnain7766 made their first contribution in #10697
- @claralp made their first contribution in #10694
- @mollux made their first contribution in #10690
- @terylt made their first contribution in #10728
Full Changelog: v1.68.2-nightly...v1.69.0-nightly
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.69.0-nightly
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 292.69430995024163 | 6.184694862389694 | 0.0 | 1849 | 0 | 216.9113210000262 | 60025.948276999996 |
Aggregated | Passed ✅ | 250.0 | 292.69430995024163 | 6.184694862389694 | 0.0 | 1849 | 0 | 216.9113210000262 | 60025.948276999996 |
v1.68.2.dev6
What's Changed
- build: update model in test by @krrishdholakia in #10706
- fix: support for python 3.11- (re datetime UTC) (#10471) by @ishaan-jaff in #10701
- [FIX] Update token fields in schema.prisma to use BigInt for improved… by @husnain7766 in #10697
New Contributors
- @husnain7766 made their first contribution in #10697
Full Changelog: v1.68.2-nightly...v1.68.2.dev6
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.68.2.dev6
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 190.0 | 210.63173736431506 | 6.257034907859717 | 0.0 | 1872 | 0 | 166.34112399992773 | 1685.74146200001 |
Aggregated | Passed ✅ | 190.0 | 210.63173736431506 | 6.257034907859717 | 0.0 | 1872 | 0 | 166.34112399992773 | 1685.74146200001 |
v1.68.2-nightly
What's Changed
- [Refactor - Filtering Spend Logs] Add `status` to root of SpendLogs table by @ishaan-jaff in #10661
- Filter logs on status and model by @NANDINI-star in #10670
- [Refactor] Anthropic /v1/messages endpoint - Refactor to use base llm http handler and transformations by @ishaan-jaff in #10677
- [Feat] Add support for using Bedrock Invoke models in /v1/messages format by @ishaan-jaff in #10681 (see the sketch after this list)
- fix(factory.py): Handle system only message to anthropic by @krrishdholakia in #10678
- Realtime API - Set 'headers' in scope for websocket auth requests + reliability fix infinite loop when model_name not found for realtime models by @krrishdholakia in #10679
- Extract 'thinking' from nova response + Add 'drop_params' support for gpt-image-1 by @krrishdholakia in #10680
- New azure models by @emerzon in #9956
- Add GPTLocalhost to "docs/my-website/docs/projects" by @GPTLocalhost in #10687
- Add nscale support for streaming by @tomukmatthews in #10698
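To illustrate the new /v1/messages support for Bedrock Invoke models (#10681), here is a hedged sketch of an Anthropic-format request through the proxy. The model alias "bedrock-claude" and the key are placeholders; use whatever you configured:
# hypothetical Anthropic-style /v1/messages call routed to a Bedrock invoke model
curl -s http://localhost:4000/v1/messages \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"model": "bedrock-claude", "max_tokens": 256, "messages": [{"role": "user", "content": "ping"}]}'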
New Contributors
- @GPTLocalhost made their first contribution in #10687
Full Changelog: v1.68.1.dev4...v1.68.2-nightly
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.68.2-nightly
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 190.0 | 223.07673508503882 | 6.209370359620187 | 0.0033419646714855688 | 1858 | 1 | 75.31227999999146 | 4978.849046000022 |
Aggregated | Passed ✅ | 190.0 | 223.07673508503882 | 6.209370359620187 | 0.0033419646714855688 | 1858 | 1 | 75.31227999999146 | 4978.849046000022 |
v1.68.1.dev4
What's Changed
- Contributor PR - Return 404 when `delete_verification_tokens` (`POST /key/delete`) fai… by @ishaan-jaff in #10605 (see the sketch after this list)
- Fix otel - follow genai semantic conventions + support 'instructions' param for tts by @krrishdholakia in #10608
- make openai model O series conditional accept provider/model by @aholmberg in #10591
- add gemini-2.5-pro-preview-05-06 model prices and context window by @marty-sullivan in #10597
- Fix: Ollama integration KeyError when using JSON response format by @aravindkarnam in #10611
- [Feat] V2 Emails - Fixes for sending emails when creating keys + Resend API support by @ishaan-jaff in #10602
- [Feat] Add User invitation emails when inviting users to litellm by @ishaan-jaff in #10615
- [Fix] SCIM - Creating SCIM tokens on Admin UI by @ishaan-jaff in #10628
- Filter on logs table by @NANDINI-star in #10644
- [Feat] Bedrock Guardrails - Add support for PII Masking with bedrock guardrails by @ishaan-jaff in #10642
- [Feat] Add endpoints to manage email settings by @ishaan-jaff in #10646
- Contributor PR - MCP Server DB Schema (#10634) by @ishaan-jaff in #10641
- Ollama - fix custom price cost tracking + add 'max_completion_token' support by @krrishdholakia in #10636
- fix cerebras llama-3.1-70b model_prices_and_context_window, not llama3.1-70b by @xsg22 in #10648
- Fix cache miss for gemini models with response_format by @casparhsws in #10635
- Add user management functionality to Python client library & CLI by @msabramo in #10627
- [BETA] Support unified file id (managed files) for batches by @krrishdholakia in #10650
- Fix Slack alerting not working if using a DB by @hypermoose in #10370
- Add support for Nscale (EU-Sovereign) Provider by @tomukmatthews in #10638
- Add New Perplexity Models by @keyute in #10652
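A hedged sketch of the /key/delete behavior referenced above (#10605); the request-body shape follows the proxy's key-management API as commonly documented, and the key values are placeholders:
# deleting a virtual key; per #10605 an unknown key should now return 404
curl -s -X POST http://localhost:4000/key/delete \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"keys": ["sk-old-key"]}'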
New Contributors
- @aholmberg made their first contribution in #10591
- @aravindkarnam made their first contribution in #10611
- @xsg22 made their first contribution in #10648
- @casparhsws made their first contribution in #10635
- @hypermoose made their first contribution in #10370
- @tomukmatthews made their first contribution in #10638
- @keyute made their first contribution in #10652
Full Changelog: v1.68.1-nightly...v1.68.1.dev4
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.68.1.dev4
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 190.0 | 233.10816080888745 | 6.241336822394705 | 0.0 | 1868 | 0 | 166.93079599997418 | 5406.457653000075 |
Aggregated | Passed ✅ | 190.0 | 233.10816080888745 | 6.241336822394705 | 0.0 | 1868 | 0 | 166.93079599997418 | 5406.457653000075 |
v1.68.1.dev2
Full Changelog: v1.68.1.dev1...v1.68.1.dev2
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.68.1.dev2
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 240.0 | 271.34034604220733 | 6.1752223755996924 | 0.0 | 1848 | 0 | 206.34432800000013 | 5012.736279000023 |
Aggregated | Passed ✅ | 240.0 | 271.34034604220733 | 6.1752223755996924 | 0.0 | 1848 | 0 | 206.34432800000013 | 5012.736279000023 |
v1.68.1.dev1
What's Changed
- Github: Increase timeout of litellm tests by @zoltan-ongithub in #10568
- [Docs] Change llama-api link for litellm by @seyeong-han in #10556
- [Feat] v2 Custom Logger API Endpoints by @ishaan-jaff in #10575
- [Bug fix] JSON logs - Ensure only 1 log is emitted (previously duplicate json logs were getting emitted) by @ishaan-jaff in #10580
- Update gemini-2.5-pro-exp-03-25 max_tokens to 65,535 by @mkavinkumar1 in #10548
- Update instructor.md by @thomelane in #10549
- fix issue when databrick use external model, the delta could be empty… by @frankzye in #10540
- Add `litellm-proxy` CLI (#10478) by @ishaan-jaff in #10578
New Contributors
- @zoltan-ongithub made their first contribution in #10568
- @mkavinkumar1 made their first contribution in #10548
- @thomelane made their first contribution in #10549
- @frankzye made their first contribution in #10540
Full Changelog: v1.68.0-nightly...v1.68.1.dev1
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.68.1.dev1
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 210.0 | 244.34719839029643 | 6.203411663807808 | 0.0 | 1855 | 0 | 183.31073700005618 | 5362.244745999988 |
Aggregated | Passed ✅ | 210.0 | 244.34719839029643 | 6.203411663807808 | 0.0 | 1855 | 0 | 183.31073700005618 | 5362.244745999988 |
v1.68.1-nightly
What's Changed
- Add bedrock llama4 pricing + handle llama4 templating on bedrock invoke route by @krrishdholakia in #10582
Full Changelog: v1.68.1.dev2...v1.68.1-nightly
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.68.1-nightly
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 210.0 | 234.26202141345593 | 6.161378945167915 | 0.0 | 1843 | 0 | 179.4365540000058 | 3332.6730800000064 |
Aggregated | Passed ✅ | 210.0 | 234.26202141345593 | 6.161378945167915 | 0.0 | 1843 | 0 | 179.4365540000058 | 3332.6730800000064 |
v1.68.0-nightly
What's Changed
- [Contributor PR] Support Llama-api as an LLM provider (#10451) by @ishaan-jaff in #10538
- UI - fix(model_management_endpoints.py): allow team admin to update model info + fix request logs - handle expanding other rows when existing row selected + fix(organization_endpoints.py): enable proxy admin with 'all-proxy-model' access to create new org with specific models by @krrishdholakia in #10539
- [Bug Fix] UnicodeDecodeError: 'charmap' on Windows during litellm import by @ishaan-jaff in #10542
- fix(converse_transformation.py): handle meta llama tool call response by @krrishdholakia in #10541
Full Changelog: v1.67.6.dev1...v1.68.0-nightly
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.68.0-nightly
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 180.0 | 210.99923315604772 | 6.1894793990457675 | 0.0 | 1852 | 0 | 166.69672900002297 | 3755.0343799999837 |
Aggregated | Passed ✅ | 180.0 | 210.99923315604772 | 6.1894793990457675 | 0.0 | 1852 | 0 | 166.69672900002297 | 3755.0343799999837 |
v1.68.0-stable
What's Changed
- Handle more gemini tool calling edge cases + support bedrock 'stable-image-core' by @krrishdholakia in #10351
- [Feat] Add logging callback support for /moderations API by @ishaan-jaff in #10390
- [Reliability fix] Redis transaction buffer - ensure all redis queues are periodically flushed by @ishaan-jaff in #10393
- [Bug Fix] Responses API - fix for handling multiturn responses API sessions by @ishaan-jaff in #10415
- build(deps): bump axios, @docusaurus/core, @docusaurus/plugin-google-gtag, @docusaurus/plugin-ideal-image and @docusaurus/preset-classic in /docs/my-website by @dependabot in #10419
- docs: Fix link formatting in GitHub PR template by @user202729 in #10417
- docs: Improve documentation of phoenix logging by @user202729 in #10416
- [Feat Security] - Allow blocking web crawlers by @ishaan-jaff in #10420
- [Feat] Add support for using Bedrock Knowledge Bases with LiteLLM /chat/completions requests by @ishaan-jaff in #10413
- Revert "build(deps): bump axios, @docusaurus/core, @docusaurus/plugin-google-gtag, @docusaurus/plugin-ideal-image and @docusaurus/preset-classic in /docs/my-website" by @ishaan-jaff in #10421
- fix google studio url by @nonZero in #10095
- [New model] Add openai/computer-use-preview cost tracking / pricing by @ishaan-jaff in #10422
- fix(langsmith.py): respect langsmith batch size param by @krrishdholakia in #10411
- Support `x-litellm-api-key` header param + allow key at max budget to call non-llm api endpoints by @krrishdholakia in #10392 (see the sketch after this list)
- Update fireworks ai pricing by @krrishdholakia in #10425
- Schedule budget resets at expectable times (#10331) by @krrishdholakia in #10333
- Embedding caching fixes - handle str -> list cache, set usage tokens for cache hits, combine usage tokens on partial cache hits by @krrishdholakia in #10424
- Contributor PR - Support OPENAI_BASE_URL in addition to OPENAI_API_BASE (#9995) by @ishaan-jaff in #10423
- New feature: Add Python client library for LiteLLM Proxy by @msabramo in #10445
- Add key-level multi-instance tpm/rpm/max parallel request limiting by @krrishdholakia in #10458
- [UI] Allow adding triton models on LiteLLM UI by @ishaan-jaff in #10456
- [Feat] Vector Stores/KnowledgeBases - Allow defining Vector Store Configs by @ishaan-jaff in #10448
- Add low-level interface to client library for doing HTTP requests by @msabramo in #10452
- Correctly re-raise 504 errors and Add `gpt-4o-mini-tts` support by @krrishdholakia in #10462
- UI - Fix filtering on key alias + support global sorting on keys by @krrishdholakia in #10455
- [Bug Fix] Ensure Non-Admin virtual keys can access /mcp routes by @ishaan-jaff in #10473
- [Fixes] Azure OpenAI OIDC - allow using litellm defined params for OIDC Auth by @ishaan-jaff in #10394
- Add supports_pdf_input: true to Claude 3.7 bedrock models by @RupertoM in #9917
- Add `llamafile` as a provider (#10203) by @peteski22 in #10482
- Fix mcp.md in documentation by @1995parham in #10493
- docs(realtime): yaml config example for realtime model by @kmontocam in #10489
- Fix return finish_reason = "tool_calls" for gemini tool calling by @krrishdholakia in #10485
- Add user + team based multi-instance rate limiting by @krrishdholakia in #10497
- mypy tweaks by @msabramo in #10490
- Add vertex ai meta llama 4 support + handle tool call result in content for vertex ai by @krrishdholakia in #10492
- Fix and rewrite of token_counter by @happyherp in #10409
- [Fix + Refactor] Trigger Soft Budget Webhooks When Key Crosses Threshold by @ishaan-jaff in #10491
- [Bug Fix] Ensure Web Search / File Search cost are only added when the response includes the too call by @ishaan-jaff in #10476
- Fixes for `test_team_budget_metrics` and `test_generate_and_update_key` by @S1LV3RJ1NX in #10500
- [Feat] KnowledgeBase/Vector Store - Log `StandardLoggingVectorStoreRequest` for requests made when a vector store is used by @ishaan-jaff in #10509
- Don't depend on uvloop on windows (#10060) by @ishaan-jaff in #10483
- fix: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. by @Elijas in #9372
- [Feat] Show Vector Store / KB Request on LiteLLM Logs Page by @ishaan-jaff in #10514
- Fix pytest event loop warning (#9641) by @msabramo in #10512
- UI - fix adding vertex models with reusable credentials + fix pagination on keys table + fix showing org budgets on table by @krrishdholakia in #10528
- Playwright test for team admin (#10366) by @krrishdholakia in #10470
- [QA] Bedrock Vector Stores Integration - Allow using with registry + in OpenAI API spec with tools by @ishaan-jaff in #10516
- UI - allow reassigning team to other org by @krrishdholakia in #10527
- [Models/ LLM Credentials] Fix edit credentials modal by @NANDINI-star in #10519
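For the new `x-litellm-api-key` header support (#10392), a request can pass the virtual key in that header instead of the Authorization header. A minimal sketch; the key and model name are placeholders:
# same call as with "Authorization: Bearer ...", using the new header
curl -s http://localhost:4000/v1/chat/completions \
  -H "x-litellm-api-key: sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'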
New Contributors
- @peteski22 made their first contribution in #10482
- @user202729 made their first contribution in #10417
- @nonZero made their first contribution in #10095
- @RupertoM made their first contribution in #9917
- @1995parham made their first contribution in #10493
- @kmontocam made their first contribution in #10489
- @happyherp made their first contribution in #10409
- @Elijas made their first contribution in #9372
Full Changelog: v1.67.4-stable...v1.68.0-stable