Check for existing issues
What happened?
When sending a request to a multipart/form-data endpoint (e.g. `POST /v1/images/edits`) with a `user_config` form field whose value is a JSON string, the proxy does not JSON-decode the field. As a result `user_config` arrives downstream as a `str` rather than a `dict`, which then, depending on the exact request path, either:

- crashes with `litellm.router.Router() argument after ** must be a mapping, not str` at `route_llm_request.py:200` (`Router(**user_config)`), or
- gets silently dropped during request setup, after which `route_request` falls through to the catch-all and raises `ProxyModelNotFoundError("Invalid model name passed in model=...")` at `route_llm_request.py:324`.
The same `user_config` payload works correctly when sent on a JSON-bodied endpoint (e.g. `POST /v1/images/generations`), because `_read_request_body` parses the entire JSON body and `user_config` arrives as a `dict`.
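For context, this is stock Starlette/FastAPI behavior rather than anything LiteLLM-specific: every non-file multipart field comes back from `request.form()` as a `str`, so any JSON-valued field has to be decoded explicitly by the server. A minimal sketch (the `/echo` route and field handling here are illustrative, not proxy code):

```python
from fastapi import FastAPI, Request

app = FastAPI()  # form parsing also requires the python-multipart package

@app.post("/echo")
async def echo(request: Request):
    form = dict(await request.form())
    # form["user_config"] is still the raw string '{"model_list": [...]}';
    # nothing decodes it unless the handler calls json.loads() itself
    return {"user_config_type": type(form.get("user_config")).__name__}
```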
Root cause: `litellm/proxy/common_utils/http_parsing_utils.py` (lines 40-43), in the multipart branch of `_read_request_body`:
if "form" in content_type:
parsed_body = dict(await request.form())
if "metadata" in parsed_body and isinstance(parsed_body["metadata"], str):
parsed_body["metadata"] = json.loads(parsed_body["metadata"])
Only `metadata` gets JSON-decoded. `user_config` (and a top-level `tags` array, if sent) stays a string.
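One possible shape for a fix (a sketch only, not a proposed patch; the exact field list is an assumption) is to decode every known JSON-carrying form field instead of special-casing `metadata`:

```python
# hypothetical: form fields the proxy should treat as serialized JSON
_JSON_FORM_FIELDS = ("metadata", "user_config", "tags")

if "form" in content_type:
    parsed_body = dict(await request.form())
    for field in _JSON_FORM_FIELDS:
        value = parsed_body.get(field)
        if isinstance(value, str):
            try:
                parsed_body[field] = json.loads(value)
            except json.JSONDecodeError:
                pass  # leave non-JSON strings untouched
```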
Impact: proxies that rely on per-request `user_config` for routing (no static `model_list`) cannot use multipart endpoints at all. Image edits, audio transcriptions, and any future multipart endpoint inherit this gap.
Steps to Reproduce
1. Minimal proxy config (`config.yaml`): empty `model_list`, so routing depends entirely on `user_config`:
```yaml
litellm_settings:
  drop_params: true

# intentionally empty — per-request user_config provides the model_list
model_list: []
```
2. Start the proxy:
```shell
litellm --config config.yaml --port 4000
# (or via Docker: ghcr.io/berriai/litellm:v1.81.0)
```
3. Create a 1×1 PNG so the request is well-formed:
```shell
python -c "import base64,sys; sys.stdout.buffer.write(base64.b64decode('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkAAIAAAoAAv/lxKUAAAAASUVORK5CYII='))" > tiny.png
```
4. Send a multipart `POST /v1/images/edits` with `user_config` as a JSON-string form field, the same shape that works on `/v1/images/generations` over JSON (a Python equivalent is sketched after these steps):
```shell
curl -X POST http://localhost:4000/v1/images/edits \
  -H "Authorization: Bearer sk-1234" \
  -F "model=openai/gpt-image-1" \
  -F "prompt=test" \
  -F "image=@tiny.png" \
  -F 'user_config={"model_list":[{"model_name":"openai/gpt-image-1","litellm_params":{"model":"openai/gpt-image-1","api_key":"sk-fake"}}]}'
```
5. Observe the failure. Depending on the path, one of:
   - `500 Internal Server Error` with `litellm.router.Router() argument after ** must be a mapping, not str`, or
   - `400 Bad Request` with `/images/edits: Invalid model name passed in model=openai/gpt-image-1. Call /v1/models to view available models for your key.`
6. Sanity check that the same `user_config` works on a JSON endpoint: send the equivalent to `/v1/images/generations` as JSON and it routes correctly:
```shell
curl -X POST http://localhost:4000/v1/images/generations \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-image-1",
    "prompt": "test",
    "user_config": {"model_list":[{"model_name":"openai/gpt-image-1","litellm_params":{"model":"openai/gpt-image-1","api_key":"sk-fake"}}]}
  }'
```
This call gets past routing (it would only fail later at the OpenAI call because of the fake key), proving the same `user_config` payload is valid; it simply doesn't survive the multipart body parser.
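For reference, here is the step-4 request expressed with the `requests` library instead of curl (same proxy URL, key, and payload as above; `requests` is only a client-side convenience, not something the proxy needs):

```python
import json
import requests

user_config = {
    "model_list": [
        {
            "model_name": "openai/gpt-image-1",
            "litellm_params": {"model": "openai/gpt-image-1", "api_key": "sk-fake"},
        }
    ]
}

with open("tiny.png", "rb") as img:
    resp = requests.post(
        "http://localhost:4000/v1/images/edits",
        headers={"Authorization": "Bearer sk-1234"},
        data={
            "model": "openai/gpt-image-1",
            "prompt": "test",
            # multipart fields can only carry strings, so the client must
            # serialize user_config; only the server can decode it again
            "user_config": json.dumps(user_config),
        },
        files={"image": ("tiny.png", img, "image/png")},
    )
print(resp.status_code, resp.text)
```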
Relevant log output
```
ERROR: common_request_processing.py:912 - litellm.proxy.proxy_server._handle_llm_api_exception(): Exception occured - litellm.router.Router() argument after ** must be a mapping, not str
Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/litellm/proxy/image_endpoints/endpoints.py", line 288, in image_edit_api
    return await processor.base_process_llm_request(...)
  File "/usr/lib/python3.13/site-packages/litellm/proxy/common_request_processing.py", line 667, in base_process_llm_request
    llm_call = await route_request(...)
  File "/usr/lib/python3.13/site-packages/litellm/proxy/route_llm_request.py", line 200, in route_request
    user_router = litellm.Router(**router_config)
TypeError: litellm.router.Router() argument after ** must be a mapping, not str
```
The `router_config` at line 200 is the literal string `'{"model_list":[...]}'`: `_read_request_body` returned the form field as a string and `route_request` splatted it into `Router()`.
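The failure also reproduces in isolation, no proxy needed; `fake_router` below is just a stand-in for `litellm.Router`:

```python
def fake_router(**kwargs):  # stand-in for litellm.Router
    return kwargs

fake_router(**{"model_list": []})  # fine: a dict is a mapping
try:
    fake_router(**'{"model_list": []}')  # the raw multipart form-field value
except TypeError as e:
    print(e)  # fake_router() argument after ** must be a mapping, not str
```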
**Actual response (500):**

```json
{"error":{"message":"litellm.router.Router() argument after ** must be a mapping, not str","type":"None","param":"None","code":"500"}}
```
What part of LiteLLM is this about?
Proxy
What LiteLLM version are you on ?
v1.81.0
Twitter / LinkedIn details
No response