Is there any way to customize the price table without building my own image? #20823
The reason for this request is that I need to chain a self-hosted LiteLLM proxy to another LiteLLM proxy to indirectly access the LLM endpoint. However, the intermediate proxy uses customized model names. A simple idea I had was to use https://docs.litellm.ai/docs/observability/custom_callback to map the models in requests / responses based on a pre-configured table. I wanted to check if this approach is feasible before spending time on a POC. |
Replies: 2 comments
The custom callback approach is solid for this use case. Here is how I would structure it (note: custom callbacks subclass `CustomLogger` from `litellm.integrations.custom_logger`):

```python
from litellm.integrations.custom_logger import CustomLogger

MODEL_MAP = {
    "my-custom-gpt4": "gpt-4o",
    "my-custom-claude": "claude-3-5-sonnet-latest",
}

PRICE_OVERRIDE = {
    "my-custom-gpt4": {
        "input_cost_per_token": 0.000005,
        "output_cost_per_token": 0.000015,
    },
}

class ModelMappingCallback(CustomLogger):
    def log_pre_api_call(self, model, messages, kwargs):
        # Map the custom name to the real upstream model name
        if model in MODEL_MAP:
            kwargs["model"] = MODEL_MAP[model]
            kwargs["_original_model"] = model  # saved for cost calculation
        return kwargs

    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        # Recalculate cost using the custom price table
        original_model = kwargs.get("_original_model")
        if original_model and original_model in PRICE_OVERRIDE:
            usage = response_obj.usage
            prices = PRICE_OVERRIDE[original_model]
            response_obj._hidden_params["response_cost"] = (
                usage.prompt_tokens * prices["input_cost_per_token"]
                + usage.completion_tokens * prices["output_cost_per_token"]
            )
```

**Alternative: model alias in config.** If you control the upstream proxy, you could also define model aliases in its config:

```yaml
model_list:
  - model_name: my-custom-gpt4
    litellm_params:
      model: openai/gpt-4o
      custom_llm_provider: openai
```

We have run similar chained proxy setups at Revolution AI; the callback approach gives you the most flexibility for cost tracking across custom model names. Let me know if you want help with the POC!
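To wire a callback like this into the proxy, LiteLLM's docs show referencing a module-level callback instance from the proxy config. The filename `custom_callbacks.py` and the instance name below are assumptions for illustration:

```yaml
# Assumes the class above lives in custom_callbacks.py next to this
# config, with an instance created at module level:
#   proxy_handler_instance = ModelMappingCallback()
litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance
```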
Yes, you can customize pricing without building your own image!

**Option 1: model_list with custom pricing**

```yaml
model_list:
  - model_name: my-custom-gpt4
    litellm_params:
      model: openai/gpt-4
      api_base: https://upstream-proxy.com
    model_info:
      input_cost_per_token: 0.00003
      output_cost_per_token: 0.00006
```

**Option 2: Custom callback for price mapping**

**Option 3: Proxy config pricing override**

```yaml
litellm_settings:
  custom_pricing:
    my-custom-gpt4:
      input_cost_per_token: 0.00003
      output_cost_per_token: 0.00006
```

**For chained proxies:** We run chained LiteLLM proxies at Revolution AI; the model_list with model_info is the cleanest approach.
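As a quick sanity check on the per-token arithmetic, the Option 1 prices imply the following cost for one request (the token counts here are made up for illustration):

```python
# Per-token prices from the Option 1 example
input_cost_per_token = 0.00003
output_cost_per_token = 0.00006

# Hypothetical usage for a single request
prompt_tokens = 1_000
completion_tokens = 500

cost = (prompt_tokens * input_cost_per_token
        + completion_tokens * output_cost_per_token)
print(f"${cost:.4f}")  # 1000 * 0.00003 + 500 * 0.00006 = $0.0600
```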