I run a few "local" models using a few different tools that can present an OpenAI API-compatible interface. Sometimes I like to ask several different models the same question and compare the results. That works great when you're using a single backend that accepts and respects the passed-in model name, but not when you want to run the same or similar queries against two different backends - say I want to compare my local CodeLlama model's answer to GPT-4's, or move between them to hone the final result.
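For context, here's roughly what targeting a single OpenAI-compatible backend looks like today with the standard `openai` Python client (the base URL, API key, and model name below are just examples, not anything this project defines):

```python
from openai import OpenAI

# Point the standard client at a local OpenAI-compatible server.
# URL, API key, and model name are illustrative only.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="codellama",  # works as long as the backend respects this name
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)
print(response.choices[0].message.content)
```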
It seems possible to support this by enhancing the "model" abstraction - say, by allowing the value of the `--model` param to include an API URL.
This would be useful to me, but I'm not sure whether it's too much of an edge case to justify the added complexity.
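To make the idea concrete, here's a rough sketch of how such a value might be parsed - the `url#model` syntax and the `parse_model_spec` helper are purely hypothetical, just one possible shape for the feature:

```python
from dataclasses import dataclass

@dataclass
class ModelSpec:
    api_base: str | None  # None means "use the default backend"
    model: str

def parse_model_spec(value: str) -> ModelSpec:
    """Split a hypothetical --model value such as
    'http://localhost:8080/v1#codellama' into an API base URL
    and a model name; plain model names pass through unchanged."""
    if value.startswith(("http://", "https://")) and "#" in value:
        api_base, _, model = value.rpartition("#")
        return ModelSpec(api_base=api_base, model=model)
    return ModelSpec(api_base=None, model=value)

# parse_model_spec("http://localhost:8080/v1#codellama")
#   -> ModelSpec(api_base="http://localhost:8080/v1", model="codellama")
# parse_model_spec("gpt-4")
#   -> ModelSpec(api_base=None, model="gpt-4")
```

Anything shaped like this would let the same CLI invocation swap between a local backend and the hosted API just by changing the `--model` value.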