Question about the temperature parameter in the Hugging Face Inference API #2880
Unanswered · xufengduan asked this question in Q&A
Hi everyone,
I have a question regarding the temperature parameter in the Hugging Face Inference API, particularly in the context of chat models. According to the documentation, the default value for temperature is 1. However, I noticed that some models seem to have a different default, such as 0.6, as specified in their generation_config.json file.
Here are my questions (a sketch of the kind of call I mean follows the list):
1. When using the Inference API, if I don't explicitly set the temperature parameter, does the API always use the model's default value from its generation_config.json? Or does it fall back to the global default of 1 mentioned in the docs?
2. If I don't pass any additional parameters (such as max_length or top_p), does the API automatically use all the defaults specified in the model's generation_config.json? Or are there other fallback defaults on the API side?
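For concreteness, here is a minimal sketch of the kind of call I am asking about, using huggingface_hub's InferenceClient. The model ID is just an example (gated models also need an access token):

```python
from huggingface_hub import InferenceClient

# Example model ID only -- substitute any chat model on the Hub.
client = InferenceClient(model="meta-llama/Llama-3.1-8B-Instruct")

# Case 1: no temperature passed -- which default applies here,
# the docs' global 1 or the model's generation_config.json value?
resp = client.chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)

# Case 2: temperature set explicitly -- presumably this always wins.
resp = client.chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
    temperature=0.2,
)
print(resp.choices[0].message.content)
```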
Thank you in advance for your help!
Replies: 1 comment

Q1: If you don't explicitly set the temperature, the Hugging Face Inference API will use the model's default value specified in its generation_config.json. It does not use a global default of 1 in this case.

Q2: If you don't provide these parameters, the API will use the defaults defined in the model's generation_config.json. If the model doesn't have those settings, the API may have its own fallback defaults.
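One way to check which defaults a given model actually ships with is to load its generation config directly. A minimal sketch, assuming the transformers library is installed and the repo is accessible (the model ID below is only an example; gated repos need a token):

```python
from transformers import GenerationConfig

# Example model ID -- substitute any Hub repo that ships a
# generation_config.json file.
cfg = GenerationConfig.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Fields the model author did not override fall back to the
# library-level defaults (e.g. temperature=1.0), so printing them
# shows exactly which values this particular model customizes.
print("temperature:", cfg.temperature)
print("top_p:", cfg.top_p)
```

A model whose config overrides temperature (the 0.6 case mentioned in the question) will print that value rather than 1, which is the value the Inference API falls back to per Q1 above.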