v5.1.7 Additional Custom Endpoint Model Settings
Changes v5.1.6...v5.1.7
- Added additional settings when the OpenAI Formatted API model option is selected
- `top_p` - Limits the model's choices to the smallest set of likely tokens whose probabilities add up to P. A lower value makes the model's responses more predictable, while the default setting allows the full range of token choices.
- `repetition_penalty` - Helps reduce repetition of tokens from the input. A higher value makes the model less likely to repeat tokens, but too high a value can make the output less coherent (often producing run-on sentences that lack small words). The penalty scales with the original token's probability.
- Updated the settings view to include labels for each input field.
- Increased the max width of message bubbles to make better use of available screen space.
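As a rough sketch of how the new settings map onto an OpenAI-formatted request body (the model name, endpoint, and values below are placeholder assumptions, not the app's actual defaults):

```python
import json

# Example chat-completions payload for an OpenAI-formatted endpoint.
# "local-model" is a placeholder id; top_p and repetition_penalty
# illustrate the two settings added in this release.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "user", "content": "Hello"}
    ],
    "top_p": 0.9,               # sample only from tokens covering 90% of probability mass
    "repetition_penalty": 1.1,  # values above 1.0 discourage repeating input tokens
}

print(json.dumps(payload, indent=2))
```

Lower `top_p` narrows sampling to the most likely tokens; `repetition_penalty` values slightly above 1.0 are typically a safe starting point before output coherence degrades.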
Parameter descriptions referenced from OpenRouter's parameters documentation.
Full Changelog: v5.1.6...v5.1.7
