chore(release): release version 1.33.2 (patch)#8898
github-actions[bot] wants to merge 1 commit into release/1.33.1 from
Conversation
goose Release Manual Testing Checklist
Version: 1.33.2
Identify the high-risk changes in this release; it will generate an analysis report in Regression Testing. Make a copy of this document for each version and check off steps as they are verified.
Provider Testing
Starting Conversations
Test various ways to start a conversation:
Recipes
Create Recipe from Session
Use Existing Recipe
Recipe Management
Recipe from File
```yaml
recipe:
  title: test recipe again
  description: testing recipe again
  instructions: The value of test_param is {{test_param}}
  prompt: What is the value of test_param?
  parameters:
    - key: test_param
      input_type: string
      requirement: required
      description: Enter value for test_param
```
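The `{{test_param}}` placeholder in the recipe above is filled in from the parameter value at render time. A minimal sketch of that substitution, assuming plain string replacement (goose's actual recipe templating engine is richer than this):

```rust
// Hypothetical helper: substitute a single {{key}} placeholder in a recipe
// template with its parameter value. Real template engines also handle
// escaping, missing keys, and multiple parameters.
fn render_param(template: &str, key: &str, value: &str) -> String {
    let placeholder = format!("{{{{{key}}}}}"); // builds the literal "{{key}}"
    template.replace(&placeholder, value)
}

fn main() {
    let out = render_param(
        "The value of test_param is {{test_param}}",
        "test_param",
        "42",
    );
    // The placeholder is replaced by the supplied parameter value.
    assert_eq!(out, "The value of test_param is 42");
    println!("{out}");
}
```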
Extensions
Manual Extension Addition
Playwright Extension
Extension with Environment Variables
Speech-to-Text (Local Model)
Settings
Follow-up Issues
Link any GitHub issues filed during testing:
Tested by: _____
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 74d17d65af
| "context": 204800, | ||
| "output": 131072 | ||
| "context": 202752, | ||
| "output": 0 |
Set GLM-5 output limit above zero
This change sets aihubmix/glm-5 to "limit": { "output": 0 }, which is treated as a real cap rather than “unset”: ModelConfig::with_canonical_limits copies that value into max_tokens (crates/goose/src/model.rs:160-165), and OpenAI-format requests always emit it as max_tokens/max_completion_tokens (crates/goose/src/providers/formats/openai.rs:1032-1040). In practice, users selecting this model without an explicit override will send a zero-token completion budget and can get rejected requests or empty responses.
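A minimal sketch of the guard this review is suggesting, as a hypothetical standalone helper (not the actual `ModelConfig::with_canonical_limits` code): treat a registry output limit of zero as "unset" and fall back to a sane default rather than propagating a zero-token budget.

```rust
/// Hypothetical guard: a canonical output limit of 0 means the registry
/// had no real cap, so fall back to a default instead of using 0 as
/// max_tokens. Function name and shape are illustrative only.
fn canonical_max_tokens(output_limit: Option<u64>, default: u64) -> u64 {
    match output_limit {
        // Zero is sentinel data, not a genuine completion budget.
        Some(0) | None => default,
        Some(n) => n,
    }
}

fn main() {
    // A glm-5-style entry advertising output: 0 falls back to the default.
    assert_eq!(canonical_max_tokens(Some(0), 4096), 4096);
    // A real cap is passed through unchanged.
    assert_eq!(canonical_max_tokens(Some(131072), 4096), 131072);
    assert_eq!(canonical_max_tokens(None, 4096), 4096);
}
```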
| "context": 204800, | ||
| "output": 131072 | ||
| "context": 256000, | ||
| "output": 0 |
Set Kimi K2.5 output limit above zero
aihubmix/kimi-k2.5 is also assigned "limit": { "output": 0 } here, which flows into max_tokens via with_canonical_limits (crates/goose/src/model.rs:160-165) and is then unconditionally serialized into OpenAI-compatible payloads (crates/goose/src/providers/formats/openai.rs:1032-1040). For a text-generating model this makes default requests ask for zero completion tokens, leading to failed or empty completions unless every caller overrides token limits manually.
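The same issue could alternatively be handled at serialization time: omit the token cap from the request when it is zero, so OpenAI-compatible payloads never ask for a zero-token completion. A sketch with a hypothetical request type (not the actual `openai.rs` serialization code):

```rust
/// Hypothetical request shape: max_tokens is optional so a zero cap can be
/// omitted from the payload entirely instead of being sent as 0.
#[derive(Debug, PartialEq)]
struct CompletionRequest {
    model: String,
    max_tokens: Option<u64>,
}

fn build_request(model: &str, configured_max: u64) -> CompletionRequest {
    CompletionRequest {
        model: model.to_string(),
        // A zero cap is treated as "no cap configured", not a real budget.
        max_tokens: (configured_max > 0).then_some(configured_max),
    }
}

fn main() {
    // With output: 0, no max_tokens field is emitted at all.
    assert_eq!(build_request("aihubmix/kimi-k2.5", 0).max_tokens, None);
    // A positive cap is forwarded as usual.
    assert_eq!(build_request("aihubmix/kimi-k2.5", 8192).max_tokens, Some(8192));
}
```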
Release v1.33.2
How to Release
Push the release tag to trigger the release:
```shell
git fetch
git tag v1.33.2 origin/release/1.33.2
git push origin v1.33.2
```
The tag push will trigger the release build. This PR will be automatically closed.
Cherry-Picks
If you need to include additional fixes, cherry-pick them into the release/1.33.2 branch before tagging.
Important Notes
main
Changes in This Release
Comparing:
v1.33.1...v1.33.2
This release PR was generated automatically.