Commit 51cc5db: docs: max tokens config (#6596)
Parent: 2f083b8

2 files changed: 13 additions and 5 deletions

documentation/docs/guides/config-files.md (5 additions and 4 deletions)
```diff
@@ -29,6 +29,7 @@ The following settings can be configured at the root level of your config.yaml f
 | `GOOSE_PROVIDER` | Primary [LLM provider](/docs/getting-started/providers) | "anthropic", "openai", etc. | None | Yes |
 | `GOOSE_MODEL` | Default model to use | Model name (e.g., "claude-3.5-sonnet", "gpt-4") | None | Yes |
 | `GOOSE_TEMPERATURE` | Model response randomness | Float between 0.0 and 1.0 | Model-specific | No |
+| `GOOSE_MAX_TOKENS` | Maximum number of tokens for each model response (truncates longer responses) | Positive integer | Model-specific | No |
 | `GOOSE_MODE` | [Tool execution behavior](/docs/guides/goose-permissions) | "auto", "approve", "chat", "smart_approve" | "auto" | No |
 | `GOOSE_MAX_TURNS` | [Maximum number of turns](/docs/guides/sessions/smart-context-management#maximum-turns) allowed without user input | Integer (e.g., 10, 50, 100) | 1000 | No |
 | `GOOSE_LEAD_PROVIDER` | Provider for lead model in [lead/worker mode](/docs/guides/environment-variables#leadworker-model-configuration) | Same as `GOOSE_PROVIDER` options | Falls back to `GOOSE_PROVIDER` | No |
@@ -43,8 +44,8 @@ The following settings can be configured at the root level of your config.yaml f
 | `GOOSE_ALLOWLIST` | URL for allowed extensions | Valid URL | None | No |
 | `GOOSE_RECIPE_GITHUB_REPO` | GitHub repository for recipes | Format: "org/repo" | None | No |
 | `GOOSE_AUTO_COMPACT_THRESHOLD` | Set the percentage threshold at which goose [automatically summarizes your session](/docs/guides/sessions/smart-context-management#automatic-compaction). | Float between 0.0 and 1.0 (disabled at 0.0) | 0.8 | No |
-| `otel_exporter_otlp_endpoint` | OTLP endpoint URL for [observability](/docs/guides/environment-variables#opentelemetry-protocol-otlp) | URL (e.g., `http://localhost:4318`) | None | No |
-| `otel_exporter_otlp_timeout` | Export timeout in milliseconds for [observability](/docs/guides/environment-variables#opentelemetry-protocol-otlp) | Integer (ms) | 10000 | No |
+| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP endpoint URL for [observability](/docs/guides/environment-variables#opentelemetry-protocol-otlp) | URL (e.g., `http://localhost:4318`) | None | No |
+| `OTEL_EXPORTER_OTLP_TIMEOUT` | Export timeout in milliseconds for [observability](/docs/guides/environment-variables#opentelemetry-protocol-otlp) | Integer (ms) | 10000 | No |
 | `SECURITY_PROMPT_ENABLED` | Enable [prompt injection detection](/docs/guides/security/prompt-injection-detection) to identify potentially harmful commands | true/false | false | No |
 | `SECURITY_PROMPT_THRESHOLD` | Sensitivity threshold for [prompt injection detection](/docs/guides/security/prompt-injection-detection) (higher = stricter) | Float between 0.01 and 1.0 | 0.7 | No |
 <!-- | `SECURITY_PROMPT_CLASSIFIER_ENABLED` | Enable ML-based prompt injection detection for advanced threat identification | true/false | false | No | -->
@@ -89,8 +90,8 @@ GOOSE_SEARCH_PATHS:
   - "/opt/homebrew/bin"

 # Observability (OpenTelemetry)
-otel_exporter_otlp_endpoint: "http://localhost:4318"
-otel_exporter_otlp_timeout: 20000
+OTEL_EXPORTER_OTLP_ENDPOINT: "http://localhost:4318"
+OTEL_EXPORTER_OTLP_TIMEOUT: 20000

 # Security Configuration
 SECURITY_PROMPT_ENABLED: true
```
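For reference, the keys this commit introduces or renames can be appended to a config file from the shell. A minimal sketch, not part of the commit: it writes to a local `config.yaml` (goose's actual config lives in its own config directory, so the path and the `8192` value here are illustrative assumptions):

```shell
# Illustrative only: append the settings from this commit to a local
# config.yaml; the real goose config path may differ on your system.
CONFIG="config.yaml"
cat >> "$CONFIG" <<'EOF'
GOOSE_MAX_TOKENS: 8192

# Observability (OpenTelemetry)
OTEL_EXPORTER_OTLP_ENDPOINT: "http://localhost:4318"
OTEL_EXPORTER_OTLP_TIMEOUT: 20000
EOF
```

Note the OTLP keys are now uppercase, matching the renamed table entries above.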

documentation/docs/guides/environment-variables.md (8 additions and 1 deletion)
````diff
@@ -19,14 +19,21 @@ These are the minimum required variables to get started with goose.
 | `GOOSE_PROVIDER` | Specifies the LLM provider to use | [See available providers](/docs/getting-started/providers#available-providers) | None (must be [configured](/docs/getting-started/providers#configure-provider-and-model)) |
 | `GOOSE_MODEL` | Specifies which model to use from the provider | Model name (e.g., "gpt-4", "claude-sonnet-4-20250514") | None (must be [configured](/docs/getting-started/providers#configure-provider-and-model)) |
 | `GOOSE_TEMPERATURE` | Sets the [temperature](https://medium.com/@kelseyywang/a-comprehensive-guide-to-llm-temperature-%EF%B8%8F-363a40bbc91f) for model responses | Float between 0.0 and 1.0 | Model-specific default |
+| `GOOSE_MAX_TOKENS` | Sets the maximum number of tokens for each model response (truncates longer responses) | Positive integer (e.g., 4096, 8192) | Model-specific default |

 **Examples**

 ```bash
 # Basic model configuration
 export GOOSE_PROVIDER="anthropic"
-export GOOSE_MODEL="claude-sonnet-4-20250514"
+export GOOSE_MODEL="claude-sonnet-4-5-20250929"
 export GOOSE_TEMPERATURE=0.7
+
+# Set a lower limit for shorter interactions
+export GOOSE_MAX_TOKENS=4096
+
+# Set a higher limit for tasks requiring longer output (e.g. code generation)
+export GOOSE_MAX_TOKENS=16000
 ```

 ### Advanced Provider Configuration
````
