
fix(core): allow disabling title temperature #1249

Merged
omeraplak merged 1 commit into main from fix/generate-title-temperature-override on Apr 25, 2026

Conversation

Member

@omeraplak omeraplak commented Apr 25, 2026

PR Checklist

Please check if your PR fulfills the following requirements:

Bugs / Features

What is the current behavior?

Conversation title generation always sends temperature: 0. Reasoning models such as gpt-5-mini can warn or fail because they do not support temperature, and title generation failures are only logged at debug level.

What is the new behavior?

generateTitle.temperature can now be configured. It still defaults to 0 for backwards compatibility, and users can set temperature: null to omit the parameter for reasoning models. Unsupported temperature warnings and title generation failures are surfaced at warn level with guidance.
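Based on the description above, the two configurations might look like this. The option shape here is a sketch: only the `temperature?: number | null` field and its semantics come from the PR; the surrounding type is assumed for illustration.

```typescript
// Hypothetical shape of the title-generation options; only the
// `temperature?: number | null` field is taken from the PR description.
type GenerateTitleOptions = {
  enabled?: boolean;
  // number: sent as-is (defaults to 0); null: omit the parameter entirely
  temperature?: number | null;
};

// Reasoning models (e.g. gpt-5-mini) warn or fail on temperature,
// so null omits the parameter altogether:
const forReasoningModel: GenerateTitleOptions = {
  enabled: true,
  temperature: null,
};

// Leaving temperature unset keeps the backwards-compatible default of 0:
const backwardsCompatible: GenerateTitleOptions = {
  enabled: true,
};
```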

fixes #1233

Notes for reviewers

Verified with:

  • pnpm --filter @voltagent/core test -- src/agent/agent.spec.ts src/memory/manager/memory-manager.spec.ts
  • pnpm --filter @voltagent/core build
  • A real examples/base run using openai/gpt-5-mini, both with the default temperature: 0 and with temperature: null.

Summary by cubic

Adds an option to disable temperature for conversation title generation in @voltagent/core. This avoids errors on reasoning models and surfaces clearer warnings; the default remains 0, and setting generateTitle.temperature: null omits the parameter.

  • Bug Fixes

    • Added generateTitle.temperature (number or null); omits the parameter when null.
    • Detects unsupported-temperature warnings across providers and logs a hint to use null.
    • Warns and keeps the default title when generation returns empty output or fails; full error stays at debug.
    • Updated tests and docs.
  • Migration

    • For reasoning models, set generateTitle.temperature: null in memory config.

Written for commit 998167d. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • Conversation title generation accepts an optional temperature setting; null omits temperature for reasoning models.
  • Improvements

    • Default temperature is 0 when unspecified.
    • Models that don’t support temperature now emit warnings; generation failures log at warn level and fall back to the default title.
  • Documentation

    • Examples and notes updated (including larger example maxOutputTokens) and guidance for disabling temperature.

@changeset-bot

changeset-bot Bot commented Apr 25, 2026

🦋 Changeset detected

Latest commit: 998167d

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package
Name Type
@voltagent/core Patch


@coderabbitai
Contributor

coderabbitai Bot commented Apr 25, 2026

📝 Walkthrough

Walkthrough

Conversation title generation accepts an optional generateTitle.temperature (number | null). null omits the temperature parameter for reasoning models; non-finite values coerce to 0. Temperature-related AI SDK warnings are detected and logged at warn with a hint, and title-generation failures are logged at warn (full error serialized at debug). Title-generation config type is extended accordingly.
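The normalization described above (null omits the parameter, unset defaults to 0, non-finite values coerce to 0) can be sketched as a small helper; the function name is assumed, not taken from the diff:

```typescript
// Sketch of the temperature normalization described in the walkthrough.
// Returning undefined means "do not send a temperature parameter at all".
function resolveTitleTemperature(
  configured: number | null | undefined,
): number | undefined {
  if (configured === null) return undefined; // null: omit for reasoning models
  if (configured === undefined) return 0; // unset: backwards-compatible default
  return Number.isFinite(configured) ? configured : 0; // NaN/Infinity: coerce to 0
}
```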

Changes

  • Type Configuration (packages/core/src/memory/types.ts): Add `temperature?: number | null` to the title-generation config type.
  • Core Title Generation (packages/core/src/agent/agent.ts): Derive temperature from the normalized config (null → omit); pass temperature to spans and generateText only when defined; detect temperature-related warnings via isTemperatureWarning and emit a [Memory] warn with a hint to set generateTitle.temperature: null; upgrade the generation-error log to warn and surface error.message while keeping the full error at debug.
  • Memory Manager (packages/core/src/memory/manager/memory-manager.ts): On title-generation failure, log at warn and include the message from the error plus a hint about generateTitle.temperature; fallback title behavior unchanged.
  • Tests (packages/core/src/agent/agent.spec.ts, packages/core/src/memory/manager/memory-manager.spec.ts): Add tests asserting that the default temperature: 0 is sent, that temperature: null suppresses the parameter and span attribute, that temperature-unsupported warnings are detected, and that failures fall back to the default title while emitting warn logs with structured fields.
  • Docs & Metadata (website/docs/agents/memory/overview.md, .changeset/generate-title-temperature-override.md): Docs show a temperature: null example, note the default temperature: 0, increase the example maxOutputTokens, and document reasoning-model behavior and warn-level logging; add a patch changeset.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Agent as Agent
  participant Trace as TraceContext
  participant LLM as AI_SDK
  participant MemoryMgr as MemoryManager
  participant Logger as Logger

  Agent->>Trace: start LLM span (callOptions include temperature if defined)
  Trace->>LLM: generateText(model, messages, maxOutputTokens, temperature?)
  LLM-->>Trace: result (text, warnings?)
  Trace-->>Agent: result
  alt warnings include temperature-unsupported
    Agent->>Logger: warn "[Memory]" + serialized warning + hint (generateTitle.temperature: null)
  end
  alt generateText throws
    Agent->>Logger: warn "[Memory] Failed to generate conversation title" + error.message + hint
    Agent->>Logger: debug: safeStringify(full error)
    Agent->>MemoryMgr: return fallback title ("Conversation")
  else success
    Agent->>MemoryMgr: return generated title
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Poem

🐇 I nudge the logs with twitchy nose,

Temperature once froze title prose,
Now null lets wise models freely hum,
Warnings trumpet so the problem's done,
A happy rabbit hops — titles come!

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check ✅ Passed: The title 'fix(core): allow disabling title temperature' clearly and concisely describes the main change: making generateTitle.temperature configurable to support reasoning models.
  • Description check ✅ Passed: The description follows the template with all key sections filled: checklist completed, current behavior documented, new behavior explained, issue linked (#1233), testing verified, docs and changesets added.
  • Linked Issues check ✅ Passed: All coding requirements from issue #1233 are met: generateTitle.temperature can be set to null to omit temperature, warnings and failures are logged at warn level with guidance, and docs are updated to note reasoning-model limitations.
  • Out of Scope Changes check ✅ Passed: All changes directly address issue #1233: adding temperature configurability, detecting/warning on unsupported temperature, elevating logging severity, and updating docs and tests; no unrelated modifications detected.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@joggrbot

This comment has been minimized.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 261-266: The isTemperatureWarning function uses incorrect
discriminant and property names causing TypeScript type errors: change checks to
use the correct variant strings ("unsupported-setting" and "compatibility") and
narrow the union before accessing variant-specific properties; for the
"unsupported-setting" variant inspect warning.setting, for "compatibility"
inspect warning.feature, and otherwise inspect warning.details — all with
toLowerCase() after confirming the field exists to safely detect the substring
"temperature" while preserving type safety for the Warning union used by the AI
SDK v6.0.0.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a53a9ccc-ec61-45ee-9bbf-3e15ad7e1d65

📥 Commits

Reviewing files that changed from the base of the PR and between 69b78fd and ef44250.

📒 Files selected for processing (7)
  • .changeset/generate-title-temperature-override.md
  • packages/core/src/agent/agent.spec.ts
  • packages/core/src/agent/agent.ts
  • packages/core/src/memory/manager/memory-manager.spec.ts
  • packages/core/src/memory/manager/memory-manager.ts
  • packages/core/src/memory/types.ts
  • website/docs/agents/memory/overview.md

Comment thread packages/core/src/agent/agent.ts
Contributor

@cubic-dev-ai cubic-dev-ai Bot left a comment


No issues found across 7 files

@cloudflare-workers-and-pages

cloudflare-workers-and-pages Bot commented Apr 25, 2026

Deploying voltagent with Cloudflare Pages

Latest commit: 998167d
Status: ✅  Deploy successful!
Preview URL: https://195b80f7.voltagent.pages.dev
Branch Preview URL: https://fix-generate-title-temperatu.voltagent.pages.dev

View logs

@omeraplak omeraplak force-pushed the fix/generate-title-temperature-override branch from ef44250 to aa8c70a on April 25, 2026 13:37
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/core/src/agent/agent.ts (1)

4819-4829: ⚠️ Potential issue | 🟠 Major

Warn when title generation produces no usable title.

This still returns null silently when generateText succeeds but the sanitized title is empty, so the "Conversation" fallback remains opaque for the “no usable output” case this PR is trying to surface. Please log a warn before returning null.

Suggested change
           const resolvedUsage = result.usage ? await Promise.resolve(result.usage) : undefined;
           const title = sanitizeConversationTitle(result.text ?? "", maxLength);
+          if (!title) {
+            context.logger.warn("[Memory] Conversation title generation returned no usable title", {
+              finishReason: result.finishReason,
+              warnings: result.warnings ? safeStringify(result.warnings) : undefined,
+              hint:
+                temperature !== undefined
+                  ? "If your title generation model does not support temperature, set generateTitle.temperature to null."
+                  : undefined,
+            });
+          }
           if (title) {
             llmSpan.setAttribute("output", title);
           }
           finalizeLLMSpan(SpanStatusCode.OK, {
             usage: resolvedUsage,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.ts` around lines 4819 - 4829, When
sanitizeConversationTitle(result.text ?? "", maxLength) yields no usable title,
add a warning log before returning null: log a concise warning that title
generation produced an empty/sanitized result (include context like the raw
result.text, result.finishReason or providerMetadata if available) and then
proceed to finalizeLLMSpan and return null; locate the block using
sanitizeConversationTitle, llmSpan, finalizeLLMSpan and add the
processLogger.warn (or the module's existing logger.warn) call just before the
existing return title || null.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 4835-4839: The warn log in agent.ts currently includes the full
serialized error via safeStringify(error); remove safeStringify(error) from the
context.logger.warn call and keep only sanitized fields (e.g., message and the
existing hint) to avoid dumping provider payloads, then log the full serialized
error at debug level using context.logger.debug or similar (e.g.,
context.logger.debug("[Memory] Full error for title generation", { error:
safeStringify(error) })) so detailed diagnostics remain available but not in
warn logs; target the existing call site that uses context.logger.warn("[Memory]
Failed to generate conversation title", ...) to make this change.

---

Outside diff comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 4819-4829: When sanitizeConversationTitle(result.text ?? "",
maxLength) yields no usable title, add a warning log before returning null: log
a concise warning that title generation produced an empty/sanitized result
(include context like the raw result.text, result.finishReason or
providerMetadata if available) and then proceed to finalizeLLMSpan and return
null; locate the block using sanitizeConversationTitle, llmSpan, finalizeLLMSpan
and add the processLogger.warn (or the module's existing logger.warn) call just
before the existing return title || null.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cacd2b7d-35d0-48fd-aa4b-8bbb9c03e11d

📥 Commits

Reviewing files that changed from the base of the PR and between ef44250 and aa8c70a.

📒 Files selected for processing (7)
  • .changeset/generate-title-temperature-override.md
  • packages/core/src/agent/agent.spec.ts
  • packages/core/src/agent/agent.ts
  • packages/core/src/memory/manager/memory-manager.spec.ts
  • packages/core/src/memory/manager/memory-manager.ts
  • packages/core/src/memory/types.ts
  • website/docs/agents/memory/overview.md
✅ Files skipped from review due to trivial changes (2)
  • packages/core/src/memory/types.ts
  • .changeset/generate-title-temperature-override.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/core/src/memory/manager/memory-manager.spec.ts

Comment thread packages/core/src/agent/agent.ts
@omeraplak omeraplak force-pushed the fix/generate-title-temperature-override branch from aa8c70a to 998167d on April 25, 2026 13:45
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (4)
packages/core/src/memory/manager/memory-manager.ts (1)

613-618: Align warn-log shape and hint wording with agent.ts.

The sibling site in packages/core/src/agent/agent.ts (lines 4840-4846) intentionally keeps the warn payload concise (message + hint only) and emits a separate debug log with the full serialized error. Here the full safeStringify(error) is folded into the warn payload itself, and the hint string is also worded differently:

  • memory-manager: "If your title generation model does not support temperature, set generateTitle.temperature to null."
  • agent: "Set generateTitle.temperature to null to omit temperature for title generation."

Two divergent shapes/wordings for the same failure mode make log filtering and docs harder. Suggest mirroring the agent.ts pattern.

♻️ Proposed alignment
     } catch (error) {
-      context.logger.warn("[Memory] Failed to generate conversation title", {
-        error: safeStringify(error),
-        message: error instanceof Error ? error.message : undefined,
-        hint: "If your title generation model does not support temperature, set generateTitle.temperature to null.",
-      });
+      context.logger.warn("[Memory] Failed to generate conversation title", {
+        message: error instanceof Error ? error.message : undefined,
+        hint: "Set generateTitle.temperature to null to omit temperature for title generation.",
+      });
+      context.logger.debug("[Memory] Full error for title generation", {
+        error: safeStringify(error),
+      });
     }

Note: the memory-manager.spec.ts assertions added in this PR likely check the current payload shape — update them accordingly if you adopt this change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/memory/manager/memory-manager.ts` around lines 613 - 618,
The warn log in memory-manager.ts currently embeds safeStringify(error) and a
differently worded hint; change the context.logger.warn call (in the generate
title error handler) to only include message and the exact hint string used in
agent.ts ("Set generateTitle.temperature to null to omit temperature for title
generation."), and add a separate context.logger.debug (or context.logger.error
if preferred) that logs the full serialized error via safeStringify(error);
update references to context.logger.warn in the generate-title error handling
code path (and adjust memory-manager.spec.ts expectations) to match this new
shape and wording.
packages/core/src/agent/agent.spec.ts (3)

1910-2002: Optional: split this test to isolate the two behaviors being asserted.

The test name is "should use temperature 0 by default when generating conversation titles", but the body also asserts the unsupported-temperature warning path (lines 1995-2001) by injecting a warnings array into the mock response. These are two distinct concerns (default temperature value + warning surfacing) and a failure in the warning assertion would obscure the default-temperature regression you actually care about.

Consider splitting into two tests, or removing the warnings from this mock and covering the unsupported-temperature warning in a separate, dedicated test. Not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.spec.ts` around lines 1910 - 2002, The test
mixes two concerns (default temperature and warning surfacing); split it into
two tests: (1) keep the "should use temperature 0 by default when generating
conversation titles" test and remove the mocked response's warnings array so it
only asserts that ai.generateText was called with temperature 0, maxOutputTokens
32 and the span attribute llm.temperature is 0 (locate the title generator via
(agent as any).createConversationTitleGenerator and the ai.generateText mock
calls), and (2) add a new test that mocks ai.generateText to return a response
containing the warnings array and asserts context.logger.warn was called with
the "[Memory] Conversation title generation model does not support temperature"
message and appropriate hint/warning contents; ensure both tests reuse the same
context/span setup and clearly reference createConversationTitleGenerator,
ai.generateText mock, and context.traceContext.createChildSpan in their
assertions.

2213-2225: Both expect(...).toHaveBeenCalledWith(...) assertions match the same single warn call — confirm intent.

vi.fn toHaveBeenCalledWith succeeds if any recorded call matches. The two assertions here together require that there is at least one [Memory] Failed to generate conversation title warn call whose payload (a) does not have an error key, and (b) does have message and hint. Since the implementation only emits this warn once, both expectations are satisfied by the same call — which is presumably the intent (the full error must live in debug, not warn).

If you want to make that contract explicit and harder to accidentally regress (e.g., if a future change adds a second matching warn call carrying error), assert against warnSpy.mock.calls directly so you check every call rather than matching any:

♻️ Stricter assertion alternative
-      expect(context.logger.warn).toHaveBeenCalledWith(
-        "[Memory] Failed to generate conversation title",
-        expect.not.objectContaining({
-          error: expect.anything(),
-        }),
-      );
-      expect(context.logger.warn).toHaveBeenCalledWith(
-        "[Memory] Failed to generate conversation title",
-        expect.objectContaining({
-          message: "Unsupported temperature",
-          hint: expect.stringContaining("generateTitle.temperature"),
-        }),
-      );
+      const failureWarnCalls = (context.logger.warn as any).mock.calls.filter(
+        ([msg]: [string]) => msg === "[Memory] Failed to generate conversation title",
+      );
+      expect(failureWarnCalls).toHaveLength(1);
+      const [, payload] = failureWarnCalls[0];
+      expect(payload).toEqual(
+        expect.objectContaining({
+          message: "Unsupported temperature",
+          hint: expect.stringContaining("generateTitle.temperature"),
+        }),
+      );
+      expect(payload).not.toHaveProperty("error");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.spec.ts` around lines 2213 - 2225, The two
toHaveBeenCalledWith assertions both match the same warn call, so make the
intent explicit by inspecting the mock's recorded calls directly: grab the calls
from (context.logger.warn as jest.Mock).mock.calls, assert there is exactly one
call (or the expected number), then assert the first argument equals "[Memory]
Failed to generate conversation title" and the second argument does not contain
an error key and does contain message: "Unsupported temperature" and a hint
string containing "generateTitle.temperature"; this replaces the two loose
expect(...).toHaveBeenCalledWith(...) assertions with deterministic checks
against context.logger.warn's mock.calls.

2163-2163: Test name doesn't match what it asserts.

The it(...) description says "should keep full conversation title generation errors at debug level", but the assertions verify both the new warn-level summary (lines 2213-2225) and the debug-level full error (lines 2226-2231). A more accurate name would be e.g. "should warn with summary and log full error at debug level when title generation fails" so the intent is obvious from the test report.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/agent/agent.spec.ts` at line 2163, The test description for
the it block in agent.spec.ts is misleading: rename the test's title string from
"should keep full conversation title generation errors at debug level" to
something that reflects both assertions (e.g. "should warn with summary and log
full error at debug level when title generation fails") so the test intent
matches the assertions that check a warn-level summary and a debug-level full
error; update the it(...) title in the spec containing the assertions around
warn (lines verifying summary) and debug (lines verifying full error)
accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@packages/core/src/agent/agent.spec.ts`:
- Around line 1910-2002: The test mixes two concerns (default temperature and
warning surfacing); split it into two tests: (1) keep the "should use
temperature 0 by default when generating conversation titles" test and remove
the mocked response's warnings array so it only asserts that ai.generateText was
called with temperature 0, maxOutputTokens 32 and the span attribute
llm.temperature is 0 (locate the title generator via (agent as
any).createConversationTitleGenerator and the ai.generateText mock calls), and
(2) add a new test that mocks ai.generateText to return a response containing
the warnings array and asserts context.logger.warn was called with the "[Memory]
Conversation title generation model does not support temperature" message and
appropriate hint/warning contents; ensure both tests reuse the same context/span
setup and clearly reference createConversationTitleGenerator, ai.generateText
mock, and context.traceContext.createChildSpan in their assertions.
- Around line 2213-2225: The two toHaveBeenCalledWith assertions both match the
same warn call, so make the intent explicit by inspecting the mock's recorded
calls directly: grab the calls from (context.logger.warn as
jest.Mock).mock.calls, assert there is exactly one call (or the expected
number), then assert the first argument equals "[Memory] Failed to generate
conversation title" and the second argument does not contain an error key and
does contain message: "Unsupported temperature" and a hint string containing
"generateTitle.temperature"; this replaces the two loose
expect(...).toHaveBeenCalledWith(...) assertions with deterministic checks
against context.logger.warn's mock.calls.
- Line 2163: The test description for the it block in agent.spec.ts is
misleading: rename the test's title string from "should keep full conversation
title generation errors at debug level" to something that reflects both
assertions (e.g. "should warn with summary and log full error at debug level
when title generation fails") so the test intent matches the assertions that
check a warn-level summary and a debug-level full error; update the it(...)
title in the spec containing the assertions around warn (lines verifying
summary) and debug (lines verifying full error) accordingly.

In `@packages/core/src/memory/manager/memory-manager.ts`:
- Around line 613-618: The warn log in memory-manager.ts currently embeds
safeStringify(error) and a differently worded hint; change the
context.logger.warn call (in the generate title error handler) to only include
message and the exact hint string used in agent.ts ("Set
generateTitle.temperature to null to omit temperature for title generation."),
and add a separate context.logger.debug (or context.logger.error if preferred)
that logs the full serialized error via safeStringify(error); update references
to context.logger.warn in the generate-title error handling code path (and
adjust memory-manager.spec.ts expectations) to match this new shape and wording.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f9bc782c-07eb-4726-b903-e5620d34c42e

📥 Commits

Reviewing files that changed from the base of the PR and between aa8c70a and 998167d.

📒 Files selected for processing (7)
  • .changeset/generate-title-temperature-override.md
  • packages/core/src/agent/agent.spec.ts
  • packages/core/src/agent/agent.ts
  • packages/core/src/memory/manager/memory-manager.spec.ts
  • packages/core/src/memory/manager/memory-manager.ts
  • packages/core/src/memory/types.ts
  • website/docs/agents/memory/overview.md
✅ Files skipped from review due to trivial changes (2)
  • packages/core/src/memory/types.ts
  • .changeset/generate-title-temperature-override.md

@omeraplak omeraplak merged commit b4cb089 into main Apr 25, 2026
23 checks passed
@omeraplak omeraplak deleted the fix/generate-title-temperature-override branch April 25, 2026 14:30


Development

Successfully merging this pull request may close these issues.

generateTitle silently fails with reasoning models (gpt-5-mini, o1, o3) due to hardcoded temperature: 0

1 participant