
Conversation

@aayush-kapoor
Contributor

Background

#11467

Improper mapping of reasoningConfig to its equivalent reasoning_effort when using OpenAI models in Bedrock.

Summary

Swap in the reasoning_effort param when an OpenAI model is used.
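
Roughly, the change branches on the Bedrock model ID when building the request body. The sketch below illustrates the intended mapping only; the function name, option type, and requestBody parameter are assumed for illustration and are not the provider's actual internals.

// Minimal sketch of the intended mapping, assuming simplified shapes.
type BedrockReasoningOptions = {
  reasoningConfig?: { maxReasoningEffort?: 'low' | 'medium' | 'high' };
};

function mapReasoningEffort(
  modelId: string,
  bedrockOptions: BedrockReasoningOptions,
  requestBody: Record<string, unknown>,
): void {
  const maxReasoningEffort =
    bedrockOptions.reasoningConfig?.maxReasoningEffort;
  if (maxReasoningEffort == null) return;

  if (modelId.startsWith('openai.')) {
    // OpenAI models hosted on Bedrock take the snake_case reasoning_effort
    // parameter directly instead of the camelCase reasoningConfig object.
    requestBody.reasoning_effort = maxReasoningEffort;
  }
  // Non-OpenAI models keep whatever mapping the provider already used.
}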

Manual Verification

Verified by running the CLI example in examples/ai-core/src/generate-text/bedrock-openai-reasoning.ts, which failed before the fix.
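
For reference, the verification example exercises roughly the following call path. The model ID, prompt, and effort value are placeholders, not necessarily what the example file uses; only the reasoningConfig.maxReasoningEffort option name comes from this PR's diff.

import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

async function main() {
  const { text } = await generateText({
    // Placeholder model ID; any Bedrock-hosted 'openai.*' model applies.
    model: bedrock('openai.gpt-oss-120b-1:0'),
    prompt: 'Explain reasoning effort in one sentence.',
    providerOptions: {
      bedrock: {
        // Before the fix this camelCase option was forwarded as-is and the
        // request failed; it should now be sent as reasoning_effort.
        reasoningConfig: { maxReasoningEffort: 'low' },
      },
    },
  });

  console.log(text);
}

main().catch(console.error);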

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Related Issues

Fixes #11467

vercel-ai-sdk bot added the ai/provider and provider/amazon-bedrock labels on Jan 7, 2026

// Changed lines from the diff: read the effort option and detect
// Bedrock-hosted OpenAI models by their 'openai.' model-ID prefix.
const maxReasoningEffort =
  bedrockOptions.reasoningConfig?.maxReasoningEffort;
const isOpenAIModel = this.modelId.startsWith('openai.');

OpenAI models should use reasoning_effort directly, not reasoningConfig, when handling maxReasoningEffort


@aayush-kapoor aayush-kapoor merged commit eff1cb6 into main Jan 8, 2026
18 checks passed
@aayush-kapoor aayush-kapoor deleted the aayush/bedrock-reasoningconfig branch January 8, 2026 15:16

Labels

ai/provider, provider/amazon-bedrock (Issues related to the @ai-sdk/amazon-bedrock provider)


Development

Successfully merging this pull request may close these issues.

Bug: @ai-sdk/amazon-bedrock sends 'reasoningConfig' (camelCase) but AWS Bedrock expects 'reasoning_effort' (snake_case)

3 participants