feat: add vercel-ai-sdk.mdc rule to prevent v6 hallucinations #2727
cubaseuser123 wants to merge 2 commits into giselles-ai:main
Conversation
@cubaseuser123 is attempting to deploy a commit to the Giselle Team on Vercel. A member of the Team first needs to authorize it.
Review Summary by Qodo

Add Vercel AI SDK v6 Cursor rule to prevent common hallucinations
Walkthrough

Description:
• Adds comprehensive Cursor rule for Vercel AI SDK v6 patterns
• Documents 7 common API hallucinations with correct/incorrect examples
• Includes official documentation links for each pattern
• Provides canonical Route Handler template for reference

Diagram:

```mermaid
flowchart LR
  A["Vercel AI SDK v6<br/>Common Hallucinations"] -->|toolContext| B["experimental_context"]
  A -->|generateObject| C["Output.object()"]
  A -->|parameters| D["inputSchema"]
  A -->|Zod.strict()| E["tool strict option"]
  A -->|toDataStreamResponse| F["toUIMessageStreamResponse"]
  A -->|maxSteps| G["stopWhen: stepCountIs"]
  A -->|Raw messages| H["UIMessage + conversion"]
  B --> I["Cursor Rule<br/>vercel-ai-sdk.mdc"]
  C --> I
  D --> I
  E --> I
  F --> I
  G --> I
  H --> I
```
File Changes

1. .cursor/rules/vercel-ai-sdk.mdc
📝 Walkthrough

New documentation file that documents Vercel AI SDK v6 usage patterns and replacements for deprecated APIs, with code examples and a canonical streaming route handler. No executable code changes.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~5 minutes
🚥 Pre-merge checks: ✅ 3 passed
Code Review by Qodo
1. v6 rule vs repo ai@5
```
---
description: Vercel AI SDK v6 patterns — prevents common hallucinations of legacy APIs
globs: *.tsx,*.ts
alwaysApply: false
```
1. V6 rule vs repo ai@5 🐞 Bug ✓ Correctness
The rule is labeled as “Vercel AI SDK v6 patterns” and applies to all `*.ts`/`*.tsx` files, but the repo catalog pins `ai` to 5.0.101; this can drive agents to generate APIs/patterns that may not exist in the installed SDK, leading to compilation/runtime failures when code is added based on this rule.
Agent Prompt
## Issue description
The new Cursor rule is presented as “Vercel AI SDK v6 patterns” but the repo pins `ai` to 5.0.101. Since the rule’s `globs` cover all TS/TSX files, agents may generate code using APIs/patterns that don’t exist for the repo’s installed SDK version, causing compile/runtime errors.
## Issue Context
Cursor rules influence generated code. If the rule is version-specific, it should either be scoped to the relevant code areas or clearly gated by the dependency version.
## Fix Focus Areas
- .cursor/rules/vercel-ai-sdk.mdc[1-12]
- pnpm-workspace.yaml[52-60]
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
shige left a comment
Thanks for the contribution! Preventing AI SDK hallucinations is a great idea — we've definitely seen agents generate outdated patterns. However, there are a few issues that need to be addressed before we can merge this.
1. Version mismatch: our project is on AI SDK v5, not v6
We currently pin `ai: 5.0.101` in `pnpm-workspace.yaml`. Several patterns in this rule are v6-specific and would be incorrect guidance for our codebase:

- `generateObject` → `Output.object()`: `generateObject` is still valid in v5
- `maxSteps` → `stopWhen: stepCountIs(N)`: `maxSteps` is the correct option in v5
- `toDataStreamResponse()` → `toUIMessageStreamResponse()`: our codebase actually uses `toUIMessageStream()` (neither of the two options listed)
Could you verify each pattern against v5 and adjust accordingly?
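To make the v5/v6 difference concrete, here is a minimal, self-contained sketch. It deliberately does not import the real `ai` package: `stepCountIs` below is a simplified stand-in modeled on the predicate shape discussed in this thread, and the two option objects only mirror the call shapes, not the full APIs.

```typescript
// Illustrative stand-in for the SDK's stepCountIs() helper: returns a
// predicate that becomes true once the given number of steps has been reached.
// (The real helper lives in the "ai" package; this only mirrors its shape.)
type Step = { toolCalls: unknown[] };
const stepCountIs =
  (n: number) =>
  ({ steps }: { steps: Step[] }): boolean =>
    steps.length >= n;

// v5-era call shape discussed above: a plain numeric cap.
const v5Options = { maxSteps: 5 };

// v6-era call shape: a predicate-based stopping condition.
const v6Options = { stopWhen: stepCountIs(5) };

// The predicate form lets callers express richer conditions than a bare count:
const stop = v6Options.stopWhen;
console.log(stop({ steps: [] })); // false (no steps yet)
console.log(stop({ steps: Array.from({ length: 5 }, () => ({ toolCalls: [] })) })); // true
```

The point of the sketch is only that the two shapes are not interchangeable: a rule written for the predicate form would produce invalid options against a v5 install, and vice versa.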
2. Code style: single quotes → double quotes
Our project uses Biome with double quotes as the standard. All code examples in the rule use single quotes, which would cause agents to generate code that fails our formatter. Please update all examples to use double quotes.
3. Overly broad glob scope
`globs: *.tsx,*.ts` applies this rule to every TypeScript file, including UI components, tests, and database schemas that never touch the AI SDK. Consider narrowing the glob to relevant directories (e.g., `packages/giselle/src/generations/**`) or removing it so the rule is only triggered manually.
4. Canonical example doesn't match project patterns
The route handler example uses model: 'anthropic/claude-sonnet-4.5' as a raw string, but we use a language model registry pattern. This could steer agents away from our established conventions.
5. Relationship with existing update-ai-sdk.mdc
We already have .cursor/rules/update-ai-sdk.mdc. Could you clarify how these two rules relate? It might make sense to merge them or clearly differentiate their purposes.
Actionable comments posted: 3
🧹 Nitpick comments (1)
.cursor/rules/vercel-ai-sdk.mdc (1)
Line 3: Glob pattern is narrower than actual AI SDK usage across the codebase.

The glob `packages/giselle/src/generations/**/*.ts` only targets one package. Verification shows AI SDK patterns (streamText, createUIMessageStream, toUIMessageStream) are used in multiple packages outside this glob:

- `packages/react/src/generations/` (createUIMessageStream usage confirmed)
- `packages/protocol/src/generation/`
- `packages/rag/src/embedder/`
- `packages/langfuse/src/`
- `packages/github-tool/src/`

Since the rule title states "Correct Patterns for This Project" (emphasis suggesting broad scope), consider expanding the glob pattern to cover all AI SDK usage:

```diff
-globs: packages/giselle/src/generations/**/*.ts
+globs: "{packages/**/src/generations/**/*.ts,apps/**/trigger/**/*.ts}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.cursor/rules/vercel-ai-sdk.mdc at line 3, The current glob "packages/giselle/src/generations/**/*.ts" is too narrow and misses AI SDK usages (e.g., streamText, createUIMessageStream, toUIMessageStream) in other packages; update the rule in vercel-ai-sdk.mdc to expand the glob to cover all packages where those symbols appear (for example include patterns like packages/**/src/generations/**/*.ts and packages/**/src/**/generations/**/*.ts and/or add explicit package paths such as packages/react/src/generations/**, packages/protocol/src/generation/**, packages/rag/src/embedder/**, packages/langfuse/src/**, packages/github-tool/src/**) so the rule will scan all locations referencing streamText, createUIMessageStream and toUIMessageStream.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.cursor/rules/vercel-ai-sdk.mdc:
- Around line 115-119: Update the documentation so it does not label
toUIMessageStreamResponse as "outdated": explicitly state that
toUIMessageStreamResponse (introduced in the v5 SDK) is a current helper that
wraps a raw stream into a Response object, while toDataStreamResponse is the
removed/outdated helper; locate the example lines showing
result.toDataStreamResponse() and result.toUIMessageStreamResponse() and adjust
the wording to reflect this distinction.
- Around line 194-230: The example stops after creating uiMessageStream (from
streamTextResult.toUIMessageStream) and never returns an HTTP response; update
the canonical Route Handler snippet to return the stream as the route's HTTP
response (take uiMessageStream produced by streamTextResult.toUIMessageStream
and send it back from the handler with appropriate status and
streaming/content-type headers) so a copy-pasteable handler actually returns the
streaming response to the client.
- Around line 153-166: Update the docs/examples to actually show converting
UIMessage[] to ModelMessage[] using convertToModelMessages() before passing
messages into model APIs: replace or augment the “✅ CORRECT” snippet (which
currently just imports UIMessage and ModelMessage) with a short example that
calls convertToModelMessages(messages) and then calls streamText({ messages:
modelMessages, ... }); also fix the canonical pattern that currently passes
messages directly (lines around streamText call) to instead use the converted
modelMessages so the examples consistently demonstrate convertToModelMessages(),
referencing the types UIMessage, ModelMessage, the helper
convertToModelMessages, and the streamText call site.
---
Nitpick comments:
In @.cursor/rules/vercel-ai-sdk.mdc:
- Line 3: The current glob "packages/giselle/src/generations/**/*.ts" is too
narrow and misses AI SDK usages (e.g., streamText, createUIMessageStream,
toUIMessageStream) in other packages; update the rule in vercel-ai-sdk.mdc to
expand the glob to cover all packages where those symbols appear (for example
include patterns like packages/**/src/generations/**/*.ts and
packages/**/src/**/generations/**/*.ts and/or add explicit package paths such as
packages/react/src/generations/**, packages/protocol/src/generation/**,
packages/rag/src/embedder/**, packages/langfuse/src/**,
packages/github-tool/src/**) so the rule will scan all locations referencing
streamText, createUIMessageStream and toUIMessageStream.
```typescript
// ❌ WRONG — outdated helpers
result.toDataStreamResponse();
result.toUIMessageStreamResponse();
```
toUIMessageStreamResponse() is mislabeled as "outdated."
toUIMessageStreamResponse was introduced in the v5 SDK and is still a valid, current helper — it simply wraps the raw stream into a Response object. Calling it "outdated" alongside the genuinely removed toDataStreamResponse() is factually wrong and may cause an agent to avoid it even in contexts where it's appropriate.
📝 Suggested wording fix

```diff
 // ❌ WRONG — outdated helpers
 result.toDataStreamResponse();
-result.toUIMessageStreamResponse();
+// ❌ AVOID in this project — wraps the stream into a Response directly;
+// use toUIMessageStream() and compose the response via createUIMessageStreamResponse()
+// result.toUIMessageStreamResponse();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.cursor/rules/vercel-ai-sdk.mdc around lines 115 - 119, Update the
documentation so it does not label toUIMessageStreamResponse as "outdated":
explicitly state that toUIMessageStreamResponse (introduced in the v5 SDK) is a
current helper that wraps a raw stream into a Response object, while
toDataStreamResponse is the removed/outdated helper; locate the example lines
showing result.toDataStreamResponse() and result.toUIMessageStreamResponse() and
adjust the wording to reflect this distinction.
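The distinction drawn in this comment (a helper that merely wraps an existing stream into a `Response`) can be shown with plain Web APIs, which Node 18+ exposes globally. This sketch touches nothing from the `ai` package; it only demonstrates the wrapping step itself.

```typescript
// Build a small text stream, then wrap it into an HTTP Response: the same
// "stream in, Response out" step that the SDK's response helpers perform.
function makeStream(chunks: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
    },
  });
}

function toStreamingResponse(stream: ReadableStream<Uint8Array>): Response {
  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}

async function main() {
  const response = toStreamingResponse(makeStream(["hello, ", "stream"]));
  console.log(await response.text()); // hello, stream
}

main();
```

Whether a codebase wraps the stream itself (as above) or lets a helper do it is a style choice; the comment's point is only that a current wrapping helper should not be labeled "outdated."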
## 6. Messages: `UIMessage` + `ModelMessage`

Client messages are `UIMessage` type. Model calls expect `ModelMessage`. Use `convertToModelMessages()` to convert between them.

```typescript
// ✅ CORRECT
import { streamText, type UIMessage, type ModelMessage } from "ai";
```

```typescript
// ❌ WRONG — using untyped message arrays
const { messages } = await req.json();
streamText({ messages, ... }); // Missing type safety
```
Rule 6's ✅ CORRECT example never demonstrates convertToModelMessages() usage.
The prose on line 155 says to use convertToModelMessages() to convert between UIMessage[] and ModelMessage[], but the code example shows only type imports — the actual call site is never demonstrated. An agent following this rule won't know where or how to invoke the conversion.
Compounding this, the canonical pattern (lines 211–213) passes messages directly to streamText without any conversion, directly contradicting the advice here.
📝 Proposed example additions

```diff
 // ✅ CORRECT
-import { streamText, type UIMessage, type ModelMessage } from "ai";
+import { streamText, convertToModelMessages, type UIMessage } from "ai";
+
+// In your route handler:
+const { messages }: { messages: UIMessage[] } = await req.json();
+
+const result = streamText({
+  model,
+  messages: convertToModelMessages(messages), // UIMessage[] → ModelMessage[]
+  tools: { ... },
+});
```

And in the canonical pattern (line 213):

```diff
-  messages,
+  messages: convertToModelMessages(messages),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.cursor/rules/vercel-ai-sdk.mdc around lines 153 - 166, Update the
docs/examples to actually show converting UIMessage[] to ModelMessage[] using
convertToModelMessages() before passing messages into model APIs: replace or
augment the “✅ CORRECT” snippet (which currently just imports UIMessage and
ModelMessage) with a short example that calls convertToModelMessages(messages)
and then calls streamText({ messages: modelMessages, ... }); also fix the
canonical pattern that currently passes messages directly (lines around
streamText call) to instead use the converted modelMessages so the examples
consistently demonstrate convertToModelMessages(), referencing the types
UIMessage, ModelMessage, the helper convertToModelMessages, and the streamText
call site.
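The conversion requested in the comment above can also be sketched without the `ai` package. The types and the `convertToModelMessages` function below are simplified stand-ins (the real SDK types carry many more fields); they only mirror the part at issue, flattening UI text parts into model-facing content.

```typescript
// Simplified stand-ins for the SDK types (the real ones carry more fields
// such as ids, tool parts, and attachments).
type UIMessagePart = { type: "text"; text: string };
type UIMessage = { role: "user" | "assistant"; parts: UIMessagePart[] };
type ModelMessage = { role: "user" | "assistant"; content: string };

// Illustrative stand-in for convertToModelMessages(): flatten each message's
// text parts into the single content string that model APIs expect.
function convertToModelMessages(messages: UIMessage[]): ModelMessage[] {
  return messages.map((m) => ({
    role: m.role,
    content: m.parts
      .filter((p) => p.type === "text")
      .map((p) => p.text)
      .join(""),
  }));
}

const uiMessages: UIMessage[] = [
  { role: "user", parts: [{ type: "text", text: "Hi " }, { type: "text", text: "there" }] },
];
console.log(convertToModelMessages(uiMessages)[0].content); // Hi there
```

This is the step the reviewer wants the rule's examples to show explicitly: the UI-facing and model-facing message shapes differ, so a conversion must sit between `req.json()` and the `streamText` call.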
## Reference: Canonical Streaming Pattern

A correct streaming setup following this project's conventions:

```typescript
import {
  streamText,
  stepCountIs,
  smoothStream,
  type UIMessage,
} from "ai";
import { createGateway } from "@ai-sdk/gateway";

const gateway = createGateway({
  headers: aiGatewayHeaders,
});

const streamTextResult = streamText({
  model: gateway("openai/gpt-4o"),
  messages,
  tools: toolSet,
  stopWhen: stepCountIs(5),
  experimental_transform: smoothStream({
    delayInMs: 1000,
    chunking: "line",
  }),
  onError: ({ error }) => {
    // handle error
  },
});

const uiMessageStream = streamTextResult.toUIMessageStream({
  onFinish: async ({ messages: generateMessages }) => {
    // handle completion
  },
});
```
Canonical "Route Handler" pattern is incomplete: no HTTP response is returned.
The PR describes this as a "canonical Route Handler reference template for copy-paste," but the snippet ends after creating uiMessageStream (line 225) with no return statement. A copy-pasted route handler would silently return nothing.
It should close with something like:

```diff
 const uiMessageStream = streamTextResult.toUIMessageStream({
   onFinish: async ({ messages: generateMessages }) => {
     // handle completion
   },
 });
+
+return new Response(uiMessageStream, {
+  headers: { "Content-Type": "text/plain; charset=utf-8" },
+});
+// — or, using the SDK helper —
+// import { createUIMessageStreamResponse } from "ai";
+// return createUIMessageStreamResponse({ stream: uiMessageStream });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.cursor/rules/vercel-ai-sdk.mdc around lines 194 - 230, The example stops
after creating uiMessageStream (from streamTextResult.toUIMessageStream) and
never returns an HTTP response; update the canonical Route Handler snippet to
return the stream as the route's HTTP response (take uiMessageStream produced by
streamTextResult.toUIMessageStream and send it back from the handler with
appropriate status and streaming/content-type headers) so a copy-pasteable
handler actually returns the streaming response to the client.
Thanks for the detailed review! I went through your codebase, specifically the files generate-content.ts, both Postgres tools, and biome.json, to ensure every pattern matches what you actually use. Here are the changes:
One suggestion: since `update-ai-sdk.mdc` exists specifically for version upgrades, I added a brief note in Section 7 mentioning that `generateObject` is deprecated upstream in v6, so this rule will already be ready when you upgrade. I can remove this note if you prefer to keep the rule focused only on v5.
`packages/giselle/src/generations/**/*.ts`

How this rule differs from `update-ai-sdk.mdc`: that rule covers upgrading SDK versions (running `pnpm outdated`, updating the catalog). This rule is about writing correct code against the current pinned SDK version.
Summary
I started using the Vercel AI SDK when version 6 was released. I've noticed that when I ask an agent (Cursor, Antigravity, Claude) to write code, it often defaults to version 4 or 5 patterns.
These agents frequently confuse `toolContext` with `experimental_context`, use the wrong `generateObject` syntax, and get `execute` signatures wrong. To address this, I created a Cursor rule that teaches agents how to avoid these hallucinations. This approach completely stopped the errors for me.
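As a self-contained illustration of the context confusion described above: the `tool` helper and option names below are local stand-ins modeled on the discussion in this PR, not imports from the real `ai` package, so treat the exact shapes as assumptions.

```typescript
// Local stand-in for an SDK-style tool() helper, kept just rich enough to
// show where shared context arrives: in execute()'s second argument.
type ToolDef<Input, Output> = {
  description: string;
  execute: (input: Input, options: { experimental_context?: unknown }) => Output;
};
const tool = <I, O>(def: ToolDef<I, O>) => def;

const weatherTool = tool({
  description: "Look up the weather for a city",
  // Context arrives via options.experimental_context, not via a toolContext
  // field on the definition (the hallucination the rule targets).
  execute: (input: { city: string }, { experimental_context }) => {
    const userId = (experimental_context as { userId?: string } | undefined)?.userId;
    return `weather for ${input.city} (requested by ${userId ?? "anonymous"})`;
  },
});

console.log(weatherTool.execute({ city: "Tokyo" }, { experimental_context: { userId: "u1" } }));
// weather for Tokyo (requested by u1)
```

An agent that invents a `toolContext` property on the definition produces code that type-checks against nothing; the rule's job is to steer it to the second `execute` argument instead.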
Related Issue
vercel-labs/agent-skills#133
Changes
Added `.cursor/rules/vercel-ai-sdk.mdc` — a companion to the existing `update-ai-sdk.mdc` that focuses on writing correct v6 code. It covers 7 hallucination-prone API changes:

- `toolContext` → `experimental_context`
- `generateObject` → `Output.object()` (Legacy)
- `parameters` → `inputSchema`
- `.strict()` on Zod → `strict: true` on tool
- `toDataStreamResponse()` → `toUIMessageStreamResponse()`
- `maxSteps` → `stopWhen: stepCountIs(N)`
- `UIMessage` + `convertToModelMessages()`

All patterns are sourced from official Vercel AI SDK docs.
Testing
N/A — documentation-only change, no runtime code affected.
Other Information
Every code example includes a ✅ CORRECT / ❌ WRONG comparison with links to official docs. A canonical Route Handler reference is included at the end as a quick-copy template.