feat (ai/ui): useChat refactor #6243
base: v5
Conversation
```ts
if (id) {
  this.setMessages({ id, messages: [] });
  this.resetActiveResponse(id);
} else {
  const ids = Array.from(this.chats.keys());
  for (const id of ids) {
    this.clear(id);
  }
}
```
why not just evict from map? if not, error also needs to be cleared.
needs to abort ongoing streaming?
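A rough sketch (not this PR's implementation) of what map eviction plus abort and error cleanup could look like; the `ChatState` shape, `activeResponse`, and `abortController` fields are assumptions based on the snippet above:

```ts
// Hypothetical sketch: evict the chat from the map, abort any in-flight stream,
// and drop the stored error. The ChatState shape here is an assumption.
type ChatState = {
  messages: unknown[];
  error?: Error;
  activeResponse?: { abortController?: AbortController };
};

class ChatStoreEvictionSketch {
  private chats = new Map<string, ChatState>();

  clear(id?: string): void {
    if (id) {
      const chat = this.chats.get(id);
      chat?.activeResponse?.abortController?.abort(); // stop ongoing streaming
      if (chat) chat.error = undefined;               // clear any stored error
      this.chats.delete(id);                          // evict from the map
      return;
    }
    for (const chatId of Array.from(this.chats.keys())) {
      this.clear(chatId);
    }
  }
}
```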
```ts
onUpdate({ message, data, replaceLastMessage }) {
  mutateStatus('streaming');

  throttledMutate(
```
how is throttling working now
pushed up!
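For context, a minimal sketch of how a throttled mutate wrapper could work; the trailing-edge `throttle` helper below is illustrative and not part of this PR, and `mutate` stands in for whatever callback re-renders the messages:

```ts
// Minimal trailing-edge throttle: invoke fn at most once per `waitMs`,
// always delivering the latest arguments. Illustrative helper only.
function throttle<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number,
): (...args: Args) => void {
  let lastArgs: Args | undefined;
  let timer: ReturnType<typeof setTimeout> | undefined;

  return (...args: Args) => {
    lastArgs = args;
    if (timer !== undefined) return; // a flush is already scheduled
    timer = setTimeout(() => {
      timer = undefined;
      if (lastArgs !== undefined) fn(...lastArgs);
      lastArgs = undefined;
    }, waitMs);
  };
}

// Usage mirroring the snippet above (assumed wiring):
// const throttledMutate = throttleWaitMs ? throttle(mutate, throttleWaitMs) : mutate;
```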
```ts
const { subscribe, getMessages, getStatus, getError, setStatus } =
  useChatStore({
    store: chatStore.current,
  });
```
why is this necessary vs. using `chatStore.current` directly?
i'm leaning towards this so that we can encapsulate the stable selector functions (e.g. `getMessages`) in one spot! like a separation of concerns b/w subscription/reading (within the hook) and imperative mutations
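A rough sketch of the separation described here, assuming a `useSyncExternalStore`-based hook that wraps the store's `subscribe` and selector methods; the `ChatStoreLike` surface and the `'chat-status-changed'` event are inferred from the diff snippets in this thread, not the actual hook:

```ts
import { useCallback, useSyncExternalStore } from 'react';

// Assumed minimal store surface, inferred from the snippets in this PR.
interface ChatStoreLike {
  subscribe(options: {
    onStoreChange: () => void;
    eventType: 'chat-messages-changed' | 'chat-status-changed';
  }): () => void;
  // Must return a referentially stable array between change notifications.
  getMessages(chatId: string): unknown[];
}

// Hook sketch: subscription + reading live here; imperative mutations stay on the store.
function useChatMessages(store: ChatStoreLike, chatId: string) {
  const subscribe = useCallback(
    (onStoreChange: () => void) =>
      store.subscribe({ onStoreChange, eventType: 'chat-messages-changed' }),
    [store],
  );

  return useSyncExternalStore(subscribe, () => store.getMessages(chatId));
}
```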
```ts
    onStoreChange: callback,
    eventType: 'chat-messages-changed',
  }),
  () => getMessages(chatId),
```
Are the messages immutable? (even deeper in the tree, e.g. some tool invocations)
they are not truly immutable + should be treated as read-only (which we can add to documentation); only mutated via methods on the chat store! from our end, to truly enforce immutability, we'd likely want to introduce something like immer. would love to discuss this further tomorrow!
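To illustrate the immer idea mentioned above (a sketch, not part of this PR), updates could go through `produce` so consumers always receive fresh message objects; the `UIMessageLike` shape here is an assumption:

```ts
import { produce } from 'immer';

// Assumed minimal message shape for illustration.
interface UIMessageLike {
  id: string;
  role: 'user' | 'assistant';
  parts: Array<{ type: string; text?: string }>;
}

// Sketch: apply a mutation recipe immutably; the previous array is left untouched,
// so React consumers can rely on reference equality to detect changes.
function appendTextDelta(
  messages: readonly UIMessageLike[],
  messageId: string,
  delta: string,
): readonly UIMessageLike[] {
  return produce(messages, draft => {
    const message = draft.find(m => m.id === messageId);
    const lastPart = message?.parts.at(-1);
    if (lastPart?.type === 'text') {
      lastPart.text = (lastPart.text ?? '') + delta;
    }
  });
}
```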
Different chat applications have different message metadata that should be displayed in the UI. Metadata can include timestamps, names, model information, token usage, and more. Our current mechanisms are either limited (fixed `createdAt`) or difficult to use (untyped `data` and `annotations`).

- introduce generic UI message `metadata` property
- data stream parts
  - rename `finish-message` to `finish` stream part
  - introduce `start` stream part with `metadata` and `messageId` properties
  - introduce `metadata` stream part with `metadata` property
  - add `metadata` property to `finish`, `finish-step`, and `start-step` stream parts
  - `streamText` `DataStreamOptions` support `metadata` function for converting parts to metadata
  - rename `step-start` part to `start-step`
  - rename `step-finish` part to `finish-step`
  - remove deprecated properties from parts
  - add `start` part
  - rename `usage` property on `finish` part to `totalUsage`
- remove `appendResponseMessages` function
- remove `experimental_generateMessageId` property from `streamText` and `generateText`
- remove `ResponseMessage` `id` property
- `streamText` `DataStreamOptions` support `newMessageId`, `originalMessages`, and `onFinish` for ui message persistence
- remove `createdAt` property
- remove `annotations` property
- add generic `metadata` property
- remove `data` support
- remove options from `onFinish`
- add message metadata generics & `messageMetadataSchema`
- remove `data` support

* manually tested persistence examples (`next-openai`)
* manually tested metadata examples (`next-openai`)
* Re-introduce data mechanism in `useCompletion`
* Add message data parts
* Rework ui messages error handling
* persistence support in create data stream or as stream transformer
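A hedged sketch of how a typed `metadata` property on UI messages could be consumed; the `UIMessageSketch` type, the generic parameter position, and the `MyMetadata` shape are illustrative assumptions, not the exact API introduced here:

```ts
// Illustrative only: a message type with a generic, typed metadata slot.
interface UIMessageSketch<METADATA = unknown> {
  id: string;
  role: 'user' | 'assistant' | 'system';
  metadata?: METADATA;
  parts: Array<{ type: 'text'; text: string }>;
}

// Application-specific metadata (timestamps, model info, token usage, ...).
interface MyMetadata {
  createdAt: number;
  model?: string;
  totalTokens?: number;
}

function renderTimestamp(message: UIMessageSketch<MyMetadata>): string {
  return message.metadata?.createdAt
    ? new Date(message.metadata.createdAt).toISOString()
    : '';
}
```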
# Releases

## [email protected]

### Major Changes

- d964901:
  - remove setting temperature to `0` by default
  - remove `null` option from `DefaultSettingsMiddleware`
  - remove setting defaults for `temperature` and `stopSequences` in `ai` to enable middleware changes
- 0560977: chore (ai): improve consistency of generate text result, stream text result, and step result
- 516be5b: Move Image Model Settings into generate options (see below)
- bfbfc4c: feat (ai): streamText/generateText: totalUsage contains usage for all steps. usage is for a single step.
- ea7a7c9: feat (ui): UI message metadata
- 1409e13: chore (ai): remove experimental continueSteps

### Patch Changes

- 66af894: fix (ai): respect content order in toResponseMessages
- Updated dependencies [ea7a7c9]
- @ai-sdk/[email protected]

## @ai-sdk/[email protected] (provider and framework packages)

### Major Changes

- 516be5b: Move Image Model Settings into generate options

  Image Models no longer have settings. Instead, `maxImagesPerCall` can be passed directly to `generateImage()`. All other image settings can be passed to `providerOptions[provider]`.

  Before

  ```js
  await generateImage({
    model: luma.image('photon-flash-1', {
      maxImagesPerCall: 5,
      pollIntervalMillis: 500,
    }),
    prompt,
    n: 10,
  });
  ```

  After

  ```js
  await generateImage({
    model: luma.image('photon-flash-1'),
    prompt,
    n: 10,
    maxImagesPerCall: 5,
    providerOptions: {
      luma: { pollIntervalMillis: 5 },
    },
  });
  ```

  Pull Request: <vercel#6180>

### Patch Changes

- f07a6d4: fix(providers/google): accept nullish in safetyRatings
- cff5a9e: fix (ai-sdk/vue): fix status reactivity
- Updated dependencies [d964901, 0560977, 66af894, 516be5b, bfbfc4c, ea7a7c9, 1409e13, f07a6d4] - [email protected], @ai-sdk/[email protected]

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
## Background

`isLoading` in `useChat` has been replaced by `state`, which allows for more fine-grained ui state management.

## Summary

Remove deprecated `isLoading` helper from `useChat`.
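A sketch of the replacement pattern, assuming the hook exposes a status-like field with values such as `'submitted'` and `'streaming'` (as the store snippets above suggest); the exact field name and value set are assumptions here:

```tsx
import { useChat } from '@ai-sdk/react';

// Sketch: render a pending indicator from the fine-grained state instead of
// the removed boolean isLoading. Field name and values are assumptions.
export function PendingIndicator() {
  const { status } = useChat();

  // Previously: const { isLoading } = useChat();
  const pending = status === 'submitted' || status === 'streaming';

  return pending ? <p role="status">Thinking…</p> : null;
}
```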
…ions` (vercel#6280)

- `data` on `ChatRequestOptions` is unnecessary - you can use `body` instead.
- `allowEmptySubmit` on `ChatRequestOptions` is unnecessary - we can detect if there is any content.
- remove `data` and `allowEmptySubmit` from `ChatRequestOptions`
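A sketch of the suggested migration, assuming the form helpers available at the time of this change (`input`, `handleInputChange`, `handleSubmit`) and using a hypothetical `imageUrl` extra field:

```tsx
import { useChat } from '@ai-sdk/react';

// Sketch of the migration: per-request extras go through `body` instead of `data`.
export function ChatForm({ imageUrl }: { imageUrl?: string }) {
  const { input, handleInputChange, handleSubmit } = useChat();

  return (
    <form
      onSubmit={event =>
        // Previously: handleSubmit(event, { data: { imageUrl } })
        handleSubmit(event, { body: { imageUrl } })
      }
    >
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}
```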
`onResponse` callback is unrelated to streaming and should not be used. Remove `onResponse` callback from `useChat` and `useCompletion`
What was originally a general purpose data stream is now very specific to streamed `UIMessages`. The original name obfuscates what it does. Rename `DataStream*` to `UIMessageStream*`
# Releases

## [email protected]

### Major Changes

- e7dc6c7: chore (ai): remove onResponse callback
- a34eb39: chore (ai): remove `data` and `allowEmptySubmit` from `ChatRequestOptions`
- b33ed7a: chore (ai): rename DataStream* to UIMessageStream*
- 765f1cd: chore (ai): remove deprecated useChat isLoading helper

## @ai-sdk/[email protected] (framework and utility packages)

### Patch Changes

- Updated dependencies [e7dc6c7, a34eb39, b33ed7a, 765f1cd]
- [email protected]

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
## Background

Currently chat state is managed in the `useChat` implementations in the React/Svelte/Vue packages. This leads to code duplication, issues with extensibility, and bugs in state synchronization (e.g. adding tool results while a stream is ongoing). Also, the SWR usage can sometimes collide with the SWR setup that users have.

## Summary

- introduce `ChatStore` and `ChatTransport` concepts that are used in `useChat` (React)
  - `ChatStore` centralizes client-side chat management
  - `ChatTransport` is an extensible interface (open-closed pattern) that manages backend interaction (see the sketch at the end of this description)
- introduce `defaultChatStore` factory function
- changes `useChat` (React) parameters
- fix concurrent `addToolResult` when streaming
- remove SWR usage from `useChat` (React)

## Verification

Tested with `examples/next-openai` chat.

## Future Work

- add mechanism for loading the initial chat messages
- use chat store in svelte
- use chat store in vue
- rename id in post request to `chatId`
- reintroduce and test throttling in useChat (React)
- add clearChat functionality to store
- split stream protocols into separate chat transports

## Related Issues

- Initial exploration PR #6243

Co-authored-by: Grace Yun <[email protected]>
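A hedged sketch of the `ChatTransport` idea from the summary above: an interface that isolates backend interaction so alternative protocols can be plugged in. The method name, option shapes, and the HTTP transport below are assumptions for illustration, not the interface shipped by the SDK.

```ts
// Sketch of a transport boundary (open-closed): the store talks to this
// interface, concrete transports decide how to reach the backend.
interface ChatTransportSketch<MESSAGE> {
  submitMessages(options: {
    chatId: string;
    messages: MESSAGE[];
    abortSignal?: AbortSignal;
  }): Promise<ReadableStream<Uint8Array>>;
}

// Example: a default HTTP transport posting to an API route (assumed path).
class HttpChatTransport<MESSAGE> implements ChatTransportSketch<MESSAGE> {
  constructor(private api: string = '/api/chat') {}

  async submitMessages({
    chatId,
    messages,
    abortSignal,
  }: {
    chatId: string;
    messages: MESSAGE[];
    abortSignal?: AbortSignal;
  }): Promise<ReadableStream<Uint8Array>> {
    const response = await fetch(this.api, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ chatId, messages }),
      signal: abortSignal,
    });
    if (!response.ok || !response.body) {
      throw new Error(`Chat request failed: ${response.status}`);
    }
    return response.body; // the stream is consumed and applied by the store
  }
}
```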
Note: This is a new branch pulling latest changes from #5770
## Background & Summary

Replaces use of SWR for managing CRUD for chat state (messages, status, and error) in the `useChat` hook (the scope of this PR only includes changes to the React `useChat` hook). The new `ChatStore` class is now the single source of truth across:

- `useChat`
- `callChatApi`
- `processChatTextResponse` / `processChatResponse`
## Key Changes

- **ChatStore:** A subscription-based class that manages chat state and notifies consumers of updates.
- **Unified State Management:** All chat-related modules now interact with the same state instance, ensuring consistency and avoiding race conditions (note: this was commonly faced when users toggled experimental throttle).
- **Improved State Updates:** The subscription model allows for more granular and reliable updates, reducing the risk of race conditions or stale state.

Previously, chat state was managed in a disjointed manner, leading to inconsistent updates and potential synchronization issues between components and API calls. By centralizing state management in `ChatStore`, we ensure consistent state across all consumers, easier debugging and extension of chat logic, and better performance by avoiding unnecessary re-renders or fetches.
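A minimal sketch of the subscription model described above, assuming the store surface shown in the review snippets (`subscribe` with an `eventType`, `getMessages`, `setMessages`); this illustrates the pattern rather than reproducing the PR's code:

```ts
type ChatEvent = 'chat-messages-changed' | 'chat-status-changed';

interface Subscriber {
  onStoreChange: () => void;
  eventType: ChatEvent;
}

// Sketch: one state instance, many consumers; every mutation goes through the
// store and notifies subscribers registered for the matching event type.
class ChatStoreSketch<MESSAGE> {
  private messages = new Map<string, MESSAGE[]>();
  private subscribers = new Set<Subscriber>();

  subscribe(subscriber: Subscriber): () => void {
    this.subscribers.add(subscriber);
    return () => this.subscribers.delete(subscriber);
  }

  getMessages(id: string): MESSAGE[] {
    return this.messages.get(id) ?? [];
  }

  setMessages({ id, messages }: { id: string; messages: MESSAGE[] }): void {
    this.messages.set(id, messages);
    this.notify('chat-messages-changed');
  }

  private notify(eventType: ChatEvent): void {
    for (const subscriber of this.subscribers) {
      if (subscriber.eventType === eventType) subscriber.onStoreChange();
    }
  }
}
```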
## Considerations

- We considered introducing a separate `Chat` class; this would have meant `ChatStore` would act as a manager of chats instead of holding the core business logic for per-chat CRUD. We decided against this to avoid de-centralizing state management.

## Tasks

- A changeset for the relevant packages has been added (run `pnpm changeset` in the project root)
- Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)

## Future Work
## Related Issues