
feat (ai/ui): useChat refactor #6243


Draft · wants to merge 29 commits into base `v5`

Conversation

@iteratetograceness (Collaborator) commented May 9, 2025

Note: This is a new branch pulling latest changes from #5770

Background & Summary

Replaces the use of SWR for managing CRUD operations on chat state (messages, status, and error) in the `useChat` hook (the scope of this PR only covers the React `useChat` hook). The new `ChatStore` class is now the single source of truth across:

  • useChat
  • callChatApi
  • processChatTextResponse/processChatResponse

Key Changes

ChatStore:
A subscription-based class that manages chat state and notifies consumers of updates.
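
For illustration, a minimal sketch of the subscription pattern (an assumed shape, not this PR's actual implementation; the real `ChatStore` manages per-chat state and exposes a richer API):

```ts
type Message = { id: string; role: 'user' | 'assistant'; content: string };
type ChatState = {
  messages: Message[];
  status: 'ready' | 'streaming' | 'error';
  error?: Error;
};

class MiniChatStore {
  private state: ChatState = { messages: [], status: 'ready' };
  private subscribers = new Set<() => void>();

  // Consumers register a callback and get back an unsubscribe function.
  subscribe(callback: () => void): () => void {
    this.subscribers.add(callback);
    return () => {
      this.subscribers.delete(callback);
    };
  }

  getState(): ChatState {
    return this.state;
  }

  setMessages(messages: Message[]) {
    // Replace state instead of mutating it, so snapshot comparisons stay cheap.
    this.state = { ...this.state, messages };
    this.subscribers.forEach(notify => notify());
  }
}
```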

Unified State Management:
All chat-related modules now interact with the same state instance, ensuring consistency and avoiding race conditions (previously a common failure mode when users toggled the experimental throttle option).

Improved State Updates:
The subscription model allows for more granular and reliable updates, reducing the risk of race conditions or stale state.

Previously, chat state was managed in a disjointed manner, leading to inconsistent updates and potential synchronization issues between components and API calls. By centralizing state management in `ChatStore`, we ensure consistent state across all consumers, make chat logic easier to debug and extend, and improve performance by avoiding unnecessary re-renders and fetches.
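
On the React side, this subscription model maps naturally onto `useSyncExternalStore`. A hedged sketch using the `MiniChatStore` shape above (the actual hook wires up per-chat event types such as `'chat-messages-changed'`, visible in the review snippets below):

```ts
import { useSyncExternalStore } from 'react';

// Sketch only: subscribe/getState follow the MiniChatStore sketch above.
function useChatMessages(store: MiniChatStore) {
  return useSyncExternalStore(
    callback => store.subscribe(callback), // re-render on store updates
    () => store.getState().messages, // stable snapshot between updates
  );
}
```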

Considerations

  1. Abstraction Layer: we briefly entertained further abstraction through the introduction of a `Chat` class; this would have meant `ChatStore` would act as a manager of chats instead of holding the core business logic for per-chat CRUD. We decided against this to avoid decentralizing state management.
  2. [WIP]

Tasks

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • Formatting issues have been fixed (run pnpm prettier-fix in the project root)

Future Work

Related Issues

@iteratetograceness changed the base branch from main to v5 on May 9, 2025, 03:11
Comment on lines +518 to +527
// Clears a single chat, or every chat when no id is given.
// (Signature reconstructed from the recursive call below.)
clear(id?: string) {
  if (id) {
    this.setMessages({ id, messages: [] });
    this.resetActiveResponse(id);
  } else {
    const ids = Array.from(this.chats.keys());
    for (const id of ids) {
      this.clear(id);
    }
  }
}
Collaborator:
Why not just evict from the map? If not, the error also needs to be cleared.

Does this also need to abort ongoing streaming?

onUpdate({ message, data, replaceLastMessage }) {
  mutateStatus('streaming');

  throttledMutate(
Collaborator:

How is throttling working now?

Collaborator (author):

pushed up!

const { subscribe, getMessages, getStatus, getError, setStatus } =
  useChatStore({
    store: chatStore.current,
  });
Collaborator:

Why is this necessary vs. using `chatStore.current` directly?

Collaborator (author):

I'm leaning towards this so that we can encapsulate the stable selector functions (e.g. `getMessages`) in one spot! It gives a separation of concerns between subscription/reading (within the hook) and imperative mutations.

onStoreChange: callback,
eventType: 'chat-messages-changed',
}),
() => getMessages(chatId),
Collaborator:

Are the messages immutable? (even deeper in the tree, e.g. some tool invocations)

Collaborator (author):

They are not truly immutable and should be treated as read-only (which we can add to documentation); they are only mutated via methods on the chat store! To truly enforce immutability on our end, we'd likely want to introduce something like Immer; would love to discuss this further tomorrow!
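
If we went that route, a minimal sketch of what an Immer-backed update could look like (purely hypothetical; `produce` returns a structurally shared, auto-frozen next state):

```ts
import { produce } from 'immer';

type Message = { id: string; role: string; content: string };

// Hypothetical helper: returns a new frozen array rather than mutating in
// place, so messages handed to consumers are genuinely read-only.
function appendMessage(messages: readonly Message[], message: Message) {
  return produce(messages, draft => {
    draft.push(message);
  });
}
```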

iteratetograceness and others added 13 commits May 11, 2025 22:17
Different chat applications have different message metadata that should
be displayed in the UI. Metadata can include timestamps, names, model
information, token usage, and more.

Our current mechanisms are either limited (fixed `createdAt`) or
difficult to use (untyped `data` and `annotations`).

- introduce generic UI message `metadata` property
- data stream parts
  - rename `finish-message` to `finish` stream part
  - introduce `start` stream part with `metadata` and `messageId` properties
  - introduce `metadata` stream part with `metadata` property
- add `metadata` property to `finish`, `finish-step`, and `start-step` stream parts
- `streamText` `DataStreamOptions` support `metadata` function for converting parts to metadata

- rename `step-start` part to `start-step`
- rename `step-finish` part to `finish-step`
- remove deprecated properties from parts
- add `start` part
- rename `usage` property on `finish` part to `totalUsage`

- remove `appendResponseMessages` function
- remove `experimental_generateMessageId` property from `streamText` and
`generateText`
- remove `ResponseMessage` `id` property
- `streamText` `DataStreamOptions` support `newMessageId`, `originalMessages`, and `onFinish` for UI message persistence

- remove `createdAt` property
- remove `annotations` property
- add generic `metadata` property

- remove `data` support
- remove options from `onFinish`
- add message metadata generics & `messageMetadataSchema`

- remove `data` support

* manually tested persistence examples (`next-openai`)
* manually tested metadata examples (`next-openai`)
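
As a hedged sketch of what the generic `metadata` property enables on the consumer side (the schema fields below are illustrative assumptions, not SDK defaults):

```ts
import { z } from 'zod';

// Illustrative schema; the SDK leaves the metadata shape up to the application.
const messageMetadataSchema = z.object({
  createdAt: z.number(),
  model: z.string().optional(),
  totalTokens: z.number().optional(),
});

// Typed access replaces the removed untyped `data` / `annotations` mechanisms.
type MessageMetadata = z.infer<typeof messageMetadataSchema>;
```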

* Re-introduce data mechanism in `useCompletion`
* Add message data parts
* Rework UI message error handling
* persistence support in create data stream or as stream transformer
# Releases
## [email protected]

### Major Changes

-   d964901:
    -   remove setting temperature to `0` by default
    -   remove `null` option from `DefaultSettingsMiddleware`
    -   remove setting defaults for `temperature` and `stopSequences` in `ai` to enable middleware changes

- 0560977: chore (ai): improve consistency of generate text result,
stream text result, and step result

-   516be5b: ### Move Image Model Settings into generate options

    Image Models no longer have settings. Instead, `maxImagesPerCall` can be
    passed directly to `generateImage()`. All other image settings can be
    passed to `providerOptions[provider]`.

    Before:

    ```js
    await generateImage({
      model: luma.image('photon-flash-1', {
        maxImagesPerCall: 5,
        pollIntervalMillis: 500,
      }),
      prompt,
      n: 10,
    });
    ```

    After:

    ```js
    await generateImage({
      model: luma.image('photon-flash-1'),
      prompt,
      n: 10,
      maxImagesPerCall: 5,
      providerOptions: {
        luma: { pollIntervalMillis: 5 },
      },
    });
    ```

    Pull Request: <vercel#6180>

- bfbfc4c: feat (ai): streamText/generateText: totalUsage contains usage
for all steps. usage is for a single step.

-   ea7a7c9: feat (ui): UI message metadata

-   1409e13: chore (ai): remove experimental continueSteps

### Patch Changes

-   66af894: fix (ai): respect content order in toResponseMessages
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [516be5b]
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [516be5b]
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [516be5b]
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [f07a6d4]
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   ea7a7c9: feat (ui): UI message metadata

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [516be5b]
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Major Changes

-   516be5b: Move Image Model Settings into generate options (same entry as under `[email protected]` above; Pull Request: <vercel#6180>)

### Patch Changes

-   Updated dependencies [516be5b]
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [516be5b]
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [516be5b]
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   f07a6d4: fix(providers/google): accept nullish in safetyRatings
-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [d964901]
-   Updated dependencies [0560977]
-   Updated dependencies [66af894]
-   Updated dependencies [516be5b]
-   Updated dependencies [bfbfc4c]
-   Updated dependencies [ea7a7c9]
-   Updated dependencies [1409e13]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [d964901]
-   Updated dependencies [0560977]
-   Updated dependencies [66af894]
-   Updated dependencies [516be5b]
-   Updated dependencies [bfbfc4c]
-   Updated dependencies [ea7a7c9]
-   Updated dependencies [1409e13]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [d964901]
-   Updated dependencies [0560977]
-   Updated dependencies [66af894]
-   Updated dependencies [516be5b]
-   Updated dependencies [bfbfc4c]
-   Updated dependencies [ea7a7c9]
-   Updated dependencies [1409e13]
    -   [email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [d964901]
-   Updated dependencies [0560977]
-   Updated dependencies [66af894]
-   Updated dependencies [516be5b]
-   Updated dependencies [bfbfc4c]
-   Updated dependencies [ea7a7c9]
-   Updated dependencies [1409e13]
    -   [email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [d964901]
-   Updated dependencies [0560977]
-   Updated dependencies [66af894]
-   Updated dependencies [516be5b]
-   Updated dependencies [bfbfc4c]
-   Updated dependencies [ea7a7c9]
-   Updated dependencies [1409e13]
    -   [email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [ea7a7c9]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   cff5a9e: fix (ai-sdk/vue): fix status reactivity
-   Updated dependencies [d964901]
-   Updated dependencies [0560977]
-   Updated dependencies [66af894]
-   Updated dependencies [516be5b]
-   Updated dependencies [bfbfc4c]
-   Updated dependencies [ea7a7c9]
-   Updated dependencies [1409e13]
    -   [email protected]
    -   @ai-sdk/[email protected]

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
## Background

`isLoading` in `useChat` has been replaced by `status`, which allows for more fine-grained UI state management.

## Summary

Remove deprecated `isLoading` helper from `useChat`.
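
A minimal migration sketch, assuming the v5 `status` values `'submitted' | 'streaming' | 'ready' | 'error'`:

```tsx
import { useChat } from '@ai-sdk/react';

function Chat() {
  // Before (removed): const { isLoading } = useChat();
  // After: derive the loading flag from `status`.
  const { status } = useChat();
  const isLoading = status === 'submitted' || status === 'streaming';
  return isLoading ? <p>Loading…</p> : null;
}
```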
chore (ai): remove `data` and `allowEmptySubmit` from `ChatRequestOptions` (vercel#6280)

- `data` on `ChatRequestOptions` is unnecessary - you can use `body`
instead.
- `allowEmptySubmit` on `ChatRequestOptions` is unnecessary - we can
detect if there is any content.

- remove `data` and `allowEmptySubmit` from `ChatRequestOptions`
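
A hedged before/after sketch of the replacement (assuming the standard `handleSubmit(event, options)` shape; `sessionId` is an illustrative field):

```tsx
import { useChat } from '@ai-sdk/react';

function ChatForm({ sessionId }: { sessionId: string }) {
  const { handleSubmit } = useChat();
  return (
    <form
      onSubmit={event =>
        // Before (removed): handleSubmit(event, { data: { sessionId } })
        handleSubmit(event, { body: { sessionId } }) // merged into the request body
      }
    >
      {/* inputs elided */}
    </form>
  );
}
```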
`onResponse` callback is unrelated to streaming and should not be used.

Remove `onResponse` callback from `useChat` and `useCompletion`
What was originally a general-purpose data stream is now very specific to streamed `UIMessages`. The original name obfuscates what it does.

Rename `DataStream*` to `UIMessageStream*`
# Releases
## [email protected]

### Major Changes

-   e7dc6c7: chore (ai): remove onResponse callback
- a34eb39: chore (ai): remove `data` and `allowEmptySubmit` from
`ChatRequestOptions`
-   b33ed7a: chore (ai): rename `DataStream*` to `UIMessageStream*`
-   765f1cd: chore (ai): remove deprecated useChat isLoading helper

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e7dc6c7]
-   Updated dependencies [a34eb39]
-   Updated dependencies [b33ed7a]
-   Updated dependencies [765f1cd]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e7dc6c7]
-   Updated dependencies [a34eb39]
-   Updated dependencies [b33ed7a]
-   Updated dependencies [765f1cd]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e7dc6c7]
-   Updated dependencies [a34eb39]
-   Updated dependencies [b33ed7a]
-   Updated dependencies [765f1cd]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e7dc6c7]
-   Updated dependencies [a34eb39]
-   Updated dependencies [b33ed7a]
-   Updated dependencies [765f1cd]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e7dc6c7]
-   Updated dependencies [a34eb39]
-   Updated dependencies [b33ed7a]
-   Updated dependencies [765f1cd]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e7dc6c7]
-   Updated dependencies [a34eb39]
-   Updated dependencies [b33ed7a]
-   Updated dependencies [765f1cd]
    -   [email protected]

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
lgrammel added a commit that referenced this pull request May 14, 2025
## Background
Currently chat state is managed in the `useChat` implementations in the React/Svelte/Vue packages. This leads to code duplication, issues with extensibility, and bugs in state synchronization (e.g. adding tool results while a stream is ongoing). Also, the SWR usage can sometimes collide with the SWR setup that users have.

## Summary
* introduce `ChatStore` and `ChatTransport` concepts that are used in `useChat` (React)
    * `ChatStore` centralizes client-side chat management
    * `ChatTransport` is an extensible interface (open-closed pattern) that manages backend interaction
* introduce `defaultChatStore` factory function (see the usage sketch below)
* change `useChat` (React) parameters
* fix concurrent `addToolResult` when streaming
* remove SWR usage from `useChat` (React)
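
A hedged usage sketch of how these pieces could fit together (option and parameter names here are assumptions based on the summary above, not the final API):

```tsx
import { defaultChatStore } from 'ai';
import { useChat } from '@ai-sdk/react';

// Assumed options: a default transport talking to an HTTP chat endpoint.
const chatStore = defaultChatStore({ api: '/api/chat' });

function Chat() {
  // Assumed parameter: the hook reads from and subscribes to the shared store.
  const { messages, status } = useChat({ chatStore });
  return <pre>{status}: {messages.length} messages</pre>;
}
```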

![IMG_7349 Large](https://github.com/user-attachments/assets/f2d9d007-ff8a-488e-8869-61bda6109370)

## Verification
Tested with `examples/next-openai` chat.

## Future Work
* add mechanism for loading the initial chat messages
* use chat store in svelte
* use chat store in vue
* rename id in post request to `chatId`
* reintroduce and test throttling in useChat (React)
* add clearChat functionality to store
* split stream protocols into separate chat transports

## Related Issues
* Initial exploration PR #6243

---------

Co-authored-by: Grace Yun <[email protected]>