Reverted: Commit 349c24c
This commit was reverted due to a regression that was breaking the main branch. The changes affected CI configuration syncing between projects.
Changes reverted:
- Reverted workflow changes in `.github/workflows/autofix.yml`
- Reverted `nx.json` configuration changes
- Reverted `package.json` script changes
- Reverted ai-solid package changes (tsconfig, test utilities, package scripts)
- Restored `scripts/clean.sh`
- Renamed `scripts/generate-docs.ts` back to `scripts/generateDocs.ts`
- Restored size-limit configuration and dependencies
- Restored pnpm overrides
New Package: Framework-agnostic headless client for TanStack AI chat functionality.
Installation:

```bash
npm install @tanstack/ai-client
```

Features:
- ✅ Framework-agnostic (works with React, Vue, Svelte, vanilla JS, etc.)
- ✅ Headless client with state management
- ✅ Connection adapters for SSE, HTTP streams, and server functions
- ✅ Stream processing with smart chunking strategies
- ✅ Automatic tool call handling
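Since the package is headless, one way to picture the "state management" bullet is a tiny framework-free store that any UI layer can subscribe to. This is a toy illustration of the pattern only, not the library's actual API (the store and its method names here are invented):

```typescript
// Minimal subscribable store: the headless-client pattern in miniature.
// State lives outside any framework; React/Vue/vanilla code just subscribes.
function createStore<T>(initial: T) {
  let state = initial
  const listeners = new Set<(s: T) => void>()
  return {
    get: () => state,
    set(next: T) {
      state = next
      listeners.forEach((l) => l(next))
    },
    subscribe(listener: (s: T) => void) {
      listeners.add(listener)
      // Return an unsubscribe function
      return () => listeners.delete(listener)
    },
  }
}

// Any framework (or none) can render from this store
const messages = createStore<string[]>([])
const unsubscribe = messages.subscribe((list) => console.log(list.length))
messages.set(['hello'])
unsubscribe()
```

The same store instance can back a React hook, a Vue computed, or a plain DOM renderer, which is what makes the client framework-agnostic.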
New Package: Pre-built React UI components for chat interfaces.
Installation:

```bash
npm install @tanstack/ai-react-ui
```

Features:
- ✅ Pre-built chat UI components
- ✅ Customizable styling
- ✅ Works with `@tanstack/ai-react`
New Package: Python utilities for converting AI provider events to TanStack AI StreamChunk format.
Installation:

```bash
pip install tanstack-ai
```

Features:
- ✅ Message formatting for Anthropic and OpenAI
- ✅ Stream chunk conversion from provider events
- ✅ SSE formatting utilities
- ✅ Type-safe with Pydantic models
Usage:

```python
from tanstack_ai import StreamChunkConverter, format_sse_chunk

converter = StreamChunkConverter(model="claude-3-haiku-20240307", provider="anthropic")

async for event in anthropic_stream:
    chunks = await converter.convert_event(event)
    for chunk in chunks:
        yield format_sse_chunk(chunk)
```

See: Package Documentation | Python FastAPI Example
New Package: PHP utilities for converting AI provider events to TanStack AI StreamChunk format.
Installation:

```bash
composer require tanstack/ai
```

Features:
- ✅ Message formatting for Anthropic and OpenAI
- ✅ Stream chunk conversion from provider events
- ✅ SSE formatting utilities
- ✅ PHP 8.1+ with type safety
Usage:

```php
use TanStack\AI\StreamChunkConverter;
use TanStack\AI\SSEFormatter;

$converter = new StreamChunkConverter(
    model: "claude-3-haiku-20240307",
    provider: "anthropic"
);

foreach ($anthropicStream as $event) {
    $chunks = $converter->convertEvent($event);
    foreach ($chunks as $chunk) {
        echo SSEFormatter::formatChunk($chunk);
    }
}
```

See: Package Documentation | PHP Slim Example
New Example: Framework-free chat application using pure JavaScript and @tanstack/ai-client.
Features:
- ✅ Pure vanilla JavaScript (no frameworks!)
- ✅ Real-time streaming with `@tanstack/ai-client`
- ✅ Beautiful, responsive UI
- ✅ Connects to Python FastAPI backend
See: Vanilla Chat Example
New Example: FastAPI server that streams AI responses in SSE format.
Features:
- ✅ FastAPI with SSE streaming
- ✅ Converts Anthropic/OpenAI events to StreamChunk format
- ✅ Compatible with `@tanstack/ai-client`
- ✅ Tool call support
New Example: PHP Slim Framework server with Anthropic and OpenAI support.
Features:
- ✅ Slim Framework with SSE streaming
- ✅ Converts Anthropic/OpenAI events to StreamChunk format
- ✅ Compatible with `@tanstack/ai-client`
- ✅ PHP 8.1+ with type safety
See: PHP Slim Example
New Feature: Smart chunking strategies for optimal UX in @tanstack/ai-client.
Built-in Strategies:
- `ImmediateStrategy` - Emit content immediately
- `PunctuationStrategy` - Emit at sentence boundaries
- `BatchStrategy` - Batch N characters before emitting
- `WordBoundaryStrategy` - Emit at word boundaries
- `CompositeStrategy` - Combine multiple strategies
Usage:

```typescript
import {
  ChatClient,
  fetchServerSentEvents,
  PunctuationStrategy,
} from '@tanstack/ai-client'

const client = new ChatClient({
  connection: fetchServerSentEvents('/api/chat'),
  chunkingStrategy: new PunctuationStrategy(),
})
```

See: Stream Processing Quick Start
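To make the strategy idea concrete, here is a toy reimplementation of the behavior a punctuation strategy implies: buffer incoming deltas and emit only when the buffer ends at a sentence boundary. This is an illustrative sketch of the technique, not the library's actual `PunctuationStrategy` source:

```typescript
// Toy punctuation-based chunker: holds partial sentences back so the UI
// renders complete sentences instead of jittery token fragments.
function punctuationChunker() {
  let buffer = ''
  return {
    // Returns text to emit for this delta, or null to keep buffering.
    push(delta: string): string | null {
      buffer += delta
      if (/[.!?]\s*$/.test(buffer)) {
        const out = buffer
        buffer = ''
        return out
      }
      return null
    },
    // Flush whatever remains when the stream ends.
    flush(): string {
      const out = buffer
      buffer = ''
      return out
    },
  }
}

const chunker = punctuationChunker()
for (const delta of ['Hel', 'lo world.', ' How are', ' you']) {
  const out = chunker.push(delta)
  if (out !== null) console.log(out) // emits "Hello world." at the boundary
}
console.log(chunker.flush()) // remaining " How are you"
```

A batching or word-boundary strategy differs only in the emit condition, which is why the strategies compose cleanly.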
New Feature: @tanstack/ai-client now uses flexible connection adapters for streaming.
API:

```typescript
import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client'

const client = new ChatClient({
  connection: fetchServerSentEvents('/api/chat', {
    headers: { Authorization: 'Bearer token' },
  }),
})
```

Benefits:
- ✅ Support SSE, HTTP streams, WebSockets, server functions, etc.
- ✅ Easy to test with custom adapters
- ✅ Extensible for any streaming scenario
Built-in Adapters:
- `fetchServerSentEvents(url, options)` - For SSE (default)
- `fetchHttpStream(url, options)` - For newline-delimited JSON
- `stream(factory)` - For direct async iterables (server functions)
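The adapter contract boils down to an object with an async-generator `connect` and an `abort`. As a sketch of that shape (using a local stand-in interface, not the exact `ConnectionAdapter` type from the package), a replay adapter useful for testing might look like:

```typescript
// Local stand-in for the adapter contract: connect() yields chunks,
// abort() cancels the stream. Not the library's exact type.
interface Adapter<Chunk> {
  connect(messages: unknown[]): AsyncGenerator<Chunk>
  abort(): void
}

// In-memory adapter that replays canned chunks; handy for unit tests
// because no network is involved.
function replayAdapter<Chunk>(chunks: Chunk[]): Adapter<Chunk> {
  let aborted = false
  return {
    async *connect() {
      for (const chunk of chunks) {
        if (aborted) return
        yield chunk
      }
    },
    abort() {
      aborted = true
    },
  }
}
```

This is the "easy to test with custom adapters" benefit in practice: a test can feed the client a deterministic chunk sequence.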
With React:

```typescript
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const chat = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})
```

Create Custom Adapters:
```typescript
import type { ConnectionAdapter } from '@tanstack/ai-client'

// Keep the socket in scope so abort() can close it
let ws: WebSocket | undefined

const wsAdapter: ConnectionAdapter = {
  async *connect(messages, data) {
    ws = new WebSocket('wss://api.example.com')
    // ... WebSocket logic
  },
  abort() {
    ws?.close()
  },
}

const chat = useChat({ connection: wsAdapter })
```

Documentation:
- 📖 Connection Adapters Guide - Complete guide
- 📖 Connection Adapters API - API reference
New Feature: agentLoopStrategy parameter replaces maxIterations with a flexible strategy pattern.
Before:

```typescript
const stream = ai.chat({
  model: "gpt-4",
  messages: [...],
  tools: [...],
  maxIterations: 5,
});
```

After:

```typescript
import { maxIterations, untilFinishReason, combineStrategies } from "@tanstack/ai";

const stream = ai.chat({
  model: "gpt-4",
  messages: [...],
  tools: [...],
  agentLoopStrategy: maxIterations(5), // Or custom strategy
});
```

Built-in Strategies:
- `maxIterations(max)` - Continue for max iterations
- `untilFinishReason(reasons)` - Stop on specific finish reasons
- `combineStrategies(strategies)` - Combine multiple strategies
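The semantics suggested by `combineStrategies` (continue only while every strategy agrees) can be sketched in a few lines. This is an illustrative reimplementation of the idea, not the library source, and the loop-state shape is assumed:

```typescript
// Assumed loop state: the snapshot each strategy inspects per iteration.
type LoopState = { iterationCount: number; messages: unknown[] }

// A strategy returns true to continue the agent loop, false to stop.
type Strategy = (state: LoopState) => boolean

// Continue while we are under the iteration cap.
const maxIter = (max: number): Strategy =>
  ({ iterationCount }) => iterationCount < max

// Continue only while EVERY strategy agrees; any veto stops the loop.
const combine = (strategies: Strategy[]): Strategy =>
  (state) => strategies.every((s) => s(state))
```

Because a strategy is just a predicate over loop state, ad-hoc lambdas (like the `messages.length < 100` guard shown later) compose with the built-ins for free.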
Refactoring: Tool execution logic extracted into separate ToolCallManager class.
Benefits:
- ✅ Reduced `chat()` method size from ~180 lines to ~85 lines
- ✅ Independently testable
- ✅ Cleaner separation of concerns
The chat() method has been split into two distinct methods with different behaviors:
Before:

```typescript
// Promise mode
const result = await ai.chat({
  model: "gpt-4",
  messages: [...],
  as: "promise"
});

// Stream mode
const stream = ai.chat({
  model: "gpt-4",
  messages: [...],
  as: "stream"
});

// Response mode
const response = ai.chat({
  model: "gpt-4",
  messages: [...],
  as: "response"
});
```

After:

```typescript
// Promise-based completion (no automatic tool execution)
const result = await ai.chatCompletion({
  model: "gpt-4",
  messages: [...]
});

// Streaming with automatic tool execution loop
const stream = ai.chat({
  model: "gpt-4",
  messages: [...],
  tools: [weatherTool] // Auto-executed when called
});

// HTTP streaming
const stream = ai.chat({
  model: "gpt-4",
  messages: [...]
});
return toStreamResponse(stream); // Exported from @tanstack/ai
```

The chat() method now includes an automatic tool execution loop:
```typescript
import { chat, tool, maxIterations } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

const stream = chat({
  adapter: openai(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: "What's the weather in Paris?" }],
  tools: [weatherTool],
  agentLoopStrategy: maxIterations(5), // Optional: control loop
})

// SDK automatically:
// 1. Detects tool calls from model
// 2. Executes tool.execute() functions
// 3. Adds results to conversation
// 4. Continues conversation with model
// 5. Emits tool_call and tool_result chunks
```

New Chunk Types:
- `tool_call` - Model is calling a tool
- `tool_result` - Tool execution result (new!)
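A consumer that distinguishes the new chunk types might switch on `chunk.type`. The chunk shapes below are illustrative stand-ins for this sketch, not the library's exact `StreamChunk` definitions:

```typescript
// Assumed, simplified chunk union for illustration only.
type StreamChunk =
  | { type: 'content'; delta: string }
  | { type: 'tool_call'; name: string }
  | { type: 'tool_result'; name: string; result: unknown }

// Render each chunk kind differently, e.g. showing tool activity inline.
function describe(chunk: StreamChunk): string {
  switch (chunk.type) {
    case 'content':
      return chunk.delta
    case 'tool_call':
      return `[calling ${chunk.name}]`
    case 'tool_result':
      return `[${chunk.name} returned]`
  }
}
```

Handling `tool_result` explicitly is what lets a UI show "ran weather lookup" style progress instead of a silent pause while tools execute.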
Control the tool execution loop with flexible strategies:
```typescript
import {
  maxIterations,
  untilFinishReason,
  combineStrategies,
} from '@tanstack/ai'

// Built-in strategies
agentLoopStrategy: maxIterations(10)
agentLoopStrategy: untilFinishReason(['stop', 'length'])
agentLoopStrategy: combineStrategies([
  maxIterations(10),
  ({ messages }) => messages.length < 100,
])

// Custom strategy
agentLoopStrategy: ({ iterationCount, messages, finishReason }) => {
  return iterationCount < 10 && messages.length < 50
}
```

Tool execution logic extracted into a testable class:
```typescript
import { ToolCallManager } from '@tanstack/ai'

const manager = new ToolCallManager(tools)

// Accumulate tool calls from stream
manager.addToolCallChunk(chunk)

// Check if tools need execution
if (manager.hasToolCalls()) {
  const results = yield* manager.executeTools(doneChunk)
}

// Clear for next iteration
manager.clear()
```

```typescript
import { toStreamResponse, toServerSentEventsStream } from '@tanstack/ai'

// Full HTTP Response with SSE headers
return toStreamResponse(stream)

// Just the ReadableStream (for custom response)
return new Response(toServerSentEventsStream(stream), {
  headers: { 'X-Custom': 'value' },
})
```

```typescript
// From @tanstack/ai
export { chat, chatCompletion } // Separate streaming and promise methods
export { toStreamResponse, toServerSentEventsStream } // HTTP helpers
export { ToolCallManager } // Tool execution manager
export { maxIterations, untilFinishReason, combineStrategies } // Loop strategies
export type { AgentLoopStrategy, AgentLoopState } // Strategy types
export type { ToolResultStreamChunk } // New chunk type
```

See docs/MIGRATION_UNIFIED_CHAT.md for complete migration guide.
Quick migration:
- Replace `chat({ as: "promise" })` with `chatCompletion()`
- Replace `chat({ as: "stream" })` with `chat()`
- Replace `chat({ as: "response" })` with `chat()` + `toStreamResponse()`
- Import `toStreamResponse` from `@tanstack/ai` (not subpath)
- Update `maxIterations: 5` to `agentLoopStrategy: maxIterations(5)` (optional)
- Smaller chat() method: Reduced from ~180 lines to ~85 lines
- Testable components: ToolCallManager and strategies have unit tests (23 tests, all passing)
- Separation of concerns: Tool execution logic isolated from chat logic
- Strategy pattern: Flexible control over tool execution loop
- Better documentation: Comprehensive guides for all features
New documentation:
- Tool Execution Loop - How automatic execution works
- Agent Loop Strategies - Controlling the loop
- Unified Chat API - Updated API reference
```bash
# Run all tests
pnpm test

# Run tests for @tanstack/ai
cd packages/ai && pnpm test

# Test coverage:
# - ToolCallManager: 7 tests
# - Agent Loop Strategies: 16 tests
# - Total: 23 tests, all passing
```

- `maxIterations` as a number still works (converted to strategy automatically)
- All existing functionality preserved
- Gradual migration path available
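The note that a bare `maxIterations` number still works suggests a normalization shim along these lines. This is a sketch of the backward-compatibility idea with an assumed state shape, not the actual library code:

```typescript
// Assumed strategy signature: predicate over the loop state.
type Strategy = (state: { iterationCount: number }) => boolean

// Accept either the legacy number or a full strategy; numbers are
// converted into an iteration-cap strategy.
function normalize(value: number | Strategy): Strategy {
  if (typeof value === 'number') {
    const max = value
    return ({ iterationCount }) => iterationCount < max
  }
  return value
}
```

A shim like this is why existing `maxIterations: 5` call sites keep working unchanged while new code opts into richer strategies.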
- `chat()` method:
  - No longer accepts `as` option
  - Now streaming-only
  - Includes automatic tool execution loop
- New `chatCompletion()` method:
  - Promise-based
  - Supports structured output
  - No automatic tool execution
- Import changes:
  - `toStreamResponse` now from `@tanstack/ai` (not subpath)
✅ Clearer API - Method names indicate behavior
✅ Automatic tool execution - No manual management
✅ Flexible control - Strategy pattern for loops
✅ Better organized - Tool logic in separate class
✅ Well tested - 23 unit tests
✅ Better docs - Comprehensive guides
✅ Type-safe - Full TypeScript support
For questions or issues, see the documentation or examples.