
import { AiChatTemplate } from '@/templates/AiChatTemplate';
import { PreviewFullScreen } from '@/app/_components/preview-full-screen';

AI Chat

A composable AI chat UI component for Apollo Vertex. Built with React, TypeScript, and Tailwind CSS. Designed to work with TanStack AI — you bring useChat and a connection adapter, the component handles the chrome (scroll, input, loading, suggestions, errors) while you control how messages and tool calls render.

All visual states and sub-components → Component Preview

Features

  • TanStack AI Integration — Works with useChat from @tanstack/ai-react and UIMessage types
  • Composable Architecture — AiChat is the shell, AiChatMessage renders messages, and you iterate parts to render tools inline
  • Type-Safe Tool Rendering — Check part.name in the parts loop and TypeScript narrows part.output automatically
  • AgentHub Adapter — Built-in adapter for the UiPath AgentHub normalized LLM endpoint (OpenAI + Anthropic models)
  • Markdown Rendering — Renders assistant responses with GitHub Flavored Markdown
  • Suggestion Buttons — Interactive choice buttons rendered from tool results
  • Error Display — Inline error banner for API and network errors
  • i18n Support — Built-in internationalization via react-i18next
  • Accessible — WCAG 2.1 compliant with keyboard navigation and ARIA live regions

Installation

```bash
npx shadcn@latest add @uipath/ai-chat
```

Quick Start

```tsx
import { useChat } from '@tanstack/ai-react';
import { AiChat } from '@/components/ui/ai-chat/components/ai-chat';
import { AiChatMessage } from '@/components/ui/ai-chat/components/ai-chat-message';
import { createAgentHubConnection } from '@/components/ui/ai-chat/adapters/agenthub/adapter';

function BasicChat() {
  const connection = createAgentHubConnection({
    baseUrl: 'https://cloud.uipath.com/{org}/{tenant}/agenthub_/llm/api',
    model: { vendor: 'openai' as const, name: 'gpt-4o' },
    accessToken: () => getAccessToken(),
    systemPrompt: 'You are a helpful assistant.',
  });

  const { messages, sendMessage, isLoading, stop, clear, error } = useChat({
    connection,
  });

  return (
    <AiChat
      messages={messages}
      isLoading={isLoading}
      onSendMessage={(text) => sendMessage(text)}
      onStop={stop}
      onClearChat={clear}
      error={error}
      title="AI Assistant"
    >
      {messages.map((message) => (
        <AiChatMessage key={message.id} message={message} />
      ))}
    </AiChat>
  );
}
```
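The snippet above assumes a `getAccessToken` helper, which is not part of the component — its shape is up to your auth flow. Since `accessToken` accepts a function that is re-evaluated per request, one option is a small synchronous cache. A hypothetical sketch (`fetchToken` and the expiry field are placeholders):

```typescript
// Hypothetical token cache. fetchToken and expiresAtMs are placeholders
// for whatever your auth flow actually provides.
type TokenResponse = { token: string; expiresAtMs: number };

function createTokenProvider(fetchToken: () => TokenResponse) {
  let cached: TokenResponse | null = null;

  // Matches the adapter's accessToken signature: () => string | null.
  return function getAccessToken(): string {
    const now = Date.now();
    // Re-fetch when missing or within 30s of expiry.
    if (!cached || cached.expiresAtMs - now < 30_000) {
      cached = fetchToken();
    }
    return cached.token;
  };
}
```

Because the adapter calls the function on every request, an expired token is refreshed transparently without recreating the connection.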

Tool Rendering

Render tool output inline in the chat — just like TanStack AI's own examples. Define tools with toolDefinition, pass the input through as output in your client tool, then check part.name in the parts loop. TypeScript narrows part.output automatically.

```tsx
import { z } from 'zod';
import { toolDefinition } from '@tanstack/ai';
import { clientTools } from '@tanstack/ai-client';
import { useChat } from '@tanstack/ai-react';
import { AiChat } from '@/components/ui/ai-chat/components/ai-chat';
import { AiChatMessage } from '@/components/ui/ai-chat/components/ai-chat-message';

// 1. Define tools — output passes input through for rendering
const showResultsInput = z.object({
  entityName: z.string(),
  columns: z.array(z.string()),
});

const showResultsDef = toolDefinition({
  name: 'show_results',
  description: 'Display a results table',
  inputSchema: showResultsInput,
  outputSchema: showResultsInput,
});

const showResults = showResultsDef.client((input) => input);
const toolDefs = clientTools(showResults);

// 2. Wire it up — iterate parts, render tools inline
function ChatWithTools() {
  const { messages, sendMessage, isLoading, stop } = useChat({
    connection, // from createAgentHubConnection — see Quick Start
    tools: toolDefs,
  });

  return (
    <AiChat
      messages={messages}
      isLoading={isLoading}
      onSendMessage={(text) => sendMessage(text)}
      onStop={stop}
    >
      {messages.map((message) => (
        <AiChatMessage key={message.id} message={message}>
          {message.parts.map((part) => {
            // TypeScript narrows part.output when you check part.name
            if (part.type === 'tool-call' && part.name === 'show_results' && part.output) {
              return <ResultsTable key={part.id} entity={part.output.entityName} columns={part.output.columns} />;
            }
            return null;
          })}
        </AiChatMessage>
      ))}
    </AiChat>
  );
}
```
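When several tools render inline, the per-part checks can be factored into a small helper. This is a hypothetical sketch, not part of the component; it assumes tool-call parts carry the same `{ type, name, output }` fields checked in the loop above:

```typescript
// Hypothetical part shapes, mirroring the fields checked in the parts loop.
type ToolCallPart<T = unknown> = {
  type: 'tool-call';
  name: string;
  id: string;
  output?: T;
};
type AnyPart = ToolCallPart | { type: string };

// Collect the completed outputs of one named tool from a message's parts.
function getToolOutputs<T>(parts: AnyPart[], name: string): T[] {
  return parts
    .filter(
      (p): p is ToolCallPart<T> =>
        p.type === 'tool-call' &&
        (p as ToolCallPart).name === name &&
        (p as ToolCallPart).output !== undefined,
    )
    .map((p) => p.output as T);
}
```

The JSX loop then becomes a `map` over `getToolOutputs(message.parts, 'show_results')` per tool, keeping the render body flat.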

AgentHub Adapter

The built-in adapter for the UiPath AgentHub normalized LLM endpoint. It converts TanStack AI UIMessage arrays to the AgentHub wire format, calls the endpoint, and parses the SSE response back into AG-UI StreamChunk events.

```ts
import { createAgentHubConnection, type AgentHubAdapterConfig } from '@/components/ui/ai-chat/adapters/agenthub/adapter';

const connection = createAgentHubConnection({
  baseUrl: 'https://cloud.uipath.com/{org}/{tenant}/agenthub_/llm/api',
  model: { vendor: 'openai', name: 'gpt-4o' },
  accessToken: () => getAccessToken(),
  systemPrompt: 'You are a helpful assistant.',
  maxTokens: 2048,
  temperature: 0.7,
  tools: toolDefs,
});
```

The model.vendor field controls wire-format differences:

  • "openai" — flat tool definitions ({ name, description, parameters })
  • "anthropic" — Anthropic tool format ({ type: "custom", input_schema }), non-empty assistant content on tool-call messages
  • The X-UiPath-LlmGateway-NormalizedApi-ModelName header is always sent for routing
  • Responses are always OpenAI-compatible SSE regardless of the underlying model
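To make the vendor split concrete, here is a hedged sketch of the two tool shapes. The field names follow the bullets above; `toWireTool` is illustrative only — the adapter derives these definitions internally:

```typescript
// A generic tool definition with a JSON Schema for its parameters.
type ToolDef = { name: string; description: string; parameters: Record<string, unknown> };

// Hypothetical converter showing the two wire formats described above.
function toWireTool(tool: ToolDef, vendor: 'openai' | 'anthropic') {
  if (vendor === 'openai') {
    // OpenAI: flat { name, description, parameters }
    return { name: tool.name, description: tool.description, parameters: tool.parameters };
  }
  // Anthropic: { type: "custom", ..., input_schema }
  return { type: 'custom', name: tool.name, description: tool.description, input_schema: tool.parameters };
}
```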

Suggestion Buttons

Return a choices object as the tool result content to render interactive suggestion buttons. The buttons disappear after the user sends another message.

Try it out — type "give me some choices" in the demo above to see suggestion buttons in action. The demo uses the presentChoices example tool defined in examples/choices-tool.ts.

The choices format:

```json
{
  "type": "choices",
  "prompt": "How would you like to proceed?",
  "options": [
    { "id": "approve", "label": "Approve Document", "recommended": true },
    { "id": "reject", "label": "Reject Document" }
  ]
}
```
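If your tool builds this payload dynamically, it can be worth guarding the shape at runtime before returning it. A hypothetical validator (not shipped with the component) that mirrors the format above:

```typescript
type ChoiceOption = { id: string; label: string; recommended?: boolean };
type Choices = { type: 'choices'; prompt: string; options: ChoiceOption[] };

// Hypothetical runtime guard for the choices payload shown above.
function isChoices(value: unknown): value is Choices {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    v.type === 'choices' &&
    typeof v.prompt === 'string' &&
    Array.isArray(v.options) &&
    v.options.length > 0 &&
    v.options.every(
      (o: any) => typeof o?.id === 'string' && typeof o?.label === 'string',
    )
  );
}
```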

Handle selection explicitly or let the default behavior send option.label as a message:

```tsx
<AiChat
  messages={messages}
  isLoading={isLoading}
  onSendMessage={(text) => sendMessage(text)}
  onStop={stop}
  onChoiceSelect={(option) => {
    sendMessage(option.label);
  }}
>
  {messages.map((message) => (
    <AiChatMessage key={message.id} message={message} />
  ))}
</AiChat>
```

Error Display

Pass an Error object to show an inline error banner:

```tsx
<AiChat
  messages={messages}
  isLoading={isLoading}
  onSendMessage={(text) => sendMessage(text)}
  onStop={stop}
  error={error}
>
  {messages.map((message) => (
    <AiChatMessage key={message.id} message={message} />
  ))}
</AiChat>
```
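The error prop expects Error | null, so if you also catch failures outside useChat (for example around a custom send path), normalize whatever you caught before passing it in. A small hypothetical helper:

```typescript
// Hypothetical normalizer: the AiChat error prop expects Error | null,
// but catch blocks can receive strings or arbitrary thrown values.
function toDisplayError(value: unknown): Error {
  if (value instanceof Error) return value;
  if (typeof value === 'string') return new Error(value);
  return new Error('An unexpected error occurred');
}
```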

API Reference

<AiChat>

Chat shell component. Handles layout, scroll, input, loading indicator, suggestions, and errors. Render messages as children.

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| messages | UIMessage[] | required | Messages from useChat |
| isLoading | boolean | required | Loading state from useChat |
| onSendMessage | (content: string) => void | required | Send handler |
| onStop | () => void | required | Stop/abort handler |
| children | ReactNode | | Message list (typically messages.map(...)) |
| onClearChat | () => void | | Clear handler |
| onChoiceSelect | (option: ChoiceOption) => void | | Suggestion button handler (default: sends option.label) |
| assistantName | string | "AI Assistant" | Assistant display name |
| title | string | | Chat title in the header |
| emptyState | ReactNode | | Custom empty state |
| placeholder | string | | Input placeholder |
| showClearButton | boolean | true | Show the clear button |
| error | Error \| null | | Inline error banner |

<AiChatMessage>

Renders a single message with avatar, name, markdown text, and children for custom content (tool output).

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| message | UIMessage | required | The message to render |
| assistantName | string | "AI Assistant" | Assistant display name |
| children | ReactNode | | Custom content rendered below the message text (tool output, etc.) |

AgentHubAdapterConfig

Configuration for the AgentHub adapter.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| baseUrl | string | required | AgentHub base URL (/chat/completions is appended) |
| model | { vendor: 'openai' \| 'anthropic'; name: string } | required | Model config |
| accessToken | string \| (() => string \| null) | required | Bearer token (refreshed per request if a function) |
| systemPrompt | string \| (() => string) | | System prompt prepended to messages (function form is called per request) |
| maxTokens | number | 2048 | Max response tokens |
| temperature | number | 0.7 | Sampling temperature |
| tools | ReadonlyArray&lt;AnyClientTool&gt; | | Client tools — wire-format definitions are derived automatically |