A Node.js implementation of the temporal-ai-agent repository, featuring a multi-turn conversation AI agent running inside Temporal workflows with a modern React frontend.
This project demonstrates how to build reliable AI agents using Temporal workflows. The agent can:
- 🤖 Multi-turn Conversations: Engage in complex, stateful conversations
- 🔧 Tool Execution: Use various tools to accomplish tasks (search, flights, invoices, email)
- ✅ Human-in-the-Loop: Request approval for tool executions
- 🔄 Self-Healing: Automatically retry failed operations
- 📊 State Management: Maintain conversation state across failures
- 🌐 Multi-Model Support: Use OpenAI, Anthropic, or Google AI models via Vercel AI SDK
```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│ React Frontend  │      │  Node.js API    │      │ Temporal Server │
│                 │      │                 │      │                 │
│ • Chat Interface│◄──► │ • REST Routes   │◄──► │ • Workflows     │
│ • Tool Approval │      │ • Session Mgmt  │      │ • Activities    │
│ • Real-time UI  │      │ • LLM Service   │      │ • State Mgmt    │
└─────────────────┘      └─────────────────┘      └─────────────────┘
```
- Temporal Workflows: Durable conversation orchestration
- Multi-LLM Support: OpenAI, Anthropic, Google AI via Vercel AI SDK
- Tool System: Pluggable tools with parameter validation
- REST API: Express.js routes for frontend communication
- Type Safety: Full TypeScript implementation
- Error Handling: Comprehensive error handling and retries
- Modern UI: React 19 with Tailwind CSS
- Real-time Chat: Interactive conversation interface
- Tool Approval: Visual confirmation for tool executions
- Session Management: Automatic session handling
- Responsive Design: Works on desktop and mobile
- Dark Mode: Automatic theme switching
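The pluggable tool system mentioned above can be sketched roughly as follows. This is an illustrative model, not the repository's actual API: the `Tool` interface, `ToolRegistry` class, and validation behavior here are hypothetical names chosen for the example.

```typescript
// Illustrative sketch of a pluggable tool registry with parameter
// validation. All names here (Tool, ToolRegistry, ToolParam) are
// hypothetical; see backend/src/tools/ for the real implementation.
interface ToolParam {
  name: string;
  type: "string" | "number" | "boolean";
  required: boolean;
}

interface Tool {
  name: string;
  description: string;
  params: ToolParam[];
  execute(args: Record<string, unknown>): Promise<unknown>;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  // Check supplied arguments against the tool's declared parameters
  // and return a list of human-readable validation errors.
  validate(name: string, args: Record<string, unknown>): string[] {
    const tool = this.tools.get(name);
    if (!tool) return [`unknown tool: ${name}`];
    const errors: string[] = [];
    for (const p of tool.params) {
      const value = args[p.name];
      if (value === undefined) {
        if (p.required) errors.push(`missing required param: ${p.name}`);
      } else if (typeof value !== p.type) {
        errors.push(`param ${p.name} should be ${p.type}`);
      }
    }
    return errors;
  }

  // Validate, then execute; throws if the arguments are invalid.
  async run(name: string, args: Record<string, unknown>): Promise<unknown> {
    const errors = this.validate(name, args);
    if (errors.length > 0) throw new Error(errors.join("; "));
    return this.tools.get(name)!.execute(args);
  }
}
```

Validating before execution lets the workflow surface bad LLM-generated arguments to the user instead of failing inside an activity.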
- Node.js 18+
- npm or yarn
- Temporal Server (or Temporal CLI for development)
```bash
git clone <your-repo-url>
cd temporal-ai-agent
```

Start a Temporal Server:

```bash
# Using Temporal CLI (recommended for development)
temporal server start-dev

# Or using Docker
docker run --rm -p 7233:7233 temporalio/auto-setup:latest
```

Set up the backend:

```bash
cd backend
npm install

# Configure environment variables
cp .env.example .env
# Edit .env with your API keys (OpenAI, Anthropic, or Google AI)

# Start the backend
npm run dev
```

Set up the frontend:

```bash
cd frontend
npm install

# Start the frontend
npm run dev
```

Everything is now available at:

- Frontend: http://localhost:5173
- Backend API: http://localhost:3000
- Temporal UI: http://localhost:8233
Create a `.env` file in the `backend` directory:

```env
# LLM Configuration (choose one or more)
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
GOOGLE_AI_API_KEY=your_google_key

# Default LLM settings
LLM_DEFAULT_PROVIDER=google
LLM_DEFAULT_MODEL=gemini-1.5-pro

# Temporal Configuration
TEMPORAL_ADDRESS=localhost:7233
TEMPORAL_NAMESPACE=default
TEMPORAL_TASK_QUEUE=ai-agent-queue

# Server Configuration
PORT=3000
NODE_ENV=development
```

The system includes several mock tools for demonstration:
- Search Events: Find public events by location and date
- Search Flights: Find flights between cities
- Create Invoice: Generate Stripe invoices
- Send Email: Send emails (mock implementation)
1. Open the frontend at http://localhost:5173
2. Click "Start New Chat"
3. Begin chatting with the AI agent
4. The agent will suggest tools and ask for approval when needed
```
User:  "I want to attend a tech conference in San Francisco next month
        and need to book a flight from New York"
Agent: "I'll help you find tech conferences in San Francisco and flights
        from New York. Let me search for events first."
[Agent requests approval to use search tools]
User:  "Yes, go ahead"
[Agent executes tools and provides results]
```
```
temporal-ai-agent/
├── backend/                # Node.js backend
│   ├── src/
│   │   ├── activities/     # Temporal activities
│   │   ├── workflows/      # Temporal workflows
│   │   ├── api/            # REST API routes
│   │   ├── services/       # Business logic
│   │   ├── tools/          # Tool implementations
│   │   └── shared/         # Shared types
│   └── package.json
├── frontend/               # React frontend
│   ├── src/
│   │   ├── components/     # React components
│   │   ├── services/       # API services
│   │   └── App.jsx         # Main app
│   └── package.json
└── README.md
```
To add a new tool:

1. Create a tool implementation in `backend/src/tools/`
2. Add the tool to the registry in `backend/src/tools/index.ts`
3. Update the tool definitions for the LLM
4. Test the tool in the conversation

To customize the LLM:

- Modify prompts in `backend/src/prompts/`
- Adjust LLM settings in `backend/src/config/config.ts`
- Switch between providers by updating environment variables
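Following the tool-creation steps above, a new mock tool might look like this sketch. The file path, the `Tool` shape, and the `searchHotels` tool itself are hypothetical examples, not part of the repository; match the actual interface defined in `backend/src/tools/` when writing a real one.

```typescript
// backend/src/tools/searchHotels.ts -- hypothetical example tool.
// The Tool shape below is illustrative only; mirror the real
// interface from backend/src/tools/ in your implementation.
interface Tool {
  name: string;
  description: string;
  params: { name: string; type: string; required: boolean }[];
  execute(args: Record<string, unknown>): Promise<unknown>;
}

export const searchHotels: Tool = {
  name: "searchHotels",
  description: "Find hotels in a city for given check-in/check-out dates",
  params: [
    { name: "city", type: "string", required: true },
    { name: "checkIn", type: "string", required: true },
    { name: "checkOut", type: "string", required: true },
  ],
  // Mock implementation, in the same spirit as the repo's other
  // mock tools (search, flights, invoices, email).
  async execute(args) {
    return [
      { name: "Hotel Example", city: args.city, pricePerNight: 180 },
    ];
  },
};
```

After creating the file, register the tool in `backend/src/tools/index.ts` so the LLM can see its definition and the workflow can execute it.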
- `POST /api/agent/conversations` - Start a new conversation
- `POST /api/agent/conversations/:id/messages` - Send a message
- `GET /api/agent/conversations/:id` - Get conversation state
- `POST /api/agent/conversations/:id/approve` - Approve tool execution
- `GET /api/agent/health` - Health check
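A minimal client for these endpoints could look like the sketch below. The endpoint paths come from the list above, but the base URL and the request/response shapes are assumptions; check the actual route handlers in `backend/src/api/`.

```typescript
// Minimal sketch of a client for the REST API listed above.
// BASE_URL and the request body shape are assumptions.
const BASE_URL = "http://localhost:3000";

// Path builders matching the documented endpoints.
const paths = {
  start: () => `/api/agent/conversations`,
  message: (id: string) => `/api/agent/conversations/${id}/messages`,
  state: (id: string) => `/api/agent/conversations/${id}`,
  approve: (id: string) => `/api/agent/conversations/${id}/approve`,
  health: () => `/api/agent/health`,
};

// Send a user message to an existing conversation.
async function sendMessage(id: string, text: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}${paths.message(id)}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: text }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```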
- `aiAgentWorkflow`: Main conversation orchestration
- Activities: LLM calls, tool executions, state management
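The state the workflow maintains can be pictured as a small reducer over conversation events; because Temporal replays history deterministically, state built this way survives worker crashes. This is an illustrative model, not the repository's actual workflow code, and the type and event names are hypothetical.

```typescript
// Illustrative model of the conversation state kept inside the
// workflow. Type and event names are hypothetical.
type Message = { role: "user" | "agent"; text: string };

type ConversationEvent =
  | { type: "userMessage"; text: string }
  | { type: "agentMessage"; text: string }
  | { type: "toolRequested"; tool: string }
  | { type: "toolApproved" };

interface ConversationState {
  messages: Message[];
  pendingTool: string | null; // tool awaiting human approval, if any
}

// Pure transition function: each event produces the next state,
// which is exactly what makes replay-based recovery possible.
function reduce(
  state: ConversationState,
  ev: ConversationEvent
): ConversationState {
  switch (ev.type) {
    case "userMessage":
      return {
        ...state,
        messages: [...state.messages, { role: "user", text: ev.text }],
      };
    case "agentMessage":
      return {
        ...state,
        messages: [...state.messages, { role: "agent", text: ev.text }],
      };
    case "toolRequested":
      return { ...state, pendingTool: ev.tool };
    case "toolApproved":
      return { ...state, pendingTool: null };
  }
}
```

In the real workflow, user messages and approvals arrive as Temporal signals, and the frontend reads the current state through queries.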
Backend:

1. Build the application:

   ```bash
   cd backend
   npm run build
   ```

2. Set production environment variables
3. Deploy to your preferred platform (AWS, GCP, Azure, etc.)
4. Ensure Temporal Server is accessible

Frontend:

1. Build the application:

   ```bash
   cd frontend
   npm run build
   ```

2. Deploy the `dist` folder to a static hosting service
3. Update the API base URL for production
- **Temporal Connection Failed**
  - Ensure Temporal Server is running on `localhost:7233`
  - Check network connectivity
  - Verify the Temporal CLI installation

- **LLM API Errors**
  - Verify API keys are correct
  - Check rate limits and quotas
  - Ensure the model is available

- **Frontend Connection Issues**
  - Verify the backend is running on `localhost:3000`
  - Check the CORS configuration
  - Ensure API endpoints are accessible
Enable debug logging by setting:

```env
LOG_LEVEL=debug
ENABLE_LLM_LOGGING=true
ENABLE_TOOL_LOGGING=true
```

To contribute:

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
This project is licensed under the MIT License - see the original temporal-ai-agent repository for details.
- Original Python implementation: temporal-community/temporal-ai-agent
- Temporal for the workflow orchestration platform
- Vercel AI SDK for multi-model LLM support
- React and Tailwind CSS for the frontend
Note: This is a Node.js/TypeScript implementation of the original Python temporal-ai-agent. While it maintains the same core functionality and architecture, it uses different technologies and may have some variations in implementation details.