A powerful, flexible, and type-safe AI chatbot library for TypeScript/JavaScript applications with support for OpenAI, Anthropic Claude, and Google Gemini.
This is the TypeScript/JavaScript implementation of our multi-language chatbot library:
- **npm-chatbot** - TypeScript/JavaScript (this package)
- **php-chatbot** - PHP implementation
- **go-chatbot** - Go implementation
All implementations share the same API design and features, making it easy to switch between languages or maintain consistency across polyglot projects.
- 🎯 Three Major AI Providers - OpenAI (GPT-4o, GPT-4 Turbo, o1), Anthropic (Claude Sonnet 4.5, Opus 4.1), Google AI (Gemini 2.0, 1.5 Pro)
- 🛡️ Type-Safe - Full TypeScript support with comprehensive type definitions
- 💾 Conversation Memory - Built-in conversation history management
- 🔒 Security - Input/output filtering, content moderation, rate limiting
- ⚡ Streaming Support - Real-time response streaming for all providers
- 🔄 Error Handling - Comprehensive error handling with retry logic
- 📊 Usage Tracking - Token usage and cost tracking
- 🧪 Extensively Tested - 94% test coverage with 880+ tests
- 🌐 Universal - Works in Node.js and modern browsers
- 📦 Tree-Shakeable - Optimized bundle size with ESM support
```bash
# Using npm
npm install @rumenx/chatbot

# Using yarn
yarn add @rumenx/chatbot

# Using pnpm
pnpm add @rumenx/chatbot
```

Install the AI provider SDK(s) you plan to use:

```bash
# For OpenAI
npm install openai

# For Anthropic
npm install @anthropic-ai/sdk

# For Google AI
npm install @google/generative-ai
```

All provider dependencies are optional peer dependencies, so you only install what you need.
Quick start with OpenAI:

```typescript
import { Chatbot } from '@rumenx/chatbot';

// Initialize chatbot with OpenAI
const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o', // Latest: 'gpt-4o', 'gpt-4-turbo', 'o1-preview'
  },
  temperature: 0.7,
  maxTokens: 150,
});

// Send a message
const response = await chatbot.chat({
  message: 'Hello! How are you?',
  metadata: {
    sessionId: 'user-123',
    userId: 'user-123',
  },
});

console.log(response.content);
// Output: "Hello! I'm doing well, thank you for asking. How can I help you today?"
```
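The examples in this README read API keys from environment variables. In Node.js you can load them from a `.env` file with the `dotenv` package; this is a minimal sketch of one common setup, not a requirement of the library:

```typescript
// Load variables from a .env file into process.env (requires `npm install dotenv`).
// Import this before any module that reads process.env at load time.
import 'dotenv/config';

// Example .env file contents (never commit this file):
// OPENAI_API_KEY=sk-...

if (!process.env.OPENAI_API_KEY) {
  throw new Error('OPENAI_API_KEY is not set');
}
```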
A more complete configuration:

```typescript
import { Chatbot } from '@rumenx/chatbot';
import type { ChatbotConfig } from '@rumenx/chatbot';

const config: ChatbotConfig = {
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o', // Latest: 'gpt-4o', 'gpt-4-turbo', 'o1-preview' (note: gpt-3.5-turbo is deprecated)
  },
  systemPrompt: 'You are a helpful AI assistant.',
  temperature: 0.7,
  maxTokens: 500,
  enableMemory: true,
  maxHistory: 20,
  security: {
    enableInputFilter: true,
    enableOutputFilter: true,
    maxInputLength: 4000,
  },
  rateLimit: {
    enabled: true,
    requestsPerMinute: 10,
    requestsPerHour: 100,
  },
};

const chatbot = new Chatbot(config);

// Simple chat
const response = await chatbot.chat({
  message: 'What is the capital of France?',
  metadata: {
    sessionId: 'session-1',
    userId: 'user-1',
  },
});

console.log(response.content); // "The capital of France is Paris."
console.log(response.metadata.usage); // { promptTokens: 15, completionTokens: 8, totalTokens: 23 }
```
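The `usage` metadata makes rough cost tracking straightforward. A minimal sketch; the per-token prices below are placeholders for illustration, not real rates, so check your provider's current pricing:

```typescript
// Hypothetical prices per 1M tokens; replace with your provider's actual rates.
const INPUT_PRICE_PER_M = 2.5;
const OUTPUT_PRICE_PER_M = 10.0;

function estimateCostUsd(usage: { promptTokens: number; completionTokens: number }): number {
  return (
    (usage.promptTokens / 1_000_000) * INPUT_PRICE_PER_M +
    (usage.completionTokens / 1_000_000) * OUTPUT_PRICE_PER_M
  );
}

if (response.metadata.usage) {
  console.log(`Estimated cost: ~$${estimateCostUsd(response.metadata.usage).toFixed(6)}`);
}
```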
Anthropic Claude example:

```typescript
import { Chatbot } from '@rumenx/chatbot';

const chatbot = new Chatbot({
  provider: {
    provider: 'anthropic',
    apiKey: process.env.ANTHROPIC_API_KEY!,
    model: 'claude-sonnet-4-5-20250929', // Latest: Claude Sonnet 4.5 (Sep 2025), Opus 4.1, Haiku 4.5
  },
  temperature: 0.8,
  maxTokens: 1000,
});

const response = await chatbot.chat({
  message: 'Explain quantum computing in simple terms.',
  metadata: {
    sessionId: 'session-2',
    userId: 'user-2',
  },
});

console.log(response.content);
```
Google Gemini example:

```typescript
import { Chatbot } from '@rumenx/chatbot';

const chatbot = new Chatbot({
  provider: {
    provider: 'google',
    apiKey: process.env.GOOGLE_API_KEY!,
    model: 'gemini-2.0-flash-exp', // Latest: 'gemini-2.0-flash-exp', 'gemini-1.5-pro', 'gemini-1.5-flash'
  },
  temperature: 0.9,
  maxTokens: 800,
});

const response = await chatbot.chat({
  message: 'Write a haiku about programming.',
  metadata: {
    sessionId: 'session-3',
    userId: 'user-3',
  },
});

console.log(response.content);
```
Stream responses in real-time for better UX:

```typescript
import { Chatbot } from '@rumenx/chatbot';

const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o',
  },
});

// Stream responses
const stream = chatbot.chatStream({
  message: 'Tell me a story about a robot.',
  metadata: {
    sessionId: 'session-4',
    userId: 'user-4',
  },
});

// Process the stream
for await (const chunk of stream) {
  process.stdout.write(chunk); // Print each chunk as it arrives
}

console.log('\n✅ Stream complete!');
```
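When rendering into a UI you usually want the accumulated text so far, not just the individual chunks. A small sketch reusing the stream API above; the `render` call is a hypothetical placeholder for your own UI update:

```typescript
// Build up the full reply while streaming, updating the UI on each chunk.
let fullText = '';
for await (const chunk of chatbot.chatStream({
  message: 'Tell me a story about a robot.',
  metadata: { sessionId: 'session-4', userId: 'user-4' },
})) {
  fullText += chunk;
  // render(fullText); // hypothetical: repaint with the partial reply
}
console.log(`Received ${fullText.length} characters`);
```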
The chatbot automatically maintains conversation history:

```typescript
import { Chatbot } from '@rumenx/chatbot';

const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini', // Cost-effective model for conversations
  },
  enableMemory: true,
  maxHistory: 10, // Keep last 10 messages
});

const sessionId = 'user-session-123';

// First message
await chatbot.chat({
  message: 'My name is Alice.',
  metadata: { sessionId, userId: 'alice' },
});

// Second message - chatbot remembers the context
const response = await chatbot.chat({
  message: 'What is my name?',
  metadata: { sessionId, userId: 'alice' },
});

console.log(response.content); // "Your name is Alice."

// Get conversation history
const history = chatbot.getConversationHistory(sessionId);
console.log(history); // Array of all messages in the session
```
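To reset a conversation (for example, when a user signs out), clear the stored history with `clearConversationHistory`, covered in the API reference below. This sketch assumes a cleared session reads back as an empty array:

```typescript
// Remove all stored messages for this session; the next chat() starts fresh.
chatbot.clearConversationHistory(sessionId);
console.log(chatbot.getConversationHistory(sessionId)); // expected: []
```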
Here's an example of integrating the chatbot into a React application:

```tsx
import React, { useState } from 'react';
import { Chatbot } from '@rumenx/chatbot';

const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
  },
});

function ChatComponent() {
  const [messages, setMessages] = useState<
    Array<{ role: string; content: string }>
  >([]);
  const [input, setInput] = useState('');
  const [loading, setLoading] = useState(false);

  const sendMessage = async () => {
    if (!input.trim()) return;

    // Add user message
    setMessages((prev) => [...prev, { role: 'user', content: input }]);
    setLoading(true);

    try {
      const response = await chatbot.chat({
        message: input,
        metadata: {
          sessionId: 'react-session',
          userId: 'react-user',
        },
      });

      // Add assistant response
      setMessages((prev) => [
        ...prev,
        { role: 'assistant', content: response.content },
      ]);
    } catch (error) {
      console.error('Chat error:', error);
    } finally {
      setLoading(false);
      setInput('');
    }
  };

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map((msg, idx) => (
          <div key={idx} className={`message ${msg.role}`}>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        ))}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && sendMessage()}
          placeholder="Type a message..."
          disabled={loading}
        />
        <button onClick={sendMessage} disabled={loading}>
          {loading ? 'Sending...' : 'Send'}
        </button>
      </div>
    </div>
  );
}

export default ChatComponent;
```
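Note that instantiating the chatbot in the browser, as above, exposes your provider API key to every visitor. For production, keep the key on a server and have the UI call your own endpoint instead. A sketch against the `/api/chat` Express route shown below:

```typescript
// Call a server-side endpoint (see the Express example below) instead of
// shipping the provider API key to the browser.
async function sendViaServer(message: string): Promise<string> {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, sessionId: 'react-session', userId: 'react-user' }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = (await res.json()) as { content: string };
  return data.content;
}
```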
Here's an example of using the chatbot in an Express.js API:

```typescript
import express from 'express';
import { Chatbot } from '@rumenx/chatbot';

const app = express();
app.use(express.json());

const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
  },
  enableMemory: true,
});

// Chat endpoint
app.post('/api/chat', async (req, res) => {
  try {
    const { message, sessionId, userId } = req.body;

    if (!message || !sessionId || !userId) {
      return res.status(400).json({ error: 'Missing required fields' });
    }

    const response = await chatbot.chat({
      message,
      metadata: { sessionId, userId },
    });

    res.json({
      content: response.content,
      metadata: response.metadata,
    });
  } catch (error) {
    console.error('Chat error:', error);
    res.status(500).json({ error: 'Internal server error' });
  }
});

// Streaming endpoint
app.post('/api/chat/stream', async (req, res) => {
  try {
    const { message, sessionId, userId } = req.body;

    res.setHeader('Content-Type', 'text/event-stream');
    res.setHeader('Cache-Control', 'no-cache');
    res.setHeader('Connection', 'keep-alive');

    const stream = chatbot.chatStream({
      message,
      metadata: { sessionId, userId },
    });

    for await (const chunk of stream) {
      res.write(`data: ${JSON.stringify({ chunk })}\n\n`);
    }

    res.write('data: [DONE]\n\n');
    res.end();
  } catch (error) {
    console.error('Stream error:', error);
    // Headers may already be sent mid-stream, so only send a status if we can
    if (!res.headersSent) {
      res.status(500).json({ error: 'Internal server error' });
    } else {
      res.end();
    }
  }
});

// Get conversation history
app.get('/api/chat/history/:sessionId', (req, res) => {
  try {
    const { sessionId } = req.params;
    const history = chatbot.getConversationHistory(sessionId);
    res.json({ history });
  } catch (error) {
    console.error('History error:', error);
    res.status(500).json({ error: 'Internal server error' });
  }
});

// Health check
app.get('/health', (req, res) => {
  res.json({ status: 'ok', version: '1.0.0' });
});

app.listen(3000, () => {
  console.log('🚀 Chatbot API running on http://localhost:3000');
});
```
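A browser-side sketch for consuming the `/api/chat/stream` endpoint above. It assumes the `data: {...}\n\n` framing and `[DONE]` sentinel emitted by that route:

```typescript
// Read the SSE-style stream emitted by /api/chat/stream chunk by chunk.
async function streamChat(message: string, onChunk: (text: string) => void): Promise<void> {
  const res = await fetch('/api/chat/stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, sessionId: 's1', userId: 'u1' }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const events = buffer.split('\n\n');
    buffer = events.pop() ?? ''; // keep the trailing partial event for next read
    for (const event of events) {
      const payload = event.replace(/^data: /, '');
      if (payload === '[DONE]') return;
      onChunk(JSON.parse(payload).chunk as string);
    }
  }
}
```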
Supported models:

**OpenAI:**

| Model | Status | Use Case |
|---|---|---|
| `gpt-4o` | ✅ Recommended | Latest flagship model, best for complex tasks |
| `gpt-4o-mini` | ✅ Recommended | Cost-effective, great for most use cases |
| `gpt-4-turbo` | ✅ Supported | High performance, large context window |
| `o1-preview` | ✅ Supported | Advanced reasoning model |
| `o1-mini` | ✅ Supported | Faster reasoning model |
| `gpt-4` | ⚠️ Legacy | Still supported, but consider upgrading |
| `gpt-3.5-turbo` | ❌ Deprecated | Will be retired June 2025; use `gpt-4o-mini` instead |
**Anthropic:**

| Model | Status | Use Case |
|---|---|---|
| `claude-sonnet-4-5-20250929` | ✅ Recommended | Latest; smartest model for complex agents and coding |
| `claude-haiku-4-5-20251001` | ✅ Recommended | Fastest model with near-frontier intelligence |
| `claude-opus-4-1-20250805` | ✅ Recommended | Exceptional model for specialized reasoning |
| `claude-3-5-sonnet-20241022` | ✅ Supported | Previous generation (legacy) |
| `claude-3-5-sonnet-20240620` | ⚠️ Legacy | Consider upgrading to 4.5 |
| `claude-3-opus-20240229` | ⚠️ Legacy | Consider upgrading to 4.1 |
**Google AI:**

| Model | Status | Use Case |
|---|---|---|
| `gemini-2.0-flash-exp` | ✅ Recommended | Latest experimental model |
| `gemini-1.5-pro` | ✅ Recommended | Production-ready, 2M-token context |
| `gemini-1.5-flash` | ✅ Recommended | Fast and efficient |
| `gemini-1.5-flash-8b` | ✅ Supported | Smallest, fastest, most affordable |
| `gemini-pro` | ⚠️ Legacy | Consider upgrading to 1.5 or 2.0 |
Note: Model availability and naming may change. Check your provider's documentation for the latest model names.
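Because every provider sits behind the same `Chatbot` API, switching models is purely a configuration change. A minimal sketch using model names from the tables above (subject to the same availability caveat):

```typescript
import { Chatbot } from '@rumenx/chatbot';
import type { ChatbotConfig } from '@rumenx/chatbot';

// Provider choice is just configuration; the chat() call is identical either way.
function configFor(provider: 'openai' | 'anthropic' | 'google'): ChatbotConfig['provider'] {
  switch (provider) {
    case 'openai':
      return { provider, apiKey: process.env.OPENAI_API_KEY!, model: 'gpt-4o-mini' };
    case 'anthropic':
      return { provider, apiKey: process.env.ANTHROPIC_API_KEY!, model: 'claude-sonnet-4-5-20250929' };
    case 'google':
      return { provider, apiKey: process.env.GOOGLE_API_KEY!, model: 'gemini-1.5-flash' };
  }
}

const chatbot = new Chatbot({ provider: configFor('anthropic') });
```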
All configuration options:

```typescript
import type { ChatbotConfig } from '@rumenx/chatbot';

const config: ChatbotConfig = {
  // Provider configuration (required)
  provider: {
    provider: 'openai', // 'openai' | 'anthropic' | 'google'
    apiKey: 'your-api-key',
    model: 'gpt-4',
    apiUrl: 'https://api.openai.com/v1', // Optional: custom API endpoint
    organizationId: 'org-123', // Optional: OpenAI organization ID
  },

  // Generation options
  temperature: 0.7, // 0.0 to 1.0 (higher = more creative)
  maxTokens: 500, // Maximum tokens in response
  topP: 0.9, // Optional: nucleus sampling
  frequencyPenalty: 0, // Optional: -2.0 to 2.0
  presencePenalty: 0, // Optional: -2.0 to 2.0
  stop: ['###'], // Optional: stop sequences

  // System configuration
  systemPrompt: 'You are a helpful AI assistant.', // Optional: system message
  enableMemory: true, // Enable conversation history
  maxHistory: 20, // Maximum messages to keep in memory
  timeout: 30000, // Request timeout in ms

  // Security settings
  security: {
    enableInputFilter: true, // Filter user input
    enableOutputFilter: true, // Filter AI responses
    maxInputLength: 4000, // Maximum input length
    blockedPatterns: [/password/i], // Regex patterns to block
  },

  // Rate limiting
  rateLimit: {
    enabled: true,
    requestsPerMinute: 10,
    requestsPerHour: 100,
    requestsPerDay: 1000,
  },

  // Logging
  logLevel: 'info', // 'debug' | 'info' | 'warn' | 'error'

  // Custom metadata
  metadata: {
    appName: 'My Chatbot App',
    version: '1.0.0',
  },
};
```

Constructor:

```typescript
new Chatbot(config: ChatbotConfig)
```

Send a message and get a response:
```typescript
const response = await chatbot.chat({
  message: 'Hello!',
  metadata: {
    sessionId: 'session-id',
    userId: 'user-id',
  },
});
```

Stream a response in real-time:
```typescript
for await (const chunk of chatbot.chatStream({ message: 'Hello!' })) {
  console.log(chunk);
}
```

Get conversation history for a session:
```typescript
const history = chatbot.getConversationHistory('session-id');
```

Clear conversation history for a session:
```typescript
chatbot.clearConversationHistory('session-id');
```

Update chatbot configuration:
```typescript
chatbot.updateConfig({
  temperature: 0.9,
  maxTokens: 1000,
});
```

Get information about the current provider:
```typescript
const info = chatbot.getProviderInfo();
console.log(info); // { name: 'openai', model: 'gpt-4', ... }
```

Core types:

```typescript
interface ChatbotConfig {
  provider: AiProviderConfig;
  temperature?: number;
  maxTokens?: number;
  enableMemory?: boolean;
  maxHistory?: number;
  security?: SecurityConfig;
  rateLimit?: RateLimitConfig;
  // ... more options
}

interface ChatOptions {
  message: string;
  metadata?: {
    sessionId?: string;
    userId?: string;
    [key: string]: unknown;
  };
}

interface ChatResponse {
  content: string;
  metadata: {
    provider: string;
    model: string;
    usage?: {
      promptTokens: number;
      completionTokens: number;
      totalTokens: number;
    };
    responseTime?: number;
  };
}
```
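These types make it easy to build small, fully typed helpers on top of the library. A sketch, assuming `ChatOptions` and `ChatResponse` are exported alongside `ChatbotConfig`; the `ask` helper itself is illustrative, not part of the library:

```typescript
import { Chatbot } from '@rumenx/chatbot';
import type { ChatOptions, ChatResponse } from '@rumenx/chatbot';

// Illustrative wrapper: send a message, log token usage when present,
// and return just the reply text.
async function ask(bot: Chatbot, options: ChatOptions): Promise<string> {
  const response: ChatResponse = await bot.chat(options);
  if (response.metadata.usage) {
    console.log(`Used ${response.metadata.usage.totalTokens} tokens`);
  }
  return response.content;
}
```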
This library is framework-agnostic and can be used with any JavaScript framework or library. Here are examples for popular frameworks:

```bash
npm install @rumenx/chatbot react
```

See the "Using with React" example above for a complete integration.
The library works seamlessly with Vue 3. Here's a basic example:
```bash
npm install @rumenx/chatbot vue
```

```vue
<template>
  <div class="chat-container">
    <div
      v-for="(msg, idx) in messages"
      :key="idx"
      :class="`message ${msg.role}`"
    >
      <strong>{{ msg.role }}:</strong> {{ msg.content }}
    </div>
    <input
      v-model="input"
      @keyup.enter="sendMessage"
      placeholder="Type a message..."
    />
    <button @click="sendMessage" :disabled="loading">
      {{ loading ? 'Sending...' : 'Send' }}
    </button>
  </div>
</template>

<script setup lang="ts">
import { ref } from 'vue';
import { Chatbot } from '@rumenx/chatbot';

const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: import.meta.env.VITE_OPENAI_API_KEY,
    model: 'gpt-4o-mini',
  },
});

const messages = ref<Array<{ role: string; content: string }>>([]);
const input = ref('');
const loading = ref(false);

const sendMessage = async () => {
  if (!input.value.trim()) return;

  messages.value.push({ role: 'user', content: input.value });
  loading.value = true;

  try {
    const response = await chatbot.chat({
      message: input.value,
      metadata: { sessionId: 'vue-session', userId: 'vue-user' },
    });
    messages.value.push({ role: 'assistant', content: response.content });
  } catch (error) {
    console.error('Chat error:', error);
  } finally {
    loading.value = false;
    input.value = '';
  }
};
</script>
```
Next.js (App Router) example:

```bash
npm install @rumenx/chatbot next
```

```typescript
// app/api/chat/route.ts
import { Chatbot } from '@rumenx/chatbot';
import { NextRequest, NextResponse } from 'next/server';

const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
  },
});

export async function POST(request: NextRequest) {
  const { message, sessionId, userId } = await request.json();

  try {
    const response = await chatbot.chat({
      message,
      metadata: { sessionId, userId },
    });
    return NextResponse.json(response);
  } catch (error) {
    return NextResponse.json({ error: 'Chat failed' }, { status: 500 });
  }
}
```
The library provides comprehensive error handling:

```typescript
import { Chatbot, ChatbotError } from '@rumenx/chatbot';

const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4',
  },
});

try {
  const response = await chatbot.chat({
    message: 'Hello!',
    metadata: { sessionId: 'session-1', userId: 'user-1' },
  });
  console.log(response.content);
} catch (error) {
  if (error instanceof ChatbotError) {
    console.error('Error category:', error.category);
    console.error('Error severity:', error.severity);
    console.error('Is retryable:', error.isRetryable);
    console.error('Retry delay:', error.retryDelay);

    // Handle specific error types
    switch (error.category) {
      case 'authentication':
        console.error('Authentication failed. Check your API key.');
        break;
      case 'rate_limit':
        console.error('Rate limit exceeded. Retry after:', error.retryDelay);
        break;
      case 'validation':
        console.error('Invalid input:', error.userMessage);
        break;
      case 'network':
        console.error('Network error. Check your connection.');
        break;
      default:
        console.error('Unexpected error:', error.message);
    }
  } else {
    console.error('Unknown error:', error);
  }
}
```

Error categories:

- `authentication` - API key or authentication issues
- `rate_limit` - Rate limit exceeded
- `validation` - Invalid input or configuration
- `network` - Network connectivity issues
- `provider` - Provider-specific errors
- `timeout` - Request timeout
- `unknown` - Unexpected errors
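Since `ChatbotError` exposes `isRetryable` and `retryDelay`, a small retry wrapper falls out naturally. A sketch, assuming `retryDelay` is in milliseconds:

```typescript
import { ChatbotError } from '@rumenx/chatbot';
import type { ChatResponse } from '@rumenx/chatbot';

// Retry a call while the thrown error reports itself as retryable.
async function chatWithRetry(
  run: () => Promise<ChatResponse>,
  maxAttempts = 3
): Promise<ChatResponse> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await run();
    } catch (error) {
      if (!(error instanceof ChatbotError) || !error.isRetryable || attempt >= maxAttempts) {
        throw error;
      }
      // Honor the suggested delay when provided, else back off linearly.
      const delayMs = error.retryDelay ?? 1000 * attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

const response = await chatWithRetry(() =>
  chatbot.chat({
    message: 'Hello!',
    metadata: { sessionId: 'session-1', userId: 'user-1' },
  })
);
```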
Input and output filtering:

```typescript
const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
  },
  security: {
    enableInputFilter: true, // Filter malicious input
    enableOutputFilter: true, // Filter inappropriate responses
    maxInputLength: 4000, // Prevent oversized inputs
    blockedPatterns: [/password/i, /credit card/i, /social security/i],
  },
});
```

Rate limiting:

```typescript
const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
  },
  rateLimit: {
    enabled: true,
    requestsPerMinute: 10,
    requestsPerHour: 100,
    requestsPerDay: 1000,
  },
});
```

For more details, see SECURITY.md.
We're constantly working to improve and expand the library. Here's what's planned for future releases:
The following providers are planned but not yet implemented. Community contributions are welcome!
| Provider | Status | Target Version | Notes |
|---|---|---|---|
| Meta Llama | 📋 Planned | v2.x | Llama 3.3, 3.2, 3.1 support via Ollama or cloud APIs |
| xAI Grok | 📋 Planned | v2.x | Grok-2, Grok-2-mini integration |
| DeepSeek | 📋 Planned | v2.x | DeepSeek-V3 and DeepSeek-R1 support |
| Ollama | 📋 Planned | v2.x | Local LLM support with Ollama |
| Mistral AI | 📋 Planned | v2.x | Mistral Large, Medium, Small models |
| Cohere | 📋 Planned | v2.x | Command R+, Command R models |
| Perplexity | 📋 Planned | v3.x | pplx-7b-online, pplx-70b-online |
| Framework | Status | Target Version | Description |
|---|---|---|---|
| React Components | 📋 Planned | v2.x | Pre-built `<ChatWidget />`, `<ChatInput />`, `<MessageList />` |
| Vue 3 Components | 📋 Planned | v2.x | Composition API components |
| Angular Components | 📋 Planned | v3.x | Standalone components for Angular 15+ |
| Svelte Components | 📋 Planned | v3.x | Svelte 5 components |
| Express Middleware | 📋 Planned | v2.x | `app.use(chatbot.middleware())` |
| Next.js Integration | 📋 Planned | v2.x | Server actions and route handlers |
| Fastify Plugin | 📋 Planned | v2.x | `fastify.register(chatbotPlugin)` |
- 🔮 Function Calling / Tool Use - Support for OpenAI functions, Anthropic tools
- 🔮 Multi-Modal Support - Image, audio, and video inputs
- 🔮 RAG Integration - Vector database integration for retrieval-augmented generation
- 🔮 Prompt Templates - Pre-built templates for common use cases
- 🔮 Agent Framework - Build autonomous agents with planning and execution
- 🔮 Fine-tuning Support - Train and deploy custom models
- 🔮 Cost Optimization - Automatic model selection based on budget
- 🔮 A/B Testing - Test different models and prompts
- ✅ OpenAI (GPT-4o, GPT-4 Turbo, o1, GPT-4o-mini)
- ✅ Anthropic (Claude Sonnet 4.5, Haiku 4.5, Opus 4.1)
- ✅ Google AI (Gemini 2.0, Gemini 1.5 Pro/Flash)
- ✅ Streaming support for all providers
- ✅ Conversation memory management
- ✅ Type-safe TypeScript APIs
- ✅ Error handling with retries
- ✅ Rate limiting and security
- ✅ Token usage tracking
- ✅ 94% test coverage
Want to help implement these features? Check out our Contributing Guide and:
- Pick a feature from the roadmap
- Open an issue to discuss implementation
- Submit a PR with your implementation
- Get recognized as a contributor!
Priority is given to features with community interest. Open an issue to vote on features you'd like to see!
The library has 94% test coverage with 880+ tests.
```bash
# Run all tests
npm test

# Run tests with coverage
npm run test:coverage

# Run tests in watch mode
npm run test:watch

# Run fast tests only
npm run test:fast
```
Example test:

```typescript
import { Chatbot } from '@rumenx/chatbot';

describe('Chatbot', () => {
  it('should send a message and receive a response', async () => {
    const chatbot = new Chatbot({
      provider: {
        provider: 'openai',
        apiKey: 'test-key',
        model: 'gpt-4o-mini',
      },
    });

    const response = await chatbot.chat({
      message: 'Hello!',
      metadata: { sessionId: 'test', userId: 'test' },
    });

    expect(response.content).toBeDefined();
    expect(response.metadata.provider).toBe('openai');
  });
});
```

We welcome contributions! Please see CONTRIBUTING.md for details.
```bash
# Clone the repository
git clone https://github.com/RumenDamyanov/npm-chatbot.git
cd npm-chatbot

# Install dependencies
npm install

# Run tests
npm test

# Build the project
npm run build

# Run examples
npm run example:openai
```

Please read our Code of Conduct before contributing.
This project is licensed under the MIT License - see the LICENSE.md file for details.
If you find this library helpful, please consider:
- ⭐ Starring the repository
- 🐛 Reporting bugs via GitHub Issues
- 💡 Suggesting new features
- 📝 Improving documentation
- 💰 Sponsoring the project
See CHANGELOG.md for version history and release notes.
Made with ❤️ by Rumen Damyanov