A modern web-based code editor that combines Monaco Editor with TypeScript Language Server Protocol (LSP) support, AI-powered error analysis, and inline code completions. The system provides real-time TypeScript diagnostics with intelligent fix suggestions, GitHub Copilot-style completions powered by Claude AI, comprehensive file system integration, and detailed LSP server monitoring.
✨ Key Features:
- 📝 Monaco Editor with full TypeScript LSP integration
- 🤖 AI-Powered Fixes using Claude 4 Sonnet for intelligent code suggestions
- ✨ Inline AI Completions - GitHub Copilot-style code completions as you type
- 📁 File Explorer with drag-and-drop and File System Access API support
- 📊 LSP Server Status Monitoring - Real-time server health and connection tracking
- 🔌 Multi-File Editing with tabbed interface and file system integration
- 📋 Real-time Activity Logging with categorized LSP and AI activity
- 💾 Smart Caching for improved performance and reduced API calls
- 🎯 Demo Project - Instant TypeScript project to test all features
The project consists of three main components working together to provide a complete IDE-like experience:
- Client - React-based Monaco editor with comprehensive LSP integration, file system access, and AI features
- Bridge Server - WebSocket bridge facilitating communication between browser and TypeScript Language Server
- AI Server - Express server providing AI-powered code analysis and completions
```
┌─────────────────────────────────────────────────────────────────┐
│                     Monaco Editor (Browser)                     │
│ ┌─────────────┬──────────────┬──────────────┬───────────────┐   │
│ │    File     │    Monaco    │    AI Fix    │  LSP Server   │   │
│ │  Explorer   │    Editor    │    Panel     │    Status     │   │
│ │             │              │              │               │   │
│ │ • File Sys  │ • LSP        │ • AI         │ • Connection  │   │
│ │ • Demo Proj │ • AI         │ • Fix        │ • Health      │   │
│ │ • Drag&Drop │ • Completions│ • Suggestions│ • Diagnostics │   │
│ └─────────────┴──────────────┴──────────────┴───────────────┘   │
└─────────────────────────────────────────────────────────────────┘
        │                     │                     │
        │ WebSocket (3001)    │ HTTP (3002)         │
        v                     v                     v
┌─────────────────┐  ┌────────────────────┐  ┌─────────────┐
│  Bridge Server  │  │ AI Analysis Server │  │ Claude API  │
│                 │  │                    │  │             │
│ • WebSocket ←───┼──┼─→ Express Routes   │  │ • Fix       │
│ • JSON-RPC ────→│  │ • Rate Limiting    │  │ • Complete  │
│ • LSP Process   │  │ • Smart Caching    │  │             │
└─────────────────┘  └────────────────────┘  └─────────────┘
        │
        │ stdio
        v
┌─────────────────┐
│ TypeScript LSP  │
│                 │
│ • Diagnostics   │
│ • Hover Info    │
│ • Completions   │
│ • Definitions   │
└─────────────────┘
```
- 📝 Monaco Editor with full TypeScript/JavaScript support
- 🔌 Language Server Protocol integration for real-time diagnostics, hover info, and code navigation
- 📁 File Explorer with File System Access API support and drag-and-drop functionality
- 🔍 Multi-File Editing with tabbed interface and seamless file switching
- 🎯 Demo Project - Instant TypeScript project with multiple files for testing
- 🤖 AI-Powered Error Analysis using Claude 4 Sonnet for intelligent fix suggestions
- ✨ Inline AI Completions - GitHub Copilot-style code completions with smart triggering
- 🎯 Context-Aware Suggestions - AI receives LSP server status and full code context
- 💡 One-Click Fixes - Apply AI suggestions directly in the editor
- 💾 Smart Caching - Reduces API calls while maintaining performance
- 🔒 Rate Limiting and Request Validation for production readiness
- 📊 LSP Server Status - Real-time connection monitoring with health indicators
- 📋 Activity Logging with categorized logs (LSP, Editor, System, AI)
- 🔍 Server Health Tracking - Message throughput, error rates, and connection status
- 📈 Performance Metrics - Processing times and cache hit rates
- 🎨 Modern UI with glassmorphism effects and dark theme
- 📱 Responsive Design that works across different screen sizes
- ⚡ Fast Performance - Optimized for < 500ms AI completion responses
- 🔧 Hot Reload development setup with Vite
- 📦 Type-Safe implementation with comprehensive TypeScript coverage
- Node.js 18+
- npm or yarn
- Anthropic API key (for AI features)
- TypeScript Language Server:

  ```bash
  npm install -g typescript-language-server typescript
  ```
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd monaco-lsp
  ```

- Install dependencies for all components:

  ```bash
  # Install root dependencies
  npm install

  # Install client dependencies
  cd client && npm install

  # Install bridge server dependencies
  cd ../bridge-server && npm install

  # Install AI server dependencies
  cd ../ai-server && npm install
  ```

- Configure the AI server:

  ```bash
  cd ai-server
  cp .env.example .env
  ```

  Edit `.env` and add your Anthropic API key:

  ```bash
  ANTHROPIC_API_KEY=your-anthropic-api-key
  DEFAULT_MODEL=claude-4-sonnet-20250514
  ```

- Start all services, each in a separate terminal:

  ```bash
  # Terminal 1: Start Bridge server (port 3001)
  cd bridge-server
  npm start

  # Terminal 2: Start AI server (port 3002)
  cd ai-server
  npm run dev

  # Terminal 3: Start client (port 5173)
  cd client
  npm run dev
  ```

- Open the application: navigate to `http://localhost:5173`
```
monaco-lsp/
├── client/                        # React-based Monaco editor application
│   ├── src/
│   │   ├── components/            # UI components
│   │   │   ├── AIFixPanel.tsx         # AI-powered fix suggestions panel
│   │   │   ├── FileExplorer.tsx       # File system explorer with demo project
│   │   │   ├── FileTabs.tsx           # Multi-file tab interface
│   │   │   ├── LogPanel.tsx           # Real-time activity logs with categories
│   │   │   ├── MonacoVSCodeEditor.tsx # Monaco editor with VSCode API integration
│   │   │   ├── ServerStatus.tsx       # LSP server health monitoring
│   │   │   └── index.ts               # Component barrel exports
│   │   ├── services/              # Business logic and integrations
│   │   │   ├── aiAgent.ts             # AI agent for error analysis and fixes
│   │   │   ├── aiAgent-enhanced.ts    # Enhanced AI agent (reserved for future)
│   │   │   ├── aiCompletions.ts       # GitHub Copilot-style completions
│   │   │   ├── editorModels.ts        # Monaco model management
│   │   │   ├── fileSystem.ts          # File system access and demo projects
│   │   │   ├── lspMonitor.ts          # LSP server health tracking
│   │   │   └── index.ts               # Service barrel exports
│   │   ├── utils/                 # Utility functions
│   │   │   ├── logger.ts              # Event-driven logging system
│   │   │   └── index.ts               # Utility barrel exports
│   │   ├── lsp/                   # LSP integration layer
│   │   │   └── directLSPSetup.ts      # Manual LSP protocol implementation
│   │   ├── types/                 # Shared TypeScript types
│   │   │   ├── index.ts               # Core type definitions and barrel exports
│   │   │   └── lsp-status.ts          # LSP server status types
│   │   ├── constants/             # Configuration and constants
│   │   │   └── index.ts               # API endpoints, editor config, thresholds
│   │   ├── App.tsx                # Main application component
│   │   └── main.tsx               # Application entry point
│   ├── AI_COMPLETIONS.md          # AI completions feature documentation
│   └── package.json
│
├── bridge-server/                 # WebSocket LSP bridge server
│   ├── src/
│   │   └── index.ts               # WebSocket to LSP stdio translation
│   ├── dist/                      # Compiled JavaScript output
│   └── package.json
│
└── ai-server/                     # AI analysis and completion server
    ├── api/
    │   └── index.ts               # Express server entry point
    ├── src/
    │   ├── routes/                # API route handlers
    │   │   └── analyze.ts         # Error analysis and completion endpoints
    │   ├── services/              # AI and analysis services
    │   │   ├── ai.ts              # AI model integration (Claude/OpenAI)
    │   │   └── codeAnalyzer.ts    # Code analysis and context extraction
    │   ├── types/                 # TypeScript type definitions
    │   │   └── index.ts           # API and analysis types
    │   ├── prompts/               # AI prompt templates
    │   │   ├── completion.ts      # Completion-specific prompts
    │   │   └── typescript.ts      # TypeScript analysis prompts
    │   └── index.ts               # Main server logic (if separate from api/)
    └── package.json
```
- User types code in Monaco Editor with multi-file support
- Editor sends LSP requests via WebSocket to Bridge server (port 3001)
- Bridge server forwards messages to TypeScript Language Server via stdio
- LSP sends diagnostics back through Bridge to Monaco with health tracking
- LSP Monitor tracks server status, message counts, and error rates
- ServerStatus component displays real-time connection health
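The health-tracking side of this flow can be sketched as a small bookkeeping class. All identifiers below are illustrative, not the actual `lspMonitor.ts` API:

```typescript
// Sketch of connection-health bookkeeping similar to what an LSP
// monitor service could do: count processed messages, track errors,
// and expose a snapshot for a status component to render.
interface LspHealth {
  connected: boolean;
  messagesProcessed: number;
  errorCount: number;
  errorRate: number; // errors / messages; 0 until traffic arrives
}

class LspMonitor {
  private health: LspHealth = {
    connected: false,
    messagesProcessed: 0,
    errorCount: 0,
    errorRate: 0,
  };

  setConnected(connected: boolean): void {
    this.health.connected = connected;
  }

  // Record one processed LSP message and whether it failed.
  recordMessage(isError = false): void {
    this.health.messagesProcessed += 1;
    if (isError) this.health.errorCount += 1;
    this.health.errorRate =
      this.health.errorCount / this.health.messagesProcessed;
  }

  // Return a copy so UI code cannot mutate internal state.
  snapshot(): LspHealth {
    return { ...this.health };
  }
}
```

A `ServerStatus`-style component would poll or subscribe to `snapshot()` and render the connection flag, message count, and error rate.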
- AI Agent processes diagnostics automatically:
  - Sends errors with LSP context to AI server (port 3002)
  - Includes server health status in AI requests for intelligent decisions
  - Falls back to local patterns if AI unavailable
  - Caches suggestions (5-minute TTL) and notifies subscribers
- AIFixPanel updates via subscription pattern with confidence scores
- User applies fixes with one-click application directly in editor
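The cache-and-notify step can be sketched as follows; the store name, types, and methods are hypothetical stand-ins for the real `aiAgent.ts` internals:

```typescript
// Sketch of a suggestion store combining a TTL cache with the
// observer (subscription) pattern described above. Names are
// illustrative, not the project's actual exports.
interface FixSuggestion {
  id: string;
  title: string;
  confidence: number; // 0-1 confidence score
}

type Listener = (suggestions: FixSuggestion[]) => void;

const TTL_MS = 5 * 60 * 1000; // 5-minute cache TTL, as described above

class SuggestionStore {
  private cache = new Map<string, { at: number; value: FixSuggestion[] }>();
  private listeners = new Set<Listener>();

  // Register a subscriber (e.g. an AIFixPanel); returns an unsubscribe handle.
  subscribe(fn: Listener): () => void {
    this.listeners.add(fn);
    return () => this.listeners.delete(fn);
  }

  // Return cached suggestions unless the entry has expired.
  getCached(key: string, now = Date.now()): FixSuggestion[] | undefined {
    const hit = this.cache.get(key);
    if (!hit || now - hit.at > TTL_MS) return undefined;
    return hit.value;
  }

  // Cache fresh suggestions and notify every subscriber.
  publish(key: string, suggestions: FixSuggestion[], now = Date.now()): void {
    this.cache.set(key, { at: now, value: suggestions });
    for (const fn of this.listeners) fn(suggestions);
  }
}
```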
- File Explorer enables loading local directories or demo projects
- File System Service handles File System Access API with fallbacks
- Editor Models manages multiple Monaco models for different files
- File Tabs provides seamless switching between open files
- Inline completions trigger as you type with smart detection:
  - Debounced requests (300ms) to prevent API spam
  - Context extraction (50 lines before, 10 lines after cursor)
  - Smart triggering based on patterns (dot notation, function calls, etc.)
  - Caching (30s) for repeated contexts
- Ghost text appears inline; press Tab to accept
- Fallback to cached completions when API unavailable
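The context-extraction step above can be sketched as a pure helper. The function and constant names are assumptions, and the 50/10 window sizes are configurable:

```typescript
// Sketch of cursor-window extraction: take up to LINES_BEFORE lines
// (plus the cursor line itself) as the "before" context and up to
// LINES_AFTER lines as the "after" context.
const LINES_BEFORE = 50;
const LINES_AFTER = 10;

function extractContext(
  source: string,
  cursorLine: number, // 0-based line index of the cursor
): { before: string; after: string } {
  const lines = source.split("\n");
  const start = Math.max(0, cursorLine - LINES_BEFORE);
  return {
    before: lines.slice(start, cursorLine + 1).join("\n"),
    after: lines.slice(cursorLine + 1, cursorLine + 1 + LINES_AFTER).join("\n"),
  };
}
```

Bounding the window keeps completion prompts small, which is part of how the project targets sub-500ms responses.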
Analyzes TypeScript code and returns AI-generated fix suggestions with enhanced LSP context awareness.
**Request Body:**

```json
{
  "code": "const x: string = 123;",
  "diagnostics": [{
    "range": {
      "start": { "line": 0, "character": 18 },
      "end": { "line": 0, "character": 21 }
    },
    "severity": 1,
    "message": "Type 'number' is not assignable to type 'string'."
  }],
  "language": "typescript",
  "context": {
    "lspStatus": {
      "connected": true,
      "healthy": true,
      "serverName": "TypeScript Language Server",
      "serverVersion": "4.9.5",
      "capabilities": {
        "completionProvider": true,
        "hoverProvider": true,
        "definitionProvider": true,
        "diagnosticProvider": true
      },
      "messagesProcessed": 1234,
      "errorRate": 0.01
    },
    "diagnosticsActive": true
  }
}
```

**Response:**

```json
{
  "suggestions": [{
    "id": "unique-id-123",
    "title": "Convert to string",
    "description": "Convert the number to a string using toString()",
    "fix": {
      "range": {
        "startLine": 0,
        "startColumn": 18,
        "endLine": 0,
        "endColumn": 21
      },
      "text": "(123).toString()"
    },
    "confidence": 0.95,
    "explanation": "This converts the number to a string to match the expected type"
  }],
  "model": "claude-4-sonnet-20250514",
  "processingTime": 1234
}
```

Generates GitHub Copilot-style inline code completions with smart context analysis.
**Request Body:**

```json
{
  "context": {
    "before": "function calculateTotal(items: Item[]): number {\n  return items.",
    "after": "\n}",
    "language": "typescript"
  },
  "prefix": "function calculateTotal(items: Item[]): number {\n  return items.",
  "language": "typescript"
}
```

**Response:**

```json
{
  "completion": "reduce((sum, item) => sum + item.price, 0)",
  "model": "claude-4-sonnet-20250514",
  "processingTime": 234
}
```
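A client-side call to this endpoint might look like the sketch below. The helper name is made up, and the route path in the comment is an assumption; check `ai-server/src/routes/analyze.ts` for the actual mount point:

```typescript
// Hypothetical helper that builds the completion request body in the
// shape documented above.
interface CompletionRequest {
  context: { before: string; after: string; language: string };
  prefix: string;
  language: string;
}

function buildCompletionRequest(
  before: string,
  after: string,
  language = "typescript",
): CompletionRequest {
  return {
    context: { before, after, language },
    prefix: before, // text before the cursor doubles as the prefix
    language,
  };
}

// Usage sketch (endpoint path is an assumption):
// fetch("http://localhost:3002/api/complete", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildCompletionRequest(before, after)),
// });
```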
### `GET /api/health`
Health check endpoint.
**Response:**
```json
{
"status": "ok",
"timestamp": "2024-01-01T12:00:00.000Z",
"aiService": "connected",
"model": "claude-4-sonnet-20250514"
}
```

| Variable | Description | Default |
|---|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key for Claude | Required |
| `OPENAI_API_KEY` | OpenAI API key (optional) | Optional |
| `PORT` | AI server port | `3002` |
| `CORS_ORIGIN` | Allowed CORS origin | `http://localhost:5173` |
| `DEFAULT_MODEL` | Default AI model | `claude-4-sonnet-20250514` |
| `MAX_TOKENS` | Max tokens for AI response | `2000` |
| `TEMPERATURE` | AI creativity (0-1) | `0.3` |
| `MAX_REQUESTS_PER_MINUTE` | Rate limit per IP | `20` |
- Runs on port `3001`
- WebSocket endpoint: `ws://localhost:3001`
- Spawns the TypeScript Language Server process

- Development port: `5173`
- Configuration centralized in `constants/index.ts`:
  - LSP WebSocket URL: `ws://localhost:3001`
  - AI Server URL: `http://localhost:3002`
  - Editor options and default content
  - AI confidence thresholds
  - Completion debounce delays
- Clean architecture with:
  - Functional AI agent service
  - AI completions provider
  - Shared types in `types/`
  - Barrel exports for cleaner imports
- **Anthropic (Recommended):**
  - `claude-4-sonnet-20250514` - Latest and most capable
  - `claude-3-opus`, `claude-3-sonnet` - Previous versions
- **OpenAI (Optional):**
  - `gpt-4o`, `gpt-4`, `gpt-3.5-turbo`
- Context Extraction: Gathers code around errors with 10 lines of context
- Smart Caching: Caches suggestions for 5 minutes to reduce API calls
- Structured Output: Uses Zod schemas for reliable suggestion format
- Confidence Scoring: Each suggestion includes a confidence score (0-1)
- Fallback Handling: Returns empty array if AI fails (no breaking)
- Trigger Detection: Smart patterns detect when to show completions
- Context Building: Extracts ~20 lines before and 5 after cursor
- Debouncing: Waits 300ms after typing stops before requesting
- Fast Response: Optimized prompts for < 500ms latency
- Multi-line Support: Detects functions, classes for longer completions
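The trigger-detection step can be approximated with a handful of regular expressions; the patterns below are illustrative guesses, not the project's actual trigger list:

```typescript
// Sketch of "smart triggering": only request a completion when the
// text before the cursor ends in a pattern that usually precedes
// useful completions (dot member access, an open call, an assignment).
const TRIGGER_PATTERNS: RegExp[] = [
  /\.\s*$/,       // dot notation: "items."
  /\(\s*$/,       // open function call: "reduce("
  /[=:]\s*$/,     // assignment or type position: "const total ="
  /\breturn\s*$/, // right after a return keyword
];

function shouldTriggerCompletion(textBeforeCursor: string): boolean {
  return TRIGGER_PATTERNS.some((p) => p.test(textBeforeCursor));
}
```

Gating requests this way, on top of the 300ms debounce, keeps the completion endpoint from being hit on every keystroke.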
```bash
# Client development
cd client
npm run dev       # Start dev server
npm run build     # Build for production
npm run preview   # Preview production build
```

```bash
# Bridge Server
cd bridge-server
npm run dev       # Start with nodemon
npm start         # Start production
```

```bash
# AI Server
cd ai-server
npm run dev       # Start with hot reload
npm run build     # Compile TypeScript
npm run typecheck # Type checking
npm start         # Start production
```

- Open the application at `http://localhost:5173`
- Load a demo project or open a local folder using the File Explorer
- Type TypeScript code with errors to see LSP diagnostics
- Monitor server status in the Server Status panel (top-right)
- View AI suggestions in the "AI Fixes" tab for automatic error analysis
- Apply fixes with one-click to see AI-powered corrections
- Try inline completions - start typing to see ghost text suggestions
- Press Tab to accept completions (GitHub Copilot-style)
- Monitor activity in the LSP Logs tab for real-time communication
- Switch files using the tab interface to test multi-file editing
- React 18 - UI framework with functional components
- Monaco Editor - VSCode's code editor
- @codingame/monaco-vscode-api - VSCode service integration
- TypeScript - Full type safety
- Vite - Fast build tool with HMR
- Tailwind CSS - Utility-first styling
- Architecture:
  - Functional programming approach
  - Event-driven logging system
  - Observer pattern for state updates
  - Centralized configuration
- Node.js - Runtime
- ws - WebSocket library
- TypeScript Language Server - LSP implementation
- vscode-languageserver-protocol - Protocol types
- vscode-ws-jsonrpc - JSON-RPC over WebSocket
- Express.js - HTTP framework
- @anthropic-ai/sdk - Official Anthropic SDK
- Zod - Runtime type validation
- TypeScript - Type safety
- In-memory cache - Performance optimization
- express-rate-limit - Rate limiting
- API keys stored in environment variables
- CORS configured for local development
- Rate limiting prevents abuse
- Request size limited to 1MB
- Input validation with Zod schemas
- Caching: 5-minute cache for AI suggestions, 30s for completions
- Debouncing: Editor changes debounced before analysis (300ms for completions)
- Selective Analysis: Only analyzes code with diagnostics
- Context Limiting: Sends only relevant code context
- Connection Pooling: Reuses WebSocket connections
- Completion Optimization: Lower temperature, fewer tokens for speed
- Smart Triggers: Only shows completions after relevant patterns
- **"Cannot connect to LSP server"**
  - Ensure Bridge server is running on port 3001
  - Check WebSocket URL in client config

- **"AI analysis failed"**
  - Verify API key is set correctly
  - Check AI server logs for errors
  - Ensure model name is correct

- **"No fix suggestions appearing"**
  - Check browser console for errors
  - Verify AI server is running on port 3002
  - Look at Activity Log for error messages
```bash
# Build all components
cd client && npm run build
cd ../bridge-server && npm run build
cd ../ai-server && npm run build
```

- Set production environment variables
- Configure CORS for production domain
- Set up reverse proxy for WebSocket
- Enable HTTPS for security
- ARCHITECTURE.md - Complete system architecture
- API_REFERENCE.md - Detailed API documentation
- ai-server/ARCHITECTURE.md - AI server specifics
- Inline JSDoc comments throughout the codebase
- AI Agent (`services/aiAgent.ts`) - Functional AI integration for error fixes
- AI Completions (`services/aiCompletions.ts`) - Inline completion provider
- Logger (`utils/logger.ts`) - Event-driven logging system
- LSP Setup (`lsp/directLSPSetup.ts`) - Manual LSP implementation
- LSP Monitor (`services/lspMonitor.ts`) - Connection health tracking
- Types (`types/index.ts`) - Shared TypeScript interfaces
- Constants (`constants/index.ts`) - Centralized configuration
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE file for details