
Monaco LSP with AI-Powered Code Analysis

A modern web-based code editor that combines Monaco Editor with TypeScript Language Server Protocol (LSP) support, AI-powered error analysis, and inline code completions. The system provides real-time TypeScript diagnostics with intelligent fix suggestions, GitHub Copilot-style completions powered by Claude AI, comprehensive file system integration, and detailed LSP server monitoring.

✨ Key Features:

  • 📝 Monaco Editor with full TypeScript LSP integration
  • 🤖 AI-Powered Fixes using Claude 4 Sonnet for intelligent code suggestions
  • Inline AI Completions - GitHub Copilot-style code completions as you type
  • 📁 File Explorer with drag-and-drop and File System Access API support
  • 📊 LSP Server Status Monitoring - Real-time server health and connection tracking
  • 🔌 Multi-File Editing with tabbed interface and file system integration
  • 📋 Real-time Activity Logging with categorized LSP and AI activity
  • 💾 Smart Caching for improved performance and reduced API calls
  • 🎯 Demo Project - Instant TypeScript project to test all features

🏗️ Architecture Overview

The project consists of three main components working together to provide a complete IDE-like experience:

  1. Client - React-based Monaco editor with comprehensive LSP integration, file system access, and AI features
  2. Bridge Server - WebSocket bridge facilitating communication between browser and TypeScript Language Server
  3. AI Server - Express server providing AI-powered code analysis and completions
┌─────────────────────────────────────────────────────────────────┐
│                    Monaco Editor (Browser)                     │
│  ┌─────────────┬─────────────┬─────────────┬─────────────────┐ │
│  │ File        │ Monaco      │ AI Fix      │ LSP Server      │ │
│  │ Explorer    │ Editor      │ Panel       │ Status          │ │
│  │             │             │             │                 │ │
│  │ • File Sys  │ • LSP       │ • AI        │ • Connection    │ │
│  │ • Demo Proj │ • AI        │ • Fix       │ • Health        │ │
│  │ • Drag&Drop │ • Complete  │ • Suggest   │ • Diagnostics   │ │
│  └─────────────┴─────────────┴─────────────┴─────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
         │                         │                    │
         │ WebSocket (3001)        │ HTTP (3002)        │
         v                         v                    v
┌─────────────────┐    ┌────────────────────┐    ┌─────────────┐
│ Bridge Server   │    │ AI Analysis Server │    │ Claude API  │
│                 │    │                    │    │             │
│ • WebSocket ←───┼────┼─→ Express Routes   │    │ • Fix       │
│ • JSON-RPC ────→│    │ • Rate Limiting    │    │ • Complete  │
│ • LSP Process   │    │ • Smart Caching    │    │             │
└─────────────────┘    └────────────────────┘    └─────────────┘
         │
         │ stdio
         v
┌─────────────────┐
│ TypeScript LSP  │
│                 │
│ • Diagnostics   │
│ • Hover Info    │
│ • Completions   │
│ • Definitions   │
└─────────────────┘

✨ Features

Core Editing Experience

  • 📝 Monaco Editor with full TypeScript/JavaScript support
  • 🔌 Language Server Protocol integration for real-time diagnostics, hover info, and code navigation
  • 📁 File Explorer with File System Access API support and drag-and-drop functionality
  • 🔍 Multi-File Editing with tabbed interface and seamless file switching
  • 🎯 Demo Project - Instant TypeScript project with multiple files for testing

AI-Powered Intelligence

  • 🤖 AI-Powered Error Analysis using Claude 4 Sonnet for intelligent fix suggestions
  • Inline AI Completions - GitHub Copilot-style code completions with smart triggering
  • 🎯 Context-Aware Suggestions - AI receives LSP server status and full code context
  • 💡 One-Click Fixes - Apply AI suggestions directly in the editor
  • 💾 Smart Caching - Reduces API calls while maintaining performance
  • 🔒 Rate Limiting and Request Validation for production readiness

Monitoring & Debugging

  • 📊 LSP Server Status - Real-time connection monitoring with health indicators
  • 📋 Activity Logging with categorized logs (LSP, Editor, System, AI)
  • 🔍 Server Health Tracking - Message throughput, error rates, and connection status
  • 📈 Performance Metrics - Processing times and cache hit rates

Developer Experience

  • 🎨 Modern UI with glassmorphism effects and dark theme
  • 📱 Responsive Design that works across different screen sizes
  • Fast Performance - Optimized for < 500ms AI completion responses
  • 🔧 Hot Reload development setup with Vite
  • 📦 Type-Safe implementation with comprehensive TypeScript coverage

🚀 Quick Start

Prerequisites

  • Node.js 18+
  • npm or yarn
  • Anthropic API key (for AI features)
  • TypeScript Language Server:
    npm install -g typescript-language-server typescript

Installation

  1. Clone the repository:

    git clone <repository-url>
    cd monaco-lsp
  2. Install dependencies for all components:

    # Install root dependencies
    npm install
    
    # Install client dependencies
    cd client && npm install
    
    # Install bridge server dependencies
    cd ../bridge-server && npm install
    
    # Install AI server dependencies
    cd ../ai-server && npm install
  3. Configure the AI server:

    cd ai-server
    cp .env.example .env

    Edit .env and add your Anthropic API key:

    ANTHROPIC_API_KEY=your-anthropic-api-key
    DEFAULT_MODEL=claude-4-sonnet-20250514
  4. Start all services:

    In separate terminals:

    # Terminal 1: Start Bridge server (port 3001)
    cd bridge-server
    npm start
    
    # Terminal 2: Start AI server (port 3002)
    cd ai-server
    npm run dev
    
    # Terminal 3: Start client (port 5173)
    cd client
    npm run dev
  5. Open the application: Navigate to http://localhost:5173

📁 Project Structure

monaco-lsp/
├── client/                    # React-based Monaco editor application
│   ├── src/
│   │   ├── components/        # UI components
│   │   │   ├── AIFixPanel.tsx       # AI-powered fix suggestions panel
│   │   │   ├── FileExplorer.tsx     # File system explorer with demo project
│   │   │   ├── FileTabs.tsx         # Multi-file tab interface
│   │   │   ├── LogPanel.tsx         # Real-time activity logs with categories
│   │   │   ├── MonacoVSCodeEditor.tsx # Monaco editor with VSCode API integration
│   │   │   ├── ServerStatus.tsx     # LSP server health monitoring
│   │   │   └── index.ts             # Component barrel exports
│   │   ├── services/          # Business logic and integrations
│   │   │   ├── aiAgent.ts           # AI agent for error analysis and fixes
│   │   │   ├── aiAgent-enhanced.ts  # Enhanced AI agent (reserved for future)
│   │   │   ├── aiCompletions.ts     # GitHub Copilot-style completions
│   │   │   ├── editorModels.ts      # Monaco model management
│   │   │   ├── fileSystem.ts        # File system access and demo projects
│   │   │   ├── lspMonitor.ts        # LSP server health tracking
│   │   │   └── index.ts             # Service barrel exports
│   │   ├── utils/             # Utility functions
│   │   │   ├── logger.ts            # Event-driven logging system
│   │   │   └── index.ts             # Utility barrel exports
│   │   ├── lsp/               # LSP integration layer
│   │   │   └── directLSPSetup.ts    # Manual LSP protocol implementation
│   │   ├── types/             # Shared TypeScript types
│   │   │   ├── lsp-status.ts        # LSP server status types
│   │   │   └── index.ts             # Core type definitions and barrel exports
│   │   ├── constants/         # Configuration and constants
│   │   │   └── index.ts             # API endpoints, editor config, thresholds
│   │   ├── App.tsx            # Main application component
│   │   └── main.tsx           # Application entry point
│   ├── AI_COMPLETIONS.md     # AI completions feature documentation
│   └── package.json
│
├── bridge-server/           # WebSocket LSP bridge server
│   ├── src/
│   │   └── index.ts         # WebSocket to LSP stdio translation
│   ├── dist/                # Compiled JavaScript output
│   └── package.json
│
└── ai-server/              # AI analysis and completion server
    ├── api/
    │   └── index.ts         # Express server entry point
    ├── src/
    │   ├── routes/          # API route handlers
    │   │   └── analyze.ts   # Error analysis and completion endpoints
    │   ├── services/        # AI and analysis services
    │   │   ├── ai.ts        # AI model integration (Claude/OpenAI)
    │   │   └── codeAnalyzer.ts # Code analysis and context extraction
    │   ├── types/           # TypeScript type definitions
    │   │   └── index.ts     # API and analysis types
    │   ├── prompts/         # AI prompt templates
    │   │   ├── completion.ts # Completion-specific prompts
    │   │   └── typescript.ts # TypeScript analysis prompts
    │   └── index.ts         # Main server logic (if separate from api/)
    └── package.json

🔄 Data Flow

Core LSP Communication Flow

  1. User types code in Monaco Editor with multi-file support
  2. Editor sends LSP requests via WebSocket to Bridge server (port 3001)
  3. Bridge server forwards messages to TypeScript Language Server via stdio
  4. LSP sends diagnostics back through Bridge to Monaco with health tracking
  5. LSP Monitor tracks server status, message counts, and error rates
  6. ServerStatus component displays real-time connection health
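
The bridge in steps 2-4 is conceptually a thin forwarder between the browser's WebSocket and the language server's stdio. The following is a minimal sketch of that idea, assuming a plain ws server and manual Content-Length framing; the actual implementation in bridge-server/src/index.ts uses vscode-ws-jsonrpc (see the Technical Stack section) and is more robust.

import { WebSocketServer, WebSocket } from 'ws';
import { spawn } from 'node:child_process';

// Sketch only: one TypeScript language server process per browser connection.
const wss = new WebSocketServer({ port: 3001 });

wss.on('connection', (socket: WebSocket) => {
  const lsp = spawn('typescript-language-server', ['--stdio']);

  // Browser -> LSP: wrap each JSON-RPC message in a Content-Length header.
  socket.on('message', (data) => {
    const json = data.toString();
    lsp.stdin.write(`Content-Length: ${Buffer.byteLength(json)}\r\n\r\n${json}`);
  });

  // LSP -> Browser: strip the headers and forward each JSON-RPC payload.
  let buffered = Buffer.alloc(0);
  lsp.stdout.on('data', (chunk: Buffer) => {
    buffered = Buffer.concat([buffered, chunk]);
    for (;;) {
      const headerEnd = buffered.indexOf('\r\n\r\n');
      if (headerEnd === -1) break;
      const header = buffered.subarray(0, headerEnd).toString();
      const length = Number(/Content-Length: (\d+)/i.exec(header)?.[1] ?? 0);
      const messageEnd = headerEnd + 4 + length;
      if (buffered.length < messageEnd) break; // wait for the rest of the body
      socket.send(buffered.subarray(headerEnd + 4, messageEnd).toString());
      buffered = buffered.subarray(messageEnd);
    }
  });

  socket.on('close', () => lsp.kill());
});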

AI-Powered Features Flow

  1. AI Agent processes diagnostics automatically:
    • Sends errors with LSP context to AI server (port 3002)
    • Includes server health status in AI requests for intelligent decisions
    • Falls back to local patterns if AI unavailable
    • Caches suggestions (5-minute TTL) and notifies subscribers
  2. AIFixPanel updates via subscription pattern with confidence scores
  3. User applies fixes with one-click application directly in editor

File System Integration Flow

  1. File Explorer enables loading local directories or demo projects
  2. File System Service handles File System Access API with fallbacks
  3. Editor Models manages multiple Monaco models for different files
  4. File Tabs provides seamless switching between open files
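
A minimal sketch of step 2's File System Access API path, with a graceful fallback when the API is unavailable; the function name and file filter here are illustrative, not the repo's fileSystem.ts API.

// Load TypeScript-related files from a local directory, if the browser supports it.
export async function openLocalDirectory(): Promise<Map<string, string>> {
  const files = new Map<string, string>();

  if (!('showDirectoryPicker' in window)) {
    // Fallback path: e.g. an <input type="file" webkitdirectory> element
    // or the bundled demo project.
    return files;
  }

  const dir = await (window as any).showDirectoryPicker();
  for await (const [name, handle] of dir.entries()) {
    if (handle.kind === 'file' && /\.(ts|tsx|js|jsx|json)$/.test(name)) {
      const file = await handle.getFile();
      files.set(name, await file.text());
    }
  }
  return files;
}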

AI Completions Flow

  1. Inline completions trigger as you type with smart detection:
    • Debounced requests (300ms) to prevent API spam
    • Context extraction (50 lines before, 10 lines after cursor)
    • Smart triggering based on patterns (dot notation, function calls, etc.)
    • Caching (30s) for repeated contexts
    • Ghost text appears inline; press Tab to accept
    • Fallback to cached completions when API unavailable
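
A compressed sketch of how such a provider can be registered with Monaco, assuming the /api/complete endpoint documented in the next section. Debouncing, trigger checks, and the 30s cache are omitted for brevity, and the helper names are illustrative.

import * as monaco from 'monaco-editor';

const AI_SERVER_URL = 'http://localhost:3002';

async function fetchCompletion(before: string, after: string): Promise<string> {
  const res = await fetch(`${AI_SERVER_URL}/api/complete`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      context: { before, after, language: 'typescript' },
      prefix: before,
      language: 'typescript',
    }),
  });
  return (await res.json()).completion ?? '';
}

monaco.languages.registerInlineCompletionsProvider('typescript', {
  async provideInlineCompletions(model, position) {
    // Context window roughly matching the flow above: lines before and after the cursor.
    const before = model.getValueInRange(new monaco.Range(
      Math.max(1, position.lineNumber - 50), 1,
      position.lineNumber, position.column,
    ));
    const endLine = Math.min(model.getLineCount(), position.lineNumber + 10);
    const after = model.getValueInRange(new monaco.Range(
      position.lineNumber, position.column,
      endLine, model.getLineMaxColumn(endLine),
    ));
    const completion = await fetchCompletion(before, after);
    return { items: completion ? [{ insertText: completion }] : [] };
  },
  freeInlineCompletions() { /* nothing to release in this sketch */ },
});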

📡 AI Server API

POST /api/analyze-errors

Analyzes TypeScript code and returns AI-generated fix suggestions with enhanced LSP context awareness.

Request Body:

{
  "code": "const x: string = 123;",
  "diagnostics": [{
    "range": {
      "start": { "line": 0, "character": 18 },
      "end": { "line": 0, "character": 21 }
    },
    "severity": 1,
    "message": "Type 'number' is not assignable to type 'string'."
  }],
  "language": "typescript",
  "context": {
    "lspStatus": {
      "connected": true,
      "healthy": true,
      "serverName": "TypeScript Language Server",
      "serverVersion": "4.9.5",
      "capabilities": {
        "completionProvider": true,
        "hoverProvider": true,
        "definitionProvider": true,
        "diagnosticProvider": true
      },
      "messagesProcessed": 1234,
      "errorRate": 0.01
    },
    "diagnosticsActive": true
  }
}

Response:

{
  "suggestions": [{
    "id": "unique-id-123",
    "title": "Convert to string",
    "description": "Convert the number to a string using toString()",
    "fix": {
      "range": {
        "startLine": 0,
        "startColumn": 18,
        "endLine": 0,
        "endColumn": 21
      },
      "text": "123.toString()"
    },
    "confidence": 0.95,
    "explanation": "This converts the number to a string to match the expected type"
  }],
  "model": "claude-4-sonnet-20250514",
  "processingTime": 1234
}
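
For illustration, a hedged client-side sketch of calling this endpoint and applying the first suggestion with Monaco's executeEdits. The context field is omitted and the helper name is hypothetical; the example assumes the zero-based positions shown above, which Monaco's 1-based ranges require shifting by one.

import * as monaco from 'monaco-editor';

async function applyFirstAiFix(
  editor: monaco.editor.IStandaloneCodeEditor,
  diagnostics: unknown[],
) {
  const res = await fetch('http://localhost:3002/api/analyze-errors', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      code: editor.getValue(),
      diagnostics,
      language: 'typescript',
    }),
  });
  const { suggestions } = await res.json();
  const fix = suggestions?.[0]?.fix;
  if (!fix) return;

  // Convert the zero-based positions from the response to Monaco's 1-based range.
  editor.executeEdits('ai-fix', [{
    range: new monaco.Range(
      fix.range.startLine + 1, fix.range.startColumn + 1,
      fix.range.endLine + 1, fix.range.endColumn + 1,
    ),
    text: fix.text,
  }]);
}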

POST /api/complete

Generates GitHub Copilot-style inline code completions with smart context analysis.

Request Body:

{
  "context": {
    "before": "function calculateTotal(items: Item[]): number {\n  return items.",
    "after": "\n}",
    "language": "typescript"
  },
  "prefix": "function calculateTotal(items: Item[]): number {\n  return items.",
  "language": "typescript"
}

Response:

{
  "completion": "reduce((sum, item) => sum + item.price, 0)",
  "model": "claude-4-sonnet-20250514",
  "processingTime": 234
}

GET /api/health

Health check endpoint.

Response:

{
  "status": "ok",
  "timestamp": "2024-01-01T12:00:00.000Z",
  "aiService": "connected",
  "model": "claude-4-sonnet-20250514"
}

⚙️ Configuration

AI Server Environment Variables

Variable                  Description                        Default
ANTHROPIC_API_KEY         Anthropic API key for Claude       Required
OPENAI_API_KEY            OpenAI API key (optional)          Optional
PORT                      AI server port                     3002
CORS_ORIGIN               Allowed CORS origin                http://localhost:5173
DEFAULT_MODEL             Default AI model                   claude-4-sonnet-20250514
MAX_TOKENS                Max tokens for AI response         2000
TEMPERATURE               AI creativity (0-1)                0.3
MAX_REQUESTS_PER_MINUTE   Rate limit per IP                  20

Bridge Server Configuration

  • Runs on port 3001
  • WebSocket endpoint: ws://localhost:3001
  • Spawns TypeScript Language Server process

Client Configuration

  • Development port: 5173
  • Configuration centralized in constants/index.ts:
    • LSP WebSocket URL: ws://localhost:3001
    • AI Server URL: http://localhost:3002
    • Editor options and default content
    • AI confidence thresholds
    • Completion debounce delays
  • Clean architecture with:
    • Functional AI agent service
    • AI completions provider
    • Shared types in types/
    • Barrel exports for cleaner imports
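
An illustrative shape for that centralized configuration; the threshold and cache values below are assumptions for the sketch, not the repo's actual numbers.

export const LSP_WEBSOCKET_URL = 'ws://localhost:3001';
export const AI_SERVER_URL = 'http://localhost:3002';

export const AI_CONFIG = {
  confidenceThreshold: 0.7,    // assumed value: minimum confidence to surface a fix
  completionDebounceMs: 300,   // matches the debounce delay documented above
  suggestionCacheTtlMs: 5 * 60 * 1000,
  completionCacheTtlMs: 30 * 1000,
} as const;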

🤖 AI Integration

Supported Models

  • Anthropic (Recommended):

    • claude-4-sonnet-20250514 - Latest and most capable
    • claude-3-opus, claude-3-sonnet - Previous versions
  • OpenAI (Optional):

    • gpt-4o, gpt-4, gpt-3.5-turbo

How AI Analysis Works

  1. Context Extraction: Gathers code around errors with 10 lines of context
  2. Smart Caching: Caches suggestions for 5 minutes to reduce API calls
  3. Structured Output: Uses Zod schemas for reliable suggestion format
  4. Confidence Scoring: Each suggestion includes a confidence score (0-1)
  5. Fallback Handling: Returns an empty array if the AI call fails, so the editor keeps working
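
As a sketch of point 3, a Zod schema matching the suggestion format shown in the AI Server API section might look like this; the repo's actual schema may differ in detail.

import { z } from 'zod';

export const FixRangeSchema = z.object({
  startLine: z.number(),
  startColumn: z.number(),
  endLine: z.number(),
  endColumn: z.number(),
});

export const SuggestionSchema = z.object({
  id: z.string(),
  title: z.string(),
  description: z.string(),
  fix: z.object({ range: FixRangeSchema, text: z.string() }),
  confidence: z.number().min(0).max(1),   // confidence score in the 0-1 range
  explanation: z.string(),
});

export const AnalyzeResponseSchema = z.object({
  suggestions: z.array(SuggestionSchema),
  model: z.string(),
  processingTime: z.number(),
});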

How AI Completions Work

  1. Trigger Detection: Smart patterns detect when to show completions
  2. Context Building: Extracts ~20 lines before and 5 after cursor
  3. Debouncing: Waits 300ms after typing stops before requesting
  4. Fast Response: Optimized prompts for < 500ms latency
  5. Multi-line Support: Detects functions, classes for longer completions
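
A small sketch of steps 1 and 3 above; the trigger patterns and 300ms delay are examples of the idea, not the exact rules in aiCompletions.ts.

const TRIGGER_PATTERNS = [
  /\.\w*$/,       // dot notation: "items."
  /\(\s*$/,       // just opened a function call
  /=>\s*$/,       // arrow function body about to start
  /return\s+$/,   // returning an expression
];

export function shouldTriggerCompletion(textBeforeCursor: string): boolean {
  return TRIGGER_PATTERNS.some((pattern) => pattern.test(textBeforeCursor));
}

export function debounce<A extends unknown[]>(fn: (...args: A) => void, ms = 300) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}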

🛠️ Development

Available Scripts

# Client development
cd client
npm run dev          # Start dev server
npm run build        # Build for production
npm run preview      # Preview production build

# Bridge Server
cd bridge-server
npm run dev          # Start with nodemon
npm start            # Start production

# AI Server
cd ai-server
npm run dev          # Start with hot reload
npm run build        # Compile TypeScript
npm run typecheck    # Type checking
npm start            # Start production

Testing the System

  1. Open the application at http://localhost:5173
  2. Load a demo project or open a local folder using the File Explorer
  3. Type TypeScript code with errors to see LSP diagnostics
  4. Monitor server status in the Server Status panel (top-right)
  5. View AI suggestions in the "AI Fixes" tab for automatic error analysis
  6. Apply fixes with one-click to see AI-powered corrections
  7. Try inline completions - start typing to see ghost text suggestions
  8. Press Tab to accept completions (GitHub Copilot-style)
  9. Monitor activity in the LSP Logs tab for real-time communication
  10. Switch files using the tab interface to test multi-file editing

🏗️ Technical Stack

Client

  • React 18 - UI framework with functional components
  • Monaco Editor - VSCode's code editor
  • @codingame/monaco-vscode-api - VSCode service integration
  • TypeScript - Full type safety
  • Vite - Fast build tool with HMR
  • Tailwind CSS - Utility-first styling
  • Architecture:
    • Functional programming approach
    • Event-driven logging system
    • Observer pattern for state updates
    • Centralized configuration

Bridge Server

  • Node.js - Runtime
  • ws - WebSocket library
  • TypeScript Language Server - LSP implementation
  • vscode-languageserver-protocol - Protocol types
  • vscode-ws-jsonrpc - JSON-RPC over WebSocket

AI Server

  • Express.js - HTTP framework
  • @anthropic-ai/sdk - Official Anthropic SDK
  • Zod - Runtime type validation
  • TypeScript - Type safety
  • In-memory cache - Performance optimization
  • express-rate-limit - Rate limiting

🔒 Security Considerations

  • API keys stored in environment variables
  • CORS configured for local development
  • Rate limiting prevents abuse
  • Request size limited to 1MB
  • Input validation with Zod schemas
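
A sketch of how those measures fit together in an Express app, using libraries named in the Technical Stack; the cors middleware package, the route body, and the schema shown here are assumptions for illustration.

import express from 'express';
import cors from 'cors';
import rateLimit from 'express-rate-limit';
import { z } from 'zod';

const app = express();

app.use(cors({ origin: process.env.CORS_ORIGIN ?? 'http://localhost:5173' }));
app.use(express.json({ limit: '1mb' }));   // request size limit
app.use(rateLimit({
  windowMs: 60 * 1000,
  max: Number(process.env.MAX_REQUESTS_PER_MINUTE ?? 20),   // per-IP rate limit
}));

const AnalyzeRequestSchema = z.object({
  code: z.string(),
  diagnostics: z.array(z.unknown()),
  language: z.string(),
});

app.post('/api/analyze-errors', (req, res) => {
  const parsed = AnalyzeRequestSchema.safeParse(req.body);
  if (!parsed.success) {
    res.status(400).json({ error: parsed.error.flatten() });
    return;
  }
  // ...hand parsed.data to the AI service...
  res.json({ suggestions: [] });
});

app.listen(Number(process.env.PORT ?? 3002));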

📈 Performance Optimizations

  1. Caching: 5-minute cache for AI suggestions, 30s for completions
  2. Debouncing: Editor changes debounced before analysis (300ms for completions)
  3. Selective Analysis: Only analyzes code with diagnostics
  4. Context Limiting: Sends only relevant code context
  5. Connection Pooling: Reuses WebSocket connections
  6. Completion Optimization: Lower temperature, fewer tokens for speed
  7. Smart Triggers: Only shows completions after relevant patterns
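
A minimal in-memory TTL cache of the kind point 1 describes; a sketch, not the repo's implementation.

class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key);   // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// e.g. a 5-minute cache for fix suggestions and a 30-second cache for completions:
// const suggestionCache = new TtlCache<unknown[]>(5 * 60 * 1000);
// const completionCache = new TtlCache<string>(30 * 1000);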

🐛 Troubleshooting

Common Issues

  1. "Cannot connect to LSP server"

    • Ensure Bridge server is running on port 3001
    • Check WebSocket URL in client config
  2. "AI analysis failed"

    • Verify API key is set correctly
    • Check AI server logs for errors
    • Ensure model name is correct
  3. "No fix suggestions appearing"

    • Check browser console for errors
    • Verify AI server is running on port 3002
    • Look at Activity Log for error messages

🚢 Deployment

Production Build

# Build all components
cd client && npm run build
cd ../bridge-server && npm run build
cd ../ai-server && npm run build

Environment Setup

  1. Set production environment variables
  2. Configure CORS for production domain
  3. Set up reverse proxy for WebSocket
  4. Enable HTTPS for security

📚 Documentation

Key Client Modules

  • AI Agent (services/aiAgent.ts) - Functional AI integration for error fixes
  • AI Completions (services/aiCompletions.ts) - Inline completion provider
  • Logger (utils/logger.ts) - Event-driven logging system
  • LSP Setup (lsp/directLSPSetup.ts) - Manual LSP implementation
  • LSP Monitor (services/lspMonitor.ts) - Connection health tracking
  • Types (types/index.ts) - Shared TypeScript interfaces
  • Constants (constants/index.ts) - Centralized configuration
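
The event-driven logger boils down to a publish/subscribe pair. A sketch of that shape follows, with illustrative names rather than the exact API of utils/logger.ts.

type LogCategory = 'LSP' | 'Editor' | 'System' | 'AI';

interface LogEntry {
  category: LogCategory;
  message: string;
  timestamp: number;
}

type LogListener = (entry: LogEntry) => void;

const listeners = new Set<LogListener>();

// Panels (e.g. the LogPanel component) subscribe for live updates.
export function subscribe(listener: LogListener): () => void {
  listeners.add(listener);
  return () => listeners.delete(listener);   // unsubscribe handle
}

// Services publish categorized entries as events occur.
export function log(category: LogCategory, message: string): void {
  const entry: LogEntry = { category, message, timestamp: Date.now() };
  listeners.forEach((listener) => listener(entry));
}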

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

MIT License - see LICENSE file for details
