
[EPIC][AI]: AI-assisted operations (natural language interface) #2291

@crivetimihai

Description

🤖 Epic: AI-Assisted Operations (Natural Language Interface)

Goal

Add a chat/natural language interface to the Admin UI where users can ask questions, execute commands, and troubleshoot issues using plain English. An LLM interprets requests and calls the ContextForge API via tool-calling.

Why Now?

  1. Differentiator: Unique feature leveraging MCP/LLM strengths
  2. Onboarding: New users can be productive without learning the UI
  3. Efficiency: Power users get quick answers without clicking through dashboards
  4. Troubleshooting: AI correlates data across multiple sources
  5. Natural Fit: ContextForge already manages MCP tools, making it a natural home for AI assistance

📖 User Stories

US-1: Natural Language Queries

As a user
I want to ask questions in plain English
So that I can get information without navigating the UI

Acceptance Criteria:

  • "Show me all inactive servers" → Lists inactive servers
  • "Which tools have errors today?" → Shows tools with recent errors
  • "What's the average latency for the database tool?" → Returns metrics
  • "How many requests did we handle this week?" → Aggregates metrics
  • "What changed in the last 24 hours?" → Shows audit log summary

US-2: Natural Language Actions

As a user
I want to perform actions via natural language
So that I can manage the system quickly

Acceptance Criteria:

  • "Create a tool called weather-api..." → Creates tool with specified config
  • "Disable all staging servers" → Disables matching servers
  • "Add rate limiting to the database tool" → Updates tool config
  • "Delete the test-resource resource" → Deletes with confirmation
  • Confirmation required for destructive actions

US-3: Troubleshooting Assistant

As a user debugging an issue
I want AI help diagnosing problems
So that I can resolve issues faster

Acceptance Criteria:

  • "Why is the production server slow?" → Checks metrics, suggests causes
  • "Help me debug this error: [paste error]" → Analyzes and suggests fixes
  • "What's causing the spike in errors?" → Correlates events and metrics
  • "Is there anything unusual right now?" → Proactive anomaly summary

US-4: Chat Interface

As a user
I want a chat panel in the Admin UI
So that I can interact with the AI assistant

Acceptance Criteria:

  • Chat panel (sidebar or modal)
  • Conversation history preserved
  • Code/JSON formatted nicely
  • Links to relevant UI pages
  • Copy response button
  • Clear conversation option

US-5: Context-Aware Assistance

As a user viewing an entity
I want context-aware AI help
So that suggestions are relevant to what I'm looking at

Acceptance Criteria:

  • "What's wrong with this server?" when viewing a server
  • "Suggest improvements for this tool" when editing a tool
  • "Explain this configuration" for any entity
  • AI knows current page context

📋 Implementation Tasks

Phase 1: Infrastructure

  • Design AI assistant API endpoint
  • Define available tools/functions for LLM
  • Integrate LLM provider (configurable)
  • Implement tool-calling execution
  • Add conversation memory
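
The tool-calling execution loop from Phase 1 could look roughly like this sketch. The `assistant_tool` registry, the stubbed server data, and the dispatch function are illustrative assumptions, not the eventual provider integration:

```python
import json

# Registry mapping tool names to callables; Phase 2/3 tools register here.
TOOLS = {}

def assistant_tool(fn):
    """Register a function so the LLM can invoke it by name (illustrative)."""
    TOOLS[fn.__name__] = fn
    return fn

@assistant_tool
def list_servers(status=None, limit=10):
    # Stubbed data; the real tool would call server_service.list(...)
    servers = [{"name": "prod-db-server", "status": "inactive"},
               {"name": "cache-server", "status": "active"}]
    return [s for s in servers if status is None or s["status"] == status][:limit]

def execute_tool_call(name, arguments_json):
    """Dispatch one LLM tool call and return a JSON string for the model."""
    fn = TOOLS[name]
    return json.dumps(fn(**json.loads(arguments_json)))

# "Show me all inactive servers" -> the LLM emits this tool call:
result = execute_tool_call("list_servers", '{"status": "inactive"}')
```

The LLM provider sees only the tool names and argument schemas; the loop above is where its tool calls are executed against ContextForge and the JSON results are fed back into the conversation.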

Phase 2: Query Tools

  • Tool: list_servers(filters)
  • Tool: list_tools(filters)
  • Tool: get_metrics(entity, timerange)
  • Tool: get_audit_log(filters)
  • Tool: search_entities(query)
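
Each query tool would also need a JSON-schema description so the LLM knows when and how to call it. A possible shape for `get_metrics` (field names and the enum values are assumptions for illustration):

```python
# Hypothetical JSON-schema function description for LLM tool calling.
GET_METRICS_SCHEMA = {
    "name": "get_metrics",
    "description": "Fetch latency and request metrics for an entity over a time range.",
    "parameters": {
        "type": "object",
        "properties": {
            "entity": {"type": "string", "description": "Server or tool name"},
            "timerange": {"type": "string", "enum": ["1h", "24h", "7d"],
                          "description": "Lookback window"},
        },
        "required": ["entity"],
    },
}
```

Good descriptions matter here: the model routes "What's the average latency for the database tool?" to `get_metrics` purely from this metadata.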

Phase 3: Action Tools

  • Tool: create_tool(config)
  • Tool: update_tool(id, changes)
  • Tool: toggle_server(id, enabled)
  • Tool: delete_entity(type, id)
  • Confirmation flow for destructive actions
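
One way to implement the confirmation flow is a two-step protocol: destructive tools return a pending-action token instead of executing, and a second call confirms. A minimal sketch (function names and token scheme are assumptions):

```python
import secrets

DESTRUCTIVE = {"delete_entity", "toggle_server"}
_pending = {}  # token -> (tool name, args); real impl would expire entries

def request_action(name, args):
    """Gate destructive tools behind an explicit confirmation step."""
    if name in DESTRUCTIVE:
        token = secrets.token_hex(8)
        _pending[token] = (name, args)
        return {"status": "confirmation_required", "token": token,
                "summary": f"About to run {name} with {args}"}
    return {"status": "executed"}

def confirm_action(token):
    """Execute a previously requested destructive action."""
    name, args = _pending.pop(token)
    # ... execute the real ContextForge API call here ...
    return {"status": "executed", "action": name}
```

The assistant surfaces the `summary` to the user ("Delete the test-resource resource" → "About to run delete_entity ..."), and only calls `confirm_action` after an explicit "yes".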

Phase 4: Chat UI

  • Create chat panel component
  • Implement message rendering (markdown, code)
  • Add conversation history
  • Implement streaming responses
  • Add keyboard shortcuts (Cmd+J to open)
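
For streaming responses, Server-Sent Events is a common fit: the backend wraps each LLM token in an SSE frame that the chat panel appends as it arrives. A minimal framing sketch (the terminator token is an assumption; in FastAPI this generator would be wrapped in a `StreamingResponse` with `media_type="text/event-stream"`):

```python
def sse_frames(tokens):
    """Wrap a stream of LLM tokens in Server-Sent Events frames."""
    for t in tokens:
        yield f"data: {t}\n\n"   # one SSE event per token chunk
    yield "data: [DONE]\n\n"     # sentinel so the client can stop listening
```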

Phase 5: Context & Polish

  • Pass current page context to AI
  • Add suggested prompts
  • Implement feedback (thumbs up/down)
  • Add usage analytics
  • Document capabilities
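
Passing page context could be as simple as folding the currently viewed entity into the system prompt, so "What's wrong with this server?" resolves without the user naming it. A sketch (prompt wording and context shape are assumptions):

```python
def build_system_prompt(page_context):
    """Fold the current Admin UI page into the assistant's system prompt."""
    base = "You are the ContextForge admin assistant."
    if page_context:
        base += (f" The user is currently viewing {page_context['type']} "
                 f"'{page_context['id']}'. Prefer answers about this entity.")
    return base
```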

⚙️ Architecture

┌─────────────────────────────────────────────┐
│                  Chat UI                     │
│  [User]: "Show servers with high latency"   │
└─────────────────┬───────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────┐
│           AI Assistant API                   │
│  POST /api/v1/assistant/chat                │
└─────────────────┬───────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────┐
│              LLM Provider                    │
│  (OpenAI, Anthropic, Local, Configurable)  │
│                                             │
│  Tools Available:                           │
│  - list_servers(filters)                    │
│  - get_metrics(entity, period)              │
│  - create_tool(config)                      │
│  - ...                                      │
└─────────────────┬───────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────┐
│         ContextForge API                     │
│  Execute tool calls against real API        │
└─────────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────┐
│              Response                        │
│  "Found 3 servers with latency >500ms:      │
│   - prod-db-server (avg: 850ms)             │
│   - analytics-server (avg: 620ms)           │
│   - cache-server (avg: 510ms)"              │
└─────────────────────────────────────────────┘
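
To make the flow above concrete, a possible request/response shape for `POST /api/v1/assistant/chat` might look like this. All field names here are illustrative assumptions, not a finalized contract:

```python
# Hypothetical payload shapes for the assistant chat endpoint.
request_body = {
    "message": "Show servers with high latency",
    "conversation_id": "abc123",            # optional: ties into conversation memory
    "context": {"type": "server", "id": "prod-db-server"},  # optional page context
}
response_body = {
    "reply": "Found 3 servers with latency >500ms: ...",
    "tool_calls": [                          # surfaced for transparency/audit
        {"name": "get_metrics", "arguments": {"entity": "servers", "timerange": "24h"}}
    ],
}
```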

Tool Definition Example

from typing import Optional

@assistant_tool  # registers the function so the LLM can call it by name
def list_servers(
    status: Optional[str] = None,
    tag: Optional[str] = None,
    limit: int = 10,
) -> list[Server]:
    """
    List MCP servers with optional filtering.

    Args:
        status: Filter by status ('active' or 'inactive')
        tag: Filter by tag
        limit: Maximum number of results to return

    Returns:
        List of servers matching the criteria
    """
    return server_service.list(status=status, tag=tag, limit=limit)

✅ Success Criteria

  • Natural language queries return accurate results
  • Actions execute correctly with confirmation
  • Chat UI is intuitive and responsive
  • Conversation context maintained
  • Troubleshooting suggestions are helpful
  • Multiple LLM providers supported

Metadata

Assignees: none

Labels: COULD (P3: nice-to-have features with minimal impact if left out; included if time permits), agents (Agent Samples), enhancement (new feature or request), epic (large feature spanning multiple issues), frontend (frontend development: HTML, CSS, JavaScript), python (Python / backend development: FastAPI), ui (User Interface)

Projects: none
Milestone: none
Relationships: none