Evaluate and improve @fundamental-ngx/mcp server + create complementary Skills #14158

Summary

The @fundamental-ngx/mcp server provides AI coding assistants with structured access to 1000+ components across 8 libraries. It exposes ten tools covering component APIs, examples, design tokens, accessibility, migration, and comparison. It has not yet been evaluated through real-world usage in an application.

This issue covers using the MCP server to build real Angular UI, identifying gaps, creating complementary Skills for the procedural knowledge the MCP server cannot provide, and proposing improvements.

Background

What the MCP server does today:

  • Tools: list_components, search_components, get_component_api, get_component_examples, recommend_components, get_migration_guide, get_design_tokens, get_accessibility_guide, compare_components, get_usage_guide
  • Metadata from Custom Elements Manifests (CEM) + TypeDoc, covering ~1000 components
  • 131 tests across 5 test suites

What the MCP server does NOT cover:
MCP tools answer "what exists?" and "what does this component's API look like?" They do not encode "how do I compose a master-detail page" or "how do I migrate from @Input() decorators to signal inputs across a library." That procedural knowledge belongs in Skills.

The repo already has excellent agent guides in docs/agents/ and .claude/rules/ — the question is whether some of that knowledge should also be packaged as installable Skills for consumers outside the repo.

Part 1: Real-World Usage Test

Set up the MCP server in a fresh Angular project and attempt to build real UI.

Setup

  • Scaffold a new Angular 21+ app
  • Configure the MCP server in Claude Code or VS Code (a sample config is sketched after this list)
  • Verify all tools respond
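
For Claude Code, a minimal project-scoped .mcp.json sketch is below. The launch command is an assumption (the package may expose a different binary or require extra flags), so verify it against the @fundamental-ngx/mcp README; VS Code has an equivalent MCP configuration that would point at the same server.

```json
{
  "mcpServers": {
    "fundamental-ngx": {
      "command": "npx",
      "args": ["-y", "@fundamental-ngx/mcp"]
    }
  }
}
```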

Build Scenarios

  • Simple form — "Build a login form with email, password, and submit using @fundamental-ngx/core" (a sketch of the expected output follows below)
  • Data table — "Create a platform table with sorting, filtering, and row selection"
  • Dialog with form — "Build a dialog that contains a reactive form with validation states"
  • Page layout — "Create a page with dynamic page header, side navigation, and content area"
  • Complex composition — "Build a master-detail layout with a platform list on the left and detail view on the right"
  • UI5 Web Components — "Build a form using UI5 Web Component wrappers (ui5-input, ui5-button)"

For each: did the AI use MCP tools? Was the generated code correct (imports, standalone components, signal patterns)? What was missing?
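
For the first scenario, the sketch below shows the rough shape of output that would count as correct: a standalone component, a typed reactive form, and secondary entry-point imports. The @fundamental-ngx/core entry points, selectors, and inputs used here are assumptions; the real ones should come from get_component_api and get_component_examples.

```ts
// Rough shape of an acceptable result for the "simple form" scenario.
// The @fundamental-ngx/core entry points and selectors below are assumptions;
// the real names should be confirmed via get_component_api / get_component_examples.
import { Component, inject } from '@angular/core';
import { NonNullableFormBuilder, ReactiveFormsModule, Validators } from '@angular/forms';
import { ButtonModule } from '@fundamental-ngx/core/button'; // assumed entry point
import { FormModule } from '@fundamental-ngx/core/form';     // assumed entry point

@Component({
  selector: 'app-login-form',
  standalone: true,
  imports: [ReactiveFormsModule, ButtonModule, FormModule],
  template: `
    <form [formGroup]="form" (ngSubmit)="submit()">
      <!-- fd-* form item/label/control markup goes here; exact selectors per get_component_examples -->
      <input type="email" formControlName="email" />
      <input type="password" formControlName="password" />
      <button fd-button fdType="emphasized" type="submit">Log in</button>
    </form>
  `,
})
export class LoginFormComponent {
  private readonly fb = inject(NonNullableFormBuilder);

  // Typed, non-nullable reactive form with basic validation.
  form = this.fb.group({
    email: ['', [Validators.required, Validators.email]],
    password: ['', Validators.required],
  });

  submit(): void {
    if (this.form.valid) {
      // submit credentials
    }
  }
}
```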

Part 2: Tool-by-Tool Evaluation

Rate each tool as: useful as-is, needs improvement, or not useful / redesign. A sample raw probe request is sketched after the list.

  • list_components — Does filtering by library work? Is the summary enough to pick the right component?
  • search_components — Test ambiguous queries: "dropdown" (should find select, combobox, menu), "loading" (busy indicator, skeleton)
  • get_component_api — Are inputs/outputs/methods complete? Are signal-based inputs documented?
  • get_component_examples — Are examples up to date with Angular 21+ patterns (standalone, signal inputs)?
  • recommend_components — Test: "a settings page", "an admin dashboard", "a data entry form". Does it recommend core vs platform correctly?
  • get_migration_guide — Is it useful for Angular version upgrades? For signals migration?
  • get_design_tokens — Does it return useful tokens? Can AI map them to component styling?
  • get_accessibility_guide — Are ARIA patterns actionable? Does AI apply them correctly?
  • compare_components — Is comparing core vs platform variants useful? Does AI use this proactively?
  • get_usage_guide — Are the usage guides for complex components (Dialog, Table) helpful in practice?
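
Besides driving the tools through an assistant, each one can be probed directly with a raw MCP tools/call request over stdio (or via the MCP Inspector). A minimal sketch for the search_components ambiguity test; the argument name "query" is an assumption and must match the tool's declared input schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_components",
    "arguments": { "query": "dropdown" }
  }
}
```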

Part 3: Identify and Create Skills

The repo already has docs/agents/ (9 guides, 2,600+ lines) and .claude/rules/ (6 files). Evaluate what should become installable Skills for consumers.

| Candidate Skill | What It Would Do | Why Not Just MCP or docs/agents? |
| --- | --- | --- |
| build-form | Compose a reactive form with @fundamental-ngx/platform form components, validation, and error states | MCP gives individual APIs; a Skill knows the composition order and FormGroup wiring |
| build-page-layout | Compose dynamic page + shell bar + side nav + content area | recommend_components lists the parts but doesn't explain nesting and module setup |
| build-table | Platform table with DataSource, sorting, filtering, pagination, and selection | Tables are the most complex component; procedural guidance prevents common mistakes |
| migrate-to-signals | Step-by-step migration: @Input() → input(), @Output() → output(), @HostBinding → host, *ngIf → @if | docs/agents/angular-patterns.md covers this but isn't installable by consumers |
| setup-project | ng new → ng add → theme setup → first component → verify it works | Common first-time stumbling block for new consumers |

Deliverable

  • Decide which Skills to create
  • Implement each as a .md file (~40 tokens frontmatter, ~2,000 tokens body); a sketch follows this list
  • Test each by activating it and verifying the AI follows the procedure
  • Document where Skills live and how to register them
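
A minimal sketch of one Skill file, assuming the SKILL.md convention of YAML frontmatter (name, description) followed by the procedure; the steps would be distilled from docs/agents/angular-patterns.md rather than written from scratch:

```markdown
---
name: migrate-to-signals
description: Migrate a fundamental-ngx consumer component from decorator-based APIs to Angular signal APIs. Use when converting @Input()/@Output(), host bindings, or structural directives.
---

# Migrate to signals

1. Replace `@Input() label = '';` with `label = input('');` (use `input.required<T>()` for required inputs).
2. Replace `@Output() selected = new EventEmitter<Item>();` with `selected = output<Item>();`.
3. Replace `@HostBinding` / `@HostListener` with the `host` metadata object on the component decorator.
4. Replace `*ngIf` / `*ngFor` with `@if` / `@for (item of items; track item.id)`.
5. Build and run unit tests after each component; do not batch the whole migration.
```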

Part 4: Propose MCP Server Improvements

  • Are responses too verbose or too terse? Should complex APIs use progressive disclosure?
  • Missing: import helper — given components, return exact import statements (standalone imports, module imports); a hypothetical response is sketched after this list
  • Missing: core vs platform advisor — when both exist (e.g., table, select), guide the user to the right one
  • Missing: composition examples — multi-component templates, not just single-component APIs
  • Are the 131 tests covering the right scenarios? Any gaps?
  • Data quality: are all ~1000 components represented? Any stale metadata?
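
For the import-helper idea, a hypothetical response sketch is below; the tool does not exist yet, and its name, response shape, and the entry point / symbol names shown are all assumptions:

```json
{
  "imports": [
    {
      "component": "button",
      "standalone": "import { ButtonComponent } from '@fundamental-ngx/core/button';",
      "module": "import { ButtonModule } from '@fundamental-ngx/core/button';"
    }
  ]
}
```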

Part 5: Document and Advertise Skills

Once Skills are created:

In the repo

  • Add a "Skills" section to the fundamental-ngx README
  • Add a "Complementary Skills" section to the MCP server README
  • Reference Skills in CLAUDE.md

External discovery

  • Publish to skills.sh
  • Mention Skills in the @fundamental-ngx/mcp npm package description

Part 6: Validate docs and guides with AI agents

The docs/agents/ guides (9 files, 2,600+ lines) and .claude/rules/ (6 files) are designed for AI consumption but have not been validated by actually running agents against them.

  • Test each docs/agents/*.md guide — activate an agent, point it at the guide, and ask it to follow the instructions. Where it fails or produces wrong output, the guide needs fixing.
  • Test each .claude/rules/*.md rule — do agents actually follow the rules? Do the rules prevent the mistakes they claim to prevent?
  • For any Skills created in Part 3 — validate them the same way before considering them done.
  • Document weak points: which docs led to wrong code, missing imports, outdated patterns, or confusion.

Every agent failure is a documentation bug.

Acceptance Criteria

  • All 6 build scenarios attempted and documented
  • All tools evaluated with rating and notes
  • Skills implemented, tested, and working
  • MCP improvement proposals filed
  • Skills documented in README, MCP README, and CLAUDE.md
  • Skills published to skills.sh
  • Existing agent guides (docs/agents/, .claude/rules/) validated through agent testing and weak points documented
  • Summary comparing MCP-only vs MCP+Skills effectiveness for at least 2 scenarios

Resources

  • MCP server source: libs/mcp-server/
  • Agent guides: docs/agents/ (9 files)
  • Claude rules: .claude/rules/ (6 files)
  • npm package: @fundamental-ngx/mcp
  • Documentation site: https://sap.github.io/fundamental-ngx
