
Code Generators — Complete Implementation Guide

This document covers the code generation tools that automate creation of dataclasses from documentation strings and OpenAPI specs. It uses NovaCom Networks as the example throughout: a cloud-managed network infrastructure provider with the NovaCom Dashboard API (collection namespace: novacom.dashboard).

Audience: Framework developers setting up code generation infrastructure

Related Documents:


Table of Contents

  1. Code Generation Strategy
  2. User Model Dataclass Generator
  3. API Dataclass Generator (Device Models)
  3B. Field Description Sync Generator
  3C. MCP Server Documentation Generator
  3D. CLI Documentation Generator
  3E. Markdown to HTML Converter
  4. Usage Examples
  5. Verification Checklist
  6. CI/CD Integration
  7. Integration with Feature Workflow

SECTION 1: Code Generation Strategy

What Gets Generated vs What Requires Manual Work

| Component | Generated From | Output Location | Frequency |
|-----------|----------------|-----------------|-----------|
| User Model Dataclass | DOCUMENTATION string | plugins/plugin_utils/user_models/ | Once per module |
| Device Model Dataclass | OpenAPI spec | plugins/plugin_utils/api/v{X}/generated/ | Once per API version |
| Transform Mixin skeleton | Manual template | plugins/plugin_utils/api/v{X}/ | Once per module+version |
| Field descriptions | DOCUMENTATION string | plugins/plugin_utils/user_models/ (in-place) | After DOCUMENTATION changes |
| MCP server docs | User Model introspection | docs/12-mcp-server.md | After User Model changes |
| CLI docs | User Model introspection | docs/13-cli.md | After User Model changes |
| HTML pages | Markdown docs | docs/_site/*.html | CI build (GitHub Pages) |

What Requires Manual Work

  • Field mapping in transform mixins
  • Custom transformation functions (names ↔ IDs)
  • Endpoint operations configuration
  • Business logic validation
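The field-mapping work can be sketched as follows. BaseTransformMixin's real API is project-specific, so AdminTransformMixin, FIELD_MAP, and to_api here are hypothetical names illustrating only the shape of the manual work:

```python
# Hypothetical sketch of the manual transform-mixin work: mapping
# user-facing snake_case fields to the API's camelCase names. The real
# mixins inherit from BaseTransformMixin and also register endpoint
# operations; only the field-mapping shape is shown here.
class AdminTransformMixin:
    FIELD_MAP = {'org_access': 'orgAccess'}  # user field -> API field

    def to_api(self, user_data: dict) -> dict:
        """Rename user-model keys to their API equivalents."""
        return {self.FIELD_MAP.get(k, k): v for k, v in user_data.items()}

mixin = AdminTransformMixin()
payload = mixin.to_api({'name': 'Alice', 'org_access': 'full'})
print(payload)  # → {'name': 'Alice', 'orgAccess': 'full'}
```

Custom transforms (names ↔ IDs) slot into the same mixin as extra methods; the generators never touch these files.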

Generation Workflow Diagram (Text-Based)

Write docstring (DOCUMENTATION)
           │
           ▼
Generate User Model (auto)
           │
           ▼
Generate API models from OpenAPI (auto)
           │
           ▼
Create Transform Mixin (manual)
           │
           ▼
Test & refine

SECTION 2: User Model Dataclass Generator

Tool: generate_user_dataclasses.py

Location: tools/generators/generate_user_dataclasses.py

Purpose: Parse DOCUMENTATION strings and generate typed Python dataclasses that represent the user-facing data model.

Complete Implementation

"""Generate user-facing dataclasses from DOCUMENTATION strings.

This script parses module documentation (DOCUMENTATION blocks) and generates
strongly-typed Python dataclasses that represent the user-facing data model.
These dataclasses are the stable interface that crosses the RPC boundary.

Used by: novacom.dashboard collection (NovaCom Dashboard API)
"""

import yaml
import argparse
import re
from pathlib import Path
from typing import Dict, Any, List, Optional
from dataclasses import dataclass


@dataclass
class FieldSpec:
    """Specification for a single field in the generated dataclass.

    Attributes:
        name: Field name (snake_case).
        python_type: Python type annotation string (e.g., 'str', 'List[str]').
        required: Whether the field is required (no default).
        description: Docstring description for the field.
        default: Default value as Python code string, or None.
    """
    name: str
    python_type: str
    required: bool
    description: str
    default: Optional[str] = None


class UserDataclassGenerator:
    """
    Generator for user-facing dataclasses from DOCUMENTATION strings.

    Parses YAML-structured DOCUMENTATION blocks (Ansible-style) and produces
    Python dataclasses with proper typing, docstrings, and BaseTransformMixin
    inheritance.

    Attributes:
        nested_classes: Accumulated nested class code during generation.
    """

    # Type mapping: documentation type -> Python type
    TYPE_MAPPING = {
        'str': 'str',
        'int': 'int',
        'float': 'float',
        'bool': 'bool',
        'list': 'List',
        'dict': 'Dict',
        'path': 'str',
        'raw': 'Any',
        'jsonarg': 'Dict',
    }

    def __init__(self) -> None:
        """Initialize generator with empty nested classes list."""
        self.nested_classes: List[str] = []

    def parse_documentation(self, doc_string: str) -> Dict[str, Any]:
        """
        Parse DOCUMENTATION YAML string into a dictionary.

        Args:
            doc_string: Raw DOCUMENTATION string from module (YAML format).

        Returns:
            Parsed documentation dict with keys: module, short_description,
            options, etc.

        Raises:
            yaml.YAMLError: If YAML parsing fails.
        """
        return yaml.safe_load(doc_string)

    def generate_from_file(self, doc_file: Path, output_file: Path) -> None:
        """
        Generate dataclass from a file containing DOCUMENTATION.

        Reads the file, extracts the DOCUMENTATION block via regex, parses it,
        and writes the generated Python code to output_file.

        Args:
            doc_file: Path to file containing DOCUMENTATION block.
            output_file: Path to output Python file.

        Raises:
            ValueError: If no DOCUMENTATION block found in file.
        """
        content = doc_file.read_text()

        # Extract DOCUMENTATION string; the backreference \1 ensures the
        # closing quotes match the opening ones (triple- or single-quoted)
        doc_match = re.search(
            r'DOCUMENTATION\s*=\s*r?("""|\'\'\'|"|\')(.*?)\1',
            content,
            re.DOTALL
        )

        if not doc_match:
            raise ValueError(f"No DOCUMENTATION found in {doc_file}")

        doc_string = doc_match.group(2)

        doc_data = self.parse_documentation(doc_string)
        # Use the file stem (e.g. 'admin') so generated class names stay short
        # (UserAdmin), not the full module name (novacom_organization_admin)
        module_name = doc_file.stem

        generated_code = self.generate_dataclass(module_name, doc_data)

        output_file.parent.mkdir(parents=True, exist_ok=True)
        output_file.write_text(generated_code)
        print(f"Generated {output_file}")

    def generate_dataclass(
        self,
        module_name: str,
        doc_data: Dict[str, Any]
    ) -> str:
        """
        Generate dataclass code from parsed documentation.

        Args:
            module_name: Module name (e.g., 'admin', 'site').
            doc_data: Parsed documentation dict.

        Returns:
            Complete Python source code as string.
        """
        self.nested_classes = []

        options = doc_data.get('options', {})
        fields = self._build_fields(options, prefix='')

        class_name = f'User{module_name.title().replace("_", "")}'

        code_parts = []

        # Header comment
        code_parts.append('"""Generated User model dataclass.')
        code_parts.append('')
        code_parts.append(f'Auto-generated from {module_name} module DOCUMENTATION.')
        code_parts.append('DO NOT EDIT MANUALLY - regenerate using tools/generators/')
        code_parts.append('"""')
        code_parts.append('')
        code_parts.append('from dataclasses import dataclass')
        code_parts.append('from typing import Optional, List, Dict, Any')
        code_parts.append('')
        code_parts.append('from ..platform.base_transform import BaseTransformMixin')
        code_parts.append('')
        code_parts.append('')

        # Nested classes first (child before parent)
        for nested_code in self.nested_classes:
            code_parts.append(nested_code)
            code_parts.append('')

        # Main class
        code_parts.append('@dataclass')
        code_parts.append(f'class {class_name}(BaseTransformMixin):')
        code_parts.append('    """')
        description = doc_data.get('short_description', f'{module_name.replace("_", " ").title()} resource')
        code_parts.append(f'    {description}')
        code_parts.append('    ')
        code_parts.append('    This dataclass represents the user-facing data model.')
        code_parts.append('    It is the stable interface that crosses the RPC boundary.')
        code_parts.append('    ')
        code_parts.append('    Attributes:')
        for field in fields:
            code_parts.append(f'        {field.name}: {field.description}')
        code_parts.append('    """')
        code_parts.append('    ')

        # Fields: required first, then optional with defaults
        for field in fields:
            field_line = f'    {field.name}: '
            if not field.required:
                field_line += 'Optional['
            field_line += field.python_type
            if not field.required:
                field_line += ']'
            if field.default is not None:
                field_line += f' = {field.default}'
            elif not field.required:
                field_line += ' = None'
            code_parts.append(field_line)

        return '\n'.join(code_parts)

    def _build_fields(
        self,
        options: Dict[str, Any],
        prefix: str = ''
    ) -> List[FieldSpec]:
        """
        Build field specifications from options dict.

        Handles nested suboptions by generating nested dataclasses and
        registering them in self.nested_classes.

        Args:
            options: Options dict from DOCUMENTATION (key = field name).
            prefix: Prefix for nested class names.

        Returns:
            List of FieldSpec objects, required fields first.
        """
        fields = []
        for field_name, field_spec in options.items():
            field_type = field_spec.get('type', 'str')
            required = field_spec.get('required', False)
            description = field_spec.get('description', '')
            default = field_spec.get('default')

            if isinstance(description, list):
                description = ' '.join(description)

            python_type = self._map_type(field_type, field_spec)

            if 'suboptions' in field_spec:
                nested_class_name = f'{prefix}{field_name.replace("_", " ").title().replace(" ", "")}'
                nested_fields = self._build_fields(
                    field_spec['suboptions'],
                    prefix=nested_class_name
                )
                nested_code = self._generate_nested_class(
                    nested_class_name,
                    nested_fields
                )
                self.nested_classes.append(nested_code)

                if field_type == 'list':
                    elements = field_spec.get('elements', 'dict')
                    if elements == 'dict':
                        python_type = f'List[{nested_class_name}]'
                    else:
                        elem_type = self.TYPE_MAPPING.get(elements, 'Any')
                        python_type = f'List[{elem_type}]'
                else:
                    python_type = nested_class_name

            formatted_default = self._format_default(default)
            field = FieldSpec(
                name=field_name,
                python_type=python_type,
                required=required,
                description=description,
                default=formatted_default
            )
            fields.append(field)

        # Sort: required fields first, then optional; alphabetical within each group
        fields.sort(key=lambda f: (not f.required, f.name))
        return fields

    def _map_type(self, ansible_type: str, field_spec: Dict[str, Any]) -> str:
        """
        Map documentation type to Python type string.

        Args:
            ansible_type: Type from DOCUMENTATION (str, list, dict, etc.).
            field_spec: Full field specification for elements/suboptions.

        Returns:
            Python type annotation string.
        """
        base_type = self.TYPE_MAPPING.get(ansible_type, 'Any')

        if ansible_type == 'list':
            elements = field_spec.get('elements', 'str')
            element_type = self.TYPE_MAPPING.get(elements, 'Any')
            return f'List[{element_type}]'

        if ansible_type == 'dict':
            return 'Dict[str, Any]'

        return base_type

    def _generate_nested_class(
        self,
        class_name: str,
        fields: List[FieldSpec]
    ) -> str:
        """
        Generate nested dataclass code.

        Args:
            class_name: Name of nested class.
            fields: List of field specifications.

        Returns:
            Generated class code as string.
        """
        lines = []
        lines.append('@dataclass')
        lines.append(f'class {class_name}:')
        lines.append('    """Nested dataclass for structured option."""')
        lines.append('    ')

        for field in fields:
            field_line = f'    {field.name}: '
            if not field.required:
                field_line += 'Optional['
            field_line += field.python_type
            if not field.required:
                field_line += ']'
            if field.default is not None:
                field_line += f' = {field.default}'
            elif not field.required:
                field_line += ' = None'
            lines.append(field_line)

        return '\n'.join(lines)

    def _format_default(self, default: Any) -> Optional[str]:
        """
        Format default value for Python code.

        Args:
            default: Default value from documentation.

        Returns:
            Formatted string for Python source, or None.
        """
        if default is None:
            return None
        if isinstance(default, bool):
            return str(default)
        if isinstance(default, (int, float)):
            return str(default)
        if isinstance(default, str):
            # repr() quotes safely, even if the string itself contains quotes
            return repr(default)
        return repr(default)


def main() -> None:
    """Main entry point with argparse."""
    parser = argparse.ArgumentParser(
        description='Generate user dataclasses from DOCUMENTATION strings'
    )
    parser.add_argument(
        'doc_file',
        type=Path,
        help='Path to file containing DOCUMENTATION block'
    )
    parser.add_argument(
        '--output',
        type=Path,
        default=None,
        help='Output file path (default: infer from module name)'
    )

    args = parser.parse_args()

    if args.output is None:
        module_name = args.doc_file.stem
        output_dir = Path('plugins/plugin_utils/user_models')
        output_dir.mkdir(parents=True, exist_ok=True)
        args.output = output_dir / f'{module_name}.py'

    generator = UserDataclassGenerator()
    generator.generate_from_file(args.doc_file, args.output)


if __name__ == '__main__':
    main()

Usage

python tools/generators/generate_user_dataclasses.py \
    plugins/plugin_utils/docs/admin.py \
    --output plugins/plugin_utils/user_models/admin.py
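The extraction step inside generate_from_file can be exercised standalone; this sketch uses an inline module source rather than a real file:

```python
import re

# Minimal sketch of the extraction step: pull the DOCUMENTATION block out of
# module source text. The backreference \1 forces the closing quotes to match
# the opening ones (triple- or single-quoted).
source = '''
DOCUMENTATION = """
---
module: admin
short_description: Manage admins
"""
'''

match = re.search(r'DOCUMENTATION\s*=\s*r?("""|\'\'\'|"|\')(.*?)\1', source, re.DOTALL)
doc_string = match.group(2)
print('module: admin' in doc_string)  # → True
```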

SECTION 3: API Dataclass Generator (Device Models)

Tool: datamodel-code-generator (Third-Party)

Installation:

pip install datamodel-code-generator

Why This Tool

  • Industry standard for OpenAPI → Python
  • Handles complex schemas (nested, oneOf, allOf)
  • Generates dataclass models (not just Pydantic)
  • Well-maintained and widely used
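For a small schema, the tool's dataclass output has roughly this shape (illustrative sketch under the flags used below, not verbatim tool output):

```python
from dataclasses import dataclass
from typing import Optional

# Given a components/schemas entry like:
#   Admin:
#     type: object
#     required: [name]
#     properties:
#       name:  {type: string, description: Admin name}
#       email: {type: string}
# datamodel-code-generator (with --output-model-type dataclasses.dataclass)
# emits roughly this shape: required fields first, optional fields defaulted.
@dataclass
class Admin:
    name: str
    email: Optional[str] = None

a = Admin(name='Alice')
```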

Wrapper Script: generate_api_models.sh

Location: tools/generators/generate_api_models.sh

Complete Script

#!/bin/bash
# Generate API dataclasses from OpenAPI specs
#
# Uses datamodel-code-generator to produce Python dataclasses from
# NovaCom Dashboard API OpenAPI specifications.
#
# Prerequisites:
#   pip install datamodel-code-generator
#
# Usage:
#   Place OpenAPI specs in tools/openapi_specs/ (novacom-v1.json, novacom-v2.json)
#   cd novacom.dashboard && bash tools/generators/generate_api_models.sh

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SPECS_DIR="$SCRIPT_DIR/../openapi_specs"
OUTPUT_BASE="$PROJECT_ROOT/plugins/plugin_utils/api"

echo "Generating API dataclasses from OpenAPI specs..."
echo "Specs dir: $SPECS_DIR"
echo "Output base: $OUTPUT_BASE"
echo ""

if [ ! -d "$SPECS_DIR" ]; then
    echo "Error: Specs directory not found: $SPECS_DIR"
    echo "Create it and add novacom-v1.json, novacom-v2.json, etc."
    exit 1
fi

for spec_file in "$SPECS_DIR"/novacom-v*.json; do
    if [ ! -f "$spec_file" ]; then
        echo "No OpenAPI specs found matching novacom-v*.json in $SPECS_DIR"
        exit 1
    fi

    filename=$(basename "$spec_file")
    version=$(echo "$filename" | sed -E 's/novacom-v([0-9]+(_[0-9]+)?)\.json/\1/')

    echo "Processing $filename (version $version)..."

    output_dir="$OUTPUT_BASE/v${version}/generated"
    mkdir -p "$output_dir"

    datamodel-codegen \
        --input "$spec_file" \
        --input-file-type openapi \
        --output "$output_dir/models.py" \
        --output-model-type dataclasses.dataclass \
        --field-constraints \
        --use-standard-collections \
        --use-schema-description \
        --use-title-as-name \
        --target-python-version 3.9 \
        --collapse-root-models \
        --disable-timestamp

    # Prepend header comment
    temp_file=$(mktemp)
    cat > "$temp_file" << 'HEADER'
"""Generated API dataclasses from OpenAPI specification.

Auto-generated using datamodel-code-generator.
DO NOT EDIT MANUALLY - regenerate using tools/generators/generate_api_models.sh

These are pure API data models. To add transformation logic, create a
companion file (e.g., admin.py) with a TransformMixin that inherits from
BaseTransformMixin.
"""

HEADER
    cat "$output_dir/models.py" >> "$temp_file"
    mv "$temp_file" "$output_dir/models.py"

    # Create __init__.py
    cat > "$output_dir/__init__.py" << 'INIT'
"""Generated API models for NovaCom Dashboard API."""
from .models import *

INIT

    echo "  Generated: $output_dir/models.py"
done

echo ""
echo "API dataclass generation complete!"
echo ""
echo "Next steps:"
echo "  1. Review generated files in plugins/plugin_utils/api/"
echo "  2. Create transform mixins for each resource"
echo "  3. Import generated classes in your transform mixin files"

Usage

# Place OpenAPI specs in tools/openapi_specs/
# novacom-v1.json, novacom-v2.json

cd novacom.dashboard
bash tools/generators/generate_api_models.sh

SECTION 3B: Field Description Sync Generator

Tool: generate_model_descriptions.py

Location: tools/generate_model_descriptions.py

Purpose: Sync field descriptions from module DOCUMENTATION YAML strings into User Model dataclass field(metadata={"description": "..."}) annotations. This keeps the User Model self-describing — the MCP server reads these descriptions at runtime to populate tool input schemas.

How It Works

  1. Scans plugins/modules/novacom_*.py for DOCUMENTATION assignments
  2. Extracts per-field descriptions from options.config.suboptions
  3. Reads the corresponding action plugin to find the USER_MODEL path
  4. Rewrites the User Model file, transforming bare field defaults:
# Before
name: Optional[str] = None

# After
name: Optional[str] = field(default=None, metadata={"description": "VLAN name."})
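The rewrite in step 4 can be sketched as a single regex substitution; this is a hypothetical simplification of the real tool, which also parses types and preserves existing metadata:

```python
import re

# Sketch of the in-place rewrite: turn a bare default into a field() call
# carrying the description from DOCUMENTATION.
line = 'name: Optional[str] = None'
description = 'VLAN name.'

rewritten = re.sub(
    r'^(\w+): (.+?) = None$',
    lambda m: f'{m.group(1)}: {m.group(2)} = '
              f'field(default=None, metadata={{"description": "{description}"}})',
    line,
)
print(rewritten)
```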

Usage

python tools/generate_model_descriptions.py

The tool is idempotent: running it again updates existing descriptions if the DOCUMENTATION string has changed, and leaves already-current fields untouched.

Why This Exists

Module DOCUMENTATION strings define field descriptions for ansible-doc, but they are YAML text embedded in module files, not structured metadata that Python code can introspect at runtime. The MCP server and CLI need descriptions when generating JSON Schema tool definitions and argparse help text. This generator bridges the gap by copying descriptions into the dataclass field(metadata=...), where they are available via dataclasses.fields().
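Once synced, the descriptions are plain dataclass metadata; this sketch with a hypothetical UserVlan model shows how consumers read them back:

```python
from dataclasses import dataclass, field, fields
from typing import Optional

# Hypothetical User Model field carrying a synced description.
@dataclass
class UserVlan:
    name: Optional[str] = field(default=None, metadata={"description": "VLAN name."})

# How the MCP server / CLI recover descriptions at runtime.
descriptions = {f.name: f.metadata.get("description", "") for f in fields(UserVlan)}
print(descriptions)  # → {'name': 'VLAN name.'}
```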


SECTION 3C: MCP Server Documentation Generator

Tool: generate_mcp_docs.py

Location: tools/generate_mcp_docs.py

Purpose: Generate a complete MCP server reference document by introspecting User Model dataclasses. Produces a Markdown file with a tool summary table and detailed per-tool reference including metadata, input schemas, and config field tables.

How It Works

  1. Calls build_tool_definitions() from plugins.plugin_utils.mcp.introspect
  2. For each of the 48 tools, extracts scope, canonical key, system key, valid states, and JSON Schema
  3. Formats as a structured Markdown document with summary table and per-tool sections

Usage

python tools/generate_mcp_docs.py
# → docs/12-mcp-server.md (48 tools)

Output

docs/12-mcp-server.md — auto-generated, do not edit manually. Contains:

  • Overview and installation instructions
  • Tool summary table (name, scope, canonical key, category, states)
  • Per-tool reference sections with metadata and input schema tables
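Step 3's summary table can be sketched like this; the dict shape is an assumption for illustration, not the actual return format of build_tool_definitions():

```python
# Render the tool summary table from introspected definitions.
# The keys below (name, scope, canonical_key, ...) are assumed names.
tools = [
    {"name": "vlan", "scope": "network", "canonical_key": "vlan_id",
     "category": "configure", "states": ["present", "absent"]},
]

lines = ["| Tool | Scope | Canonical Key | Category | States |",
         "|------|-------|---------------|----------|--------|"]
for t in tools:
    lines.append(
        f"| {t['name']} | {t['scope']} | {t['canonical_key']} "
        f"| {t['category']} | {', '.join(t['states'])} |"
    )
table = "\n".join(lines)
print(table)
```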

SECTION 3D: CLI Documentation Generator

Tool: generate_cli_docs.py

Location: tools/generate_cli_docs.py

Purpose: Generate a complete CLI reference document by introspecting User Model dataclasses. Produces a Markdown file with a command summary, usage examples, and per-command argument tables.

How It Works

  1. Calls build_tool_definitions() to discover all resources
  2. For each resource, reads field type hints and metadata["description"] to build argument tables
  3. Maps Python types to CLI type labels (string, integer, boolean, JSON)
  4. Formats as a structured Markdown document
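The type mapping in step 3 might look like this (hypothetical simplification; the real generator also unwraps Optional[...] hints):

```python
from typing import Any, Dict, List, get_origin

# Map Python type hints to CLI type labels; container types become JSON.
def cli_type_label(python_type) -> str:
    if get_origin(python_type) in (list, dict):
        return "JSON"
    return {str: "string", int: "integer", bool: "boolean"}.get(python_type, "JSON")

print(cli_type_label(str))        # → string
print(cli_type_label(List[str]))  # → JSON
```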

Usage

python tools/generate_cli_docs.py
# → docs/13-cli.md (48 commands)

Output

docs/13-cli.md — auto-generated, do not edit manually. Contains:

  • Overview, installation, and quick-start examples
  • Global flags reference (--mock, --json, --yaml, --list)
  • Complex argument handling (@file.json references)
  • Command summary table and per-command argument reference

SECTION 3E: Markdown to HTML Converter

Tool: md_to_html.py

Location: tools/md_to_html.py

Purpose: Convert Markdown documentation files to themed HTML pages that match the ansible-doc-renderer site. Used in the GitHub Pages CI workflow to produce mcp-server.html and cli.html alongside the module documentation.

How It Works

  1. Parses Markdown using a standard-library-only converter (no external dependencies)
  2. Handles fenced code blocks, tables, headers, bold/italic, inline code, links, and horizontal rules
  3. Wraps output in the same HTML shell as the module docs (toolbar with zoom/theme, nav bar)
  4. Uses the shared styles.css CSS variables so pages look consistent in light/dark mode
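A stdlib-only converter of this kind can be sketched for a small subset (headers and inline code); the real md_to_html.py handles many more constructs:

```python
import html
import re

# Tiny sketch of the conversion approach using only the standard library.
def md_line_to_html(line: str) -> str:
    m = re.match(r'^(#{1,6}) (.*)$', line)
    if m:
        level = len(m.group(1))
        return f'<h{level}>{html.escape(m.group(2))}</h{level}>'
    text = html.escape(line)                              # escape first
    text = re.sub(r'`([^`]+)`', r'<code>\1</code>', text)  # then inline code
    return f'<p>{text}</p>'

print(md_line_to_html('## Usage'))  # → <h2>Usage</h2>
print(md_line_to_html('Run `md_to_html.py`'))
```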

Usage

python tools/md_to_html.py \
    --output docs/_site \
    --css-path styles.css \
    docs/12-mcp-server.md docs/13-cli.md
# → docs/_site/mcp-server.html, docs/_site/cli.html

CI Integration

Called automatically by .github/workflows/static.yml after the module docs are generated. The resulting HTML pages are uploaded alongside the module docs as a single GitHub Pages artifact.


SECTION 4: Usage Examples

Example 1: Generate NovaCom Admin Dataclasses

Step 1: Create docs file with DOCUMENTATION

# plugins/plugin_utils/docs/admin.py

DOCUMENTATION = """
---
module: novacom_organization_admin
short_description: Manage NovaCom organization administrators
description:
  - Create, update, or delete NovaCom organization admin users
  - Manage admin attributes and RBAC permissions
options:
  username:
    description: Username for the admin
    required: true
    type: str
  email:
    description: Email address
    type: str
  name:
    description: Full name of the admin
    type: str
  org_access:
    description: Organization access level
    type: str
    choices: ['full', 'read-only', 'none']
  tags:
    description:
      - List of network tag-based access permissions
    type: list
    elements: dict
    suboptions:
      tag:
        description: Network tag
        type: str
      access:
        description: Access level for this tag
        type: str
  networks:
    description:
      - List of network-level access permissions
    type: list
    elements: dict
    suboptions:
      network_id:
        description: Network identifier
        type: str
      access:
        description: Access level for this network
        type: str
  organizations:
    description:
      - List of organization names (NOT IDs)
    type: list
    elements: str
  id:
    description:
      - Admin ID (read-only, returned after creation)
    type: str
  created_at:
    description:
      - Creation timestamp (read-only)
    type: str
"""

Step 2: Generate user model dataclass

python tools/generators/generate_user_dataclasses.py \
    plugins/plugin_utils/docs/admin.py \
    --output plugins/plugin_utils/user_models/admin.py

Step 3: Generated output

"""Generated User model dataclass.

Auto-generated from admin module DOCUMENTATION.
DO NOT EDIT MANUALLY - regenerate using tools/generators/
"""

from dataclasses import dataclass
from typing import Optional, List, Dict, Any

from ..platform.base_transform import BaseTransformMixin


@dataclass
class Tags:
    """Nested dataclass for structured option."""
    tag: Optional[str] = None
    access: Optional[str] = None


@dataclass
class Networks:
    """Nested dataclass for structured option."""
    network_id: Optional[str] = None
    access: Optional[str] = None


@dataclass
class UserAdmin(BaseTransformMixin):
    """
    Manage NovaCom organization administrators

    This dataclass represents the user-facing data model.
    It is the stable interface that crosses the RPC boundary.

    Attributes:
        username: Username for the admin
        email: Email address
        name: Full name of the admin
        org_access: Organization access level
        tags: List of network tag-based access permissions
        networks: List of network-level access permissions
        organizations: List of organization names (NOT IDs)
        id: Admin ID (read-only, returned after creation)
        created_at: Creation timestamp (read-only)
    """

    username: str
    email: Optional[str] = None
    name: Optional[str] = None
    org_access: Optional[str] = None
    tags: Optional[List[Tags]] = None
    networks: Optional[List[Networks]] = None
    organizations: Optional[List[str]] = None
    id: Optional[str] = None
    created_at: Optional[str] = None

Step 4: Generate API models

# Ensure novacom-v1.json is in tools/openapi_specs/
bash tools/generators/generate_api_models.sh

Step 5: Generated API output (excerpt)

plugins/plugin_utils/api/v1/generated/models.py contains classes like:

  • Admin — From /components/schemas/Admin
  • Organization — From /components/schemas/Organization
  • Site — From /components/schemas/Site
  • etc.

Example 2: Regeneration After Schema Changes

When the OpenAPI spec changes:

  1. Update spec file: Replace tools/openapi_specs/novacom-v1.json with the new version.
  2. Regenerate API models: Run bash tools/generators/generate_api_models.sh.
  3. Review changes: git diff plugins/plugin_utils/api/v1/generated/models.py.
  4. Update transform mixins if field names changed (manual step): Edit plugins/plugin_utils/api/v1/admin.py and other mixin files.

SECTION 5: Verification Checklist

After generation, verify:

  • Imports are correct: dataclass, typing (Optional, List, Dict, Any), BaseTransformMixin
  • Field types match expectations: Required vs Optional
  • List types have correct element types
  • Nested objects handled correctly (nested dataclasses defined before parent)
  • Docstrings present: Module-level, class, attributes

Common Issues and Fixes

| Issue | Fix |
|-------|-----|
| Missing imports | Add from typing import List, Optional, Dict, Any at top |
| Wrong default value | Fix Optional annotation: is_active: Optional[bool] = True |
| Nested class order | Define child class before parent class |
| Invalid type for suboptions | Ensure elements: dict with suboptions for a list of objects |
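Several of these checks can be automated by parsing the generated file with ast; this sketch inlines a tiny generated example instead of reading from disk:

```python
import ast

# Parse a generated file and confirm every nested dataclass is defined
# before the class that uses it. The source text is inline for illustration;
# in practice, read it from plugins/plugin_utils/user_models/.
source = """
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Tags:
    tag: Optional[str] = None

@dataclass
class UserAdmin:
    tags: Optional[List[Tags]] = None
"""

tree = ast.parse(source)  # raises SyntaxError if the file is malformed
class_order = [n.name for n in tree.body if isinstance(n, ast.ClassDef)]
print(class_order.index('Tags') < class_order.index('UserAdmin'))  # → True
```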

SECTION 6: CI/CD Integration

Documentation Site (GitHub Pages)

The .github/workflows/static.yml workflow builds the full documentation site on push to main:

  1. Builds the Ansible collection and installs it
  2. Runs ansible-doc --metadata-dump to extract module documentation
  3. Renders module HTML pages with the TypeScript ansible-doc-renderer
  4. Runs generate_mcp_docs.py and generate_cli_docs.py to produce Markdown
  5. Runs md_to_html.py to convert MCP server and CLI docs to themed HTML
  6. Deploys all pages to GitHub Pages

The workflow triggers on changes to plugins/modules/, plugins/plugin_utils/user_models/, the renderer, or the generator tools.

Automated Model Regeneration (NovaCom Reference)

For projects with OpenAPI-driven device models, a separate workflow can trigger on spec changes:

# .github/workflows/regenerate-models.yml

name: Regenerate API Models

on:
  push:
    paths:
      - 'tools/openapi_specs/*.json'
  pull_request:
    paths:
      - 'tools/openapi_specs/*.json'

jobs:
  regenerate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install datamodel-code-generator
        run: pip install datamodel-code-generator

      - name: Regenerate API models
        run: bash tools/generators/generate_api_models.sh

      - name: Check for changes
        id: changes
        run: |
          git diff --exit-code plugins/plugin_utils/api/ || echo "changed=true" >> $GITHUB_OUTPUT

      - name: Run tests
        run: |
          pip install -e .
          pytest tests/ -v --tb=short

      - name: Create PR if changes detected
        if: steps.changes.outputs.changed == 'true' && github.event_name == 'push'
        uses: peter-evans/create-pull-request@v5
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          commit-message: "chore: regenerate API models from OpenAPI specs"
          title: "Regenerate API models"
          branch: auto/regenerate-models

Version Compatibility Matrix

Script to produce module × API version support matrix:

# tools/generators/version_matrix.py

"""Generate module x API version compatibility matrix."""

import json
from pathlib import Path

REGISTRY_PATH = Path("plugins/plugin_utils/platform/registry.py")
API_DIR = Path("plugins/plugin_utils/api")


def extract_modules_from_registry() -> dict:
    """Parse registry or api dir for module/version support."""
    matrix = {}
    if not API_DIR.exists():
        return matrix

    for version_dir in sorted(API_DIR.iterdir()):
        if not version_dir.is_dir() or not version_dir.name.startswith("v"):
            continue
        ver = version_dir.name
        for py_file in version_dir.glob("*.py"):
            if py_file.name.startswith("_"):
                continue
            module = py_file.stem
            if module not in matrix:
                matrix[module] = {}
            matrix[module][ver] = "Y"

    return matrix


def print_matrix(matrix: dict) -> None:
    """Print markdown table."""
    versions = sorted(set(v for m in matrix.values() for v in m))
    print("| Module         | " + " | ".join(f"API {v}" for v in versions) + " |")
    print("|----------------|" + "|".join("--------" for _ in versions) + "|")
    for module in sorted(matrix.keys()):
        row = [matrix[module].get(v, "N") for v in versions]
        print(f"| {module:<14} | " + " | ".join(f"{x:^6}" for x in row) + " |")


if __name__ == "__main__":
    m = extract_modules_from_registry()
    print_matrix(m)

Example output:

| Module         | API v1 | API v2 |
|----------------|--------|--------|
| admin          |   Y    |   Y    |
| organization   |   Y    |   Y    |
| site           |   N    |   Y    |

SECTION 7: Integration with Feature Workflow

How generators feed into the feature implementation workflow:

GENERATORS (this doc)
    │
    ├── Generate UserAdmin (auto)
    ├── Generate APIAdmin from OpenAPI (auto)
    │
    ▼
FEATURES (doc 07)
    │
    ├── Create AdminTransformMixin (manual)
    │   - Field mapping
    │   - Custom transforms (names ↔ IDs)
    │   - Endpoint operations
    │
    ├── Create Action Plugin (manual)
    │   - novacom_organization_admin.py
    │
    └── Test with playbook

Summary:

| Step | Tool | Output |
|------|------|--------|
| 1 | generate_user_dataclasses.py | UserAdmin in user_models/admin.py |
| 2 | generate_api_models.sh | Admin, Organization, etc. in api/v1/generated/ |
| 3 | Manual | AdminTransformMixin_v1, APIAdmin_v1 in api/v1/admin.py |
| 4 | Manual | Action plugin (2-line USER_MODEL class) |
| 5 | generate_model_descriptions.py | Field descriptions synced into User Model metadata |
| 6 | generate_mcp_docs.py | docs/12-mcp-server.md |
| 7 | generate_cli_docs.py | docs/13-cli.md |
| 8 | md_to_html.py (CI) | docs/_site/mcp-server.html, docs/_site/cli.html |
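Step 4's "2-line" action plugin can be sketched like this; the class and attribute names stand in for the project's real base classes:

```python
# Hypothetical sketch: the action plugin only declares which User Model it
# uses; the shared base-class machinery does the rest.
class UserAdmin:  # stands in for plugins.plugin_utils.user_models.admin.UserAdmin
    pass

class ActionModule:  # stands in for the shared action-plugin base class
    USER_MODEL = None

class AdminActionModule(ActionModule):
    USER_MODEL = UserAdmin

print(AdminActionModule.USER_MODEL is UserAdmin)  # → True
```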

Related Documents