OmidZamani/codebase-ecosystem-atlas

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

1 Commit
 
 
 
 

Repository files navigation

Codebase Ecosystem Atlas

A comprehensive prompt for analyzing multi-repository codebases locally using LLMs via Ollama or LM Studio with MCP servers.

Overview

This prompt enables you to perform deep static analysis of your entire codebase ecosystem without uploading anything to external services. It generates living documentation including:

  • Architecture maps and service catalogs
  • Dependency and data flow visualization
  • Security findings and technical debt analysis
  • Business logic extraction and flow reconstruction
  • CI/CD and container insights
  • Cross-repository traceability

Tested Models

The following models have been tested and work well with this prompt:

  • nvidia/Mistral-NeMo-12B-Instruct
  • Qwen/Qwen2.5-Coder-14B-Instruct
  • Qwen/Qwen2.5-Coder-7B-Instruct
  • deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
  • google/codegemma-7b
  • google/codegemma-7b-it
  • bigcode/starcoder2-15b
  • bigcode/starcoder2-15b-instruct-v0.1
  • meta-llama/Llama-3.1-8B-Instruct
  • microsoft/Phi-4-reasoning
  • ibm-granite/granite-8b-code-instruct-128k
  • meta-llama/Llama-3.1-8B
  • NousResearch/Hermes-3-Llama-3.1-8B
  • google/gemma-2-9b-it
  • 01-ai/Yi-Coder-9B-Chat
  • mistralai/Ministral-8B-Instruct-2410
  • internlm/internlm2_5-7b-chat
  • mistralai/Codestral-22B-v0.1

Requirements

  • Ollama or LM Studio for running local LLMs
  • MCP Server (e.g., Desktop Commander MCP) for filesystem access
  • One of the tested models above

Usage

  1. Set up Ollama or LM Studio with one of the tested models
  2. Configure your MCP server (e.g., Desktop Commander MCP)
  3. Load the prompt from Public_Codebase_Ecosystem_Atlas_Prompt.md
  4. Replace placeholders:
    • {{ROOT_PATH}}: Your codebase root directory
    • {{OUTPUT_ROOT}}: Where to save analysis outputs
    • {{PROJECT_NAME}}: Your project name
  5. Customize the domain context section with your business domain
  6. Run the analysis
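Step 4 above is plain string substitution. As a minimal illustrative sketch (the helper name, paths, and inline template excerpt are hypothetical; the three placeholder names come from the prompt itself):

```python
def render_prompt(template: str, root_path: str, output_root: str, project_name: str) -> str:
    """Substitute the three placeholders used by the Atlas prompt."""
    return (
        template.replace("{{ROOT_PATH}}", root_path)
                .replace("{{OUTPUT_ROOT}}", output_root)
                .replace("{{PROJECT_NAME}}", project_name)
    )

# A tiny stand-in for the contents of Public_Codebase_Ecosystem_Atlas_Prompt.md:
template = "Analyze {{PROJECT_NAME}} under {{ROOT_PATH}}; write outputs to {{OUTPUT_ROOT}}."
prompt = render_prompt(template, "/srv/code", "/srv/atlas_out", "acme-platform")
print(prompt)
```

In practice you would read the full prompt file from disk, render it once, and paste the result into your LLM client before running the analysis.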

Key Features

Privacy-First

  • 100% local execution
  • No code upload or data exfiltration
  • Read-only static analysis

Comprehensive Coverage

  • Multi-language support (Java, C#, Node.js, Python, Go, PHP, Ruby, Dart, Swift, C/C++, Rust, SQL, etc.)
  • All artifact types (code, configs, CI/CD, infrastructure, migrations, specs)
  • 100% repository coverage with no skips

Rich Outputs

  • Markdown documentation with Mermaid diagrams
  • PlantUML / C4 architecture diagrams
  • Graphviz DOT graphs
  • JSON/YAML structured catalogs
  • CSV metrics and matrices
  • Interactive HTML reports (optional)
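To give a sense of the DOT output format listed above, here is a small sketch of the kind of repo-level dependency graph the analysis might emit (the repository names and helper are illustrative, not part of the prompt):

```python
def to_dot(edges: set[tuple[str, str]]) -> str:
    """Render repo-to-repo dependencies as a Graphviz digraph."""
    lines = ["digraph repos {", "  rankdir=LR;"]
    for src, dst in sorted(edges):
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

deps = {("billing-service", "shared-lib"), ("web-frontend", "billing-service")}
dot = to_dot(deps)
print(dot)
```

A graph like this can be rendered with `dot -Tsvg deps.dot -o deps.svg` or embedded alongside the Mermaid diagrams in the markdown reports.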

Output Structure

{{OUTPUT_ROOT}}/
├── 00_index.md                    # Navigation portal
├── 01_system_design/              # Architecture diagrams
├── 02_maps/                       # Dependency/call/dataflow maps
├── 03_repos/{repo}/               # Per-repository analysis
├── 04_ci_cd/                      # CI/CD findings
├── 05_containers/                 # Docker/K8s analysis
├── 06_frontend/                   # Frontend reports
├── 07_metrics/                    # Metrics and dashboards
├── 08_security/                   # Security findings
├── 09_adr/                        # Architecture Decision Records
├── 10_onboarding/                 # Onboarding guide
├── 11_impact/                     # Change impact analysis
├── 12_debt/                       # Technical debt registry
└── 99_crosslinks/                 # Traceability matrix
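The tree above can be scaffolded ahead of a run so the model only has to fill files in, not invent the layout. A minimal sketch, assuming the folder names shown (the `scaffold` helper and temp path are hypothetical):

```python
import tempfile
from pathlib import Path

# Folder names taken from the output structure above; 03_repos gets
# one subfolder per repository at analysis time.
FOLDERS = [
    "01_system_design", "02_maps", "03_repos", "04_ci_cd", "05_containers",
    "06_frontend", "07_metrics", "08_security", "09_adr", "10_onboarding",
    "11_impact", "12_debt", "99_crosslinks",
]

def scaffold(output_root: Path) -> None:
    """Create the numbered folders plus the 00_index.md navigation portal."""
    output_root.mkdir(parents=True, exist_ok=True)
    (output_root / "00_index.md").write_text("# Atlas index\n", encoding="utf-8")
    for name in FOLDERS:
        (output_root / name).mkdir(exist_ok=True)

out = Path(tempfile.mkdtemp()) / "atlas"
scaffold(out)
```

Point `{{OUTPUT_ROOT}}` at the scaffolded directory and the generated documents will land in the sections listed above.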

Use Cases

  • CTO/Architect: Understand system topology, coupling, and refactoring roadmap
  • Developers: Fast onboarding without tribal knowledge
  • Security/Compliance: Trace sensitive data paths end-to-end
  • DevOps: Identify deployment dependencies and pipeline coupling
  • Executives: Business capabilities and critical flows overview

License

This prompt and documentation are provided as-is for use with local LLM inference tools.

Contributing

Contributions, issues, and feedback are welcome! If you test additional models or have improvements to the prompt, please open an issue or PR.


Note: This is a static analysis tool. Always review outputs and validate findings before taking action.
