# ForgeAI Pipeline Intelligence

AI-Powered Code Reviews, Security Analysis, Architecture Drift Detection & Release Readiness — Directly in Your Jenkins Pipeline
- Why ForgeAI?
- Analyzers
- Supported LLM Backends
- Quick Start
- Pipeline DSL Reference
- Air-Gapped / Local LLM Setup
- HTML Report
- Configuration Reference
- Building from Source
- Project Structure
- Requirements
- Roadmap
- Contributing
- License
## Why ForgeAI?

Every CI/CD pipeline runs linters and tests — but they miss the architectural, strategic, and contextual issues that only experienced engineers catch. ForgeAI bridges that gap by embedding AI-powered intelligence directly into your Jenkins pipeline.
ForgeAI is not another ChatGPT wrapper. It is a purpose-built pipeline intelligence engine with:
- 8 specialized analyzers, each with expert-level system prompts tuned for its domain
- Architecture-aware analysis that understands hexagonal, layered, CQRS, and microservice patterns
- Composite scoring that weighs security 3× and architecture 2× — because not all findings are equal
- Release readiness verdicts (SHIP_IT / CAUTION / HOLD / BLOCK) that synthesize all analyses
- Zero-cloud mode via Ollama for air-gapped and regulated environments
- A self-contained HTML report archived with every build
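To make the weighting concrete, here is an illustrative Groovy sketch of a weighted composite score. The weights mirror the description above (security 3×, architecture 2×, everything else 1×), but they are hypothetical stand-ins, not the plugin's actual internal formula:

```groovy
// Illustrative only: a weighted mean matching the description above.
// Weights and scores are hypothetical, not the plugin's real values.
def weights = ['vulnerability': 3, 'architecture-drift': 2].withDefault { 1 }
def scores  = ['vulnerability': 6, 'architecture-drift': 8, 'code-review': 9]
def weighted = scores.collect { id, s -> s * weights[id] }.sum()
def total    = scores.keySet().sum { id -> weights[id] }
println "Composite: ${weighted / total}"  // a low security score drags hardest
```

The effect is that a single weak security score pulls the composite down far more than an equally weak readability score.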
## Analyzers

| Analyzer | ID | What It Does |
|---|---|---|
| AI Code Review | `code-review` | SOLID, DRY, naming, error handling, anti-patterns, readability |
| Vulnerability Analysis | `vulnerability` | OWASP Top 10, hardcoded secrets, injection, CWE mapping |
| Architecture Drift Detection | `architecture-drift` | Layer violations, circular deps, coupling, pattern enforcement |
| Test Gap Analysis | `test-gaps` | Untested paths, missing edge cases, test quality, concrete suggestions |
| Dependency Risk Scoring | `dependency-risk` | License conflicts, unmaintained deps, unpinned versions, duplication |
| Commit Intelligence | `commit-intel` | Commit hygiene, breaking change detection, changelog & semver suggestions |
| Pipeline Optimizer | `pipeline-advisor` | Parallelization, caching, resource waste, failure resilience |
| Release Readiness | `release-readiness` | Composite verdict synthesizing all prior analyses |
## Supported LLM Backends

ForgeAI is provider-agnostic. Use whatever fits your security and budget requirements:
| Provider | Type | API Key Required | Air-Gapped |
|---|---|---|---|
| OpenAI (GPT-4o, GPT-4o-mini, o1) | Cloud API | Yes | No |
| Anthropic Claude (claude-sonnet-4-5, claude-opus-4-5) | Cloud API | Yes | No |
| Ollama (DeepSeek-Coder, CodeLlama, Llama 3, Mistral, Phi-3) | Local | No | Yes |
| LM Studio | Local | No | Yes |
| vLLM / LocalAI / text-generation-webui | Self-hosted | Optional | Yes |
| Any OpenAI-compatible endpoint | Varies | Varies | Varies |
## Quick Start

### Install the Plugin

Build from source:

```bash
git clone https://github.com/jenkinsci/forgeai-pipeline-intelligence-plugin.git
cd forgeai-pipeline-intelligence-plugin
mvn clean package -DskipTests
```

Upload `target/forgeai-pipeline-intelligence.hpi` via **Manage Jenkins → Plugins → Advanced → Deploy Plugin**.

From the Jenkins Update Center (once published): **Manage Jenkins → Plugins → Available** → search for "ForgeAI Pipeline Intelligence".
### Configure

Navigate to **Manage Jenkins → System → ForgeAI Pipeline Intelligence**:

- Select your LLM Provider (OpenAI / Anthropic / Ollama)
- Enter the Endpoint URL (e.g., `https://api.openai.com/`)
- Enter the Model ID (e.g., `gpt-4o`)
- Select or create an API Key credential (Jenkins Secret Text)
- Click **Test Connection** to verify
- Enable or disable individual analyzers
- Save
## Pipeline DSL Reference

Full suite (recommended):

```groovy
stage('ForgeAI Intelligence') {
    steps {
        script {
            def report = forgeAI(
                analyzers: ['code-review', 'vulnerability', 'architecture-drift',
                            'test-gaps', 'dependency-risk', 'release-readiness'],
                sourceGlob: 'src/**/*.java',
                contextInfo: 'Spring Boot microservice, hexagonal architecture',
                failOnCritical: true,
                criticalThreshold: 4
            )
            echo "Composite Score: ${report.compositeScore}/10"
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'forgeai-reports/**', allowEmptyArchive: true
            publishHTML(target: [
                reportDir: 'forgeai-reports',
                reportFiles: 'forgeai-report.html',
                reportName: 'ForgeAI Report'
            ])
        }
    }
}
```

Single analyzer (targeted):
```groovy
def result = forgeAIScan(
    analyzer: 'vulnerability',
    source: readFile('src/main/java/App.java'),
    context: 'Java 17 REST API handling PII data'
)
if (result.criticalCount > 0) {
    error("Security scan found ${result.criticalCount} critical vulnerabilities")
}
```

Parallel analyzers:
```groovy
stage('ForgeAI Parallel') {
    parallel {
        stage('Security')     { steps { script { forgeAIScan analyzer: 'vulnerability',      source: src } } }
        stage('Architecture') { steps { script { forgeAIScan analyzer: 'architecture-drift', source: src } } }
        stage('Test Gaps')    { steps { script { forgeAIScan analyzer: 'test-gaps',          source: src } } }
    }
}
```

See the `examples/` directory for complete, annotated Jenkinsfiles.
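As a rough illustration of which files the default `sourceGlob` patterns select, here is a hypothetical shell equivalent run against a throwaway demo tree. The plugin itself resolves globs in the Jenkins workspace; this is only a sketch:

```shell
# Build a tiny demo tree, then list the files the default sourceGlob
# patterns (*.java, *.py, *.js, *.ts, *.go, *.rs) would pick up.
mkdir -p demo/src
touch demo/src/App.java demo/src/util.py demo/src/index.ts demo/README.md
find demo -type f \( -name '*.java' -o -name '*.py' -o -name '*.js' \
  -o -name '*.ts' -o -name '*.go' -o -name '*.rs' \)
```

Note that `demo/README.md` is not listed: only source files matching the extension patterns are selected.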
### `forgeAI`

| Parameter | Type | Default | Description |
|---|---|---|---|
| `analyzers` | `List<String>` | All 7 analyzers | Which analyzers to run |
| `sourceGlob` | `String` | `**/*.java,**/*.py,**/*.js,**/*.ts,**/*.go,**/*.rs` | Glob patterns for source files |
| `contextInfo` | `String` | `""` | Project description, architecture, or constraints |
| `failOnCritical` | `boolean` | `false` | Fail build if composite score falls below threshold |
| `criticalThreshold` | `int` | `3` | Minimum composite score (1–10) |
Returns a `Map` with: `compositeScore`, `totalFindings`, `criticalCount`, `analyzerCount`, and per-analyzer scores (e.g., `code-reviewScore`, `vulnerabilityScore`).
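A sketch of consuming that map in a pipeline script. The field names are as documented above; the threshold of 5 and the choice to mark the build unstable rather than fail it are arbitrary example policy, not plugin behaviour:

```groovy
script {
    def report = forgeAI(analyzers: ['code-review', 'vulnerability'])
    echo "Composite ${report.compositeScore}/10 from ${report.analyzerCount} analyzers, " +
         "${report.totalFindings} findings (${report.criticalCount} critical)"
    // Example policy: mark the build unstable instead of failing outright.
    if (report.criticalCount > 0 || report.compositeScore < 5) {
        unstable("ForgeAI gate: score ${report.compositeScore}, " +
                 "${report.criticalCount} critical findings")
    }
}
```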
### `forgeAIScan`

| Parameter | Type | Description |
|---|---|---|
| `analyzer` | `String` | Analyzer ID (see table above) |
| `source` | `String` | Source code or diff to analyze |
| `context` | `String` | Additional context |
Returns a `Map` with: `score`, `severity`, `summary`, `findingsCount`, `criticalCount`, `highCount`.
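One way to use these fields is to scan each recently changed file individually and track the worst score. This is a sketch: the `git diff` invocation and the minimum-score bookkeeping are assumptions for illustration, not built-in plugin features:

```groovy
script {
    // Assumes a git checkout with at least one prior commit.
    def changed = sh(script: "git diff --name-only HEAD~1 -- '*.java'",
                     returnStdout: true).trim()
    def worst = 10
    for (f in (changed ? changed.split('\n') : [])) {
        def r = forgeAIScan(analyzer: 'code-review',
                            source: readFile(f),
                            context: "Changed file: ${f}")
        worst = Math.min(worst, (r.score as int))
    }
    echo "Lowest per-file score: ${worst}"
}
```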
## Air-Gapped / Local LLM Setup

ForgeAI supports fully offline operation — no data ever leaves your network.

### Ollama
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a code-focused model
ollama pull deepseek-coder:6.7b   # Fast, good for most use cases (~4 GB)
ollama pull deepseek-coder:33b    # More accurate, requires ~20 GB RAM
ollama pull codellama:13b         # Meta's code model
ollama pull llama3:8b             # General-purpose, solid code skills

# Verify it is running
curl http://localhost:11434/api/tags
```

Jenkins global config:

```
Provider:  Ollama (Local)
Endpoint:  http://localhost:11434
Model ID:  deepseek-coder:6.7b
API Key:   (leave blank)
```
### LM Studio

- Download from lmstudio.ai
- Load any GGUF model (e.g., `deepseek-coder-v2`)
- Start the local server (default: `http://localhost:1234`)
- In Jenkins:

```
Provider:  OpenAI / OpenAI-Compatible
Endpoint:  http://localhost:1234/
Model ID:  (auto-detected by LM Studio)
API Key:   (leave blank)
```
## HTML Report

Every build generates a self-contained HTML report with:
- Composite score and release verdict (SHIP_IT / CAUTION / HOLD / BLOCK)
- Per-analyzer breakdown with individual scores
- Detailed findings with severity, file location, and suggested fixes
- Dark theme optimised for readability
The report is written to `forgeai-reports/forgeai-report.html` in the workspace. Use `publishHTML` (HTML Publisher plugin) or `archiveArtifacts` to surface it on the build page.
## Configuration Reference

| Setting | Description | Default |
|---|---|---|
| LLM Provider | OpenAI / Anthropic / Ollama | OpenAI |
| Endpoint URL | API base URL | https://api.openai.com/ |
| Model ID | Model to use | gpt-4o |
| API Key Credential | Jenkins Secret Text credential ID | — |
| Temperature | LLM creativity (0.0–1.0) | 0.2 |
| Timeout | Request timeout in seconds | 120 |
| Max Tokens | Maximum response length | 4096 |
| Per-analyzer toggles | Enable or disable each analyzer globally | All enabled |
| Publish HTML Report | Generate the HTML report artifact | true |
| Fail on Low Score | Fail build below the threshold | false |
| Score Threshold | Minimum passing composite score (1–10) | 3 |
| Custom System Prompt | Text prepended to every LLM prompt | — |
## Building from Source

Prerequisites: JDK 17+, Maven 3.9+
```bash
git clone https://github.com/jenkinsci/forgeai-pipeline-intelligence-plugin.git
cd forgeai-pipeline-intelligence-plugin

# Full build with tests
mvn clean verify

# Build only (skip tests)
mvn clean package -DskipTests

# The installable plugin is at:
# target/forgeai-pipeline-intelligence.hpi
```

**Note:** The CI workflow tests against Java 17 and Java 21. Java 25 requires additional Maven configuration to disable two parent-POM-bound plugins (`io.jenkins.tools.maven:license-maven-plugin` and `com.github.spotbugs:spotbugs-maven-plugin`) that bundle ASM versions incompatible with class file major version 69. These overrides are already present in `pom.xml`.
See LOCAL_TESTING.md for a complete guide covering local Jenkins setup, LLM provider configuration, and pre-release validation.
### How ForgeAI Compares

| Feature | ForgeAI | Typical AI Plugins |
|---|---|---|
| Architecture drift detection | Yes — pattern-aware | No — code-level only |
| Composite release scoring | Yes — weighted, cross-analyzer | No — single dimension |
| Pipeline self-optimisation | Yes — analyses the Jenkinsfile itself | Not available |
| Air-gapped local LLM | Yes — Ollama, LM Studio, vLLM | No — cloud-only |
| Multi-provider abstraction | Yes — OpenAI, Anthropic, Ollama, custom | No — single vendor |
| Quality gate with verdicts | Yes — SHIP_IT / CAUTION / HOLD / BLOCK | Pass/fail only |
| Dependency supply-chain risk | Yes — license, maintenance, depth | CVE-only |
| Commit intelligence + changelog | Yes — auto semver + changelog draft | Not available |
| Admin GUI with test connection | Yes — full Jelly config UI | Config-file only |
## Project Structure

```
forgeai-pipeline-intelligence/
├── pom.xml
├── src/
│   ├── main/java/io/forgeai/jenkins/
│   │   ├── config/
│   │   │   └── ForgeAIGlobalConfiguration.java   # Admin GUI settings
│   │   ├── llm/
│   │   │   ├── LLMProvider.java                  # Provider interface
│   │   │   ├── OpenAICompatibleProvider.java     # OpenAI / LM Studio / vLLM
│   │   │   ├── AnthropicProvider.java            # Anthropic Claude API
│   │   │   ├── OllamaProvider.java               # Local Ollama
│   │   │   ├── LLMProviderFactory.java
│   │   │   └── LLMException.java
│   │   ├── analyzers/
│   │   │   ├── BaseAnalyzer.java
│   │   │   ├── CodeReviewAnalyzer.java
│   │   │   ├── VulnerabilityAnalyzer.java
│   │   │   ├── ArchitectureDriftAnalyzer.java
│   │   │   ├── TestGapAnalyzer.java
│   │   │   ├── DependencyRiskAnalyzer.java
│   │   │   ├── CommitIntelligenceAnalyzer.java
│   │   │   ├── PipelineAdvisorAnalyzer.java
│   │   │   ├── ReleaseReadinessAnalyzer.java
│   │   │   └── ResultParser.java
│   │   ├── steps/
│   │   │   ├── ForgeAIStep.java                  # forgeAI pipeline step
│   │   │   ├── ForgeAIScanStep.java              # forgeAIScan pipeline step
│   │   │   └── DirectoryTreeCallable.java
│   │   └── reports/
│   │       ├── AnalysisResult.java
│   │       └── ForgeAIReportGenerator.java
│   ├── main/resources/
│   │   ├── index.jelly
│   │   └── io/forgeai/jenkins/
│   │       ├── config/ForgeAIGlobalConfiguration/config.jelly
│   │       └── steps/ForgeAIStep/config.jelly
│   └── test/java/io/forgeai/jenkins/
│       └── ForgeAIPluginTest.java
├── examples/
│   ├── Jenkinsfile.full-suite        # Full analysis with HTML report
│   ├── Jenkinsfile.targeted          # Parallel targeted scans
│   └── Jenkinsfile.local-ollama      # Air-gapped local LLM
├── .github/workflows/ci.yml          # GitHub Actions (Java 17 + 21 matrix)
├── LOCAL_TESTING.md
├── CONTRIBUTING.md
└── LICENSE                           # Apache 2.0
```
## Requirements

| Requirement | Minimum |
|---|---|
| Jenkins | 2.528.3 LTS |
| Java (runtime) | 17 |
| Java (build) | 17 (tested through 21) |
| Maven (build) | 3.9 |
| LLM | OpenAI API key, Anthropic API key, or Ollama running locally |
## Roadmap

- GitHub Checks API — post findings as PR annotations
- SonarQube integration — augment AI analysis with static analysis data
- Historical trend dashboard — track scores across builds
- Slack / Teams notifications with score summaries
- Multi-language prompt tuning — model-specific prompt optimisation
- Custom analyzer support — define your own analyzer prompts via the UI
- GitLab CI and GitHub Actions adapters
## Contributing

Contributions are welcome. See CONTRIBUTING.md for guidelines on reporting bugs, suggesting features, and submitting pull requests.
Key areas where help is particularly valuable:
- Testing with additional LLM providers and models
- Prompt engineering improvements
- Additional language support (Go, Rust, C#, Ruby)
- HTML report UI improvements
- Documentation and tutorials

## License

Apache License 2.0. See the LICENSE file for details.