An MCP server and CLI for generating Konveyor analyzer rules using AI. Point it at a migration guide, code snippets, or any description of migration concerns — it generates validated rules ready for the konveyor/rulesets repo.
Two entry points, shared internals:
- MCP server — 4 deterministic tools for interactive rule construction from Claude Code, Cursor, Kai, or any MCP client. No server-side LLM needed.
- CLI — E2E pipeline for CI/CD automation with server-side LLM. Auto-detects source/target/language from content.
| Tool | Description |
|---|---|
| `construct_rule` | Takes rule parameters (ruleID, condition type, pattern, location, message, etc.), validates them, and returns valid rule YAML |
| `construct_ruleset` | Takes a name, description, and labels; returns ruleset metadata YAML |
| `validate_rules` | Structural validation: required fields, category, effort, regex, labels, duplicates |
| `get_help` | Documentation on condition types, valid locations, label format, categories, and examples |
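For orientation, a Konveyor analyzer rule of the kind `construct_rule` emits looks roughly like the sketch below. The ruleID, pattern, and message are invented for illustration; `get_help` documents the actual fields and valid values:

```yaml
- ruleID: spring-boot-3-to-spring-boot-4-00001
  category: mandatory
  effort: 1
  labels:
    - konveyor.io/source=spring-boot-3
    - konveyor.io/target=spring-boot-4
  when:
    java.referenced:
      pattern: org.springframework.web.bind.annotation.RequestMapping
      location: ANNOTATION
  message: Describe the required migration change here.
```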
| Command | Description | Status |
|---|---|---|
| `rulegen generate` | Ingest input (URL, file, text) → extract patterns via LLM → construct rules → validate → save | Implemented |
| `rulegen validate` | Structural validation of rule YAML (directory or file); prints JSON, no LLM | Implemented |
| `rulegen test` | Generate test data, run kantra, auto-fix test data via LLM hints (up to `--max-iterations`) | Implemented |
| `rulegen score` | Run kantra tests for functional confidence + optional LLM-as-judge | Experimental |
- Go 1.22+
- kantra — required for `rulegen test` (must be on PATH)
```shell
go build -o rulegen ./cmd/rulegen/
```

Start the server — no API key needed. Supports two transports:
```shell
# stdio (default) — for local MCP clients
./rulegen serve

# Streamable HTTP — for remote/shared deployments
./rulegen serve --transport http --port 8080
```

Stdio is the MCP-recommended transport for local servers. The client launches the server as a subprocess — no separate process to manage, and access is restricted to just the MCP client.
Add `.mcp.json` to your project root:

```json
{
  "mcpServers": {
    "rulegen": {
      "type": "stdio",
      "command": "./rulegen",
      "args": ["serve"]
    }
  }
}
```

Use Streamable HTTP when the server runs remotely, is shared across multiple clients, or you want to manage the server lifecycle independently (e.g., for debugging). Requires starting the server separately with `./rulegen serve --transport http --port 8080`.
```json
{
  "mcpServers": {
    "rulegen": {
      "type": "streamable-http",
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

Streamable HTTP (server must be running separately):
```json
{
  "mcpServers": {
    "rulegen": {
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

If your Cursor version supports stdio MCP servers, you can use the same `.mcp.json` as in Connect from Claude Code (stdio — recommended) above.
Once connected, ask your MCP client:
```
Use the rulegen MCP server to generate Konveyor analyzer rules for this migration guide:
https://gist.github.com/savitharaghunathan/52198c722b807f3862af38b72e6d7331
Save the rules to the output folder with source and target labels.
```
The client LLM will:

- Call `get_help` to learn about condition types and locations
- Read the migration guide content
- Call `construct_rule` for each migration pattern it identifies
- Call `construct_ruleset` to create ruleset metadata
- Call `validate_rules` to verify the output
No server-side LLM or API key is needed — the client's LLM does all the thinking.
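Under the hood, each of those steps is a standard MCP `tools/call` request. A sketch of what a client might send for `construct_rule` follows; the JSON-RPC envelope is the MCP wire format, but the argument names are illustrative guesses — `get_help` documents the real parameter schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "construct_rule",
    "arguments": {
      "ruleID": "spring-boot-3-to-spring-boot-4-00001",
      "conditionType": "java.referenced",
      "pattern": "javax.servlet.*",
      "location": "IMPORT",
      "message": "Replace javax.servlet imports with jakarta.servlet"
    }
  }
}
```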
Set your LLM provider and API key:

```shell
export GEMINI_API_KEY=your-key
```

Generate rules (source/target/language auto-detected from content):
```shell
./rulegen generate \
  --input "https://gist.github.com/savitharaghunathan/52198c722b807f3862af38b72e6d7331" \
  --provider gemini
```

Or specify everything explicitly:
```shell
./rulegen generate \
  --input "https://spring.io/blog/migration-guide" \
  --source spring-boot-3 \
  --target spring-boot-4 \
  --language java \
  --output ./output \
  --provider anthropic
```

| Flag | Description | Required |
|---|---|---|
| `--input` | URL, file path, or text content | Yes |
| `--source` | Source technology (auto-detected if omitted) | No |
| `--target` | Target technology (auto-detected if omitted) | No |
| `--language` | Programming language: `java`, `go`, `nodejs`, `csharp` (auto-detected if omitted) | No |
| `--output` | Output directory (default: `output`) | No |
| `--provider` | LLM provider: `anthropic`, `openai`, `gemini`, `ollama` (overrides `RULEGEN_LLM_PROVIDER` env var) | Yes |
Validate existing rule YAML without an LLM (same structural checks as the `validate_rules` MCP tool):

```shell
./rulegen validate --rules ./output/my-ruleset/rules
```

Use a directory of `.yaml` files or a single rule file. Prints JSON to stdout; exits with a non-zero status if validation fails.
Generate test data, run kantra tests, and auto-fix test data (not rule YAML) when the compile or kantra steps fail:

```shell
./rulegen test \
  --rules output/golang-non-fips-crypto-to-golang-fips-crypto/rules \
  --output output/golang-non-fips-crypto-to-golang-fips-crypto \
  --provider gemini \
  --max-iterations 3
```

The test-fix loop:
- Generates test source code that should trigger each rule
- Phase A — Compile fix: checks compilation (`go build`, `mvn compile`, `npx tsc`, `dotnet build`), feeds errors + API docs back to the LLM, and retries up to 5 times
- Phase B — Kantra test: runs `kantra test` on the generated test data
- When tests still fail, asks the LLM for code hints, regenerates test data, and re-runs (up to `--max-iterations`)
- Consistency check: verifies every rule has a test case and every test references a real rule
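The generated `.test.yaml` files follow kantra's rule-test format. A minimal sketch of that shape is below — paths and rule IDs are invented for illustration, and the kantra documentation is the authoritative reference for the schema:

```yaml
rulesPath: ../rules/crypto.yaml
providers:
  - name: go
    dataPath: ./data/crypto
tests:
  - ruleID: go-fips-00001
    testCases:
      - name: tc-basic-usage
        hasIncidents:
          exactly: 1
```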
Requires the `--experimental` flag: `./rulegen --experimental score ...`
Score rules by running kantra tests (primary signal — does the rule actually work?):

```shell
./rulegen --experimental score \
  --tests output/go-non-fips-crypto-to-go-fips-140-compliance/tests
```

Add LLM-as-judge as a secondary quality signal:
```shell
./rulegen --experimental score \
  --tests output/go-non-fips-crypto-to-go-fips-140-compliance/tests \
  --rules output/go-non-fips-crypto-to-go-fips-140-compliance/rules \
  --provider gemini
```

Verdict logic:
- kantra fail → reject (rule doesn't match test data)
- kantra pass + judge reject → review (works but quality concerns)
- kantra pass + judge accept → accept
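The verdict table above reduces to a small decision function. A minimal Go sketch follows — the function and value names are illustrative, not the tool's internal API:

```go
package main

import "fmt"

// verdict mirrors the decision table above: the kantra result is the
// primary signal, and the optional LLM judge can only downgrade a pass
// to "review". judged reports whether an LLM judge ran at all.
func verdict(kantraPass, judged, judgeAccept bool) string {
	if !kantraPass {
		return "reject" // rule doesn't match its own test data
	}
	if judged && !judgeAccept {
		return "review" // rule works, but the judge flagged quality concerns
	}
	return "accept"
}

func main() {
	fmt.Println(verdict(false, true, true)) // reject
	fmt.Println(verdict(true, true, false)) // review
	fmt.Println(verdict(true, true, true))  // accept
}
```

Note that with no judge configured (`judged == false`), a kantra pass is accepted outright, matching the CLI's behavior when `--provider` is omitted.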
| Flag | Description | Required |
|---|---|---|
| `--tests` | Directory containing `.test.yaml` files | Yes |
| `--rules` | Rules directory; required when using `--provider` (LLM judge) | With `--provider` |
| `--output` | Project root for `confidence/scores.yaml` (default: print only) | No |
| `--kantra` | Path to kantra binary (default: `kantra` on PATH) | No |
| `--timeout` | Kantra timeout in seconds (default: 900) | No |
| `--provider` | LLM provider for judge: `anthropic`, `openai`, `gemini`, `ollama` | No |
| Provider | API Key Env Var | Model Env Var | Default Model |
|---|---|---|---|
| `anthropic` | `ANTHROPIC_API_KEY` | `ANTHROPIC_MODEL` | `claude-sonnet-4-5` |
| `openai` | `OPENAI_API_KEY` | `OPENAI_MODEL` | `gpt-4o` |
| `gemini` | `GEMINI_API_KEY` | `GEMINI_MODEL` | `gemini-2.5-flash` |
| `ollama` | — | `OLLAMA_MODEL` | `llama3` |
Output matches the konveyor/rulesets layout — directly submittable as a PR.
```
output/spring-boot-3-to-spring-boot-4/
├── rules/
│   ├── ruleset.yaml
│   ├── web.yaml
│   └── security.yaml
├── tests/
│   ├── web.test.yaml
│   └── data/web/
│       ├── pom.xml
│       └── src/main/java/com/example/App.java
└── confidence/
    └── scores.yaml   # kantra test results + optional LLM judge scores
```
Java (`java.referenced`, `java.dependency`), Go (`go.referenced`, `go.dependency`), Node.js (`nodejs.referenced`), C# (`csharp.referenced`), and builtin (`filecontent`, `file`, `xml`, `json`, `hasTags`, `xmlPublicID`), plus `and`/`or` combinators.
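Combinators let a single rule match via multiple conditions. A hedged sketch of an `or` condition in analyzer rule syntax — the pattern values below are invented for illustration:

```yaml
when:
  or:
    - java.referenced:
        pattern: javax.xml.bind*
        location: IMPORT
    - builtin.filecontent:
        filePattern: .*\.properties
        pattern: javax\.xml\.bind
```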
```shell
make test       # Unit tests
make test-all   # Unit tests + vet + race detector
make test-e2e   # E2E tests (real LLM + kantra)
make lint       # golangci-lint
```

| Project | Description |
|---|---|
| analyzer-rule-generator (ARG) | Python, LLM-powered rule generation pipeline |
| Scribe | Java/Quarkus MCP server for rule construction |
| analyzer-lsp | Rule engine and analyzer |
| kantra | Rule testing CLI |
Apache-2.0