60 changes: 0 additions & 60 deletions .github/workflows/update-readme-for-merged-pr.yml

This file was deleted.

4 changes: 2 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
# Qodo Agents
# Qodo Agents

This repository contains example agent implementations to be used with [Qodo Command](https://github.com/qodo-ai/command), showcasing best practices and common patterns for building AI-powered development workflows.

Expand Down Expand Up @@ -211,4 +211,4 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file

---

**Built with ❤️ by the Qodo community**
**Built with ❤️ by the Qodo community**
10 changes: 10 additions & 0 deletions agent.toml
Original file line number Diff line number Diff line change
@@ -0,0 +1,10 @@
# Version of the agent configuration standard
version = "1.0"

# Default model for all agents, unless overridden in a specific agent's config.
model = "claude-4.5-sonnet"

# List of all agents that Qodo Command will be able to call.
imports = [
    "agents/code-clarity-agent/agent.toml"
]
82 changes: 82 additions & 0 deletions agents/code-clarity-agent/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,82 @@
# Universal Code Clarity Agent

**Don't just detect code quality issues—FIX them automatically with AI, for *any* language.**

This Qodo agent analyzes source code in any language (Python, JavaScript, Java, Rust, and more) for common clarity and quality problems, generates language-specific AI-powered fixes, and provides a quantifiable score to prove the improvement.

---
**Competition Category:** Best Agent for Clean Code


## Quick Start

To analyze and automatically refactor a file, run:

```bash
qodo code-clarity --set file_path=path/to/your/code.js --set language=javascript
```

```bash
qodo code-clarity --set file_path=path/to/your/code.py --set language=python
```

---

## Core Features

- ✅ **Detects 5 Types of Issues**: Finds missing docstrings, magic numbers, poor variable names, overly complex functions, and redundant comments.
- ✅ **Generates AI-Powered Fixes**: Automatically generates high-quality docstrings, extracts magic numbers into named constants, and suggests better variable names.
- ✅ **Quantifiable Scoring**: Calculates a "Clarity Score" (0-100) before and after the fixes, so you can see the concrete improvement (a sketch of the scoring rubric follows this list).
- ✅ **Before/After Comparison**: The agent's output includes the original and the refactored code, making it easy to review the changes.
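
As a reference for how the score is derived, here is a minimal Python sketch mirroring the rubric defined in `agents/code-clarity-agent/agent.toml`. The agent applies the rubric itself via the model; `clarity_score` and `PENALTIES` are illustrative names for this sketch, not part of the agent.

```python
# Illustrative mirror of the Clarity Score rubric from agent.toml.
# Issue counts would come from the agent's `issues_detected` output.

PENALTIES = {
    "Missing Docstring": 15,
    "Magic Number": 5,
    "Poor Variable Name": 5,
    "Complex Function": 10,  # function/method longer than 30 lines
    "Redundant Comment": 2,
}

def clarity_score(issue_counts: dict[str, int]) -> int:
    """Return the 0-100 Clarity Score for a set of detected issues."""
    deduction = sum(PENALTIES[kind] * count for kind, count in issue_counts.items())
    return max(0, 100 - deduction)

# The file from the Example Workflow below scores 100 - (2*15 + 1*5 + 2*5) = 55.
print(clarity_score({"Missing Docstring": 2, "Magic Number": 1, "Poor Variable Name": 2}))
```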

---

## Core Philosophy: From Detection to Solution

Many code quality tools are excellent at **detecting** problems. They generate a list of issues, leaving the developer with the manual task of fixing them.

This agent is built on a different philosophy: **it provides a solution, not just a report.**

By leveraging AI, the Universal Code Clarity Agent moves beyond simple analysis to offer automated, language-aware refactoring. Its core value lies in saving developer time and reducing cognitive load by not just identifying what's wrong, but actively fixing it.

### Key Differentiators
* **Automated Refactoring:** Instead of just flagging missing docstrings or magic numbers, the agent generates high-quality, language-specific fixes and applies them.
* **Quantifiable Improvement:** The Clarity Score (0-100) provides a concrete metric to demonstrate the value of the changes, showing a clear "before and after" state.
* **Multi-Language by Design:** The agent is built to be language-agnostic, applying the appropriate documentation standards (JSDoc, Google-style docstrings, etc.) based on the user's input. A before/after sketch of this fix style follows.
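
As a concrete illustration, here is a hypothetical Python before/after in the spirit of the fixes described above. The names `total_with_tax` and `SALES_TAX_MULTIPLIER` are invented for this sketch; this is not actual agent output.

```python
# Hypothetical before/after illustrating the fix style (not actual agent output).

# Before: magic number and no docstring.
def total_with_tax(amount):
    return amount * 1.15

# After: constant extracted, Google-style docstring generated.
SALES_TAX_MULTIPLIER = 1.15

def total_with_tax(amount):
    """Return the amount including sales tax.

    Args:
        amount: The pre-tax amount.

    Returns:
        The amount multiplied by SALES_TAX_MULTIPLIER.
    """
    return amount * SALES_TAX_MULTIPLIER
```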

---

## Example Workflow

1. **You run the agent on a file** (a hypothetical `bad_code.py` is sketched after this workflow):
   `qodo code-clarity --set file_path=examples/bad_code.py --set language=python`

2. **The agent analyzes the code and finds:**
   * 2 missing docstrings
   * 1 magic number
   * 2 poor variable names
   * **Initial Score: 55/100** (100 − 2×15 − 1×5 − 2×5, per the rubric)

3. **The agent automatically applies fixes:**
   * Generates two complete, Google-style docstrings.
   * Extracts `0.15` into a constant named `THRESHOLD`.
   * Suggests renaming `calc` to `calculate_value` and `x` to `base_value`.

4. **The agent displays the results:**
   * **Final Score: 90/100** (only the two name suggestions remain to be applied by hand)
   * **Improvement: +64%**
   * A clear, side-by-side view of the original and refactored code.
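
For context, here is a hypothetical `examples/bad_code.py` that would produce exactly the issues listed in step 2. It is illustrative only and not part of the repository.

```python
# Hypothetical input: two functions without docstrings (-30), one magic
# number (-5), and the poor names `calc` and `x` (-10) give 100 - 45 = 55.

def calc(x):
    return x * 0.15  # 0.15 is the magic number the agent would extract as THRESHOLD

def apply_discount(x):
    if calc(x) > 10:
        return x - calc(x)
    return x
```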

---

## Value Proposition

The Universal Code Clarity Agent transforms code quality analysis from a manual, time-consuming chore into a fast, automated workflow. By using AI to generate fixes and providing a clear scoring system, it allows developers to improve their codebase's readability and maintainability in seconds, not hours. This means less time spent on tedious refactoring and more time focused on building features.

---

## Requirements

- **Qodo CLI:** Requires a standard installation and login (`qodo login`).
- **No Special API Keys:** Unlike other agents that may require a `QODO_API_KEY` for premium services, our agent uses the core AI model and local tools, making it accessible to anyone with a basic Qodo account.
100 changes: 100 additions & 0 deletions agents/code-clarity-agent/agent.toml
Original file line number Diff line number Diff line change
@@ -0,0 +1,100 @@
version = "1.0"
model = "claude-4.5-sonnet"

[commands.code-clarity]
description = "Analyzes source code in any language for clarity issues, auto-fixes them using AI, and reports the score improvement."

instructions = """
You are an expert code quality analyst and refactoring assistant for multiple programming languages. Your goal is to analyze a given source file, identify clarity issues, automatically fix them, and report the improvement with a scoring system. The user will specify the language.

Follow these steps precisely:

1. **Analyze the Initial Code:**
   * Read the content of the source file provided in the `file_path` argument.
   * The `language` argument specifies the programming language of the file (e.g., 'python', 'javascript', 'java', 'rust', 'html').
   * Calculate an initial "Clarity Score" based on the following universal rubric (out of 100):
       * Start with 100 points.
       * For each function/method missing a documentation comment: -15 points.
       * For each "magic number" (a hard-coded numerical literal): -5 points.
       * For each variable name that is non-descriptive (e.g., 'x', 'y', 'data', 'i'): -5 points.
       * For each function/method longer than 30 lines: -10 points.
       * For each redundant comment (e.g., `// increment i` for `i++`): -2 points.
   * Keep a list of all detected issues and their locations (line numbers).

2. **Generate AI-Powered Fixes (Language-Specific):**
   * **Documentation Comments:** For each function/method missing documentation, generate a high-quality, standard documentation comment block for the specified `language`.
       * For Python, use Google-style docstrings.
       * For JavaScript/TypeScript, use JSDoc comments.
       * For Java, use Javadoc comments.
       * For Rust, use `///` doc comments.
       * For other languages, use the most common and standard documentation format.
   * **Magic Numbers:** For each magic number, replace it with a constant variable with a descriptive, conventional name for that language (e.g., `const TAX_RATE = 0.15;` in JS, `static final double TAX_RATE = 0.15;` in Java). Place these constants in an appropriate location (e.g., top of the file or within a class).
   * **Variable Names:** For poor variable names, suggest a more descriptive name. *Do not replace them automatically*, but list the suggestion in the `fixes_applied` section.
   * **Redundant Comments:** Remove comments that state the obvious.

3. **Create the Refactored Code:**
   * Apply the generated docstrings, the new constants (replacing magic numbers), and the removal of redundant comments to the original code to create a new, refactored version of the code.

4. **Calculate the Final Score:**
   * Analyze the refactored code using the same scoring rubric from step 1. This will be the final "Clarity Score".

5. **Generate the Output:**
   * Produce a JSON object that strictly follows the `output_schema`.
   * **Do not write to any files.** The JSON object should be the final output printed to the console.
   * The output must include the initial score, final score, a list of issues found, a list of fixes applied, the original code, and the refactored code.
"""

arguments = [
    { name = "file_path", type = "string", required = true, description = "The path to the source code file to analyze and fix." },
    { name = "language", type = "string", required = true, description = "The programming language of the file (e.g., 'python', 'javascript', 'java', 'rust')." }
]

tools = ["filesystem", "shell"]

execution_strategy = "act"

output_schema = """
{
  "type": "object",
  "properties": {
    "initial_score": { "type": "number", "description": "The code clarity score (0-100) before fixes." },
    "final_score": { "type": "number", "description": "The code clarity score (0-100) after fixes." },
    "score_improvement_percent": { "type": "number", "description": "The percentage improvement in the score." },
    "summary": {
      "type": "object",
      "description": "A summary of the changes.",
      "properties": {
        "total_issues_found": { "type": "number" },
        "automatic_fixes_applied": { "type": "number" },
        "suggestions_provided": { "type": "number" }
      }
    },
    "issues_detected": {
      "type": "array",
      "description": "A list of all clarity issues found in the original code.",
      "items": {
        "type": "object",
        "properties": {
          "line": { "type": "number" },
          "type": { "type": "string", "enum": ["Missing Docstring", "Magic Number", "Poor Variable Name", "Complex Function", "Redundant Comment"] },
          "description": { "type": "string" }
        }
      }
    },
    "fixes_applied": {
      "type": "array",
      "description": "A list of all the fixes and suggestions applied to the code.",
      "items": {
        "type": "object",
        "properties": {
          "type": { "type": "string", "enum": ["Generated Docstring", "Extracted Constant", "Suggested Variable Name", "Removed Comment"] },
          "description": { "type": "string" }
        }
      }
    },
    "original_code": { "type": "string", "description": "The original code content." },
    "refactored_code": { "type": "string", "description": "The refactored code content with fixes applied." }
  },
  "required": ["initial_score", "final_score", "summary", "issues_detected", "fixes_applied", "original_code", "refactored_code"]
}
"""
103 changes: 103 additions & 0 deletions agents/code-clarity-agent/agent.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,103 @@
version: "1.0"
model: "claude-4.5-sonnet"

commands:
  code-clarity:
    description: "Analyzes source code in any language for clarity issues, auto-fixes them using AI, and reports the score improvement."
    instructions: |
      You are an expert code quality analyst and refactoring assistant for multiple programming languages. Your goal is to analyze a given source file, identify clarity issues, automatically fix them, and report the improvement with a scoring system. The user will specify the language.

      Follow these steps precisely:

      1. **Analyze the Initial Code:**
         * Read the content of the source file provided in the `file_path` argument.
         * The `language` argument specifies the programming language of the file (e.g., 'python', 'javascript', 'java', 'rust', 'html').
         * Calculate an initial "Clarity Score" based on the following universal rubric (out of 100):
             * Start with 100 points.
             * For each function/method missing a documentation comment: -15 points.
             * For each "magic number" (a hard-coded numerical literal): -5 points.
             * For each variable name that is non-descriptive (e.g., 'x', 'y', 'data', 'i'): -5 points.
             * For each function/method longer than 30 lines: -10 points.
             * For each redundant comment (e.g., `// increment i` for `i++`): -2 points.
         * Keep a list of all detected issues and their locations (line numbers).

      2. **Generate AI-Powered Fixes (Language-Specific):**
         * **Documentation Comments:** For each function/method missing documentation, generate a high-quality, standard documentation comment block for the specified `language`.
             * For Python, use Google-style docstrings.
             * For JavaScript/TypeScript, use JSDoc comments.
             * For Java, use Javadoc comments.
             * For Rust, use `///` doc comments.
             * For other languages, use the most common and standard documentation format.
         * **Magic Numbers:** For each magic number, replace it with a constant variable with a descriptive, conventional name for that language (e.g., `const TAX_RATE = 0.15;` in JS, `static final double TAX_RATE = 0.15;` in Java). Place these constants in an appropriate location (e.g., top of the file or within a class).
         * **Variable Names:** For poor variable names, suggest a more descriptive name. *Do not replace them automatically*, but list the suggestion in the `fixes_applied` section.
         * **Redundant Comments:** Remove comments that state the obvious.

      3. **Create the Refactored Code:**
         * Apply the generated docstrings, the new constants (replacing magic numbers), and the removal of redundant comments to the original code to create a new, refactored version of the code.

      4. **Calculate the Final Score:**
         * Analyze the refactored code using the same scoring rubric from step 1. This will be the final "Clarity Score".

      5. **Generate the Output:**
         * Produce a JSON object that strictly follows the `output_schema`.
         * **Do not write to any files.** The JSON object should be the final output printed to the console.
         * The output must include the initial score, final score, a list of issues found, a list of fixes applied, the original code, and the refactored code.

    arguments:
      - name: file_path
        type: string
        required: true
        description: "The path to the source code file to analyze and fix."
      - name: language
        type: string
        required: true
        description: "The programming language of the file (e.g., 'python', 'javascript', 'java', 'rust')."

    tools: ["filesystem", "shell"]

    execution_strategy: "act"

    output_schema: |
      {
        "type": "object",
        "properties": {
          "initial_score": { "type": "number", "description": "The code clarity score (0-100) before fixes." },
          "final_score": { "type": "number", "description": "The code clarity score (0-100) after fixes." },
          "score_improvement_percent": { "type": "number", "description": "The percentage improvement in the score." },
          "summary": {
            "type": "object",
            "description": "A summary of the changes.",
            "properties": {
              "total_issues_found": { "type": "number" },
              "automatic_fixes_applied": { "type": "number" },
              "suggestions_provided": { "type": "number" }
            }
          },
          "issues_detected": {
            "type": "array",
            "description": "A list of all clarity issues found in the original code.",
            "items": {
              "type": "object",
              "properties": {
                "line": { "type": "number" },
                "type": { "type": "string", "enum": ["Missing Docstring", "Magic Number", "Poor Variable Name", "Complex Function", "Redundant Comment"] },
                "description": { "type": "string" }
              }
            }
          },
          "fixes_applied": {
            "type": "array",
            "description": "A list of all the fixes and suggestions applied to the code.",
            "items": {
              "type": "object",
              "properties": {
                "type": { "type": "string", "enum": ["Generated Docstring", "Extracted Constant", "Suggested Variable Name", "Removed Comment"] },
                "description": { "type": "string" }
              }
            }
          },
          "original_code": { "type": "string", "description": "The original code content." },
          "refactored_code": { "type": "string", "description": "The refactored code content with fixes applied." }
        },
        "required": ["initial_score", "final_score", "summary", "issues_detected", "fixes_applied", "original_code", "refactored_code"]
      }