An intelligent GitHub Action that automatically analyzes the quality of unit test cases in your codebase by comparing them against JIRA requirements and Confluence documentation using AI-powered analysis.
- **Automated Test Quality Scoring**: Generates a comprehensive quality score (0-10) for your test suite
- **JIRA Integration**: Fetches requirements directly from JIRA tickets
- **Confluence Integration**: Analyzes related documentation for complete context
- **AI-Powered Analysis**: Uses AWS Bedrock with Claude for intelligent test evaluation
- **PR Comments**: Automatically posts detailed reports as pull request comments
- **Detailed Reporting**: Provides actionable insights and improvement recommendations
- **Configurable Thresholds**: Set minimum quality scores for your workflow
When the action runs, it generates a comprehensive report like the following:
## Unit Test Quality Report
**JIRA Ticket:** BB-1234
**Branch:** feature/BB-1234-new-feature
### Quality Score

████████░░ 8.0/10
| Metric | Score |
|--------|-------|
| Test Quality Score | **8.0/10** |
| Minimum Threshold | 6.0/10 |
| Status | **PASSED** |
Create `.github/workflows/check-unit-testcases.yml`:

```yaml
name: Check Unit Test Cases Quality

on:
  pull_request:
    branches:
      - dev
    types: [opened, synchronize, reopened]

jobs:
  analyze-test-quality:
    name: Analyze Test Quality
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Extract JIRA ticket
        id: extract_ticket
        env:
          PR_TITLE: ${{ github.event.pull_request.title }}
        run: |
          # Extract the ticket ID from the branch name, falling back to the PR title
          TICKET_ID=""
          if [[ "${{ github.head_ref }}" =~ ([A-Z]+-[0-9]+) ]]; then
            TICKET_ID="${BASH_REMATCH[1]}"
          elif [[ "$PR_TITLE" =~ ([A-Z]+-[0-9]+) ]]; then
            TICKET_ID="${BASH_REMATCH[1]}"
          fi
          echo "ticket_id=$TICKET_ID" >> "$GITHUB_OUTPUT"

      - name: Run Unit Test Quality Analysis
        uses: sourcefuse/[email protected]
        with:
          jira_url: ${{ secrets.UT_QUALITY_JIRA_URL }}
          jira_email: ${{ secrets.UT_QUALITY_JIRA_EMAIL }}
          jira_api_token: ${{ secrets.UT_QUALITY_JIRA_API_TOKEN }}
          jira_ticket_id: ${{ steps.extract_ticket.outputs.ticket_id }}
          confluence_url: ${{ secrets.UT_QUALITY_CONFLUENCE_URL }}
          confluence_email: ${{ secrets.UT_QUALITY_CONFLUENCE_EMAIL }}
          confluence_api_token: ${{ secrets.UT_QUALITY_CONFLUENCE_API_TOKEN }}
          aws_access_key_bedrock: ${{ secrets.UT_QUALITY_AWS_ACCESS_KEY_BEDROCK }}
          aws_secret_key_bedrock: ${{ secrets.UT_QUALITY_AWS_SECRET_KEY_BEDROCK }}
          minimum_score: '6.0'
```

Add the required secrets in your repository settings:
- `UT_QUALITY_JIRA_URL` - Your JIRA instance URL
- `UT_QUALITY_JIRA_EMAIL` - JIRA user email
- `UT_QUALITY_JIRA_API_TOKEN` - JIRA API token
- `UT_QUALITY_CONFLUENCE_URL` - Confluence URL
- `UT_QUALITY_CONFLUENCE_EMAIL` - Confluence email
- `UT_QUALITY_CONFLUENCE_API_TOKEN` - Confluence API token
- `UT_QUALITY_AWS_ACCESS_KEY_BEDROCK` - AWS access key
- `UT_QUALITY_AWS_SECRET_KEY_BEDROCK` - AWS secret key
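You can add these under the repository's Settings → Secrets and variables → Actions, or script them with the GitHub CLI. A sketch, assuming `gh` is installed and authenticated; every value below is a placeholder:

```bash
gh secret set UT_QUALITY_JIRA_URL --body "https://your-org.atlassian.net"
gh secret set UT_QUALITY_JIRA_EMAIL --body "[email protected]"
gh secret set UT_QUALITY_JIRA_API_TOKEN --body "<jira-api-token>"
gh secret set UT_QUALITY_CONFLUENCE_URL --body "https://your-org.atlassian.net/wiki"
gh secret set UT_QUALITY_CONFLUENCE_EMAIL --body "[email protected]"
gh secret set UT_QUALITY_CONFLUENCE_API_TOKEN --body "<confluence-api-token>"
gh secret set UT_QUALITY_AWS_ACCESS_KEY_BEDROCK --body "<aws-access-key-id>"
gh secret set UT_QUALITY_AWS_SECRET_KEY_BEDROCK --body "<aws-secret-access-key>"
```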
The action triggers automatically when you:
- Create a PR to your configured branch
- Include a JIRA ticket ID in the branch name or PR title (e.g., `feature/BB-1234-description`)
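Ticket extraction relies on the Bash regex shown in the Quick Start workflow. For illustration, here is how that pattern behaves on two branch names:

```bash
# Captures the first JIRA-style ID: an uppercase project key, a dash, and digits
[[ "feature/BB-1234-new-feature" =~ ([A-Z]+-[0-9]+) ]] && echo "${BASH_REMATCH[1]}"  # prints: BB-1234
[[ "chore/update-deps" =~ ([A-Z]+-[0-9]+) ]] || echo "no ticket found"               # prints: no ticket found
```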
```mermaid
graph LR
    A[PR Created] --> B[Extract JIRA ID]
    B --> C[Fetch JIRA Requirements]
    C --> D[Fetch Confluence Docs]
    D --> E[Clone Repository]
    E --> F[AI Analysis]
    F --> G[Generate Score]
    G --> H[Post PR Comment]
    H --> I[Upload to Confluence]
```
- **Requirement Gathering**: Fetches requirements from JIRA and related Confluence pages
- **Code Analysis**: Examines test files in your repository
- **AI Evaluation**: Uses Claude to assess test quality against requirements
- **Score Generation**: Calculates a quality score based on multiple factors
- **Report Creation**: Generates a detailed analysis with recommendations
- **Result Posting**: Adds a comment to the PR and optionally uploads to Confluence
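A minimal TypeScript sketch of this flow, using hypothetical interfaces that loosely mirror the services under `src/services/` (the names are illustrative, not the action's actual API):

```typescript
// Hypothetical interfaces; illustrative only, not the action's actual API.
interface Requirements { ticketId: string; text: string; }
interface Evaluation { findings: string[]; }

interface Services {
  jira: { fetchRequirements(id: string): Promise<Requirements> };
  confluence: {
    fetchRelatedPages(r: Requirements): Promise<string[]>;
    uploadReport(report: string): Promise<void>;
  };
  repo: { collectTestFiles(): Promise<string[]> };
  ai: { evaluate(r: Requirements, docs: string[], tests: string[]): Promise<Evaluation> };
  analyzer: {
    score(e: Evaluation): number;
    buildReport(e: Evaluation, score: number): string;
  };
  github: { postPrComment(body: string): Promise<void> };
}

// One pass over the six steps listed above.
async function analyzeTestQuality(svc: Services, ticketId: string): Promise<number> {
  const requirements = await svc.jira.fetchRequirements(ticketId);     // 1. Requirement gathering
  const docs = await svc.confluence.fetchRelatedPages(requirements);   //    ...with Confluence context
  const tests = await svc.repo.collectTestFiles();                     // 2. Code analysis
  const evaluation = await svc.ai.evaluate(requirements, docs, tests); // 3. AI evaluation
  const score = svc.analyzer.score(evaluation);                        // 4. Score generation
  const report = svc.analyzer.buildReport(evaluation, score);          // 5. Report creation
  await svc.github.postPrComment(report);                              // 6. Result posting
  await svc.confluence.uploadReport(report);
  return score;
}
```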
The analyzer evaluates tests across five key dimensions:
| Category | Weight | Focus Areas |
|---|---|---|
| Test Coverage | 20% | Requirement coverage, missing scenarios |
| Test Quality | 20% | Assertions, test structure, clarity |
| Edge Cases | 20% | Error handling, boundary conditions |
| Assertions | 20% | Quality and completeness of checks |
| Maintainability | 20% | Code organization, documentation |
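Since each category carries an equal 20% weight, the overall score reduces to a simple average of the five category scores. A hedged TypeScript sketch of that arithmetic (interface and function names are illustrative, not the action's API):

```typescript
// Five equally weighted (20%) category scores, each on a 0-10 scale.
interface CategoryScores {
  testCoverage: number;
  testQuality: number;
  edgeCases: number;
  assertions: number;
  maintainability: number;
}

function overallScore(s: CategoryScores): number {
  const categories = [s.testCoverage, s.testQuality, s.edgeCases, s.assertions, s.maintainability];
  const weighted = categories.reduce((sum, score) => sum + score * 0.2, 0);
  return Math.round(weighted * 10) / 10; // round to one decimal place, e.g. 8.0
}

// Example: 9, 8, 7, 8, 8 averages to an overall score of 8.0
overallScore({ testCoverage: 9, testQuality: 8, edgeCases: 7, assertions: 8, maintainability: 8 });
```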
| Input | Required | Default | Description |
|---|---|---|---|
| `jira_ticket_id` | Yes | - | JIRA ticket to analyze |
| `minimum_score` | No | `6.0` | Minimum acceptable score |
| `save_to_file` | No | `true` | Save results to files |
| `upload_to_confluence` | No | `true` | Upload to Confluence |
| `repository_branch` | No | `main` | Branch to analyze |
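For example, a step that sets every optional input explicitly (the credential inputs from the Quick Start are omitted for brevity, and `BB-1234` is a placeholder):

```yaml
- name: Run Unit Test Quality Analysis
  uses: sourcefuse/[email protected]
  with:
    jira_ticket_id: 'BB-1234'
    minimum_score: '6.0'
    save_to_file: 'true'
    upload_to_confluence: 'false'  # keep reports local to the workflow run
    repository_branch: 'main'
```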
Configure via GitHub Variables (optional):
- `UT_QUALITY_JIRA_PROJECT_KEY` - Default JIRA project (default: `BB`)
- `UT_QUALITY_CONFLUENCE_SPACE_KEY` - Default Confluence space (default: `BB`)
- `UT_QUALITY_MINIMUM_SCORE` - Global minimum score (default: `1.0`)
- `UT_QUALITY_CLAUDE_CODE_USE_BEDROCK` - Use Bedrock (default: `1`)
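These can also be set from the command line with the GitHub CLI (example values; assumes `gh` is authenticated):

```bash
gh variable set UT_QUALITY_JIRA_PROJECT_KEY --body "BB"
gh variable set UT_QUALITY_CONFLUENCE_SPACE_KEY --body "BB"
gh variable set UT_QUALITY_MINIMUM_SCORE --body "6.0"
gh variable set UT_QUALITY_CLAUDE_CODE_USE_BEDROCK --body "1"
```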
For automated setup, use the provided script:
```bash
# Clone the repository
git clone https://github.com/sourcefuse/ai-test-quality-analyzer.git
cd ai-test-quality-analyzer

# Create a .env file with your credentials
cp .env.example .env
# Edit .env with your values

# Run the setup script
chmod +x setup-github-secrets.sh
./setup-github-secrets.sh
```

The script will:
- Verify GitHub CLI authentication
- Configure all required secrets
- Set up optional variables
- Provide setup confirmation
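The script reads your credentials from `.env`. Assuming the variable names mirror the secret names above (check `.env.example` for the authoritative list), the file might look roughly like:

```bash
# Illustrative .env only; see .env.example for the actual variable names
UT_QUALITY_JIRA_URL=https://your-org.atlassian.net
UT_QUALITY_JIRA_EMAIL[email protected]
UT_QUALITY_JIRA_API_TOKEN=<jira-api-token>
UT_QUALITY_CONFLUENCE_URL=https://your-org.atlassian.net/wiki
UT_QUALITY_CONFLUENCE_EMAIL[email protected]
UT_QUALITY_CONFLUENCE_API_TOKEN=<confluence-api-token>
UT_QUALITY_AWS_ACCESS_KEY_BEDROCK=<aws-access-key-id>
UT_QUALITY_AWS_SECRET_KEY_BEDROCK=<aws-secret-access-key>
```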
To enforce a stricter bar, raise the threshold and fail the job when the score falls short. Note the `id` on the action step, which the follow-up step's `if` condition references:

```yaml
- name: Run Unit Test Quality Analysis
  id: quality_check
  uses: sourcefuse/[email protected]
  with:
    minimum_score: '7.0' # Require high quality

- name: Fail if quality is low
  if: steps.quality_check.outputs.score_passed == 'false'
  run: exit 1
```

The threshold can also scale with the target branch:

```yaml
minimum_score: ${{
  github.base_ref == 'main' && '8.0' ||
  github.base_ref == 'dev' && '6.0' ||
  '4.0'
}}
```

You can also trigger the action manually from the Actions tab with custom parameters (a sketch follows the list below):
- Specific JIRA ticket
- Custom quality threshold
- Target branch
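A hedged `workflow_dispatch` sketch covering those three parameters; the input names here are placeholders to be forwarded into the action's inputs (e.g., `jira_ticket_id: ${{ inputs.jira_ticket_id }}`):

```yaml
on:
  workflow_dispatch:
    inputs:
      jira_ticket_id:
        description: 'Specific JIRA ticket to analyze'
        required: true
      minimum_score:
        description: 'Custom quality threshold'
        required: false
        default: '6.0'
      repository_branch:
        description: 'Target branch to analyze'
        required: false
        default: 'main'
```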
The action posts a formatted comment with:
- Visual score indicator
- Pass/fail status
- Score breakdown
- Links to detailed reports
- Improvement suggestions
Uploaded reports include:
- Complete requirements mapping
- Test coverage analysis
- Code quality assessment
- Specific improvement recommendations
- Example code snippets
| Issue | Solution |
|---|---|
| "Cannot find JIRA ticket" | Ensure branch/PR contains ticket ID (e.g., BB-1234) |
| "AWS authentication failed" | Verify Bedrock credentials and region |
| "Confluence upload failed" | Check API token permissions |
| "Score below threshold" | Review report for improvement areas |
Enable detailed logging:
```yaml
env:
  ACTIONS_RUNNER_DEBUG: true
  ACTIONS_STEP_DEBUG: true
```

Security best practices:

- All credentials stored as GitHub Secrets
- API tokens should have minimal required permissions
- Rotate tokens every 90 days
- Use dedicated service accounts
- Enable GitHub secret scanning
```bash
# Install dependencies
npm install

# Set up environment
cp .env.example .env
# Add your credentials to .env

# Run analysis
npm run start
npm run create-requirement-doc
npm run analyze-test-quality
```

Project structure:

```
├── action.yml                 # GitHub Action definition
├── fetch-and-analyze.ts       # Main analysis orchestrator
├── src/
│   ├── services/              # Core services
│   │   ├── jira.service.ts
│   │   ├── confluence.service.ts
│   │   ├── bedrock-ai.service.ts
│   │   └── requirement-analyzer.service.ts
│   ├── dtos/                  # Data transfer objects
│   ├── models/                # Data models
│   └── utils/                 # Utility functions
├── templates/                 # Workflow templates
├── prompts/                   # AI prompt templates
└── setup-github-secrets.sh    # Setup automation
```
We welcome contributions! Please see our Contributing Guide for details.
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Open a Pull Request
This project is licensed under the ISC License - see the LICENSE file for details.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Setup Guide
- Built with AWS Bedrock and Claude
- Integrates with Atlassian JIRA and Confluence
- Powered by GitHub Actions
On the roadmap:

- Support for multiple testing frameworks
- Integration with code coverage tools
- Custom quality rubrics
- Performance testing analysis
- Integration testing support
- Multi-language support
Created by Vishal Gupta
Maintained by SourceFuse