This document defines Bayat's standards for automating and enhancing the code review process, increasing efficiency while maintaining high code quality.
## Table of Contents

- Introduction
- Automated Checks
- Static Analysis Integration
- Code Review Bot Configuration
- Metrics and Standards
- Workflow Integration
- Security Analysis
- Documentation Validation
- Performance Checks
- Accessibility Validation
- AI-Assisted Review
- Developer Experience Optimization
- Implementation Checklist
## Introduction

Code review is a critical quality control process that benefits from selective automation. This document outlines standards for implementing code review automation that strikes the right balance between automated and human review.
### Goals

- **Increase Efficiency**: Automate repetitive checks to let reviewers focus on architectural and logical considerations
- **Improve Consistency**: Apply uniform standards across all code reviews
- **Accelerate Feedback**: Provide immediate feedback on common issues
- **Prevent Regressions**: Automatically enforce quality standards
- **Support Knowledge Sharing**: Surface insights that help developers learn
### Guiding Principles

- **Automate the Objective**: Focus automation on checks with clear pass/fail criteria
- **Preserve Human Judgment**: Leave subjective aspects of review to human reviewers
- **Progressive Enhancement**: Start with essential checks and add more sophistication over time
- **Team Customization**: Allow teams to customize rules based on project-specific needs
- **Transparency**: Make all automated rules and their justifications visible to all developers
## Automated Checks

The following categories of checks should be automated in all projects:
- **Formatting Validation**:
  - Enforce consistent indentation, spacing, and line length
  - Validate against style guides (e.g., Airbnb for JavaScript)
  - Ensure code is formatted according to project standards
- **Implementation**:
  - Use Prettier or equivalent formatters
  - Configure format-on-save in editors when possible
  - Run format checks in CI/CD pipeline
Example pre-commit hook configuration (husky invokes lint-staged, which is configured in `package.json`):

```sh
#!/bin/sh
# .husky/pre-commit
npx lint-staged
```

```json
// package.json
{
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": ["prettier --write", "eslint --fix"],
    "*.{css,scss}": ["prettier --write", "stylelint --fix"],
    "*.{json,md}": ["prettier --write"]
  }
}
```
- **Linting Standards**:
  - Configure appropriate linters for the language/framework
  - Enforce naming conventions
  - Identify code smells and anti-patterns
- **Implementation**:
  - Use ESLint for JavaScript/TypeScript
  - Use language-specific linters (e.g., RuboCop for Ruby)
  - Configure CI to fail on linting errors
Example ESLint configuration:

```js
// .eslintrc.js
module.exports = {
  extends: [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "plugin:react/recommended",
    "plugin:react-hooks/recommended"
  ],
  rules: {
    "complexity": ["error", { "max": 15 }],
    "max-depth": ["error", { "max": 4 }],
    "max-lines-per-function": ["warn", { "max": 50 }],
    "no-duplicate-imports": "error"
  }
};
```
- **Coverage Requirements**:
  - Enforce minimum test coverage thresholds
  - Require tests for new functionality
  - Flag decreased coverage in pull requests
- **Implementation**:
  - Use coverage tools (Jest, Codecov, etc.)
  - Configure CI to report coverage metrics
  - Compare coverage between branches
Example Jest coverage configuration:

```js
// jest.config.js
module.exports = {
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    }
  },
  collectCoverageFrom: [
    "src/**/*.{js,jsx,ts,tsx}",
    "!src/**/*.d.ts",
    "!src/test/**",
    "!src/index.tsx"
  ]
};
```
## Static Analysis Integration

- **Complexity Analysis**:
  - Configure cyclomatic complexity limits
  - Set maximum function/method length
  - Identify deeply nested code
- **Duplication Detection**:
  - Flag duplicate code blocks
  - Report excessive similarity between files
  - Suggest refactoring opportunities
- **Implementation**:
  - Integrate SonarQube/SonarCloud
  - Configure Code Climate
  - Set up analysis in CI/CD pipeline
Example SonarQube configuration:
# sonar-project.properties
sonar.projectKey=org.example:my-project
sonar.projectName=My Project
sonar.sources=src
sonar.tests=test
sonar.coverage.exclusions=**/*.test.js,**/*.spec.js
sonar.javascript.lcov.reportPaths=coverage/lcov.info
# Quality gates
sonar.qualitygate.wait=true
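Duplication thresholds can also be enforced without a SonarQube server. One illustrative alternative (not part of the mandated toolchain above) is jscpd, configured roughly as follows; the threshold and ignore patterns are examples to adapt per project:

```json
// .jscpd.json (illustrative)
{
  "threshold": 5,
  "minTokens": 50,
  "ignore": ["**/node_modules/**", "**/dist/**"],
  "reporters": ["console"]
}
```

Running `npx jscpd src` in CI then fails when duplicated code exceeds 5% of the scanned sources.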
- **Version Monitoring**:
  - Detect outdated dependencies
  - Identify security vulnerabilities
  - Flag breaking changes in dependencies
- **License Compliance**:
  - Validate dependency licenses against allowlist
  - Flag non-compliant licenses
  - Ensure license compatibility
- **Implementation**:
  - Integrate Dependabot or equivalent
  - Use OWASP Dependency-Check
  - Configure license scanning
Example Dependabot configuration:
# .github/dependabot.yml
version: 2
updates:
- package-ecosystem: "npm"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 10
labels:
- "dependencies"
ignore:
- dependency-name: "lodash"
versions: ["4.x"]
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "monthly"
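For the license-compliance checks above, one option (an illustrative choice, not a mandated tool) is the `license-checker` npm package, which exits non-zero when a production dependency's license falls outside an allowlist. The allowlist below is an example:

```yaml
# Illustrative CI step (GitHub Actions syntax)
- name: License Compliance Check
  run: npx license-checker --production --onlyAllow "MIT;Apache-2.0;BSD-2-Clause;BSD-3-Clause;ISC"
```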
## Code Review Bot Configuration

- **GitHub Integration**:
  - Configure GitHub Actions for CI/CD
  - Set up required status checks
  - Configure branch protection rules
- **GitLab Integration**:
  - Configure GitLab CI/CD
  - Set up merge request approval rules
  - Configure branch protection
- **Bitbucket Integration**:
  - Configure Bitbucket Pipelines
  - Set up code insights
  - Configure branch restrictions
Example GitHub branch protection configuration:

```yaml
# .github/workflows/branch-protection.yml
name: Branch Protection
on:
  push:
    branches:
      - main
      - develop
jobs:
  protect:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v5
        with:
          github-token: ${{ secrets.ADMIN_TOKEN }}
          script: |
            github.rest.repos.updateBranchProtection({
              owner: context.repo.owner,
              repo: context.repo.repo,
              branch: 'main',
              required_status_checks: {
                strict: true,
                contexts: ['build', 'test', 'lint', 'security-scan']
              },
              enforce_admins: true,
              required_pull_request_reviews: {
                dismissal_restrictions: {},
                dismiss_stale_reviews: true,
                require_code_owner_reviews: true,
                required_approving_review_count: 1
              },
              restrictions: null
            })
```
- **Automated Comments**:
  - Configure bots to comment on specific issues
  - Provide resolution suggestions for common problems
  - Format comments consistently
- **Custom Reviewers**:
  - Assign specific reviewers based on file types
  - Balance review workload across the team
  - Suggest reviewers based on expertise
Example CODEOWNERS file:
# CODEOWNERS file
# Assign frontend team to review frontend code
/src/components/ @frontend-team
/src/pages/ @frontend-team
# Assign backend team to review backend code
/server/ @backend-team
/database/ @backend-team @dba-team
# Assign security team to review security-related code
/src/auth/ @security-team
## Metrics and Standards

- **Code Quality Metrics**:
  - Maintainability Index
  - Cyclomatic Complexity
  - Technical Debt Ratio
  - Comment Density
- **Process Metrics**:
  - Time to First Review
  - Review Cycle Time
  - Review Throughput
  - Defect Detection Rate
- **Implementation**:
  - Collect metrics automatically
  - Generate trend reports
  - Set improvement targets
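As a sketch of how a process metric can be computed, the helper below derives "Time to First Review" from two timestamps. The function name and inputs are hypothetical; in practice the timestamps would come from your platform's API (e.g. a PR's `created_at` and its first review's `submitted_at`):

```javascript
// Illustrative sketch: computing the "Time to First Review" metric
// from ISO 8601 timestamps such as those returned by the GitHub API.
function timeToFirstReviewHours(prCreatedAt, firstReviewAt) {
  // Subtracting Date objects yields the difference in milliseconds
  const ms = new Date(firstReviewAt) - new Date(prCreatedAt);
  return ms / (1000 * 60 * 60);
}
```

Aggregating this value across merged PRs per week gives a trend line suitable for the improvement targets mentioned above.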
Example quality metrics dashboard configuration:

```yaml
# .github/workflows/quality-metrics.yml
name: Quality Metrics
on:
  schedule:
    - cron: '0 0 * * 1' # Weekly on Monday
  workflow_dispatch:
jobs:
  metrics:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Generate metrics report
        run: |
          # 'code-quality-metrics' is a placeholder; substitute your team's metrics tooling
          npm install -g code-quality-metrics
          generate-metrics --output metrics-report.json
      - name: Upload metrics
        uses: actions/upload-artifact@v2
        with:
          name: quality-metrics
          path: metrics-report.json
```
- **Required Checks**:
  - Define mandatory checks that must pass
  - Set required approvals count
  - Configure required code owner approvals
- **Warning Levels**:
  - Define severity levels for issues
  - Configure which levels block merging
  - Set thresholds for each level
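A minimal sketch of severity-based merge gating, assuming hypothetical per-level thresholds (the level names and numbers below are illustrative; adjust them to your process):

```javascript
// Hypothetical per-level thresholds: how many issues of each severity
// a PR may contain before merging is blocked.
const thresholds = { error: 0, warning: 10, info: Infinity };

// Given issue counts keyed by severity, decide whether to block the merge.
function shouldBlockMerge(issueCounts) {
  // Block if any severity level exceeds its allowed threshold;
  // unknown levels default to a threshold of 0 (always block).
  return Object.entries(issueCounts).some(
    ([level, count]) => count > (thresholds[level] ?? 0)
  );
}
```

A CI job could feed linter or scanner output into such a function and set the check's pass/fail status accordingly.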
Example configuration for required checks:

```yaml
# .github/workflows/required-checks.yml
name: Required Checks
on:
  pull_request:
    branches:
      - main
      - develop
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Lint Code
        run: npm run lint
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Tests
        run: npm test
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Security Scan
        run: npm run security-scan
```
## Workflow Integration

- **Standardized Templates**:
  - Create templates for different PR types
  - Include sections for context, changes, and testing
  - Provide checklists for common requirements
- **Implementation**:
  - Configure PR templates in the repository
  - Validate PR descriptions against the template
  - Provide feedback on incomplete PRs
Example PR template:
<!-- .github/PULL_REQUEST_TEMPLATE.md -->
## Description
<!-- Describe the changes in this PR -->
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## How Has This Been Tested?
<!-- Describe the tests you ran -->
- [ ] Unit tests
- [ ] Integration tests
- [ ] Manual testing
## Checklist
- [ ] My code follows the style guidelines
- [ ] I have performed a self-review
- [ ] I have added tests that prove my fix/feature works
- [ ] New and existing tests pass
- [ ] My changes generate no new warnings
- **Pipeline Configuration**:
  - Run automated checks in the CI pipeline
  - Block PRs that fail critical checks
  - Provide detailed error reports
- **Review Stages**:
  - Configure review stages in the pipeline
  - Perform checks in parallel when possible
  - Prioritize fast-failing checks
Example GitHub Actions workflow:

```yaml
# .github/workflows/code-review.yml
name: Code Review
on:
  pull_request:
    branches: [ main ]
jobs:
  formatting:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Check Formatting
        run: npm run format:check
  linting:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Linters
        run: npm run lint
  testing:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Tests
        run: npm test
  static-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Static Analysis
        run: npm run analyze
  summary:
    needs: [formatting, linting, testing, static-analysis]
    runs-on: ubuntu-latest
    steps:
      - name: Generate Review Summary
        run: |
          echo "## Automated Review Summary" > summary.md
          echo "All checks passed!" >> summary.md
      - name: Comment PR
        uses: actions/github-script@v5
        with:
          script: |
            const fs = require('fs');
            const summary = fs.readFileSync('summary.md', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: summary
            });
```
## Security Analysis

- **SAST Integration**:
  - Configure Static Application Security Testing
  - Scan for common security vulnerabilities
  - Flag sensitive data exposure
- **Dependency Scanning**:
  - Check dependencies for known vulnerabilities
  - Update vulnerable dependencies automatically
  - Block PRs with critical security issues
- **Implementation**:
  - Integrate tools like Snyk or SonarQube Security
  - Configure security scans in CI/CD
  - Set security issue severity thresholds
Example security scanning configuration:
# .github/workflows/security-scan.yml
name: Security Scanning
on:
pull_request:
branches: [ main ]
push:
branches: [ main ]
jobs:
security:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: SAST Scan
uses: github/codeql-action/analyze@v1
with:
languages: javascript, typescript
- name: Dependency Scan
uses: snyk/actions/node@master
with:
args: --severity-threshold=high
env:
SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
- name: Secret Scanning
uses: zricethezav/gitleaks-action@master
- **Credential Scanning**:
  - Detect hardcoded secrets and credentials
  - Block commits with sensitive information
  - Provide education on secure credential management
- **Implementation**:
  - Configure pre-commit hooks for local detection
  - Integrate GitLeaks or similar in CI/CD
  - Configure real-time alerts for exposed secrets
Example pre-commit hook for secret detection:
#!/bin/sh
# .husky/pre-commit
# Run secret detection
detect-secrets scan --baseline .secrets.baseline
if [ $? -ne 0 ]; then
echo "Error: Secrets detected in code. Please remove them before committing."
exit 1
fi
## Documentation Validation

- **Documentation Requirements**:
  - Verify README exists and is complete
  - Check for API documentation
  - Validate code comment coverage
- **Implementation**:
  - Configure documentation linters
  - Check link validity
  - Verify documentation format
Example documentation validation workflow:

```yaml
# .github/workflows/docs-validation.yml
name: Documentation Validation
on:
  pull_request:
    branches: [ main ]
    paths:
      - '**/*.md'
      - '**/*.rst'
      - 'docs/**'
jobs:
  validate-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Check Markdown Links
        uses: gaurav-nelson/github-action-markdown-link-check@v1
      - name: Lint Documentation
        run: |
          npm install -g markdownlint-cli
          markdownlint '**/*.md' --ignore node_modules
      - name: Check README Completeness
        run: |
          ./scripts/check-readme-sections.sh
```
- **API Documentation Requirements**:
  - Validate OpenAPI/Swagger specs
  - Check for endpoint documentation
  - Verify example requests and responses
- **Implementation**:
  - Configure OpenAPI linters
  - Validate against the schema
  - Check for required sections
Example OpenAPI validation:

```yaml
# .github/workflows/api-docs-validation.yml
name: API Documentation Validation
on:
  pull_request:
    paths:
      - 'api/**'
      - 'openapi.yaml'
jobs:
  validate-api-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Validate OpenAPI Specification
        uses: mbowman100/swagger-validator-action@master
        with:
          files: openapi.yaml
```
## Performance Checks

- **Bundle Size Monitoring**:
  - Track JavaScript bundle size
  - Set size budgets per component
  - Alert on significant increases
- **Load Time Analysis**:
  - Measure page load metrics
  - Set performance budgets
  - Compare against a baseline
- **Implementation**:
  - Configure bundle analyzers
  - Set up Lighthouse CI
  - Implement performance testing in pipeline
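For the Lighthouse CI setup mentioned above, performance budgets can be expressed as assertions in a `.lighthouserc.json` file. The URL, run count, and score thresholds below are illustrative values to adapt per project:

```json
// .lighthouserc.json (illustrative)
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "first-contentful-paint": ["warn", { "maxNumericValue": 2000 }]
      }
    }
  }
}
```

Running `npx lhci autorun` in CI then fails the check when the performance score drops below 0.9.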
Example bundle size check (bundlesize reads its budgets from `package.json`):

```yaml
# .github/workflows/bundle-size.yml
name: Bundle Size
on:
  pull_request:
    paths:
      - 'src/**'
      - 'package.json'
jobs:
  bundle-size:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Analyze Bundle
        run: |
          npm ci
          npm run build
      - name: Check Bundle Size
        run: npx bundlesize
        env:
          BUNDLESIZE_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

```json
// package.json
{
  "bundlesize": [
    { "path": "dist/main.*.js", "maxSize": "100 kB" },
    { "path": "dist/vendor.*.js", "maxSize": "250 kB" }
  ]
}
```
## Accessibility Validation

- **WCAG Compliance Checks**:
  - Validate against WCAG guidelines
  - Check for common accessibility issues
  - Enforce accessibility standards
- **Implementation**:
  - Integrate axe-core or similar tools
  - Configure accessibility testing in CI
  - Generate accessibility reports
Example accessibility testing workflow:

```yaml
# .github/workflows/accessibility.yml
name: Accessibility Testing
on:
  pull_request:
    paths:
      - 'src/**/*.tsx'
      - 'src/**/*.jsx'
      - 'src/**/*.html'
jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: 14
      - name: Install Dependencies
        run: npm ci
      - name: Build Project
        run: npm run build
      - name: Run Accessibility Tests
        run: |
          npm install -g pa11y-ci
          pa11y-ci --config .pa11yrc.json
```
## AI-Assisted Review

- **AI Review Tools**:
  - Configure AI-powered code review assistants
  - Use them for initial screening and suggestions
  - Complement human review with AI insights
- **Implementation**:
  - Integrate tools like DeepCode or CodeGuru
  - Configure automated comments from AI
  - Use AI suggestions as learning opportunities
Example AI code review configuration:

```yaml
# .github/workflows/ai-code-review.yml
name: AI Code Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run AI Code Review
        uses: coderabbitai/ai-code-review@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          config-path: .github/code-review-config.yaml
```
- **Review Pattern Analysis**:
  - Track common feedback patterns
  - Identify recurring issues
  - Create automated rules based on patterns
- **Implementation**:
  - Collect review comments and categorize them
  - Generate reports on common issues
  - Automate detection of recurring patterns
Example review pattern analysis:

```js
// scripts/analyze-review-patterns.js
const { Octokit } = require("@octokit/rest");

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function analyzeReviewPatterns() {
  // Fetch the 100 most recent PR review comments in the repository
  const comments = await octokit.pulls.listReviewCommentsForRepo({
    owner: "org-name",
    repo: "repo-name",
    per_page: 100
  });

  // Analyze patterns in comments
  const patterns = {};
  comments.data.forEach(comment => {
    // Simple analysis by keywords
    const text = comment.body.toLowerCase();
    if (text.includes("naming")) {
      patterns.naming = (patterns.naming || 0) + 1;
    }
    if (text.includes("test") || text.includes("coverage")) {
      patterns.testing = (patterns.testing || 0) + 1;
    }
    // Add more patterns here
  });

  console.log("Review patterns:", patterns);
}

analyzeReviewPatterns();
```
## Developer Experience Optimization

- **Rapid Feedback**:
  - Provide immediate feedback in the editor when possible
  - Run fast checks locally before pushing
  - Optimize the CI/CD pipeline for speed
- **Implementation**:
  - Configure IDE extensions and plugins
  - Set up git hooks for pre-commit checks
  - Parallelize CI/CD jobs
Example VS Code settings for team:
// .vscode/settings.json
{
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll.eslint": true,
"source.organizeImports": true
},
"eslint.validate": [
"javascript",
"typescript",
"javascriptreact",
"typescriptreact"
],
"typescript.preferences.importModuleSpecifier": "relative",
"javascript.preferences.importModuleSpecifier": "relative"
}
- **Developer Utilities**:
  - Provide CLI tools for common tasks
  - Create self-service automation
  - Enable developers to run checks locally
- **Implementation**:
  - Create custom scripts
  - Document self-service tools
  - Package tools for easy installation
Example developer utility script:
#!/bin/bash
# scripts/run-checks.sh
echo "Running pre-PR checks..."
# Run formatting
echo "Checking code formatting..."
npm run format:check
# Run linting
echo "Running linters..."
npm run lint
# Run tests
echo "Running tests..."
npm test
# Run security checks
echo "Running security scan..."
npm run security-scan
# Run bundle analysis
echo "Analyzing bundle size..."
npm run analyze
echo "All checks completed!"
## Implementation Checklist

- [ ] Configure code formatting and linting tools
- [ ] Set up CI/CD integration for automated checks
- [ ] Implement PR templates and CODEOWNERS
- [ ] Configure branch protection rules
- [ ] Set up dependency scanning
- [ ] Define and implement code quality metrics
- [ ] Configure test coverage requirements
- [ ] Set up security scanning
- [ ] Implement documentation validation
- [ ] Configure accessibility testing
- [ ] Implement AI-assisted code review
- [ ] Set up pattern analysis for review comments
- [ ] Create developer self-service tools
- [ ] Configure performance testing
- [ ] Implement feedback loop optimization