Test coverage measures how much of your code is executed during testing. This guide covers coverage measurement, targets, and best practices for your project.
```shell
# Install coverage tools using uv (preferred)
uv add --dev pytest-cov

# Alternative: using pip
pip install pytest-cov
```

```shell
# Basic coverage report
uv run pytest --cov=src/<package_name>

# Detailed terminal report with missing lines
uv run pytest --cov=src/<package_name> --cov-report=term-missing

# Generate HTML coverage report
uv run pytest --cov=src/<package_name> --cov-report=html

# Coverage for specific module
uv run pytest --cov=src/<package_name>/module tests/module/

# Fail the run if coverage drops below a threshold
uv run pytest --cov=src/<package_name> --cov-fail-under=80
```

- Overall Coverage: ≥ 80%
- Core Modules: ≥ 90%
- New Code: ≥ 95%
- Critical Paths: 100%
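These thresholds can be pinned in configuration so every run enforces them without remembering command-line flags. A minimal sketch for `pyproject.toml`, assuming coverage.py's standard `[tool.coverage.*]` tables (adjust `fail_under` to whichever target applies):

```toml
[tool.coverage.run]
source = ["src/<package_name>"]

[tool.coverage.report]
fail_under = 80        # matches the overall target above
show_missing = true
```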
| Module Category | Target Coverage | Priority |
|---|---|---|
| Core Operations | 95% | Critical |
| Business Logic | 90% | High |
| Utility Functions | 85% | Medium |
| Infrastructure | 80% | Medium |
| Example/Demo Code | 50% | Low |
```text
Name                                  Stmts   Miss  Cover   Missing
-------------------------------------------------------------------
src/<package_name>/core/module.py        43      0   100%
src/<package_name>/utils/helpers.py      40      5    88%   45-49
src/<package_name>/api/endpoints.py      98     98     0%   1-198
-------------------------------------------------------------------
TOTAL                                   181    103    43%
```
- Stmts: Total number of statements
- Miss: Number of statements not executed
- Cover: Percentage of statements covered
- Missing: Line numbers not covered
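The Cover column is simply `(Stmts - Miss) / Stmts`, expressed as a whole percent. A quick sketch reproducing the sample report's numbers (coverage.py's display has extra edge rules near 0% and 100%, but nearest-percent is close enough for reading reports):

```python
def cover_percent(stmts: int, miss: int) -> int:
    """Nearest-whole-percent approximation of the Cover column."""
    return round((stmts - miss) / stmts * 100)

# Rows from the sample report above
assert cover_percent(43, 0) == 100    # core/module.py
assert cover_percent(40, 5) == 88     # utils/helpers.py
assert cover_percent(98, 98) == 0     # api/endpoints.py
assert cover_percent(181, 103) == 43  # TOTAL
```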
```shell
# Generate HTML report
uv run pytest --cov=src/<package_name> --cov-report=html

# Open report (macOS)
open htmlcov/index.html

# Open report (Linux)
xdg-open htmlcov/index.html

# Open report (Windows)
start htmlcov/index.html
```

HTML reports provide:
- Interactive line-by-line coverage visualization
- Sortable module list
- Coverage trends over time
- Branch coverage details
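Beyond visual inspection, report output can also feed scripts (for example a custom badge or gate). A minimal sketch, assuming the terminal report format shown earlier, that pulls the TOTAL percentage out of captured output:

```python
import re

def total_coverage(report_text: str) -> int:
    """Extract the TOTAL percentage from a coverage.py terminal report."""
    match = re.search(r"^TOTAL\s+\d+\s+\d+\s+(\d+)%", report_text, re.MULTILINE)
    if match is None:
        raise ValueError("no TOTAL line found in report")
    return int(match.group(1))

# Abbreviated sample in the terminal-report layout
sample = """\
Name                     Stmts   Miss  Cover
--------------------------------------------
src/pkg/core/module.py      43      0   100%
TOTAL                      181    103    43%
"""
assert total_coverage(sample) == 43
```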
Statement coverage is the basic metric showing which lines were executed:
```python
def calculate(x, y):
    result = x + y     # ✓ Covered
    if result > 100:
        return 100     # ✗ Not covered if result ≤ 100
    return result      # ✓ Covered
```

Branch coverage ensures all code paths are tested:
```python
def process(value):
    if value > 0:      # Need tests for both True and False
        return "positive"
    elif value < 0:    # Need tests for both True and False
        return "negative"
    else:
        return "zero"
```

```python
# High statement coverage but poor functional coverage
def divide(a, b):
    # Test might cover the line but miss edge cases
    return a / b  # ✓ Line covered, but did we test b=0?
```

Good: test actual functionality, not just lines:
```python
def test_functionality():
    """Test business logic, not just lines."""
    result = process_data(valid_input)
    assert result.is_valid()
    assert result.meets_requirements()

    # Test edge cases and properties
    edge_result = process_data(edge_case_input)
    assert edge_result.handles_edge_case()
```

Not worth testing:
```python
if __name__ == "__main__":
    # Demo code - low priority for coverage
    demo()

# Platform-specific code
if sys.platform == "win32":
    # Only test on relevant platform
    windows_specific_function()
```

High priority - core business logic:
```python
def critical_calculation(data):
    """Critical function - aim for 100% coverage."""
    # Every line and branch should be tested

# Lower priority - convenience wrapper
def helper_wrapper(data):
    """Simple wrapper - basic test sufficient."""
    return critical_calculation(data)
```

```shell
# Identify untested modules (leading space excludes 100%, 40%, etc.)
uv run pytest --cov=src/<package_name> --cov-report=term-missing | grep " 0%"

# Find partially tested modules (one- and two-digit percentages)
uv run pytest --cov=src/<package_name> --cov-report=term-missing | grep -E " [0-9]{1,2}%"
```
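The same filtering can be done more robustly in Python. A hedged sketch (the column layout is assumed to match the sample report above) that lists modules below a chosen threshold:

```python
import re

# Matches a report row: module path, Stmts, Miss, Cover%
LINE = re.compile(r"^(?P<name>\S+\.py)\s+\d+\s+\d+\s+(?P<cover>\d+)%")

def modules_below(report_text: str, threshold: int) -> list[str]:
    """Module paths whose Cover column is under `threshold` percent."""
    hits = []
    for line in report_text.splitlines():
        m = LINE.match(line)
        if m and int(m.group("cover")) < threshold:
            hits.append(m.group("name"))
    return hits

sample = """\
src/pkg/core/module.py      43      0   100%
src/pkg/utils/helpers.py    40      5    88%   45-49
src/pkg/api/endpoints.py    98     98     0%   1-198
"""
assert modules_below(sample, 90) == [
    "src/pkg/utils/helpers.py",
    "src/pkg/api/endpoints.py",
]
```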
- Measure Baseline

  ```shell
  uv run pytest --cov=src/<package_name> --cov-report=term > coverage_baseline.txt
  ```

- Identify Gaps
  - Sort by coverage percentage
  - Focus on critical modules first
  - Look for easy wins (simple functions)

- Write Targeted Tests

  ```python
  # Use coverage report to identify missing lines
  # Missing: lines 45-52 (error handling)
  def test_error_conditions():
      """Target uncovered error paths."""
      with pytest.raises(ValueError):
          function_that_needs_coverage(invalid_input)
  ```

- Verify Improvement

  ```shell
  # Run coverage again and compare
  uv run pytest --cov=src/<package_name> --cov-report=term
  ```
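As a concrete instance of a targeted test, suppose the report shows the division-by-zero path of a function like the earlier `divide` example is uncovered. A self-contained sketch using plain asserts (in a real suite you would use `pytest.raises` as shown above):

```python
def divide(a, b):
    return a / b

# Happy path - already covered
assert divide(10, 2) == 5.0

# Targeted test for the previously uncovered error path
raised = False
try:
    divide(1, 0)
except ZeroDivisionError:
    raised = True
assert raised
```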
See github-actions-coverage.yaml for a complete GitHub Actions workflow template with coverage reporting and Codecov integration.
See pre-commit-coverage-hook.yaml for pre-commit hook configurations that enforce coverage thresholds.
Quick setup:
```shell
# Install pre-commit
uv add --dev pre-commit

# Add hooks to .pre-commit-config.yaml
# Copy content from the template above

# Install hooks
pre-commit install

# Run coverage check
pre-commit run test-coverage --all-files
```

```python
@pytest.mark.asyncio
async def test_async_function():
    """Ensure async functions are properly covered."""
    result = await async_function()
    assert result is not None

    # Test with various inputs
    large_input = generate_large_input()
    result = await async_function(large_input)
    assert validate_result(result)
```

```python
def test_error_paths():
    """Cover all error conditions."""
    # Invalid input type
    with pytest.raises(TypeError):
        function("not a valid type")

    # Invalid input value
    with pytest.raises(ValueError, match="must be positive"):
        function(-1)

    # Edge case errors
    with pytest.raises(ValueError, match="empty"):
        function([])
```

```python
@pytest.mark.parametrize("input,expected", [
    (positive_value, "positive"),
    (negative_value, "negative"),
    (zero_value, "zero"),
    (edge_case, "edge"),
])
def test_all_branches(input, expected):
    """Ensure all conditional branches are covered."""
    result = function_with_branches(input)
    assert result == expected
```

Ensure test discovery is working:
```shell
uv run pytest --collect-only

# Check source path is correct
uv run pytest --cov=src/<package_name> --cov-report=term

# Verify __init__.py files exist
find src -name "*.py" -type f | head
```

Clear the coverage cache:
```shell
rm -rf .coverage .pytest_cache

# Run with fresh environment
uv run pytest --cov=src/<package_name> --no-cov-on-fail
```

Ensure pytest-asyncio is installed:
```shell
uv add --dev pytest-asyncio
```

Use proper async test marking:

```python
@pytest.mark.asyncio  # Required for async tests
async def test_async():
    result = await async_function()
```

Add coverage badges to README:

Or with dynamic coverage:
[](https://codecov.io/gh/username/repo)- Unit Testing Guide - General unit testing practices
- Testing Overview - Complete testing guide
- CI/CD Setup - Continuous integration configuration
- Project README - Project overview
- Source: vibe-coding-templates
- Version: 1.0.0
- Date: 2025-08-19
- Author: chrishayuk
- Template: Generic Python Project
When adapting this template:
- Replace `<package_name>` with your actual package name
- Adjust coverage targets based on project requirements
- Update CI/CD examples for your platform
- Add project-specific testing patterns
- Include relevant badges for your repository