The research project template includes a reporting system that generates detailed reports at every pipeline stage. This guide explains the reporting features, output locations, and how to interpret the generated reports.
Location: projects/{name}/output/reports/pipeline_report.*
Formats: JSON, HTML, Markdown
Contents:
- Stage execution results
- Stage durations and success/failure status
- Test results and coverage data
- Validation results
- Performance metrics
- Error summaries
- Output statistics
Usage:
# View JSON report
cat projects/{name}/output/reports/pipeline_report.json | jq
# Open HTML report in browser
open projects/{name}/output/reports/pipeline_report.html
# Read Markdown report
cat projects/{name}/output/reports/pipeline_report.md

Location: projects/{name}/output/reports/validation_report.json
Contents:
- PDF validation results
- Markdown validation results
- Output structure validation
- Figure registry validation
- Per-check results with pass/fail status
- Detailed issue breakdowns by severity
- Actionable recommendations
Key Fields:
- checks: Dictionary of validation check results
- figure_issues: List of figure reference issues
- output_statistics: File counts and sizes
- recommendations: Actionable items to resolve issues
- summary: Overall validation summary
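Usage (a minimal jq sketch; the top-level field names are as listed above, but the per-check "passed" key is an assumption about the report schema):
# View the overall summary
jq '.summary' projects/{name}/output/reports/validation_report.json
# List failing checks (assumes each check exposes a boolean "passed" field)
jq '.checks | to_entries | map(select(.value.passed == false))' projects/{name}/output/reports/validation_report.json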
Location: projects/{name}/output/reports/log_summary.txt
Contents:
- Message counts by log level
- Recent errors (last 5-10)
- Recent warnings (last 5-10)
- Total line count
Usage:
# View log summary
cat projects/{name}/output/reports/log_summary.txt
# Quick check for errors
grep -i error projects/{name}/output/reports/log_summary.txt

Location: projects/{name}/output/reports/output_statistics.*
Formats: TXT, JSON
Contents:
- File counts by directory
- File sizes by directory
- Largest files (top 10)
- Missing expected files
- File type distributions
- Total output size
Usage:
# View text report
cat projects/{name}/output/reports/output_statistics.txt
# Parse JSON for programmatic access
uv run python -c "import json; print(json.load(open('projects/{name}/output/reports/output_statistics.json')))"

Location: output/multi_project_summary/
Files:
- multi_project_summary.json - Structured data
- multi_project_summary.md - Human-readable summary
Contents:
- Per-project execution results
- Success/failure counts
- Performance analysis (slowest/fastest projects)
- Error aggregation across projects
- Cross-project recommendations
Usage:
# View summary
cat output/multi_project_summary/multi_project_summary.md
# Check for failed projects
jq '.projects | to_entries | map(select(.value.success == false))' output/multi_project_summary/multi_project_summary.json

Location: output/executive_summary/
Files:
- executive_summary.json - Metrics and data
- executive_summary.html - Interactive dashboard
- executive_summary.md - Text summary
- dashboard_matplotlib.png - Static dashboard
- dashboard_plotly.html - Interactive dashboard
- metrics.csv - CSV data export
Contents:
- Cross-project metrics aggregation
- Health scores for each project
- Visual dashboards with charts
- Performance comparisons
- Resource usage statistics
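Usage (a short sketch mirroring the Usage blocks above; file names as listed):
# Open the interactive dashboard in a browser
open output/executive_summary/executive_summary.html
# Read the text summary
cat output/executive_summary/executive_summary.md
# View the CSV export as aligned columns
column -t -s, output/executive_summary/metrics.csv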
Reports are automatically generated during pipeline execution:
# Single project - generates all reports
uv run python scripts/execute_pipeline.py --project project
# Multi-project - generates all reports + executive summary
uv run python scripts/execute_multi_project.py

Generate specific reports independently:
# Validation report only
uv run python scripts/04_validate_output.py --project project
# Output statistics only
uv run python scripts/05_copy_outputs.py --project project
# Executive report only
uv run python scripts/07_generate_executive_report.py

Pipeline Report Key Metrics:
- total_duration: Total execution time in seconds
- stages[].status: "passed" or "failed" for each stage
- stages[].duration: Time spent in each stage
Red Flags:
- Any stage with "status": "failed"
- error_summary.total_errors > 0
- Long stage durations (> 300s)
Validation Report Key Metrics:
- summary.all_passed: Boolean indicating overall validation status
- summary.failed: Number of failed checks
- figure_issues_count: Number of figure reference issues
Red Flags:
"all_passed": false- Non-empty
figure_issueslist - Any recommendations with
"priority": "high"
Output Statistics Key Metrics:
- total_files: Total number of generated files
- total_size_mb: Total output size in MB
- largest_files: Top 10 largest files
Red Flags:
- Non-empty missing_expected_files list
- Unusually large files (> 100 MB)
- Zero files in expected directories
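A sketch for the statistics fields (names as documented above):
# Totals
jq '{total_files, total_size_mb}' projects/{name}/output/reports/output_statistics.json
# Missing expected files and the largest files
jq '{missing_expected_files, largest_files}' projects/{name}/output/reports/output_statistics.json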
Multi-Project Summary Key Metrics:
- successful_projects: Count of successful projects
- failed_projects: Count of failed projects
- performance_analysis.average_duration: Average execution time
Red Flags:
- failed_projects > 0
- High average duration (> 300s per project)
- Non-empty recommendations list
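A sketch for the multi-project fields (names as documented above):
# Success/failure counts and average duration
jq '{successful_projects, failed_projects, average: .performance_analysis.average_duration}' output/multi_project_summary/multi_project_summary.json
# Cross-project recommendations
jq '.recommendations' output/multi_project_summary/multi_project_summary.json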
Location: projects/{name}/output/reports/
Lifecycle: Cleaned on next pipeline run
Purpose: Working reports during development
Location: output/{name}/reports/
Lifecycle: Preserved across runs
Purpose: Deliverable reports for publication
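Because working reports are cleaned on the next run, a minimal sketch for snapshotting them first (the archives/ path is an example, not part of the template):
# Archive working reports before re-running the pipeline
mkdir -p archives/$(date +%Y%m%d)
cp -r projects/{name}/output/reports archives/$(date +%Y%m%d)/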
Reports can be integrated into CI/CD pipelines:
# Check if validation passed
if jq -e '.summary.all_passed == true' projects/{name}/output/reports/validation_report.json > /dev/null; then
echo "Validation passed"
else
echo "Validation failed"
exit 1
fi
# Check if pipeline succeeded
if jq -e 'all(.stages[]; .status == "passed")' projects/{name}/output/reports/pipeline_report.json > /dev/null; then
echo "Pipeline succeeded"
else
echo "Pipeline failed"
exit 1
fi

Troubleshooting:

Symptom: Report files missing after pipeline execution
Causes:
- Pipeline failed before report generation
- Insufficient permissions to write reports
- Missing reporting module dependencies
Solutions:
# Check pipeline logs
cat projects/{name}/output/logs/pipeline.log | grep -i report
# Verify write permissions
touch projects/{name}/output/reports/test.txt && rm projects/{name}/output/reports/test.txt
# Reinstall dependencies
uv sync

Symptom: Reports generated but missing data
Causes:
- Stage failed before data collection
- Data collection errors
- Corrupted intermediate outputs
Solutions:
# Re-run specific stage
uv run python scripts/04_validate_output.py --project project
# Check for errors in logs
grep -i "failed to generate report" projects/{name}/output/logs/pipeline.logSymptom: Cannot parse JSON reports
Causes:
- Report file corrupted
- Incomplete write
- JSON syntax errors
Solutions:
# Validate JSON syntax
jq . projects/{name}/output/reports/pipeline_report.json
# Regenerate report
uv run python scripts/execute_pipeline.py --project project --stage validate

Best Practices:

- Review Reports Regularly: Check reports after each pipeline run
- Act on Recommendations: Follow recommendations in validation reports
- Monitor Trends: Compare reports across runs to identify patterns (see the sketch after this list)
- Archive Important Reports: Save final reports before re-running pipeline
- Automate Checks: Integrate report validation into CI/CD
- Share Reports: Use HTML/Markdown formats for sharing with team
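A hedged sketch for the trend-monitoring item above, assuming working reports were archived as described under report retention (the archives/previous path is hypothetical):
# Compare total pipeline duration between an archived run and the current one
diff <(jq '.total_duration' archives/previous/pipeline_report.json) \
  <(jq '.total_duration' projects/{name}/output/reports/pipeline_report.json)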
For issues with reporting:
- Check logs for report generation errors
- Verify all dependencies installed
- Ensure pipeline completed successfully
- Review this guide for report interpretation
- Consult troubleshooting guide for specific issues