
📋 Common Workflows

Step-by-step recipes for common tasks

New to the template? Start with Quick Start Cheatsheet | Getting Started

🎯 "I Want To..." Quick Index


Write My First Document

Goal: Create your first professional document from scratch

Prerequisites: Template cloned and dependencies installed

Steps:

  1. Edit the abstract

    vim projects/code_project/manuscript/01_abstract.md
  2. Add your content

    # Abstract {#sec:abstract}
    
    Your research summary goes here. Keep it concise (150-250 words).
  3. Generate the PDF

    # Run core pipeline (eight executor stages by default; see RUN_GUIDE.md)
    uv run python scripts/execute_pipeline.py --project {name} --core-only
  4. View the result

    open output/code_project/pdf/01_abstract.pdf  # Individual section PDFs

Expected Result: Professional PDF with your content formatted

Next Steps: Read Getting Started Guide for more details


Add a New Section to Manuscript

Goal: Add a new numbered section to your manuscript

Prerequisites: Basic understanding of markdown

Steps:

  1. Determine section number

    • Main sections: 01-09 (e.g., 07_limitations.md)
    • Supplemental: S01-S99 (e.g., S03_additional_data.md)
    • See Manuscript Numbering
  2. Create the file

    vim projects/code_project/manuscript/07_limitations.md
  3. Add section header with label

    # Limitations {#sec:limitations}
    
    ## Study Limitations
    
    This research has several limitations...
  4. Rebuild manuscript

    uv run python scripts/execute_pipeline.py --project {name} --core-only
  5. Reference from other sections

    See Section \ref{sec:limitations} for discussion of constraints.

Expected Result: New section appears in correct order in combined PDF

Troubleshooting:

  • Section not appearing? Check filename starts with number/S-number
  • Wrong order? See Manuscript Numbering
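If you're unsure whether a filename will be picked up, the naming rule above can be checked with a short regex. This is a sketch, assuming the convention is exactly a two-digit prefix (`01`-`99`) or an `S`-prefixed two-digit prefix (`S01`-`S99`) followed by a lowercase snake_case name:

```python
import re

# Assumed convention: "07_limitations.md", "S03_additional_data.md", etc.
SECTION_NAME = re.compile(r"^(S?\d{2})_[a-z0-9_]+\.md$")

def is_valid_section_name(filename: str) -> bool:
    """Return True if the filename follows the manuscript numbering convention."""
    return SECTION_NAME.match(filename) is not None

print(is_valid_section_name("07_limitations.md"))  # True
print(is_valid_section_name("limitations.md"))     # False: missing numeric prefix
```

The template's actual validation may be stricter or looser; treat this as a quick local sanity check.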

Create a Figure with Data

Goal: Generate a figure from data using the thin orchestrator pattern

Prerequisites: Understanding of Python and matplotlib

Steps:

  1. Create business logic in projects/{name}/src/

    vim projects/code_project/src/data_analysis.py
    def analyze_data(values):
        """Analyze data and return statistics."""
        return {
            'mean': sum(values) / len(values),
            'max': max(values),
            'min': min(values)
        }
  2. Create tests (90% minimum coverage required)

    vim projects/code_project/tests/test_data_analysis.py
    from projects.code_project.src.data_analysis import analyze_data
    
    def test_analyze_data():
        result = analyze_data([1, 2, 3, 4, 5])
        assert result['mean'] == 3.0
        assert result['max'] == 5
        assert result['min'] == 1
  3. Run tests

    pytest projects/code_project/tests/test_data_analysis.py --cov=projects.code_project.src.data_analysis
  4. Create thin orchestrator script

    vim projects/code_project/scripts/my_analysis_figure.py
    #!/usr/bin/env python3
    import os
    import matplotlib.pyplot as plt
    from projects.code_project.src.data_analysis import analyze_data  # Import from project src/
    
    # Use src/ method for computation
    data = [1, 2, 3, 4, 5]
    stats = analyze_data(data)
    
    # Script handles visualization only
    fig, ax = plt.subplots()
    ax.bar(['Mean', 'Max', 'Min'], 
           [stats['mean'], stats['max'], stats['min']])
    ax.set_title('Data Analysis')
    
    # Save to output
    output_path = 'projects/code_project/output/figures/my_analysis.png'
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    fig.savefig(output_path)
    print(output_path)  # Print for manifest
  5. Run script

    uv run python projects/code_project/scripts/my_analysis_figure.py
  6. Add to manuscript

    \begin{figure}[h]
    \centering
    \includegraphics[width=0.8\textwidth]{../output/figures/my_analysis.png}
    \caption{Statistical analysis of dataset}
    \label{fig:my_analysis}
    \end{figure}

Expected Result: Figure appears in manuscript with professional formatting

Key Principle: Business logic in projects/{name}/src/, visualization in projects/{name}/scripts/

See Also: Thin Orchestrator Pattern


Add Mathematical Equations

Goal: Add numbered equations with cross-references

Prerequisites: Basic LaTeX knowledge

Steps:

  1. Write equation with label

    \begin{equation}\label{eq:quadratic}
    f(x) = ax^2 + bx + c
    \end{equation}
  2. Reference equation in text

    The quadratic function \eqref{eq:quadratic} has two solutions.
  3. For multiple equations

    \begin{align}
    f(x) &= x^2 + 2x + 1 \label{eq:first} \\
    g(x) &= x^3 - x \label{eq:second}
    \end{align}
    
    Equations \eqref{eq:first} and \eqref{eq:second} are related.
  4. Rebuild

    uv run python scripts/execute_pipeline.py --project {name} --core-only

Expected Result: Numbered equations with clickable references

Troubleshooting:

  • Equation shows (??) → Check label spelling
  • Numbering wrong → Ensure unique labels
  • Not rendering → Check LaTeX syntax

See Also: Markdown Template Guide


Cross-Reference Sections and Figures

Goal: Create internal links between document parts

Prerequisites: Basic markdown understanding

Types of References:

Section References

# Methodology {#sec:methodology}

As described in Section \ref{sec:methodology}...

Equation References

\begin{equation}\label{eq:important}
E = mc^2
\end{equation}

From Equation \eqref{eq:important}, we see...

Figure References

\begin{figure}[h]
\centering
\includegraphics{../output/figures/plot.png}
\caption{Results}
\label{fig:results}
\end{figure}

Figure \ref{fig:results} shows...

Table References

\begin{table}[h]
\caption{Performance metrics}
\label{tab:performance}
...
\end{table}

Table \ref{tab:performance} summarizes...

Validation:

uv run python -m infrastructure.validation.cli markdown projects/code_project/manuscript/

See Also: Markdown Template Guide


Add a New Python Module

Goal: Add new functionality following the thin orchestrator pattern

Prerequisites: Python programming knowledge

Steps:

  1. Create module in projects/{name}/src/

    vim projects/code_project/src/statistics.py
    """Statistical analysis functions."""
    
    def calculate_variance(values):
        """Calculate sample variance."""
        mean = sum(values) / len(values)
        return sum((x - mean) ** 2 for x in values) / (len(values) - 1)
    
    def calculate_std_dev(values):
        """Calculate standard deviation."""
        return calculate_variance(values) ** 0.5
  2. Create tests

    vim projects/code_project/tests/test_statistics.py
    from projects.code_project.src.statistics import calculate_variance, calculate_std_dev
    
    def test_calculate_variance():
        values = [1, 2, 3, 4, 5]
        var = calculate_variance(values)
        assert abs(var - 2.5) < 1e-10
    
    def test_calculate_std_dev():
        values = [1, 2, 3, 4, 5]
        std = calculate_std_dev(values)
        assert abs(std - 1.5811388) < 1e-6
  3. Ensure coverage

    pytest projects/code_project/tests/test_statistics.py --cov=projects.code_project.src.statistics --cov-report=term-missing
  4. Use in scripts (thin orchestrator)

    from projects.code_project.src.statistics import calculate_std_dev  # full path avoids shadowing the stdlib statistics module
    
    data = [1, 2, 3, 4, 5]
    std = calculate_std_dev(data)  # Use src/ method
    # Script handles visualization...

Expected Result: Tested module ready for use

Key Rules:

  • ALL business logic in projects/{name}/src/
  • Test coverage required (90% project, 60% infrastructure)
  • Scripts only orchestrate, never implement algorithms

See Also: Thin Orchestrator Pattern


Write Tests for My Code

Goal: Achieve required test coverage for src/ modules (90% minimum)

Prerequisites: Understanding of pytest

Steps:

  1. Create test file

    vim projects/code_project/tests/test_my_module.py
  2. Import module to test

    from projects.code_project.src.my_module import my_function
  3. Write test cases

    import pytest  # needed for pytest.raises below
    
    def test_my_function_basic():
        """Test basic functionality."""
        result = my_function([1, 2, 3])
        assert result == expected_value  # placeholder: use the known answer
    
    def test_my_function_edge_cases():
        """Test edge cases."""
        assert my_function([]) == default_value
        assert my_function([1]) == single_value
    
    def test_my_function_errors():
        """Test error handling."""
        with pytest.raises(ValueError):
            my_function(invalid_input)
  4. Run tests with coverage

    pytest projects/code_project/tests/test_my_module.py --cov=projects.code_project.src.my_module --cov-report=term-missing
  5. Check for missing lines

  • The "Missing" column of the term-missing report lists uncovered line numbers
    • Add tests to cover all branches
  6. Repeat until the coverage threshold is met

Expected Result: All critical code paths tested, coverage requirements met

Requirements:

  • Coverage thresholds: 90% project, 60% infrastructure
  • Branch coverage: all conditional paths tested
  • No mocks: use real data

See Also: Testing Guide | Workflow


Debug Test Failures

Goal: Identify and fix failing tests

Steps:

  1. Run tests verbosely

    pytest tests/ -v
  2. Run specific test

    pytest projects/code_project/tests/test_my_module.py::test_specific_function -v
  3. Use debugger

    pytest projects/code_project/tests/test_my_module.py --pdb
  4. Check detailed output

    pytest tests/ -vv --tb=long
  5. Common issues:

    • Import errors → Check PYTHONPATH
    • Assertion failures → Check expected vs actual values
    • Coverage failures → Add tests for missing lines

Troubleshooting Commands:

# Show test discovery
pytest --collect-only

# Run with maximum verbosity
pytest -vvv

# Show local variables on failure
pytest -l

# Stop at first failure
pytest -x

See Also: FAQ


Fix Coverage Below Requirements

Goal: Achieve required test coverage (90% project, 60% infra)

Steps:

  1. Generate coverage report

    pytest tests/ --cov=src --cov-report=term-missing
  2. Identify missing lines

    • Check the "Missing" column for uncovered line numbers
    • Note which functions/branches aren't covered
  3. Analyze uncovered code

    pytest tests/ --cov=src --cov-report=html
    open htmlcov/index.html
  4. Add tests for uncovered paths

    • Test all conditional branches (if/else)
    • Test exception handling
    • Test edge cases
  5. Verify improvement

    pytest tests/ --cov=src --cov-report=term-missing

Example - Covering Conditional:

# Code with uncovered branch
def process(value):
    if value > 0:  # Covered
        return value * 2
    else:  # Not covered - need test
        return 0

# Add test for uncovered branch
def test_process_negative():
    assert process(-5) == 0

Expected Result: Coverage requirements achieved (90% project, 60% infra)
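A compact way to cover both branches at once is a parametrized test. This sketch applies pytest's built-in `pytest.mark.parametrize` to the `process` function from the example above:

```python
import pytest

def process(value):
    if value > 0:
        return value * 2
    return 0

# One test function, two cases: exercises both branches
@pytest.mark.parametrize("value,expected", [
    (5, 10),   # value > 0 branch
    (-5, 0),   # fallback branch
])
def test_process(value, expected):
    assert process(value) == expected
```

Each parameter tuple runs as its own test case, so the coverage report credits both branches without duplicating test code.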


Generate PDF of Manuscript

Goal: Build professional PDF from markdown sources

Steps:

  1. Run pipeline (recommended)

    # Standard core build (eight executor stages by default; no LLM)
    uv run python scripts/execute_pipeline.py --project {name} --core-only
    
    # Or use unified interactive menu
    ./run.sh
    
    # Run individual stage scripts (each requires --project {name})
    uv run python scripts/00_setup_environment.py --project {name}
    uv run python scripts/01_run_tests.py --project {name}
    uv run python scripts/02_run_analysis.py --project {name}
    uv run python scripts/03_render_pdf.py --project {name}
    uv run python scripts/04_validate_output.py --project {name}
    uv run python scripts/05_copy_outputs.py --project {name}
    # Optional: LLM (06), executive report (07) — see RUN_GUIDE.md
  2. Check for errors

    • Tests must pass (project and infrastructure thresholds in pyproject.toml / CI)
    • Scripts must succeed
    • Markdown validation must pass
    • PDF compilation must succeed
  3. View output

    # Combined PDF after copy outputs
    open output/{name}/pdf/{name}_combined.pdf
    
    # Or working copy under the project tree
    open projects/{name}/output/pdf/{name}_combined.pdf

Core pipeline (--core-only, default flags): eight executor stages — clean outputs, environment setup, infrastructure tests, project tests, analysis, PDF rendering, output validation, copy outputs. Not part of core: LLM stages (06_llm_review.py) and cross-project executive reporting (07_generate_executive_report.py).

Total Time: Varies by project and machine; the sequence above is ordered as in PipelineExecutor.

Troubleshooting:

  • Tests fail → Fix coverage issues
  • Scripts fail → Check imports from src/
  • PDF fails → Check pandoc/xelatex installation
  • References show ?? → Check label spelling

See Also: Pipeline Orchestration | PDF Validation


Customize Project Metadata

Goal: Personalize project with your information

Steps:

  1. Set environment variables

    export AUTHOR_NAME="Dr. Jane Smith"
    export AUTHOR_EMAIL="jane.smith@university.edu"
    export AUTHOR_ORCID="0000-0001-2345-6789"
    export PROJECT_TITLE="My Research Project"
    export DOI="10.5281/zenodo.12345678"  # Optional
  2. Or create .env file

    cp .env.template .env
    vim .env

    Add:

    AUTHOR_NAME="Dr. Jane Smith"
    AUTHOR_EMAIL="jane.smith@university.edu"
    AUTHOR_ORCID="0000-0001-2345-6789"
    PROJECT_TITLE="My Research Project"
    DOI="10.5281/zenodo.12345678"
  3. Source environment

    source .env
  4. Generate with custom metadata

    uv run python scripts/execute_pipeline.py --project {name} --core-only

Applied To:

  • PDF metadata (title, author, date)
  • LaTeX document properties
  • Generated file headers
  • Cross-reference systems
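Under the hood, metadata like this is typically read from the environment. A minimal sketch of how a build script might pick it up, assuming the variable names above (the fallback defaults here are illustrative, not the template's actual values):

```python
import os

# Read author/project metadata from the environment, with fallbacks
metadata = {
    "author": os.environ.get("AUTHOR_NAME", "Anonymous"),
    "email":  os.environ.get("AUTHOR_EMAIL", ""),
    "orcid":  os.environ.get("AUTHOR_ORCID", ""),
    "title":  os.environ.get("PROJECT_TITLE", "Untitled Project"),
    "doi":    os.environ.get("DOI", ""),  # optional
}
print(metadata["title"], "by", metadata["author"])
```

Because `os.environ.get` returns the fallback when a variable is unset, unexported variables silently produce defaults; run `source .env` first if values look wrong.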

See Also: AGENTS.md Configuration


Add Supplemental Materials

Goal: Add supplemental sections to manuscript

Steps:

  1. Create supplemental file

    vim manuscript/S03_supplemental_figures.md
  2. Add content

    # Supplemental Figures {#sec:supplemental_figures}
    
    ## Additional Visualizations
    
    This section contains extended visualizations...
  3. Reference from main text

    See Section \ref{sec:supplemental_figures} for additional figures.
  4. Rebuild

    uv run python scripts/execute_pipeline.py --project {name} --core-only

Naming Convention:

  • Main sections: 01-09
  • Supplemental sections: S01-S99
  • Glossary: 98
  • References: 99

Order in PDF:

  1. Main sections (01-09)
  2. Supplemental sections (S01-S99)
  3. Glossary (98)
  4. References (99)
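The ordering above can be expressed as a sort key. This is a sketch assuming filenames follow the numbering convention; the template's actual ordering logic may differ:

```python
def manuscript_sort_key(filename: str):
    """Sort key: main sections (01-09), then supplemental (S01-S99),
    then glossary (98) and references (99)."""
    prefix = filename.split("_")[0]
    if prefix.startswith("S"):
        return (1, int(prefix[1:]))  # supplemental after main sections
    number = int(prefix)
    if number >= 98:
        return (2, number)           # glossary and references last
    return (0, number)               # main sections first

files = ["99_references.md", "S01_extra.md", "01_abstract.md", "98_glossary.md"]
print(sorted(files, key=manuscript_sort_key))
# ['01_abstract.md', 'S01_extra.md', '98_glossary.md', '99_references.md']
```

Note that a plain lexicographic sort would put `98`/`99` before the `S`-sections, which is why the explicit group component is needed.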

See Also: Manuscript Numbering


Contribute to the Template

Goal: Improve the template for everyone

Steps:

  1. Fork the repository

    # On GitHub, click "Fork"
    git clone https://github.com/YOUR_USERNAME/template.git
  2. Create feature branch

    git checkout -b feature/my-improvement
  3. Make changes

    • Follow thin orchestrator pattern
    • Maintain required test coverage
    • Update documentation
  4. Run tests

    pytest tests/ --cov=src --cov-report=term-missing
  5. Run build

    # Core pipeline (eight executor stages by default)
    uv run python scripts/execute_pipeline.py --project {name} --core-only
    
    # Or use unified interactive menu
    ./run.sh
  6. Commit changes

    git add .
    git commit -m "feat: add feature"
  7. Push and create PR

    git push origin feature/my-improvement
    # On GitHub, create Pull Request

Contribution Checklist:

  • Tests pass (infra + project suites for your branch)
  • Coverage maintained/improved
  • Documentation updated
  • Thin orchestrator pattern followed
  • Commit messages clear
  • PR description clear

See Also: Contributing Guide | Code of Conduct

🔗 Related Documentation


Need more help? Check the FAQ or Documentation Index