Development Guide

This guide covers development workflow, testing, code style, and how to contribute to the Nomic codebase.

Development Setup

Prerequisites

  • Python 3.8 or higher
  • uv (recommended) or pip for package management

Installation

  1. Clone the repository (if applicable)

  2. Install dependencies:

    uv sync --dev
    # or
    pip install -e ".[dev]"

    This installs:

    • Runtime dependencies: pandas, python-dotenv
    • Development dependencies: pytest, ruff, pyright
  3. Verify installation:

    nomic rules num
    pytest

Testing

The project has two types of tests:

  1. Unit Tests (tests/test_*.py) - Test individual functions and modules in isolation
  2. Integration Tests (tests/integration/*.sh) - Test complete workflows using the CLI

Running Tests

Run all unit tests:

pytest

Run specific unit test file:

pytest tests/test_players.py

Run with verbose output:

pytest -v

Run integration tests:

# Integration tests are plain bash scripts
./tests/integration/test_game_workflows.sh

Test Structure

Unit Tests (tests/) are organized to mirror the source structure:

  • test_players.py - Tests for src/players.py
  • test_rules.py - Tests for src/rules.py
  • test_proposals.py - Tests for src/proposals.py
  • test_metadata.py - Tests for src/metadata.py
  • etc.

Integration Tests (tests/integration/) test complete workflows:

  • test_game_workflows.sh - Complete game workflows organized by type:
    • Proposal workflows (enactment, amendment, repeal, transmutation)
    • Voting workflows (unanimous voting, points, prohibited states)
    • Turn workflows (turn order, complete turns, forfeiture)
    • Rule state workflows (mutability, rule limits)
  • test_helpers.sh - Shared test helper functions

Integration tests focus on state change workflows as defined in the game rules, rather than code coverage. Each test is annotated with the relevant rule numbers being tested.

Test Fixtures

The state fixture provides a clean State object for each test:

def test_add_player(state: State):
    from src.players import add
    add(state, "Test Player")
    assert len(state.player_metadata) == 1
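A minimal sketch of the pattern behind such a fixture, using a simplified stand-in for the project's `State` (the real fixture presumably lives in `tests/conftest.py`; the names and fields here are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class FakeState:
    """Stand-in for src.state.State; player_metadata is a list here,
    not a DataFrame, purely for illustration."""
    player_metadata: list = field(default_factory=list)

def make_state() -> FakeState:
    """Factory the fixture would call: one clean state per test,
    so no test can leak data into another."""
    return FakeState()

# In tests/conftest.py the fixture would wrap this, e.g.:
#
#   @pytest.fixture
#   def state() -> State:
#       return make_state()

s1, s2 = make_state(), make_state()
assert s1 is not s2 and s1.player_metadata == []  # fresh object each time
```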

Rule Implementation Testing

Use the @tests decorator to inject required rules into test state:

from src.implements import tests

@tests({105, 109, 203})
def test_vote_passed(state: State):
    from src.actions import vote_passed
    result = vote_passed(state, votes_for=[0, 1, 2], votes_against=[])
    assert result is True

This ensures the state has the required rules before the test runs.

Code Style

Formatting

The project uses ruff for linting and formatting. Check code style:

ruff check src/

Auto-fix lint issues:

ruff check --fix src/

Format code:

ruff format src/

Type Checking

The project uses pyright for type checking:

pyright src/

Style Guidelines

  1. Line Length: 120 characters (configured in pyproject.toml)

  2. Docstrings: Use triple-quoted strings for module and function documentation:

    def add_player(state: State, name: str) -> None:
        """Add a new player to the game.
        
        Args:
            state: State object
            name: Full name of the player
        """
  3. Type Hints: Use type hints for all function parameters and return values:

    def get_player_id(state: State, name: str) -> int:
        ...
  4. Imports: Organize imports (ruff handles this automatically):

    • Standard library
    • Third-party packages
    • Local modules

Code Organization

Module Structure

Each module has a focused responsibility:

  • Domain Logic: players.py, rules.py, proposals.py
  • Actions: actions.py (proposal resolution, turn processing)
  • Infrastructure: state.py, metadata.py, paths.py, text_lists.py
  • Utilities: type_utils.py, id_utils.py, implements.py
  • Interface: cli.py

Adding New Features

  1. Identify the module: Determine which module your feature belongs to
  2. Write the function: Follow existing patterns and style
  3. Add validation: Use InvalidStateError or InvalidProposalError for validation failures
  4. Add tests: Write comprehensive tests
  5. Add CLI command (if needed): Add command to src/cli.py
  6. Update documentation: Update relevant docs files

Example: Adding a New Feature

Let's say we want to add a function to get all active players:

  1. Add to src/players.py:

    def get_active_players(state: State) -> pd.DataFrame:
        """Get all active players.
        
        Args:
            state: State object
            
        Returns:
            DataFrame with active players
        """
        df = state.player_metadata
        require_unique_index(df, "id")
        check_required_columns(df, {"active"})
        return df[df["active"]].copy()
  2. Add test to tests/test_players.py:

    def test_get_active_players(state: State):
        from src.players import add, get_active_players
        
        add(state, "Player 1")
        add(state, "Player 2")
        
        active = get_active_players(state)
        assert len(active) == 2
  3. Add CLI command (if needed):

    def cmd_active_players(paths: Paths) -> None:
        """List all active players."""
        state = State.load(paths)
        active = get_active_players(state)
        NOMIC_LOGGER.info("Active players:\n%s", active.to_string())

Rule Implementation Guide

When implementing game rules, use the @implements decorator to track which rules are implemented:

from src.implements import implements

@implements({201, 202})
def process_turn(state: State) -> None:
    """Process a complete turn.
    
    Implements:
    - Rule 201: Players alternate in alphabetical order
    - Rule 202: Turn consists of proposal + dice roll
    """
    # Implementation
    ...
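A minimal sketch of one way such a decorator might be built: a module-level registry recording which rule numbers each function claims to implement, which is the kind of bookkeeping that lets `@tests` check coverage (the registry name and mechanism here are assumptions, not the actual contents of `src/implements.py`):

```python
from __future__ import annotations

# Hypothetical registry: function name -> rule numbers it implements.
IMPLEMENTED_RULES: dict[str, set[int]] = {}

def implements(rule_numbers: set[int]):
    """Illustrative: record the rules a function implements without
    changing the function's behaviour."""
    def decorator(func):
        IMPLEMENTED_RULES[func.__name__] = set(rule_numbers)
        return func  # behaviour unchanged; only the registry is updated
    return decorator

@implements({201, 202})
def process_turn(state) -> None:
    ...

assert IMPLEMENTED_RULES["process_turn"] == {201, 202}
```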

Best Practices

  1. Document which rules: Always document which rules are implemented in the function docstring
  2. Use the decorator: Always use @implements for functions that implement specific rules
  3. Test with rules: Use @tests decorator in tests to ensure required rules exist
  4. Keep implementations focused: One function should implement a coherent set of related rules

Debugging

Common Issues

State Validation Errors

If you see InvalidStateError, check:

  1. That all required files exist
  2. That IDs/numbers are unique
  3. That referential integrity is maintained (e.g., proposers exist)
  4. That data types match expected types
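As an illustration of the referential-integrity check in item 3, a check like the following could be run with pandas (column names and data are hypothetical; the project's actual validation would raise InvalidStateError instead of printing):

```python
import pandas as pd

# Hypothetical data: every proposal's proposer id must exist
# in the player table.
players = pd.DataFrame({"id": [0, 1], "name": ["Alice", "Bob"]})
proposals = pd.DataFrame({"number": [301, 302], "proposer": [0, 7]})

# .tolist() yields plain Python ints, keeping the report readable.
missing = set(proposals["proposer"].tolist()) - set(players["id"].tolist())
if missing:
    # The project would raise InvalidStateError here.
    print(f"Unknown proposer ids: {sorted(missing)}")
```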

Missing Rules

If you see ValueError: Required rules not all implemented, check:

  1. That the rules exist in game/rules.md
  2. That the rules are in game/rule_metadata.csv
  3. That rule numbers match between files

File Not Found

If files aren't found:

  1. Check that game/ directory exists
  2. Check .env file for custom paths
  3. Verify default paths in src/paths.py
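A stdlib-only sketch of the kind of lookup `src/paths.py` likely performs: an environment variable (possibly loaded from `.env` via python-dotenv) overrides the default `game/` directory. The variable name below is an assumption, not the project's real one:

```python
import os
from pathlib import Path

def load_game_dir() -> Path:
    # "NOMIC_GAME_DIR" is a hypothetical variable name; the real one
    # (and any .env loading via python-dotenv) lives in src/paths.py.
    return Path(os.environ.get("NOMIC_GAME_DIR", "game"))

print(load_game_dir())
```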

Debugging Tips

  1. Use logging: The NOMIC_LOGGER provides structured logging:

    from src.nomic_logger import NOMIC_LOGGER
    NOMIC_LOGGER.info("Debug: %s", value)
  2. Inspect state: Load state and inspect DataFrames:

    from src.state import State
    from src.paths import load_paths
    
    state = State.load(load_paths())
    print(state.player_metadata)
    print(state.rules)
  3. Run tests: Tests often reveal issues:

    pytest -v tests/test_your_feature.py
  4. Check CLI output: Run CLI commands to see error messages:

    nomic players add "Test Player"

Contributing

Workflow

  1. Create a branch (if using git)
  2. Make changes: Follow code style and add tests
  3. Run tests: Ensure all tests pass
  4. Check style: Run ruff check and pyright
  5. Update documentation: Update relevant docs if needed
  6. Submit changes: Create a pull request or submit patch

Pull Request Checklist

  • All tests pass
  • Code follows style guidelines (ruff check passes)
  • Type checking passes (pyright passes)
  • Documentation updated (if needed)
  • New features have tests
  • @implements decorator used for rule implementations

Code Review Guidelines

When reviewing code, check:

  1. Correctness: Does it do what it's supposed to?
  2. Tests: Are there adequate tests?
  3. Style: Does it follow project style?
  4. Documentation: Is it well-documented?
  5. Edge cases: Are edge cases handled?

Project Structure

nomic/
├── game/              # Game state files (CSV + Markdown)
├── src/               # Source code
│   ├── cli.py         # CLI interface
│   ├── players.py     # Player management
│   ├── rules.py       # Rule management
│   ├── proposals.py   # Proposal management
│   ├── actions.py     # Game actions
│   └── ...
├── tests/             # Test files
│   ├── test_*.py      # Unit tests (pytest)
│   └── integration/   # Integration tests
│       ├── test_game_workflows.sh  # Complete workflow tests
│       └── test_helpers.sh         # Shared test helpers
├── docs/              # Documentation
├── pyproject.toml     # Project configuration
└── README.md          # Project overview

Related Documentation