This is a Python package that converts SpikeGadgets .rec files (electrophysiology data) to the NWB format.
**Environment Setup:**

```bash
# Use either conda or mamba
conda env create -f environment.yml
# OR
mamba env create -f environment.yml

# Activate environment
conda activate trodes_to_nwb
# OR
mamba activate trodes_to_nwb

pip install -e .
```

2. Push tag to GitHub (triggers PyPI upload)
3. Create GitHub release

## Development Best Practices

### Test-Driven Development (TDD)
- **Write tests first** before implementing new features or fixing bugs
- Follow the TDD cycle: Red (write failing test) → Green (make it pass) → Refactor
- All new functionality must have corresponding unit tests
- Integration tests required for complex workflows involving multiple components
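The red → green cycle above can be sketched with a hypothetical helper (`parse_rec_header` is illustrative only, not part of the package's API):

```python
# Hypothetical TDD example: the test below is written first and fails ("red"),
# then parse_rec_header is implemented to make it pass ("green").
# Neither name is part of trodes_to_nwb's real API.

def parse_rec_header(raw: bytes) -> dict:
    """Extract the XML header embedded at the start of a .rec-style file."""
    end = raw.find(b"</Configuration>")
    if end == -1:
        raise ValueError(
            "No closing </Configuration> tag found; is this a valid .rec header?"
        )
    return {"header": raw[: end + len(b"</Configuration>")].decode("utf-8")}


def test_parse_rec_header_extracts_xml():
    raw = b"<Configuration><Info/></Configuration>\x00\x01binary-payload"
    result = parse_rec_header(raw)
    assert result["header"].endswith("</Configuration>")
```

The refactor step then cleans up the implementation while the test keeps passing.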

### Quality Assurance Requirements
**All code changes must pass:**

```bash
# Format with black and lint with ruff (both must complete with no errors)
black .
ruff check .

# Run type checking
mypy src/

# Run full test suite with coverage
pytest --cov=src --cov-report=xml --doctest-modules -v --pyargs trodes_to_nwb
```

### Development Workflow
1. **Feature Branches**: Create feature branches for all changes (`git checkout -b feature/issue-XXX-description`)
2. **Incremental Development**: Make small, focused commits that can be easily reviewed and tested
3. **Continuous Testing**: Run tests frequently during development to catch issues early
4. **Pre-commit Validation**: Ensure linting and tests pass before committing
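The steps above might look like this in practice (the repository path, branch name, file, and commit message are all illustrative — a throwaway repo keeps the sketch self-contained):

```shell
# Sketch of steps 1 and 4 of the workflow above; names are placeholders.
set -e
repo=$(mktemp -d)                 # throwaway repo so the sketch is self-contained
cd "$repo"
git init -q
git checkout -q -b feature/issue-123-header-parsing   # 1. feature branch
echo "def parse(): pass" > parser.py                  # 2. small, focused change
git add parser.py
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "Add parser skeleton"                # 4. commit after checks pass
git branch --show-current
```

In the real workflow, the lint and test commands from the Quality Assurance section run between the edit and the commit.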

### Code Quality Standards
- **Test Coverage**: Maintain >90% code coverage for new code
- **Documentation**: All public functions must have docstrings with examples
- **Type Hints**: Use type annotations for all function parameters and return values
- **Error Handling**: Provide clear, actionable error messages with debugging context
- **Performance**: Consider memory usage and processing time for large datasets (17+ hour recordings)
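A hypothetical function combining several of these standards — type hints, a docstring with an example, and an actionable error message (the function name, units, and gain value are illustrative, not package API):

```python
from __future__ import annotations


def scale_to_microvolts(raw: list[int], gain: float) -> list[float]:
    """Convert raw ADC counts to microvolts.

    Parameters
    ----------
    raw : list[int]
        Raw integer samples from the acquisition system.
    gain : float
        Conversion factor in microvolts per count; must be positive.

    Examples
    --------
    >>> scale_to_microvolts([1, 2], 2.0)
    [2.0, 4.0]
    """
    if gain <= 0:
        # Actionable error: says what was wrong and where to look.
        raise ValueError(
            f"gain must be positive, got {gain}; "
            "check the channel gain in the workspace configuration."
        )
    return [sample * gain for sample in raw]
```

The docstring example also doubles as a doctest, which `--doctest-modules` in the test command above will execute.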

### Pull Request Requirements
Before submitting PRs, ensure:
- [ ] All linting passes (`black .`, `ruff check .`)
- [ ] All tests pass with no failures or warnings
- [ ] New functionality includes unit tests
- [ ] Code coverage remains above current level
- [ ] Documentation updated for user-facing changes
- [ ] Performance impact assessed for large files

### Testing Strategy
**Unit Tests**: Focus on individual functions and classes
- Mock external dependencies and file I/O
- Test edge cases and error conditions
- Validate data transformations and calculations
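A minimal sketch of mocking file I/O with the standard library's `unittest.mock`, assuming a hypothetical one-line reader (not a real package function):

```python
from unittest import mock


# Hypothetical function under test: reads a one-line session description.
def read_session_description(path: str) -> str:
    with open(path) as f:
        return f.read().strip()


def test_read_session_description_without_touching_disk():
    # mock_open stands in for the real file, so no file system access occurs.
    fake = mock.mock_open(read_data="  tetrode run, day 3\n")
    with mock.patch("builtins.open", fake):
        assert read_session_description("any/path.txt") == "tetrode run, day 3"
    fake.assert_called_once_with("any/path.txt")
```

The same pattern applies to mocking network calls or hardware interfaces in unit tests.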

**Integration Tests**: Test complete workflows
- Use real test data files where possible
- Validate end-to-end conversion pipelines
- Test memory usage on realistic file sizes
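A sketch of the end-to-end shape such a test can take, using pytest's `tmp_path` fixture; the trivial "conversion" here is a stand-in for a real .rec → .nwb pipeline run against real test data:

```python
from pathlib import Path


# Stand-in for a real conversion step; a genuine integration test would
# invoke the actual pipeline on a real (small) .rec test file.
def convert_file(src: Path, dst: Path) -> None:
    dst.write_text(src.read_text().upper())


def test_conversion_roundtrip(tmp_path: Path) -> None:
    src = tmp_path / "input.rec.txt"
    dst = tmp_path / "output.nwb.txt"
    src.write_text("spike data")

    convert_file(src, dst)

    assert dst.exists()
    assert dst.read_text() == "SPIKE DATA"
```

pytest injects `tmp_path` automatically, so each run gets a clean scratch directory that is removed afterwards.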

**Performance Tests**: Ensure scalability
- Benchmark conversion times for different file sizes
- Monitor memory usage during processing
- Validate parallel processing efficiency
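One way to measure both axes at once with only the standard library — `time.perf_counter` for wall time and `tracemalloc` for peak memory; the workload here is a placeholder, not real conversion code:

```python
import time
import tracemalloc


def benchmark(func, *args):
    """Run func(*args), returning (result, seconds, peak_bytes)."""
    tracemalloc.start()
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak


# Placeholder workload standing in for processing one chunk of a recording.
def process_chunk(n: int) -> int:
    return sum(x * x for x in range(n))


result, seconds, peak_bytes = benchmark(process_chunk, 100_000)
print(f"{seconds:.4f}s, peak {peak_bytes / 1e6:.2f} MB")
```

Running the same benchmark across increasing chunk sizes gives a quick scalability curve before committing to a full 17-hour-recording run.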

## Important Notes

- Package supports Python >=3.8