Thank you for your interest in contributing to ModelAudit! This guide will help you get started with development and contributing to the project.
- Python 3.10 or higher
- uv (recommended) or pip
- Git
```bash
# Clone repository
git clone https://github.com/promptfoo/modelaudit.git
cd modelaudit

# Install with uv (recommended)
uv sync --extra all

# Windows (lighter optional set)
uv sync --extra all-ci-windows

# Or with pip
pip install -e .[all]
```

Install and test your local development version:
```bash
# Option 1: Install in development mode with pip
pip install -e .[all]

# Then test the CLI directly (both forms work: "modelaudit <path>" or "modelaudit scan <path>")
modelaudit test_model.pkl

# Option 2: Use uv (recommended)
uv sync --extra all

# Test with uv run (no shell activation needed)
uv run modelaudit test_model.pkl

# Test with Python import
uv run python -c "from modelaudit import scan_file; print(scan_file('test_model.pkl'))"
```

Create test models for development:
```bash
# Create a simple test pickle file
python -c "import pickle; pickle.dump({'test': 'data'}, open('test_model.pkl', 'wb'))"

# Test scanning it
modelaudit test_model.pkl
```

This project uses optimized parallel test execution for faster development:
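Beyond benign fixtures, it helps to craft a pickle the scanner *should* flag. A minimal sketch using only the standard library; the class name, payload, and filename are illustrative and not part of ModelAudit's test suite:

```python
import pickle
import pickletools

class EvilFixture:
    """Test fixture whose __reduce__ smuggles a callable into the stream."""

    def __reduce__(self):
        # Harmless stand-in for a real attack payload (e.g. os.system).
        return (print, ("executed on unpickle",))

data = pickle.dumps(EvilFixture())
with open("malicious_model.pkl", "wb") as f:
    f.write(data)

# Inspect the opcode stream without unpickling -- unpickling would run the payload.
pickletools.dis(data)
```

Scanning `malicious_model.pkl` should produce a finding, since the stream imports `builtins.print` and invokes it via the `REDUCE` opcode.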
| Command | Use Case | Speed | Tests |
|---|---|---|---|
| `uv run pytest -n auto -m "not slow and not integration"` | Development | Fast | Unit tests only |
| `uv run pytest -n auto -x --tb=short` | Quick feedback | Fast, fail-fast | All tests, stop on first failure |
| `uv run pytest -n auto --cov=modelaudit` | CI/Full validation | Complete | All tests with coverage |
| `uv run pytest -k "test_pattern" -n auto` | Specific testing | Targeted | Pattern-matched tests |
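The `slow` and `integration` filters above are ordinary pytest markers. A sketch of how a test opts into one (marker names are assumed to be registered in `pyproject.toml`):

```python
import pytest

@pytest.mark.slow  # excluded by -m "not slow and not integration"
def test_scan_large_archive():
    # Placeholder for an expensive end-to-end scan.
    assert True

def test_scan_small_pickle():
    # Unmarked tests run in the fast development profile.
    assert True
```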
```bash
# FAST - Development testing (excludes slow tests)
uv run pytest -n auto -m "not slow and not integration"

# QUICK FEEDBACK - Fail fast on first error
uv run pytest -n auto -x --tb=short

# COMPLETE - Full test suite with coverage
uv run pytest -n auto --cov=modelaudit

# SPECIFIC - Test individual files or patterns
uv run pytest tests/test_pickle_scanner.py -n auto -v
uv run pytest -k "test_scanner" -n auto

# PERFORMANCE - Profile slow tests
uv run pytest --durations=10 --tb=no
```

```bash
# Run linting and formatting with Ruff
uv run ruff check modelaudit/ tests/        # Check code
uv run ruff check --fix modelaudit/ tests/  # Automatically fix lint issues
uv run ruff format modelaudit/ tests/       # Format code

# Type checking
uv run mypy modelaudit/

# Build package
uv build
```

Code Quality Tools:
This project uses modern Python tooling for maintaining code quality:
- Ruff: Ultra-fast Python linter and formatter (replaces Black, isort, flake8)
- MyPy: Static type checker
- Prettier: Fast formatter for JSON and YAML files
File Formatting with Prettier:
```bash
# Format JSON and YAML files
npx prettier --write .

# Check formatting (for CI)
npx prettier --check .
```

```bash
# Create feature branch
git checkout -b feature/your-feature-name

# Make your changes...
git add <specific-files>
git commit -m "feat: description"
git push origin feature/your-feature-name
```

Pull Request Guidelines:
- Create PR against the `main` branch
- Follow Conventional Commits format (`feat:`, `fix:`, `docs:`, etc.)
- All PRs are squash-merged with a conventional commit message
- Keep changes small and focused
- Add tests for new functionality
- Update documentation as needed
Detection quality reports are high priority and should be reproducible.
Include the following:
- ModelAudit version and Python version
- Exact command used (including flags)
- Expected result vs actual result
- Minimal reproducible sample (or a redacted/synthetic equivalent)
- Why the behavior is a false positive or false negative
If sharing a model artifact is not possible, include:
- File format and extension
- Relevant metadata/layout details
- Representative snippets (redacted) that trigger or evade detection
For sensitive security bypass details, use the private disclosure process in SECURITY.md instead of a public issue.
We use Conventional Commits format:
- `feat:` - New features
- `fix:` - Bug fixes
- `docs:` - Documentation updates
- `test:` - Adding or updating tests
- `refactor:` - Code refactoring
- `perf:` - Performance improvements
- `chore:` - Maintenance tasks
```
modelaudit/
├── modelaudit/          # Main package
│   ├── scanners/        # Scanner implementations (one per format)
│   ├── utils/           # Utility modules
│   ├── cli.py           # CLI interface
│   └── core.py          # Core scanning logic
├── tests/               # Test suite
├── docs/                # Contributor and security documentation
└── .github/             # GitHub Actions workflows
```
When adding a new scanner for a model format:
- Create a new scanner file in `modelaudit/scanners/`
- Implement the scanner class following existing patterns
- Add appropriate tests in `tests/`
- Update documentation
- Add any new dependencies to `pyproject.toml`
For the security-focused implementation checklist, see docs/agents/new-scanner-quickstart.md.
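As a sketch of the kind of check a pickle scanner performs, the standard library's `pickletools` can surface dangerous imports without ever unpickling. Everything here (the denylist, the function name) is illustrative; consult the existing scanners for ModelAudit's actual interfaces:

```python
import pickle
import pickletools

# Illustrative denylist; ModelAudit's real rules are broader.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins"}

def suspicious_globals(data: bytes) -> list[tuple[str, str]]:
    """Return (module, name) pairs that a pickle stream would import."""
    found = []
    strings = []  # recent string constants, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":  # protocols 0-3: arg is "module name"
            module, name = arg.split(" ", 1)
            found.append((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.append((strings[-2], strings[-1]))
    return [(m, n) for m, n in found if m in SUSPICIOUS_MODULES]

class Gadget:
    def __reduce__(self):
        return (eval, ("1 + 1",))  # classic code-execution vector

print(suspicious_globals(pickle.dumps(Gadget())))    # [('builtins', 'eval')]
print(suspicious_globals(pickle.dumps({"w": [1]})))  # []
```

A real scanner would also bound memory use, handle truncated streams, and report findings through the shared result types rather than printing.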
- Follow PEP 8 style guidelines
- Use type hints where appropriate
- Write descriptive docstrings
- Keep functions focused and small
- Add comments for complex logic
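Those conventions in one small example (a hypothetical helper, not taken from the codebase):

```python
from pathlib import Path

def read_magic(path: Path, size: int = 8) -> bytes:
    """Return the first ``size`` bytes of a file.

    Scanners check leading magic bytes to confirm a file's real format
    before trusting its extension.

    Args:
        path: File to inspect.
        size: Number of leading bytes to read.

    Returns:
        Up to ``size`` bytes from the start of the file.
    """
    with path.open("rb") as fh:
        return fh.read(size)
```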
- Write tests for all new functionality
- Ensure tests pass locally before submitting PR
- Include both unit tests and integration tests
- Test with different model formats and edge cases
```bash
# Run full test suite with coverage (optimized parallel execution)
uv run pytest -n auto --cov=modelaudit --cov-report=html

# Check for type errors
uv run mypy modelaudit/

# Format and lint code
uv run ruff format modelaudit/ tests/
uv run ruff check --fix modelaudit/ tests/

# Quick development test cycle
uv run pytest -n auto -m "not slow and not integration" -x

# Create test models for specific formats
python -c "import torch; torch.save({'model': 'data'}, 'test.pt')"
python -c "import pickle; pickle.dump({'test': 'malicious'}, open('malicious.pkl', 'wb'))"
```

Releases are fully automated via release-please and GitHub Actions. When conventional commits land on `main`, release-please opens (or updates) a release PR. Merging that PR triggers the pipeline that builds the package, publishes it to PyPI, generates an SBOM, and attests provenance.
See docs/agents/release-process.md for the full workflow details.
When reporting issues:
- Use the GitHub issue templates
- Include ModelAudit version and Python version
- Provide minimal reproduction steps
- Include error messages and stack traces
- Mention the model format and size if applicable
For feature requests:
- Check existing issues first
- Describe the use case clearly
- Explain why it would benefit users
- Consider proposing an implementation approach
- GitHub Issues: For bugs and feature requests
- GitHub Discussions: For questions and general discussion
- Email: For security issues or private matters
Thank you for contributing to ModelAudit!