Welcome! This guide will help you contribute to Kornia.
- AI Policy & Authorship: See AI_POLICY.md for the complete policy. Summary:
  - Kornia accepts AI-assisted code but strictly rejects AI-generated contributions where the submitter acts as a proxy.
  - Proof of Verification: PRs must include local test logs proving execution.
  - Hallucination & Redundancy Ban: Use existing `kornia` utilities and never reinvent the wheel, unless the utility doesn't exist.
  - The "Explain It" Standard: You must be able to explain any code you submit.
  - Violations result in immediate closure or rejection.
- 15-Day Rule: PRs with no activity for 15+ days will be automatically closed.
- Transparency: All discussions must be public.

We're all volunteers. These policies help us focus on high-impact work.
- Ask/Answer questions:
  - GitHub Discussions
  - #kornia tag in PyTorch Discuss
  - Discord
  - Don't use GitHub issues for Q&A.
- Report bugs via GitHub issues:
  - Search for existing issues first.
  - Use the bug report template.
  - Include a clear description, reproduction steps, package versions, and a code sample.
- Fix bugs or add features:
  - Check "help wanted" issues for starting points.
  - Follow the development setup below.
  - See the Pull Request section for PR requirements.
- Donate resources:
  - Open Collective
  - GitHub Sponsors
  - We're looking for CUDA server donations for testing.
- Fork the repository.

- Clone your fork and add upstream:

  ```bash
  git clone git@github.com:<your Github username>/kornia.git
  cd kornia
  git remote add upstream https://github.com/kornia/kornia.git
  ```
- Create a branch (don't work on `main`):

  ```bash
  git checkout upstream/main -b feat/foo_feature
  # or
  git checkout upstream/main -b fix/bar_bug
  ```
Development environment
We use pixi for package and environment management.
Install Pixi:
```bash
# On Linux/macOS
curl -fsSL https://pixi.sh/install.sh | bash

# On Windows (PowerShell)
irm https://pixi.sh/install.ps1 | iex

# Or using conda/mamba
conda install -c conda-forge pixi
```
Set up the development environment:
```bash
# Install all dependencies (defaults to Python 3.11)
pixi install

# For specific Python versions
pixi install -e py312  # Python 3.12
pixi install -e py313  # Python 3.13

# For CUDA development (requires reinstall of PyTorch)
pixi run -e cuda install
```
Available tasks:
Kornia provides several tasks via pixi for common development workflows:
```bash
# Installation
pixi run install                # Install dev dependencies
pixi run install-docs           # Install dev + docs dependencies

# Testing
pixi run test                   # Run tests (configure via KORNIA_TEST_* env vars)
pixi run test-f32               # Run tests with float32
pixi run test-f64               # Run tests with float64
pixi run test-slow              # Run slow tests
pixi run test-quick             # Run quick tests (excludes jit, grad, nn)

# CUDA testing (requires cuda environment)
pixi run -e cuda test-cuda      # Run tests on CUDA
pixi run -e cuda test-cuda-f32  # Run CUDA tests with float32
pixi run -e cuda test-cuda-f64  # Run CUDA tests with float64

# Code quality
pixi run lint                   # Run ruff linting
pixi run typecheck              # Run type checking with ty
pixi run doctest                # Run doctests

# Documentation
pixi run build-docs             # Build documentation

# Utilities
pixi run clean                  # Clean Python cache files
```
Environment variables for tests:
Tests can be configured using environment variables:
```bash
# Set device (cpu, cuda, mps, tpu)
export KORNIA_TEST_DEVICE=cuda

# Set dtype (float32, float64, float16, bfloat16)
export KORNIA_TEST_DTYPE=float32

# Run slow tests
export KORNIA_TEST_RUNSLOW=true

# Then run tests
pixi run test
```
Dependencies: Defined in `pyproject.toml`. Update it and run `pixi install`.

CUDA: The CUDA environment uses PyTorch with CUDA 12.1. Run `pixi run -e cuda install` to set it up.

- Develop and test:
Create test cases for your code. Run tests with:
```bash
# Run all tests
pixi run test

# Run specific test file
pixi run test tests/<TEST_TO_RUN>.py

# For specific test with pytest options
pixi run test tests/<TEST_TO_RUN>.py --dtype=float32,float64 --device=all
```
dtype options: `bfloat16`, `float16`, `float32`, `float64`, `all`

device options: `cpu`, `cuda`, `tpu`, `mps`, `all`

We use pre-commit for code quality. Install it with `pre-commit install`. See coding standards below.
- Set up your development environment (see above)
- Edit files in `docs/`
- Build docs: `make build-docs`
- Preview: `open docs/build/html/index.html`
- Submit a PR following the Pull Request guidelines
- Write small incremental changes:
  - Commit small, logical changes
  - Write clear commit messages
  - Avoid large files
- Add tests:
  - Write unit tests for each functionality
  - Use helpers from `testing/`
  - Put test utilities (not tests or fixtures) in `testing/`:

  ```python
  from testing.base import BaseTester

  class TestMyFunction(BaseTester):
      # To compare the actual and expected tensors use `self.assert_close(...)`

      def test_smoke(self, device, dtype):
          # test the function with different parameter combinations, to check
          # that the function at least runs with all the allowed arguments
          pass

      def test_exception(self, device, dtype):
          # test the exceptions which can occur in your function, e.g.:
          # with pytest.raises(<raised Error>) as errinfo:
          #     your_function(<set of parameters that raise the error>)
          # assert '<msg of error>' in str(errinfo)
          pass

      def test_cardinality(self, device, dtype):
          # test that the output shape is as expected for different parameters
          pass

      def test_feature_foo(self, device, dtype):
          # test basic functionality
          pass

      def test_feature_bar(self, device, dtype):
          # test another functionality
          pass

      def test_gradcheck(self, device):
          # test the functionality's gradients using `self.gradcheck(...)`
          pass

      def test_dynamo(self, device, dtype, torch_optimizer):
          # test the functionality using the dynamo optimizer, e.g.:
          # inputs = (...)
          # op = your_function
          # op_optimized = torch_optimizer(op)
          # self.assert_close(op(inputs), op_optimized(inputs))
          pass
  ```
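As a concrete illustration of the `test_exception` pattern above, here is a minimal, self-contained sketch (`my_function` is a hypothetical operator, not a kornia API):

```python
import pytest
import torch

def my_function(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical operator used only for illustration.
    if x.dim() != 3:
        raise ValueError(f"Expected a 3D tensor. Got: {x.shape}")
    return x * 2

def test_exception():
    # Assert both the exception type and the message content.
    with pytest.raises(ValueError) as errinfo:
        my_function(torch.rand(2, 2))
    assert "Expected a 3D tensor" in str(errinfo)
```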
- Test coverage: Cover different devices, dtypes, and batch sizes. Use the `--dtype` and `--device` pytest arguments to generate test combinations:

  ```python
  import pytest
  import torch

  @pytest.mark.parametrize("batch_size", [1, 2, 5])
  def test_smoke(batch_size, device, dtype):
      x = torch.rand(batch_size, 2, 3, device=device, dtype=dtype)
      assert x.shape == (batch_size, 2, 3)
  ```
- Type hints (Python >= 3.11):
  - Use typing when it improves readability
  - Use `torch.Tensor` directly for type hints (preferred) or import from `kornia.core` for backward compatibility
  - Use `torch.nn.Module` directly for module classes (preferred) or import from `kornia.core` for backward compatibility
  - For non-JIT modules, use `from __future__ import annotations`
  - Always type function inputs and outputs
  - Run type checking with `pixi run typecheck` (uses `ty`)

  ```python
  from __future__ import annotations

  import torch

  def homography_warp(
      patch_src: torch.Tensor,
      dst_homo_src: torch.Tensor,
      dsize: tuple[int, int],
      mode: str = 'bilinear',
      padding_mode: str = 'zeros',
  ) -> torch.Tensor:
      ...
  ```
  For module classes:

  ```python
  from __future__ import annotations

  import torch
  import torch.nn as nn

  class MyModule(nn.Module):
      def forward(self, x: torch.Tensor) -> torch.Tensor:
          return x
  ```
- Code style:
  - Follow PEP8
  - Use f-strings: PEP 498
  - Line length: 120 characters
  - Comments must be written in English and verified by a human with a good understanding of the code
  - Obvious or redundant comments are not allowed (see Best Practices for comment guidelines)
  - W504 (line break after binary operator) is sometimes acceptable. Example:

  ```python
  determinant = (A[:, :, 0:1, 0:1] * A[:, :, 1:2, 1:2] -
                 A[:, :, 0:1, 1:2] * A[:, :, 1:2, 0:1])
  ```
- Third-party libraries: Not allowed. Only PyTorch.
This section provides guidance for contributing to Kornia, with a focus on Python and PyTorch best practices, performance, and maintainability.
- Discuss First: Always discuss your proposed changes in Discord or via a GitHub issue before starting implementation. This ensures your work aligns with project goals and avoids duplicate effort.

- Start Small: If you're new to the project, start with small bug fixes or documentation improvements to familiarize yourself with the codebase and contribution process.

- Understand the Codebase: Take time to explore existing code patterns, architecture, and conventions before implementing new features.

- Review Existing Utilities: Before implementing new functionality, search the codebase for existing utilities in `kornia`. This aligns with the AI Policy's Hallucination & Redundancy Ban (see Policies and Guidelines).

- Keep PRs Focused: Each PR should address a single concern. If you're working on multiple features, create separate PRs for each.

- Test Locally First: Always run all relevant tests locally before submitting (see Pull Request for requirements):

  ```bash
  pixi run lint       # Check formatting and linting
  pixi run test       # Run all tests
  pixi run typecheck  # Verify type checking
  ```
- Update Documentation: When adding new features or changing behavior, update docstrings for public APIs. For documentation contributions, see Contributing to Documentation.
- Performance Considerations:
  - Prefer in-place operations when possible (e.g., `tensor.add_(other)` vs `tensor = tensor.add(other)`)
  - Use tensor views and slicing instead of copying when possible
  - Leverage PyTorch's vectorized operations over Python loops
  - Profile before optimizing (use `torch.profiler` or `cProfile`)
  - Consider memory efficiency for large tensors (use appropriate dtypes, avoid unnecessary copies)
  - Use `torch.jit.script` or `torch.compile` for performance-critical paths when appropriate
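The first three points can be sketched as follows (a minimal illustration, not kornia code; `normalize_inplace` is a hypothetical helper):

```python
import torch

def normalize_inplace(x: torch.Tensor) -> torch.Tensor:
    # In-place ops (trailing underscore) reuse x's memory instead of allocating.
    x.sub_(x.mean())
    x.div_(x.std() + 1e-8)
    return x

x = torch.arange(6, dtype=torch.float32)

# A view shares storage with the original tensor; no data is copied.
assert x.view(2, 3).data_ptr() == x.data_ptr()

# Vectorized over the whole tensor, instead of a Python loop over elements.
squared = x * x
```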
- Code Clarity:
  - Use descriptive variable and function names that convey intent
  - Keep functions focused and single-purpose
  - Prefer clear code over comments; when comments are needed, explain "why" not "what"
  - Avoid over-engineering; start simple and refactor when needed
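To illustrate the "why, not what" comment guideline, here is a contrived before/after sketch (the variable names are hypothetical):

```python
import torch

# Redundant "what" comment (avoid): it restates the code.
# divide the tensor by its norm
# x = x / x.norm()

# Useful "why" comment (prefer): it explains the non-obvious intent.
eps = 1e-8
x = torch.tensor([3.0, 4.0])
x = x / (x.norm() + eps)  # eps guards against division by zero for all-zero inputs
```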
- Tensor Operations:
  - Use `kornia` utilities instead of reimplementing common operations (see AI Policy)
  - Ensure operations are device-agnostic (work on CPU, CUDA, MPS, etc.)
  - Support multiple dtypes (float32, float64, float16, bfloat16) when applicable
  - Handle batched and non-batched inputs consistently
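A minimal sketch of the device, dtype, and batching points (`sum_points` is a hypothetical helper, not a kornia utility):

```python
import torch

def sum_points(points: torch.Tensor) -> torch.Tensor:
    # Accept (N, 2) or (B, N, 2) inputs consistently: add a batch dim if missing.
    if points.dim() == 2:
        points = points[None]
    # Derived tensors inherit the input's device and dtype, so the function
    # works unchanged on CPU, CUDA, or MPS, and for float16/32/64.
    return points.sum(dim=-2)

out = sum_points(torch.ones(5, 2))       # unbatched -> shape (1, 2)
out_b = sum_points(torch.ones(4, 5, 2))  # batched   -> shape (4, 2)
```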
- Write tests for happy paths, error cases, edge conditions, boundary conditions, and integration scenarios
- Use `BaseTester` from `testing.base` for consistent test structure (see Coding Standards for examples)
- Test across different devices and dtypes using pytest parametrization (see Coding Standards for examples)
- Make tests deterministic, fast, and independent
- Use descriptive test names; test both forward pass and gradients when applicable
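For the determinism point, one way to get reproducible test inputs is to seed a local generator (a sketch; `make_deterministic_input` is a hypothetical helper):

```python
import torch

def make_deterministic_input(shape: tuple[int, ...]) -> torch.Tensor:
    # A locally seeded generator keeps the test reproducible without
    # mutating the global RNG state shared by other tests.
    gen = torch.Generator().manual_seed(42)
    return torch.rand(*shape, generator=gen)

a = make_deterministic_input((2, 3))
b = make_deterministic_input((2, 3))
```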
- Review your own PR first: check for typos/formatting, verify tests pass, ensure documentation is updated, and confirm AI policy compliance
- Respond promptly to review feedback
- Be open to feedback and explain your decisions when questioned
- See Pull Request section for review requirements
- Understand every line of code you submit; you must be able to explain it during review (see AI Policy)
- Review AI output thoroughly: check for unnecessary complexity, verify it follows project conventions, ensure it uses existing utilities, and test it
- Be transparent in PR descriptions about what was AI-assisted and what you manually reviewed (see Pull Request for AI Usage Disclosure requirements)
- AI Usage Disclosure in PR Template: When completing the PR template's "AI Usage Disclosure" section:
- Mark as 🟢 No AI used only if you wrote all code manually without any AI assistance
- Mark as 🟡 AI-assisted if you used AI tools (Copilot, Cursor, etc.) for boilerplate/refactoring but manually reviewed and tested every line
- Mark as 🔴 AI-generated if an AI agent generated the code, PR description, or commit messages, or if you cannot explain the logic without referring to the AI's output. Important: PRs marked as AI-generated are subject to stricter scrutiny and may be immediately closed if the logic cannot be explained
- Write clear, concise PR descriptions (see Pull Request for requirements)
- Always link to related issues or discussions in your PR description
- Ask questions in Discord or PR comments if unsure; it's better to clarify early than to rework later
Before submitting a PR, you must:
- Open an issue first: All PRs must be linked to an existing issue. If no issue exists for your work, create one using the appropriate template (bug report or feature request).

- Wait for maintainer approval: A maintainer must review and approve the issue before you start working on it. New issues are automatically labeled with `triage` and will receive a welcome message explaining this process.

- Wait for assignment: You must be assigned to the issue by a maintainer before submitting a PR. This ensures:
  - The issue aligns with project goals
  - No duplicate work is being done
  - Proper coordination of contributions

- Do not start work until assigned: PRs submitted without prior issue approval and assignment may be closed or receive warnings during automated validation.
This workflow helps maintain quality, avoid conflicts, and ensure contributions align with the project's direction. The automated PR validation workflow will check these requirements and post warnings if they're not met.
Requirements:

- Issue approval and assignment: The linked issue must be approved by a maintainer and you must be assigned to it (see workflow above)
- Link the PR to an issue (use "Closes #123" or "Fixes #123")
- Pass all local tests before submission
- Provide proof of local test execution in the PR description (this is especially important for first-time contributors)
- Fill in the pull request template
- AI Policy Compliance: Must comply with AI_POLICY.md. This includes:
  - Using existing `kornia` utilities instead of reinventing them
  - Being able to explain all submitted code
  - Completing the AI Usage Disclosure in the PR template accurately (see AI-Assisted Development for guidance on when to mark as AI-generated)
- 15-Day Rule: Inactive PRs (>15 days) will be closed
- Transparency: Keep discussions public
Code review:
- By default, GitHub Copilot will check the PR against the AI Policy and the coding standards.
- Code must be reviewed by the repository owner or a senior contributor, who has the final say on the quality and acceptance of the PR.
Note: Tickets may be closed during cleanup. Feel free to reopen if you plan to finish the work.
CI checks:
- All tests pass
- Test coverage maintained
- Type checking (ty)
- Documentation builds successfully
- Code formatting (ruff, docformatter via pre-commit)
Fix any failing checks before your PR can be considered.
By contributing, you agree to license your contributions under the Apache License. See LICENSE.