Thank you for your interest in contributing to the Llama Stack K8s Operator! This document provides guidelines and instructions for contributing to this project.
- Go (version specified in `go.mod`)
- Make
- pre-commit
This project uses pre-commit hooks to ensure code quality and consistency. The pre-commit hooks are automatically run on every commit and are also checked in our CI pipeline.
- Install pre-commit by following the instructions in the pre-commit documentation.
- Install the pre-commit hooks (optional):

  ```bash
  pre-commit install
  ```
You can run the pre-commit hooks manually on all files:

```bash
pre-commit run --all-files
```

Or on specific files:

```bash
pre-commit run --files file1 file2
```

The pre-commit hooks are also run in our CI pipeline on every pull request and push to the main branch. The CI will fail if:
- Any pre-commit hooks fail
- There are uncommitted changes after running pre-commit
- There are new files that haven't been committed
To avoid CI failures, always run pre-commit locally before pushing your changes:

```bash
pre-commit run --all-files
git add .
git commit -m "Your commit message"
```

- Ensure your code passes all pre-commit checks locally
- Create a pull request against the main branch
- Ensure all CI checks pass
- Wait for review and address any feedback
Please follow the project's code style guidelines. The pre-commit hooks will help enforce many of these automatically.
All error messages in the codebase must follow a consistent format to improve readability and maintainability. The pre-commit hook `check-go-error-messages` enforces these rules automatically.
- All wrapped error messages must start with "failed to"
- Error messages should be descriptive and actionable
The project uses `make test` to run the unit tests. By default, this runs all tests except end-to-end tests, with code coverage enabled.

```bash
# Run all tests (default behavior)
make test

# Run tests for a specific package
make test TEST_PKGS=./pkg/deploy

# Run a specific test function
make test TEST_PKGS=./pkg/deploy TEST_FLAGS="-v -run TestRenderManifest"

# Run tests with verbose output
make test TEST_FLAGS="-v -coverprofile cover.out"

# Run tests for multiple packages
make test TEST_PKGS="./pkg/deploy ./controllers"
```

The `make test` target supports the following variables for customization:

- `TEST_PKGS` - Space-separated list of packages to test (default: all packages except e2e)
- `TEST_FLAGS` - Additional flags to pass to `go test` (default: `-coverprofile cover.out`)
For rapid development cycles, you can use focused test runs:

```bash
# Example TDD workflow for working on the deploy package
make test TEST_PKGS=./pkg/deploy TEST_FLAGS="-v -run TestRenderManifest"
```

- DAMP and DRY together: Apply DAMP (Descriptive And Meaningful Phrases) to test scenarios, and DRY to implementation details. Keep test intent explicit while extracting common setup logic.
- Use AAA pattern: Arrange, Act, Assert with clear comments separating each phase.
- Integration tests: Use production constants to verify the controller applies defaults correctly.
- E2E tests: Use test-owned constants to focus on user workflows, not implementation details.
- Make test intent obvious: Use descriptive names and explicit values that show what each test is verifying.
- No shared state: Each test should be independent with unique namespaces and fresh instances.
- Use builders for variations: Create test instances with clear intent using the builder pattern.
- Async operations: Use `require.Eventually` with appropriate timeouts for Kubernetes operations.
```go
// Good: Descriptive test names that explain behavior
{
	name: "No storage configuration - should use emptyDir",
	buildInstance: func(namespace string) *llamav1alpha1.LlamaStackDistribution {
		return NewDistributionBuilder().
			WithStorage(nil). // Clear intent: testing emptyDir behavior
			Build()
	},
},

// Bad: Vague test name that doesn't explain expected behavior
{
	name: "test nil storage",
	// ...
}
```

Focus: Tests should survive refactoring and clearly communicate what behavior they're verifying.
If you have any questions about contributing, please open an issue in the repository.