Thanks for considering contributing! Sheaf is built for plural systems, but we welcome contributions from anyone who shares our goals, including singlets.
Please read the Code of Conduct before participating.
- Python 3.12+
- Node.js 20+
- Docker and Docker Compose (for PostgreSQL and Redis)
```bash
# Clone the repo
git clone https://github.com/sheaf-project/sheaf.git
cd sheaf

# Copy env and start infrastructure
cp .env.example .env
docker compose up db redis -d

# Backend
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pip install -e ./sheaf_dev  # optional: dev-only tools (demo wipe, etc.)
alembic upgrade head
uvicorn sheaf.main:app --reload

# Frontend (separate terminal)
cd web
npm install
npm run dev
```

The API runs on http://localhost:8000 (docs at /v1/docs), and the web UI on http://localhost:5173.
Use `run_tests.sh` to spin up a dedicated, isolated Docker stack, run tests against every server configuration in sequence, then tear everything down:

```bash
./run_tests.sh
```

This tests four configurations: selfhosted with no admin step-up, selfhosted with password step-up, selfhosted with TOTP step-up, and saas mode. It uses ports 8001/5433/6380 so it doesn't conflict with a running dev stack.
```bash
# Skip rebuilding the image if you haven't changed backend code:
./run_tests.sh --no-build
```

Alternatively, start a server first, then run pytest directly. You need `SHEAF_TEST_DB_URL` so the `admin_client` fixture can promote a test user to admin directly in the DB — the default `DATABASE_URL` uses Docker's internal `db` hostname, which isn't reachable from the host:
```bash
docker compose up db redis -d
uvicorn sheaf.main:app --reload &
export SHEAF_TEST_DB_URL="postgresql+asyncpg://sheaf:<POSTGRES_PASSWORD>@localhost:5432/sheaf"
pytest -v
```

Replace `<POSTGRES_PASSWORD>` with the value from your `.env`.
- `client` — unauthenticated httpx client
- `auth_client` — registers a fresh user per test, sets Bearer token
- `admin_client` — registers a fresh user, promotes to admin directly via DB, completes admin step-up automatically (adapts to whatever `ADMIN_AUTH_LEVEL` the server has configured)
- `raw_admin_client` — same as `admin_client` but skips step-up — use this to test step-up enforcement
Test markers gate config-specific tests: `admin_auth_password`, `admin_auth_totp`, `saas`. The conftest skips them unless the matching server config is active.
```bash
# Backend
ruff check sheaf/

# Frontend
cd web
npm run lint
npx tsc --noEmit
```

Both must pass with zero errors.
Create migrations with Alembic. The Docker entrypoint runs `alembic upgrade head` on startup.
```bash
# Generate migration from model changes
alembic revision --autogenerate -m "description"

# Apply migrations
alembic upgrade head
```

When adding enum columns, ensure the migration creates the Postgres enum type with lowercase values to match the StrEnum values.
If your branch lives long enough to be rebased onto new schema changes — or its migration revision id ever needs renumbering to resolve a chain conflict — write the `upgrade()` so it can run on a DB that already has a previous version applied. Use the SQLAlchemy inspector to check before each `add_column` / `create_table`:
```python
import sqlalchemy as sa
from alembic import op

bind = op.get_bind()
inspector = sa.inspect(bind)
existing_cols = {c["name"] for c in inspector.get_columns("my_table")}
if "my_new_col" not in existing_cols:
    op.add_column("my_table", sa.Column("my_new_col", ...))
```

Why: dev DBs that ran the original revision id won't know to skip the renumbered one, and the app crashes in a restart loop before you can `alembic stamp` it manually. Production isn't affected, but everyone testing the branch will hit it.
Open an issue. Include:
- What you expected to happen
- What actually happened
- Steps to reproduce
- Your environment (self-hosted or hosted, browser, OS)
Open an issue tagged as a feature request. Describe the use case — what are you trying to do and why?
If you're coming from SimplyPlural, we're especially interested in hearing about features you relied on, workflows that worked well, and things you wished were different.
- Fork the repo and create a feature branch from `main`
- Make your changes
- Ensure all linting passes (`ruff check sheaf/` and `cd web && npm run lint && npx tsc --noEmit`)
- Ensure tests pass (`./run_tests.sh` for the full suite, or `pytest` against a local server)
- Open a PR with a clear description of what and why
- Keep PRs focused. One feature or fix per PR.
- Write clear commit messages.
- If your change touches the data model, include an Alembic migration.
- If your change adds an API endpoint, add a test.
- Don't include unrelated formatting changes, refactors, or dependency bumps.
Releases are tag-driven and gated on manual approval. The workflow:

- Bump the version in `pyproject.toml` and `web/package.json` to match the target tag (e.g. `0.1.1`).
- Move the `## [Unreleased]` section in `CHANGELOG.md` to a new `## [v0.1.1]` heading. The release workflow extracts that section verbatim into the GitHub release body.
- Land those changes on `main` via PR.
- Tag and push: `git tag v0.1.1 && git push --tags`.
- The CI workflow's `docker` job builds, signs, and attests the images. Then the `release` job pauses for human approval — the request shows up under repo Actions → workflow run → "Review deployments". Approving creates the GitHub release and uploads the frontend tarball + build manifest.
- If the tag's version doesn't match `pyproject.toml`, the release job fails before publishing — re-tag rather than overriding.
v0.x.y releases are tagged as GitHub prereleases automatically (until v1.0.0).
The manual gate requires a configured GitHub Environment:
- Repo Settings → Environments → New environment named `release`.
- Enable "Required reviewers" and add the maintainers who can sign off on releases.
- Optionally add a deployment protection rule (e.g. only allow `refs/tags/v*`).
Without the environment, the release job runs with no approval gate. Create it before cutting any tag you actually want to publish.
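For reference, the approval gate hinges on the job's `environment` key. A minimal sketch of how a release job references it (the job layout and step below are illustrative, not the project's actual workflow file):

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    # The job pauses here for "Review deployments" approval — but only
    # if the "release" environment exists with Required reviewers enabled.
    environment: release
    steps:
      - name: Publish GitHub release
        run: ...
```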
Before making significant changes, it helps to understand a few design decisions:
- User != System. A user is an auth identity. A system is the plural system profile. They're 1:1 today but separated for future flexibility — do not poke holes in the separation between the two.
- Self-hosted first. The codebase supports both self-hosting and a hosted tier without forking. The `SHEAF_MODE` config flag controls which features are active.
- Dev-only code stays in `sheaf_dev/`. Destructive tools (database wipes, demo resets) belong in the `sheaf_dev` package, never in `sheaf`. The production Docker image doesn't include it by default — the code physically cannot exist in production. To include dev tools in a Docker build: `INCLUDE_DEV_TOOLS=true docker compose up -d --build`. For local dev: `pip install -e ./sheaf_dev`. The job system loads dev jobs via `try/except ImportError`, so no configuration error can activate code that isn't there.
- Encryption is application-level. Email and TOTP secrets are encrypted before storage. Lookups use blind indexes. Don't bypass this.
- All IDs are UUIDs. No auto-increment.
- Enums use StrEnum with lowercase values. SQLAlchemy Enum columns must use `values_callable=lambda e: [m.value for m in e]` to match.
- Encrypted fields (`email`, `totp_secret`) use `crypto.encrypt()` / `crypto.decrypt()`. Lookups use blind indexes (`crypto.blind_index()` — a keyed HMAC derived from the encryption key, not plain SHA-256).
- Auth dependencies: use `get_current_user` for authenticated endpoints, `get_admin_user` for admin-only (requires `is_admin=True` or the `admin:read` scope), `get_admin_write_user` for mutating admin endpoints (`admin:write`), and `get_current_user_optional` for public endpoints that optionally use auth.
- Scope enforcement: all resource endpoints are gated by `require_scope()` from `sheaf/auth/dependencies.py`. Router-level read deps live in `sheaf/api/v1/router.py`; per-endpoint write/delete deps are on the individual route functions. Session/JWT auth bypasses scope checks (full access). Rules: `resource:write` and `resource:delete` both imply `resource:read`; nothing implies `resource:delete`. When adding a new endpoint, add the appropriate `dependencies=[Depends(require_scope(...))]`.
- API keys: stored as a SHA-256 hash only — the plaintext (`sk_…`) is returned once on creation. Valid scopes are defined in `_ALL_SCOPES` (`dependencies.py`) and `_VALID_SCOPES` (`auth.py`) — keep both in sync when adding new scopes. `admin:*` scopes can only be created by users with `is_admin=True`.
- File URLs: store the storage key (e.g. `avatars/{user_id}/{uuid}.png`), never a signed URL. Call `resolve_avatar_url(key)` from `sheaf/files.py` to get the appropriate URL at read time. Schemas use `@field_serializer("avatar_url")` to do this automatically.
- Database sessions: `get_db` yields a session and commits on success. For endpoints where the client needs the data immediately after the response (register, login), explicitly `await db.commit()` before returning.
- API versioning: all routes live under `/v1/`. New versions get a new directory.
- Frontend API calls: use `apiFetch()` from `lib/api-client.ts`. It handles auth headers, token refresh, and error parsing. All fetch calls use `credentials: "same-origin"` for cookie-based auth.
- Frontend state: TanStack Query for server state. Custom hooks in `hooks/` wrap query/mutation logic. No Redux or other global state.
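The blind-index concept can be sketched in a few lines. This is an illustration of the idea only — the function name, normalization, and key handling here are assumptions, not Sheaf's actual `crypto` module:

```python
import hashlib
import hmac

def blind_index(value: str, key: bytes) -> str:
    # Keyed HMAC-SHA256: the same input + key always yields the same
    # digest, so equality lookups (e.g. "find user by email") work
    # without storing plaintext. Unlike plain SHA-256, an attacker
    # without the key cannot precompute digests for candidate inputs.
    normalized = value.strip().lower()  # assumed normalization step
    return hmac.new(key, normalized.encode(), hashlib.sha256).hexdigest()

key = b"example-key"  # in practice, derived from the encryption key
idx = blind_index("User@Example.com", key)
assert idx == blind_index("user@example.com", key)  # lookup-stable
```

The trade-off: a blind index only supports exact-match lookups, never prefix or substring search — which is exactly the constraint the encrypted-fields rule implies.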
This is not negotiable. Sheaf handles deeply personal identity data.
- Never log or expose plaintext encrypted fields (email, TOTP secrets).
- Never store secrets in code or commit .env files.
- Validate all user input. Pydantic handles request validation; don't bypass it.
- Check ownership on all mutations. Every endpoint that modifies data must verify the resource belongs to the authenticated user's system.
- No path traversal. File paths must be validated with `resolve()` + `is_relative_to()`.
- Use parameterised queries only. SQLAlchemy handles this — don't use raw SQL strings.
- Refresh tokens are HttpOnly cookies, not stored in localStorage.
- API key plaintext is never stored. Only the SHA-256 hash is persisted. Return the plaintext once on creation; never log it.
- Never store signed file URLs. Store the key; resolve URLs at read time via `resolve_avatar_url()`.
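The `resolve()` + `is_relative_to()` check can be sketched with stdlib `pathlib`. The root path and function name below are hypothetical, not Sheaf's actual code:

```python
from pathlib import Path

UPLOAD_ROOT = Path("/srv/sheaf/uploads")  # hypothetical storage root

def safe_upload_path(user_supplied: str) -> Path:
    # resolve() collapses "../" segments (and symlinks, where the path
    # exists); is_relative_to() then rejects anything that escaped the
    # upload root.
    candidate = (UPLOAD_ROOT / user_supplied).resolve()
    if not candidate.is_relative_to(UPLOAD_ROOT.resolve()):
        raise ValueError(f"path escapes upload root: {user_supplied!r}")
    return candidate
```

A naive string-prefix check (`str(candidate).startswith(str(UPLOAD_ROOT))`) is not equivalent — it would accept a sibling directory like `/srv/sheaf/uploads-evil`.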
By contributing to Sheaf, you agree that your contributions will be licensed under AGPL-3.0-or-later.