Merged · 38 commits
81d1632
DM-51979
emanehab99 May 28, 2025
02d67ab
feat(test): add testcontainers dependency
tcjennings Jun 16, 2025
f0c8dbc
feat(python): Upgrade to python 3.12
tcjennings Jun 12, 2025
b699bf0
feat(type): Refactor db session typing
tcjennings Jun 12, 2025
4ad117a
chore(typing):
tcjennings Jun 20, 2025
a2b6103
fix(parse): Allow '-' in element names (fixes #186)
tcjennings Jun 26, 2025
c7f1edf
fix(config): add config.bps.artifact_path
tcjennings Jun 26, 2025
d57c9e3
fix(daemon): Correct types in due date checks (#188)
tcjennings Jun 26, 2025
7231e4b
chore(project): Update contribution docs and update release branch na…
tcjennings Jun 26, 2025
cc09857
chore(docs): Update deployment doc [skip ci]
tcjennings Jun 27, 2025
a8b07d4
fix: Initialize bps_submit_dir to None in BpsScriptHandler
tcjennings Jun 27, 2025
5ea2ce7
chore(deps): Add pytransitions
tcjennings Jun 30, 2025
bad82e5
fix(status): make reviewable a terminal unsuccessful script status
tcjennings Jul 1, 2025
bc99754
feat(db): Add migrations and models for v2 tables
tcjennings Jun 17, 2025
276b295
feat(frontend): enable frontend by config flag
tcjennings Jun 17, 2025
e68b8e0
chore(api): Relocate and refactor asgi router tags
tcjennings Jun 18, 2025
a9aa398
feat(test): add test infrastructure for v2
tcjennings Jun 17, 2025
9352918
feat(db): apply pool configurations to db session
tcjennings Jun 18, 2025
c0f2b95
feat(jsonpatch): Implement jsonpatch module for RFC6902 update ops
tcjennings Jun 19, 2025
a1e7987
feat(api): Add campaign_v2 and manifest_v2 routes
tcjennings Jun 18, 2025
09f4a46
feat(db): Update Models
tcjennings Jun 23, 2025
d7742da
feat(api): Refactor validating manifests
tcjennings Jun 25, 2025
444a743
feat(api): Add edge and node apis
tcjennings Jun 23, 2025
e0aa818
feat(graph): Implement graph functions
tcjennings Jun 26, 2025
c7911b3
feat(api): Implement v2 campaign graph api
tcjennings Jun 27, 2025
4a6923a
feat(jsonpatch): implement RFC7396 json patch function
tcjennings Jun 27, 2025
8f5f5df
fix(node)
tcjennings Jun 27, 2025
515d266
feat(api): Include order in paginated queries
tcjennings Jul 8, 2025
273383a
chore(test): update test fixtures
tcjennings Jul 1, 2025
3920ee4
feat(transitions)
tcjennings Jun 30, 2025
0f58afd
feat(campaign): patch status change
tcjennings Jul 9, 2025
fad1076
feat(api): Add activity logs router
tcjennings Jul 9, 2025
c12fc6d
chore(machine): Improve consistency between subclasses
tcjennings Jul 10, 2025
a9020f6
chore(session): refactor db session
tcjennings Jul 11, 2025
451c5e5
Fixes
tcjennings Jul 15, 2025
03cd07d
Merge pull request #200 from lsst-dm/tickets/DM-51979/add-config
tcjennings Aug 6, 2025
3b25a57
0.5.0
tcjennings Aug 6, 2025
860c5f6
fix(web_app): session typing
tcjennings Aug 6, 2025
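Two of the commits above implement JSON patch semantics (RFC 6902 update operations and an RFC 7396 merge-patch function). For reference, the RFC 7396 merge algorithm is small enough to sketch in full — this is a generic illustration of the spec, not the project's `jsonpatch` module:

```python
def json_merge_patch(target: object, patch: object) -> object:
    """Apply an RFC 7396 JSON Merge Patch to ``target`` and return the result.

    Per the RFC: a non-object patch replaces the target wholesale; within an
    object patch, a ``None`` (JSON null) value deletes the key, and any other
    value is merged recursively.
    """
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "remove this member"
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result
```

For example, patching `{"a": "b", "c": {"d": "e", "f": "g"}}` with `{"a": "z", "c": {"f": None}}` yields `{"a": "z", "c": {"d": "e"}}`, matching the worked example in the RFC.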
11 changes: 6 additions & 5 deletions .github/workflows/ci.yaml
Original file line number Diff line number Diff line change
@@ -12,6 +12,7 @@ name: "CI"

env:
UV_FROZEN: "1"
TEST__LOCAL_DB: "1" # signals test fixtures to not use testcontainers

jobs:
rebase-checker:
@@ -30,7 +31,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
python-version: "3.12"

- name: Run pre-commit
uses: pre-commit/action@v3.0.1
@@ -39,7 +40,7 @@
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.11"]
python-version: ["3.12"]
needs:
- rebase-checker
steps:
@@ -49,7 +50,7 @@
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
version: "0.6.x"
version: "0.7.x"
enable-cache: true
python-version: ${{ matrix.python-version }}

@@ -63,7 +64,7 @@
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.11"]
python-version: ["3.12"]
needs:
- lint
- mypy
@@ -75,7 +76,7 @@
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
version: "0.6.x"
version: "0.7.x"
enable-cache: true
python-version: ${{ matrix.python-version }}
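The new `TEST__LOCAL_DB` variable in this workflow signals the test fixtures to target an already-running local Postgres rather than starting a testcontainer. A minimal sketch of how such a switch is typically read — the helper name is hypothetical, not the project's actual fixture code:

```python
import os

def use_local_db() -> bool:
    # When TEST__LOCAL_DB=1 (as set in the CI env block above), fixtures
    # should connect to a locally provisioned Postgres instead of spinning
    # up a throwaway testcontainers instance.
    return os.environ.get("TEST__LOCAL_DB", "0") == "1"
```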

2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -11,7 +11,7 @@ repos:

- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.9.2
rev: v0.11.13
hooks:
- id: ruff
- id: ruff
2 changes: 1 addition & 1 deletion .python-version
@@ -1 +1 @@
3.11
3.12
2 changes: 2 additions & 0 deletions Makefile
@@ -44,6 +44,7 @@ update: update-deps init

.PHONY: build
build: export BUILDKIT_PROGRESS=plain
build: export COMPOSE_BAKE=true
build:
docker compose build cmservice
docker compose build cmworker
@@ -107,6 +108,7 @@ test: PGPORT=$(shell docker compose port postgresql 5432 | cut -d: -f2)
test: export DB__URL=postgresql://cm-service@localhost:${PGPORT}/cm-service
test: export DB__PASSWORD=INSECURE-PASSWORD
test: export DB__TABLE_SCHEMA=cm_service_test
test: export BPS__ARTIFACT_PATH=$(PWD)/output
test: run-compose
alembic upgrade head
pytest -vvv --asyncio-mode=auto --cov=lsst.cmservice --cov-branch --cov-report=term --cov-report=html ${PYTEST_ARGS}
3 changes: 2 additions & 1 deletion README.md
@@ -1,4 +1,5 @@
# cm-service
![Python](https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2Flsst-dm%2Fcm-service%2Frefs%2Fheads%2Fmain%2Fpyproject.toml)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![uv](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json)](https://github.com/astral-sh/uv)

@@ -8,7 +9,7 @@ https://cm-service.lsst.io.

## Developer Quick Start

You can build and run `cm-service` on any system which has Python 3.11 or greater, `uv`, `make`, and Docker w/ the
You can build and run `cm-service` on any system which has Python 3.12 or greater, `uv`, `make`, and Docker w/ the
Docker Compose V2 CLI plugin (this includes, in particular, recent MacOS with Docker Desktop). Proceed as
follows:

2 changes: 0 additions & 2 deletions alembic/README.md
@@ -3,8 +3,6 @@
Database migrations and schema evolution are handled by `alembic`, a database tool
that is part of the `sqlalchemy` toolkit ecosystem.

Alembic is included in the project's dependency graph via the Safir package.

## Running Alembic

The `alembic` tool establishes an execution environment via the `env.py` file which
250 changes: 250 additions & 0 deletions alembic/versions/1da92a1c740f_create_v2_tables.py
@@ -0,0 +1,250 @@
"""create v2 tables

Revision ID: 1da92a1c740f
Revises: acf951c80750
Create Date: 2025-06-13 14:56:31.238050+00:00

"""

from collections.abc import Sequence
from enum import Enum
from uuid import NAMESPACE_DNS

import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

from alembic import op

# revision identifiers, used by Alembic.
revision: str = "1da92a1c740f"
down_revision: str | None = "acf951c80750"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


DEFAULT_CAMPAIGN_NAMESPACE = "dda54a0c-6878-5c95-ac4f-007f6808049e"
"""UUID5 of name 'io.lsst.cmservice' in `uuid.NAMESPACE_DNS`."""

# DB model uses mapped columns with Python Enum types, but we do not care
# to use native enums in the database, so when we have such a column, this
# definition will produce a VARCHAR instead.
ENUM_COLUMN_AS_VARCHAR = sa.Enum(Enum, length=20, native_enum=False, create_constraint=False)


def upgrade() -> None:
# Create table for machines v2
machines_v2 = op.create_table(
"machines_v2",
sa.Column("id", postgresql.UUID(), nullable=False),
sa.Column("state", sa.PickleType, nullable=False),
sa.PrimaryKeyConstraint("id"),
if_not_exists=True,
)

# Create table for campaigns v2
campaigns_v2 = op.create_table(
"campaigns_v2",
sa.Column("id", postgresql.UUID(), nullable=False),
sa.Column("name", postgresql.VARCHAR(), nullable=False),
sa.Column("namespace", postgresql.UUID(), nullable=False, default=DEFAULT_CAMPAIGN_NAMESPACE),
sa.Column("owner", postgresql.VARCHAR(), nullable=True),
sa.Column(
"metadata",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.Column(
"configuration",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.Column("status", ENUM_COLUMN_AS_VARCHAR, nullable=False, default="waiting"),
sa.Column(
"machine", postgresql.UUID(), sa.ForeignKey(machines_v2.c.id, ondelete="CASCADE"), nullable=True
),
sa.PrimaryKeyConstraint("id"),
sa.UniqueConstraint("name", "namespace"),
if_not_exists=True,
)

# Create node and edges tables for campaign digraph
nodes_v2 = op.create_table(
"nodes_v2",
sa.Column("id", postgresql.UUID(), nullable=False),
sa.Column(
"namespace",
postgresql.UUID(),
sa.ForeignKey(campaigns_v2.c.id, ondelete="CASCADE"),
nullable=False,
),
sa.Column("name", postgresql.VARCHAR(), nullable=False),
sa.Column("version", postgresql.INTEGER(), nullable=False, default=1),
sa.Column("kind", ENUM_COLUMN_AS_VARCHAR, nullable=False, default="node"),
sa.Column(
"metadata",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.Column(
"configuration",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.Column("status", ENUM_COLUMN_AS_VARCHAR, nullable=False, default="waiting"),
sa.Column(
"machine", postgresql.UUID(), sa.ForeignKey(machines_v2.c.id, ondelete="CASCADE"), nullable=True
),
sa.PrimaryKeyConstraint("id"),
sa.UniqueConstraint("name", "version", "namespace"),
if_not_exists=True,
)

_ = op.create_table(
"edges_v2",
sa.Column("id", postgresql.UUID(), nullable=False),
sa.Column("name", postgresql.VARCHAR(), nullable=False),
sa.Column(
"namespace",
postgresql.UUID(),
sa.ForeignKey(campaigns_v2.c.id, ondelete="CASCADE"),
nullable=False,
),
sa.Column("source", postgresql.UUID(), sa.ForeignKey(nodes_v2.c.id), nullable=False),
sa.Column("target", postgresql.UUID(), sa.ForeignKey(nodes_v2.c.id), nullable=False),
sa.Column(
"metadata",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.Column(
"configuration",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.PrimaryKeyConstraint("id"),
sa.UniqueConstraint("source", "target", "namespace"),
if_not_exists=True,
)

# Create table for spec blocks v2 ("manifests")
_ = op.create_table(
"manifests_v2",
sa.Column("id", postgresql.UUID(), nullable=False),
sa.Column("name", postgresql.VARCHAR(), nullable=False),
sa.Column(
"namespace",
postgresql.UUID(),
sa.ForeignKey(campaigns_v2.c.id, ondelete="CASCADE"),
nullable=False,
),
sa.Column("version", postgresql.INTEGER(), nullable=False, default=1),
sa.Column("kind", ENUM_COLUMN_AS_VARCHAR, nullable=False, default="other"),
sa.Column(
"metadata",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.Column(
"spec",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.PrimaryKeyConstraint("id"),
sa.UniqueConstraint("name", "version", "namespace"),
if_not_exists=True,
)

# Create table for tasks v2
_ = op.create_table(
"tasks_v2",
sa.Column("id", postgresql.UUID(), nullable=False),
sa.Column("namespace", postgresql.UUID(), nullable=False),
sa.Column("node", postgresql.UUID(), nullable=False),
sa.Column("priority", postgresql.INTEGER(), nullable=True),
sa.Column("created_at", postgresql.TIMESTAMP(timezone=True), nullable=False),
sa.Column("submitted_at", postgresql.TIMESTAMP(timezone=True), nullable=True),
sa.Column("finished_at", postgresql.TIMESTAMP(timezone=True), nullable=True),
sa.Column("wms_id", postgresql.VARCHAR(), nullable=True),
sa.Column("site_affinity", postgresql.ARRAY(postgresql.VARCHAR()), nullable=True),
sa.Column("status", ENUM_COLUMN_AS_VARCHAR, nullable=False),
sa.Column("previous_status", ENUM_COLUMN_AS_VARCHAR, nullable=True),
sa.Column(
"metadata",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.PrimaryKeyConstraint("id"),
sa.ForeignKeyConstraint(["node"], ["nodes_v2.id"]),
sa.ForeignKeyConstraint(["namespace"], ["campaigns_v2.id"]),
if_not_exists=True,
)

_ = op.create_table(
"activity_log_v2",
sa.Column("id", postgresql.UUID(), nullable=False),
sa.Column("namespace", postgresql.UUID(), nullable=False),
sa.Column("node", postgresql.UUID(), sa.ForeignKey(nodes_v2.c.id), nullable=True),
sa.Column("operator", postgresql.VARCHAR(), nullable=False, default="root"),
sa.Column("created_at", postgresql.TIMESTAMP(timezone=True), nullable=False),
sa.Column("finished_at", postgresql.TIMESTAMP(timezone=True), nullable=True),
sa.Column("from_status", ENUM_COLUMN_AS_VARCHAR, nullable=False),
sa.Column("to_status", ENUM_COLUMN_AS_VARCHAR, nullable=False),
sa.Column(
"detail",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.Column(
"metadata",
postgresql.JSONB(),
nullable=False,
default=dict,
server_default=sa.text("'{}'::json"),
),
sa.PrimaryKeyConstraint("id"),
if_not_exists=True,
)

# Insert default campaign (namespace) record
op.bulk_insert(
campaigns_v2,
[
{
"id": DEFAULT_CAMPAIGN_NAMESPACE,
"namespace": str(NAMESPACE_DNS),
"name": "DEFAULT",
"owner": "root",
}
],
)


def downgrade() -> None:
"""Drop tables in the reverse order in which they were created."""
op.drop_table("activity_log_v2", if_exists=True)
op.drop_table("tasks_v2", if_exists=True)
op.drop_table("manifests_v2", if_exists=True)
op.drop_table("edges_v2", if_exists=True)
op.drop_table("nodes_v2", if_exists=True)
op.drop_table("campaigns_v2", if_exists=True)
op.drop_table("machines_v2", if_exists=True)
4 changes: 2 additions & 2 deletions docker/Dockerfile
@@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1

ARG PYTHON_VERSION="3.11"
ARG UV_VERSION="0.6"
ARG PYTHON_VERSION="3.12"
ARG UV_VERSION="0.7"
ARG ASGI_PORT="8080"

#==============================================================================
4 changes: 1 addition & 3 deletions docs/CONTRIBUTING.md
@@ -19,9 +19,7 @@ Releases are performed at an unspecified cadence, to be no shorter than 1 week a
- Releases are named according to their semantic version (major.minor.patch).
- Releases are made by adding a named tag to the trunk branch.
- Each release will increment the minor version and set the patch level to 0, e.g., `1.0.12` -> `1.1.0`
- If a bugfix commit needs to be added to a release, then a retroactive branch will be created from the
release tag; the commit is cherry-picked into the release branch and a new tag is written with an incremented
patch level, e.g., `1.23.0` -> `1.23.1`.
- If a bugfix commit in the trunk needs to be added to a release, then a retroactive branch will be created from the affected release tag; any fix commits are cherry-picked into the release branch and a new tag is written with an incremented patch level, e.g., `1.23.0` -> `1.23.1`. This release branch is never merged to `main` (trunk) but is kept for subsequent cherry-picked fixes.
- The major version is incremented only in the presence of user-facing breaking changes.

This project uses `python-semantic-release` to manage releases. A release may be triggered by any ticket branch
15 changes: 15 additions & 0 deletions docs/DEPLOYING.md
@@ -50,3 +50,18 @@ The CM Service consumes several secrets from the k8s application environment. Th
- The AWS "default" profile is configured to use environment variables for its credentials source, so the appropriate secrets should be assigned to the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.

- Authentication values for other profiles are stored in a secret mounted as a file at `/etc/aws/credentials`.

## Dev / Staging Deployments
Development or staging builds are deployed to the USDF Kubernetes vcluster `usdf-cm-dev`.

Prior to deploying a new build using a prerelease tag, release tag, or ticket branch tag, it may be necessary to clear the database of all data so it can be migrated from scratch.

### Clearing the Database
After appropriately setting the Kubernetes context and namespace:

1. Scale down the daemon deployment using `kubectl scale deployment cm-service-daemon --replicas=0`.
1. Obtain a shell in an API server pod using `kubectl exec -it cm-service-server-<hash> -- bash`.
1. Within this shell, downgrade the database migration using `alembic downgrade base`.

> [!CAUTION]
> This operation unconditionally destroys the database contents and all objects.