
Commit dbab07e

alec-flowers and claude authored
feat: runtime fingerprinting, identity verification, and lockfile (#19)
* Add runtime fingerprinting, lockfile, and pre-submit validation

  Recipes are moving toward being self-contained documents of intent. This adds the infrastructure for reproducibility and environment tracking:

  - **Fingerprint module** (`core/fingerprint.py`): captures pip freeze, GPU info, and CUDA/torch/NCCL versions inside running containers. Deterministic output (sorted packages, fixed key order) for clean diffs.
  - **Lockfile** (`core/lockfile.py`): aggregates per-worker fingerprints into `recipe.lock.yaml`, written to the output directory after each run.
  - **Pre-submit validation** (`core/validation.py`): background checks that HF models exist, Docker images resolve, and local paths are real. Fire-and-forget — never blocks job submission.
  - **Schema**: optional `name`, `revision`, `container_image`, and `container_digest` fields on ModelConfig for virtual identity tracking.
  - **CLI commands**: `srtctl diff` to compare two runs, `srtctl check` to verify the environment against a reference fingerprint.
  - **Worker preamble**: fingerprint capture injected after setup/pip install, before server launch — captures the real runtime state.

  All fault-tolerant: every probe, check, and write can fail independently without affecting the job. 133 new tests; all existing tests unaffected.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add SLURM context to lockfile and write early at job start

  The lockfile now captures the SLURM environment (job ID, account, partition, nodelist, user, cwd) in the `_meta.slurm` section. This is written at the start of the sweep, so even crashed jobs have a lockfile with config + cluster context. The postprocess stage rewrites it with the aggregated runtime fingerprint after workers complete.

* Store per-worker fingerprints instead of aggregating

  Each worker (prefill_w0, decode_w0, etc.) keeps its own fingerprint in the lockfile rather than being unioned into one blob.
  Prefill and decode nodes can have different GPU types, drivers, and packages — collapsing them hides real differences. `srtctl diff` now compares each worker against its counterpart between runs; `srtctl check` verifies each worker independently. Backward compatible: old lockfiles with a single `fingerprint` key are loaded as `{"worker": fingerprint}`.

* fix: use heredoc for fingerprint capture script to avoid escaping bugs

  The inline `python3 -c` approach produced literal `\n` characters instead of newlines when passing through bash → srun → bash → python. This caused a SyntaxError that was silently swallowed by `|| true`, so fingerprints were never actually collected.

  Fix: write the capture script via a bash heredoc (`cat <<'EOF'`) and pipe it to python3 via process substitution. This is immune to quoting/escaping issues in the srun chain.

  Also adds two new tests:
  - `test_embedded_python_is_syntactically_valid`: `ast.parse()` the extracted Python source to catch syntax errors at test time
  - `test_embedded_python_produces_json`: actually execute the script in a subprocess to verify it runs end-to-end

* fix: use `python3 -m pip freeze` for better package capture, resolve log_dir in lockfile

  - `pip freeze` misses system-installed packages in containers; `python3 -m pip freeze` is more reliable across environments
  - The lockfile now records the resolved log_dir path instead of the template string (`./outputs/{job_id}/logs`), so the lockfile is self-contained
  - Added a test for `resolved_log_dir` in the lockfile

* feat: capture framework versions (vllm/sglang/trtllm/dynamo) in fingerprint

  Adds a `frameworks` section to the runtime fingerprint that probes for vllm, sglang, tensorrt_llm, and dynamo versions inside the container. Only detected frameworks are included.
  Also adds virtual identity fields (`name`, `container_image`) to the mocker recipe as an example of how to document pullable origins for reproducibility.

* feat: capture container and model identity in runtime fingerprint

  Adds two new probes to the fingerprint script:
  - `container_identity`: reads enroot/Pyxis image metadata to capture the original Docker digest and image env vars from the running container
  - `model_identity`: reads HF download metadata (commit hash, repo ID) and the `config.json` model ID from the mounted model directory

  This enables post-hoc verification that what actually ran matches what the recipe declared (`model.name`/`revision`, `container_image`/`digest`).

* refactor: replace torch_version with frameworks dict, drop unverifiable container identity

  - Removed the `container_identity()` probe — Pyxis/enroot stores zero provenance metadata inside containers (confirmed by inspection on ptyche GB200)
  - Removed `torch_version` as a standalone field — the torch version is now captured inside the `frameworks` dict alongside vllm, sglang, tensorrt_llm, and dynamo
  - The `frameworks` dict only includes detected frameworks (sparse)
  - Added a `model_identity()` probe for HF repo/revision from download metadata
  - Updated pip freeze to use `python3 -m pip freeze` for better container compatibility
  - Updated all tests to use the new schema

* fix: probe venv Python and merge multiple pip freeze sources

  - Auto-detect the container Python venv (`/opt/dynamo/venv/bin/python3`) for framework version probes — system python3 misses venv-installed packages
  - Merge pip freeze output from the venv python, system python, bare pip, and `uv pip freeze` — different install methods show different packages
  - Deduplicate across sources via set merge

* fix: label pip_packages by source instead of merging

  `pip_packages` is now a dict keyed by source (e.g. `/opt/dynamo/venv/bin/python3`, `python3`, `pip`, `uv`) so you can see which packages come from which environment. Diff/check logic flattens it for comparison.
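The multi-source pip freeze described above can be sketched as follows. The interpreter paths and the 3-second timeout are illustrative assumptions; per the commit message, the real probe also shells out to bare `pip` and `uv pip freeze`:

```python
import subprocess


def pip_packages_by_source(interpreters=("/opt/dynamo/venv/bin/python3", "python3")):
    """Label `pip freeze` output by the interpreter that produced it,
    rather than merging everything into one list. Sketch only; the real
    probe also runs bare `pip` and `uv pip freeze`.
    """
    out = {}
    for py in interpreters:
        try:
            proc = subprocess.run(
                [py, "-m", "pip", "freeze"],
                capture_output=True, text=True, timeout=3, check=True,
            )
            out[py] = sorted(proc.stdout.splitlines())
        except (OSError, subprocess.SubprocessError):
            continue  # fault-tolerant: a missing interpreter is skipped
    # Sentinel when every probe fails (handled downstream by diff/check)
    return out or "unavailable"
```

Keying by source rather than merging is what lets the lockfile show which environment each package came from, at the cost of flattening later for comparison.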
* feat: add identity verification — compare recipe against runtime fingerprint

  Adds an `identity:` block to the recipe schema with `model.repo`, `model.revision`, and a `frameworks` dict. After the health check passes, the orchestrator loads worker fingerprints and compares them against the identity declarations. Prints a verification banner in the sweep log:
  - All checks passed (with what was verified)
  - WARNING: N mismatch(es) detected (with details)

  Mismatches warn but don't fail the job.

* feat: show pass/fail for each identity check, fix HF metadata discovery

  - The verification banner now shows OK/!! for each check explicitly
  - Fixed the `model_identity` probe to find the HF commit hash in `.cache/huggingface/download/*.metadata` (`hf download --local-dir` format)
  - Added an `IdentityCheckResult` dataclass for structured pass/fail results

* fix: treat model.repo as unverifiable when HF metadata lacks the repo name, add revision to recipe

  `hf download --local-dir` stores commit hashes in `.cache/huggingface/download/*.metadata` but not the repo name. `model.repo` is now treated as "declared, not verifiable" instead of a failure when the runtime can't determine the repo. The `model.revision` check works correctly against the cached commit hash.

* fix: use importlib.metadata for all framework probes, add tensorrt_llm to identity

  `import tensorrt_llm` loads native CUDA extensions which crash without GPU context. The fingerprint script runs before the worker starts, so the GPU may not be available. `importlib.metadata.version()` only reads package metadata from dist-info — no native code, no GPU needed. Applied to all framework probes (vllm, sglang, tensorrt_llm, torch, dynamo).

* feat: show what's running at submit time, prompt for identity block if missing

  After job submission, prints a model/container/backend/benchmark summary. If an identity block is present, shows the declared identity fields inline. If missing, prints a yellow tip with an example identity block to encourage runtime verification.
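The metadata-only probe described above can be sketched like this. The package-to-distribution mapping is an assumption for illustration (the commit later extracts a `FRAMEWORK_PACKAGES` constant for the real mapping):

```python
from importlib import metadata

# Assumed distribution names; the real constant lives in core/fingerprint.py.
FRAMEWORK_PACKAGES = {
    "vllm": "vllm",
    "sglang": "sglang",
    "tensorrt_llm": "tensorrt-llm",
    "dynamo": "ai-dynamo",
}


def probe_frameworks(packages=FRAMEWORK_PACKAGES):
    """GPU-safe version probe: importlib.metadata reads dist-info
    metadata only, so nothing like tensorrt_llm's native CUDA extensions
    is ever imported. Undetected frameworks are omitted (sparse dict).
    """
    found = {}
    for name, dist in packages.items():
        try:
            found[name] = metadata.version(dist)
        except metadata.PackageNotFoundError:
            continue
    return found
```

This is why the probe is safe to run in the worker preamble, before any GPU context exists.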
* feat: include verification results in lockfile, right after _meta

  The lockfile now has a `verification` section at the top (after `_meta`, before `config`) showing the identity check results:

    verification:
      result: all OK
      passed: 5
      failed: 0
      checks:
        - field: model.repo
          status: OK
          message: nvidia/Kimi-K2.5-NVFP4 (declared, not verifiable at runtime)
        - field: frameworks.tensorrt_llm
          status: OK
          message: 1.3.0rc9

  This is the first thing you see when reading the lockfile.

* chore: remove torch from framework probes, keep vllm/sglang/trtllm/dynamo only
* docs: explicit identity tip showing available frameworks and where versions come from
* docs: clarify frameworks is dynamo + one engine, not all three
* feat: show running summary + identity tip in dry-run output too
* docs: add agent instruction to always include identity block in recipes
* docs: explain identity enables result replication
* fix: pre-submit HF validation reads from identity block, not just model config
* cleanup: remove dead container_image/digest/name fields from ModelConfig

  - Removed `name`, `revision`, `container_image`, and `container_digest` from ModelConfig (all moved to the identity block)
  - Removed Docker image pre-submit validation (can't verify from inside Pyxis)
  - HF validation now reads from `identity.model.repo`
  - Updated tests to use IdentityConfig instead of the old ModelConfig fields
  - Note: the background validation thread was effectively dead code — the daemon thread exits before the HTTP request completes. Left in place, but it needs a rethink.

* cleanup: remove dead background validation thread

  The daemon thread spawned by `run_validations_background()` was killed before completing — `srtctl apply` exits immediately after sbatch. The real validation now happens at runtime via identity verification in the orchestrator.

* feat: inline HF model validation at submit time (replaces dead background thread)

  A single HTTP HEAD to `huggingface.co/api/models/{repo}` before sbatch. Shows a green checkmark or a yellow warning.
  Takes <1s, never blocks on failure.

* feat: HF model validation runs in dry-run too, not just submit
* review: harden fingerprint PR after code review

  - `IdentityCheckResult` is now frozen (consistency with other dataclasses)
  - Extracted a `FRAMEWORK_PACKAGES` constant (eliminates hardcoded duplicates)
  - Removed `hasattr(config, 'identity')` checks (the field always exists via `default_factory`)
  - Reduced the HF validation timeout from 5s to 2s (air-gapped clusters)
  - Reduced the bash script probe timeout from 5s to 3s (faster worker startup)
  - Simplified `find_python()` (`Path.exists()` instead of a subprocess)

* chore: remove design doc from branch
* feat: capture ML env vars in fingerprint with secret redaction

  Captures env vars with the CUDA_, TORCH_, NCCL_, VLLM_, SGLANG_, TRTLLM_, HF_, DYN_, NVIDIA_, OMPI_, UCX_, and NVSHMEM_ prefixes. Redacts any variable whose name contains TOKEN, KEY, SECRET, PASSWORD, CREDENTIAL, or AUTH. Inspired by dynamo's `config_dump/environment.py`, but self-contained.

* feat: include the srt-slurm git commit hash in lockfile metadata
* fix: `_parse_pip_packages` handles the UNAVAILABLE sentinel string gracefully

  When all pip freeze commands fail, `pip_packages` is set to the string `unavailable`. Previously this was iterated character-by-character, producing 8 garbage entries (`'u': '?'`, `'n': '?'`, ...) that corrupted diff/check output. Now returns an empty dict for strings and None.

* fix: skip identity verification banner when no identity fields declared

  `IdentityConfig()` is always truthy (it's a dataclass instance). Check the inner fields (`model.repo`, `model.revision`, `frameworks`) before running verification, matching the pattern in submit.py.
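The env-var capture with redaction described above is simple enough to sketch self-contained; the prefix and secret-marker lists come straight from the commit message:

```python
import os

ML_PREFIXES = ("CUDA_", "TORCH_", "NCCL_", "VLLM_", "SGLANG_", "TRTLLM_",
               "HF_", "DYN_", "NVIDIA_", "OMPI_", "UCX_", "NVSHMEM_")
SECRET_MARKERS = ("TOKEN", "KEY", "SECRET", "PASSWORD", "CREDENTIAL", "AUTH")


def capture_ml_env(environ=None):
    """Keep only ML-relevant prefixes; redact names that look like
    secrets. Sketch of the capture the commit describes -- sorted keys
    keep the fingerprint output deterministic."""
    environ = dict(os.environ) if environ is None else environ
    captured = {}
    for key, value in sorted(environ.items()):
        if not key.startswith(ML_PREFIXES):
            continue
        if any(marker in key for marker in SECRET_MARKERS):
            captured[key] = "<redacted>"
        else:
            captured[key] = value
    return captured
```

Redacting by name rather than by value is deliberately conservative: `HF_TOKEN` is scrubbed even if it happens to hold a dummy value.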
* fix: address review findings — double walk, N captures, short prefix, bash doc

  - `validate_local_path`: single directory walk instead of two rglob passes
  - `srtctl check`: capture the fingerprint once, reuse it for all worker comparisons
  - `model.revision`: require >= 7 chars to prevent false prefix matches
  - Document the bash requirement for process substitution in the heredoc script

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
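The hardened revision check from the final review commit can be sketched as a small predicate (function name and shape are assumptions; only the >= 7 character rule is from the commit message):

```python
def revision_matches(declared, runtime_commit, min_prefix=7):
    """A declared hash shorter than `min_prefix` characters never counts
    as a match, so a stub like "c0" cannot accidentally 'verify' an
    unrelated commit via prefix matching."""
    if not declared or not runtime_commit:
        return False
    if len(declared) < min_prefix:
        return False  # too short for a safe prefix match
    return runtime_commit.startswith(declared)
```

Seven characters mirrors git's own short-hash convention, which keeps collisions vanishingly unlikely for prefix comparison.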
1 parent 875fddd commit dbab07e

15 files changed

Lines changed: 3045 additions & 3 deletions

recipes/mocker/kimi-trace-agg.yaml

Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
+name: "kimi-k25-mocker-trace-agg"
+
+slurm:
+  time_limit: "00:15:00"
+
+model:
+  path: "kimi-k25-nvfp4"
+  container: "trtllm-runtime"
+  precision: "fp4"
+
+resources:
+  gpu_type: "gb200"
+  gpus_per_node: 4
+  agg_nodes: 1
+  agg_workers: 1
+
+extra_mount:
+  - "/lustre/fsw/coreai_tritoninference_triton3/aflowers/srt-slurm/traces:/traces"
+
+frontend:
+  type: dynamo
+  enable_multiple_frontends: false
+
+backend:
+  type: mocker
+  speedup_ratio: 100
+  engine_type: vllm
+
+mocker_config:
+  aggregated:
+    num-gpu-blocks-override: 8192
+    max-num-seqs: 128
+    max-num-batched-tokens: 16384
+
+benchmark:
+  type: trace-replay
+  trace_file: /traces/together-ai-basic-no-delays_119k/dataset.jsonl
+  concurrencies: [4]
+  ttft_threshold_ms: 3000
+  itl_threshold_ms: 7
+  aiperf_package: "aiperf>=0.7.0"
+
+health_check:
+  max_attempts: 60
+  interval_seconds: 5
+
+dynamo:
+  install: false
+
+# Virtual identity — verified against runtime fingerprint (warnings, not failures)
+identity:
+  model:
+    repo: "nvidia/Kimi-K2.5-NVFP4"
+    revision: "c0285e649c34d4386b01e38abca642c06cbe014e"
+  frameworks:
+    dynamo: "1.0.0"
+    tensorrt_llm: "1.3.0rc9"

src/srtctl/cli/do_sweep.py

Lines changed: 4 additions & 0 deletions
@@ -24,6 +24,7 @@
 from srtctl.cli.mixins import BenchmarkStageMixin, FrontendStageMixin, PostProcessStageMixin, WorkerStageMixin
 from srtctl.core.config import load_config
 from srtctl.core.health import wait_for_port
+from srtctl.core.lockfile import write_lockfile
 from srtctl.core.processes import (
     ManagedProcess,
     ProcessRegistry,
@@ -195,6 +196,9 @@ def run(self) -> int:
         if self.config.profiling.enabled:
             logger.info("Profiling: %s", self.config.profiling.type)

+        # Write initial lockfile with config + SLURM context (fingerprint added after run)
+        write_lockfile(self.runtime.log_dir.parent, self.config)
+
         registry = ProcessRegistry(job_id=self.runtime.job_id)
         stop_event = threading.Event()
         setup_signal_handlers(stop_event, registry)
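The early `write_lockfile` call above gives even crashed jobs a record with config plus cluster context. The SLURM portion of `_meta` might look like this; the env var names are the standard SLURM ones, assumed here rather than taken from the diff, and the real `write_lockfile` emits YAML:

```python
import os


def slurm_context(environ=None):
    """Sketch of the _meta.slurm section captured at sweep start: job
    ID, account, partition, nodelist, user, and cwd. Missing variables
    simply come through as None."""
    env = os.environ if environ is None else environ
    return {
        "job_id": env.get("SLURM_JOB_ID"),
        "account": env.get("SLURM_JOB_ACCOUNT"),
        "partition": env.get("SLURM_JOB_PARTITION"),
        "nodelist": env.get("SLURM_JOB_NODELIST"),
        "user": env.get("USER"),
        "cwd": os.getcwd(),
    }
```

Writing this before any worker starts is what makes the lockfile useful for post-mortems on jobs that never reached the benchmark stage.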

src/srtctl/cli/mixins/benchmark_stage.py

Lines changed: 23 additions & 0 deletions
@@ -14,7 +14,9 @@
 from pathlib import Path
 from typing import TYPE_CHECKING

+from srtctl.core.fingerprint import format_identity_verification, verify_identity
 from srtctl.core.health import wait_for_model
+from srtctl.core.lockfile import collect_worker_fingerprints
 from srtctl.core.slurm import get_hostname_ip, start_srun_process
 from srtctl.core.status import JobStage, JobStatus, StatusReporter

@@ -93,6 +95,27 @@ def run_benchmark(
             return 1

         logger.info("Server is healthy - starting benchmark")
+
+        # Identity verification: compare recipe identity against runtime fingerprints
+        # Store results on self so postprocess can include them in the lockfile
+        self._identity_verification = None
+        try:
+            fingerprints = collect_worker_fingerprints(self.runtime.log_dir)
+            has_identity = self.config.identity and (
+                (
+                    self.config.identity.model
+                    and (self.config.identity.model.repo or self.config.identity.model.revision)
+                )
+                or self.config.identity.frameworks
+            )
+            if fingerprints and has_identity:
+                self._identity_verification = verify_identity(self.config.identity, fingerprints)
+                banner = format_identity_verification(self._identity_verification, self.config.identity)
+                for line in banner.splitlines():
+                    logger.info(line)
+        except Exception as e:
+            logger.debug("Identity verification skipped: %s", e)

         if reporter:
             reporter.report(JobStatus.BENCHMARK, JobStage.BENCHMARK, "Running benchmark")

src/srtctl/cli/mixins/postprocess_stage.py

Lines changed: 5 additions & 0 deletions
@@ -28,6 +28,7 @@

 from srtctl.benchmarks.base import SCRIPTS_DIR
 from srtctl.core.config import load_cluster_config
+from srtctl.core.lockfile import write_lockfile
 from srtctl.core.schema import AIAnalysisConfig, S3Config
 from srtctl.core.slurm import start_srun_process

@@ -150,6 +151,10 @@ def run_postprocess(self, exit_code: int) -> None:
         Args:
             exit_code: Exit code from the benchmark run
         """
+        # Write lockfile with verification results (non-fatal — never blocks job completion)
+        verification = getattr(self, "_identity_verification", None)
+        write_lockfile(self.runtime.log_dir.parent, self.config, self.runtime.log_dir, verification=verification)
+
         # Copy config into log directory so it's included in S3 upload
         self._copy_config_to_logs()

src/srtctl/cli/mixins/worker_stage.py

Lines changed: 7 additions & 2 deletions
@@ -12,6 +12,7 @@
 from collections import defaultdict
 from typing import TYPE_CHECKING, Any

+from srtctl.core.fingerprint import generate_capture_script
 from srtctl.core.processes import ManagedProcess, NamedProcesses
 from srtctl.core.slurm import start_srun_process

@@ -157,8 +158,10 @@ def __missing__(self, key: str) -> str:
         if profiling.enabled:
             logger.info("Profiling: %s mode", profiling.type)

-        # Build bash preamble (setup script + dynamo install)
+        # Build bash preamble (setup script + dynamo install + fingerprint)
         bash_preamble = self._build_worker_preamble()
+        fp_cmd = generate_capture_script(f"/logs/fingerprint_{mode}_w{index}.json")
+        bash_preamble = f"{bash_preamble} && {fp_cmd}" if bash_preamble else fp_cmd

         proc = start_srun_process(
             command=cmd,
@@ -258,8 +261,10 @@ def start_endpoint_worker(self, endpoint_processes: list["Process"]) -> ManagedProcess:
         if profiling.enabled:
             logger.info("Profiling: %s mode", profiling.type)

-        # Build bash preamble (setup script + dynamo install)
+        # Build bash preamble (setup script + dynamo install + fingerprint)
         bash_preamble = self._build_worker_preamble()
+        fp_cmd = generate_capture_script(f"/logs/fingerprint_{mode}_w{index}.json")
+        bash_preamble = f"{bash_preamble} && {fp_cmd}" if bash_preamble else fp_cmd

         # Get srun config from backend
         srun_config = self.backend.get_srun_config()
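The `generate_capture_script` call injected into the preamble above returns a bash snippet. Per the heredoc fix in the commit history, a simplified version might be generated like this; the capture body and EOF marker here are placeholders, not the real probe script:

```python
def generate_capture_script(output_path):
    """Build the bash snippet for the worker preamble: the Python source
    travels inside a quoted heredoc (cat <<'EOF'), so the
    bash -> srun -> bash chain never re-interprets backslashes or
    quotes, and `|| true` keeps a failed capture from killing the
    worker launch."""
    capture_src = (
        "import json, sys\n"
        "json.dump({'ok': True}, open(sys.argv[1], 'w'))\n"
    )
    return (
        "python3 <(cat <<'FP_EOF'\n"
        + capture_src
        + "FP_EOF\n"
        + f") {output_path} || true"
    )
```

The commit's own regression test takes the same approach as the assertion below: extract the embedded Python and `ast.parse()` it, so the literal-`\n` bug class is caught at test time rather than silently swallowed at runtime.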

src/srtctl/cli/submit.py

Lines changed: 135 additions & 0 deletions
@@ -41,6 +41,14 @@
     load_config,
     resolve_config_with_defaults,
 )
+from srtctl.core.fingerprint import (
+    capture_fingerprint,
+    check_against_fingerprint,
+    diff_fingerprints,
+    format_check_results,
+    format_diff,
+)
+from srtctl.core.lockfile import load_lockfile_fingerprints
 from srtctl.core.schema import SrtConfig
 from srtctl.core.status import create_job_record

@@ -259,6 +267,50 @@ def generate_minimal_sbatch_script(
     return rendered


+def _print_running_summary(config: SrtConfig, console: Console) -> None:
+    """Print what's being run and identity verification status."""
+    console.print()
+    console.print("[bold]Running:[/]")
+    console.print(f"  Model:     {config.model.path}")
+    console.print(f"  Container: {config.model.container}")
+    console.print(f"  Backend:   {config.backend_type}")
+    console.print(f"  Benchmark: {config.benchmark.type}")
+
+    has_identity = config.identity and (
+        (config.identity.model and (config.identity.model.repo or config.identity.model.revision))
+        or config.identity.frameworks
+    )
+    if has_identity:
+        id_fields = []
+        if config.identity.model and config.identity.model.repo:
+            id_fields.append(f"model={config.identity.model.repo}")
+        if config.identity.model and config.identity.model.revision:
+            id_fields.append(f"rev={config.identity.model.revision[:12]}")
+        for name, ver in (config.identity.frameworks or {}).items():
+            id_fields.append(f"{name}={ver}")
+        console.print(f"  Identity:  {', '.join(id_fields)}")
+    else:
+        console.print()
+        console.print(
+            "[yellow]Tip:[/] Add an [bold]identity:[/] block to your recipe so others can replicate your results."
+        )
+        console.print("[yellow]  Without it, someone reading this recipe can't tell which exact model[/]")
+        console.print("[yellow]  weights or framework versions produced the benchmark numbers.[/]")
+        console.print("[yellow]  At job start, srtctl verifies the runtime matches your declaration.[/]")
+        console.print()
+        console.print("[dim]  identity:[/]")
+        console.print("[dim]    model:[/]")
+        console.print('[dim]      repo: "nvidia/Kimi-K2.5-NVFP4"  [/][dim italic]# HuggingFace model ID[/]')
+        console.print('[dim]      revision: "c0285e649c34..."  [/][dim italic]# HF commit SHA[/]')
+        console.print("[dim]    frameworks:  [/][dim italic]# dynamo + one engine[/]")
+        console.print('[dim]      dynamo: "1.0.0"  [/][dim italic]# always (ai-dynamo)[/]')
+        console.print('[dim]      tensorrt_llm: "1.3.0rc9"  [/][dim italic]# OR vllm OR sglang[/]')
+        console.print()
+        console.print("[dim italic]  Agent: if you are an AI assistant creating this recipe, you MUST[/]")
+        console.print("[dim italic]  include the identity block. Use the container's framework versions[/]")
+        console.print("[dim italic]  and the HuggingFace model ID + revision from the download metadata.[/]")
+
+
 def submit_with_orchestrator(
     config_path: Path,
     config: SrtConfig | None = None,
@@ -311,6 +363,16 @@ def submit_with_orchestrator(
         runtime_config_filename=runtime_config_filename,
     )

+    # Identity validation (inline, <1s) — runs for both dry-run and submit
+    if config.identity and config.identity.model and config.identity.model.repo:
+        from srtctl.core.validation import validate_hf_model
+
+        hf_result = validate_hf_model(config.identity.model.repo, config.identity.model.revision)
+        if hf_result.ok:
+            console.print(f"[green]✓[/] HF model: {hf_result.message}")
+        else:
+            console.print(f"[yellow]⚠ HF model: {hf_result.message}[/]")
+
     if dry_run:
         console.print()
         console.print(
@@ -325,6 +387,9 @@ def submit_with_orchestrator(
         console.print(Panel(syntax, title="Generated sbatch Script", border_style="cyan"))
         console.print()
         show_config_details(config)
+
+        # Show running summary + identity in dry-run too
+        _print_running_summary(config, console)
         return

     # Validate setup before submitting (not during dry-run)
@@ -431,6 +496,9 @@ def submit_with_orchestrator(
         console.print(f"[dim]📁 Logs:[/] {job_output_dir}/logs")
         console.print(f"[dim]📋 Monitor:[/] tail -f {job_output_dir}/logs/sweep_{job_id}.log")
         console.print(f"[dim]📊 Queue:[/] squeue --job {job_id}")
+
+        _print_running_summary(config, console)
+
         return job_id

     except subprocess.CalledProcessError as e:
@@ -943,8 +1011,75 @@ def add_common_args(p):
         help="Print resolved YAML to stdout instead of writing files",
     )

+    # Fingerprint comparison: srtctl diff <path_a> <path_b>
+    diff_parser = subparsers.add_parser("diff", help="Compare fingerprints from two runs")
+    diff_parser.add_argument("path_a", type=Path, help="First output dir or lockfile")
+    diff_parser.add_argument("path_b", type=Path, help="Second output dir or lockfile")
+    diff_parser.add_argument("--verbose", action="store_true", help="Show all package changes")
+
+    # Environment check: srtctl check <path>
+    check_parser = subparsers.add_parser("check", help="Check environment against a fingerprint")
+    check_parser.add_argument("path", type=Path, help="Lockfile or output dir to check against")
+    check_parser.add_argument("--json", action="store_true", dest="json_output", help="Output as JSON")
+
     args = parser.parse_args()

+    # Handle diff and check commands first (they don't use -f/config)
+    if args.command == "diff":
+        fps_a = load_lockfile_fingerprints(args.path_a)
+        fps_b = load_lockfile_fingerprints(args.path_b)
+        if fps_a is None or fps_b is None:
+            missing = []
+            if fps_a is None:
+                missing.append(str(args.path_a))
+            if fps_b is None:
+                missing.append(str(args.path_b))
+            console.print(f"[bold red]Could not load fingerprints from:[/] {', '.join(missing)}")
+            sys.exit(1)
+
+        # Diff each worker against its counterpart
+        all_workers = sorted(set(fps_a.keys()) | set(fps_b.keys()))
+        for worker in all_workers:
+            if worker not in fps_a:
+                console.print(f"\n[bold]{worker}:[/] only in {args.path_b}")
+                continue
+            if worker not in fps_b:
+                console.print(f"\n[bold]{worker}:[/] only in {args.path_a}")
+                continue
+            diff = diff_fingerprints(fps_a[worker], fps_b[worker])
+            console.print(f"\n[bold]{worker}:[/]")
+            console.print(format_diff(diff, verbose=args.verbose))
+        return
+
+    if args.command == "check":
+        import json as json_mod
+
+        fps = load_lockfile_fingerprints(args.path)
+        if fps is None:
+            console.print(f"[bold red]Could not load fingerprints from:[/] {args.path}")
+            sys.exit(1)
+
+        # Capture current environment once, reuse for all worker checks
+        current_fp = capture_fingerprint()
+        all_results = []
+        for worker in sorted(fps.keys()):
+            results = check_against_fingerprint(fps[worker], current_fp)
+            if results:
+                all_results.extend(results)
+            console.print(f"\n[bold]{worker}:[/]")
+            if args.json_output:
+                console.print(
+                    json_mod.dumps(
+                        [{"field": r.field, "status": r.status.value, "message": r.message} for r in results],
+                        indent=2,
+                    )
+                )
+            else:
+                console.print(format_check_results(results))
+        if not all_results:
+            console.print(format_check_results([]))
+        sys.exit(1 if all_results else 0)
+
     # Parse config arg: supports path:selector format for overrides
     config_path, selector = parse_config_arg(args.config)

0 commit comments
