
fix(e2e): run snapshot from CI runner instead of nested Job #199

Closed

mchmarny wants to merge 1 commit into main from fix/e2e-snapshot-agent

Conversation


@mchmarny mchmarny commented Feb 24, 2026

Summary

  • Replace manual kubectl apply Job with direct aicr snapshot invocation from the CI runner
  • The snapshot command deploys its own agent Job to the Kind cluster

Root cause

The --deploy-agent flag was removed — aicr snapshot now always deploys an agent Job. The E2E test was creating a manual Job that ran aicr snapshot inside the cluster, which caused a nested agent deployment attempt. The pod's ServiceAccount lacked RBAC to create Jobs/Roles/ServiceAccounts, failing with UNAUTHORIZED.

Fix

Run aicr snapshot from the CI runner (outside the cluster) and let it handle its own Job deployment, RBAC, and cleanup. This matches how the command is actually used.
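The fix above can be sketched as a runner-side step. This is an illustrative sketch, not the literal contents of tests/e2e/run.sh; the kubectl verification step and the `RUN` dry-run guard are assumptions added for clarity:

```shell
#!/usr/bin/env bash
set -euo pipefail

# RUN defaults to "echo" so the sketch can be inspected without a live
# Kind cluster; set RUN="" to execute the commands for real in CI.
RUN="${RUN-echo}"

snapshot_e2e() {
  # Invoked from the CI runner (outside the cluster), `aicr snapshot`
  # deploys its own agent Job, ServiceAccount, and RBAC into the Kind
  # cluster and cleans them up afterward, so no manual Job manifest is
  # applied by the test.
  $RUN aicr snapshot

  # Illustrative sanity check that the built-in cleanup left no Job behind.
  $RUN kubectl get jobs --all-namespaces
}

snapshot_e2e
```

Because the runner's kubeconfig (not an in-cluster ServiceAccount) authorizes the Job creation, the nested-deploy UNAUTHORIZED failure cannot occur.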

Test plan

  • bash -n tests/e2e/run.sh — syntax valid
  • E2E CI should now complete the snapshot test

The snapshot command always deploys an agent Job — running it inside
a manually-created Job caused a nested deploy attempt that failed with
UNAUTHORIZED (the pod ServiceAccount lacked RBAC to create Jobs).

Run `aicr snapshot` from the CI runner instead, letting the command
handle its own Job deployment to the Kind cluster.
@mchmarny mchmarny requested a review from a team as a code owner February 24, 2026 02:36
@mchmarny mchmarny self-assigned this Feb 24, 2026
@mchmarny mchmarny added the bug Something isn't working label Feb 24, 2026
@mchmarny mchmarny closed this Feb 24, 2026
@mchmarny mchmarny deleted the fix/e2e-snapshot-agent branch February 24, 2026 03:02

Labels

area/tests, bug, size/M
