
MCO-2168: Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet #5786

Open

HarshwardhanPatil07 wants to merge 3 commits into openshift:main from HarshwardhanPatil07:MCO-2168-Deleting-shared-PIS-testcase

Conversation

HarshwardhanPatil07 commented Mar 23, 2026

Automation of Polarion test case OCP-88378 (MCO-2168): this test verifies that when two PinnedImageSets reference the same image, deleting one of them does not remove that image from the nodes.

- What I did
Automated Polarion test case OCP-88378 (MCO-2168).
Added a new Ginkgo e2e test in test/extended-priv/mco_pinnedimages.go that verifies deleting one PinnedImageSet does not unpin images that are still referenced by another PinnedImageSet.

  • Creates two PinnedImageSets (pis-one and pis-two) both referencing the same alpine image on the worker pool
  • Verifies the image is pinned on all nodes and MachineConfigNode conditions are healthy
  • Deletes pis-one and verifies the image remains pinned on all nodes because pis-two still references it
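The two PinnedImageSets the test creates look roughly like the manifest below (a sketch reconstructed from the `generic-pinned-image-set.yaml` template invocation in the log; the `machineconfiguration.openshift.io/v1` API version and the role label used for pool targeting are assumptions, not taken from this PR):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: PinnedImageSet
metadata:
  name: tc-88378-pis-one          # the second set, tc-88378-pis-two, is identical apart from the name
  labels:
    machineconfiguration.openshift.io/role: worker   # assumed pool-targeting label
spec:
  pinnedImages:
    - name: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
```

Both sets deliberately reference the same image digest, which is what makes the deletion step meaningful.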

- How to verify it

  • Create two PinnedImageSets with the same image targeting the worker pool
  • Wait for both to reconcile and verify the image is pinned (crictl images --pinned)
  • Delete one PinnedImageSet
  • Confirm the image is still pinned on all worker nodes
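The property being verified can be stated as: an image stays pinned on a node as long as at least one PinnedImageSet still references it, i.e. the pinned set is the union across all PinnedImageSets. The sketch below is a conceptual model of that semantics only, not MCO code:

```python
# Conceptual model of the behavior under test: the effective pinned-image
# set is the union of the images referenced by every PinnedImageSet, so
# deleting one set cannot unpin an image another set still references.
# Illustrative sketch only; names mirror the test, logic is a model.

def pinned_images(pinned_image_sets: dict[str, set[str]]) -> set[str]:
    """Images that should remain pinned: the union across all sets."""
    if not pinned_image_sets:
        return set()
    return set().union(*pinned_image_sets.values())

ALPINE = "quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"

sets = {
    "tc-88378-pis-one": {ALPINE},
    "tc-88378-pis-two": {ALPINE},
}
assert ALPINE in pinned_images(sets)

# Deleting pis-one must not unpin the image: pis-two still references it.
del sets["tc-88378-pis-one"]
assert ALPINE in pinned_images(sets)

# Only once the last referencing set is gone may the image be unpinned.
del sets["tc-88378-pis-two"]
assert ALPINE not in pinned_images(sets)
```

The e2e test checks the same invariant on a real cluster via `crictl images --pinned` before and after the deletion.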

- Description for the changelog

test: automate OCP-88378 - verify deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet

View Test Execution Logs

harshpat@harshpat-thinkpadp1gen4i:~/Downloads/repos/machine-config-operator$ ./_output/linux/amd64/machine-config-tests-ext run-test "[sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet"
I0323 17:36:12.165968 43333 test_context.go:566] The --provider flag is not set. Continuing as if --provider=skeleton had been used.
Running Suite: - /home/harshpat/Downloads/repos/machine-config-operator
========================================================================
Random Seed: 1774267572 - will randomize all specs

Will run 1 of 1 specs

[sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet
/home/harshpat/Downloads/repos/machine-config-operator/test/extended-priv/mco_pinnedimages.go:520
STEP: Creating a kubernetes client @ 03/23/26 17:36:12.166
I0323 17:36:15.109044 43333 client.go:164] configPath is now "/tmp/configfile4176615151"
I0323 17:36:15.109065 43333 client.go:291] The user is now "e2e-test-mco-pinnedimages-t8xwc-user"
I0323 17:36:15.109072 43333 client.go:293] Creating project "e2e-test-mco-pinnedimages-t8xwc"
I0323 17:36:15.454395 43333 client.go:302] Waiting on permissions in project "e2e-test-mco-pinnedimages-t8xwc" ...
I0323 17:36:16.705082 43333 client.go:363] Waiting for ServiceAccount "default" to be provisioned...
I0323 17:36:17.053661 43333 client.go:363] Waiting for ServiceAccount "builder" to be provisioned...
I0323 17:36:17.566292 43333 client.go:363] Waiting for ServiceAccount "deployer" to be provisioned...
I0323 17:36:17.930800 43333 client.go:373] Waiting for RoleBinding "system:image-builders" to be provisioned...
I0323 17:36:18.737982 43333 client.go:373] Waiting for RoleBinding "system:deployers" to be provisioned...
I0323 17:36:19.921407 43333 client.go:373] Waiting for RoleBinding "system:image-pullers" to be provisioned...
I0323 17:36:20.740788 43333 client.go:404] Project "e2e-test-mco-pinnedimages-t8xwc" has been fully provisioned.
I0323 17:36:20.741369 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp'
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-a755db9f3e395401159d15cca4c9f2e3 True False False 3 3 3 0 7h4m
worker rendered-worker-225b39332f01eed9d1147f5e22629fe7 True False False 3 3 3 0 7h4m
I0323 17:36:23.086441 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:36:24.124647 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
Mar 23 17:36:25.148: INFO: <Kind: mcp, Name: worker, Namespace: > <Kind: mcp, Name: master, Namespace: > <Kind: mcp, Name: worker, Namespace: >
STEP: MCO Preconditions Checks @ 03/23/26 17:36:25.149
Mar 23 17:36:26.584: INFO: Check that master pool is ready for testing
I0323 17:36:26.584932 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.machineCount}'
Mar 23 17:36:27.606: INFO: Num nodes: 3, wait time per node 13 minutes
Mar 23 17:36:27.606: INFO: Increase waiting time because it is master pool
Mar 23 17:36:27.606: INFO: Waiting 3m54s for MCP master to be completed.
I0323 17:36:27.606448 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'
I0323 17:36:28.622141 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'
Mar 23 17:36:29.489: INFO: MCP 'master' is ready for testing
Mar 23 17:36:29.489: INFO: Check that worker pool is ready for testing
I0323 17:36:29.490004 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
Mar 23 17:36:30.397: INFO: Num nodes: 3, wait time per node 13 minutes
Mar 23 17:36:30.397: INFO: Waiting 3m0s for MCP worker to be completed.
I0323 17:36:30.398221 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'
I0323 17:36:31.412420 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'
Mar 23 17:36:32.375: INFO: MCP 'worker' is ready for testing
Mar 23 17:36:32.375: INFO: Wait for MCC to get the leader lease
I0323 17:36:32.375838 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pod -n openshift-machine-config-operator -l k8s-app=machine-config-controller -o jsonpath={.items[0].metadata.name}'
I0323 17:36:33.359772 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig logs -n openshift-machine-config-operator -c machine-config-controller machine-config-controller-579ddc5498-lw5bt'
Mar 23 17:36:35.519: INFO: End of MCO Preconditions

STEP: Remove the test image from all nodes in the pool @ 03/23/26 17:37:05.088

Mar 23 17:37:05.088: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-42-126.us-east-2.compute.internal
I0323 17:37:08.103930 43333 client.go:743] showInfo is false
Mar 23 17:37:12.744: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9
Deleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-42-126us-east-2computeinternal-debug-9b5ck ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
Mar 23 17:37:12.744: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-84-112.us-east-2.compute.internal
I0323 17:37:15.852584 43333 client.go:743] showInfo is false
Mar 23 17:37:28.056: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9
Deleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-84-112us-east-2computeinternal-debug-5gtjx ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
Mar 23 17:37:28.056: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-9-163.us-east-2.compute.internal
I0323 17:37:31.606830 43333 client.go:743] showInfo is false
Mar 23 17:37:42.568: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9
Deleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-9-163us-east-2computeinternal-debug-b5tc9 ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
Mar 23 17:37:42.568: INFO: OK!

STEP: Create first PinnedImageSet with alpine image @ 03/23/26 17:37:42.568

Mar 23 17:37:42.568: INFO: Creating PinnedImageSet tc-88378-pis-one in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
Mar 23 17:37:42.568: INFO: mco fixture dir is not initialized, start to create
Mar 23 17:37:42.569: INFO: mco fixture dir is initialized: /tmp/fixture-testdata-dir848482353
I0323 17:37:45.571483 43333 client.go:743] showInfo is true
I0323 17:37:45.571524 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir848482353/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-one POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0323 17:37:46.380195 43333 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-t8xwc-5e0j48t7config.json.stdout
I0323 17:37:46.380354 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-t8xwc-5e0j48t7config.json.stdout'
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-one created
Mar 23 17:37:47.149: INFO: OK!

STEP: Wait for the first PinnedImageSet to be applied @ 03/23/26 17:37:47.149

Mar 23 17:37:47.149: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:38:47.175473 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:38:48.018: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:38:48.018369 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:39:14.295305 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:39:15.338822 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:39:16.359961 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:39:17.123383 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:39:18.103261 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:39:18.902146 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:39:19.901: INFO: Pool worker successfully pinned the images! Complete!
Mar 23 17:39:19.901: INFO: OK!

STEP: Verify the image is pinned on all nodes after creating the first PinnedImageSet @ 03/23/26 17:39:19.901

I0323 17:39:21.074984 43333 client.go:743] showInfo is true
I0323 17:39:21.075020 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:39:25.271773 43333 client.go:743] showInfo is true
I0323 17:39:25.271805 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:39:29.413967 43333 client.go:743] showInfo is true
I0323 17:39:29.413998 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 23 17:39:32.869: INFO: OK!

STEP: Create second PinnedImageSet with the same alpine image @ 03/23/26 17:39:32.87

Mar 23 17:39:32.870: INFO: Creating PinnedImageSet tc-88378-pis-two in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
I0323 17:39:35.872700 43333 client.go:743] showInfo is true
I0323 17:39:35.872790 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir848482353/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-two POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0323 17:39:36.637949 43333 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-t8xwc-ccaxuuzuconfig.json.stdout
I0323 17:39:36.638136 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-t8xwc-ccaxuuzuconfig.json.stdout'
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-two created
Mar 23 17:39:37.403: INFO: OK!

STEP: Wait for the second PinnedImageSet to be applied @ 03/23/26 17:39:37.403

Mar 23 17:39:37.403: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:40:37.462768 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:40:38.856: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:40:38.856239 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:41:07.470262 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:41:08.389849 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:41:09.242246 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:41:10.314401 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:41:11.488677 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:41:12.527927 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:41:13.754: INFO: Pool worker successfully pinned the images! Complete!
Mar 23 17:41:13.754: INFO: OK!

STEP: Verify the image is still pinned on all nodes after creating the second PinnedImageSet @ 03/23/26 17:41:13.754

I0323 17:41:14.845939 43333 client.go:743] showInfo is true
I0323 17:41:14.845969 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:41:19.605896 43333 client.go:743] showInfo is true
I0323 17:41:19.605927 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:41:23.544115 43333 client.go:743] showInfo is true
I0323 17:41:23.544145 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 23 17:41:27.128: INFO: OK!

STEP: Verify all MachineConfigNodes report healthy pinned image conditions @ 03/23/26 17:41:27.128

I0323 17:41:27.128455 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:28.152879 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:41:29.064: INFO: Value: False
I0323 17:41:29.065168 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:29.996569 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:30.814: INFO: Value: False
I0323 17:41:30.815271 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:31.941457 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:41:32.974: INFO: Value: False
I0323 17:41:32.974477 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:34.287481 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:35.396: INFO: Value: False
I0323 17:41:35.396507 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:36.415836 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:41:37.881: INFO: Value: False
I0323 17:41:37.882178 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:39.108941 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:40.156: INFO: Value: False
Mar 23 17:41:40.157: INFO: OK!

STEP: Delete the first PinnedImageSet @ 03/23/26 17:41:40.157

I0323 17:41:40.157516 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-one'
Mar 23 17:41:41.226: INFO: OK!

STEP: Wait for the pool to reconcile after deleting the first PinnedImageSet @ 03/23/26 17:41:41.226

Mar 23 17:41:41.226: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:42:41.285878 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:42:42.286: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:42:42.286785 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:43:09.945816 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:10.774177 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:43:11.591445 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:12.542671 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:43:13.465565 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:14.291338 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:43:15.090: INFO: Pool worker successfully pinned the images! Complete!
Mar 23 17:43:15.090: INFO: OK!

STEP: Verify the first PinnedImageSet is deleted and the second still exists @ 03/23/26 17:43:15.09

I0323 17:43:15.090871 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0323 17:43:15.853716 43333 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
I0323 17:43:15.853902 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
Mar 23 17:43:16.656: INFO: OK!

STEP: Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet @ 03/23/26 17:43:16.657

I0323 17:43:17.665577 43333 client.go:743] showInfo is true
I0323 17:43:17.665605 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:43:21.587332 43333 client.go:743] showInfo is true
I0323 17:43:21.587367 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:43:25.795681 43333 client.go:743] showInfo is true
I0323 17:43:25.795717 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 23 17:43:29.599: INFO: OK!

STEP: Verify MachineConfigNodes remain healthy after the deletion @ 03/23/26 17:43:29.599

I0323 17:43:29.599711 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:30.424114 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:31.648: INFO: Value: False
I0323 17:43:31.648434 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:32.465449 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:33.229: INFO: Value: False
I0323 17:43:33.229399 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:34.249754 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:35.334: INFO: Value: False
I0323 17:43:35.334706 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:36.209880 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:37.038: INFO: Value: False
I0323 17:43:37.038419 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:37.821735 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:39.021: INFO: Value: False
I0323 17:43:39.021601 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:40.089100 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:41.068: INFO: Value: False
Mar 23 17:43:41.068: INFO: OK!

I0323 17:43:41.068980 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
I0323 17:43:46.564389 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-two'
Mar 23 17:43:47.826: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:44:47.873574 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:44:48.900: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:44:48.900902 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:45:16.170463 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:17.017858 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:45:17.938289 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:18.963240 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:45:19.855125 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:20.767806 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:45:21.701: INFO: Pool worker successfully pinned the images! Complete!
I0323 17:45:21.702042 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0323 17:45:22.854950 43333 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
Mar 23 17:45:22.855: INFO: <Kind: pinnedimageset, Name: tc-88378-pis-one, Namespace: > does not exist! No need to delete it!
I0323 17:45:23.770731 43333 client.go:421] Deleted {user.openshift.io/v1, Resource=users e2e-test-mco-pinnedimages-t8xwc-user}, err: <nil>
I0323 17:45:24.077279 43333 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-mco-pinnedimages-t8xwc}, err: <nil>
I0323 17:45:24.343380 43333 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~SIVc2wFpVErMw4P4tw3dKNBqFzGx5yvnxySKOR2ECvU}, err: <nil>
STEP: Destroying namespace "e2e-test-mco-pinnedimages-t8xwc" for this suite. @ 03/23/26 17:45:24.343
• [552.418 seconds]

Ran 1 of 1 Specs in 552.419 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
{
"name": "[sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet",
"lifecycle": "blocking",
"duration": 552418,
"startTime": "2026-03-23 12:06:12.166041 UTC",
"endTime": "2026-03-23 12:15:24.584913 UTC",
"result": "passed",
"output": " STEP: Creating a kubernetes client @ 03/23/26 17:36:12.166\nI0323 17:36:15.109044 43333 client.go:164] configPath is now "/tmp/configfile4176615151"\nI0323 17:36:15.109065 43333 client.go:291] The user is now "e2e-test-mco-pinnedimages-t8xwc-user"\nI0323 17:36:15.109072 43333 client.go:293] Creating project "e2e-test-mco-pinnedimages-t8xwc"\nI0323 17:36:15.454395 43333 client.go:302] Waiting on permissions in project "e2e-test-mco-pinnedimages-t8xwc" ...\nI0323 17:36:16.705082 43333 client.go:363] Waiting for ServiceAccount "default" to be provisioned...\nI0323 17:36:17.053661 43333 client.go:363] Waiting for ServiceAccount "builder" to be provisioned...\nI0323 17:36:17.566292 43333 client.go:363] Waiting for ServiceAccount "deployer" to be provisioned...\nI0323 17:36:17.930800 43333 client.go:373] Waiting for RoleBinding "system:image-builders" to be provisioned...\nI0323 17:36:18.737982 43333 client.go:373] Waiting for RoleBinding "system:deployers" to be provisioned...\nI0323 17:36:19.921407 43333 client.go:373] Waiting for RoleBinding "system:image-pullers" to be provisioned...\nI0323 17:36:20.740788 43333 client.go:404] Project "e2e-test-mco-pinnedimages-t8xwc" has been fully provisioned.\nI0323 17:36:20.741369 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp'\nNAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE\nmaster rendered-master-a755db9f3e395401159d15cca4c9f2e3 True False False 3 3 3 0 7h4m\nworker rendered-worker-225b39332f01eed9d1147f5e22629fe7 True False False 3 3 3 0 7h4m\nI0323 17:36:23.086441 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0323 17:36:24.124647 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nMar 23 17:36:25.148: INFO: \u003cKind: 
mcp, Name: worker, Namespace: \u003e \u003cKind: mcp, Name: master, Namespace: \u003e \u003cKind: mcp, Name: worker, Namespace: \u003e\n STEP: MCO Preconditions Checks @ 03/23/26 17:36:25.149\nMar 23 17:36:26.584: INFO: Check that master pool is ready for testing\nI0323 17:36:26.584932 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.machineCount}'\nMar 23 17:36:27.606: INFO: Num nodes: 3, wait time per node 13 minutes\nMar 23 17:36:27.606: INFO: Increase waiting time because it is master pool\nMar 23 17:36:27.606: INFO: Waiting 3m54s for MCP master to be completed.\nI0323 17:36:27.606448 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'\nI0323 17:36:28.622141 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'\nMar 23 17:36:29.489: INFO: MCP 'master' is ready for testing\nMar 23 17:36:29.489: INFO: Check that worker pool is ready for testing\nI0323 17:36:29.490004 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nMar 23 17:36:30.397: INFO: Num nodes: 3, wait time per node 13 minutes\nMar 23 17:36:30.397: INFO: Waiting 3m0s for MCP worker to be completed.\nI0323 17:36:30.398221 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'\nI0323 17:36:31.412420 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'\nMar 23 17:36:32.375: INFO: MCP 'worker' is ready for testing\nMar 23 17:36:32.375: INFO: Wait for MCC to get the leader lease\nI0323 17:36:32.375838 
43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pod -n openshift-machine-config-operator -l k8s-app=machine-config-controller -o jsonpath={.items[0].metadata.name}'\nI0323 17:36:33.359772 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig logs -n openshift-machine-config-operator -c machine-config-controller machine-config-controller-579ddc5498-lw5bt'\nMar 23 17:36:35.519: INFO: End of MCO Preconditions\n\n STEP: Remove the test image from all nodes in the pool @ 03/23/26 17:37:05.088\nMar 23 17:37:05.088: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-42-126.us-east-2.compute.internal\nI0323 17:37:08.103930 43333 client.go:743] showInfo is false\nMar 23 17:37:12.744: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9\nDeleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-42-126us-east-2computeinternal-debug-9b5ck ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nMar 23 17:37:12.744: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-84-112.us-east-2.compute.internal\nI0323 17:37:15.852584 43333 client.go:743] showInfo is false\nMar 23 17:37:28.056: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9\nDeleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-84-112us-east-2computeinternal-debug-5gtjx ...\nTo use host binaries, run chroot /host. 
Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nMar 23 17:37:28.056: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-9-163.us-east-2.compute.internal\nI0323 17:37:31.606830 43333 client.go:743] showInfo is false\nMar 23 17:37:42.568: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9\nDeleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-9-163us-east-2computeinternal-debug-b5tc9 ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nMar 23 17:37:42.568: INFO: OK!\n\n STEP: Create first PinnedImageSet with alpine image @ 03/23/26 17:37:42.568\nMar 23 17:37:42.568: INFO: Creating PinnedImageSet tc-88378-pis-one in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]\nMar 23 17:37:42.568: INFO: mco fixture dir is not initialized, start to create\nMar 23 17:37:42.569: INFO: mco fixture dir is initialized: /tmp/fixture-testdata-dir848482353\nI0323 17:37:45.571483 43333 client.go:743] showInfo is true\nI0323 17:37:45.571524 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir848482353/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-one POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'\nI0323 17:37:46.380195 43333 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-t8xwc-5e0j48t7config.json.stdout\nI0323 17:37:46.380354 43333 client.go:718] Running 'oc 
--kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-t8xwc-5e0j48t7config.json.stdout'\npinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-one created\nMar 23 17:37:47.149: INFO: OK!\n\n STEP: Wait for the first PinnedImageSet to be applied @ 03/23/26 17:37:47.149\nMar 23 17:37:47.149: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0323 17:38:47.175473 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 23 17:38:48.018: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0323 17:38:48.018369 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0323 17:39:14.295305 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:39:15.338822 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:39:16.359961 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:39:17.123383 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o 
jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:39:18.103261 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:39:18.902146 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 23 17:39:19.901: INFO: Pool worker successfully pinned the images! Complete!\nMar 23 17:39:19.901: INFO: OK!\n\n STEP: Verify the image is pinned on all nodes after creating the first PinnedImageSet @ 03/23/26 17:39:19.901\nI0323 17:39:21.074984 43333 client.go:743] showInfo is true\nI0323 17:39:21.075020 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:39:25.271773 43333 client.go:743] showInfo is true\nI0323 17:39:25.271805 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:39:29.413967 43333 client.go:743] showInfo is true\nI0323 17:39:29.413998 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json 
quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nMar 23 17:39:32.869: INFO: OK!\n\n STEP: Create second PinnedImageSet with the same alpine image @ 03/23/26 17:39:32.87\nMar 23 17:39:32.870: INFO: Creating PinnedImageSet tc-88378-pis-two in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]\nI0323 17:39:35.872700 43333 client.go:743] showInfo is true\nI0323 17:39:35.872790 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir848482353/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-two POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'\nI0323 17:39:36.637949 43333 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-t8xwc-ccaxuuzuconfig.json.stdout\nI0323 17:39:36.638136 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-t8xwc-ccaxuuzuconfig.json.stdout'\npinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-two created\nMar 23 17:39:37.403: INFO: OK!\n\n STEP: Wait for the second PinnedImageSet to be applied @ 03/23/26 17:39:37.403\nMar 23 17:39:37.403: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0323 17:40:37.462768 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 23 17:40:38.856: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0323 17:40:38.856239 43333 client.go:718] Running 'oc 
--kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0323 17:41:07.470262 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:41:08.389849 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:41:09.242246 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:41:10.314401 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:41:11.488677 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:41:12.527927 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 23 17:41:13.754: INFO: Pool worker successfully pinned the images! 
Complete!\nMar 23 17:41:13.754: INFO: OK!\n\n STEP: Verify the image is still pinned on all nodes after creating the second PinnedImageSet @ 03/23/26 17:41:13.754\nI0323 17:41:14.845939 43333 client.go:743] showInfo is true\nI0323 17:41:14.845969 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:41:19.605896 43333 client.go:743] showInfo is true\nI0323 17:41:19.605927 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:41:23.544115 43333 client.go:743] showInfo is true\nI0323 17:41:23.544145 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nMar 23 17:41:27.128: INFO: OK!\n\n STEP: Verify all MachineConfigNodes report healthy pinned image conditions @ 03/23/26 17:41:27.128\nI0323 17:41:27.128455 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:41:28.152879 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 23 
17:41:29.064: INFO: Value: False\nI0323 17:41:29.065168 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:41:29.996569 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 23 17:41:30.814: INFO: Value: False\nI0323 17:41:30.815271 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:41:31.941457 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 23 17:41:32.974: INFO: Value: False\nI0323 17:41:32.974477 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:41:34.287481 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 23 17:41:35.396: INFO: Value: False\nI0323 17:41:35.396507 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:41:36.415836 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 23 17:41:37.881: INFO: Value: False\nI0323 17:41:37.882178 43333 client.go:718] 
Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:41:39.108941 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 23 17:41:40.156: INFO: Value: False\nMar 23 17:41:40.157: INFO: OK!\n\n STEP: Delete the first PinnedImageSet @ 03/23/26 17:41:40.157\nI0323 17:41:40.157516 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-one'\nMar 23 17:41:41.226: INFO: OK!\n\n STEP: Wait for the pool to reconcile after deleting the first PinnedImageSet @ 03/23/26 17:41:41.226\nMar 23 17:41:41.226: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0323 17:42:41.285878 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 23 17:42:42.286: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0323 17:42:42.286785 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0323 17:43:09.945816 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:43:10.774177 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o 
jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:43:11.591445 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:43:12.542671 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:43:13.465565 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:43:14.291338 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 23 17:43:15.090: INFO: Pool worker successfully pinned the images! 
Complete!\nMar 23 17:43:15.090: INFO: OK!\n\n STEP: Verify the first PinnedImageSet is deleted and the second still exists @ 03/23/26 17:43:15.09\nI0323 17:43:15.090871 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'\nI0323 17:43:15.853716 43333 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:\nError from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found\nI0323 17:43:15.853902 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'\nMar 23 17:43:16.656: INFO: OK!\n\n STEP: Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet @ 03/23/26 17:43:16.657\nI0323 17:43:17.665577 43333 client.go:743] showInfo is true\nI0323 17:43:17.665605 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:43:21.587332 43333 client.go:743] showInfo is true\nI0323 17:43:21.587367 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:43:25.795681 43333 client.go:743] showInfo is true\nI0323 17:43:25.795717 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug 
node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nMar 23 17:43:29.599: INFO: OK!\n\n STEP: Verify MachineConfigNodes remain healthy after the deletion @ 03/23/26 17:43:29.599\nI0323 17:43:29.599711 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:43:30.424114 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 23 17:43:31.648: INFO: Value: False\nI0323 17:43:31.648434 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:43:32.465449 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 23 17:43:33.229: INFO: Value: False\nI0323 17:43:33.229399 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:43:34.249754 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 23 17:43:35.334: INFO: Value: False\nI0323 17:43:35.334706 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:43:36.209880 43333 client.go:718] 
Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:37.038: INFO: Value: False
I0323 17:43:37.038419 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:37.821735 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:39.021: INFO: Value: False
I0323 17:43:39.021601 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:40.089100 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:41.068: INFO: Value: False
Mar 23 17:43:41.068: INFO: OK!

I0323 17:43:41.068980 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
I0323 17:43:46.564389 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-two'
Mar 23 17:43:47.826: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:44:47.873574 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:44:48.900: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:44:48.900902 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:45:16.170463 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:17.017858 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:45:17.938289 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:18.963240 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:45:19.855125 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:20.767806 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:45:21.701: INFO: Pool worker successfully pinned the images! Complete!
I0323 17:45:21.702042 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0323 17:45:22.854950 43333 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
Mar 23 17:45:22.855: INFO: <Kind: pinnedimageset, Name: tc-88378-pis-one, Namespace: > does not exist! No need to delete it!
I0323 17:45:23.770731 43333 client.go:421] Deleted {user.openshift.io/v1, Resource=users e2e-test-mco-pinnedimages-t8xwc-user}, err: <nil>
I0323 17:45:24.077279 43333 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-mco-pinnedimages-t8xwc}, err: <nil>
I0323 17:45:24.343380 43333 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~SIVc2wFpVErMw4P4tw3dKNBqFzGx5yvnxySKOR2ECvU}, err: <nil>
STEP: Destroying namespace "e2e-test-mco-pinnedimages-t8xwc" for this suite. @ 03/23/26 17:45:24.343

…pinned by another PinnedImageSet

Automation Polarion OCP-88378 (MCO-2168): This test case verifies that when two PinnedImageSets reference the same image, deleting one PinnedImageSet does not remove the image from the nodes.

Signed-off-by: HarshwardhanPatil07 <harshpat@redhat.com>
@openshift-ci openshift-ci bot requested review from umohnani8 and yuqi-zhang March 23, 2026 06:55
@HarshwardhanPatil07
Author

/cc @ptalgulk01

@openshift-ci openshift-ci bot requested a review from ptalgulk01 March 23, 2026 06:56
@HarshwardhanPatil07
Author

/assign @HarshwardhanPatil07

@coderabbitai

coderabbitai bot commented Mar 23, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e347e829-e355-41d3-b9e0-d476b7295ae9

📥 Commits

Reviewing files that changed from the base of the PR and between 7d41f94 and afe3c60.

📒 Files selected for processing (1)
  • test/extended-priv/mco_pinnedimages.go

Walkthrough

Adds a serial/disruptive Ginkgo test that pins the same image with two PinnedImageSet resources, verifies pinning across nodes, deletes one PinnedImageSet and confirms the other still pins the image, and checks API-level rejection for duplicate-image entries.

Changes

Cohort / File(s) Summary
Test Addition
test/extended-priv/mco_pinnedimages.go
New serial/disruptive Ginkgo It test (~112 added lines) plus two helper functions: removeImageFromNodes (best-effort Rmi() on nodes) and createPinnedImageSetAndWait (create a PinnedImageSet, wait for pool pin completion, and verify the images are pinned on the nodes). Test flow:
  • remove the target image from all nodes
  • create pisOne and pisTwo pointing to the same image and wait for pinning
  • assert the MachineConfigNode conditions (PinnedImageSetsDegraded, PinnedImageSetsProgressing) are not set before deletion
  • delete pisOne, wait for pool reconciliation, and verify pisOne is deleted while pisTwo remains and the image stays pinned on all nodes
  • include an API-level negative test asserting that a duplicate-image PinnedImageSet is rejected with "Duplicate value" and no resource is persisted

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes



@HarshwardhanPatil07 HarshwardhanPatil07 changed the title test: add OCP-88378 Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet MCO-2168: Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet Mar 23, 2026
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 23, 2026
@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 23, 2026

@HarshwardhanPatil07: This pull request references MCO-2168 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Automation Polarion OCP-88378 (MCO-2168): This test case verifies that when two PinnedImageSets reference the same image, deleting one PinnedImageSet does not remove the image from the nodes.

- What I did
Automated Polarion test case OCP-88378 (MCO-2168).
Added a new Ginkgo e2e test in test/extended-priv/mco_pinnedimages.go that verifies deleting one PinnedImageSet does not unpin images that are still referenced by another PinnedImageSet.

  • Creates two PinnedImageSets (pis-one and pis-two) both referencing the same alpine image on the worker pool
  • Verifies the image is pinned on all nodes and MachineConfigNode conditions are healthy
  • Deletes pis-one and verifies the image remains pinned on all nodes because pis-two still references it

- How to verify it

  • Create two PinnedImageSets with the same image targeting the worker pool
  • Wait for both to reconcile and verify the image is pinned (crictl images --pinned)
  • Delete one PinnedImageSet
  • Confirm the image is still pinned on all worker nodes

- Description for the changelog

test: automate OCP-88378 - verify deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
test/extended-priv/mco_pinnedimages.go (1)

599-605: Optional: Consider using o.Eventually for the critical post-deletion pin check.

After deleting the first PinnedImageSet, the immediate o.Expect(ri.IsPinned()) check relies on waitForPinComplete having fully synchronized state. While this should work, using o.Eventually would be more defensive against edge-case timing issues, consistent with similar checks elsewhere in this file (e.g., line 499).

♻️ Optional: More defensive assertion
 exutil.By("Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet")
 for _, node := range allNodes {
     ri := NewRemoteImage(node, pinnedImage)
-    o.Expect(ri.IsPinned()).To(o.BeTrue(),
+    o.Eventually(ri.IsPinned, "2m", "20s").Should(o.BeTrue(),
         "%s should still be pinned because %s still references it", ri, pisTwo)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended-priv/mco_pinnedimages.go` around lines 599 - 605, The
post-deletion check should be made resilient to timing by using a retrying
assertion instead of an immediate check: replace the direct
o.Expect(ri.IsPinned()) calls in the loop that uses NewRemoteImage(...)
(variable ri) over allNodes/pinnedImage/pisTwo with an o.Eventually wrapper that
calls ri.IsPinned() and asserts true within a short timeout and polling
interval, so the test waits for the pin state to stabilize after deleting the
first PinnedImageSet.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b683e487-4cfa-4ddc-8a78-895d0842aa75

📥 Commits

Reviewing files that changed from the base of the PR and between 259e932 and 69f31d0.

📒 Files selected for processing (1)
  • test/extended-priv/mco_pinnedimages.go

@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 23, 2026

@HarshwardhanPatil07: This pull request references MCO-2168 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

View Test Execution Logs harshpat@harshpat-thinkpadp1gen4i:~/Downloads/repos/machine-config-operator$ ./_output/linux/amd64/machine-config-tests-ext run-test "[sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet" I0323 17:36:12.165968 43333 test_context.go:566] The --provider flag is not set. Continuing as if --provider=skeleton had been used. Running Suite: - /home/harshpat/Downloads/repos/machine-config-operator ======================================================================== Random Seed: 1774267572 - will randomize all specs

Will run 1 of 1 specs

[sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet
/home/harshpat/Downloads/repos/machine-config-operator/test/extended-priv/mco_pinnedimages.go:520
STEP: Creating a kubernetes client @ 03/23/26 17:36:12.166
I0323 17:36:15.109044 43333 client.go:164] configPath is now "/tmp/configfile4176615151"
I0323 17:36:15.109065 43333 client.go:291] The user is now "e2e-test-mco-pinnedimages-t8xwc-user"
I0323 17:36:15.109072 43333 client.go:293] Creating project "e2e-test-mco-pinnedimages-t8xwc"
I0323 17:36:15.454395 43333 client.go:302] Waiting on permissions in project "e2e-test-mco-pinnedimages-t8xwc" ...
I0323 17:36:16.705082 43333 client.go:363] Waiting for ServiceAccount "default" to be provisioned...
I0323 17:36:17.053661 43333 client.go:363] Waiting for ServiceAccount "builder" to be provisioned...
I0323 17:36:17.566292 43333 client.go:363] Waiting for ServiceAccount "deployer" to be provisioned...
I0323 17:36:17.930800 43333 client.go:373] Waiting for RoleBinding "system:image-builders" to be provisioned...
I0323 17:36:18.737982 43333 client.go:373] Waiting for RoleBinding "system:deployers" to be provisioned...
I0323 17:36:19.921407 43333 client.go:373] Waiting for RoleBinding "system:image-pullers" to be provisioned...
I0323 17:36:20.740788 43333 client.go:404] Project "e2e-test-mco-pinnedimages-t8xwc" has been fully provisioned.
I0323 17:36:20.741369 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp'
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-a755db9f3e395401159d15cca4c9f2e3 True False False 3 3 3 0 7h4m
worker rendered-worker-225b39332f01eed9d1147f5e22629fe7 True False False 3 3 3 0 7h4m
I0323 17:36:23.086441 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:36:24.124647 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
Mar 23 17:36:25.148: INFO: <Kind: mcp, Name: worker, Namespace: > <Kind: mcp, Name: master, Namespace: > <Kind: mcp, Name: worker, Namespace: >
STEP: MCO Preconditions Checks @ 03/23/26 17:36:25.149
Mar 23 17:36:26.584: INFO: Check that master pool is ready for testing
I0323 17:36:26.584932 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.machineCount}'
Mar 23 17:36:27.606: INFO: Num nodes: 3, wait time per node 13 minutes
Mar 23 17:36:27.606: INFO: Increase waiting time because it is master pool
Mar 23 17:36:27.606: INFO: Waiting 3m54s for MCP master to be completed.
I0323 17:36:27.606448 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'
I0323 17:36:28.622141 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'
Mar 23 17:36:29.489: INFO: MCP 'master' is ready for testing
Mar 23 17:36:29.489: INFO: Check that worker pool is ready for testing
I0323 17:36:29.490004 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
Mar 23 17:36:30.397: INFO: Num nodes: 3, wait time per node 13 minutes
Mar 23 17:36:30.397: INFO: Waiting 3m0s for MCP worker to be completed.
I0323 17:36:30.398221 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'
I0323 17:36:31.412420 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'
Mar 23 17:36:32.375: INFO: MCP 'worker' is ready for testing
Mar 23 17:36:32.375: INFO: Wait for MCC to get the leader lease
I0323 17:36:32.375838 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pod -n openshift-machine-config-operator -l k8s-app=machine-config-controller -o jsonpath={.items[0].metadata.name}'
I0323 17:36:33.359772 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig logs -n openshift-machine-config-operator -c machine-config-controller machine-config-controller-579ddc5498-lw5bt'
Mar 23 17:36:35.519: INFO: End of MCO Preconditions

STEP: Remove the test image from all nodes in the pool @ 03/23/26 17:37:05.088
Mar 23 17:37:05.088: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-42-126.us-east-2.compute.internal
I0323 17:37:08.103930 43333 client.go:743] showInfo is false
Mar 23 17:37:12.744: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9
Deleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-42-126us-east-2computeinternal-debug-9b5ck ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
Mar 23 17:37:12.744: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-84-112.us-east-2.compute.internal
I0323 17:37:15.852584 43333 client.go:743] showInfo is false
Mar 23 17:37:28.056: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9
Deleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-84-112us-east-2computeinternal-debug-5gtjx ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
Mar 23 17:37:28.056: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-9-163.us-east-2.compute.internal
I0323 17:37:31.606830 43333 client.go:743] showInfo is false
Mar 23 17:37:42.568: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9
Deleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-9-163us-east-2computeinternal-debug-b5tc9 ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
Mar 23 17:37:42.568: INFO: OK!

STEP: Create first PinnedImageSet with alpine image @ 03/23/26 17:37:42.568
Mar 23 17:37:42.568: INFO: Creating PinnedImageSet tc-88378-pis-one in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
Mar 23 17:37:42.568: INFO: mco fixture dir is not initialized, start to create
Mar 23 17:37:42.569: INFO: mco fixture dir is initialized: /tmp/fixture-testdata-dir848482353
I0323 17:37:45.571483 43333 client.go:743] showInfo is true
I0323 17:37:45.571524 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir848482353/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-one POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0323 17:37:46.380195 43333 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-t8xwc-5e0j48t7config.json.stdout
I0323 17:37:46.380354 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-t8xwc-5e0j48t7config.json.stdout'
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-one created
Mar 23 17:37:47.149: INFO: OK!

STEP: Wait for the first PinnedImageSet to be applied @ 03/23/26 17:37:47.149
Mar 23 17:37:47.149: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:38:47.175473 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:38:48.018: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:38:48.018369 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:39:14.295305 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:39:15.338822 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:39:16.359961 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:39:17.123383 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:39:18.103261 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:39:18.902146 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:39:19.901: INFO: Pool worker successfully pinned the images! Complete!
Mar 23 17:39:19.901: INFO: OK!

STEP: Verify the image is pinned on all nodes after creating the first PinnedImageSet @ 03/23/26 17:39:19.901
I0323 17:39:21.074984 43333 client.go:743] showInfo is true
I0323 17:39:21.075020 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:39:25.271773 43333 client.go:743] showInfo is true
I0323 17:39:25.271805 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:39:29.413967 43333 client.go:743] showInfo is true
I0323 17:39:29.413998 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 23 17:39:32.869: INFO: OK!

STEP: Create second PinnedImageSet with the same alpine image @ 03/23/26 17:39:32.87
Mar 23 17:39:32.870: INFO: Creating PinnedImageSet tc-88378-pis-two in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
I0323 17:39:35.872700 43333 client.go:743] showInfo is true
I0323 17:39:35.872790 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir848482353/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-two POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0323 17:39:36.637949 43333 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-t8xwc-ccaxuuzuconfig.json.stdout
I0323 17:39:36.638136 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-t8xwc-ccaxuuzuconfig.json.stdout'
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-two created
Mar 23 17:39:37.403: INFO: OK!

STEP: Wait for the second PinnedImageSet to be applied @ 03/23/26 17:39:37.403
Mar 23 17:39:37.403: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:40:37.462768 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:40:38.856: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:40:38.856239 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:41:07.470262 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:41:08.389849 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:41:09.242246 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:41:10.314401 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:41:11.488677 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:41:12.527927 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:41:13.754: INFO: Pool worker successfully pinned the images! Complete!
Mar 23 17:41:13.754: INFO: OK!

STEP: Verify the image is still pinned on all nodes after creating the second PinnedImageSet @ 03/23/26 17:41:13.754
I0323 17:41:14.845939 43333 client.go:743] showInfo is true
I0323 17:41:14.845969 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:41:19.605896 43333 client.go:743] showInfo is true
I0323 17:41:19.605927 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:41:23.544115 43333 client.go:743] showInfo is true
I0323 17:41:23.544145 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 23 17:41:27.128: INFO: OK!

STEP: Verify all MachineConfigNodes report healthy pinned image conditions @ 03/23/26 17:41:27.128
I0323 17:41:27.128455 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:28.152879 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:41:29.064: INFO: Value: False
I0323 17:41:29.065168 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:29.996569 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:30.814: INFO: Value: False
I0323 17:41:30.815271 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:31.941457 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:41:32.974: INFO: Value: False
I0323 17:41:32.974477 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:34.287481 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:35.396: INFO: Value: False
I0323 17:41:35.396507 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:36.415836 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:41:37.881: INFO: Value: False
I0323 17:41:37.882178 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:39.108941 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:40.156: INFO: Value: False
Mar 23 17:41:40.157: INFO: OK!

STEP: Delete the first PinnedImageSet @ 03/23/26 17:41:40.157
I0323 17:41:40.157516 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-one'
Mar 23 17:41:41.226: INFO: OK!

STEP: Wait for the pool to reconcile after deleting the first PinnedImageSet @ 03/23/26 17:41:41.226
Mar 23 17:41:41.226: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:42:41.285878 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:42:42.286: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:42:42.286785 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:43:09.945816 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:10.774177 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:43:11.591445 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:12.542671 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:43:13.465565 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:14.291338 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:43:15.090: INFO: Pool worker successfully pinned the images! Complete!
Mar 23 17:43:15.090: INFO: OK!

STEP: Verify the first PinnedImageSet is deleted and the second still exists @ 03/23/26 17:43:15.09
I0323 17:43:15.090871 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0323 17:43:15.853716 43333 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
I0323 17:43:15.853902 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
Mar 23 17:43:16.656: INFO: OK!

STEP: Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet @ 03/23/26 17:43:16.657
I0323 17:43:17.665577 43333 client.go:743] showInfo is true
I0323 17:43:17.665605 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:43:21.587332 43333 client.go:743] showInfo is true
I0323 17:43:21.587367 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:43:25.795681 43333 client.go:743] showInfo is true
I0323 17:43:25.795717 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 23 17:43:29.599: INFO: OK!

STEP: Verify MachineConfigNodes remain healthy after the deletion @ 03/23/26 17:43:29.599
I0323 17:43:29.599711 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:30.424114 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:31.648: INFO: Value: False
I0323 17:43:31.648434 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:32.465449 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:33.229: INFO: Value: False
I0323 17:43:33.229399 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:34.249754 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:35.334: INFO: Value: False
I0323 17:43:35.334706 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:36.209880 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:37.038: INFO: Value: False
I0323 17:43:37.038419 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:37.821735 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:39.021: INFO: Value: False
I0323 17:43:39.021601 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:40.089100 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:41.068: INFO: Value: False
Mar 23 17:43:41.068: INFO: OK!

I0323 17:43:41.068980 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
I0323 17:43:46.564389 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-two'
Mar 23 17:43:47.826: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:44:47.873574 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:44:48.900: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:44:48.900902 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:45:16.170463 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:17.017858 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:45:17.938289 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:18.963240 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:45:19.855125 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:20.767806 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:45:21.701: INFO: Pool worker successfully pinned the images! Complete!
I0323 17:45:21.702042 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0323 17:45:22.854950 43333 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
Mar 23 17:45:22.855: INFO: <Kind: pinnedimageset, Name: tc-88378-pis-one, Namespace: > does not exist! No need to delete it!
I0323 17:45:23.770731 43333 client.go:421] Deleted {user.openshift.io/v1, Resource=users e2e-test-mco-pinnedimages-t8xwc-user}, err:
I0323 17:45:24.077279 43333 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-mco-pinnedimages-t8xwc}, err:
I0323 17:45:24.343380 43333 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~SIVc2wFpVErMw4P4tw3dKNBqFzGx5yvnxySKOR2ECvU}, err:
STEP: Destroying namespace "e2e-test-mco-pinnedimages-t8xwc" for this suite. @ 03/23/26 17:45:24.343
• [552.418 seconds]

Ran 1 of 1 Specs in 552.419 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
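The `crictl images --pinned -o json` calls in the log above are how the test decides the alpine digest is still pinned after deleting `tc-88378-pis-one`. A minimal, self-contained sketch of that check follows; the field names `images`, `repoDigests`, and `pinned` are assumptions about crictl's JSON output shape, and `is_digest_pinned` is a hypothetical helper, not part of the test suite:

```python
import json

# Hypothetical sample of `crictl images --pinned -o json` output; the exact
# field layout is an assumption modeled on CRI image status fields.
CRICTL_OUTPUT = """
{
  "images": [
    {
      "id": "sha256:0123456789abcdef",
      "repoDigests": [
        "quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"
      ],
      "pinned": true
    }
  ]
}
"""

def is_digest_pinned(crictl_json: str, digest: str) -> bool:
    """Return True if any listed image carries the digest and is marked pinned."""
    data = json.loads(crictl_json)
    for image in data.get("images", []):
        if image.get("pinned") and any(digest in d for d in image.get("repoDigests", [])):
            return True
    return False

print(is_digest_pinned(
    CRICTL_OUTPUT,
    "sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083",
))
```

Running the same check before and after the deletion of the first PinnedImageSet is what distinguishes "image unpinned" from "image still pinned via the second PinnedImageSet".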
[
  {
    "name": "[sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet",
    "lifecycle": "blocking",
    "duration": 552418,
    "startTime": "2026-03-23 12:06:12.166041 UTC",
    "endTime": "2026-03-23 12:15:24.584913 UTC",
    "result": "passed",
"output": " STEP: Creating a kubernetes client @ 03/23/26 17:36:12.166\nI0323 17:36:15.109044 43333 client.go:164] configPath is now "/tmp/configfile4176615151"\nI0323 17:36:15.109065 43333 client.go:291] The user is now "e2e-test-mco-pinnedimages-t8xwc-user"\nI0323 17:36:15.109072 43333 client.go:293] Creating project "e2e-test-mco-pinnedimages-t8xwc"\nI0323 17:36:15.454395 43333 client.go:302] Waiting on permissions in project "e2e-test-mco-pinnedimages-t8xwc" ...\nI0323 17:36:16.705082 43333 client.go:363] Waiting for ServiceAccount "default" to be provisioned...\nI0323 17:36:17.053661 43333 client.go:363] Waiting for ServiceAccount "builder" to be provisioned...\nI0323 17:36:17.566292 43333 client.go:363] Waiting for ServiceAccount "deployer" to be provisioned...\nI0323 17:36:17.930800 43333 client.go:373] Waiting for RoleBinding "system:image-builders" to be provisioned...\nI0323 17:36:18.737982 43333 client.go:373] Waiting for RoleBinding "system:deployers" to be provisioned...\nI0323 17:36:19.921407 43333 client.go:373] Waiting for RoleBinding "system:image-pullers" to be provisioned...\nI0323 17:36:20.740788 43333 client.go:404] Project "e2e-test-mco-pinnedimages-t8xwc" has been fully provisioned.\nI0323 17:36:20.741369 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp'\nNAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE\nmaster rendered-master-a755db9f3e395401159d15cca4c9f2e3 True False False 3 3 3 0 7h4m\nworker rendered-worker-225b39332f01eed9d1147f5e22629fe7 True False False 3 3 3 0 7h4m\nI0323 17:36:23.086441 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0323 17:36:24.124647 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nMar 23 17:36:25.148: INFO: \u003cKind: 
mcp, Name: worker, Namespace: \u003e \u003cKind: mcp, Name: master, Namespace: \u003e \u003cKind: mcp, Name: worker, Namespace: \u003e\n STEP: MCO Preconditions Checks @ 03/23/26 17:36:25.149\nMar 23 17:36:26.584: INFO: Check that master pool is ready for testing\nI0323 17:36:26.584932 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.machineCount}'\nMar 23 17:36:27.606: INFO: Num nodes: 3, wait time per node 13 minutes\nMar 23 17:36:27.606: INFO: Increase waiting time because it is master pool\nMar 23 17:36:27.606: INFO: Waiting 3m54s for MCP master to be completed.\nI0323 17:36:27.606448 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'\nI0323 17:36:28.622141 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'\nMar 23 17:36:29.489: INFO: MCP 'master' is ready for testing\nMar 23 17:36:29.489: INFO: Check that worker pool is ready for testing\nI0323 17:36:29.490004 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nMar 23 17:36:30.397: INFO: Num nodes: 3, wait time per node 13 minutes\nMar 23 17:36:30.397: INFO: Waiting 3m0s for MCP worker to be completed.\nI0323 17:36:30.398221 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'\nI0323 17:36:31.412420 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'\nMar 23 17:36:32.375: INFO: MCP 'worker' is ready for testing\nMar 23 17:36:32.375: INFO: Wait for MCC to get the leader lease\nI0323 17:36:32.375838 
43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pod -n openshift-machine-config-operator -l k8s-app=machine-config-controller -o jsonpath={.items[0].metadata.name}'\nI0323 17:36:33.359772 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig logs -n openshift-machine-config-operator -c machine-config-controller machine-config-controller-579ddc5498-lw5bt'\nMar 23 17:36:35.519: INFO: End of MCO Preconditions\n\n STEP: Remove the test image from all nodes in the pool @ 03/23/26 17:37:05.088\nMar 23 17:37:05.088: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-42-126.us-east-2.compute.internal\nI0323 17:37:08.103930 43333 client.go:743] showInfo is false\nMar 23 17:37:12.744: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9\nDeleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-42-126us-east-2computeinternal-debug-9b5ck ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nMar 23 17:37:12.744: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-84-112.us-east-2.compute.internal\nI0323 17:37:15.852584 43333 client.go:743] showInfo is false\nMar 23 17:37:28.056: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9\nDeleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-84-112us-east-2computeinternal-debug-5gtjx ...\nTo use host binaries, run chroot /host. 
Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nMar 23 17:37:28.056: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-9-163.us-east-2.compute.internal\nI0323 17:37:31.606830 43333 client.go:743] showInfo is false\nMar 23 17:37:42.568: INFO: Deleted: quay.io/openshifttest/alpine@sha256:b85ab970ed9d2f6dd270a76897c0dd7de8e2e3beb504a9c3a568ad1c283c58a9\nDeleted: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-9-163us-east-2computeinternal-debug-b5tc9 ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nMar 23 17:37:42.568: INFO: OK!\n\n STEP: Create first PinnedImageSet with alpine image @ 03/23/26 17:37:42.568\nMar 23 17:37:42.568: INFO: Creating PinnedImageSet tc-88378-pis-one in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]\nMar 23 17:37:42.568: INFO: mco fixture dir is not initialized, start to create\nMar 23 17:37:42.569: INFO: mco fixture dir is initialized: /tmp/fixture-testdata-dir848482353\nI0323 17:37:45.571483 43333 client.go:743] showInfo is true\nI0323 17:37:45.571524 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir848482353/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-one POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'\nI0323 17:37:46.380195 43333 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-t8xwc-5e0j48t7config.json.stdout\nI0323 17:37:46.380354 43333 client.go:718] Running 'oc 
--kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-t8xwc-5e0j48t7config.json.stdout'\npinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-one created\nMar 23 17:37:47.149: INFO: OK!\n\n STEP: Wait for the first PinnedImageSet to be applied @ 03/23/26 17:37:47.149\nMar 23 17:37:47.149: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0323 17:38:47.175473 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 23 17:38:48.018: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0323 17:38:48.018369 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0323 17:39:14.295305 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:39:15.338822 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:39:16.359961 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:39:17.123383 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o 
jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:39:18.103261 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:39:18.902146 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 23 17:39:19.901: INFO: Pool worker successfully pinned the images! Complete!\nMar 23 17:39:19.901: INFO: OK!\n\n STEP: Verify the image is pinned on all nodes after creating the first PinnedImageSet @ 03/23/26 17:39:19.901\nI0323 17:39:21.074984 43333 client.go:743] showInfo is true\nI0323 17:39:21.075020 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:39:25.271773 43333 client.go:743] showInfo is true\nI0323 17:39:25.271805 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:39:29.413967 43333 client.go:743] showInfo is true\nI0323 17:39:29.413998 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json 
quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nMar 23 17:39:32.869: INFO: OK!\n\n STEP: Create second PinnedImageSet with the same alpine image @ 03/23/26 17:39:32.87\nMar 23 17:39:32.870: INFO: Creating PinnedImageSet tc-88378-pis-two in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]\nI0323 17:39:35.872700 43333 client.go:743] showInfo is true\nI0323 17:39:35.872790 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir848482353/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-two POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'\nI0323 17:39:36.637949 43333 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-t8xwc-ccaxuuzuconfig.json.stdout\nI0323 17:39:36.638136 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-t8xwc-ccaxuuzuconfig.json.stdout'\npinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-two created\nMar 23 17:39:37.403: INFO: OK!\n\n STEP: Wait for the second PinnedImageSet to be applied @ 03/23/26 17:39:37.403\nMar 23 17:39:37.403: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0323 17:40:37.462768 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 23 17:40:38.856: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0323 17:40:38.856239 43333 client.go:718] Running 'oc 
--kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0323 17:41:07.470262 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:41:08.389849 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:41:09.242246 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:41:10.314401 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0323 17:41:11.488677 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0323 17:41:12.527927 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 23 17:41:13.754: INFO: Pool worker successfully pinned the images! 
Complete!\nMar 23 17:41:13.754: INFO: OK!\n\n STEP: Verify the image is still pinned on all nodes after creating the second PinnedImageSet @ 03/23/26 17:41:13.754\nI0323 17:41:14.845939 43333 client.go:743] showInfo is true\nI0323 17:41:14.845969 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:41:19.605896 43333 client.go:743] showInfo is true\nI0323 17:41:19.605927 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0323 17:41:23.544115 43333 client.go:743] showInfo is true\nI0323 17:41:23.544145 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nMar 23 17:41:27.128: INFO: OK!\n\n STEP: Verify all MachineConfigNodes report healthy pinned image conditions @ 03/23/26 17:41:27.128\nI0323 17:41:27.128455 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'\nI0323 17:41:28.152879 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 23 
17:41:29.064: INFO: Value: False
I0323 17:41:29.065168 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:29.996569 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:30.814: INFO: Value: False
I0323 17:41:30.815271 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:31.941457 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:41:32.974: INFO: Value: False
I0323 17:41:32.974477 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:34.287481 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:35.396: INFO: Value: False
I0323 17:41:35.396507 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:36.415836 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:41:37.881: INFO: Value: False
I0323 17:41:37.882178 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:41:39.108941 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:41:40.156: INFO: Value: False
Mar 23 17:41:40.157: INFO: OK!

STEP: Delete the first PinnedImageSet @ 03/23/26 17:41:40.157
I0323 17:41:40.157516 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-one'
Mar 23 17:41:41.226: INFO: OK!

STEP: Wait for the pool to reconcile after deleting the first PinnedImageSet @ 03/23/26 17:41:41.226
Mar 23 17:41:41.226: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:42:41.285878 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:42:42.286: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:42:42.286785 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:43:09.945816 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:10.774177 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:43:11.591445 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:12.542671 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:43:13.465565 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:43:14.291338 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:43:15.090: INFO: Pool worker successfully pinned the images! Complete!
Mar 23 17:43:15.090: INFO: OK!

STEP: Verify the first PinnedImageSet is deleted and the second still exists @ 03/23/26 17:43:15.09
I0323 17:43:15.090871 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0323 17:43:15.853716 43333 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
I0323 17:43:15.853902 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
Mar 23 17:43:16.656: INFO: OK!

STEP: Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet @ 03/23/26 17:43:16.657
I0323 17:43:17.665577 43333 client.go:743] showInfo is true
I0323 17:43:17.665605 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-42-126.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:43:21.587332 43333 client.go:743] showInfo is true
I0323 17:43:21.587367 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-112.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0323 17:43:25.795681 43333 client.go:743] showInfo is true
I0323 17:43:25.795717 43333 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-t8xwc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-9-163.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 23 17:43:29.599: INFO: OK!

STEP: Verify MachineConfigNodes remain healthy after the deletion @ 03/23/26 17:43:29.599
I0323 17:43:29.599711 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:30.424114 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:31.648: INFO: Value: False
I0323 17:43:31.648434 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:32.465449 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:33.229: INFO: Value: False
I0323 17:43:33.229399 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:34.249754 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:35.334: INFO: Value: False
I0323 17:43:35.334706 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:36.209880 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:37.038: INFO: Value: False
I0323 17:43:37.038419 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:37.821735 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 23 17:43:39.021: INFO: Value: False
I0323 17:43:39.021601 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={}'
I0323 17:43:40.089100 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 23 17:43:41.068: INFO: Value: False
Mar 23 17:43:41.068: INFO: OK!

I0323 17:43:41.068980 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
I0323 17:43:46.564389 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-two'
Mar 23 17:43:47.826: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0323 17:44:47.873574 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 23 17:44:48.900: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0323 17:44:48.900902 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0323 17:45:16.170463 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:17.017858 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-42-126.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:45:17.938289 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:18.963240 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-112.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0323 17:45:19.855125 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0323 17:45:20.767806 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-9-163.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 23 17:45:21.701: INFO: Pool worker successfully pinned the images! Complete!
I0323 17:45:21.702042 43333 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0323 17:45:22.854950 43333 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
Mar 23 17:45:22.855: INFO: <Kind: pinnedimageset, Name: tc-88378-pis-one, Namespace: > does not exist! No need to delete it!
I0323 17:45:23.770731 43333 client.go:421] Deleted {user.openshift.io/v1, Resource=users e2e-test-mco-pinnedimages-t8xwc-user}, err: <nil>
I0323 17:45:24.077279 43333 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-mco-pinnedimages-t8xwc}, err: <nil>
I0323 17:45:24.343380 43333 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~SIVc2wFpVErMw4P4tw3dKNBqFzGx5yvnxySKOR2ECvU}, err: <nil>
STEP: Destroying namespace "e2e-test-mco-pinnedimages-t8xwc" for this suite. @ 03/23/26 17:45:24.343
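For reference, the pair of objects exercised in the run above can be sketched as two PinnedImageSet manifests pinning the same digest. This is a minimal sketch, not the template output: the test renders its manifests from the repo's `generic-pinned-image-set.yaml` with `NAME`/`POOL`/`IMAGES` parameters, and the `machineconfiguration.openshift.io/role` label shown here as the pool-targeting mechanism is an assumption based on the documented PinnedImageSet shape.

```yaml
# Sketch only: two PinnedImageSets pinning the same digest on the worker pool.
# The role label is assumed; the e2e test generates these via oc process
# from generic-pinned-image-set.yaml rather than applying literal manifests.
apiVersion: machineconfiguration.openshift.io/v1
kind: PinnedImageSet
metadata:
  name: tc-88378-pis-one
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  pinnedImages:
    - name: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
---
apiVersion: machineconfiguration.openshift.io/v1
kind: PinnedImageSet
metadata:
  name: tc-88378-pis-two
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  pinnedImages:
    - name: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
```

Deleting `tc-88378-pis-one` should leave the digest pinned because `tc-88378-pis-two` still references it, which is what the `crictl images --pinned` checks in the log confirm.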

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@isabella-janssen isabella-janssen left a comment


/lgtm

@isabella-janssen

/test unit

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 23, 2026
@openshift-ci

openshift-ci bot commented Mar 23, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: HarshwardhanPatil07, isabella-janssen

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 23, 2026
@HarshwardhanPatil07

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 24, 2026
@HarshwardhanPatil07

/test e2e-gcp-op-ocl-part1

@HarshwardhanPatil07

/test e2e-hypershift

@HarshwardhanPatil07

HarshwardhanPatil07 commented Mar 24, 2026

@isabella-janssen Thank you for the review! I will check with QE whether we need to add one more automation to verify that duplicate images added within the same PinnedImageSet are handled gracefully. Added the hold label in the meantime.

…ice in the pinnedImages array)

Signed-off-by: HarshwardhanPatil07 <harshpat@redhat.com>
@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Mar 26, 2026
@openshift-ci

openshift-ci bot commented Mar 26, 2026

New changes are detected. LGTM label has been removed.

@HarshwardhanPatil07

/unhold

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 26, 2026
@HarshwardhanPatil07

New changes: added a test step verifying that a PinnedImageSet containing duplicate images is rejected by the API.

The validation behaves exactly as expected:

    STEP: Verify that a PinnedImageSet with duplicate images is rejected by the API @ 03/26/26 11:58:23.241
  Mar 26 11:58:23.241: INFO: Creating PinnedImageSet tc-88378-pis-duplicate in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
  I0326 11:58:26.243497 203228 client.go:743] showInfo is true
  I0326 11:58:26.243530 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-duplicate POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"},{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
  I0326 11:58:28.233311 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout
  I0326 11:58:28.233429 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout'
  I0326 11:58:29.149912 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout:
  The PinnedImageSet "tc-88378-pis-duplicate" is invalid: spec.pinnedImages[1]: Duplicate value: map[string]interface {}{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}
  The PinnedImageSet "tc-88378-pis-duplicate" is invalid: spec.pinnedImages[1]: Duplicate value: map[string]interface {}{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}
  I0326 11:58:29.149951 203228 template.go:83] fail to create/apply resource exit status 1
  Mar 26 11:58:29.149: INFO: OK!

    STEP: Verify the duplicate PinnedImageSet was not created @ 03/26/26 11:58:29.15
  I0326 11:58:29.150144 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-duplicate -o jsonpath={.}'
  I0326 11:58:30.066101 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-duplicate -o jsonpath={.}:
  Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-duplicate" not found
  Mar 26 11:58:30.066: INFO: OK!
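The manifest the API rejects in this step can be sketched as follows. Field names are taken from the `spec.pinnedImages[1]: Duplicate value` error above; the sketch omits the template's pool parameter, so everything beyond `spec.pinnedImages` is illustrative:

```yaml
# Sketch of the rejected manifest: the same digest appears twice in
# spec.pinnedImages, violating the list's uniqueness constraint, so
# `oc create` fails server-side and the object is never persisted.
apiVersion: machineconfiguration.openshift.io/v1
kind: PinnedImageSet
metadata:
  name: tc-88378-pis-duplicate
spec:
  pinnedImages:
    - name: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
    - name: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
```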
Logs
  Will run 1 of 1 specs  ------------------------------  [sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet  /home/harshpat/Downloads/repos/machine-config-operator/test/extended-priv/mco_pinnedimages.go:520    STEP: Creating a kubernetes client @ 03/26/26 11:49:27.179  I0326 11:49:30.080762 203228 client.go:164] configPath is now "/tmp/configfile2140474964"  I0326 11:49:30.080782 203228 client.go:291] The user is now "e2e-test-mco-pinnedimages-gzhpl-user"  I0326 11:49:30.080786 203228 client.go:293] Creating project "e2e-test-mco-pinnedimages-gzhpl"  I0326 11:49:30.438487 203228 client.go:302] Waiting on permissions in project "e2e-test-mco-pinnedimages-gzhpl" ...  I0326 11:49:31.877762 203228 client.go:363] Waiting for ServiceAccount "default" to be provisioned...  I0326 11:49:32.262192 203228 client.go:363] Waiting for ServiceAccount "builder" to be provisioned...  I0326 11:49:32.644816 203228 client.go:363] Waiting for ServiceAccount "deployer" to be provisioned...  I0326 11:49:33.028098 203228 client.go:373] Waiting for RoleBinding "system:image-builders" to be provisioned...  I0326 11:49:33.873137 203228 client.go:373] Waiting for RoleBinding "system:deployers" to be provisioned...  I0326 11:49:34.723304 203228 client.go:373] Waiting for RoleBinding "system:image-pullers" to be provisioned...  I0326 11:49:35.577062 203228 client.go:404] Project "e2e-test-mco-pinnedimages-gzhpl" has been fully provisioned.  
I0326 11:49:35.577518 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp'  NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE  master   rendered-master-44e887cd7048827fba0e76164ff43694   True      False      False      3              3                   3                     0                      176m  worker   rendered-worker-ec9a17283884ddee0a2875a578e37a2e   True      False      False      3              3                   3                     0                      176m  I0326 11:49:36.530468 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'  I0326 11:49:37.466423 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'  Mar 26 11:49:38.665: INFO:       STEP: MCO Preconditions Checks @ 03/26/26 11:49:38.665  Mar 26 11:49:39.885: INFO: Check that master pool is ready for testing  I0326 11:49:39.885517 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.machineCount}'  Mar 26 11:49:40.820: INFO: Num nodes: 3, wait time per node 13 minutes  Mar 26 11:49:40.820: INFO: Increase waiting time because it is master pool  Mar 26 11:49:40.820: INFO: Waiting 3m54s for MCP master to be completed.  
I0326 11:49:40.820435 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'  I0326 11:49:41.761157 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'  Mar 26 11:49:42.702: INFO: MCP 'master' is ready for testing  Mar 26 11:49:42.702: INFO: Check that worker pool is ready for testing  I0326 11:49:42.702477 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'  Mar 26 11:49:43.619: INFO: Num nodes: 3, wait time per node 13 minutes  Mar 26 11:49:43.619: INFO: Waiting 3m0s for MCP worker to be completed.  I0326 11:49:43.619388 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'  I0326 11:49:44.556156 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'  Mar 26 11:49:45.496: INFO: MCP 'worker' is ready for testing  Mar 26 11:49:45.496: INFO: Wait for MCC to get the leader lease  I0326 11:49:45.496989 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pod -n openshift-machine-config-operator -l k8s-app=machine-config-controller -o jsonpath={.items[0].metadata.name}'  I0326 11:49:47.096604 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig logs -n openshift-machine-config-operator -c machine-config-controller machine-config-controller-6d9c86bdfb-vjqnq'  Mar 26 11:49:49.197: INFO: End of MCO Preconditions    STEP: Remove the test image from all nodes in the pool @ 03/26/26 11:50:17.66  Mar 26 11:50:17.660: INFO: Removing image 
quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-3-16.us-east-2.compute.internal  I0326 11:50:20.756208 203228 client.go:743] showInfo is false  I0326 11:50:28.832241 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-3-16us-east-2computeinternal-debug-gqvvt ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  error: non-zero exit code from debug container  Mar 26 11:50:30.060: INFO: Error happened: exit status 1.  Retrying command. Num retries: 1  I0326 11:50:33.420970 203228 client.go:743] showInfo is false  I0326 11:50:36.852202 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-3-16us-east-2computeinternal-debug-nwjcz ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  error: non-zero exit code from debug container  Mar 26 11:50:38.047: INFO: Error happened: exit status 1.  Retrying command. 
Num retries: 2  I0326 11:50:43.123059 203228 client.go:743] showInfo is false  I0326 11:50:47.247423 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-3-16us-east-2computeinternal-debug-jvchz ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  error: non-zero exit code from debug container  Mar 26 11:50:48.471: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  Starting pod/ip-10-0-3-16us-east-2computeinternal-debug-jvchz ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  
error: non-zero exit code from debug container  Mar 26 11:50:48.471: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-59-220.us-east-2.compute.internal  I0326 11:50:51.572904 203228 client.go:743] showInfo is false  I0326 11:50:59.151955 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-59-220us-east-2computeinternal-debug-kvpsk ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  error: non-zero exit code from debug container  Mar 26 11:51:00.378: INFO: Error happened: exit status 1.  Retrying command. Num retries: 1  I0326 11:51:03.465824 203228 client.go:743] showInfo is false  I0326 11:51:07.199790 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-59-220us-east-2computeinternal-debug-4qjw7 ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  
error: non-zero exit code from debug container  Mar 26 11:51:08.398: INFO: Error happened: exit status 1.  Retrying command. Num retries: 2  I0326 11:51:11.426383 203228 client.go:743] showInfo is false  I0326 11:51:15.924570 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-59-220us-east-2computeinternal-debug-v5wbr ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  error: non-zero exit code from debug container  Mar 26 11:51:17.115: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  Starting pod/ip-10-0-59-220us-east-2computeinternal-debug-v5wbr ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  
error: non-zero exit code from debug container  Mar 26 11:51:17.115: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-84-67.us-east-2.compute.internal  I0326 11:51:20.186200 203228 client.go:743] showInfo is false  I0326 11:51:27.977085 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-84-67us-east-2computeinternal-debug-blljr ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  error: non-zero exit code from debug container  Mar 26 11:51:29.215: INFO: Error happened: exit status 1.  Retrying command. Num retries: 1  I0326 11:51:32.243144 203228 client.go:743] showInfo is false  I0326 11:51:35.148365 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-84-67us-east-2computeinternal-debug-tfcdg ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  
error: non-zero exit code from debug container  Mar 26 11:51:36.380: INFO: Error happened: exit status 1.  Retrying command. Num retries: 2  I0326 11:51:39.415164 203228 client.go:743] showInfo is false  I0326 11:51:43.048840 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:  StdOut>  no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  StdErr>  Starting pod/ip-10-0-84-67us-east-2computeinternal-debug-mcxwk ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  error: non-zero exit code from debug container  Mar 26 11:51:44.235: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  Starting pod/ip-10-0-84-67us-east-2computeinternal-debug-mcxwk ...  To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.  Removing debug pod ...  error: non-zero exit code from debug container  Mar 26 11:51:44.235: INFO: OK!    
STEP: Create first PinnedImageSet with alpine image @ 03/26/26 11:51:44.235  Mar 26 11:51:44.235: INFO: Creating PinnedImageSet tc-88378-pis-one in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]  Mar 26 11:51:44.235: INFO: mco fixture dir is not initialized, start to create  Mar 26 11:51:44.236: INFO: mco fixture dir is initialized: /tmp/fixture-testdata-dir3541755939  I0326 11:51:47.237561 203228 client.go:743] showInfo is true  I0326 11:51:47.237605 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-one POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'  I0326 11:51:48.143037 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-5xz4zfuaconfig.json.stdout  I0326 11:51:48.143209 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-5xz4zfuaconfig.json.stdout'  pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-one created  Mar 26 11:51:49.070: INFO: OK!    STEP: Wait for the first PinnedImageSet to be applied @ 03/26/26 11:51:49.07  Mar 26 11:51:49.070: INFO: Waiting 10m0s for MCP worker to complete pinned images.  
I0326 11:52:49.072124 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 26 11:52:50.027: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0326 11:52:50.027458 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0326 11:53:23.134632 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:53:24.046574 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:53:24.959525 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:53:25.896960 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:53:26.790634 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:53:27.730439 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 26 11:53:28.637: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:53:28.637: INFO: OK!

STEP: Verify the image is pinned on all nodes after creating the first PinnedImageSet @ 03/26/26 11:53:28.637
I0326 11:53:29.863509 203228 client.go:743] showInfo is true
I0326 11:53:29.863551 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:53:34.495770 203228 client.go:743] showInfo is true
I0326 11:53:34.495800 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:53:40.808008 203228 client.go:743] showInfo is true
I0326 11:53:40.808046 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 26 11:53:44.337: INFO: OK!
STEP: Create second PinnedImageSet with the same alpine image @ 03/26/26 11:53:44.337
Mar 26 11:53:44.337: INFO: Creating PinnedImageSet tc-88378-pis-two in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
I0326 11:53:47.339885 203228 client.go:743] showInfo is true
I0326 11:53:47.339951 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-two POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0326 11:53:48.255233 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-3a7um0l5config.json.stdout
I0326 11:53:48.255390 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-3a7um0l5config.json.stdout'
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-two created
Mar 26 11:53:49.188: INFO: OK!

STEP: Wait for the second PinnedImageSet to be applied @ 03/26/26 11:53:49.188
Mar 26 11:53:49.188: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0326 11:54:49.248189 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 26 11:54:51.217: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0326 11:54:51.217940 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0326 11:55:27.555737 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:55:28.516521 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:55:29.478977 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:55:30.488544 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:55:31.472091 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:55:32.428445 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 26 11:55:33.399: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:55:33.399: INFO: OK!

STEP: Verify the image is still pinned on all nodes after creating the second PinnedImageSet @ 03/26/26 11:55:33.399
I0326 11:55:34.653659 203228 client.go:743] showInfo is true
I0326 11:55:34.653699 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:55:39.333503 203228 client.go:743] showInfo is true
I0326 11:55:39.333556 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:55:45.373991 203228 client.go:743] showInfo is true
I0326 11:55:45.374022 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 26 11:55:48.669: INFO: OK!
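The wait loops above poll the worker pool's `poolSynchronizersStatus` entry for the `PinnedImageSets` synchronizer until every machine is updated. The completion predicate amounts to roughly the following (a sketch only; the real helper lives in the test framework and may check more conditions):

```python
import json

def pinned_sync_complete(status_json: str) -> bool:
    """True when the PinnedImageSets synchronizer reports all machines ready and updated."""
    s = json.loads(status_json)
    return (
        s.get("poolSynchronizerType") == "PinnedImageSets"
        and s.get("unavailableMachineCount") == 0
        and s.get("machineCount") == s.get("updatedMachineCount") == s.get("readyMachineCount")
    )

# Status exactly as printed in the log above.
status = '{"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}'
print(pinned_sync_complete(status))  # True
```

After this predicate holds, the test additionally confirms that each MachineConfigNode reports `PinnedImageSetsDegraded=False` and `PinnedImageSetsProgressing=False`, as the jsonpath queries above show.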
STEP: Verify all MachineConfigNodes report healthy pinned image conditions @ 03/26/26 11:55:48.669
I0326 11:55:48.669754 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:49.618856 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:55:50.590: INFO: Value: False
I0326 11:55:50.591176 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:51.525784 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:55:52.481: INFO: Value: False
I0326 11:55:52.481806 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:53.813520 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:55:55.844: INFO: Value: False
I0326 11:55:55.844355 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:56.785582 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:55:58.845: INFO: Value: False
I0326 11:55:58.845763 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:59.767231 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:56:00.733: INFO: Value: False
I0326 11:56:00.734220 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'
I0326 11:56:01.675227 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:56:02.576: INFO: Value: False
Mar 26 11:56:02.576: INFO: OK!

STEP: Delete the first PinnedImageSet @ 03/26/26 11:56:02.576
I0326 11:56:02.576800 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-one'
Mar 26 11:56:03.836: INFO: OK!

STEP: Wait for the pool to reconcile after deleting the first PinnedImageSet @ 03/26/26 11:56:03.836
Mar 26 11:56:03.836: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0326 11:57:03.842194 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 26 11:57:04.783: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0326 11:57:04.783540 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0326 11:57:45.310542 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:57:46.619661 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:57:48.662868 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:57:49.600070 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:57:50.509057 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:57:51.443481 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 26 11:57:52.357: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:57:52.357: INFO: OK!

STEP: Verify the first PinnedImageSet is deleted and the second still exists @ 03/26/26 11:57:52.357
I0326 11:57:52.358107 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0326 11:57:53.291378 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
I0326 11:57:53.291551 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
Mar 26 11:57:54.240: INFO: OK!
STEP: Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet @ 03/26/26 11:57:54.24
I0326 11:57:55.465697 203228 client.go:743] showInfo is true
I0326 11:57:55.465729 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:58:00.182203 203228 client.go:743] showInfo is true
I0326 11:58:00.182238 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:58:06.173394 203228 client.go:743] showInfo is true
I0326 11:58:06.173446 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 26 11:58:10.711: INFO: OK!
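Each per-node check above boils down to parsing what `crictl images --pinned -o json` prints and confirming the digest is reported with `pinned: true`. A minimal sketch of that parse (the JSON shape and field names below are assumptions about crictl's output, not taken from the test code):

```python
import json

def image_is_pinned(crictl_output: str, digest: str) -> bool:
    """Return True if any reported image matches the digest and is marked pinned."""
    data = json.loads(crictl_output)
    for image in data.get("images", []):
        if any(digest in ref for ref in image.get("repoDigests", [])):
            return bool(image.get("pinned"))
    return False

# Hypothetical payload mirroring only the fields the check relies on.
sample = json.dumps({"images": [{
    "id": "sha256:aaa",
    "repoDigests": ["quay.io/openshifttest/alpine@sha256:dc1536cb"],
    "pinned": True,
}]})
print(image_is_pinned(sample, "sha256:dc1536cb"))  # True
```

If the first PinnedImageSet's deletion had wrongly unpinned the shared image, this check would fail on every node even though `pis-two` still references the digest.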
STEP: Verify MachineConfigNodes remain healthy after the deletion @ 03/26/26 11:58:10.711
I0326 11:58:10.711959 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:11.619981 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:58:12.548: INFO: Value: False
I0326 11:58:12.548812 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:13.779107 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:58:14.718: INFO: Value: False
I0326 11:58:14.718623 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:16.751807 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:58:17.664: INFO: Value: False
I0326 11:58:17.664250 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:18.604988 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:58:19.526: INFO: Value: False
I0326 11:58:19.526682 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:20.439088 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:58:21.372: INFO: Value: False
I0326 11:58:21.372631 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:22.309187 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:58:23.241: INFO: Value: False
Mar 26 11:58:23.241: INFO: OK!
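The next step drives a negative check: `spec.pinnedImages` is validated server-side as a list keyed by image name, so a duplicate entry is rejected at admission, before any controller or node agent runs. A manifest of the rejected shape would look roughly like this (illustrative only; the test actually renders it from `generic-pinned-image-set.yaml`, the apiVersion may be `v1alpha1` on older clusters, and pool targeting via the role label is an assumption about the template):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: PinnedImageSet
metadata:
  name: tc-88378-pis-duplicate
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  pinnedImages:
  - name: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
  - name: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083  # duplicate key -> rejected
```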
STEP: Verify that a PinnedImageSet with duplicate images is rejected by the API @ 03/26/26 11:58:23.241
Mar 26 11:58:23.241: INFO: Creating PinnedImageSet tc-88378-pis-duplicate in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
I0326 11:58:26.243497 203228 client.go:743] showInfo is true
I0326 11:58:26.243530 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-duplicate POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"},{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0326 11:58:28.233311 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout
I0326 11:58:28.233429 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout'
I0326 11:58:29.149912 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout:
The PinnedImageSet "tc-88378-pis-duplicate" is invalid: spec.pinnedImages[1]: Duplicate value: map[string]interface {}{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}
The PinnedImageSet "tc-88378-pis-duplicate" is invalid: spec.pinnedImages[1]: Duplicate value: map[string]interface {}{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}
I0326 11:58:29.149951 203228 template.go:83] fail to create/apply resource exit status 1
Mar 26 11:58:29.149: INFO: OK!

STEP: Verify the duplicate PinnedImageSet was not created @ 03/26/26 11:58:29.15
I0326 11:58:29.150144 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-duplicate -o jsonpath={.}'
I0326 11:58:30.066101 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-duplicate -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-duplicate" not found
Mar 26 11:58:30.066: INFO: OK!
I0326 11:58:30.066492 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
I0326 11:58:36.208535 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-two'
Mar 26 11:58:37.697: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0326 11:59:37.698831 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 26 11:59:38.648: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0326 11:59:38.648722 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0326 12:00:08.748588 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 12:00:10.058082 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 12:00:10.953806 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 12:00:11.862759 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 12:00:12.772484 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 12:00:13.706289 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 26 12:00:14.639: INFO: Pool worker successfully pinned the images! Complete!
I0326 12:00:14.640103 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0326 12:00:15.584232 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
Mar 26 12:00:15.584: INFO:  does not exist! No need to delete it!
I0326 12:00:16.456736 203228 client.go:421] Deleted {user.openshift.io/v1, Resource=users  e2e-test-mco-pinnedimages-gzhpl-user}, err:
I0326 12:00:16.750972 203228 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-mco-pinnedimages-gzhpl}, err:
I0326 12:00:17.046090 203228 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~2YmQuchYNKaBqqEHouv8Sc_hbqH3KupLrvZmE0A6G5A}, err:

STEP: Destroying namespace "e2e-test-mco-pinnedimages-gzhpl" for this suite. @ 03/26/26 12:00:17.046
• [650.161 seconds]
------------------------------
Ran 1 of 1 Specs in 650.161 seconds
SUCCESS!
-- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
  {
    "name": "[sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet",
    "lifecycle": "blocking",
    "duration": 650160,
    "startTime": "2026-03-26 06:19:27.178961 UTC",
    "endTime": "2026-03-26 06:30:17.339682 UTC",
    "result": "passed",
    "output": "  STEP: Creating a kubernetes client @ 03/26/26 11:49:27.179\nI0326 11:49:30.080762 203228 client.go:164] configPath is now \"/tmp/configfile2140474964\"\nI0326 11:49:30.080782 203228 client.go:291] The user is now \"e2e-test-mco-pinnedimages-gzhpl-user\"\nI0326 11:49:30.080786 203228 client.go:293] Creating project \"e2e-test-mco-pinnedimages-gzhpl\"\nI0326 11:49:30.438487 203228 client.go:302] Waiting on permissions in project \"e2e-test-mco-pinnedimages-gzhpl\" ...\nI0326 11:49:31.877762 203228 client.go:363] Waiting for ServiceAccount \"default\" to be provisioned...\nI0326 11:49:32.262192 203228 client.go:363] Waiting for ServiceAccount \"builder\" to be provisioned...\nI0326 11:49:32.644816 203228 client.go:363] Waiting for ServiceAccount \"deployer\" to be provisioned...\nI0326 11:49:33.028098 203228 client.go:373] Waiting for RoleBinding \"system:image-builders\" to be provisioned...\nI0326 11:49:33.873137 203228 client.go:373] Waiting for RoleBinding \"system:deployers\" to be provisioned...\nI0326 11:49:34.723304 203228 client.go:373] Waiting for RoleBinding \"system:image-pullers\" to be provisioned...\nI0326 11:49:35.577062 203228 client.go:404] Project \"e2e-test-mco-pinnedimages-gzhpl\" has been fully provisioned.\nI0326 11:49:35.577518 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp'\nNAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   
AGE\nmaster   rendered-master-44e887cd7048827fba0e76164ff43694   True      False      False      3              3                   3                     0                      176m\nworker   rendered-worker-ec9a17283884ddee0a2875a578e37a2e   True      False      False      3              3                   3                     0                      176m\nI0326 11:49:36.530468 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0326 11:49:37.466423 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nMar 26 11:49:38.665: INFO: \u003cKind: mcp, Name: worker, Namespace: \u003e \u003cKind: mcp, Name: master, Namespace: \u003e \u003cKind: mcp, Name: worker, Namespace: \u003e\n  STEP: MCO Preconditions Checks @ 03/26/26 11:49:38.665\nMar 26 11:49:39.885: INFO: Check that master pool is ready for testing\nI0326 11:49:39.885517 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.machineCount}'\nMar 26 11:49:40.820: INFO: Num nodes: 3, wait time per node 13 minutes\nMar 26 11:49:40.820: INFO: Increase waiting time because it is master pool\nMar 26 11:49:40.820: INFO: Waiting 3m54s for MCP master to be completed.\nI0326 11:49:40.820435 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type==\"Degraded\")].status}'\nI0326 11:49:41.761157 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type==\"Updated\")].status}'\nMar 26 11:49:42.702: INFO: MCP 'master' is ready for testing\nMar 26 11:49:42.702: INFO: Check that worker pool is ready for testing\nI0326 11:49:42.702477 203228 client.go:718] Running 'oc 
--kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
Mar 26 11:49:43.619: INFO: Num nodes: 3, wait time per node 13 minutes
Mar 26 11:49:43.619: INFO: Waiting 3m0s for MCP worker to be completed.
Mar 26 11:49:45.496: INFO: MCP 'worker' is ready for testing
Mar 26 11:49:45.496: INFO: Wait for MCC to get the leader lease
Mar 26 11:49:49.197: INFO: End of MCO Preconditions

  STEP: Remove the test image from all nodes in the pool @ 03/26/26 11:50:17.66
Mar 26 11:50:17.660: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-3-16.us-east-2.compute.internal
I0326 11:50:28.832241 203228 client.go:763] Error running oc debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
[same "no such image" result on retries 1 and 2 — the image was not present on the node, so removal is treated as done]
Mar 26 11:50:48.471: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-59-220.us-east-2.compute.internal
[same retry sequence and "no such image" result]
Mar 26 11:51:17.115: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-84-67.us-east-2.compute.internal
[same retry sequence and "no such image" result]
Mar 26 11:51:44.235: INFO: OK!

  STEP: Create first PinnedImageSet with alpine image @ 03/26/26 11:51:44.235
Mar 26 11:51:44.235: INFO: Creating PinnedImageSet tc-88378-pis-one in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
I0326 11:51:47.237605 203228 client.go:745] Running 'oc process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-one POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-one created
Mar 26 11:51:49.070: INFO: OK!

  STEP: Wait for the first PinnedImageSet to be applied @ 03/26/26 11:51:49.07
Mar 26 11:51:49.070: INFO: Waiting 10m0s for MCP worker to complete pinned images.
Mar 26 11:52:50.027: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
[PinnedImageSetsDegraded and PinnedImageSetsProgressing conditions checked on all three MachineConfigNodes]
Mar 26 11:53:28.637: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:53:28.637: INFO: OK!

  STEP: Verify the image is pinned on all nodes after creating the first PinnedImageSet @ 03/26/26 11:53:28.637
I0326 11:53:29.863551 203228 client.go:745] Running 'oc debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
[repeated for ip-10-0-59-220 and ip-10-0-84-67]
Mar 26 11:53:44.337: INFO: OK!

  STEP: Create second PinnedImageSet with the same alpine image @ 03/26/26 11:53:44.337
Mar 26 11:53:44.337: INFO: Creating PinnedImageSet tc-88378-pis-two in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-two created
Mar 26 11:53:49.188: INFO: OK!

  STEP: Wait for the second PinnedImageSet to be applied @ 03/26/26 11:53:49.188
Mar 26 11:53:49.188: INFO: Waiting 10m0s for MCP worker to complete pinned images.
Mar 26 11:54:51.217: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
[PinnedImageSetsDegraded and PinnedImageSetsProgressing conditions checked on all three MachineConfigNodes]
Mar 26 11:55:33.399: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:55:33.399: INFO: OK!

  STEP: Verify the image is still pinned on all nodes after creating the second PinnedImageSet @ 03/26/26 11:55:33.399
[crictl images --pinned check repeated on all three nodes]
Mar 26 11:55:48.669: INFO: OK!

  STEP: Verify all MachineConfigNodes report healthy pinned image conditions @ 03/26/26 11:55:48.669
[PinnedImageSetsDegraded=False and PinnedImageSetsProgressing=False on all three MachineConfigNodes]
Mar 26 11:56:02.576: INFO: OK!

  STEP: Delete the first PinnedImageSet @ 03/26/26 11:56:02.576
I0326 11:56:02.576800 203228 client.go:718] Running 'oc delete pinnedimageset tc-88378-pis-one'
Mar 26 11:56:03.836: INFO: OK!

  STEP: Wait for the pool to reconcile after deleting the first PinnedImageSet @ 03/26/26 11:56:03.836
Mar 26 11:56:03.836: INFO: Waiting 10m0s for MCP worker to complete pinned images.
Mar 26 11:57:04.783: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
Mar 26 11:57:52.357: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:57:52.357: INFO: OK!

  STEP: Verify the first PinnedImageSet is deleted and the second still exists @ 03/26/26 11:57:52.357
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
[oc get pinnedimageset tc-88378-pis-two succeeds]
Mar 26 11:57:54.240: INFO: OK!

  STEP: Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet @ 03/26/26 11:57:54.24
[crictl images --pinned check for quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 repeated on all three nodes]
Mar 26 11:58:10.711: INFO: OK!

  STEP: Verify MachineConfigNodes remain healthy after the deletion @ 03/26/26 11:58:10.711
[PinnedImageSetsDegraded=False and PinnedImageSetsProgressing=False on all three MachineConfigNodes]
Mar 26 11:58:23.241: INFO: OK!

  STEP: Verify that a PinnedImageSet with duplicate images is rejected by the API @ 03/26/26 11:58:23.241
Mar 26 11:58:23.241: INFO: Creating PinnedImageSet tc-88378-pis-duplicate in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
The PinnedImageSet "tc-88378-pis-duplicate" is invalid: spec.pinnedImages[1]: Duplicate value: map[string]interface {}{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}
Mar 26 11:58:29.149: INFO: OK!

  STEP: Verify the duplicate PinnedImageSet was not created @ 03/26/26 11:58:29.15
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-duplicate" not found
Mar 26 11:58:30.066: INFO: OK!

[cleanup: 'oc delete pinnedimageset tc-88378-pis-two'; pool reconciles; Pinned status and MachineConfigNode conditions healthy]
Mar 26 12:00:14.639: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 12:00:15.584: INFO: <Kind: pinnedimageset, Name: tc-88378-pis-one, Namespace: > does not exist! No need to delete it!
  STEP: Destroying namespace "e2e-test-mco-pinnedimages-gzhpl" for this suite. @ 03/26/26 12:00:17.046
last_exit_code: 0
---
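For context on the logs that follow: the test instantiates `generic-pinned-image-set.yaml` with `NAME`, `POOL`, and `IMAGES` parameters. A minimal sketch of the kind of manifest such a template would render is shown below; the label-based pool selection is an assumption about the template's output (taken from the public PinnedImageSet API shape), not something visible in these logs.

```yaml
# Hypothetical rendering for NAME=tc-88378-pis-one POOL=worker.
apiVersion: machineconfiguration.openshift.io/v1
kind: PinnedImageSet
metadata:
  name: tc-88378-pis-one
  labels:
    # Assumed: the worker MCP selects this PinnedImageSet via its role label.
    machineconfiguration.openshift.io/role: worker
spec:
  pinnedImages:
    - name: quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
```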
  worker   rendered-worker-ec9a17283884ddee0a2875a578e37a2e   True      False      False      3              3                   3                     0                      176m
  I0326 11:49:36.530468 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
  I0326 11:49:37.466423 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
  Mar 26 11:49:38.665: INFO:   
    STEP: MCO Preconditions Checks @ 03/26/26 11:49:38.665
  Mar 26 11:49:39.885: INFO: Check that master pool is ready for testing
  I0326 11:49:39.885517 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.machineCount}'
  Mar 26 11:49:40.820: INFO: Num nodes: 3, wait time per node 13 minutes
  Mar 26 11:49:40.820: INFO: Increase waiting time because it is master pool
  Mar 26 11:49:40.820: INFO: Waiting 3m54s for MCP master to be completed.
  I0326 11:49:40.820435 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'
  I0326 11:49:41.761157 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'
  Mar 26 11:49:42.702: INFO: MCP 'master' is ready for testing
  Mar 26 11:49:42.702: INFO: Check that worker pool is ready for testing
  I0326 11:49:42.702477 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
  Mar 26 11:49:43.619: INFO: Num nodes: 3, wait time per node 13 minutes
  Mar 26 11:49:43.619: INFO: Waiting 3m0s for MCP worker to be completed.
  I0326 11:49:43.619388 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'
  I0326 11:49:44.556156 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'
  Mar 26 11:49:45.496: INFO: MCP 'worker' is ready for testing
  Mar 26 11:49:45.496: INFO: Wait for MCC to get the leader lease
  I0326 11:49:45.496989 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pod -n openshift-machine-config-operator -l k8s-app=machine-config-controller -o jsonpath={.items[0].metadata.name}'
  I0326 11:49:47.096604 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig logs -n openshift-machine-config-operator -c machine-config-controller machine-config-controller-6d9c86bdfb-vjqnq'
  Mar 26 11:49:49.197: INFO: End of MCO Preconditions
STEP: Remove the test image from all nodes in the pool @ 03/26/26 11:50:17.66

Mar 26 11:50:17.660: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-3-16.us-east-2.compute.internal
I0326 11:50:20.756208 203228 client.go:743] showInfo is false
I0326 11:50:28.832241 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-3-16us-east-2computeinternal-debug-gqvvt ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:50:30.060: INFO: Error happened: exit status 1.
Retrying command. Num retries: 1
I0326 11:50:33.420970 203228 client.go:743] showInfo is false
I0326 11:50:36.852202 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-3-16us-east-2computeinternal-debug-nwjcz ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:50:38.047: INFO: Error happened: exit status 1.
Retrying command. Num retries: 2
I0326 11:50:43.123059 203228 client.go:743] showInfo is false
I0326 11:50:47.247423 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-3-16us-east-2computeinternal-debug-jvchz ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:50:48.471: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-3-16us-east-2computeinternal-debug-jvchz ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container
Mar 26 11:50:48.471: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-59-220.us-east-2.compute.internal
I0326 11:50:51.572904 203228 client.go:743] showInfo is false
I0326 11:50:59.151955 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-59-220us-east-2computeinternal-debug-kvpsk ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:51:00.378: INFO: Error happened: exit status 1.
Retrying command. Num retries: 1
I0326 11:51:03.465824 203228 client.go:743] showInfo is false
I0326 11:51:07.199790 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-59-220us-east-2computeinternal-debug-4qjw7 ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:51:08.398: INFO: Error happened: exit status 1.
Retrying command. Num retries: 2
I0326 11:51:11.426383 203228 client.go:743] showInfo is false
I0326 11:51:15.924570 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-59-220us-east-2computeinternal-debug-v5wbr ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:51:17.115: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-59-220us-east-2computeinternal-debug-v5wbr ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container
Mar 26 11:51:17.115: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-84-67.us-east-2.compute.internal
I0326 11:51:20.186200 203228 client.go:743] showInfo is false
I0326 11:51:27.977085 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-84-67us-east-2computeinternal-debug-blljr ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:51:29.215: INFO: Error happened: exit status 1.
Retrying command. Num retries: 1
I0326 11:51:32.243144 203228 client.go:743] showInfo is false
I0326 11:51:35.148365 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-84-67us-east-2computeinternal-debug-tfcdg ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:51:36.380: INFO: Error happened: exit status 1.
Retrying command. Num retries: 2
I0326 11:51:39.415164 203228 client.go:743] showInfo is false
I0326 11:51:43.048840 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:
StdOut>
no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
StdErr>
Starting pod/ip-10-0-84-67us-east-2computeinternal-debug-mcxwk ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container

Mar 26 11:51:44.235: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083
Starting pod/ip-10-0-84-67us-east-2computeinternal-debug-mcxwk ...
To use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.

Removing debug pod ...
error: non-zero exit code from debug container
Mar 26 11:51:44.235: INFO: OK!

STEP: Create first PinnedImageSet with alpine image @ 03/26/26 11:51:44.235

Mar 26 11:51:44.235: INFO: Creating PinnedImageSet tc-88378-pis-one in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
Mar 26 11:51:44.235: INFO: mco fixture dir is not initialized, start to create
Mar 26 11:51:44.236: INFO: mco fixture dir is initialized: /tmp/fixture-testdata-dir3541755939
I0326 11:51:47.237561 203228 client.go:743] showInfo is true
I0326 11:51:47.237605 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-one POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0326 11:51:48.143037 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-5xz4zfuaconfig.json.stdout
I0326 11:51:48.143209 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-5xz4zfuaconfig.json.stdout'
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-one created
Mar 26 11:51:49.070: INFO: OK!

STEP: Wait for the first PinnedImageSet to be applied @ 03/26/26 11:51:49.07

Mar 26 11:51:49.070: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0326 11:52:49.072124 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 26 11:52:50.027: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0326 11:52:50.027458 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0326 11:53:23.134632 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:53:24.046574 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:53:24.959525 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:53:25.896960 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:53:26.790634 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:53:27.730439 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 26 11:53:28.637: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:53:28.637: INFO: OK!

STEP: Verify the image is pinned on all nodes after creating the first PinnedImageSet @ 03/26/26 11:53:28.637

I0326 11:53:29.863509 203228 client.go:743] showInfo is true
I0326 11:53:29.863551 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:53:34.495770 203228 client.go:743] showInfo is true
I0326 11:53:34.495800 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:53:40.808008 203228 client.go:743] showInfo is true
I0326 11:53:40.808046 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 26 11:53:44.337: INFO: OK!

STEP: Create second PinnedImageSet with the same alpine image @ 03/26/26 11:53:44.337

Mar 26 11:53:44.337: INFO: Creating PinnedImageSet tc-88378-pis-two in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
I0326 11:53:47.339885 203228 client.go:743] showInfo is true
I0326 11:53:47.339951 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-two POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0326 11:53:48.255233 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-3a7um0l5config.json.stdout
I0326 11:53:48.255390 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-3a7um0l5config.json.stdout'
pinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-two created
Mar 26 11:53:49.188: INFO: OK!

STEP: Wait for the second PinnedImageSet to be applied @ 03/26/26 11:53:49.188

Mar 26 11:53:49.188: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0326 11:54:49.248189 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 26 11:54:51.217: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0326 11:54:51.217940 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0326 11:55:27.555737 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:55:28.516521 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:55:29.478977 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:55:30.488544 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:55:31.472091 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:55:32.428445 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 26 11:55:33.399: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:55:33.399: INFO: OK!

STEP: Verify the image is still pinned on all nodes after creating the second PinnedImageSet @ 03/26/26 11:55:33.399

I0326 11:55:34.653659 203228 client.go:743] showInfo is true
I0326 11:55:34.653699 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:55:39.333503 203228 client.go:743] showInfo is true
I0326 11:55:39.333556 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:55:45.373991 203228 client.go:743] showInfo is true
I0326 11:55:45.374022 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 26 11:55:48.669: INFO: OK!

STEP: Verify all MachineConfigNodes report healthy pinned image conditions @ 03/26/26 11:55:48.669

I0326 11:55:48.669754 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:49.618856 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:55:50.590: INFO: Value: False
I0326 11:55:50.591176 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:51.525784 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:55:52.481: INFO: Value: False
I0326 11:55:52.481806 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:53.813520 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:55:55.844: INFO: Value: False
I0326 11:55:55.844355 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:56.785582 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:55:58.845: INFO: Value: False
I0326 11:55:58.845763 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'
I0326 11:55:59.767231 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:56:00.733: INFO: Value: False
I0326 11:56:00.734220 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'
I0326 11:56:01.675227 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:56:02.576: INFO: Value: False
Mar 26 11:56:02.576: INFO: OK!

STEP: Delete the first PinnedImageSet @ 03/26/26 11:56:02.576

I0326 11:56:02.576800 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-one'
Mar 26 11:56:03.836: INFO: OK!

STEP: Wait for the pool to reconcile after deleting the first PinnedImageSet @ 03/26/26 11:56:03.836

Mar 26 11:56:03.836: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0326 11:57:03.842194 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 26 11:57:04.783: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
I0326 11:57:04.783540 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0326 11:57:45.310542 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:57:46.619661 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:57:48.662868 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:57:49.600070 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 11:57:50.509057 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 11:57:51.443481 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 26 11:57:52.357: INFO: Pool worker successfully pinned the images! Complete!
Mar 26 11:57:52.357: INFO: OK!

STEP: Verify the first PinnedImageSet is deleted and the second still exists @ 03/26/26 11:57:52.357

I0326 11:57:52.358107 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0326 11:57:53.291378 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
I0326 11:57:53.291551 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
Mar 26 11:57:54.240: INFO: OK!

STEP: Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet @ 03/26/26 11:57:54.24

I0326 11:57:55.465697 203228 client.go:743] showInfo is true
I0326 11:57:55.465729 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:58:00.182203 203228 client.go:743] showInfo is true
I0326 11:58:00.182238 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
I0326 11:58:06.173394 203228 client.go:743] showInfo is true
I0326 11:58:06.173446 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'
Mar 26 11:58:10.711: INFO: OK!
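The `crictl images --pinned -o json` checks above return JSON that the test inspects per node. A minimal sketch of how such a check can be evaluated; the field layout (`images` → `repoDigests` / `pinned`) is an assumption based on CRI image output, not taken from this log, and the digest is truncated for illustration:

```python
import json

# Illustrative sample of crictl-style JSON output (field names assumed).
SAMPLE = json.dumps({
    "images": [{
        "repoDigests": ["quay.io/openshifttest/alpine@sha256:dc1536cb..."],
        "pinned": True,
    }]
})

def image_is_pinned(crictl_json, digest_prefix):
    """Return True if any image matching digest_prefix is reported as pinned."""
    data = json.loads(crictl_json)
    return any(
        img.get("pinned")
        and any(d.startswith(digest_prefix) for d in img.get("repoDigests", []))
        for img in data.get("images", [])
    )

print(image_is_pinned(SAMPLE, "quay.io/openshifttest/alpine@sha256:dc1536cb"))  # True
```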

STEP: Verify MachineConfigNodes remain healthy after the deletion @ 03/26/26 11:58:10.711

I0326 11:58:10.711959 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:11.619981 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:58:12.548: INFO: Value: False
I0326 11:58:12.548812 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:13.779107 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:58:14.718: INFO: Value: False
I0326 11:58:14.718623 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:16.751807 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:58:17.664: INFO: Value: False
I0326 11:58:17.664250 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:18.604988 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:58:19.526: INFO: Value: False
I0326 11:58:19.526682 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:20.439088 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'
Mar 26 11:58:21.372: INFO: Value: False
I0326 11:58:21.372631 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'
I0326 11:58:22.309187 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'
Mar 26 11:58:23.241: INFO: Value: False
Mar 26 11:58:23.241: INFO: OK!
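The health check performed above reduces to: a MachineConfigNode is healthy for pinned images when both `PinnedImageSetsDegraded` and `PinnedImageSetsProgressing` report status `"False"`. A sketch of that predicate, with condition shapes taken from the jsonpath queries in the log and sample data that is purely illustrative:

```python
def pinned_images_healthy(conditions):
    """True when the node is neither degraded nor progressing for pinned images."""
    status = {c["type"]: c["status"] for c in conditions}
    return (status.get("PinnedImageSetsDegraded") == "False"
            and status.get("PinnedImageSetsProgressing") == "False")

sample = [
    {"type": "PinnedImageSetsDegraded", "status": "False"},
    {"type": "PinnedImageSetsProgressing", "status": "False"},
]
print(pinned_images_healthy(sample))  # True
```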

STEP: Verify that a PinnedImageSet with duplicate images is rejected by the API @ 03/26/26 11:58:23.241

Mar 26 11:58:23.241: INFO: Creating PinnedImageSet tc-88378-pis-duplicate in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]
I0326 11:58:26.243497 203228 client.go:743] showInfo is true
I0326 11:58:26.243530 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-duplicate POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"},{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'
I0326 11:58:28.233311 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout
I0326 11:58:28.233429 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout'
I0326 11:58:29.149912 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout:
The PinnedImageSet "tc-88378-pis-duplicate" is invalid: spec.pinnedImages[1]: Duplicate value: map[string]interface {}{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}
I0326 11:58:29.149951 203228 template.go:83] fail to create/apply resource exit status 1
Mar 26 11:58:29.149: INFO: OK!

STEP: Verify the duplicate PinnedImageSet was not created @ 03/26/26 11:58:29.15

I0326 11:58:29.150144 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-duplicate -o jsonpath={.}'
I0326 11:58:30.066101 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-duplicate -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-duplicate" not found
Mar 26 11:58:30.066: INFO: OK!
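The rejection above happens because `spec.pinnedImages` is validated as a list-map keyed on `name`, so two entries with the same name are invalid. The validation itself is server-side; this is only an illustrative sketch of the uniqueness rule:

```python
def find_duplicate(pinned_images):
    """Return the index of the first duplicate 'name' entry, or None."""
    seen = set()
    for i, entry in enumerate(pinned_images):
        if entry["name"] in seen:
            return i
        seen.add(entry["name"])
    return None

digest = "quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"
# Index 1 matches the spec.pinnedImages[1] position reported in the API error.
print(find_duplicate([{"name": digest}, {"name": digest}]))  # 1
```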

I0326 11:58:30.066492 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'
I0326 11:58:36.208535 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-two'
Mar 26 11:58:37.697: INFO: Waiting 10m0s for MCP worker to complete pinned images.
I0326 11:59:37.698831 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'
Mar 26 11:59:38.648: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}
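A sketch of the completion criterion implied by the wait loop: the `PinnedImageSets` pool synchronizer is done when every machine is updated and none are unavailable. Field names come from the status printed above; whether the real wait logic checks exactly these fields is an assumption:

```python
def pool_pinning_complete(status):
    """True when all machines in the pool report the pinned-image sync as done."""
    return (status["machineCount"] == status["updatedMachineCount"]
            and status["unavailableMachineCount"] == 0)

# Status values copied from the log line above.
status = {"availableMachineCount": 3, "machineCount": 3,
          "poolSynchronizerType": "PinnedImageSets", "readyMachineCount": 3,
          "unavailableMachineCount": 0, "updatedMachineCount": 3}
print(pool_pinning_complete(status))  # True
```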
I0326 11:59:38.648722 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'
I0326 12:00:08.748588 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 12:00:10.058082 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 12:00:10.953806 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 12:00:11.862759 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
I0326 12:00:12.772484 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'
I0326 12:00:13.706289 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'
Mar 26 12:00:14.639: INFO: Pool worker successfully pinned the images! Complete!
I0326 12:00:14.640103 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'
I0326 12:00:15.584232 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:
Error from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found
Mar 26 12:00:15.584: INFO: <Kind: pinnedimageset, Name: tc-88378-pis-one, Namespace: > does not exist! No need to delete it!
I0326 12:00:16.456736 203228 client.go:421] Deleted {user.openshift.io/v1, Resource=users e2e-test-mco-pinnedimages-gzhpl-user}, err:
I0326 12:00:16.750972 203228 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-mco-pinnedimages-gzhpl}, err:
I0326 12:00:17.046090 203228 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~2YmQuchYNKaBqqEHouv8Sc_hbqH3KupLrvZmE0A6G5A}, err:
STEP: Destroying namespace "e2e-test-mco-pinnedimages-gzhpl" for this suite. @ 03/26/26 12:00:17.046
• [650.161 seconds]

Ran 1 of 1 Specs in 650.161 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
{
"name": "[sig-mco][Suite:openshift/machine-config-operator/longduration][Serial][Disruptive] MCO Pinnedimages [PolarionID:88378][OTP] Deleting a PinnedImageSet does not affect images pinned by another PinnedImageSet",
"lifecycle": "blocking",
"duration": 650160,
"startTime": "2026-03-26 06:19:27.178961 UTC",
"endTime": "2026-03-26 06:30:17.339682 UTC",
"result": "passed",
"output": " STEP: Creating a kubernetes client @ 03/26/26 11:49:27.179\nI0326 11:49:30.080762 203228 client.go:164] configPath is now "/tmp/configfile2140474964"\nI0326 11:49:30.080782 203228 client.go:291] The user is now "e2e-test-mco-pinnedimages-gzhpl-user"\nI0326 11:49:30.080786 203228 client.go:293] Creating project "e2e-test-mco-pinnedimages-gzhpl"\nI0326 11:49:30.438487 203228 client.go:302] Waiting on permissions in project "e2e-test-mco-pinnedimages-gzhpl" ...\nI0326 11:49:31.877762 203228 client.go:363] Waiting for ServiceAccount "default" to be provisioned...\nI0326 11:49:32.262192 203228 client.go:363] Waiting for ServiceAccount "builder" to be provisioned...\nI0326 11:49:32.644816 203228 client.go:363] Waiting for ServiceAccount "deployer" to be provisioned...\nI0326 11:49:33.028098 203228 client.go:373] Waiting for RoleBinding "system:image-builders" to be provisioned...\nI0326 11:49:33.873137 203228 client.go:373] Waiting for RoleBinding "system:deployers" to be provisioned...\nI0326 11:49:34.723304 203228 client.go:373] Waiting for RoleBinding "system:image-pullers" to be provisioned...\nI0326 11:49:35.577062 203228 client.go:404] Project "e2e-test-mco-pinnedimages-gzhpl" has been fully provisioned.\nI0326 11:49:35.577518 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp'\nNAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE\nmaster rendered-master-44e887cd7048827fba0e76164ff43694 True False False 3 3 3 0 176m\nworker rendered-worker-ec9a17283884ddee0a2875a578e37a2e True False False 3 3 3 0 176m\nI0326 11:49:36.530468 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0326 11:49:37.466423 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nMar 26 11:49:38.665: INFO: 
\u003cKind: mcp, Name: worker, Namespace: \u003e \u003cKind: mcp, Name: master, Namespace: \u003e \u003cKind: mcp, Name: worker, Namespace: \u003e\n STEP: MCO Preconditions Checks @ 03/26/26 11:49:38.665\nMar 26 11:49:39.885: INFO: Check that master pool is ready for testing\nI0326 11:49:39.885517 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.machineCount}'\nMar 26 11:49:40.820: INFO: Num nodes: 3, wait time per node 13 minutes\nMar 26 11:49:40.820: INFO: Increase waiting time because it is master pool\nMar 26 11:49:40.820: INFO: Waiting 3m54s for MCP master to be completed.\nI0326 11:49:40.820435 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'\nI0326 11:49:41.761157 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp master -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'\nMar 26 11:49:42.702: INFO: MCP 'master' is ready for testing\nMar 26 11:49:42.702: INFO: Check that worker pool is ready for testing\nI0326 11:49:42.702477 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nMar 26 11:49:43.619: INFO: Num nodes: 3, wait time per node 13 minutes\nMar 26 11:49:43.619: INFO: Waiting 3m0s for MCP worker to be completed.\nI0326 11:49:43.619388 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Degraded")].status}'\nI0326 11:49:44.556156 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.conditions[?(@.type=="Updated")].status}'\nMar 26 11:49:45.496: INFO: MCP 'worker' is ready for testing\nMar 26 11:49:45.496: INFO: Wait for MCC to get the leader lease\nI0326 
11:49:45.496989 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pod -n openshift-machine-config-operator -l k8s-app=machine-config-controller -o jsonpath={.items[0].metadata.name}'\nI0326 11:49:47.096604 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig logs -n openshift-machine-config-operator -c machine-config-controller machine-config-controller-6d9c86bdfb-vjqnq'\nMar 26 11:49:49.197: INFO: End of MCO Preconditions\n\n STEP: Remove the test image from all nodes in the pool @ 03/26/26 11:50:17.66\nMar 26 11:50:17.660: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-3-16.us-east-2.compute.internal\nI0326 11:50:20.756208 203228 client.go:743] showInfo is false\nI0326 11:50:28.832241 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-3-16us-east-2computeinternal-debug-gqvvt ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:50:30.060: INFO: Error happened: exit status 1.\nRetrying command. 
Num retries: 1\nI0326 11:50:33.420970 203228 client.go:743] showInfo is false\nI0326 11:50:36.852202 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-3-16us-east-2computeinternal-debug-nwjcz ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:50:38.047: INFO: Error happened: exit status 1.\nRetrying command. Num retries: 2\nI0326 11:50:43.123059 203228 client.go:743] showInfo is false\nI0326 11:50:47.247423 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-3-16us-east-2computeinternal-debug-jvchz ...\nTo use host binaries, run chroot /host. 
Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:50:48.471: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-3-16us-east-2computeinternal-debug-jvchz ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\nMar 26 11:50:48.471: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-59-220.us-east-2.compute.internal\nI0326 11:50:51.572904 203228 client.go:743] showInfo is false\nI0326 11:50:59.151955 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-59-220us-east-2computeinternal-debug-kvpsk ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:51:00.378: INFO: Error happened: exit status 1.\nRetrying command. 
Num retries: 1\nI0326 11:51:03.465824 203228 client.go:743] showInfo is false\nI0326 11:51:07.199790 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-59-220us-east-2computeinternal-debug-4qjw7 ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:51:08.398: INFO: Error happened: exit status 1.\nRetrying command. Num retries: 2\nI0326 11:51:11.426383 203228 client.go:743] showInfo is false\nI0326 11:51:15.924570 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-59-220us-east-2computeinternal-debug-v5wbr ...\nTo use host binaries, run chroot /host. 
Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:51:17.115: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-59-220us-east-2computeinternal-debug-v5wbr ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\nMar 26 11:51:17.115: INFO: Removing image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 from node ip-10-0-84-67.us-east-2.compute.internal\nI0326 11:51:20.186200 203228 client.go:743] showInfo is false\nI0326 11:51:27.977085 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-84-67us-east-2computeinternal-debug-blljr ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:51:29.215: INFO: Error happened: exit status 1.\nRetrying command. 
Num retries: 1\nI0326 11:51:32.243144 203228 client.go:743] showInfo is false\nI0326 11:51:35.148365 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-84-67us-east-2computeinternal-debug-tfcdg ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:51:36.380: INFO: Error happened: exit status 1.\nRetrying command. Num retries: 2\nI0326 11:51:39.415164 203228 client.go:743] showInfo is false\nI0326 11:51:43.048840 203228 client.go:763] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal --to-namespace=e2e-test-mco-pinnedimages-gzhpl -- chroot /host crictl rmi quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083:\nStdOut\u003e\nno such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStdErr\u003e\nStarting pod/ip-10-0-84-67us-east-2computeinternal-debug-mcxwk ...\nTo use host binaries, run chroot /host. 
Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\n\nMar 26 11:51:44.235: INFO: no such image quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083\nStarting pod/ip-10-0-84-67us-east-2computeinternal-debug-mcxwk ...\nTo use host binaries, run chroot /host. Instead, if you need to access host namespaces, run nsenter -a -t 1.\n\nRemoving debug pod ...\nerror: non-zero exit code from debug container\nMar 26 11:51:44.235: INFO: OK!\n\n STEP: Create first PinnedImageSet with alpine image @ 03/26/26 11:51:44.235\nMar 26 11:51:44.235: INFO: Creating PinnedImageSet tc-88378-pis-one in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]\nMar 26 11:51:44.235: INFO: mco fixture dir is not initialized, start to create\nMar 26 11:51:44.236: INFO: mco fixture dir is initialized: /tmp/fixture-testdata-dir3541755939\nI0326 11:51:47.237561 203228 client.go:743] showInfo is true\nI0326 11:51:47.237605 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-one POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'\nI0326 11:51:48.143037 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-5xz4zfuaconfig.json.stdout\nI0326 11:51:48.143209 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-5xz4zfuaconfig.json.stdout'\npinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-one created\nMar 26 11:51:49.070: INFO: OK!\n\n STEP: Wait for the first PinnedImageSet to be applied 
@ 03/26/26 11:51:49.07\nMar 26 11:51:49.070: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0326 11:52:49.072124 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 26 11:52:50.027: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0326 11:52:50.027458 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0326 11:53:23.134632 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:53:24.046574 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0326 11:53:24.959525 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:53:25.896960 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0326 11:53:26.790634 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:53:27.730439 203228 
client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 26 11:53:28.637: INFO: Pool worker successfully pinned the images! Complete!\nMar 26 11:53:28.637: INFO: OK!\n\n STEP: Verify the image is pinned on all nodes after creating the first PinnedImageSet @ 03/26/26 11:53:28.637\nI0326 11:53:29.863509 203228 client.go:743] showInfo is true\nI0326 11:53:29.863551 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0326 11:53:34.495770 203228 client.go:743] showInfo is true\nI0326 11:53:34.495800 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0326 11:53:40.808008 203228 client.go:743] showInfo is true\nI0326 11:53:40.808046 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nMar 26 11:53:44.337: INFO: OK!\n\n STEP: Create second PinnedImageSet with the same alpine image @ 03/26/26 11:53:44.337\nMar 26 11:53:44.337: INFO: Creating PinnedImageSet tc-88378-pis-two in pool worker with images 
[quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]\nI0326 11:53:47.339885 203228 client.go:743] showInfo is true\nI0326 11:53:47.339951 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p NAME=tc-88378-pis-two POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'\nI0326 11:53:48.255233 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-3a7um0l5config.json.stdout\nI0326 11:53:48.255390 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-3a7um0l5config.json.stdout'\npinnedimageset.machineconfiguration.openshift.io/tc-88378-pis-two created\nMar 26 11:53:49.188: INFO: OK!\n\n STEP: Wait for the second PinnedImageSet to be applied @ 03/26/26 11:53:49.188\nMar 26 11:53:49.188: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0326 11:54:49.248189 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 26 11:54:51.217: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0326 11:54:51.217940 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0326 11:55:27.555737 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o 
jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:55:28.516521 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0326 11:55:29.478977 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:55:30.488544 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0326 11:55:31.472091 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:55:32.428445 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 26 11:55:33.399: INFO: Pool worker successfully pinned the images! 
Complete!\nMar 26 11:55:33.399: INFO: OK!\n\n STEP: Verify the image is still pinned on all nodes after creating the second PinnedImageSet @ 03/26/26 11:55:33.399\nI0326 11:55:34.653659 203228 client.go:743] showInfo is true\nI0326 11:55:34.653699 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0326 11:55:39.333503 203228 client.go:743] showInfo is true\nI0326 11:55:39.333556 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0326 11:55:45.373991 203228 client.go:743] showInfo is true\nI0326 11:55:45.374022 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nMar 26 11:55:48.669: INFO: OK!\n\n STEP: Verify all MachineConfigNodes report healthy pinned image conditions @ 03/26/26 11:55:48.669\nI0326 11:55:48.669754 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:55:49.618856 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 26 
11:55:50.590: INFO: Value: False\nI0326 11:55:50.591176 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:55:51.525784 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 26 11:55:52.481: INFO: Value: False\nI0326 11:55:52.481806 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:55:53.813520 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 26 11:55:55.844: INFO: Value: False\nI0326 11:55:55.844355 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:55:56.785582 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 26 11:55:58.845: INFO: Value: False\nI0326 11:55:58.845763 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:55:59.767231 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 26 11:56:00.733: INFO: Value: False\nI0326 11:56:00.734220 203228 
client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:56:01.675227 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 26 11:56:02.576: INFO: Value: False\nMar 26 11:56:02.576: INFO: OK!\n\n STEP: Delete the first PinnedImageSet @ 03/26/26 11:56:02.576\nI0326 11:56:02.576800 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-one'\nMar 26 11:56:03.836: INFO: OK!\n\n STEP: Wait for the pool to reconcile after deleting the first PinnedImageSet @ 03/26/26 11:56:03.836\nMar 26 11:56:03.836: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0326 11:57:03.842194 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 26 11:57:04.783: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0326 11:57:04.783540 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0326 11:57:45.310542 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:57:46.619661 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o 
jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0326 11:57:48.662868 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:57:49.600070 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0326 11:57:50.509057 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 11:57:51.443481 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 26 11:57:52.357: INFO: Pool worker successfully pinned the images! 
Complete!\nMar 26 11:57:52.357: INFO: OK!\n\n STEP: Verify the first PinnedImageSet is deleted and the second still exists @ 03/26/26 11:57:52.357\nI0326 11:57:52.358107 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'\nI0326 11:57:53.291378 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:\nError from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found\nI0326 11:57:53.291551 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'\nMar 26 11:57:54.240: INFO: OK!\n\n STEP: Verify the image is STILL pinned on all nodes after deleting the first PinnedImageSet @ 03/26/26 11:57:54.24\nI0326 11:57:55.465697 203228 client.go:743] showInfo is true\nI0326 11:57:55.465729 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-3-16.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0326 11:58:00.182203 203228 client.go:743] showInfo is true\nI0326 11:58:00.182238 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug node/ip-10-0-59-220.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nI0326 11:58:06.173394 203228 client.go:743] showInfo is true\nI0326 11:58:06.173446 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig debug 
node/ip-10-0-84-67.us-east-2.compute.internal -- chroot /host crictl images --pinned -o json quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083'\nMar 26 11:58:10.711: INFO: OK!\n\n STEP: Verify MachineConfigNodes remain healthy after the deletion @ 03/26/26 11:58:10.711\nI0326 11:58:10.711959 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:58:11.619981 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 26 11:58:12.548: INFO: Value: False\nI0326 11:58:12.548812 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:58:13.779107 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 26 11:58:14.718: INFO: Value: False\nI0326 11:58:14.718623 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:58:16.751807 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 26 11:58:17.664: INFO: Value: False\nI0326 11:58:17.664250 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:58:18.604988 203228 client.go:718] 
Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 26 11:58:19.526: INFO: Value: False\nI0326 11:58:19.526682 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:58:20.439088 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")]}'\nMar 26 11:58:21.372: INFO: Value: False\nI0326 11:58:21.372631 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={}'\nI0326 11:58:22.309187 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")]}'\nMar 26 11:58:23.241: INFO: Value: False\nMar 26 11:58:23.241: INFO: OK!\n\n STEP: Verify that a PinnedImageSet with duplicate images is rejected by the API @ 03/26/26 11:58:23.241\nMar 26 11:58:23.241: INFO: Creating PinnedImageSet tc-88378-pis-duplicate in pool worker with images [quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083 quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083]\nI0326 11:58:26.243497 203228 client.go:743] showInfo is true\nI0326 11:58:26.243530 203228 client.go:745] Running 'oc --namespace=e2e-test-mco-pinnedimages-gzhpl --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig process --ignore-unknown-parameters=true -f /tmp/fixture-testdata-dir3541755939/generic-pinned-image-set.yaml -p 
NAME=tc-88378-pis-duplicate POOL=worker IMAGES=[{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"},{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}]'\nI0326 11:58:28.233311 203228 template.go:66] the file of resource is /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout\nI0326 11:58:28.233429 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout'\nI0326 11:58:29.149912 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig create -f /tmp/e2e-test-mco-pinnedimages-gzhpl-dpmvkuf6config.json.stdout:\nThe PinnedImageSet "tc-88378-pis-duplicate" is invalid: spec.pinnedImages[1]: Duplicate value: map[string]interface {}{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}\nThe PinnedImageSet "tc-88378-pis-duplicate" is invalid: spec.pinnedImages[1]: Duplicate value: map[string]interface {}{"name":"quay.io/openshifttest/alpine@sha256:dc1536cbff0ba235d4219462aeccd4caceab9def96ae8064257d049166890083"}\nI0326 11:58:29.149951 203228 template.go:83] fail to create/apply resource exit status 1\nMar 26 11:58:29.149: INFO: OK!\n\n STEP: Verify the duplicate PinnedImageSet was not created @ 03/26/26 11:58:29.15\nI0326 11:58:29.150144 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-duplicate -o jsonpath={.}'\nI0326 11:58:30.066101 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-duplicate -o jsonpath={.}:\nError from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-duplicate" not found\nMar 26 11:58:30.066: INFO: 
OK!\n\nI0326 11:58:30.066492 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-two -o jsonpath={.}'\nI0326 11:58:36.208535 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig delete pinnedimageset tc-88378-pis-two'\nMar 26 11:58:37.697: INFO: Waiting 10m0s for MCP worker to complete pinned images.\nI0326 11:59:37.698831 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.poolSynchronizersStatus[?(@.poolSynchronizerType=="PinnedImageSets")]}'\nMar 26 11:59:38.648: INFO: Pinned status: {"availableMachineCount":3,"machineCount":3,"poolSynchronizerType":"PinnedImageSets","readyMachineCount":3,"unavailableMachineCount":0,"updatedMachineCount":3}\nI0326 11:59:38.648722 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get mcp worker -o jsonpath={.status.machineCount}'\nI0326 12:00:08.748588 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 12:00:10.058082 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-3-16.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0326 12:00:10.953806 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 12:00:11.862759 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-59-220.us-east-2.compute.internal -o 
jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nI0326 12:00:12.772484 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsDegraded")].status}'\nI0326 12:00:13.706289 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get machineconfignode ip-10-0-84-67.us-east-2.compute.internal -o jsonpath={.status.conditions[?(@.type=="PinnedImageSetsProgressing")].status}'\nMar 26 12:00:14.639: INFO: Pool worker successfully pinned the images! Complete!\nI0326 12:00:14.640103 203228 client.go:718] Running 'oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}'\nI0326 12:00:15.584232 203228 client.go:727] Error running /home/harshpat/.local/bin/oc --kubeconfig=/home/harshpat/Downloads/repos/kubeconfig get pinnedimageset tc-88378-pis-one -o jsonpath={.}:\nError from server (NotFound): pinnedimagesets.machineconfiguration.openshift.io "tc-88378-pis-one" not found\nMar 26 12:00:15.584: INFO: \u003cKind: pinnedimageset, Name: tc-88378-pis-one, Namespace: \u003e does not exist! No need to delete it!\nI0326 12:00:16.456736 203228 client.go:421] Deleted {user.openshift.io/v1, Resource=users e2e-test-mco-pinnedimages-gzhpl-user}, err: \u003cnil\u003e\nI0326 12:00:16.750972 203228 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-mco-pinnedimages-gzhpl}, err: \u003cnil\u003e\nI0326 12:00:17.046090 203228 client.go:421] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha2562YmQuchYNKaBqqEHouv8Sc_hbqH3KupLrvZmE0A6G5A}, err: \u003cnil\u003e\n STEP: Destroying namespace "e2e-test-mco-pinnedimages-gzhpl" for this suite. @ 03/26/26 12:00:17.046\n"
}


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
test/extended-priv/mco_pinnedimages.go (1)

617-633: Split duplicate-image rejection into a separate g.It

This spec currently validates two different behaviors (shared-reference deletion + duplicate-input rejection). Separating them will make failures easier to diagnose and reduce test maintenance overhead.

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended-priv/mco_pinnedimages.go` around lines 617 - 633, Split the
duplicate-image rejection check out of the existing test into its own g.It:
create a new It with a descriptive name (e.g., "rejects PinnedImageSet with
duplicate images") and move the block that defines pisDupName, calls
CreateGenericPinnedImageSet(oc.AsAdmin(), pisDupName, mcp.GetName(),
[]string{pinnedImage, pinnedImage}), and the subsequent expectations on err
(HaveOccurred, BeAssignableToTypeOf(&exutil.ExitError{}), and StdErr contains
"Duplicate value") plus the existence check using
NewPinnedImageSet(oc.AsAdmin(), pisDupName). Remove those lines from the
original test so it only tests shared-reference deletion, ensure the new It uses
the same fixtures/variables (mcp, pinnedImage, oc) and includes any necessary
setup/cleanup and logger.Infof calls.
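Under the assumptions in the prompt above (the helper names `CreateGenericPinnedImageSet`, `NewPinnedImageSet`, `exutil.ExitError`, and the shared `oc`, `mcp`, and `pinnedImage` fixtures all keep their current signatures; the `Exists()` accessor is hypothetical), the suggested split could look roughly like this sketch, which is not self-contained and relies on the repo's test framework:

```go
// Sketch only: depends on the PR's existing Ginkgo fixtures and helpers.
g.It("rejects a PinnedImageSet with duplicate images", func() {
	pisDupName := "tc-88378-pis-duplicate"

	// The API should reject duplicate entries in spec.pinnedImages outright.
	err := CreateGenericPinnedImageSet(oc.AsAdmin(), pisDupName, mcp.GetName(),
		[]string{pinnedImage, pinnedImage})
	o.Expect(err).To(o.HaveOccurred(), "duplicate images should be rejected")
	o.Expect(err).To(o.BeAssignableToTypeOf(&exutil.ExitError{}))
	o.Expect(err.(*exutil.ExitError).StdErr).To(o.ContainSubstring("Duplicate value"))

	// The rejected resource must not exist in the cluster.
	o.Expect(NewPinnedImageSet(oc.AsAdmin(), pisDupName).Exists()).To(o.BeFalse(),
		"the duplicate PinnedImageSet must not be created")
})
```

Keeping the rejection check in its own spec means a validation-webhook regression fails independently of the shared-reference deletion scenario, which matches the bot's rationale about easier failure diagnosis.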

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f03ad8b3-7b94-47b3-8162-468eceaa2634

📥 Commits

Reviewing files that changed from the base of the PR and between 69f31d0 and 7d41f94.

📒 Files selected for processing (1)
  • test/extended-priv/mco_pinnedimages.go

Signed-off-by: HarshwardhanPatil07 <harshpat@redhat.com>
@openshift-ci
Contributor

openshift-ci bot commented Mar 26, 2026

@HarshwardhanPatil07: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-gcp-op-ocl | 7d41f94 | link | false | `/test e2e-gcp-op-ocl` |
| ci/prow/e2e-hypershift | 7d41f94 | link | true | `/test e2e-hypershift` |

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@HarshwardhanPatil07
Copy link
Author

PTAL @ptalgulk01


Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
