
Add e2e test for [not] pruning remote revisions #658


Open

nrfox wants to merge 4 commits into main from e2e-test-for-pruning

Conversation

@nrfox (Contributor) commented Feb 14, 2025

What type of PR is this?

  • Enhancement / New Feature
  • Bug Fix
  • Refactor
  • Optimization
  • Test
  • Documentation Update

What this PR does / why we need it:

Adds e2e test for #654

Which issue(s) this PR fixes:

Fixes #

Related Issue/PR #
Relates to #654

Additional information:

@nrfox requested a review from a team as a code owner on February 14, 2025 20:46

codecov bot commented Feb 14, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 74.57%. Comparing base (f23dacc) to head (5045cfa).
Report is 4 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #658      +/-   ##
==========================================
+ Coverage   74.28%   74.57%   +0.28%     
==========================================
  Files          42       42              
  Lines        2559     2584      +25     
==========================================
+ Hits         1901     1927      +26     
  Misses        565      565              
+ Partials       93       92       -1     


Signed-off-by: Nick Fox <[email protected]>
g.Expect(clPrimary.List(ctx, gatewayPods, client.InNamespace(controlPlaneNamespace), client.MatchingLabels{"istio": "eastwestgateway"})).To(Succeed())
for _, pod := range gatewayPods.Items {
    g.Expect(pod.DeletionTimestamp).To(BeNil())
    g.Expect(pod).To(HaveCondition(corev1.PodReady, metav1.ConditionTrue), "Pod is not Ready in sample namespace; unexpected Condition")
}
Contributor
Do you think we should log the current state of the pod here when we hit this error?

Contributor Author
Yes, that would be helpful. I'll add that.
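
One minimal way to do that (a sketch assuming Gomega's optional failure-message argument and fmt's %#v verb; not necessarily the exact change made in this PR):

  // Sketch: include the pod's status in the Gomega failure message so a
  // failed readiness check shows what state the pod was actually in.
  // %#v prints the Go-syntax representation of the struct, matching the
  // output pasted further down in this thread.
  g.Expect(pod).To(HaveCondition(corev1.PodReady, metav1.ConditionTrue),
      fmt.Sprintf("Pod is not Ready in sample namespace; unexpected Condition. Pod status: %#v", pod.Status))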

Contributor Author

Not the most readable output, but now it will print the pod status struct:

 Pod is not Ready in sample namespace; unexpected Condition. Pod status: v1.PodStatus{Phase:"Running",
   Conditions:[]v1.PodCondition{
     v1.PodCondition{Type:"PodReadyToStartContainers", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2025, time.February, 18, 14, 26, 39, 0, time.Local), Reason:"", Message:""},
     v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2025, time.February, 18, 14, 26, 40, 0, time.Local), Reason:"", Message:""},
     v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2025, time.February, 18, 14, 26, 42, 0, time.Local), Reason:"", Message:""},
     v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2025, time.February, 18, 14, 26, 42, 0, time.Local), Reason:"", Message:""},
     v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2025, time.February, 18, 14, 26, 38, 0, time.Local), Reason:"", Message:""}},
   Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", HostIPs:[]v1.HostIP{v1.HostIP{IP:"172.18.0.3"}}, PodIP:"10.10.0.11", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.10.0.11"}}, StartTime:time.Date(2025, time.February, 18, 14, 26, 38, 0, time.Local),
   InitContainerStatuses:[]v1.ContainerStatus{
     v1.ContainerStatus{Name:"istio-init", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000391d50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/istio-release/proxyv2:1.24.2", ImageID:"gcr.io/istio-release/proxyv2@sha256:445156b5f4a780242d079a47b7d88199cbbb5959c92358469b721af490eca1ae", ContainerID:"containerd://d09ccac93a0db514809bf048282cc20c3a45da1b21cea78a31e232407bff680a", Started:(*bool)(0xc0008b7918), AllocatedResources:v1.ResourceList(nil), Resources:(*v1.ResourceRequirements)(nil), VolumeMounts:[]v1.VolumeMountStatus{v1.VolumeMountStatus{Name:"kube-api-access-v9t4v", MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(0xc000355120)}}, User:(*v1.ContainerUser)(nil), AllocatedResourcesStatus:[]v1.ResourceStatus(nil)}},
   ContainerStatuses:[]v1.ContainerStatus{
     v1.ContainerStatus{Name:"helloworld", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000e13e18), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"quay.io/sail-dev/examples-helloworld-v1:1.0", ImageID:"quay.io/sail-dev/examples-helloworld-v1@sha256:328b237e4fb1b551dddbc6b4920ec6d0c0d1e7a6f18861398ac54f9df3d466f8", ContainerID:"containerd://5640b80b10eaeafed1bd4f2803f901717f8358502e146a91ba069e20b8e56548", Started:(*bool)(0xc0008b783a), AllocatedResources:v1.ResourceList(nil), Resources:(*v1.ResourceRequirements)(nil), VolumeMounts:[]v1.VolumeMountStatus{v1.VolumeMountStatus{Name:"kube-api-access-v9t4v", MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(0xc000355100)}}, User:(*v1.ContainerUser)(nil), AllocatedResourcesStatus:[]v1.ResourceStatus(nil)},
     v1.ContainerStatus{Name:"istio-proxy", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000e13e30), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/istio-release/proxyv2:1.24.2", ImageID:"gcr.io/istio-release/proxyv2@sha256:445156b5f4a780242d079a47b7d88199cbbb5959c92358469b721af490eca1ae", ContainerID:"containerd://8d50eefca3557831de33cbc206240dfdd9c23cf4fc6235d5c668e4577100c11f", Started:(*bool)(0xc0008b7868), AllocatedResources:v1.ResourceList(nil), Resources:(*v1.ResourceRequirements)(nil), VolumeMounts:[]v1.VolumeMountStatus{v1.VolumeMountStatus{Name:"workload-socket", MountPath:"/var/run/secrets/workload-spiffe-uds", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil)}, v1.VolumeMountStatus{Name:"credential-socket", MountPath:"/var/run/secrets/credential-uds", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil)}, v1.VolumeMountStatus{Name:"workload-certs", MountPath:"/var/run/secrets/workload-spiffe-credentials", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil)}, v1.VolumeMountStatus{Name:"istiod-ca-cert", MountPath:"/var/run/secrets/istio", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil)}, v1.VolumeMountStatus{Name:"istio-data", MountPath:"/var/lib/istio/data", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil)}, v1.VolumeMountStatus{Name:"istio-envoy", MountPath:"/etc/istio/proxy", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil)}, v1.VolumeMountStatus{Name:"istio-token", MountPath:"/var/run/secrets/tokens", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil)}, v1.VolumeMountStatus{Name:"istio-podinfo", MountPath:"/etc/istio/pod", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil)}, v1.VolumeMountStatus{Name:"kube-api-access-v9t4v", MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(0xc000355110)}}, User:(*v1.ContainerUser)(nil), AllocatedResourcesStatus:[]v1.ResourceStatus(nil)}},
   QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil), Resize:"", ResourceClaimStatuses:[]v1.PodResourceClaimStatus(nil)}

@sridhargaddam (Contributor) commented Feb 22, 2025

Yeah, this is too verbose. I'm fine if you want to revert this change.
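
(A more compact alternative, purely as a hypothetical sketch and not something in this PR, would be to marshal only the pod's conditions rather than the whole status, e.g. with sigs.k8s.io/yaml:)

  // Hypothetical alternative: dump only the conditions as YAML instead of
  // the full %#v struct. yaml.Marshal here is sigs.k8s.io/yaml (assumed import).
  conditions, err := yaml.Marshal(pod.Status.Conditions)
  g.Expect(err).ToNot(HaveOccurred())
  g.Expect(pod).To(HaveCondition(corev1.PodReady, metav1.ConditionTrue),
      "Pod is not Ready in sample namespace; unexpected Condition.\nConditions:\n"+string(conditions))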

Looks like there is a valid error in the e2e-kind-multicluster_sail-operator_main suite:

  [FAILED] Timed out after 180.000s.
  The function passed to Eventually failed at /home/prow/go/src/github.com/istio-ecosystem/sail-operator/tests/e2e/multicluster/multicluster_primaryremote_test.go:440 with:
  Expected
      <string>: default
  to equal
      <string>: default-v1-24-1
  In [BeforeAll] at: /home/prow/go/src/github.com/istio-ecosystem/sail-operator/tests/e2e/multicluster/multicluster_primaryremote_test.go:339 @ 02/18/25 20:00:42.857
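
For context, the check that times out compares revision names, roughly along these lines (a hypothetical reconstruction; the type, client, and field names below are assumed rather than taken from the actual test file):

  // Hypothetical sketch of the kind of polling assertion behind the timeout above.
  Eventually(func(g Gomega) {
      istio := &v1.Istio{}
      g.Expect(clRemote.Get(ctx, client.ObjectKey{Name: "default"}, istio)).To(Succeed())
      // Fails with `Expected "default" to equal "default-v1-24-1"` until the
      // control plane reports the new revision as active.
      g.Expect(istio.Status.ActiveRevisionName).To(Equal("default-v1-24-1"))
  }).WithTimeout(180 * time.Second).Should(Succeed())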

nrfox and others added 2 commits February 18, 2025 13:36
Co-authored-by: Sridhar Gaddam <[email protected]>
Signed-off-by: Nick Fox <[email protected]>
@nrfox force-pushed the e2e-test-for-pruning branch from 18fe1c4 to 5045cfa on February 18, 2025 19:44
@istio-testing (Collaborator)

@nrfox: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                                  Commit   Details  Required  Rerun command
e2e-kind-multicluster_sail-operator_main   5045cfa  link     true      /test e2e-kind-multicluster


@sridhargaddam (Contributor)

The following test is failing.

Summarizing 1 Failure:
  [FAIL] Multicluster deployment models Primary-Remote - Multi-Network configuration Istio version 1.24.1 when a revision is no longer in use [BeforeAll] sees the old revision as no longer in use
  /home/prow/go/src/github.com/istio-ecosystem/sail-operator/tests/e2e/multicluster/multicluster_primaryremote_test.go:339
