 `\[Feature:LocalStorageCapacityIsolation\]`, // relies on a separate daemonset?
 `\[sig-cloud-provider-gcp\]`, // these tests require a different configuration - note that GCE tests from sig-cluster-lifecycle were moved to sig-cloud-provider-gcp; see https://github.com/kubernetes/kubernetes/commit/0b3d50b6dccdc4bbd0b3e411c648b092477d79ac#diff-3b1910d08fb8fd8b32956b5e264f87cb
 `kube-dns-autoscaler`, // Don't run kube-dns
-`should check if Kubernetes master services is included in cluster-info`, // Don't run kube-dns
-`DNS configMap`, // this tests dns federation configuration via configmap, which we don't support yet
+`DNS configMap`, // this tests dns federation configuration via configmap, which we don't support yet
-`NodeProblemDetector`, // requires a non-master node to run on
-`Advanced Audit should audit API calls`, // expects to be able to call /logs
-
-`Firewall rule should have correct firewall rules for e2e cluster`, // Upstream-install specific
+`NodeProblemDetector`, // requires a non-master node to run on
 `\[sig-network\] \[Feature:Topology Hints\] should distribute endpoints evenly`,
@@ -67,14 +54,12 @@ var (
 // always add an issue here
 "[Disabled:Broken]": {
 `mount an API token into pods`, // We add 6 secrets, not 1
-`ServiceAccounts should ensure a single API token exists`, // We create lots of secrets
 `unchanging, static URL paths for kubernetes api services`, // the test needs to exclude URLs that are not part of conformance (/logs)
 `Services should be able to up and down services`, // we don't have wget installed on nodes
 `KubeProxy should set TCP CLOSE_WAIT timeout`, // the test requires communication to port 11302 on the cluster nodes
 `should check kube-proxy urls`, // previously this test was skipped b/c we reported -1 as the number of nodes; now we report the proper number and the test fails
 `SSH`, // TRIAGE
 `should implement service.kubernetes.io/service-proxy-name`, // this is an optional test that requires SSH. sig-network
-`recreate nodes and ensure they function upon restart`, // https://bugzilla.redhat.com/show_bug.cgi?id=1756428
 `\[sig-storage\].*\[Driver: nfs\] \[Testpattern: Dynamic PV \(default fs\)\].*subPath should be able to unmount after the subpath directory is deleted`,
@@ -123,14 +104,8 @@ var (
 `Netpol \[LinuxOnly\] NetworkPolicy between server and client using UDP should enforce policy based on Ports`,
 `Netpol \[LinuxOnly\] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector`,

-`Topology Hints should distribute endpoints evenly`,
 // Also, our CI doesn't support topology, so disable those tests
 `\[sig-storage\] In-tree Volumes \[Driver: vsphere\] \[Testpattern: Dynamic PV \(delayed binding\)\] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies`,
 `\[sig-storage\] In-tree Volumes \[Driver: vsphere\] \[Testpattern: Dynamic PV \(delayed binding\)\] topology should provision a volume and schedule a pod with AllowedTopologies`,
@@ -184,7 +146,6 @@ var (
 },
 // tests too slow to be part of conformance
 "[Slow]": {
-`\[sig-scalability\]`, // disable from the default set for now
 `should create and stop a working application`, // Inordinately slow tests

 `\[Feature:PerformanceDNS\]`, // very slow
@@ -194,25 +155,13 @@ var (
 // tests that are known flaky
 "[Flaky]": {
 `Job should run a job to completion when tasks sometimes fail and are not locally restarted`, // seems flaky, also may require too many resources
-// TODO(node): test works when run alone, but not in the suite in CI
-`\[Feature:HPA\] Horizontal pod autoscaling \(scale resource: CPU\) \[sig-autoscaling\] ReplicationController light Should scale from 1 pod to 2 pods`,
 `Clean up pods on node`, // schedules up to max pods per node
-`DynamicProvisioner should test that deleting a claim before the volume is provisioned deletes the volume`, // test is very disruptive to other tests
-
-`Should be able to support the 1\.7 Sample API Server using the current Aggregator`, // down apiservices break other clients today https://bugzilla.redhat.com/show_bug.cgi?id=1623195
-
-`\[Feature:HPA\] Horizontal pod autoscaling \(scale resource: CPU\) \[sig-autoscaling\] ReplicationController light Should scale from 1 pod to 2 pods`,
-
-`should prevent Ingress creation if more than 1 IngressClass marked as default`, // https://bugzilla.redhat.com/show_bug.cgi?id=1822286
-
-`\[sig-network\] IngressClass \[Feature:Ingress\] should set default value on new IngressClass`, // https://bugzilla.redhat.com/show_bug.cgi?id=1833583
 },
 // Tests that don't pass on disconnected, either due to requiring
 // internet access for GitHub (e.g. many of the s2i builds), or
@@ -245,33 +194,14 @@ var (
 `\[Feature:LoadBalancer\]`,
 },
 "[Skipped:gce]": {
-// Requires creation of a different compute instance in a different zone and is not compatible with volumeBindingMode of WaitForFirstConsumer which we use in 4.x
-`\[sig-storage\] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist`,
-
 // The following tests try to ssh directly to a node. None of our nodes have external IPs
-`\[k8s.io\] \[sig-node\] crictl should be able to run crictl on the node`,
 `\[sig-storage\] Flexvolumes should be mountable`,
-`\[sig-storage\] Detaching volumes should not work when mount is in progress`,
-
-// We are using ovn-kubernetes to conceal metadata
-`\[sig-auth\] Metadata Concealment should run a check-metadata-concealment job to completion`,
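The entries in these rule sets are Go regular expressions matched against full ginkgo test names; a test whose name matches any pattern in a set picks up that set's label (e.g. `[Disabled:Broken]` or `[Slow]`). A minimal sketch of that matching, using a hypothetical `matchesAny` helper and sample patterns drawn from the list above (the real suite compiles and applies these patterns elsewhere):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesAny reports whether testName matches any of the exclusion
// patterns. Illustrative helper, not the repository's actual function.
func matchesAny(patterns []string, testName string) bool {
	for _, p := range patterns {
		if regexp.MustCompile(p).MatchString(testName) {
			return true
		}
	}
	return false
}

func main() {
	// Sample patterns taken from the exclusion lists above.
	patterns := []string{
		`mount an API token into pods`,
		`\[sig-network\] \[Feature:Topology Hints\] should distribute endpoints evenly`,
	}
	name := "[sig-network] [Feature:Topology Hints] should distribute endpoints evenly"
	fmt.Println(matchesAny(patterns, name)) // prints "true"
}
```

Note that the square brackets in test names are regex metacharacters, which is why every literal `[`/`]` in the list is escaped as `\[`/`\]`.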