This repository was archived by the owner on Mar 26, 2025. It is now read-only.
Changes from 5 commits
41 commits
3800d03
feat(e2e): parallel testing with different kubeconfigs
bartam1 Jul 21, 2023
1561b6a
downgrade e2e k8s.io/apimachinery to the same ver that is used by mai…
bartam1 Jul 24, 2023
d7947cc
add bin/ginkgo Makefile
bartam1 Jul 24, 2023
d9efd7f
parent d7947cc45e8ff9dfcf11f5d121a90e154b58782f
bartam1 Jul 24, 2023
ccc9a6d
update ginkgo to 2.11.0
bartam1 Jul 27, 2023
f8a786d
Merge branch 'master' into e2e_param
bartam1 Jul 27, 2023
56c0634
run allTestCase
bartam1 Jul 27, 2023
527e501
format time output
bartam1 Jul 27, 2023
0f0e07e
increase zookeepercluster and kafkacluster creation timeout
bartam1 Jul 27, 2023
4826fa1
run allTests serial
bartam1 Jul 27, 2023
0764631
zookeeper timeout increased
bartam1 Jul 28, 2023
86586a4
add test progress indicator
bartam1 Jul 28, 2023
2df49bb
report entry refactored
bartam1 Jul 28, 2023
cd250f5
go with one kind cluster
bartam1 Jul 29, 2023
6958dce
Merge branch 'master' into e2e_param
bartam1 Jul 31, 2023
347d28d
refactor comments and K8sClusterPool
bartam1 Jul 31, 2023
2b85e9f
fix mockTests description
bartam1 Aug 3, 2023
b678b17
fix: missing kubectlOptions param
bartam1 Aug 3, 2023
400c0ff
fix David suggestions 1
bartam1 Aug 3, 2023
4c0b935
fix David suggestions 2
bartam1 Aug 3, 2023
4031097
fix unnecessary imports
bartam1 Aug 3, 2023
edab06c
fix: feedFromDirectory
bartam1 Aug 8, 2023
6f55555
Merge branch 'master' into e2e_param
bartam1 Aug 8, 2023
98b4a0d
add fix and test for version and provider identifier
bartam1 Aug 9, 2023
8b72d6c
add test for GetTestSuiteDurationParallel
bartam1 Aug 9, 2023
5154223
add test for testpool
bartam1 Aug 10, 2023
bc10e92
refactor classifier using NewTest constructor
bartam1 Aug 12, 2023
c1fd100
fix classifier unit test
bartam1 Aug 12, 2023
eb7c5a1
add e2e unit tests execution into Makefile
bartam1 Aug 12, 2023
a2952f6
add e2e go fmt go vet
bartam1 Aug 13, 2023
95d6371
fix go.mod
bartam1 Aug 13, 2023
50d3a81
rename GetRawConfig to CreateRawConfig
bartam1 Aug 14, 2023
129e7fe
zookeepecluster create timeout defaults 4min
bartam1 Aug 14, 2023
7029681
test 2 kind cluster setup
bartam1 Aug 16, 2023
7bc2a2d
default strategy versionComplete
bartam1 Aug 16, 2023
4c7b111
Merge branch 'master' into e2e_param
bartam1 Aug 16, 2023
842273f
go with parallel
bartam1 Aug 16, 2023
ba66727
increase zookeeper and kafka cluster creation timeout
bartam1 Aug 16, 2023
a13805f
default pod readiness timeout increased to 60
bartam1 Aug 16, 2023
a947c8c
go with one kind cluster by default
bartam1 Aug 17, 2023
18442f6
Merge branch 'master' into e2e_param
bartam1 Aug 18, 2023
2 changes: 1 addition & 1 deletion tests/e2e/const.go
@@ -53,7 +53,7 @@ const (
defaultTopicCreationWaitTime = 10 * time.Second
defaultUserCreationWaitTime = 10 * time.Second

kafkaClusterCreateTimeout = 600 * time.Second // Increased from 600 to 700 for multiple kind
kafkaClusterCreateTimeout = 600 * time.Second
kafkaClusterResourceCleanupTimeout = 120 * time.Second
kcatDeleetionTimeout = 40 * time.Second
zookeeperClusterCreateTimeout = 7 * time.Minute // Increased from 4 to 7 for multiple kind
Contributor: Same as on line 56, I don't think we need this comment.

Contributor Author: Can you please explain why we should remove this?

Contributor: I'm really not sure what sense it makes to have the explanation for this timeout here. As information for reviewers it would be fine as a review comment, but I don't think it needs to be a comment in the code.

Member: +1 that the comment "// Increased from 4 to 7 for multiple kind" can be removed, since it only gives context about why the change was made and doesn't add any valuable information to the source code itself.

Or perhaps we can change the comment to something like this:

Suggested change
zookeeperClusterCreateTimeout = 7 * time.Minute // Increased from 4 to 7 for multiple kind
// increase timeout for supporting multiple Kind clusters
zookeeperClusterCreateTimeout = 7 * time.Minute

Contributor Author: I can remove it, but then whoever runs into this issue with multiple kind clusters on GitHub will not know what the original setting was for the single kind cluster setup.

Contributor Author: I think the default values and the reasons for changing them are valuable in this case.

Contributor Author: Since the increased timeout did not help, I put it back to the original value and removed the comment.
Fixed in: 129e7fe

53 changes: 2 additions & 51 deletions tests/e2e/k8s.go
@@ -20,13 +20,13 @@ import (
"io"
"net/http"
"os"
"path"
"strings"
"text/template"
"time"

"emperror.dev/errors"
"github.com/Masterminds/sprig"
"github.com/banzaicloud/koperator/tests/e2e/pkg/common"
"github.com/cisco-open/k8s-objectmatcher/patch"
"github.com/gruntwork-io/terratest/modules/k8s"
. "github.com/onsi/ginkgo/v2"
@@ -101,55 +101,6 @@ func createOrReplaceK8sResourcesFromManifest( //nolint:unused // Note: this might come in handy for manual resource operations.
}
}

func getDefaultKubeContext(kubeconfigPath string) (string, error) {
kubeconfigBytes, err := os.ReadFile(kubeconfigPath)
if err != nil {
return "", errors.WrapIfWithDetails(err, "reading KUBECONFIG file failed", "path", kubeconfigPath)
}

structuredKubeconfig := make(map[string]interface{})
err = yaml.Unmarshal(kubeconfigBytes, &structuredKubeconfig)
if err != nil {
return "", errors.WrapIfWithDetails(
err,
"parsing kubeconfig failed",
"kubeconfig", string(kubeconfigBytes),
)
}

kubecontext, isOk := structuredKubeconfig["current-context"].(string)
if !isOk {
return "", errors.WrapIfWithDetails(
err,
"kubeconfig current-context is not string",
"current-context", structuredKubeconfig["current-context"],
)
}

return kubecontext, nil
}

// currentKubernetesContext returns the currently set Kubernetes context based
// on the the environment variables and the KUBECONFIG file.
func currentEnvK8sContext() (kubeconfigPath string, kubecontextName string, err error) {
kubeconfigPath, isExisting := os.LookupEnv("KUBECONFIG")
if !isExisting {
homePath, err := os.UserHomeDir()
if err != nil {
return "", "", errors.WrapIf(err, "retrieving user home directory failed")
}

kubeconfigPath = path.Join(homePath, ".kube", "config")
}

kubecontext, err := getDefaultKubeContext(kubeconfigPath)
if err != nil {
return "", "", err
}

return kubeconfigPath, kubecontext, nil
}

// getK8sCRD queries and returns the CRD of the specified CRD name from the
// provided Kubernetes context.
func getK8sCRD(kubectlOptions k8s.KubectlOptions, crdName string) ([]byte, error) { //nolint:unused // Note: this might come in handy for manual CRD operations.
@@ -390,7 +341,7 @@ func kubectlOptions(kubecontextName, kubeconfigPath, namespace string) k8s.KubectlOptions
// kubectlOptionsForCurrentContext returns a kubectlOptions object for the
// current Kubernetes context or alternatively an error.
func kubectlOptionsForCurrentContext() (k8s.KubectlOptions, error) {
kubeconfigPath, kubecontextName, err := currentEnvK8sContext()
kubeconfigPath, kubecontextName, err := common.CurrentEnvK8sContext()
if err != nil {
return k8s.KubectlOptions{}, errors.WrapIf(err, "retrieving current environment Kubernetes context failed")
}
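The net effect of this hunk is that the e2e package no longer carries its own kubeconfig-resolution helpers and instead calls the shared common.CurrentEnvK8sContext. For orientation, below is a minimal, self-contained sketch of what that helper does, reconstructed from the removed code above; the package name, main() wrapper, and error messages are illustrative, not part of the PR.

```go
// Minimal sketch, not the PR's verbatim code: how the shared helper in
// tests/e2e/pkg/common resolves the kubeconfig path and current context.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"gopkg.in/yaml.v2"
)

// currentEnvK8sContext mirrors common.CurrentEnvK8sContext: use $KUBECONFIG if
// set, otherwise fall back to ~/.kube/config, then read current-context.
func currentEnvK8sContext() (kubeconfigPath string, kubecontextName string, err error) {
	kubeconfigPath, ok := os.LookupEnv("KUBECONFIG")
	if !ok {
		home, homeErr := os.UserHomeDir()
		if homeErr != nil {
			return "", "", fmt.Errorf("retrieving user home directory failed: %w", homeErr)
		}
		kubeconfigPath = filepath.Join(home, ".kube", "config")
	}

	raw, err := os.ReadFile(kubeconfigPath)
	if err != nil {
		return "", "", fmt.Errorf("reading KUBECONFIG file %q failed: %w", kubeconfigPath, err)
	}

	structured := make(map[string]interface{})
	if err := yaml.Unmarshal(raw, &structured); err != nil {
		return "", "", fmt.Errorf("parsing kubeconfig failed: %w", err)
	}

	kubecontext, ok := structured["current-context"].(string)
	if !ok {
		return "", "", fmt.Errorf("kubeconfig current-context is not a string")
	}
	return kubeconfigPath, kubecontext, nil
}

func main() {
	path, kubecontext, err := currentEnvK8sContext()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("kubeconfig: %s, context: %s\n", path, kubecontext)
}
```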
10 changes: 6 additions & 4 deletions tests/e2e/koperator_suite_test.go
@@ -100,19 +100,21 @@ func runGinkgoTests(t *testing.T) error {
if err != nil {
return fmt.Errorf("could not parse MaxTimeout into time.Duration: %w", err)
}
// Protection against too long test suites
if testSuiteDuration > maxTimeout {
return fmt.Errorf("tests estimated duration: '%s' bigger then maxTimeout: '%s'", testSuiteDuration.String(), maxTimeout.String())
}

// Calculated timeout can be overran with the specified time length
allowedOverrun, err := time.ParseDuration(viper.GetString(config.Tests.AllowedOverrunDuration))
if err != nil {
return fmt.Errorf("could not parse AllowedOverrunDuration into time.Duration: %w", err)
}

// Set TestSuite timeout based on the generated tests
suiteConfig.Timeout = testSuiteDuration + allowedOverrun

// Protection against too long test suites
if suiteConfig.Timeout > maxTimeout {
return fmt.Errorf("tests estimated duration: '%s' longer then maxTimeout: '%s'", suiteConfig.Timeout.String(), maxTimeout.String())
}

if viper.GetBool(config.Tests.CreateTestReportFile) {
if err := createTestReportFile(); err != nil {
return err
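The point of the reordering in this hunk is that the maxTimeout guard now checks the final suite timeout (estimate plus allowed overrun) instead of the raw estimate before the overrun is added. A small illustrative sketch of that ordering follows; the duration values are placeholders, not values from the PR.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Placeholder values; in the suite these come from viper-backed config
	// (MaxTimeout, AllowedOverrunDuration) and from the generated test plan.
	testSuiteDuration := 50 * time.Minute
	allowedOverrun := 10 * time.Minute
	maxTimeout := 55 * time.Minute

	// The suite timeout is the estimate plus the allowed overrun...
	suiteTimeout := testSuiteDuration + allowedOverrun

	// ...and the guard now applies to that final value: 60m > 55m fails here,
	// whereas checking only the 50m estimate would have passed.
	if suiteTimeout > maxTimeout {
		fmt.Printf("tests estimated duration '%s' longer than maxTimeout: '%s'\n", suiteTimeout, maxTimeout)
		return
	}
	fmt.Printf("suite timeout set to %s\n", suiteTimeout)
}
```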
20 changes: 3 additions & 17 deletions tests/e2e/pkg/common/common.go
@@ -22,12 +22,11 @@ import (
"github.com/gruntwork-io/terratest/modules/k8s"
"golang.org/x/exp/maps"
"gopkg.in/yaml.v2"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/clientcmd/api"
)

// currentKubernetesContext returns the currently set Kubernetes context based
// CurrentEnvK8sContext returns the currently set Kubernetes context based
// on the the environment variables and the KUBECONFIG file.
func CurrentEnvK8sContext() (kubeconfigPath string, kubecontextName string, err error) {
kubeconfigPath, isExisting := os.LookupEnv("KUBECONFIG")
@@ -63,6 +62,7 @@ func KubectlOptionsForCurrentContext() (k8s.KubectlOptions, error) {
}, nil
}

// GetDefaultKubeContext returns the default kubeContext name from the given kubeconfig file
func GetDefaultKubeContext(kubeconfigPath string) (string, error) {
kubeconfigBytes, err := os.ReadFile(kubeconfigPath)
if err != nil {
@@ -105,25 +105,11 @@ func GetRawConfig(kubeconfigPath string) (api.Config, error) {
return clientConfig.RawConfig()
}

// GetKubeContexts returns the available kubecontext names in the kubeconfig file
func GetKubeContexts(kubeconfigPath string) ([]string, error) {
configs, err := GetRawConfig(kubeconfigPath)
if err != nil {
return nil, err
}
return maps.Keys(configs.Contexts), nil
}

// GetConfig returns kubernetes config based on the current environment.
// If fpath is provided, loads configuration from that file. Otherwise,
// GetConfig uses default strategy to load configuration from $KUBECONFIG,
// .kube/config, or just returns in-cluster config.
func GetConfigWithContext(kubeconfigPath, kubeContext string) (*rest.Config, error) {
rules := clientcmd.NewDefaultClientConfigLoadingRules()
if kubeconfigPath != "" {
rules.ExplicitPath = kubeconfigPath
}
overrides := &clientcmd.ConfigOverrides{CurrentContext: kubeContext}
return clientcmd.
NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).
ClientConfig()
}
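Since this PR is about running e2e tests against multiple kubeconfig contexts, GetKubeContexts (kept above, now with a doc comment) is the helper that enumerates the context names in a kubeconfig file. A hedged usage sketch follows; the import path appears in the k8s.go hunk earlier in this diff, the GetKubeContexts signature is taken from the code above, and the kubeconfig path is a hypothetical placeholder.

```go
package main

import (
	"fmt"
	"log"

	// Import path as added in the tests/e2e/k8s.go hunk of this PR.
	"github.com/banzaicloud/koperator/tests/e2e/pkg/common"
)

func main() {
	// Hypothetical kubeconfig path; the suite would normally take it from
	// $KUBECONFIG via common.CurrentEnvK8sContext.
	contexts, err := common.GetKubeContexts("/tmp/e2e-kubeconfig.yaml")
	if err != nil {
		log.Fatalf("listing kubecontexts failed: %v", err)
	}
	for _, name := range contexts {
		fmt.Println("available kubecontext:", name)
	}
}
```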
75 changes: 42 additions & 33 deletions tests/e2e/pkg/tests/mocks.go
@@ -18,15 +18,16 @@ import (
"github.com/gruntwork-io/terratest/modules/k8s"
. "github.com/onsi/ginkgo/v2"

//. "github.com/onsi/gomega"
"time"
)

// MockTestsMinimal:
// 3 different provider 2 different version 3 different K8s cluster with 2 tests
// Expected: 2 testCase 1 testCase on any available K8sCluster
// 1x2 + 1x3 = 5 test all-together
// Runtime parallel: 1x(5) = 5sec (time of the longest testCase)
// MockTestsMinimal returns a Classifier that has 3 different K8s clusters using 2 different K8s versions and 3 different providers.
// The Classifier contains 2 tests.
//
// Expected (minimal strategy):
// - 2 testCases
// - 1 testCase on any of the available K8sClusters
// - runtime parallel: 1x(5) = 5sec (time of the longest testCase)
func MockTestsMinimal() Classifier {
k8sClusterPool := K8sClusterPool{
NewMockK8sCluster(
@@ -54,11 +55,13 @@ func MockTestsMinimal() Classifier {
return NewClassifier(k8sClusterPool, mockTest1, mockTest2)
}

// MockTestsProvider:
// 3 different provider 3 different K8s cluster with 2 tests
// Expected: 6 testCase 2 testCase on every provider
// 3x2 + 3x3 = 15 test all-together
// Runtime parallel: 3x(3) = 9sec (time of the longest testCase)
// MockTestsProvider returns a Classifier that has 3 different K8s clusters using 3 different K8s provider.
// The Classifier contains 2 tests.
//
// Expected (provider strategy):
// - 6 testCases
// - 2 testCases on different providers
// - runtime parallel: 3x(3) = 9sec
func MockTestsProvider() Classifier {
k8sClusterPool := K8sClusterPool{
NewMockK8sCluster(
@@ -86,11 +89,13 @@ func MockTestsProvider() Classifier {
return NewClassifier(k8sClusterPool, mockTest1, mockTest2)
}

// MockTestsProviderMoreTestsThenProvider:
// 2 different provider 2 different K8s cluster with 3 tests
// Expected: 6 testCase 3 testCase on every provider
// 2x2 + 2x2 + 2x3 = 14 test all-together
// Runtime parallel: 4 + 4 + 5 = 13
// MockTestsProviderMoreTestsThenProvider returns a Classifier that has 2 different K8s clusters using 2 different K8s provider.
// The Classifier contains 3 tests.
//
// Expected (provider strategy):
// - 6 testCases
// - 3 testCases on different providers
// - runtime parallel: 4 + 4 + 5 = 13sec
func MockTestsProviderMoreTestsThenProvider() Classifier {
k8sClusterPool := K8sClusterPool{
NewMockK8sCluster(
@@ -111,11 +116,13 @@ func MockTestsProviderMoreTestsThenProvider() Classifier {
return NewClassifier(k8sClusterPool, mockTest1, mockTest2, mockTest3)
}

// MockTestsVersionOne:
// no different version 2 different K8s cluster with 2 tests
// Expected: 2 testCase -> 1 testCase on each K8sCluster
// 2x2 2x3 = 10 test all-together
// Runtime parallel: 1x5 = 5
// MockTestsVersionOne returns a Classifier that has 2 different K8s clusters using same K8s versions and 2 different providers.
// The Classifier contains 2 tests.
//
// Expected (version strategy):
// - 2 testCases
// - 1 testCase on any of the available K8sClusters
// - runtime parallel: 1 x 5 = 5sec
func MockTestsVersionOne() Classifier {
k8sClusterPool := K8sClusterPool{
NewMockK8sCluster(
@@ -136,11 +143,13 @@ func MockTestsVersionOne() Classifier {
return NewClassifier(k8sClusterPool, mockTest1, mockTest2)
}

// MockTestsVersion:
// 2 different version 3 different K8s cluster with 2 tests
// Expected: 4 testCase -> 2 testCase on each version
// 2x2 2x3 = 10 test all-together
// Runtime parallel: 4 + 5 = 9
// MockTestsVersion returns a Classifier that has 3 different K8s clusters using 2 different K8s versions and 2 different providers.
// The Classifier contains 2 tests.
//
// Expected (version strategy):
// - 2 testCases
// - 1 testCase on any of the available K8sClusters
// - runtime parallel: 1 x 5 = 5sec
func MockTestsVersion() Classifier {
k8sClusterPool := K8sClusterPool{
NewMockK8sCluster(
@@ -168,11 +177,13 @@ func MockTestsVersion() Classifier {
return NewClassifier(k8sClusterPool, mockTest1, mockTest2)
}

// MockTestsVersion:
// 2 different version 2 different version 4 K8s cluster with 2 tests
// Expected: 4 testCase -> 2 testCase on each version
// 2x2 2x3 = 10 test all-together
// Runtime parallel: 4 + 5 = 9
// MockTestsComplete returns a Classifier that has 4 different K8s clusters using 2 different K8s versions and 3 different providers.
// The Classifier contains 2 tests.
//
// Expected (complete strategy):
// - 6 testCases
// - 2 testCase on every different K8sClusters provider and version
// - runtime parallel: 4 + 5 = 9sec
func MockTestsComplete() Classifier {
k8sClusterPool := K8sClusterPool{
NewMockK8sCluster(
@@ -236,8 +247,6 @@ func testMockTest2(kubectlOptions k8s.KubectlOptions) {
})
It("MockTest2-2", func() {
time.Sleep(time.Second * 1)
//Expect(0).Should(Equal(1))
AddReportEntry("Output:", CurrentSpecReport().CapturedGinkgoWriterOutput)
})
It("MockTest2-3", func() {
time.Sleep(time.Second * 1)
9 changes: 5 additions & 4 deletions tests/e2e/pkg/tests/tests.go
@@ -49,8 +49,8 @@ func (tests TestPool) Equal(other TestPool) bool {
return false
}

tests.Sort()
other.Sort()
tests.sort()
other.sort()

for i := range tests {
if !tests[i].equal(other[i]) {
@@ -60,6 +60,7 @@
return true
}

// PoolInfo returns a formatted string as information about the current testPool
func (tests TestPool) PoolInfo() string {
testsByContextName := tests.getTestsByContextName()
testsByProviders := tests.getTestsByProviders()
@@ -100,7 +101,7 @@ func (tests TestPool) BuildParallelByK8sCluster() {
}
}

func (tests TestPool) Sort() {
func (tests TestPool) sort() {
sort.SliceStable(tests, func(i, j int) bool {
return tests[i].less(tests[j])
})
@@ -110,7 +111,7 @@ func (tests TestPool) getSortedTestsByClusterID() map[string][]Test {
testsByClusterID := make(map[string][]Test)
// Need to be sorted to achieve test specs tree consistency between processes
// otherwise it can happen that specs order will be different for each process
tests.Sort()
tests.sort()

for _, test := range tests {
testsByClusterID[test.k8sCluster.clusterInfo.clusterID] = append(testsByClusterID[test.k8sCluster.clusterInfo.clusterID], test)
4 changes: 2 additions & 2 deletions tests/e2e/test_alltestcases.go
@@ -30,7 +30,7 @@ var alltestCase = tests.TestCase{

func allTestCase(kubectlOptions k8s.KubectlOptions) {
var snapshottedInfo = &clusterSnapshot{}
snapshotCluster(snapshottedInfo)
snapshotCluster(kubectlOptions, snapshottedInfo)
testInstall(kubectlOptions)
testInstallZookeeperCluster(kubectlOptions)
testInstallKafkaCluster(kubectlOptions, "../../config/samples/simplekafkacluster.yaml")
@@ -44,5 +44,5 @@ func allTestCase(kubectlOptions k8s.KubectlOptions) {
testUninstallKafkaCluster(kubectlOptions)
testUninstallZookeeperCluster(kubectlOptions)
testUninstall(kubectlOptions)
snapshotClusterAndCompare(snapshottedInfo)
snapshotClusterAndCompare(kubectlOptions, snapshottedInfo)
}
16 changes: 4 additions & 12 deletions tests/e2e/test_snapshot.go
@@ -59,19 +59,11 @@ type localComparisonPartialObjectMetadataType struct {

// snapshotCluster takes a clusterSnapshot of a K8s cluster and
// stores it into the snapshotCluster instance referenced as input
func snapshotCluster(snapshottedInfo *clusterSnapshot) bool {
func snapshotCluster(kubectlOptions k8s.KubectlOptions, snapshottedInfo *clusterSnapshot) bool {
return When("Get cluster resources state", Ordered, func() {
var kubectlOptions k8s.KubectlOptions
var err error

BeforeAll(func() {
By("Acquiring K8s config and context")
kubectlOptions, err = kubectlOptionsForCurrentContext()
Expect(err).NotTo(HaveOccurred())
})

var clusterResourceNames []string
var namespacedResourceNames []string
var err error

When("Get api-resources names", func() {
It("Get cluster-scoped api-resources names", func() {
@@ -135,10 +127,10 @@

// snapshotClusterAndCompare takes a current snapshot of the K8s cluster and
// compares it against a snapshot provided as input
func snapshotClusterAndCompare(snapshottedInitialInfo *clusterSnapshot) bool {
func snapshotClusterAndCompare(kubectlOptions k8s.KubectlOptions, snapshottedInitialInfo *clusterSnapshot) bool {
return When("Verifying cluster resources state", Ordered, func() {
var snapshottedCurrentInfo = &clusterSnapshot{}
snapshotCluster(snapshottedCurrentInfo)
snapshotCluster(kubectlOptions, snapshottedCurrentInfo)

It("Checking resources list", func() {
// Temporarily increase maximum output length (default 4000) to fit more objects in the printed diff.