
E2E test "Slurm when using --wait option it should stream logs from all the containers" not working as expected. #78

Open

Description

@mbobrovskyi

What happened:
E2E test "Slurm when using --wait option it should stream logs from all the containers" not working as expected.

kjob/test/e2e/slurm_test.go, lines 559 to 614 at commit 2131368:

ginkgo.It("should stream logs from all the containers", func() {
    ginkgo.By("Create temporary file")
    script, err := os.CreateTemp("", "e2e-slurm-")
    gomega.Expect(err).NotTo(gomega.HaveOccurred())
    defer os.Remove(script.Name())
    defer script.Close()

    ginkgo.By("Prepare script", func() {
        _, err := script.WriteString("#!/bin/bash\necho 'Hello world!'")
        gomega.Expect(err).NotTo(gomega.HaveOccurred())
    })

    var out []byte
    ginkgo.By("Create slurm", func() {
        cmdArgs := []string{"create", "slurm", "-n", ns.Name, "--profile", profile.Name, "--wait"}
        // create pod with two containers
        cmdArgs = append(cmdArgs, "--", "-n=2", script.Name())
        cmd := exec.Command(kjobctlPath, cmdArgs...)
        out, err = util.Run(cmd)
        gomega.Expect(err).NotTo(gomega.HaveOccurred(), "%s: %s", err, out)
        gomega.Expect(out).NotTo(gomega.BeEmpty())
    })

    var jobName, configMapName, serviceName, logs string
    ginkgo.By("Check CLI output", func() {
        jobName, configMapName, serviceName, logs, err = parseSlurmCreateOutput(out, profile.Name)
        gomega.Expect(err).NotTo(gomega.HaveOccurred())
        gomega.Expect(jobName).NotTo(gomega.BeEmpty())
        gomega.Expect(configMapName).NotTo(gomega.BeEmpty())
        gomega.Expect(serviceName).NotTo(gomega.BeEmpty())
        gomega.Expect(logs).To(
            gomega.MatchRegexp(
                `Starting log streaming for pod "profile-slurm-[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9]+" container "c1-."\.\.\.
Starting log streaming for pod "profile-slurm-[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9]+" container "c1-."\.\.\.
Hello world!
Hello world!
Job logs streaming finished\.`,
            ),
        )
    })

    ginkgo.By("Check the job is completed", func() {
        gomega.Eventually(func(g gomega.Gomega) {
            job := &batchv1.Job{}
            g.Expect(k8sClient.Get(ctx, client.ObjectKey{Namespace: ns.Name, Name: jobName}, job)).To(gomega.Succeed())
            g.Expect(job.Status.Conditions).To(gomega.ContainElement(gomega.BeComparableTo(
                batchv1.JobCondition{
                    Type:   batchv1.JobComplete,
                    Status: corev1.ConditionTrue,
                },
                cmpopts.IgnoreFields(batchv1.JobCondition{}, "LastTransitionTime", "LastProbeTime", "Reason", "Message"))))
        }, util.Timeout, util.Interval).Should(gomega.Succeed())
    })
})
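For context, the assertion above can only ever match containers named c1-<something>. Here is a minimal standalone check of that pattern (the pod and container names in the sample log are invented for illustration; gomega.MatchRegexp is effectively an unanchored regular-expression match, which regexp.MatchString reproduces):

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // The exact pattern from the test: both streaming lines name a "c1-." container.
    pattern := `Starting log streaming for pod "profile-slurm-[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9]+" container "c1-."\.\.\.
Starting log streaming for pod "profile-slurm-[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9]+" container "c1-."\.\.\.
Hello world!
Hello world!
Job logs streaming finished\.`

    // Invented sample output: two task replicas of the same template container c1.
    logs := `Starting log streaming for pod "profile-slurm-abc12-0-xyz34" container "c1-0"...
Starting log streaming for pod "profile-slurm-abc12-0-xyz34" container "c1-1"...
Hello world!
Hello world!
Job logs streaming finished.`

    fmt.Println(regexp.MustCompile(pattern).MatchString(logs)) // true
}

Even when the expectation passes, both matched streams name the same template container c1, just with different suffixes.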

However, the job template used by this test defines only one container:

jobTemplate = wrappers.MakeJobTemplate("job-template", ns.Name).
    RestartPolicy(corev1.RestartPolicyNever).
    BackoffLimitPerIndex(0).
    WithContainer(*wrappers.MakeContainer("c1", util.E2eTestBashImage).Obj()).
    Obj()
gomega.Expect(k8sClient.Create(ctx, jobTemplate)).To(gomega.Succeed())

so both expected "Starting log streaming" lines can only ever refer to that single template container, and the test shows nothing about streaming from multiple distinct containers.

What you expected to happen:

The test should cover the case where the pod has multiple distinct containers.
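One possible direction (a sketch only, reusing the builders from the snippet above and assuming the wrappers builder appends on repeated WithContainer calls; the second container c2 is an assumption, not existing code in the repository) is to declare a second container in the job template:

jobTemplate = wrappers.MakeJobTemplate("job-template", ns.Name).
    RestartPolicy(corev1.RestartPolicyNever).
    BackoffLimitPerIndex(0).
    WithContainer(*wrappers.MakeContainer("c1", util.E2eTestBashImage).Obj()).
    // Hypothetical second container, so that log streaming from more than one
    // distinct container is actually exercised.
    WithContainer(*wrappers.MakeContainer("c2", util.E2eTestBashImage).Obj()).
    Obj()
gomega.Expect(k8sClient.Create(ctx, jobTemplate)).To(gomega.Succeed())

The MatchRegexp pattern would then need to accept both names (for example c[12]-. instead of c1-.), assuming the container naming scheme stays <template-container>-<suffix>.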

How to reproduce it (as minimally and precisely as possible):

Run make test-e2e.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Kjob version (use git describe --tags --dirty --always):
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Labels: kind/bug