
A couple of state-dumping issues in OpenShift E2E test workflow #273

@MikeSpreitzer

Description


It Deletes a Pod Before Showing Its Logs

The line

kubectl scale deployment "$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" --replicas=0 || true

deletes the one and only controller Pod. Later,

kubectl logs deployment/"$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" --previous 2>/dev/null || true
kubectl logs deployment/"$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" 2>/dev/null || true

try to display the logs of the previous and latest container runs of the now-nonexistent Pod; with stderr discarded and the exit code masked by || true, the failure is silent.
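A minimal sketch of a fix, keeping the workflow's own variable names: capture the logs while the Pod still exists, and only then scale the controller down. Here `kubectl` is stubbed and the variable values are placeholders, so the ordering can be demonstrated without a cluster; the real workflow would use the actual `kubectl` binary.

```shell
#!/usr/bin/env bash
FMA_RELEASE_NAME=fma            # placeholder value for illustration
FMA_NAMESPACE=fma-system        # placeholder value for illustration
kubectl() { echo "kubectl $*"; } # stub: just echoes the command it was given

dump_logs_then_scale_down() {
  # Logs of the previous and current container runs, while the Pod exists.
  kubectl logs deployment/"$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" --previous 2>/dev/null || true
  kubectl logs deployment/"$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" 2>/dev/null || true
  # Only now delete the one and only controller Pod.
  kubectl scale deployment "$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" --replicas=0 || true
}

dump_logs_then_scale_down
```

The point is purely the ordering: once the Deployment is scaled to zero replicas, `kubectl logs deployment/...` has no Pod left to read from.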

It Only Shows State If There Was a Failure

It would be better to dump the state unconditionally, so that we have a good baseline to compare with when diagnosing a failure.
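One way to arrange an unconditional dump, sketched here with a hypothetical `dump_state` helper standing in for the workflow's real dumping commands: an EXIT trap runs on both success and failure, so every run records a baseline.

```shell
#!/usr/bin/env bash
# dump_state is a hypothetical placeholder for the real dumping commands
# (kubectl get, kubectl logs, and so on).
dump_state() {
  echo "dumping workflow state"
}
# An EXIT trap fires on every exit path, successful or not, so the
# state is dumped unconditionally rather than only on failure.
trap dump_state EXIT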

State is Not Deleted on Failure

On failure, the accumulated state is left behind in the cluster. But normal developers are not authorized to see the state in the failure case, so what is left around is just junk, serving no purpose.
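A sketch of failure-path cleanup under the same assumptions (stubbed `kubectl`, placeholder namespace variable): a cleanup trap deletes the leftover state on every exit path, so failed runs do not leave junk that ordinary developers cannot inspect anyway.

```shell
#!/usr/bin/env bash
FMA_NAMESPACE=fma-system         # placeholder value for illustration
kubectl() { echo "kubectl $*"; } # stub: just echoes the command it was given

cleanup_state() {
  # --ignore-not-found makes the delete safe if setup never got this far.
  kubectl delete namespace "$FMA_NAMESPACE" --ignore-not-found || true
}
# Fires on success *and* failure, so no state is ever left behind.
trap cleanup_state EXIT
```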
