It Deletes a Pod Before Showing Its Logs
In ci-e2e-openshift.yaml (line 913 at commit c47c25b),

    kubectl scale deployment "$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" --replicas=0 || true

deletes the one and only controller Pod. Later (lines 927 to 928),
    kubectl logs deployment/"$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" --previous 2>/dev/null || true
    kubectl logs deployment/"$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" 2>/dev/null || true

try to display the logs of the latest and previous container runs in the now-nonexistent Pod. Because of the 2>/dev/null and || true, these commands fail silently, so no controller logs ever appear in the CI output.
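One way to fix this is to collect the logs before scaling the Deployment down. A minimal sketch of the reordered cleanup step (the step name and if: guard are illustrative, not the workflow's actual ones; the commands are the ones quoted above):

```yaml
- name: Collect controller logs and tear down
  if: always()
  run: |
    # Grab the logs while the Pod still exists ...
    kubectl logs deployment/"$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" 2>/dev/null || true
    kubectl logs deployment/"$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" --previous 2>/dev/null || true
    # ... and only then delete it.
    kubectl scale deployment "$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" --replicas=0 || true
```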
It Only Shows State If There Was a Failure
It would be better to dump the state unconditionally, so that we have a good baseline to compare with when diagnosing a failure.
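In a GitHub Actions workflow, dumping unconditionally just means guarding the step with always() instead of failure(). A sketch, with an illustrative step name and dump commands (the workflow's actual dump step may gather different resources):

```yaml
- name: Dump cluster state
  if: always()   # was: if: failure() -- now runs on success too, as a baseline
  run: |
    kubectl get all -n "$FMA_NAMESPACE" -o yaml || true
    kubectl describe pods -n "$FMA_NAMESPACE" || true
```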
State is Not Deleted on Failure
But normal developers are not authorized to see the state in the failure case, so the state left behind is just junk, serving no purpose.
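Since nobody who can act on it can see it, the teardown might as well run regardless of the job's outcome. A sketch of an unconditional cleanup step, assuming the release was installed with Helm (if not, substitute the workflow's actual uninstall command):

```yaml
- name: Tear down test resources
  if: always()   # run on success AND failure
  run: |
    helm uninstall "$FMA_RELEASE_NAME" -n "$FMA_NAMESPACE" || true
    kubectl delete namespace "$FMA_NAMESPACE" --ignore-not-found || true
```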