
Error for failed revision is not reported due to scaling to zero #14157

Open
@skonto

Description


In what area(s)?

/area autoscale
/area api

What version of Knative?

Any.

Expected Behavior

AFAIK, when a new revision is created, the deployment's replica count is set to 1 by default in order to scale up and check whether the revision can become healthy (see discussion here). A customer has the following scenario:

  • Creates a revision that comes up without a problem and then scales down to zero as there is no traffic.
  • Makes the revision image private, or removes it from their internal registry.
  • Issues a curl command that gets stuck (for some time), because no pod comes up due to the resulting image pull error.
  • The revision scales down to zero again, but no error is surfaced, or at least the user has no specific place to look for one.
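For reference, the steps above can be set up with a minimal Service manifest; this is a sketch only, and the service name and image reference are hypothetical placeholders:

```yaml
# Hypothetical Knative Service used to reproduce the scenario.
# After the first revision becomes Ready and scales to zero, make the
# image private (or delete it from the registry), then curl the
# service URL and observe that no pod can come up.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: image-pull-repro                           # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: registry.internal/example/app:v1  # made private/removed in step 2
```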

The UX should be much better. Although we have a debugging guide that lists a number of places to look for an error, it does not really help here: the error goes away, and several resources look ready because of the scale-down to zero.

I expected the deployment not to progress and to get stuck with something similar to the following (or at least for that to be available as an option, since a normal deployment would stabilize in that state), or for the error to be reported before scaling down:

MinimumReplicasUnavailable Deployment does not have minimum availability

Unfortunately, by default the code here cannot capture this due to scaling to zero, and the problem still occurs even if we lower the deadline to a reasonable value such as serving.knative.dev/progress-deadline: 30s (although setting this properly may depend on the app, which is a bit painful).
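For context, the progress-deadline annotation mentioned above is applied per revision, on the revision template; a sketch, with a hypothetical service name and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: image-pull-repro                           # hypothetical name
spec:
  template:
    metadata:
      annotations:
        # Fail the progress check after 30s instead of the default deadline
        serving.knative.dev/progress-deadline: "30s"
    spec:
      containers:
        - image: registry.internal/example/app:v1  # hypothetical image
```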

I think we could increase autoscaling.knative.dev/scale-down-delay, set minScale to 1, or lower the request timeout, but ideally I would expect Knative to report that the ksvc is not healthy in this scenario, although I understand this could be a transient error (the image pull issue could have been fixed in the meantime). There is also a timeout here, activationTimeoutBuffer, that could perhaps be made configurable. I don't see any logs for the revision not being activated.
Maybe we should mark it as unhealthy and, when a new request comes in, re-evaluate the ksvc status.
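The partial workarounds mentioned above can be sketched as revision-template annotations; the values here are illustrative only, not recommendations:

```yaml
# Possible (partial) workarounds, as annotations on the revision template.
spec:
  template:
    metadata:
      annotations:
        # Keep the last pod around longer, so the image pull error stays
        # visible on the deployment before scale-to-zero hides it
        autoscaling.knative.dev/scale-down-delay: "15m"
        # Or: never scale to zero, so the deployment stabilizes in the
        # failed state and reports it
        autoscaling.knative.dev/min-scale: "1"
```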

Actual Behavior

The behavior of the deployment is captured in the attached files; the number prefix on each filename represents time ordering. Everything appears to be fine, although the deployment never reached the desired replica count.

Steps to Reproduce the Problem

Described above.

cc @evankanderson @dprotaso @mattmoor @psschwei

Metadata

Labels

area/API, area/autoscale, kind/bug, triage/accepted
