restore fails with "waiting for another restore to finish" #887

Open
@Lobo75

Report

I found a community forum post describing the same issue, https://forums.percona.com/t/manually-restoring-multiple-times-is-not-working/27301, but the link to the Jira ticket in that thread is invalid.

I ran into the same error on Percona Operator 2.4.1 while restoring a database after a cluster failure. The first restore failed because the restore time I selected was invalid and no valid restore point could be found. Fixing the restore time turned out to be impossible: running kubectl delete on the restore YAML reported the restore as deleted, but the operator did not seem to register it. All further restore attempts, even under different names, failed exactly as described in the community forum. The sequence looked roughly like the sketch below.
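
A minimal sketch of that sequence, with illustrative file and resource names rather than the exact ones I used:

```sh
# Point-in-time restore with a bad restore target, then an attempt to back it out.
kubectl apply -f restore.yaml          # restore fails: no valid restore point for the chosen time
kubectl delete -f restore.yaml         # kubectl reports the restore resource as deleted
kubectl apply -f restore-fixed.yaml    # new restore CR, different name, corrected target time
# -> fails with "waiting for another restore to finish"
```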

I could not find any way to list which restores the operator believed were still running.
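
The closest I could get was listing the restore custom resources and dumping the cluster object, and as far as I can tell neither shows the operator's own notion of an in-progress restore. The resource names below are assumptions based on the v2 CRDs and may need adjusting:

```sh
# List the PerconaPGRestore custom resources in the namespace (CRD name assumed).
kubectl get perconapgrestore -n <namespace>

# Dump the cluster object and look for restore-related status or annotations.
kubectl get perconapgcluster <cluster-name> -n <namespace> -o yaml
```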

As a further test I deleted the cluster and re-created it with the same name. The Percona operator picked up the new cluster and kept trying to restart the failed restore, finally giving up after five more attempts. Even deleting the cluster does not signal the operator to drop any failed or in-progress restores.

There needs to be a way to list pending restores and completely delete them so that a new one can be started, something along the lines of the sketch below.
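
A rough sketch of the workflow I would expect (the resource kind and names are assumptions based on the v2 CRDs; the finalizer patch is only the generic Kubernetes workaround for a custom resource that will not go away, and it does not clear whatever state the operator keeps internally):

```sh
# Delete the stuck restore; if the object lingers because of a finalizer,
# clear the finalizers so the delete can complete.
kubectl delete perconapgrestore <restore-name> -n <namespace>
kubectl patch perconapgrestore <restore-name> -n <namespace> \
  --type merge -p '{"metadata":{"finalizers":[]}}'

# Only once the operator has truly forgotten the old restore should this start cleanly.
kubectl apply -f new-restore.yaml
```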

More about the problem

I would have expected that running kubectl delete on the restore YAML would cancel any further restore attempts.

Steps to reproduce

See the community forum post linked above.

Versions

  1. Kubernetes: 1.27.11
  2. Operator: 2.4.1
  3. Database: PostgreSQL 15

Anything else?

This is a very serious problem: the failed restore cannot be fixed, and the database stays down because of the restore attempt.

Metadata

Labels

bug (Something isn't working)
