
Large backup in AWS without setting multiPartChunkSize stops at 50Gi #188

@pando85


What steps did you take and what happened:
I tried to back up a ZFS volume larger than 50Gi and the backup failed with:

caused by: TotalPartsExceeded: exceeded total allowed configured MaxUploadParts (10000). Adjust PartSize to fit in this limit

What did you expect to happen:
With multiPartChunkSize not set, the part size should be calculated automatically, but it looks like the 5Mi default was used instead. At 5Mi per part, the 10,000-part limit caps an upload at about 48.8Gi, which matches exactly where the backup stops.
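
For reference, here is a minimal sketch of that arithmetic, assuming the upload goes through the aws-sdk-go v1 s3manager (the apparent source of the MaxUploadParts error above); the 200Gi volume size is just a hypothetical example:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	// At the SDK default part size (5Mi), a multipart upload tops out at
	// DefaultUploadPartSize * MaxUploadParts ~= 48.8Gi, which is why the
	// backup dies right around the 50Gi mark.
	capBytes := s3manager.DefaultUploadPartSize * int64(s3manager.MaxUploadParts)
	fmt.Printf("cap at default part size: %.1f GiB\n", float64(capBytes)/(1<<30))

	// Smallest part size that fits a hypothetical 200Gi volume in 10000 parts.
	volumeBytes := int64(200) << 30
	minPart := (volumeBytes + int64(s3manager.MaxUploadParts) - 1) / int64(s3manager.MaxUploadParts)
	fmt.Printf("minimum part size for 200Gi: %.1f MiB\n", float64(minPart)/(1<<20))
}
```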

Anything else you would like to add:
I worked around it by setting multiPartChunkSize explicitly to a larger value.
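
For anyone else who hits this, the workaround looks roughly like the sketch below. This is hedged: multiPartChunkSize under spec.config is the key named in this issue, and I am assuming it is read from the OpenEBS velero-plugin's VolumeSnapshotLocation; the provider string, names, bucket, and region are placeholders to adapt to your setup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default                          # placeholder name
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore  # assumed provider for ZFS volumes
  config:
    bucket: my-backup-bucket             # placeholder
    provider: aws
    region: us-east-1                    # placeholder
    # Explicit part size; at 128Mi the 10000-part cap allows ~1.2Ti per object.
    multiPartChunkSize: 128Mi
```

As a rule of thumb, pick a part size comfortably above (volume size / 10000); 128Mi raises the ceiling to roughly 1.2Ti.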

Environment:

  • Velero version (use velero version):
Client:
	Version: v1.13.0
	Git commit: 76670e940c52880a18dbbc59e3cbee7b94cd3352
Server:
	Version: v1.13.0
  • Velero features (use velero client config get features):
features: <NOT SET>
  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
