Description
Describe the bug
This is a re-opened thread from #4746 (comment).
I have attached the packet details of the PUT Object call from the SDK to S3 (LocalStack). The SDK always sends `Content-Encoding: aws-chunked`, which causes decompression of the result to fail. I have tried explicitly setting the `Content-Length` to a sufficiently high number, but in vain. This is only reproducible with LocalStack, not with real AWS.
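For context, `aws-chunked` wraps the payload in a framing of the form `<hex-size>;chunk-signature=<sig>\r\n<data>\r\n`, terminated by a zero-length chunk. If a server stores that framing verbatim instead of stripping it, the stored body is no longer valid gzip, which would match the decompression failure seen here. A minimal, stdlib-only sketch of stripping the framing (the payload and signature values below are made up for illustration, not taken from the capture):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class AwsChunkedDecoder {
    // Strips aws-chunked framing: each chunk is "<hex-size>;chunk-signature=<sig>\r\n<bytes>\r\n",
    // ending with a zero-size chunk.
    public static byte[] decode(byte[] framed) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int pos = 0;
        while (pos < framed.length) {
            int lineEnd = indexOfCrlf(framed, pos);
            String header = new String(framed, pos, lineEnd - pos, StandardCharsets.US_ASCII);
            int size = Integer.parseInt(header.split(";")[0], 16);
            pos = lineEnd + 2;          // skip CRLF after the chunk header
            if (size == 0) {
                break;                  // final zero-length chunk
            }
            out.write(framed, pos, size);
            pos += size + 2;            // skip chunk data and trailing CRLF
        }
        return out.toByteArray();
    }

    private static int indexOfCrlf(byte[] buf, int from) {
        for (int i = from; i < buf.length - 1; i++) {
            if (buf[i] == '\r' && buf[i + 1] == '\n') {
                return i;
            }
        }
        throw new IllegalArgumentException("missing CRLF");
    }

    public static void main(String[] args) {
        // Hypothetical framed body carrying the 11-byte (0xb) payload "hello world".
        String framed = "b;chunk-signature=deadbeef\r\nhello world\r\n"
                + "0;chunk-signature=deadbeef\r\n\r\n";
        byte[] payload = decode(framed.getBytes(StandardCharsets.US_ASCII));
        System.out.println(new String(payload, StandardCharsets.US_ASCII)); // hello world
    }
}
```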

Regression Issue
- [ ] Select this option if this issue appears to be a regression.
Expected Behavior
`Content-Encoding` should not always be `aws-chunked`.
Current Behavior
`Content-Encoding` is always `aws-chunked`.
Reproduction Steps
I have used the code below to upload the file:
```java
S3AsyncClient buildS3Client() {
    S3CrtAsyncClientBuilder builder = S3AsyncClient.crtBuilder()
            .credentialsProvider(getAwsCredentialsProvider())
            .region(Region.of(region));
    Optional<String> s3Endpoint = getLocalStackEndpoint();
    s3Endpoint.ifPresent(s -> {
        builder.endpointOverride(URI.create("https://s3.localhost.localstack.cloud:4566"));
        builder.forcePathStyle(true);
        builder.minimumPartSizeInBytes((long) (8 * 1024 * 1024));
    });
    return builder.build();
}

s3Client = buildS3Client();
s3TransferManager = S3TransferManager.builder()
        .s3Client(s3Client)
        .build();
```
The above snippet initialises the S3 client. The `minimumPartSizeInBytes` value was chosen by trial and error.
```java
putObjectRequest = PutObjectRequest.builder()
        .bucket(bucket)
        .key(key)
        .contentEncoding(GZIP_ENCODING)
        .contentType(contentType)
        .contentLength((long) (8 * 1024 * 1024))
        .tagging(tagging)
        .build();

uploadRequest = UploadRequest.builder()
        .putObjectRequest(putObjectRequest)
        .requestBody(AsyncRequestBody.fromBytes(bytes))
        .build();

s3TransferManager.upload(uploadRequest).completionFuture().join();
```
This code does complete the transfer.
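Since the request sets `Content-Encoding` to gzip, the `bytes` passed to `AsyncRequestBody.fromBytes` are assumed to already be gzip-compressed, and `contentLength` should ideally match the compressed size rather than the fixed 8 MiB used above. A minimal stdlib-only sketch of preparing such a payload (the class and method names here are illustrative, not from the report):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipPayload {
    // Gzip-compresses the raw bytes to be uploaded with Content-Encoding: gzip.
    public static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = gzip("example payload".getBytes(StandardCharsets.UTF_8));
        // The accurate content length to set on the PutObjectRequest:
        System.out.println(bytes.length);
        // Gzip magic bytes confirm the framing of the stored object.
        System.out.printf("%02x%02x%n", bytes[0], bytes[1] & 0xff); // prints 1f8b
    }
}
```

A stored object that begins with `1f 8b` decompresses cleanly; one that begins with ASCII chunk headers (as with un-stripped `aws-chunked` framing) does not.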
Possible Solution
No response
Additional Information/Context
No response
AWS Java SDK version used
2.29.15
JDK version used
17.0.13
Operating System and version
Ubuntu 22.04.5 LTS, Linux 6.10.14-linuxkit, Inside Docker 27.4.0