Description
Describe the bug
This is a re-opened thread from #4746 (comment).
I have attached the packet details of the PUT Object call from SDK to S3 (Localstack).
The SDK always sends Content-Encoding as aws-chunked. This causes the downloaded object to fail to decompress. I have tried to explicitly set the Content-Length to a sufficiently high number, but in vain. This is only reproducible with LocalStack and not with the real AWS S3.
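For reference, the captured PUT request carries the dual encoding (also visible in the reproduction logs later in this thread):

Content-Encoding: gzip,aws-chunked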

Regression Issue
- Select this option if this issue appears to be a regression.
Expected Behavior
Content-encoding should not always be aws-chunked.
Current Behavior
Content-encoding is always aws-chunked.
Reproduction Steps
I have used the code below to upload the file:
S3AsyncClient buildS3Client() {
    S3CrtAsyncClientBuilder builder = S3AsyncClient.crtBuilder()
            .credentialsProvider(getAwsCredentialsProvider())
            .region(Region.of(region));
    Optional<String> s3Endpoint = getLocalStackEndpoint();
    s3Endpoint.ifPresent(s -> {
        builder.endpointOverride(URI.create("https://s3.localhost.localstack.cloud:4566"));
        builder.forcePathStyle(true);
        builder.minimumPartSizeInBytes(8L * 1024 * 1024);
    });
    return builder.build();
}

s3Client = buildS3Client();
s3TransferManager = S3TransferManager.builder()
        .s3Client(s3Client)
        .build();
The above snippet initialises the S3Client. I arrived at the minimumPartSizeInBytes value by trial and error.
putObjectRequest = PutObjectRequest.builder()
        .bucket(bucket)
        .key(key)
        .contentEncoding(GZIP_ENCODING)
        .contentType(contentType)
        .contentLength(8L * 1024 * 1024)
        .tagging(tagging)
        .build();
uploadRequest = UploadRequest.builder()
        .putObjectRequest(putObjectRequest)
        .requestBody(AsyncRequestBody.fromBytes(bytes))
        .build();
s3TransferManager.upload(uploadRequest).completionFuture().join();
This code does complete the transfer successfully.
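To see what actually got stored, the object metadata can be read back; a minimal check using the same s3Client (added here for illustration):

import software.amazon.awssdk.services.s3.model.HeadObjectResponse;

HeadObjectResponse head = s3Client.headObject(b -> b.bucket(bucket).key(key)).join();
// On real AWS S3 this prints "gzip"; the server strips the aws-chunked marker.
System.out.println(head.contentEncoding());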
Possible Solution
No response
Additional Information/Context
No response
AWS Java SDK version used
2.29.15
JDK version used
17.0.13
Operating System and version
Ubuntu 22.04.5 LTS, Linux 6.10.14-linuxkit, Inside Docker 27.4.0
Activity
bhoradc commented on Jan 3, 2025
Hi @ngudbhav,
Thank you for reporting the issue. I tried to reproduce this scenario but found the behavior to be consistent between AWS S3 and LocalStack. Both environments send the same content-encoding (gzip,aws-chunked).
Could you please go through the reproduction steps below and let me know of any deviation that may result in your reported behavior?
Attachments (collapsed): pom.xml; AWS S3 behavior (code snippet, CRT debug log); LocalStack behavior (code snippet, CRT debug log); LocalStack version.
The only notable difference I see is in the networking setup: your environment uses localstack:4566 (Docker's internal network) whereas I am running it on localhost:4566. But I believe this difference should not affect the content-encoding behavior.
Regards,
Chaitanya
ngudbhav commented on Jan 4, 2025
Hi @bhoradc
Thanks a lot for the quick reply.
Is there any way I can disable the aws-chunked content encoding? I have tried various ways, but downloading the file requires manual decompression.
As you can see in the screenshot, even Wireshark displays an error that decompression failed. I have tried using a browser and the Go AWS client, but the automatic decompression does not work. However, if I explicitly write code to decompress the GZIP file, I get the expected contents back.
I am not sure if the dual headers are the cause of this behaviour.
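For reference, the manual decompression I ended up writing looks roughly like this (helper name is mine):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

// Unwrap the gzip framing that the clients fail to strip automatically.
static byte[] gunzip(byte[] raw) throws IOException {
    try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(raw))) {
        return in.readAllBytes();
    }
}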
bhoradc commented on Jan 7, 2025
Hi @ngudbhav,
Currently, I don’t see the CRT builder having any support for disabling chunked encoding through signer parameters or configuration settings, similar to what the standard S3 client builders offer.
However, I don't see this as a regression from #5043. The results I shared in my previous comment demonstrate the expected behavior for dual content-encoding with the Java SDK.
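For comparison, the standard (non-CRT) client exposes such a toggle through S3Configuration; a minimal sketch:

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;

// Standard synchronous client with aws-chunked payload signing disabled;
// the CRT builder has no equivalent option.
S3Client standardClient = S3Client.builder()
        .serviceConfiguration(S3Configuration.builder()
                .chunkedEncodingEnabled(false)
                .build())
        .build();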
Regards,
Chaitanya
ngudbhav commented on Jan 8, 2025
Thanks a lot for your reply.
I understand that adding support to the CRT builder may not be in the pipeline. Is this something I can pick up? Our development experience is blocked because the browser cannot decompress the JSONs and CSVs from LocalStack's S3.
Also, can you please help me understand why the clients cannot decompress the server response? Maybe this is something that can be fixed without adding that support.
Thank you
DmitriyMusatkin commented on Jan 16, 2025
I don't think it's really a CRT issue. aws-chunked is a fairly old S3 protocol for sending the payload in chunks and supporting trailing headers. It is used by clients to compute the checksum as data is streamed and to send it in a trailing header. The server should remove aws-chunked from Content-Encoding after interpreting the chunks, but it looks like LocalStack has limited support for that and might not do it the same way S3 does.
The Java transfer manager calculates a checksum by default, so aws-chunked has been in use since launch. It might be possible to disable checksums or to provide a precomputed checksum for the payload to work around this.
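A sketch of both workarounds, assuming the CRT client honors them on this SDK version (not verified here):

import java.security.MessageDigest;
import java.util.Base64;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Option 1: turn off the CRT client's automatic checksum handling.
S3AsyncClient s3 = S3AsyncClient.crtBuilder()
        .checksumValidationEnabled(false)
        .build();

// Option 2: precompute the checksum so no trailing header (and no aws-chunked
// framing for it) is required. Handle NoSuchAlgorithmException as appropriate.
byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);
PutObjectRequest request = PutObjectRequest.builder()
        .bucket(bucket)
        .key(key)
        .checksumSHA256(Base64.getEncoder().encodeToString(digest))
        .build();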
ngudbhav commented on Jan 24, 2025
Hi @DmitriyMusatkin, @bhoradc
The LocalStack team has resolved this issue following @DmitriyMusatkin's comment. Thanks a lot for the guidance.
Please let me know if you would like to keep this issue open.
github-actions commented on Jan 24, 2025
This issue is now closed. Comments on closed issues are hard for our team to see.
If you need more assistance, please open a new issue that references this one.