Merged
4 changes: 3 additions & 1 deletion CHANGELOG.md
@@ -163,12 +163,14 @@ Version 4.x is JDK17 LTS bytecode compatible, with Docker and JUnit / direct Jav

* Features and fixes
* ListObjectVersions API returns "isLatest=true" if versioning is not enabled. (fixes #2481)
* Tags are now verified for correctness.
* Refactorings
* TBD
* README.md fixes, typos, wording, clarifications
* Version updates (deliverable dependencies)
* None
* Version updates (build dependencies)
* Bump kotlin.version from 2.1.21 to 2.2.0
* Bump github/codeql-action from 3.29.0 to 3.29.1
* Bump com.puppycrawl.tools:checkstyle from 10.25.0 to 10.26.0

## 4.5.0
63 changes: 31 additions & 32 deletions README.md
@@ -35,7 +35,7 @@
* [Start using Docker compose](#start-using-docker-compose)
* [Simple example](#simple-example)
* [Expanded example](#expanded-example)
* [Start using self-signed SSL certificate](#start-using-self-signed-ssl-certificate)
* [Start using a self-signed SSL certificate](#start-using-a-self-signed-ssl-certificate)
* [S3Mock Java](#s3mock-java)
* [Start using the JUnit4 Rule](#start-using-the-junit4-rule)
* [Start using the JUnit5 Extension](#start-using-the-junit5-extension)
@@ -63,6 +63,7 @@
* [Security](#security)
* [Contributing](#contributing)
* [Licensing](#licensing)
* [Powered by](#powered-by)
<!-- TOC -->

## S3Mock
@@ -221,7 +222,7 @@ S3Mock will accept presigned URLs, but it *ignores all parameters*.
For instance, S3Mock does not verify the HTTP verb that the presigned uri was created with, and it does not validate
whether the link is expired or not.

S3 SDKs can be used to create presigned URLs pointing to S3Mock if they're configured for path-style access. See the
"Usage..." section above for links to examples on how to use the SDK with presigned URLs.

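As a sketch of the SDK usage described above (assuming AWS SDK for Java v2 on the classpath and S3Mock listening on `localhost:9090`; the bucket name, key, and dummy credentials are illustrative), a presigned GET URL pointing at S3Mock could be created like this:

```java
import java.net.URI;
import java.time.Duration;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Configuration;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;

public class PresignSketch {
  public static void main(String[] args) {
    // Path-style access is required for S3Mock; credentials can be dummies.
    try (var presigner = S3Presigner.builder()
        .endpointOverride(URI.create("http://localhost:9090"))
        .region(Region.US_EAST_1)
        .credentialsProvider(StaticCredentialsProvider.create(
            AwsBasicCredentials.create("foo", "bar")))
        .serviceConfiguration(S3Configuration.builder()
            .pathStyleAccessEnabled(true)
            .build())
        .build()) {
      var presigned = presigner.presignGetObject(GetObjectPresignRequest.builder()
          .signatureDuration(Duration.ofMinutes(5))
          .getObjectRequest(GetObjectRequest.builder()
              .bucket("my-bucket")
              .key("my-key")
              .build())
          .build());
      System.out.println(presigned.url());
    }
  }
}
```

S3Mock will accept the resulting URL, but as noted above it ignores the signature parameters, including the expiry.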
#### Self-signed SSL certificate
@@ -327,9 +328,9 @@ The mock can be configured with the following environment variables:
- Legacy name: `retainFilesOnExit`
- Default: false
- `debug`: set to `true` to
enable [Spring Boot's debug output](https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.logging.console-output).
- `trace`: set to `true` to
enable [Spring Boot's trace output](https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.logging.console-output).

### S3Mock Docker

@@ -339,7 +340,7 @@ The container is lightweight, built on top of the official [Linux Alpine image](

If needed,
configure [memory](https://docs.docker.com/engine/reference/commandline/run/#specify-hard-limits-on-memory-available-to-containers--m---memory)
and [cpu](https://docs.docker.com/engine/reference/commandline/run/#options) limits for the S3Mock docker container.
and [cpu](https://docs.docker.com/engine/reference/commandline/run/#options) limits for the S3Mock Docker container.

The JVM will automatically use half the available memory.
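For example, hard limits could be set like this (the `512m` and `--cpus` values are illustrative; per the note above, the JVM inside would then use roughly half the granted memory):

```shell
# Cap the container at 512 MiB of memory and 2 CPUs.
docker run -m 512m --cpus 2 -p 9090:9090 -p 9191:9191 -t adobe/s3mock
```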

@@ -348,50 +349,45 @@ The JVM will automatically use half the available memory.
Starting on the command-line:

```shell
docker run -p 9090:9090 -p 9191:9191 -t adobe/s3mock
```

The port `9090` is for HTTP, port `9191` is for HTTPS.

Example with configuration via environment variables:

```shell
docker run -p 9090:9090 -p 9191:9191 -e COM_ADOBE_TESTING_S3MOCK_STORE_INITIAL_BUCKETS=test -e debug=true -t adobe/s3mock
```

#### Start using the Fabric8 Docker-Maven-Plugin

Our [integration tests](integration-tests) are using the Amazon S3 Client to verify the server functionality against the
S3Mock. During the Maven build, the Docker image is started using the [docker-maven-plugin](https://dmp.fabric8.io/) and
the corresponding ports are passed to the JUnit test through the `maven-failsafe-plugin`. See [
`BucketIT`](integration-tests/src/test/kotlin/com/adobe/testing/s3mock/its/BucketIT.kt) as an example on how it's used
in the code.
the corresponding ports are passed to the JUnit test through the `maven-failsafe-plugin`. See [`BucketIT`](integration-tests/src/test/kotlin/com/adobe/testing/s3mock/its/BucketIT.kt)
as an example of how it's used in the code.

This way, one can easily switch between calling the S3Mock or the real S3 endpoint and this doesn't add any additional
This way, one can easily switch between calling the S3Mock or the real S3 endpoint, and this doesn't add any additional
Java dependencies to the project.

#### Start using Testcontainers

The [
`S3MockContainer`](testsupport/testcontainers/src/main/java/com/adobe/testing/s3mock/testcontainers/S3MockContainer.java)
The [`S3MockContainer`](testsupport/testcontainers/src/main/java/com/adobe/testing/s3mock/testcontainers/S3MockContainer.java)
is a `Testcontainer` implementation that comes pre-configured exposing HTTP and HTTPS ports. Environment variables can
be set on startup.

The example [
`S3MockContainerJupiterTest`](testsupport/testcontainers/src/test/java/com/adobe/testing/s3mock/testcontainers/S3MockContainerJupiterTest.java)
demonstrates the usage with JUnit 5. The example [
`S3MockContainerManualTest`](testsupport/testcontainers/src/test/java/com/adobe/testing/s3mock/testcontainers/S3MockContainerManualTest.java)
demonstrates the usage with plain Java.
The example [`S3MockContainerJupiterTest`](testsupport/testcontainers/src/test/kotlin/com/adobe/testing/s3mock/testcontainers/S3MockContainerJupiterTest.kt)
demonstrates the usage with JUnit 5. The example [`S3MockContainerManualTest`](testsupport/testcontainers/src/test/kotlin/com/adobe/testing/s3mock/testcontainers/S3MockContainerManualTest.kt)
demonstrates the usage with plain Kotlin; Java usage is similar.

Testcontainers provides integrations for JUnit 4, JUnit 5 and Spock.
For more information, visit the [Testcontainers](https://www.testcontainers.org/) website.
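A JUnit 5 setup along the lines described above might look like this (a sketch only: it assumes the `s3mock-testcontainers` and Testcontainers JUnit Jupiter artifacts are on the test classpath, and the `getHttpEndpoint()` accessor shown here is taken from the `S3MockContainer` class linked above):

```java
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import com.adobe.testing.s3mock.testcontainers.S3MockContainer;

@Testcontainers
class S3MockContainerSketchTest {

  // Started once before the tests run; environment variables configure S3Mock.
  @Container
  private static final S3MockContainer S3_MOCK =
      new S3MockContainer("latest")
          .withEnv("COM_ADOBE_TESTING_S3MOCK_STORE_INITIAL_BUCKETS", "test");

  @Test
  void endpointIsExposed() {
    // Point an S3 client (configured for path-style access) at this endpoint.
    var endpoint = S3_MOCK.getHttpEndpoint();
    Assertions.assertNotNull(endpoint);
  }
}
```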

To use the [`S3MockContainer`](testsupport/testcontainers/src/main/java/com/adobe/testing/s3mock/testcontainers/S3MockContainer.java),
use the following Maven artifact in `test` scope:

```xml

<dependency>
<groupId>com.adobe.testing</groupId>
<artifactId>s3mock-testcontainers</artifactId>
@@ -430,7 +426,7 @@ docker compose down

##### Expanded example

Suppose we want to see what S3Mock is persisting, and look at the logs it generates in detail.
Suppose we want to see what S3Mock is persisting and look at the logs it generates in detail.

A local directory is needed, let's call it `locals3root`. This directory must be mounted as a volume into the Docker
container when it's started, and that mounted volume must then be configured as the `root` for S3Mock. Let's call the
@@ -512,7 +508,7 @@ $ ls locals3root/my-test-bucket
bucketMetadata.json
```

#### Start using self-signed SSL certificate
#### Start using a self-signed SSL certificate

S3Mock includes a self-signed SSL certificate:

@@ -572,13 +568,13 @@ the `S3Mock` during a JUnit test, classpaths of the tested application and of the
to unpredictable and undesired effects such as class conflicts or dependency version conflicts.
This is especially problematic if the tested application itself is a Spring (Boot) application, as both applications
will load configurations based on the availability of certain classes in the classpath, leading to unpredictable runtime
behaviour.
behavior.

_This is the opposite of what software engineers are trying to achieve when thoroughly testing code in continuous
integration..._

`S3Mock` dependencies are updated regularly, any update could break any number of projects.
**See also [issues labelled "dependency-problem"](https://github.com/adobe/S3Mock/issues?q=is%3Aissue+label%3Adependency-problem).**
**See also [issues labeled "dependency-problem"](https://github.com/adobe/S3Mock/issues?q=is%3Aissue+label%3Adependency-problem).**

**See also [the Java section below](#Java)**

@@ -605,11 +601,11 @@ The `S3MockExtension` can currently be used in two ways:
1. Declaratively using `@ExtendWith(S3MockExtension.class)` and by injecting a properly configured instance of
`AmazonS3` client and/or the started `S3MockApplication` to the tests.
See examples: [`S3MockExtensionDeclarativeTest`](testsupport/junit5/src/test/java/com/adobe/testing/s3mock/junit5/sdk1/S3MockExtensionDeclarativeTest.java) (for SDKv1)
or [`S3MockExtensionDeclarativeTest`](testsupport/junit5/src/test/java/com/adobe/testing/s3mock/junit5/sdk2/S3MockExtensionDeclarativeTest.java) (for SDKv2)
or [`S3MockExtensionDeclarativeTest`](testsupport/junit5/src/test/kotlin/com/adobe/testing/s3mock/junit5/sdk2/S3MockExtensionDeclarativeTest.kt) (for SDKv2)

2. Programmatically using `@RegisterExtension` and by creating and configuring the `S3MockExtension` using a _builder_.
See examples: [`S3MockExtensionProgrammaticTest`](testsupport/junit5/src/test/java/com/adobe/testing/s3mock/junit5/sdk1/S3MockExtensionProgrammaticTest.java) (for SDKv1)
or [`S3MockExtensionProgrammaticTest`](testsupport/junit5/src/test/java/com/adobe/testing/s3mock/junit5/sdk2/S3MockExtensionProgrammaticTest.java) (for SDKv2)
or [`S3MockExtensionProgrammaticTest`](testsupport/junit5/src/test/kotlin/com/adobe/testing/s3mock/junit5/sdk2/S3MockExtensionProgrammaticTest.kt) (for SDKv2)
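The declarative variant (option 1) can be sketched roughly as follows, assuming the `s3mock-junit5` artifact and SDK v1 are on the test classpath; the test class and bucket name are illustrative, and the linked example tests remain the authoritative reference:

```java
import com.adobe.testing.s3mock.junit5.S3MockExtension;
import com.amazonaws.services.s3.AmazonS3;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

// The extension starts S3Mock before the tests and injects a
// pre-configured client into test method parameters.
@ExtendWith(S3MockExtension.class)
class DeclarativeSketchTest {

  @Test
  void createBucket(AmazonS3 s3Client) {
    s3Client.createBucket("my-test-bucket");
  }
}
```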

To use the JUnit5 Extension, use the following Maven artifact in `test` scope:

@@ -624,9 +620,9 @@

#### Start using the TestNG Listener

The example [`S3MockListenerXMLConfigurationTest`](testsupport/testng/src/test/java/com/adobe/testing/s3mock/testng/S3MockListenerXmlConfigurationTest.java)
The example [`S3MockListenerXMLConfigurationTest`](testsupport/testng/src/test/kotlin/com/adobe/testing/s3mock/testng/S3MockListenerXmlConfigurationTest.kt)
demonstrates the usage of the `S3MockListener`, which can be configured as shown in [`testng.xml`](testsupport/testng/src/test/resources/testng.xml).
The listener bootstraps the S3Mock application before TestNG execution starts and shuts down the application just before the execution terminates.
Please refer to [`IExecutionListener`](https://github.com/testng-team/testng/blob/master/testng-core-api/src/main/java/org/testng/IExecutionListener.java)
in the TestNG API.

@@ -666,7 +662,7 @@ If the environment variable `COM_ADOBE_TESTING_S3MOCK_STORE_RETAIN_FILES_ON_EXIT`

### Root-Folder

S3Mock stores buckets and objects a root-folder.
S3Mock stores buckets and objects in a root-folder.

This folder is expected to be empty when S3Mock starts. See also FYI above.

@@ -825,8 +821,8 @@ Vulnerabilities may also be reported through the GitHub issue tracker.

## Security

S3Mock is not intended to be used in production environments. It is a mock server that is meant to be used in
development and testing environments only. It does not implement all security features of AWS S3, and should not be used
S3Mock is not intended to be used in production environments. It is a mock server meant to be used in
development and testing environments only. It does not implement all security features of AWS S3 and should not be used
as a replacement for AWS S3 in production.
It is implemented using [Spring Boot](https://github.com/spring-projects/spring-boot), which is a Java framework that is
designed to be secure by default.
@@ -838,3 +834,6 @@ Contributions are welcome! Read the [Contributing Guide](./.github/CONTRIBUTING.
## Licensing

This project is licensed under the Apache V2 License. See [LICENSE](LICENSE) for more information.

## Powered by
[![IntelliJ IDEA logo.](https://resources.jetbrains.com/storage/products/company/brand/logos/IntelliJ_IDEA.svg)](https://jb.gg/OpenSourceSupport)
@@ -554,6 +554,7 @@ public ResponseEntity<Void> putObjectTagging(
var bucket = bucketService.verifyBucketExists(bucketName);

var s3ObjectMetadata = objectService.verifyObjectExists(bucketName, key.key(), versionId);
objectService.verifyObjectTags(body.tagSet().tags());
objectService.setObjectTags(bucketName, key.key(), versionId, body.tagSet().tags());
return ResponseEntity
.ok()
@@ -673,8 +674,7 @@ public ResponseEntity<Retention> getObjectRetention(
@RequestParam(value = VERSION_ID, required = false) @Nullable String versionId) {
var bucket = bucketService.verifyBucketExists(bucketName);
bucketService.verifyBucketObjectLockEnabled(bucketName);
var s3ObjectMetadata = objectService.verifyObjectLockConfiguration(bucketName, key.key(),
versionId);
var s3ObjectMetadata = objectService.verifyObjectLockConfiguration(bucketName, key.key(), versionId);

return ResponseEntity
.ok()
@@ -43,6 +43,11 @@ public class S3Exception extends RuntimeException {
"The list of parts was not in ascending order. The parts list must be specified in "
+ "order by part number.");

public static final S3Exception INVALID_TAG =
new S3Exception(BAD_REQUEST.value(), "InvalidTag",
"Your request contains tag input that is not valid. For example, your request might contain "
+ "duplicate keys, keys or values that are too long, or system tags.");
> **Copilot AI (Jun 30, 2025):** The INVALID_TAG error message mentions duplicate keys and length issues but omits illegal-character failures. Consider updating it to reflect all validation rules.
>
> Suggested change:
> `+ "duplicate keys, keys or values that are too long, or system tags.");`
> `+ "duplicate keys, keys or values that are too long, system tags, or illegal characters.");`

public static S3Exception completeRequestMissingChecksum(String algorithm, Integer partNumber) {
return new S3Exception(BAD_REQUEST.value(), BAD_REQUEST_CODE,
"The upload was created using a " + algorithm + " checksum. "
@@ -19,6 +19,7 @@
import static com.adobe.testing.s3mock.S3Exception.BAD_REQUEST_CONTENT;
import static com.adobe.testing.s3mock.S3Exception.BAD_REQUEST_MD5;
import static com.adobe.testing.s3mock.S3Exception.INVALID_REQUEST_RETAIN_DATE;
import static com.adobe.testing.s3mock.S3Exception.INVALID_TAG;
import static com.adobe.testing.s3mock.S3Exception.NOT_FOUND_OBJECT_LOCK;
import static com.adobe.testing.s3mock.S3Exception.NOT_MODIFIED;
import static com.adobe.testing.s3mock.S3Exception.NO_SUCH_KEY;
@@ -49,8 +50,10 @@
import java.nio.file.Path;
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;
import org.jspecify.annotations.Nullable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -59,6 +62,14 @@ public class ObjectService extends ServiceBase {
static final String WILDCARD_ETAG = "\"*\"";
static final String WILDCARD = "*";
private static final Logger LOG = LoggerFactory.getLogger(ObjectService.class);
private static final Pattern TAG_ALLOWED_CHARS = Pattern.compile("[\\w+ \\-=.:/@]*");
private static final int MAX_ALLOWED_TAGS = 50;
private static final int MIN_ALLOWED_TAG_KEY_LENGTH = 1;
private static final int MAX_ALLOWED_TAG_KEY_LENGTH = 128;
private static final int MIN_ALLOWED_TAG_VALUE_LENGTH = 0;
private static final int MAX_ALLOWED_TAG_VALUE_LENGTH = 256;
private static final String DISALLOWED_TAG_KEY_PREFIX = "aws:";

private final BucketStore bucketStore;
private final ObjectStore objectStore;

@@ -175,6 +186,48 @@ public void setObjectTags(String bucketName, String key, @Nullable String versio
objectStore.storeObjectTags(bucketMetadata, uuid, versionId, tags);
}

public void verifyObjectTags(List<Tag> tags) {
> **Copilot AI (Jun 27, 2025):** No check for duplicate tag keys—AWS S3 disallows duplicate keys in a tag set. Consider validating uniqueness and throwing INVALID_TAG on duplicates.

> **Copilot AI (Jun 27, 2025):** The validation does not check for duplicate tag keys, which S3 disallows. Consider adding a set-based check to ensure each key is unique before proceeding.

> **Copilot AI (Jun 30, 2025):** The implementation does not prohibit system tag keys (e.g., those prefixed with aws:); AWS S3 disallows system tags, so add a check to reject keys starting with the reserved prefix.
if (tags.size() > MAX_ALLOWED_TAGS) {
throw INVALID_TAG;
}
verifyDuplicateTagKeys(tags);
for (var tag : tags) {
verifyTagKeyPrefix(tag.key());
verifyTagLength(MIN_ALLOWED_TAG_KEY_LENGTH, MAX_ALLOWED_TAG_KEY_LENGTH, tag.key());
verifyTagChars(tag.key());

verifyTagLength(MIN_ALLOWED_TAG_VALUE_LENGTH, MAX_ALLOWED_TAG_VALUE_LENGTH, tag.value());
verifyTagChars(tag.value());
}
}

private void verifyDuplicateTagKeys(List<Tag> tags) {
var tagKeys = new HashSet<String>();
for (var tag : tags) {
if (!tagKeys.add(tag.key())) {
throw INVALID_TAG;
}
}
}

private void verifyTagKeyPrefix(String tagKey) {
if (tagKey.startsWith(DISALLOWED_TAG_KEY_PREFIX)) {
throw INVALID_TAG;
}
}

private void verifyTagLength(int minLength, int maxLength, String tag) {
if (tag.length() < minLength || tag.length() > maxLength) {
throw INVALID_TAG;
}
}

private void verifyTagChars(String tag) {
if (!TAG_ALLOWED_CHARS.matcher(tag).matches()) {
throw INVALID_TAG;
}
}

public void setLegalHold(String bucketName, String key, @Nullable String versionId, LegalHold legalHold) {
var bucketMetadata = bucketStore.getBucketMetadata(bucketName);
var uuid = bucketMetadata.getID(key);
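The rules this hunk enforces (at most 50 tags, key length 1–128, value length up to 256, a restricted character set, the reserved `aws:` key prefix, and unique keys) can be sketched as a self-contained class. `TagValidator` is a hypothetical name for illustration, not part of S3Mock's API; the limits and pattern are copied from the `ObjectService` constants above.

```java
import java.util.HashSet;
import java.util.List;
import java.util.regex.Pattern;

// Stand-alone sketch of the tag validation rules added in this PR.
public class TagValidator {
  // Same character class as ObjectService.TAG_ALLOWED_CHARS.
  private static final Pattern ALLOWED = Pattern.compile("[\\w+ \\-=.:/@]*");
  private static final int MAX_TAGS = 50;
  private static final String RESERVED_PREFIX = "aws:";

  public static boolean isValidKey(String key) {
    return key.length() >= 1 && key.length() <= 128
        && !key.startsWith(RESERVED_PREFIX)
        && ALLOWED.matcher(key).matches();
  }

  public static boolean isValidValue(String value) {
    // Values may be empty but must stay within 256 allowed characters.
    return value.length() <= 256 && ALLOWED.matcher(value).matches();
  }

  // Validates a whole tag set: size limit, per-tag rules, and unique keys.
  // Tags are modeled as String[2] pairs {key, value} to stay dependency-free.
  public static boolean isValidTagSet(List<String[]> tags) {
    if (tags.size() > MAX_TAGS) {
      return false;
    }
    var keys = new HashSet<String>();
    for (var tag : tags) {
      if (!keys.add(tag[0]) || !isValidKey(tag[0]) || !isValidValue(tag[1])) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isValidKey("environment"));  // true
    System.out.println(isValidKey("aws:reserved")); // false: system tag prefix
    System.out.println(isValidValue("a,b"));        // false: ',' not allowed
  }
}
```

On any rule violation, the production code throws the new `INVALID_TAG` exception instead of returning `false`.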