Implement Minimal @uppy/aws-s3 plugin #6131
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 52aee89e10
```ts
if (signal.aborted) {
  throw new Error('Upload aborted')
}
```
Preserve pause reason to avoid aborting uploads
Pausing calls abortController.abort(pausingUploadReason), but the abort checks throw new Error('Upload aborted') without a cause, so #onError can’t recognize a pause and instead aborts the multipart upload and emits upload-error. This means a user pausing mid-upload (or before the next loop iteration) will inadvertently cancel the upload on S3 and won’t be able to resume. Consider throwing with cause: pausingUploadReason (or using signal.reason) so pause paths are ignored as intended.
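A minimal sketch of the suggested fix, assuming the `pausingUploadReason` sentinel and `#onError` handler introduced in this PR (the sentinel below is a stand-in, not the PR's actual value):

```ts
// Sketch: re-throw aborts with the abort reason attached, so the error handler
// can distinguish a pause from a real failure.
const pausingUploadReason = Symbol('pausing upload') // stand-in for the PR's sentinel

function throwIfAborted(signal: AbortSignal): void {
  if (signal.aborted) {
    throw new Error('Upload aborted', { cause: signal.reason ?? pausingUploadReason })
  }
}

// In #onError (sketch): ignore pause-driven aborts instead of aborting the
// multipart upload and emitting upload-error.
function isPause(err: unknown): boolean {
  return err instanceof Error && err.cause === pausingUploadReason
}
```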
```ts
#createCompanionSigner(companionUrl: string): IT.signRequestFn {
  return async (request) => {
    const response = await fetch(`${companionUrl}/s3/sign`, {
      method: 'POST',
```
Use the presign endpoint for new signing payloads
The companion signer posts the new {method, key, uploadId, partNumber} payload to /s3/sign, but in this repo /s3/sign expects filename/contentType and only generates a simple PUT URL (see examples/aws-nodejs/index.js lines 128–145). That mismatch will return a 400 for the new payload and breaks multipart signing entirely (it can even upload to the wrong key), so endpoint mode won't work with the provided server/example unless it targets the new /s3/presign endpoint or the server contract is updated.
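A hedged sketch of the client-side resolution: point the signer at the `/s3/presign` route added in this PR and keep the same payload. The response shape `{ url }` is taken from the example server; the exact return type of `IT.signRequestFn` is assumed here.

```ts
// Sketch only: companion signer targeting the new presign route.
const createCompanionSigner =
  (companionUrl: string) =>
  async (request: {
    method: string
    key: string
    uploadId?: string
    partNumber?: number
  }): Promise<{ url: string }> => {
    const response = await fetch(`${companionUrl}/s3/presign`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(request),
    })
    if (!response.ok) {
      throw new Error(`Presign request failed with status ${response.status}`)
    }
    return response.json()
  }
```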
Pull request overview
This PR introduces a rewritten version of the AWS S3 upload plugin that shifts from server-signed requests to a pre-signed URL approach. The implementation includes a new S3mini client for direct S3 operations and a MultipartUploader class for orchestrating uploads, supporting both simple putObject and multipart uploads with pause/resume/abort functionality.
Changes:
- Refactored signature handling from signed headers to pre-signed URLs with query string authentication
- Implemented new S3mini client with pre-signed URL generation and credential caching
- Added MultipartUploader class for handling upload orchestration with progress tracking
- Created new test example page and server endpoint for testing the rewritten plugin
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 21 comments.
Summary per file:
| File | Description |
|---|---|
| packages/@uppy/aws-s3/src/s3-client/types.ts | Updated type definitions from signed requests to pre-signed URL approach, removed SSEC headers support |
| packages/@uppy/aws-s3/src/s3-client/signer.ts | Converted SigV4 signer to generate pre-signed URLs instead of authorization headers |
| packages/@uppy/aws-s3/src/s3-client/S3.ts | Modified S3 client to use pre-signed URLs for all operations, simplified request flow |
| packages/@uppy/aws-s3/src/index.ts | New plugin implementation with configuration validation and upload orchestration |
| packages/@uppy/aws-s3/src/MultipartUploader.ts | New class handling upload chunking, progress tracking, and pause/resume functionality |
| packages/@uppy/aws-s3/tests/index.test.ts | Added unit tests for plugin configuration and basic upload scenarios |
| packages/@uppy/aws-s3/tests/s3-client/minio.test.js | Updated tests to use pre-signed URL approach, removed checksum validation tests |
| packages/@uppy/aws-s3/tests/s3-client/_shared.test.js | Updated test helper to pass endpoint to presigner |
| examples/aws-nodejs/index.js | Added /s3/presign endpoint for generating pre-signed URLs |
| examples/aws-nodejs/public/rewrite-test.html | New test page demonstrating both STS and signRequest approaches |
Comments suppressed due to low confidence (1)
packages/@uppy/aws-s3/src/s3-client/S3.ts:105
- The credential caching doesn't consider the expiration field from TemporaryCredentials. If credentials have an expiration time, they should be automatically refreshed before they expire. Currently, cached credentials are only cleared on ExpiredToken/InvalidAccessKeyId errors, which means there will be at least one failed request before credentials are refreshed.
```ts
private async _getCachedCredentials(): Promise<IT.CredentialsResponse> {
  // Return Cached Credentials if available
  if (this.cachedCredentials != null) {
    return this.cachedCredentials
  }
  // Cache the promise so concurrent calls wait for the same fetch
  if (this.cachedCredentialsPromise == null) {
    this.cachedCredentialsPromise = this.getCredentials!({})
      .then((creds) => {
        this.cachedCredentials = creds
        return creds
      })
      .finally(() => {
        // Clear promise cache after resolution to allow future retries
        this.cachedCredentialsPromise = undefined
      })
  }
  return this.cachedCredentialsPromise
}
```
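A hedged sketch of the proactive refresh the comment asks for; the `Expiration` field name and the one-minute margin are assumptions, not the plugin's actual contract:

```ts
// Sketch only: treat cached credentials as stale shortly before they expire,
// so a fresh set is fetched instead of waiting for an ExpiredToken error.
const EXPIRY_MARGIN_MS = 60_000 // refresh one minute early (assumed margin)

function credentialsStillValid(creds: { Expiration?: string | Date }): boolean {
  if (creds.Expiration == null) return true // static credentials never expire
  return new Date(creds.Expiration).getTime() - Date.now() > EXPIRY_MARGIN_MS
}

// Inside _getCachedCredentials (sketch):
// if (this.cachedCredentials != null && credentialsStillValid(this.cachedCredentials)) {
//   return this.cachedCredentials
// }
// this.cachedCredentials = undefined // stale: fall through to a fresh fetch
```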
```ts
// Build the canonical path (must be encoded for signing)
let canonicalPath = url.pathname
if (encodedKey && encodedKey.length > 0) {
  canonicalPath =
    canonicalPath === '/'
      ? `/${encodedKey.replace(/^\/+/, '')}`
      : `${canonicalPath}/${encodedKey.replace(/^\/+/, '')}`
```
Copilot AI (Jan 11, 2026)
The canonical path building logic has a potential issue when the endpoint already has a path component. If url.pathname is not '/', the key is appended after it. However, if the key starts with '/', the logic attempts to strip leading slashes, but this might not work correctly in all cases. For example, if url.pathname is '/bucket' and key is '/object.txt', the result would be '/bucket/object.txt', but if key is 'object.txt', it would also be '/bucket/object.txt'. This seems correct but should be tested with various endpoint formats.
Suggested change:

```diff
-// Build the canonical path (must be encoded for signing)
-let canonicalPath = url.pathname
-if (encodedKey && encodedKey.length > 0) {
-  canonicalPath =
-    canonicalPath === '/'
-      ? `/${encodedKey.replace(/^\/+/, '')}`
-      : `${canonicalPath}/${encodedKey.replace(/^\/+/, '')}`
+// Normalize base path to avoid double slashes and ensure leading slash
+let basePath = url.pathname || '/'
+if (!basePath.startsWith('/')) {
+  basePath = `/${basePath}`
+}
+if (basePath !== '/') {
+  basePath = basePath.replace(/\/+$/u, '')
+}
+// Build the canonical path (must be encoded for signing)
+let canonicalPath = basePath
+if (encodedKey && encodedKey.length > 0) {
+  const sanitizedKey = encodedKey.replace(/^\/+/u, '')
+  canonicalPath =
+    canonicalPath === '/'
+      ? `/${sanitizedKey}`
+      : `${canonicalPath}/${sanitizedKey}`
```
```js
app.post('/s3/presign', async (req, res, next) => {
  try {
    const { method, key, uploadId, partNumber } = req.body
    getS3Client() // Ensure client is initialized

    if (!method || !key) {
      return res.status(400).json({ error: 'method and key are required' })
    }

    const bucket = process.env.COMPANION_AWS_BUCKET
    let command

    // Determine which command to use based on method and params
    if (method === 'PUT' && uploadId && partNumber) {
      // UploadPart
      command = new UploadPartCommand({
        Bucket: bucket,
        Key: key,
        UploadId: uploadId,
        PartNumber: parseInt(partNumber, 10),
      })
    } else if (method === 'PUT') {
      // PutObject (simple upload)
      command = new PutObjectCommand({
        Bucket: bucket,
        Key: key,
      })
    } else if (method === 'POST' && !uploadId) {
      // CreateMultipartUpload
      command = new CreateMultipartUploadCommand({
        Bucket: bucket,
        Key: key,
      })
    } else if (method === 'POST' && uploadId) {
      // CompleteMultipartUpload
      command = new CompleteMultipartUploadCommand({
        Bucket: bucket,
        Key: key,
        UploadId: uploadId,
        // Note: parts are sent in the request body, not in the presigned URL
      })
    } else if (method === 'DELETE' && uploadId) {
      // AbortMultipartUpload
      command = new AbortMultipartUploadCommand({
        Bucket: bucket,
        Key: key,
        UploadId: uploadId,
      })
    } else if (method === 'GET' && uploadId) {
      // ListParts
      command = new ListPartsCommand({
        Bucket: bucket,
        Key: key,
        UploadId: uploadId,
      })
    } else {
      return res.status(400).json({ error: 'Unsupported operation' })
    }

    const url = await getSignedUrl(s3Client, command, { expiresIn: 900 })

    res.setHeader('Access-Control-Allow-Origin', accessControlAllowOrigin)
    res.json({ url })
  } catch (err) {
    next(err)
  }
})
```
Copilot AI (Jan 11, 2026)
The Companion signer endpoint at '/s3/presign' doesn't appear to have any authentication or authorization checks. This means anyone who can reach this endpoint can generate pre-signed URLs for any S3 operation on the configured bucket. This is a significant security vulnerability. The endpoint should verify that the request is authorized (e.g., check session, API key, etc.) before generating pre-signed URLs.
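A hedged sketch of one possible gate for the route; the header name, environment variable, and middleware wiring are placeholders, not part of this PR, and the real mechanism (session, JWT, API key) depends on the application:

```ts
import type { RequestHandler } from 'express'

// Hypothetical guard for the presign route.
const requireUploadAuth: RequestHandler = (req, res, next) => {
  const apiKey = req.get('x-upload-api-key') // hypothetical header name
  if (!apiKey || apiKey !== process.env.UPLOAD_API_KEY) {
    res.status(401).json({ error: 'Unauthorized' })
    return
  }
  next()
}

// Usage (route path from this PR; adding the middleware is the suggestion):
// app.post('/s3/presign', requireUploadAuth, presignHandler)
```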
```js
const { method, key, uploadId, partNumber } = req.body
getS3Client() // Ensure client is initialized

if (!method || !key) {
  return res.status(400).json({ error: 'method and key are required' })
```
Copilot AI (Jan 11, 2026)
The presign endpoint doesn't validate the key parameter for path traversal or other malicious patterns. An attacker could potentially request pre-signed URLs for keys like '../../../sensitive-file' or other patterns that might escape intended directory boundaries. Consider adding validation to ensure keys conform to expected patterns and don't contain suspicious path components.
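A hedged sketch of such a check; the "uploads/" prefix is an assumed convention for this example server, not part of the PR:

```ts
// Sketch only: validate the requested key before presigning.
function isSafeKey(key: unknown): key is string {
  return (
    typeof key === 'string' &&
    key.length > 0 &&
    !key.startsWith('/') &&
    !key.includes('\\') &&
    !key.split('/').some((segment) => segment === '' || segment === '.' || segment === '..') &&
    key.startsWith('uploads/') // assumed allowed prefix
  )
}

// In the /s3/presign handler, before building a command:
// if (!isSafeKey(key)) return res.status(400).json({ error: 'invalid key' })
```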
Copilot AI (Jan 11, 2026)
The shouldUseMultipart option has a type definition issue. The default value is defined as a function but cast to true type using as any as true. This creates confusion between the runtime behavior (function) and the type (boolean true). The type definition should properly reflect that the default is a function, not true.
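A minimal sketch of the typing the comment suggests, with assumed names (`FileLike` stands in for Uppy's file type, and the 100 MB cutoff is the default mentioned elsewhere in this review):

```ts
// Express the default as the function type it really is, rather than casting
// a function to the literal type `true`.
type FileLike = { size: number | null }
type ShouldUseMultipart = boolean | ((file: FileLike) => boolean)

const MULTIPART_THRESHOLD_BYTES = 100 * 1024 * 1024 // assumed default cutoff

const defaultShouldUseMultipart: ShouldUseMultipart = (file) =>
  (file.size ?? 0) > MULTIPART_THRESHOLD_BYTES
```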
```js
window.UPPY_S3_BUCKET = "${process.env.COMPANION_AWS_BUCKET}";
window.UPPY_S3_REGION = "${process.env.COMPANION_AWS_REGION}";
```
Copilot AI (Jan 11, 2026)
The HTML injects environment variables directly into the page without any sanitization. While COMPANION_AWS_BUCKET and COMPANION_AWS_REGION are typically safe, if these values contain special characters or malicious content, it could lead to XSS vulnerabilities. Consider using JSON.stringify() or proper HTML escaping when injecting values.
Suggested change:

```diff
-window.UPPY_S3_BUCKET = "${process.env.COMPANION_AWS_BUCKET}";
-window.UPPY_S3_REGION = "${process.env.COMPANION_AWS_REGION}";
+window.UPPY_S3_BUCKET = ${JSON.stringify(process.env.COMPANION_AWS_BUCKET)};
+window.UPPY_S3_REGION = ${JSON.stringify(process.env.COMPANION_AWS_REGION)};
```
```ts
if (typeof shouldUseMultipart === 'function') {
  return shouldUseMultipart(file)
}
if (typeof shouldUseMultipart === 'boolean') {
```
Copilot AI (Jan 11, 2026)
The comment says the default matches 'Companion's DEFAULT behaviour', but this may not be accurate. Companion typically uses 'uploads/{timestamp}-{randomId}/{filename}' format. This discrepancy could lead to confusion or incompatibility issues when switching between Companion and direct S3 uploads.
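For reference, a hedged sketch of a key in the format the comment attributes to Companion ('uploads/{timestamp}-{randomId}/{filename}'); the random-id helper is purely illustrative:

```ts
// Illustrative only: build a Companion-style object key.
const companionStyleKey = (filename: string): string => {
  const randomId = Math.random().toString(36).slice(2, 10)
  return `uploads/${Date.now()}-${randomId}/${filename}`
}

// e.g. companionStyleKey('photo.jpg') -> 'uploads/1700000000000-ab12cd34/photo.jpg'
```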
```ts
await this.#s3Client.putObject(
  this.#key,
  this.#data,
  this.#file.type || 'application/octet-stream',
)

// Report progress
this.#chunkState[0].uploaded = this.#data.size
```
Copilot AI (Jan 11, 2026)
In the simple upload path, the entire file is sent via putObject, but there's no progress tracking during the actual HTTP request. The progress is only updated after the upload completes. This means for large files (up to 100MB by default), users won't see any progress until the upload finishes. Consider implementing XHR-based uploads or fetch with readable streams to provide incremental progress updates.
Suggested change:

```diff
-await this.#s3Client.putObject(
-  this.#key,
-  this.#data,
-  this.#file.type || 'application/octet-stream',
-)
-// Report progress
-this.#chunkState[0].uploaded = this.#data.size
+const bytesTotal = this.#data.size
+const chunkState = this.#chunkState[0]
+const initialUploaded = typeof chunkState.uploaded === 'number' ? chunkState.uploaded : 0
+// Provide incremental progress updates during the simple upload.
+// Since we do not have byte-level progress from the underlying client,
+// we estimate progress over time and ensure we never report more than the
+// total size. The final, exact progress is still reported after putObject
+// completes.
+const startTime = Date.now()
+// A conservative estimate of how long a simple upload might take; the
+// actual duration may differ, but this keeps the UI responsive.
+const ESTIMATED_UPLOAD_DURATION_MS = 30_000
+const PROGRESS_TICK_MS = 500
+let progressTimer: ReturnType<typeof setInterval> | undefined
+const startProgressTimer = () => {
+  if (progressTimer != null) return
+  progressTimer = setInterval(() => {
+    // If the upload was aborted, do not continue to emit progress.
+    if (signal.aborted) {
+      if (progressTimer != null) {
+        clearInterval(progressTimer)
+        progressTimer = undefined
+      }
+      return
+    }
+    const elapsed = Date.now() - startTime
+    const ratio = Math.max(0, Math.min(1, elapsed / ESTIMATED_UPLOAD_DURATION_MS))
+    const estimatedUploaded = initialUploaded + (bytesTotal - initialUploaded) * ratio
+    // Never report more than the total size.
+    const nextUploaded = Math.min(bytesTotal, Math.floor(estimatedUploaded))
+    if (nextUploaded > chunkState.uploaded) {
+      chunkState.uploaded = nextUploaded
+      this.#onProgress()
+    }
+  }, PROGRESS_TICK_MS)
+}
+const stopProgressTimer = () => {
+  if (progressTimer != null) {
+    clearInterval(progressTimer)
+    progressTimer = undefined
+  }
+}
+startProgressTimer()
+try {
+  await this.#s3Client.putObject(
+    this.#key,
+    this.#data,
+    this.#file.type || 'application/octet-stream',
+  )
+} finally {
+  stopProgressTimer()
+}
+// Report final, exact progress
+chunkState.uploaded = bytesTotal
```
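Beyond the time-based estimate suggested above, a hedged sketch of true byte-level progress using XMLHttpRequest upload events, in line with the XHR-based progress tracking the author mentions planning; the function and callback names are illustrative:

```ts
// Sketch: PUT a blob to a pre-signed URL and report real upload progress.
function putWithProgress(
  presignedUrl: string,
  data: Blob,
  contentType: string,
  onProgress: (uploadedBytes: number, totalBytes: number) => void,
): Promise<void> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open('PUT', presignedUrl)
    xhr.setRequestHeader('Content-Type', contentType)
    xhr.upload.onprogress = (event) => {
      if (event.lengthComputable) onProgress(event.loaded, event.total)
    }
    xhr.onload = () => {
      if (xhr.status >= 200 && xhr.status < 300) resolve()
      else reject(new Error(`Upload failed with status ${xhr.status}`))
    }
    xhr.onerror = () => reject(new Error('Network error during upload'))
    xhr.send(data)
  })
}
```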
```ts
.catch(() => {
  // Ignore abort errors
```
Copilot AI (Jan 11, 2026)
The multipart upload flow calls abortMultipartUpload on error, but if the abort itself fails (network issue, permissions, etc.), the error is silently ignored. While this is intentional to avoid masking the original error, it could lead to incomplete uploads accumulating on S3. Consider logging the abort failure or emitting a warning event.
Suggested change:

```diff
-.catch(() => {
-  // Ignore abort errors
+.catch((abortErr) => {
+  // Log abort errors for observability, but do not mask the original error
+  // eslint-disable-next-line no-console
+  console.warn('MultipartUploader: failed to abort multipart upload', {
+    key: this.#key,
+    uploadId: this.#uploadId,
+    originalError: err,
+    abortError: abortErr,
+  })
```
```js
for (let i = 0; i < part.length; i += 65536) {
  crypto.getRandomValues(
    new Uint8Array(part.buffer, i, Math.min(65536, part.length - i)),
  )
```
Copilot AI (Jan 11, 2026)
The test now generates random data for multipart uploads using crypto.getRandomValues in a loop over 65536-byte chunks. The Math.min safeguard does keep the final Uint8Array view inside the buffer when part.length is not a multiple of 65536, but the combination of the fixed increment i += 65536 and the clamped view length is easy to misread; the intent would be clearer if the chunk size were computed once per iteration, as in the suggestion below.
Suggested change:

```diff
-for (let i = 0; i < part.length; i += 65536) {
-  crypto.getRandomValues(
-    new Uint8Array(part.buffer, i, Math.min(65536, part.length - i)),
-  )
+for (let i = 0; i < part.length; ) {
+  const chunkSize = Math.min(65536, part.length - i)
+  crypto.getRandomValues(new Uint8Array(part.buffer, i, chunkSize))
+  i += chunkSize
```
```ts
 * Pauses the upload.
 */
pause(): void {
  this.#abortController.abort(pausingUploadReason)
```
Copilot AI (Jan 11, 2026)
The pause method aborts the controller with pausingUploadReason, but there's a potential race condition if multiple pause/resume calls happen quickly. After calling abort(), the old controller is not immediately replaced with a new one, which could lead to issues if start() is called again before the abort completes.
Suggested change:

```diff
-this.#abortController.abort(pausingUploadReason)
+const previousController = this.#abortController
+this.#abortController = new AbortController()
+previousController.abort(pausingUploadReason)
```
|
Closing this for now; it will be reopened after #6130 is merged, to avoid reviewing one change twice.
This PR should be merged after #6130, which will introduce pre-signed URLs to S3mini.
Working:
- Progress reporting (via #onProgress); this is temporary until we implement XHR-based progress tracking as before.
- Added new examples to examples/aws-nodejs to test the rewrite; the old examples will eventually be removed.
Not yet implemented:
- Remote file uploads: files from Google Drive, Dropbox, etc.
- XHR progress