
Conversation

@qxprakash
Collaborator

@qxprakash qxprakash commented Jan 11, 2026

This PR should be merged after #6130, which will introduce presigned URLs to s3mini.

Working:

  • Simple uploads (putObject)
  • Multipart uploads
  • Pause / Resume / Abort; resume works using listParts, as before
  • Progress tracking per chunk (#onProgress); this is temporary until we implement XHR-based progress tracking, as before

Added new examples to examples/aws-nodejs to test the rewrite; the old examples will eventually be removed.

Not yet implemented:

  • Rate limiting: uploads are currently sequential
  • Retries
  • Remote file uploads: files from Google Drive, Dropbox, etc.
  • XHR-based progress tracking
  • Full wiring with Uppy state (I saw a few state bugs while testing it manually)

@changeset-bot

changeset-bot bot commented Jan 11, 2026

⚠️ No Changeset found

Latest commit: c498951

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


@qxprakash qxprakash changed the title Plugin rewrite v3 Implement Minimal @uppy/aws-s3 plugin Jan 11, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 52aee89e10


Comment on lines 330 to 332:

    if (signal.aborted) {
      throw new Error('Upload aborted')
    }


P1: Preserve pause reason to avoid aborting uploads

Pausing calls abortController.abort(pausingUploadReason), but the abort checks throw new Error('Upload aborted') without a cause, so #onError can’t recognize a pause and instead aborts the multipart upload and emits upload-error. This means a user pausing mid-upload (or before the next loop iteration) will inadvertently cancel the upload on S3 and won’t be able to resume. Consider throwing with cause: pausingUploadReason (or using signal.reason) so pause paths are ignored as intended.
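
A minimal sketch of the suggested fix, reusing the PR's names (signal, pausingUploadReason) and assuming #onError receives the thrown error:

    if (signal.aborted) {
      // Propagate the abort reason as `cause` so #onError can tell a pause
      // (signal.reason === pausingUploadReason) apart from a real cancellation.
      throw new Error('Upload aborted', { cause: signal.reason })
    }

    // ...and in #onError, skip pause-induced aborts instead of aborting on S3:
    // if (err.cause === pausingUploadReason) return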


Comment on lines +204 to +207:

    #createCompanionSigner(companionUrl: string): IT.signRequestFn {
      return async (request) => {
        const response = await fetch(`${companionUrl}/s3/sign`, {
          method: 'POST',

P1: Use the presign endpoint for new signing payloads

The companion signer posts the new {method, key, uploadId, partNumber} payload to /s3/sign, but in this repo /s3/sign expects filename/contentType and only generates a simple PUT URL (see examples/aws-nodejs/index.js lines 128–145). That mismatch will 400 for the new payload and breaks multipart signing entirely (and can even upload to the wrong key), so endpoint mode won’t work with the provided server/example unless it targets the new /s3/presign endpoint or updates the server contract.
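
A sketch of the fix, pointing the signer at the /s3/presign route from examples/aws-nodejs/index.js; the payload and response shapes follow the contract described above, and the standalone function form is illustrative:

    function createCompanionSigner(companionUrl: string) {
      return async (request: {
        method: string
        key: string
        uploadId?: string
        partNumber?: number
      }): Promise<{ url: string }> => {
        // POST the new-style payload to the presign route added by this PR,
        // not the legacy /s3/sign route that expects filename/contentType.
        const response = await fetch(`${companionUrl}/s3/presign`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(request),
        })
        if (!response.ok) throw new Error(`Presign failed: ${response.status}`)
        return response.json()
      }
    }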


Contributor

Copilot AI left a comment


Pull request overview

This PR introduces a rewritten version of the AWS S3 upload plugin that shifts from server-signed requests to a pre-signed URL approach. The implementation includes a new S3mini client for direct S3 operations and a MultipartUploader class for orchestrating uploads, supporting both simple putObject and multipart uploads with pause/resume/abort functionality.
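
For orientation, a hypothetical wiring of the rewritten plugin; only shouldUseMultipart and the signRequest approach are named in this PR, so the option shapes and URLs below are assumptions:

    import Uppy from '@uppy/core'
    import AwsS3 from '@uppy/aws-s3'

    const uppy = new Uppy()

    uppy.use(AwsS3, {
      // Sign each S3 operation against the example server's /s3/presign route
      // (server URL is an assumption, taken from a local dev setup).
      signRequest: async (request: unknown): Promise<{ url: string }> => {
        const res = await fetch('http://localhost:8080/s3/presign', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(request),
        })
        return res.json()
      },
      // Multipart only for large files; the threshold here is illustrative.
      shouldUseMultipart: (file: { size: number | null }) =>
        (file.size ?? 0) > 100 * 1024 * 1024,
    })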

Changes:

  • Refactored signature handling from signed headers to pre-signed URLs with query string authentication
  • Implemented new S3mini client with pre-signed URL generation and credential caching
  • Added MultipartUploader class for handling upload orchestration with progress tracking
  • Created new test example page and server endpoint for testing the rewritten plugin

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 21 comments.

Summary per file:

  • packages/@uppy/aws-s3/src/s3-client/types.ts: updated type definitions from signed requests to the pre-signed URL approach; removed SSE-C headers support
  • packages/@uppy/aws-s3/src/s3-client/signer.ts: converted the SigV4 signer to generate pre-signed URLs instead of authorization headers
  • packages/@uppy/aws-s3/src/s3-client/S3.ts: modified the S3 client to use pre-signed URLs for all operations; simplified the request flow
  • packages/@uppy/aws-s3/src/index.ts: new plugin implementation with configuration validation and upload orchestration
  • packages/@uppy/aws-s3/src/MultipartUploader.ts: new class handling upload chunking, progress tracking, and pause/resume functionality
  • packages/@uppy/aws-s3/tests/index.test.ts: added unit tests for plugin configuration and basic upload scenarios
  • packages/@uppy/aws-s3/tests/s3-client/minio.test.js: updated tests to use the pre-signed URL approach; removed checksum validation tests
  • packages/@uppy/aws-s3/tests/s3-client/_shared.test.js: updated test helper to pass the endpoint to the presigner
  • examples/aws-nodejs/index.js: added a /s3/presign endpoint for generating pre-signed URLs
  • examples/aws-nodejs/public/rewrite-test.html: new test page demonstrating both the STS and signRequest approaches
Comments suppressed due to low confidence (1)

packages/@uppy/aws-s3/src/s3-client/S3.ts:105

  • The credential caching doesn't consider the expiration field from TemporaryCredentials. If credentials have an expiration time, they should be automatically refreshed before they expire. Currently, cached credentials are only cleared on ExpiredToken/InvalidAccessKeyId errors, which means there will be at least one failed request before credentials are refreshed.
  private async _getCachedCredentials(): Promise<IT.CredentialsResponse> {
    // Return Cached Credentials if available
    if (this.cachedCredentials != null) {
      return this.cachedCredentials
    }

    // Cache the promise so concurrent calls wait for the same fetch
    if (this.cachedCredentialsPromise == null) {
      this.cachedCredentialsPromise = this.getCredentials!({})
        .then((creds) => {
          this.cachedCredentials = creds
          return creds
        })
        .finally(() => {
          // Clear promise cache after resolution to allow future retries
          this.cachedCredentialsPromise = undefined
        })
    }

    return this.cachedCredentialsPromise
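
A sketch of expiration-aware caching along the lines the comment suggests; the Expiration field name/casing and the 60-second refresh margin are assumptions:

    const REFRESH_MARGIN_MS = 60_000

    function isExpiringSoon(expiration: string | Date | undefined): boolean {
      if (expiration == null) return false
      // Refresh slightly early to absorb clock skew and request latency.
      return new Date(expiration).getTime() - REFRESH_MARGIN_MS <= Date.now()
    }

    // In _getCachedCredentials, consult the expiry before reusing the cache:
    // if (
    //   this.cachedCredentials != null &&
    //   !isExpiringSoon(this.cachedCredentials.credentials?.Expiration)
    // ) {
    //   return this.cachedCredentials
    // }
    // this.cachedCredentials = undefined // stale: fall through and refetch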


Comment on lines +61 to +67:

    // Build the canonical path (must be encoded for signing)
    let canonicalPath = url.pathname
    if (encodedKey && encodedKey.length > 0) {
      canonicalPath =
        canonicalPath === '/'
          ? `/${encodedKey.replace(/^\/+/, '')}`
          : `${canonicalPath}/${encodedKey.replace(/^\/+/, '')}`

Copilot AI Jan 11, 2026


The canonical path building logic has a potential issue when the endpoint already has a path component. If url.pathname is not '/', the key is appended after it. However, if the key starts with '/', the logic attempts to strip leading slashes, but this might not work correctly in all cases. For example, if url.pathname is '/bucket' and key is '/object.txt', the result would be '/bucket/object.txt', but if key is 'object.txt', it would also be '/bucket/object.txt'. This seems correct but should be tested with various endpoint formats.

Suggested change:

    // Normalize base path to avoid double slashes and ensure a leading slash
    let basePath = url.pathname || '/'
    if (!basePath.startsWith('/')) {
      basePath = `/${basePath}`
    }
    if (basePath !== '/') {
      basePath = basePath.replace(/\/+$/u, '')
    }
    // Build the canonical path (must be encoded for signing)
    let canonicalPath = basePath
    if (encodedKey && encodedKey.length > 0) {
      const sanitizedKey = encodedKey.replace(/^\/+/u, '')
      canonicalPath =
        canonicalPath === '/'
          ? `/${sanitizedKey}`
          : `${canonicalPath}/${sanitizedKey}`

Comment on lines +358 to +424:

    app.post('/s3/presign', async (req, res, next) => {
      try {
        const { method, key, uploadId, partNumber } = req.body
        getS3Client() // Ensure client is initialized

        if (!method || !key) {
          return res.status(400).json({ error: 'method and key are required' })
        }

        const bucket = process.env.COMPANION_AWS_BUCKET
        let command

        // Determine which command to use based on method and params
        if (method === 'PUT' && uploadId && partNumber) {
          // UploadPart
          command = new UploadPartCommand({
            Bucket: bucket,
            Key: key,
            UploadId: uploadId,
            PartNumber: parseInt(partNumber, 10),
          })
        } else if (method === 'PUT') {
          // PutObject (simple upload)
          command = new PutObjectCommand({
            Bucket: bucket,
            Key: key,
          })
        } else if (method === 'POST' && !uploadId) {
          // CreateMultipartUpload
          command = new CreateMultipartUploadCommand({
            Bucket: bucket,
            Key: key,
          })
        } else if (method === 'POST' && uploadId) {
          // CompleteMultipartUpload
          command = new CompleteMultipartUploadCommand({
            Bucket: bucket,
            Key: key,
            UploadId: uploadId,
            // Note: parts are sent in the request body, not in the presigned URL
          })
        } else if (method === 'DELETE' && uploadId) {
          // AbortMultipartUpload
          command = new AbortMultipartUploadCommand({
            Bucket: bucket,
            Key: key,
            UploadId: uploadId,
          })
        } else if (method === 'GET' && uploadId) {
          // ListParts
          command = new ListPartsCommand({
            Bucket: bucket,
            Key: key,
            UploadId: uploadId,
          })
        } else {
          return res.status(400).json({ error: 'Unsupported operation' })
        }

        const url = await getSignedUrl(s3Client, command, { expiresIn: 900 })

        res.setHeader('Access-Control-Allow-Origin', accessControlAllowOrigin)
        res.json({ url })
      } catch (err) {
        next(err)
      }
    })

Copilot AI Jan 11, 2026


The Companion signer endpoint at '/s3/presign' doesn't appear to have any authentication or authorization checks. This means anyone who can reach this endpoint can generate pre-signed URLs for any S3 operation on the configured bucket. This is a significant security vulnerability. The endpoint should verify that the request is authorized (e.g., check session, API key, etc.) before generating pre-signed URLs.
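
A minimal sketch of one way to gate the route, assuming an Express app and a session mechanism such as express-session (none of this is in the PR):

    import express, { type Request, type Response, type NextFunction } from 'express'

    const app = express()
    app.use(express.json())

    // Hypothetical guard: reject presign requests that lack a valid session.
    function requireSession(req: Request, res: Response, next: NextFunction) {
      const session = (req as { session?: { userId?: string } }).session
      if (!session?.userId) {
        res.status(401).json({ error: 'Unauthorized' })
        return
      }
      next()
    }

    // Apply the guard in front of the presign route shown above.
    app.post('/s3/presign', requireSession, async (req, res, next) => {
      // ...presign logic as above
    })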

Comment on lines +360 to +364:

    const { method, key, uploadId, partNumber } = req.body
    getS3Client() // Ensure client is initialized

    if (!method || !key) {
      return res.status(400).json({ error: 'method and key are required' })

Copilot AI Jan 11, 2026


The presign endpoint doesn't validate the key parameter for path traversal or other malicious patterns. An attacker could potentially request pre-signed URLs for keys like '../../../sensitive-file' or other patterns that might escape intended directory boundaries. Consider adding validation to ensure keys conform to expected patterns and don't contain suspicious path components.
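
A sketch of a conservative allow-list check; the exact policy is an assumption, not part of this PR:

    // Hypothetical validation: reject empty, absolute, or traversal-prone keys.
    function isSafeKey(key: unknown): key is string {
      return (
        typeof key === 'string' &&
        key.length > 0 &&
        key.length <= 1024 && // S3's documented key length limit
        !key.startsWith('/') &&
        !key.includes('\\') &&
        !key.split('/').some((segment) => segment === '..' || segment === '')
      )
    }

    // Inside the /s3/presign handler, before building any command:
    // if (!isSafeKey(key)) return res.status(400).json({ error: 'invalid key' })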

Comment on lines +121 to +122:

    // ============================================================================


Copilot AI Jan 11, 2026


The shouldUseMultipart option has a type definition issue. The default value is defined as a function but cast to true type using as any as true. This creates confusion between the runtime behavior (function) and the type (boolean true). The type definition should properly reflect that the default is a function, not true.
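
A sketch of typing that reflects the function default; names are illustrative, not the PR's actual definitions:

    // The option accepts a boolean or a per-file predicate...
    type ShouldUseMultipart<TFile> = boolean | ((file: TFile) => boolean)

    interface UploaderOptions<TFile extends { size: number | null }> {
      // ...so the default should be typed as a function, not cast via `as any as true`.
      shouldUseMultipart?: ShouldUseMultipart<TFile>
    }

    // Example of a size-based function default (threshold is illustrative):
    const defaultShouldUseMultipart = (file: { size: number | null }): boolean =>
      (file.size ?? 0) > 100 * 1024 * 1024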

Comment on lines +447 to +448:

    window.UPPY_S3_BUCKET = "${process.env.COMPANION_AWS_BUCKET}";
    window.UPPY_S3_REGION = "${process.env.COMPANION_AWS_REGION}";

Copilot AI Jan 11, 2026


The HTML injects environment variables directly into the page without any sanitization. While COMPANION_AWS_BUCKET and COMPANION_AWS_REGION are typically safe, if these values contain special characters or malicious content, it could lead to XSS vulnerabilities. Consider using JSON.stringify() or proper HTML escaping when injecting values.

Suggested change:

    window.UPPY_S3_BUCKET = ${JSON.stringify(process.env.COMPANION_AWS_BUCKET)};
    window.UPPY_S3_REGION = ${JSON.stringify(process.env.COMPANION_AWS_REGION)};

    if (typeof shouldUseMultipart === 'function') {
      return shouldUseMultipart(file)
    }
    if (typeof shouldUseMultipart === 'boolean') {

Copilot AI Jan 11, 2026


The comment says the default matches 'Companion's DEFAULT behaviour', but this may not be accurate. Companion typically uses 'uploads/{timestamp}-{randomId}/{filename}' format. This discrepancy could lead to confusion or incompatibility issues when switching between Companion and direct S3 uploads.
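
For reference, a sketch of the key format the comment describes; the randomId implementation is an assumption:

    // Companion-style default object key: uploads/{timestamp}-{randomId}/{filename}
    function companionStyleKey(filename: string): string {
      return `uploads/${Date.now()}-${crypto.randomUUID()}/${filename}`
    }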

Comment on lines +276 to +283:

    await this.#s3Client.putObject(
      this.#key,
      this.#data,
      this.#file.type || 'application/octet-stream',
    )

    // Report progress
    this.#chunkState[0].uploaded = this.#data.size

Copilot AI Jan 11, 2026


In the simple upload path, the entire file is sent via putObject, but there's no progress tracking during the actual HTTP request. The progress is only updated after the upload completes. This means for large files (up to 100MB by default), users won't see any progress until the upload finishes. Consider implementing XHR-based uploads or fetch with readable streams to provide incremental progress updates.

Suggested change:

    const bytesTotal = this.#data.size
    const chunkState = this.#chunkState[0]
    const initialUploaded = typeof chunkState.uploaded === 'number' ? chunkState.uploaded : 0
    // Provide incremental progress updates during the simple upload.
    // Since we do not have byte-level progress from the underlying client,
    // we estimate progress over time and ensure we never report more than the
    // total size. The final, exact progress is still reported after putObject
    // completes.
    const startTime = Date.now()
    // A conservative estimate of how long a simple upload might take; the
    // actual duration may differ, but this keeps the UI responsive.
    const ESTIMATED_UPLOAD_DURATION_MS = 30_000
    const PROGRESS_TICK_MS = 500
    let progressTimer: ReturnType<typeof setInterval> | undefined
    const startProgressTimer = () => {
      if (progressTimer != null) return
      progressTimer = setInterval(() => {
        // If the upload was aborted, do not continue to emit progress.
        if (signal.aborted) {
          if (progressTimer != null) {
            clearInterval(progressTimer)
            progressTimer = undefined
          }
          return
        }
        const elapsed = Date.now() - startTime
        const ratio = Math.max(0, Math.min(1, elapsed / ESTIMATED_UPLOAD_DURATION_MS))
        const estimatedUploaded = initialUploaded + (bytesTotal - initialUploaded) * ratio
        // Never report more than the total size.
        const nextUploaded = Math.min(bytesTotal, Math.floor(estimatedUploaded))
        if (nextUploaded > chunkState.uploaded) {
          chunkState.uploaded = nextUploaded
          this.#onProgress()
        }
      }, PROGRESS_TICK_MS)
    }
    const stopProgressTimer = () => {
      if (progressTimer != null) {
        clearInterval(progressTimer)
        progressTimer = undefined
      }
    }
    startProgressTimer()
    try {
      await this.#s3Client.putObject(
        this.#key,
        this.#data,
        this.#file.type || 'application/octet-stream',
      )
    } finally {
      stopProgressTimer()
    }
    // Report final, exact progress
    chunkState.uploaded = bytesTotal

Comment on lines +419 to +420:

    .catch(() => {
      // Ignore abort errors

Copilot AI Jan 11, 2026


The multipart upload flow calls abortMultipartUpload on error, but if the abort itself fails (network issue, permissions, etc.), the error is silently ignored. While this is intentional to avoid masking the original error, it could lead to incomplete uploads accumulating on S3. Consider logging the abort failure or emitting a warning event.

Suggested change:

    .catch((abortErr) => {
      // Log abort errors for observability, but do not mask the original error
      // eslint-disable-next-line no-console
      console.warn('MultipartUploader: failed to abort multipart upload', {
        key: this.#key,
        uploadId: this.#uploadId,
        originalError: err,
        abortError: abortErr,
      })

Comment on lines +87 to +90:

    for (let i = 0; i < part.length; i += 65536) {
      crypto.getRandomValues(
        new Uint8Array(part.buffer, i, Math.min(65536, part.length - i)),
      )

Copilot AI Jan 11, 2026


The test now generates random data for multipart uploads using crypto.getRandomValues in a loop with 65536-byte chunks. However, the loop condition i < part.length with increment i += 65536 could lead to the last iteration trying to write beyond the buffer if part.length is not a multiple of 65536. While the Math.min safeguard is in place, the logic would be clearer with i < part.length; i = Math.min(i + 65536, part.length) or similar.

Suggested change:

    for (let i = 0; i < part.length; ) {
      const chunkSize = Math.min(65536, part.length - i)
      crypto.getRandomValues(new Uint8Array(part.buffer, i, chunkSize))
      i += chunkSize

    /**
     * Pauses the upload.
     */
    pause(): void {
      this.#abortController.abort(pausingUploadReason)

Copilot AI Jan 11, 2026


The pause method aborts the controller with pausingUploadReason, but there's a potential race condition if multiple pause/resume calls happen quickly. After calling abort(), the old controller is not immediately replaced with a new one, which could lead to issues if start() is called again before the abort completes.

Suggested change:

    const previousController = this.#abortController
    this.#abortController = new AbortController()
    previousController.abort(pausingUploadReason)

@qxprakash
Collaborator Author

qxprakash commented Jan 12, 2026

Closing this for now; I will reopen it after #6130 is merged, to avoid reviewing the same change twice.

@qxprakash qxprakash closed this Jan 12, 2026