fix(storage): sign only ACL in presigned URLs, rename fields to requiredHeaders #188

abhay-ramesh merged 4 commits into main
Conversation
…redHeaders

The previous implementation (from #164) signed `Content-Type` and all `x-amz-meta-*` headers alongside `x-amz-acl`. This required every user's S3 bucket CORS config to allow `x-amz-meta-*` headers, breaking existing deployments silently on upgrade.

Only `x-amz-acl` needs to be in SignedHeaders — AWS SigV4 query-string signing does not verify unsigned headers, so `Content-Type` and metadata can be sent by the client without being signed. S3 stores them regardless.

Changes:
- Split `prepareS3UploadHeaders` into `preparePresignHeaders` (signed/unsigned split for presigned PUT) and the existing helper (used for direct server-side uploads where all headers are applied directly)
- Rename `fields` -> `requiredHeaders` in `PresignedUrlResult`, `PresignedUrlResponse`, and the `uploadToS3` XHR function — "fields" is S3 POST form terminology, not appropriate for presigned PUT uploads
- Add `S3AclValue` union type on `BaseProviderConfig` and `UploadConfig.defaults` so typos like `"public_read"` are caught at compile time
- Fix and expand presigned-url-signing tests: verify metadata is in `requiredHeaders` but absent from the signed request, add a MinIO test, correct assertions to match actual provider defaults
📝 Walkthrough

Presigned upload contract changed from POST …

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Router
    participant StorageClient as Storage Client
    participant AwsClient as AWS Client

    Client->>Router: generatePresignedUrls(config)
    Router->>StorageClient: generatePresignedUploadUrl(options)
    StorageClient->>StorageClient: preparePresignHeaders()\nsigned: { x-amz-acl }\nunsigned: { Content-Type, x-amz-meta-* }
    StorageClient->>AwsClient: sign(signed)
    AwsClient-->>StorageClient: signed signature / signed headers
    StorageClient-->>Router: PresignedUrlResult { url, requiredHeaders }
    Router-->>Client: PresignedUrlResponse { url, requiredHeaders }
    Client->>Client: uploadToS3(url, requiredHeaders)\nset headers via xhr.setRequestHeader()\nPUT file with requiredHeaders
```
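The final client-side step in the diagram — applying `requiredHeaders` to the PUT request — can be sketched as below. The helper names (`buildUploadHeaders`, `applyHeaders`) are illustrative, not pushduck's actual API:

```typescript
// Merge the server-returned requiredHeaders with the file's Content-Type.
// requiredHeaders wins on conflict, since the server decides what S3 must see.
function buildUploadHeaders(
  contentType: string,
  requiredHeaders?: Record<string, string>
): Record<string, string> {
  return { "Content-Type": contentType, ...(requiredHeaders ?? {}) };
}

// Apply each header to the XHR before sending the PUT, mirroring the
// xhr.setRequestHeader() step in the diagram. Typed structurally so it
// works with any XHR-like object.
function applyHeaders(
  xhr: { setRequestHeader(name: string, value: string): void },
  headers: Record<string, string>
): void {
  for (const [name, value] of Object.entries(headers)) {
    xhr.setRequestHeader(name, value);
  }
}
```

Because the metadata and `Content-Type` headers are unsigned, only `x-amz-acl` must match the signature exactly; everything else is passed through and stored by S3.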
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Returning x-amz-meta-* headers to the browser client exposes server-side values set by middleware (userId, routeName, etc.) in the network tab. Since metadata headers are unsigned, omitting them from requiredHeaders does not affect upload correctness — S3 does not require them. Storing object metadata via presigned PUT is tracked in issue #189. This restores pre-#164 behavior: metadata is passed to generatePresignedUploadUrl but not forwarded to the client.
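The behavior described — metadata is passed to `generatePresignedUploadUrl` but not forwarded to the client — amounts to filtering `x-amz-meta-*` keys out of the headers returned to the browser. A minimal sketch (the helper name is hypothetical, not pushduck's actual code):

```typescript
// Strip x-amz-meta-* entries so server-set values (userId, routeName, etc.)
// never appear in the browser's network tab. Safe because these headers
// are unsigned: S3 accepts the PUT without them.
function stripMetadataHeaders(
  headers: Record<string, string>
): Record<string, string> {
  return Object.fromEntries(
    Object.entries(headers).filter(
      ([key]) => !key.toLowerCase().startsWith("x-amz-meta-")
    )
  );
}
```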
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/pushduck/src/core/router/router-v2.ts (1)
937-943: ⚠️ Potential issue | 🟠 Major — Keep a deprecated `fields` alias for one rollout window.

This renames the presign wire contract outright. During a server-first rollout, older browser bundles that still read `fields` will stop sending ACL/metadata headers. Returning both keys here avoids breaking cached clients while `requiredHeaders` rolls out.

🔁 Compatibility shim

```diff
 results.push({
   success: true,
   file,
   presignedUrl: presignedResult.url,
   key: presignedResult.key,
+  fields: presignedResult.requiredHeaders, // deprecated; remove in the next major
   requiredHeaders: presignedResult.requiredHeaders,
   metadata: fileMetadata,
 });
```

```diff
 export interface PresignedUrlResponse {
   success: boolean;
   file: S3FileMetadata;
   presignedUrl?: string;
   key?: string;
+  /** @deprecated Use `requiredHeaders`. */
+  fields?: Record<string, string>;
   /** Headers the client must send with the PUT request. See {@link PresignedUrlResult.requiredHeaders}. */
   requiredHeaders?: Record<string, string>;
   metadata?: any;
   error?: string;
 }
```

Also applies to: 1045-1051
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/pushduck/src/core/router/router-v2.ts` around lines 937 - 943, Update the presign response object to include the deprecated "fields" alias alongside the new "requiredHeaders" so old browser bundles keep working: when building the object in the results.push call (the block that uses presignedResult.url, presignedResult.key, presignedResult.requiredHeaders, metadata/fileMetadata), add a fields property that points to the same data (e.g., fields: presignedResult.requiredHeaders). Make the same change at the other identical response construction (the second results.push around the 1045–1051 region) so both returning shapes include requiredHeaders and the deprecated fields alias.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/pushduck/src/core/providers/providers.ts`:
- Around line 80-87: The S3AclValue union in providers.ts is missing the valid
canned ACL "aws-exec-read"; update the S3AclValue type (the union declaration
named S3AclValue) to include the string literal "aws-exec-read" alongside the
other ACL values so the type covers this Amazon S3 canned ACL.
In `@packages/pushduck/src/core/storage/client.ts`:
- Around line 751-760: The code still requires browser CORS for x-amz-meta-*
because use-upload-route.ts populates requiredHeaders with metadata and the
client sends them; update the flow so presigned PUT headers (the function around
preparePresignedPutHeaders / the presigned PUT generation in client.ts) do NOT
include x-amz-meta-* in requiredHeaders sent to the browser. Instead return
metadata as a separate JSON field (or persist it server-side) and apply it
server-side after upload (e.g., copy/replace object metadata or use S3 tagging)
so the browser no longer needs to send x-amz-meta-* request headers; also remove
x-amz-meta-* from requiredHeaders population in use-upload-route.ts (the code
around requiredHeaders assignment at lines ~233-236).
In `@packages/pushduck/src/hooks/use-upload-route.ts`:
- Around line 233-240: The fallback for older servers only sets Content-Type and
ignores presign `fields`, causing missing headers like `x-amz-acl` or
`x-amz-meta-*`; update the upload logic in use-upload-route (the block that
currently checks `requiredHeaders` and calls `xhr.setRequestHeader`) to also
read and apply `result.fields` when `requiredHeaders` is absent (or merge both
when present), i.e., iterate over `result.fields` and call
`xhr.setRequestHeader` for each key/value (in addition to `Content-Type`), and
ensure the code path that forwards `result.requiredHeaders` (around where
`requiredHeaders` is assigned/used) preserves both shapes so older presign
responses' metadata is included.
---
Outside diff comments:
In `@packages/pushduck/src/core/router/router-v2.ts`:
- Around line 937-943: Update the presign response object to include the
deprecated "fields" alias alongside the new "requiredHeaders" so old browser
bundles keep working: when building the object in the results.push call (the
block that uses presignedResult.url, presignedResult.key,
presignedResult.requiredHeaders, metadata/fileMetadata), add a fields property
that points to the same data (e.g., fields: presignedResult.requiredHeaders).
Make the same change at the other identical response construction (the second
results.push around the 1045–1051 region) so both returning shapes include
requiredHeaders and the deprecated fields alias.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e211a878-1033-4294-bc65-ada0c5f00a4c
📒 Files selected for processing (6)
- packages/pushduck/src/__tests__/presigned-url-signing.test.ts
- packages/pushduck/src/core/config/upload-config.ts
- packages/pushduck/src/core/providers/providers.ts
- packages/pushduck/src/core/router/router-v2.ts
- packages/pushduck/src/core/storage/client.ts
- packages/pushduck/src/hooks/use-upload-route.ts
```ts
export type S3AclValue =
  | "private"
  | "public-read"
  | "public-read-write"
  | "authenticated-read"
  | "bucket-owner-read"
  | "bucket-owner-full-control"
  | "log-delivery-write";
```
🧩 Analysis chain
🌐 Web query:
Does Amazon S3 support the canned ACL value aws-exec-read, and what are the full set of canned ACL values?
💡 Result:
Yes, Amazon S3 supports the canned ACL value aws-exec-read for both buckets and objects. It grants FULL_CONTROL to the owner and READ access to Amazon EC2 for retrieving Amazon Machine Image (AMI) bundles. The full set of supported canned ACL values, applicable to buckets and/or objects as noted:

- private: Owner gets FULL_CONTROL. No one else has access (default for both buckets and objects).
- public-read: Owner gets FULL_CONTROL. AllUsers group gets READ (both).
- public-read-write: Owner gets FULL_CONTROL. AllUsers group gets READ and WRITE (both; not recommended for buckets).
- aws-exec-read: Owner gets FULL_CONTROL. Amazon EC2 gets READ (both).
- authenticated-read: Owner gets FULL_CONTROL. AuthenticatedUsers group gets READ (both).
- bucket-owner-read: Object owner gets FULL_CONTROL. Bucket owner gets READ (objects only; ignored for buckets).
- bucket-owner-full-control: Object owner and bucket owner both get FULL_CONTROL (objects only; ignored for buckets).
- log-delivery-write: LogDelivery group gets WRITE and READ_ACP (buckets only).
Citations:
- 1: https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html
- 2: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/CannedAccessControlList.html
- 3: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html
- 4: https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object-acl.html
Add the missing aws-exec-read canned ACL.
The S3AclValue union omits aws-exec-read, a valid Amazon S3 canned ACL that grants FULL_CONTROL to the owner and READ access to Amazon EC2 for AMI bundle retrieval.
➕ Complete the ACL union
```diff
 export type S3AclValue =
   | "private"
   | "public-read"
   | "public-read-write"
+  | "aws-exec-read"
   | "authenticated-read"
   | "bucket-owner-read"
   | "bucket-owner-full-control"
   | "log-delivery-write";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/pushduck/src/core/providers/providers.ts` around lines 80 - 87, The
S3AclValue union in providers.ts is missing the valid canned ACL
"aws-exec-read"; update the S3AclValue type (the union declaration named
S3AclValue) to include the string literal "aws-exec-read" alongside the other
ACL values so the type covers this Amazon S3 canned ACL.
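The compile-time union catches typos in source code, but ACLs often arrive from env vars or JSON config, where a runtime check is also useful. A sketch pairing the union with a guard (`S3_ACL_VALUES` and `isS3AclValue` are hypothetical helpers, not part of pushduck):

```typescript
// Single source of truth: the const array drives both the type and the guard.
const S3_ACL_VALUES = [
  "private",
  "public-read",
  "public-read-write",
  "aws-exec-read",
  "authenticated-read",
  "bucket-owner-read",
  "bucket-owner-full-control",
  "log-delivery-write",
] as const;

type S3AclValue = (typeof S3_ACL_VALUES)[number];

// Runtime narrowing for untyped input, e.g. process.env.S3_ACL.
function isS3AclValue(value: string): value is S3AclValue {
  return (S3_ACL_VALUES as readonly string[]).includes(value);
}
```

With this shape, adding a new canned ACL to the array automatically extends both the type and the validator.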
```ts
if (requiredHeaders) {
  for (const [key, value] of Object.entries(requiredHeaders)) {
    xhr.setRequestHeader(key, value);
  }
} else {
  // Fallback for backward compatibility:
  // older server version — no requiredHeaders returned, set Content-Type directly
  xhr.setRequestHeader("Content-Type", contentType);
}
```
The old-server fallback never reads `fields`.

Line 683 only forwards `result.requiredHeaders`, so when a newer client hits an older server the fallback at lines 233-240 sends just `Content-Type`. Any old presign response that put `x-amz-acl` or `x-amz-meta-*` in `fields` will fail signature validation or lose metadata.
🪛 Read both response shapes during the transition

```diff
 const blob = earlyBlob ?? await toBlob(file);
+const uploadHeaders = result.requiredHeaders ?? result.fields;
 await uploadToS3(
   blob,
   fileMeta.type,
   result.presignedUrl,
-  result.requiredHeaders,
+  uploadHeaders,
   (progress, uploadSpeed, eta) =>
     updateFileProgress(fileState.id, progress, uploadSpeed, eta)
 );
```

Also applies to: 679-683
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/pushduck/src/hooks/use-upload-route.ts` around lines 233 - 240, The
fallback for older servers only sets Content-Type and ignores presign `fields`,
causing missing headers like `x-amz-acl` or `x-amz-meta-*`; update the upload
logic in use-upload-route (the block that currently checks `requiredHeaders` and
calls `xhr.setRequestHeader`) to also read and apply `result.fields` when
`requiredHeaders` is absent (or merge both when present), i.e., iterate over
`result.fields` and call `xhr.setRequestHeader` for each key/value (in addition
to `Content-Type`), and ensure the code path that forwards
`result.requiredHeaders` (around where `requiredHeaders` is assigned/used)
preserves both shapes so older presign responses' metadata is included.
♻️ Duplicate comments (1)
packages/pushduck/src/core/storage/client.ts (1)
598-614: ⚠️ Potential issue | 🟠 Major — `metadata` became a silent no-op for presigned PUTs.

`preparePresignHeaders()` now drops every `x-amz-meta-*` header, but `generatePresignedUploadUrl()` still accepts `options.metadata` and returns no other path to persist it. `router-v2.ts` still passes `s3Metadata` into this API, so uploads can succeed while the object loses metadata. Please either add the server-side completion/persistence step now, or fail fast when `options.metadata` is present; the `requiredHeaders` docs in this file should be updated in the same change.

Safer short-term fix

```diff
 function preparePresignHeaders(
   uploadConfig: UploadConfig,
   options: {
     contentType?: string;
     metadata?: Record<string, string>;
   }
 ): { signed: Record<string, string>; unsigned: Record<string, string> } {
   const { provider, acl: providerAcl } = uploadConfig.provider;
   const signed: Record<string, string> = {};
   const unsigned: Record<string, string> = {};
+
+  if (options.metadata && Object.keys(options.metadata).length > 0) {
+    throw new Error(
+      "Presigned PUT uploads do not currently support object metadata without a server-side completion step."
+    );
+  }
```

Also applies to: 781-790, 828-849
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/pushduck/src/core/storage/client.ts` around lines 598 - 614, The presigned PUT flow drops x-amz-meta-* headers in preparePresignHeaders causing options.metadata passed to generatePresignedUploadUrl (and from router-v2.ts's s3Metadata) to be silently ignored; update the flow to either persist metadata server-side during the upload completion step or reject/fail fast when options.metadata is provided. Specifically: in preparePresignHeaders ensure x-amz-meta-* headers are preserved or propagated into the returned requiredHeaders, or modify generatePresignedUploadUrl to validate options.metadata and throw an error if metadata is present (and document this behavior); also update the requiredHeaders JSDoc to correctly describe handling of x-amz-meta-* and reflect the chosen behavior. Ensure references to preparePresignHeaders, generatePresignedUploadUrl, options.metadata, and router-v2.ts's s3Metadata are addressed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@packages/pushduck/src/core/storage/client.ts`:
- Around line 598-614: The presigned PUT flow drops x-amz-meta-* headers in
preparePresignHeaders causing options.metadata passed to
generatePresignedUploadUrl (and from router-v2.ts's s3Metadata) to be silently
ignored; update the flow to either persist metadata server-side during the
upload completion step or reject/fail fast when options.metadata is provided.
Specifically: in preparePresignHeaders ensure x-amz-meta-* headers are preserved
or propagated into the returned requiredHeaders, or modify
generatePresignedUploadUrl to validate options.metadata and throw an error if
metadata is present (and document this behavior); also update the
requiredHeaders JSDoc to correctly describe handling of x-amz-meta-* and reflect
the chosen behavior. Ensure references to preparePresignHeaders,
generatePresignedUploadUrl, options.metadata, and router-v2.ts's s3Metadata are
addressed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4aebea5c-7350-4336-9e1f-beaf4b23637d
📒 Files selected for processing (2)
- packages/pushduck/src/__tests__/presigned-url-signing.test.ts
- packages/pushduck/src/core/storage/client.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/pushduck/src/__tests__/presigned-url-signing.test.ts
Summary
Follows up on #164 with a more precise implementation of the presigned URL header signing.
The problem with #164's approach: It signed `Content-Type` and all `x-amz-meta-*` headers alongside `x-amz-acl`. When headers are in `SignedHeaders`, the browser XHR must send them — which means the S3 bucket CORS config must allow them. Most existing deployments only allow `Content-Type`, so adding `x-amz-meta-*` to signed headers would break uploads silently on upgrade.

The key insight: With `signQuery: true` (AWS SigV4 query-string signing), only headers listed in `X-Amz-SignedHeaders` are verified. Additional headers sent by the client (`Content-Type`, `x-amz-meta-*`) pass through and are stored by S3 without signature verification. So only `x-amz-acl` needs to be signed — it's the only header AWS won't apply unless it's in `SignedHeaders`.

Changes
- Sign only `x-amz-acl` — `Content-Type` and metadata go into `requiredHeaders` but are not signed. Client sends all of them; AWS verifies only the ACL.
- Rename `fields` → `requiredHeaders` — `fields` is S3 POST (multipart form) terminology. For presigned PUT uploads the correct name is `requiredHeaders`.
- Add `S3AclValue` union type — replaces `acl?: string` on `BaseProviderConfig` and `UploadConfig.defaults`. Typos like `"public_read"` are now caught at compile time.
- Fix and expand tests — verify metadata is in `requiredHeaders` but absent from the signed request, add MinIO coverage, correct assertions to match actual provider defaults.

CORS impact for users
Users configuring `acl: 'public-read'` on AWS S3 must add `x-amz-acl` to their bucket's CORS `AllowedHeaders`; the same `x-amz-acl` entry is needed wherever an ACL is set. Users with middleware metadata no longer need to add `x-amz-meta-*` to their bucket CORS config.

Test plan
- Upload with `acl: 'public-read'` on AWS S3 (verify file is publicly accessible)
- … (verify `x-amz-acl` header sent)
- `acl?: S3AclValue` autocomplete works in IDE
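The signed/unsigned split at the heart of this change can be sketched as a pure function. The shape follows the PR description's `preparePresignHeaders` (signed vs. unsigned buckets), but the name and body here are illustrative, not pushduck's actual implementation:

```typescript
// Split upload headers into the set that must appear in X-Amz-SignedHeaders
// (only x-amz-acl) and the set the client sends unsigned with the PUT.
function splitPresignHeaders(options: {
  acl?: string;
  contentType?: string;
  metadata?: Record<string, string>;
}): { signed: Record<string, string>; unsigned: Record<string, string> } {
  const signed: Record<string, string> = {};
  const unsigned: Record<string, string> = {};

  // The one header SigV4 must verify: S3 won't apply an unsigned ACL.
  if (options.acl) signed["x-amz-acl"] = options.acl;

  // Unsigned pass-through headers: stored by S3 without signature verification,
  // so the bucket CORS config never has to allow x-amz-meta-*.
  if (options.contentType) unsigned["Content-Type"] = options.contentType;
  for (const [key, value] of Object.entries(options.metadata ?? {})) {
    unsigned[`x-amz-meta-${key}`] = value;
  }

  return { signed, unsigned };
}
```

Keeping the split in one pure function makes the CORS contract testable: a unit test can assert that `signed` contains exactly `x-amz-acl` and nothing else.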