feat: improve bulk uploads #1844
base: main

Conversation
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
lib/files/tus-redis-locker.ts (3)

68-108: acquireLock always returns false on contention (lost recursive result) → immediate timeout.

You `await` a recursive call, then discard its result and `return false`, so `lock()` races and times out even if the lock becomes available later. Minimal fix: return the recursive result (and avoid a hot spin by adding a short delay).

```diff
 protected async acquireLock(
   id: string,
   requestRelease: RequestRelease,
   signal: AbortSignal,
 ): Promise<boolean> {
   if (signal.aborted) {
     return false;
   }

   const lockKey = `tus-lock-${id}`;
   const lock = await this.locker.redisClient.set(lockKey, "locked", {
     nx: true,
     px: this.timeout,
   });

   if (lock) {
-    // Register a release request flag in Redis
-    await this.locker.redisClient.set(`requestRelease:${lockKey}`, "true", {
-      px: this.timeout,
-    });
     return true;
   }

-  // Check if the release was requested
-  const releaseRequestStr: string | null = await this.locker.redisClient.get(
-    `requestRelease:${lockKey}`,
-  );
-  if (releaseRequestStr === "true") {
-    await requestRelease?.();
-  }
-
-  await new Promise((resolve, reject) => {
-    setImmediate(() => {
-      this.acquireLock(id, requestRelease, signal)
-        .then(resolve)
-        .catch(reject);
-    });
-  });
-
-  return false;
+  // Politely request release from current holder
+  await this.locker.redisClient.set(`requestRelease:${lockKey}`, "true", {
+    px: this.timeout,
+  });
+
+  // Retry later and propagate the result up the chain
+  return new Promise<boolean>((resolve, reject) => {
+    setTimeout(() => {
+      this.acquireLock(id, requestRelease, signal)
+        .then(resolve)
+        .catch(reject);
+    }, 100);
+  });
 }
```

Notes:
- Set the request flag on contention (not when you acquire).
- Use a small delay to avoid a CPU spin.
110-119: Unlock can delete another process's lock (no ownership check).

If the TTL expires and another worker acquires the lock, `del(lockKey)` will remove their lock. Store a unique token as the value and verify it in a Lua script on unlock (Redlock-style). I can provide a tiny Lua-based `DEL_IF_VALUE_MATCHES` snippet if you want to adopt that pattern; a sketch follows below.
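A minimal sketch of that pattern, for reference only (the helper names are hypothetical, and the client is assumed to expose `set(key, value, { nx, px })` and `eval(script, keys, args)` like the Upstash-style calls used elsewhere in this file; adjust to the actual client):

```typescript
import { randomUUID } from "crypto";

// Lua: delete the key only when its value equals the caller's token.
const DEL_IF_VALUE_MATCHES = `
if redis.call("GET", KEYS[1]) == ARGV[1] then
  return redis.call("DEL", KEYS[1])
end
return 0
`;

// Acquire the lock with a unique token instead of the constant "locked".
export async function acquireOwnedLock(
  redisClient: any,
  lockKey: string,
  ttlMs: number,
): Promise<string | null> {
  const token = randomUUID();
  const ok = await redisClient.set(lockKey, token, { nx: true, px: ttlMs });
  return ok ? token : null;
}

// Release only if we still own the lock; a lock that expired and was
// re-acquired by someone else holds a different token, so DEL is a no-op.
export async function releaseOwnedLock(
  redisClient: any,
  lockKey: string,
  token: string,
): Promise<void> {
  await redisClient.eval(DEL_IF_VALUE_MATCHES, [lockKey], [token]);
}
```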
83-98: Request-release signaling semantics inverted.

You're setting the `requestRelease:<lockKey>` flag when acquiring the lock and have waiting contenders read it to invoke their own `requestRelease()`; the actual lock holder never observes release requests. You must refactor so that:

- In single-process mode, you keep an in-memory map of holder callbacks by `id` and call the holder's `requestRelease` directly when another attempt contends.
- In distributed mode, the lock holder polls Redis (e.g. on a timer) for `requestRelease:<lockKey>` and invokes its provided `requestRelease` callback upon seeing it (see the polling sketch after this list).
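A rough sketch of the distributed-mode direction, illustrative only (the `redisClient` shape, key name, and polling interval are assumptions, not the PR's implementation):

```typescript
// Sketch: the lock *holder* polls for a release request while it owns the lock.
function watchForReleaseRequests(
  redisClient: { get(key: string): Promise<string | null> },
  lockKey: string,
  requestRelease: () => Promise<void> | void,
): () => void {
  const timer = setInterval(async () => {
    const flag = await redisClient.get(`requestRelease:${lockKey}`);
    if (flag === "true") {
      // Only the current holder reacts to the flag set by a contender.
      await requestRelease();
    }
  }, 500);

  // Caller stops polling when it releases the lock.
  return () => clearInterval(timer);
}
```

lib/files/tus-upload.ts (2)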
56-69: Pass the shared `completeResolve` into onSuccess (don't re-create it per attempt).

Keep the rest of the options as-is; only change how completion is resolved.

Apply:

```diff
-    const upload = new tus.Upload(file, {
+    const upload = new tus.Upload(file, {
       endpoint: `${process.env.NEXT_PUBLIC_BASE_URL}/api/file/tus`,
       retryDelays,
       uploadDataDuringCreation: true,
       removeFingerprintOnSuccess: true,
       metadata: {
         fileName: file.name,
         contentType: file.type,
         numPages: String(numPages),
         teamId: teamId,
         ownerId: ownerId,
         relativePath: relativePath,
       },
```

(No code change within this block; see onSuccess/timeout diffs below.)
158-174: Resolve the shared "complete" Promise here.

No other logic change.

Apply:

```diff
-          completeResolve({
+          completeResolve({
             id: newId,
             url: upload.url!,
             relativePath,
             fileName: file.name,
             fileType: file.type,
             numPages,
             ownerId,
             teamId,
           });
```

(Kept as-is; this is the spot that now resolves the outer `complete`.)
pages/api/file/tus/[[...file]].ts (1)

102-109: Missing await on getServerSession → always "authorized".

`getServerSession` returns a Promise; without `await`, the condition always passes and the route skips the 401.

Apply:

```diff
-export default function handler(req: NextApiRequest, res: NextApiResponse) {
+export default async function handler(req: NextApiRequest, res: NextApiResponse) {
   // Get the session
-  const session = getServerSession(req, res, authOptions);
+  const session = await getServerSession(req, res, authOptions);

   if (!session) {
     return res.status(401).json({ message: "Unauthorized" });
   }

   return tusServer.handle(req, res);
 }
```
lib/files/viewer-tus-upload.ts (1)

59-74: Expose the upload instance via resultHolder; metadata keys are correct.

- In lib/files/viewer-tus-upload.ts, after instantiating `upload`, add:

```ts
// Keep the outward-facing upload handle current
resultHolder.upload = upload;
```

- Server handlers reference `metadata.contentType`, so the existing `contentType` key is correct and needs no change.
🧹 Nitpick comments (5)
lib/files/tus-redis-locker.ts (1)
51-66: Unused AbortController; race cancellation is ineffective.

`abortController` is never wired to `acquireLock`/`waitTimeout` and does nothing. Simplify, or actually propagate a combined signal (see the sketch below).

Apply:

```diff
-    const abortController = new AbortController();
     const lock = await Promise.race([
       this.waitTimeout(signal),
       this.acquireLock(this.id, requestRelease, signal),
     ]);
-
-    abortController.abort();
```
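If the combined-signal route is preferred, one possible shape is sketched below (illustrative only; it assumes a runtime where `AbortSignal.any` may or may not exist, and the usage comment names variables from the surrounding review, not actual code):

```typescript
// Sketch: derive one signal that aborts when either the caller's signal fires
// or an internal controller does, and pass that into acquireLock/waitTimeout.
function combineSignals(external: AbortSignal, internal: AbortController): AbortSignal {
  if (typeof (AbortSignal as any).any === "function") {
    // Available in Node 20+ and modern browsers.
    return (AbortSignal as any).any([external, internal.signal]);
  }
  // Fallback: mirror the external abort onto the internal controller.
  if (external.aborted) internal.abort();
  else external.addEventListener("abort", () => internal.abort(), { once: true });
  return internal.signal;
}

// Usage idea inside lock():
// const internal = new AbortController();
// const combined = combineSignals(signal, internal);
// const lock = await Promise.race([
//   this.waitTimeout(combined),
//   this.acquireLock(this.id, requestRelease, combined),
// ]);
// internal.abort(); // cancel whichever branch lost the race
```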
lib/files/tus-upload.ts (2)

42-43: Browser typing: avoid NodeJS.Timeout.

This runs client-side; prefer `ReturnType<typeof setTimeout>` for compatibility.

Apply:

```diff
-  let networkTimeoutId: NodeJS.Timeout | undefined;
+  let networkTimeoutId: ReturnType<typeof setTimeout> | undefined;
```
122-157: Use a single TIMEOUT_MS constant and ensure the timeout resets/aborts per attempt.

Also make the messages consistent.

Apply:

```diff
-        networkTimeoutId = setTimeout(() => {
+        networkTimeoutId = setTimeout(() => {
           console.error(
-            `Network timeout after ${60_000}ms of no progress (attempt ${attempt + 1})`,
+            `Network timeout after ${TIMEOUT_MS}ms of no progress (attempt ${attempt + 1})`,
           );
           isTimedOut = true;
           upload.abort();
@@
-            onError?.(new Error("Network timeout after 60000ms"));
-            reject(new Error("Network timeout after 60000ms"));
+            const err = new Error(`Network timeout after ${TIMEOUT_MS}ms`);
+            onError?.(err);
+            reject(err);
           }
-        }, 60_000);
+        }, TIMEOUT_MS);
```
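The diff above references a `TIMEOUT_MS` constant that does not exist yet; a minimal sketch of it (name and value follow the reviewer's suggestion, not existing code):

```typescript
// Shared watchdog interval for the "no progress" timeouts referenced above;
// 60s mirrors the current hard-coded 60_000 values.
const TIMEOUT_MS = 60_000;
```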
lib/files/viewer-tus-upload.ts (2)

165-180: Safer ID extraction from the upload URL.

String split on a hardcoded path is brittle. Use URL parsing and keep the existing decode behavior.

Apply:

```diff
-      const id = upload.url!.split("/api/file/tus-viewer/")[1];
-      // if id contains a slash, then we use it as it otherwise we need to convert from buffer base64URL to utf-8
-      const newId = id.includes("/") ? id : decodeBase64Url(id);
+      // Extract id from URL robustly
+      const urlObj = new URL(upload.url!);
+      const pathAfter =
+        urlObj.pathname.split("/api/file/tus-viewer/")[1] ??
+        urlObj.pathname.slice(urlObj.pathname.lastIndexOf("/") + 1);
+      const newId = pathAfter.includes("/")
+        ? pathAfter
+        : decodeBase64Url(pathAfter);
```
46-49: Gate verbose logs behind a flag.

Reduce console noise in production by checking an env/config debug flag.

Example:

```diff
-      console.log(
+      if (process.env.NEXT_PUBLIC_DEBUG_UPLOADS === "1") console.log(
         `Starting viewer upload attempt ${attempt + 1}/${retryDelays.length}`,
       );
```

(Apply similarly to other logs; a small helper is sketched below.)

Also applies to: 76-80, 103-106, 135-138, 146-149, 167-167, 186-188, 197-198, 218-218, 224-224
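One way to avoid repeating the flag check at every call site is a tiny helper (illustrative only; the env var name `NEXT_PUBLIC_DEBUG_UPLOADS` is the reviewer's example, not an existing setting):

```typescript
// Drop-in replacement for the upload-related console.log calls.
const UPLOAD_DEBUG = process.env.NEXT_PUBLIC_DEBUG_UPLOADS === "1";

function debugLog(...args: unknown[]): void {
  if (UPLOAD_DEBUG) {
    console.log(...args);
  }
}

// Usage: debugLog(`Starting viewer upload attempt ${attempt + 1}/${retryDelays.length}`);
```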
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- components/upload-zone.tsx (1 hunks)
- lib/files/tus-redis-locker.ts (2 hunks)
- lib/files/tus-upload.ts (3 hunks)
- lib/files/viewer-tus-upload.ts (3 hunks)
- pages/api/file/tus-viewer/[[...file]].ts (1 hunks)
- pages/api/file/tus/[[...file]].ts (1 hunks)
🔇 Additional comments (7)
lib/files/tus-redis-locker.ts (1)
35-37: Defaulting to 60s is sensible and consistent with the rest of the PR.

Aligns the lock acquisition window with the API/function timeouts.

lib/files/tus-upload.ts (1)
225-232: Return the first handle plus the single shared completion.

After the refactor, resolve exactly once.

Suggested pattern (context only; ensure the surrounding edits are applied first):

```diff
-      const result = createUpload();
-      resolve(result);
+      const { upload } = createUpload();
+      resolve({ upload, complete });
```

Please run a quick manual test that `complete` resolves after a retry and that the UI progresses to 100%.

pages/api/file/tus/[[...file]].ts (1)
18-23: Confirm that the Next.js pages API supports `maxDuration` here.

On Vercel, `export const config = { maxDuration }` is honored, but behavior can vary across runtimes. Verify that the deployment target recognizes it for pages API routes. If unsupported, consider moving to an `app` router API route or using platform config.
17-22: Confirm `maxDuration` support for this pages API route as well.

Same concern as the main TUS route; verify on the target platform (a sketch of the config export in question is shown below).
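For reference, the shape being discussed is roughly the following; this is a sketch, not the PR's exact values, the 60-second figure is assumed from the lock timeout discussed above, and the `bodyParser: false` setting is only a typical choice for TUS routes, not something confirmed by this diff:

```typescript
// pages/api/file/tus-viewer/[[...file]].ts (or the main tus route)
// Vercel reads maxDuration from the route's config export on supported plans;
// other hosts may ignore it, which is exactly what the reviewer asks to verify.
export const config = {
  maxDuration: 60, // seconds
  api: {
    bodyParser: false, // TUS handlers usually consume the raw request body themselves
  },
};
```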
lib/files/viewer-tus-upload.ts (3)

119-128: onShouldRetry returns true unconditionally; confirm the desired interplay with manual retries.

With both TUS internal retries and our manual attempt restarts, backoffs may compound. Consider restricting `onShouldRetry` to transient HTTP codes only, or trimming the `retryDelays` passed to TUS when manual timeouts are enabled (see the sketch below).
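A possible shape for the restricted callback, illustrative only (the status-code list mirrors the retryable set used elsewhere in this file and is an assumption, not settled policy):

```typescript
import * as tus from "tus-js-client";

// Only let tus-js-client retry on transient conditions; everything else is
// surfaced immediately so the manual retry logic stays in control.
const TRANSIENT_STATUSES = new Set([0, 408, 429, 502, 503, 504]);

const onShouldRetry = (error: tus.DetailedError): boolean => {
  const status = error.originalResponse?.getStatus() ?? 0;
  return TRANSIENT_STATUSES.has(status);
};
```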
213-227: Resume-from-previous-upload handling looks good.

Correctly resumes the first prior upload and starts it; the fallback on lookup failure is sensible.

35-38: All `viewerUpload` call sites are updated to use the new `{ upload, complete }` return shape.
```ts
// Provide more specific error messages
let errorMessage = "Error uploading file";
if (error.message?.toLowerCase().includes("timeout")) {
  errorMessage = "Upload timed out. Please try again.";
} else if (error.message?.toLowerCase().includes("network")) {
  errorMessage =
    "Network error. Please check your connection and try again.";
} else if ((error as any).originalResponse) {
  const status = (error as any).originalResponse.getStatus();
  if (status === 413) {
    errorMessage = "File too large for upload";
  } else if (status >= 500) {
    errorMessage = "Server error. Please try again later.";
  } else if (status === 403) {
    errorMessage = "Upload not permitted";
  }
}
```
🛠️ Refactor suggestion

Harden error parsing; don't assume `getStatus` exists.

Guard against different shapes of `originalResponse` and include 401.

Apply:

```diff
-            if (error.message?.toLowerCase().includes("timeout")) {
+            if (error.message?.toLowerCase().includes("timeout")) {
               errorMessage = "Upload timed out. Please try again.";
             } else if (error.message?.toLowerCase().includes("network")) {
               errorMessage =
                 "Network error. Please check your connection and try again.";
             } else if ((error as any).originalResponse) {
-              const status = (error as any).originalResponse.getStatus();
+              const resp = (error as any).originalResponse;
+              const status =
+                typeof resp?.getStatus === "function"
+                  ? resp.getStatus()
+                  : typeof resp?.status === "number"
+                    ? resp.status
+                    : undefined;
-              if (status === 413) {
+              if (status === 413) {
                 errorMessage = "File too large for upload";
-              } else if (status >= 500) {
+              } else if (status === 401 || status === 403) {
+                errorMessage = "Upload not permitted";
+              } else if (typeof status === "number" && status >= 500) {
                 errorMessage = "Server error. Please try again later.";
-              } else if (status === 403) {
-                errorMessage = "Upload not permitted";
               }
             }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
// Provide more specific error messages
let errorMessage = "Error uploading file";
if (error.message?.toLowerCase().includes("timeout")) {
  errorMessage = "Upload timed out. Please try again.";
} else if (error.message?.toLowerCase().includes("network")) {
  errorMessage =
    "Network error. Please check your connection and try again.";
} else if ((error as any).originalResponse) {
  const resp = (error as any).originalResponse;
  const status =
    typeof resp?.getStatus === "function"
      ? resp.getStatus()
      : typeof resp?.status === "number"
        ? resp.status
        : undefined;
  if (status === 413) {
    errorMessage = "File too large for upload";
  } else if (status === 401 || status === 403) {
    errorMessage = "Upload not permitted";
  } else if (typeof status === "number" && status >= 500) {
    errorMessage = "Server error. Please try again later.";
  }
}
```
🤖 Prompt for AI Agents
In components/upload-zone.tsx around lines 254 to 271, the error parsing assumes
originalResponse has getStatus and doesn't handle different shapes or a 401;
update the logic to safely inspect originalResponse (check for null/undefined),
detect status via multiple possible properties (e.g., getStatus(), status,
statusCode) only after verifying those members exist, and default to a generic
error when none found; add handling for 401 to set a clear "Unauthorized" or
"Authentication required" message and ensure all accesses are guarded to avoid
runtime errors.
```ts
  return new Promise((resolve, reject) => {
    let attempt = 0;
    let networkTimeoutId: NodeJS.Timeout | undefined;

    const createUpload = () => {
      console.log(
        `Starting viewer upload attempt ${attempt + 1}/${retryDelays.length}`,
      );

      let completeResolve: (
        value: UploadResult | PromiseLike<UploadResult>,
      ) => void;
      const complete = new Promise<UploadResult>((res) => {
        completeResolve = res;
      });

      let isTimedOut = false;
```
Broken promise semantics and per-attempt “complete” cause hangs and lost errors.

- The outer Promise resolves immediately (Line 234) and later `reject(...)` calls are ignored.
- Each retry creates a new `complete` promise, but the caller still holds the first one, which never resolves/rejects.

Refactor to a single, shared `complete` across attempts and a stable result holder returned once.

Apply:

```diff
-  return new Promise((resolve, reject) => {
-    let attempt = 0;
-    let networkTimeoutId: NodeJS.Timeout | undefined;
+  return new Promise((resolve) => {
+    let attempt = 0;
+    let networkTimeoutId: ReturnType<typeof setTimeout> | undefined;
+
+    let completeResolve!: (value: UploadResult) => void;
+    let completeReject!: (reason?: unknown) => void;
+    const complete = new Promise<UploadResult>((res, rej) => {
+      completeResolve = res;
+      completeReject = rej;
+    });
+    const resultHolder: { upload: tus.Upload; complete: Promise<UploadResult> } = {
+      upload: undefined as unknown as tus.Upload,
+      complete,
+    };
     const createUpload = () => {
       console.log(
         `Starting viewer upload attempt ${attempt + 1}/${retryDelays.length}`,
       );
-      let completeResolve: (
-        value: UploadResult | PromiseLike<UploadResult>,
-      ) => void;
-      const complete = new Promise<UploadResult>((res) => {
-        completeResolve = res;
-      });
+      // `complete` is shared across attempts (defined above).
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
  return new Promise((resolve) => {
    let attempt = 0;
    let networkTimeoutId: ReturnType<typeof setTimeout> | undefined;

    // Single, shared completion promise across all attempts:
    let completeResolve!: (value: UploadResult) => void;
    let completeReject!: (reason?: unknown) => void;
    const complete = new Promise<UploadResult>((res, rej) => {
      completeResolve = res;
      completeReject = rej;
    });
    const resultHolder: { upload: tus.Upload; complete: Promise<UploadResult> } = {
      upload: undefined as unknown as tus.Upload,
      complete,
    };

    const createUpload = () => {
      console.log(
        `Starting viewer upload attempt ${attempt + 1}/${retryDelays.length}`,
      );
      // `complete` is shared across attempts (defined above).
      let isTimedOut = false;
      // …rest of createUpload…
```
```ts
  return new Promise((resolve, reject) => {
    let attempt = 0;
    let networkTimeoutId: NodeJS.Timeout | undefined;
```
🛠️ Refactor suggestion
Use a browser-safe timeout type.

`NodeJS.Timeout` can clash in DOM builds. Prefer `ReturnType<typeof setTimeout>`.

(Already included in the larger diff above.)
🤖 Prompt for AI Agents
In lib/files/viewer-tus-upload.ts around line 43, the timeout variable is typed
as NodeJS.Timeout which can cause DOM build clashes; change its type to the
browser-safe ReturnType<typeof setTimeout> (i.e. declare let networkTimeoutId:
ReturnType<typeof setTimeout> | undefined) so it works in both Node and browser
environments.
```ts
onError: (error) => {
  onError && onError(error);
  console.error("Failed because: " + error);
  if (networkTimeoutId) clearTimeout(networkTimeoutId);
  console.error(
    `TUS viewer onError called on attempt ${attempt + 1}:`,
    error,
  );

  if (isTimedOut) {
    console.log(
      "Error was caused by our manual timeout, handling retry...",
    );
    return; // Let the timeout handler deal with retries
  }

  // Check if we should retry this error
  const shouldRetry = attempt < retryDelays.length - 1;
  const detailedError = error as tus.DetailedError;
  const isRetryableError =
    !detailedError.originalResponse || // Network error
    error.message?.toLowerCase().includes("timeout") ||
    error.message?.toLowerCase().includes("network") ||
    error.message?.toLowerCase().includes("err_timed_out") ||
    [0, 502, 503, 504].includes(
      detailedError.originalResponse?.getStatus() || 0,
    );

  if (shouldRetry && isRetryableError) {
    attempt++;
    const delay = retryDelays[attempt] || 10000;
    console.log(
      `Retrying viewer upload in ${delay}ms (attempt ${attempt + 1}/${retryDelays.length})`,
    );

    setTimeout(() => {
      const newUpload = createUpload();
      resolve(newUpload);
    }, delay);
    return;
  }

  // No more retries or non-retryable error
  console.error("Viewer upload failed after all retries:", error);
  onError?.(error);
  reject(error);
},
```
Do not re-resolve the outer promise on retries; reject the shared `complete` on terminal failure.

`resolve(newUpload)` is a no-op after the first resolve; plus `reject(error)` won’t propagate. Restart attempts internally and drive errors through `completeReject`.

Apply:

```diff
           if (shouldRetry && isRetryableError) {
             attempt++;
             const delay = retryDelays[attempt] || 10000;
             console.log(
               `Retrying viewer upload in ${delay}ms (attempt ${attempt + 1}/${retryDelays.length})`,
             );
-            setTimeout(() => {
-              const newUpload = createUpload();
-              resolve(newUpload);
-            }, delay);
+            setTimeout(() => {
+              createUpload();
+            }, delay);
             return;
           }

           // No more retries or non-retryable error
           console.error("Viewer upload failed after all retries:", error);
           onError?.(error);
-          reject(error);
+          completeReject(error);
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
onError: (error) => {
  if (networkTimeoutId) clearTimeout(networkTimeoutId);
  console.error(
    `TUS viewer onError called on attempt ${attempt + 1}:`,
    error,
  );
  if (isTimedOut) {
    console.log(
      "Error was caused by our manual timeout, handling retry...",
    );
    return; // Let the timeout handler deal with retries
  }
  // Check if we should retry this error
  const shouldRetry = attempt < retryDelays.length - 1;
  const detailedError = error as tus.DetailedError;
  const isRetryableError =
    !detailedError.originalResponse || // Network error
    error.message?.toLowerCase().includes("timeout") ||
    error.message?.toLowerCase().includes("network") ||
    error.message?.toLowerCase().includes("err_timed_out") ||
    [0, 502, 503, 504].includes(
      detailedError.originalResponse?.getStatus() || 0,
    );
  if (shouldRetry && isRetryableError) {
    attempt++;
    const delay = retryDelays[attempt] || 10000;
    console.log(
      `Retrying viewer upload in ${delay}ms (attempt ${attempt + 1}/${retryDelays.length})`,
    );
    setTimeout(() => {
      createUpload();
    }, delay);
    return;
  }
  // No more retries or non-retryable error
  console.error("Viewer upload failed after all retries:", error);
  onError?.(error);
  completeReject(error);
},
```
🤖 Prompt for AI Agents
lib/files/viewer-tus-upload.ts lines 74-118: the retry branch currently calls
resolve(newUpload) which attempts to re-resolve the outer promise (a no-op after
the first resolve) and prevents terminal errors from propagating; change the
retry flow to not call resolve when scheduling a retry — instead restart the
upload internally (e.g., call createUpload and continue the internal retry
logic) and ensure the final, non-retryable error path calls the shared
completeReject(error) (or the outer rejection handler) rather than reject/error
on an already-resolved promise so terminal failures propagate correctly.
```ts
onProgress: (bytesUploaded, bytesTotal) => {
  // Reset timeout on any progress
  if (networkTimeoutId) clearTimeout(networkTimeoutId);

  // Set a new timeout
  networkTimeoutId = setTimeout(() => {
    console.error(
      `Viewer upload network timeout after ${60_000}ms of no progress (attempt ${attempt + 1})`,
    );
    isTimedOut = true;
    upload.abort();

    // Handle retry after timeout
    const shouldRetry = attempt < retryDelays.length - 1;
    if (shouldRetry) {
      attempt++;
      const delay = retryDelays[attempt] || 10000;
      console.log(
        `Retrying viewer upload after network timeout in ${delay}ms (attempt ${attempt + 1}/${retryDelays.length})`,
      );

      setTimeout(() => {
        const newUpload = createUpload();
        resolve(newUpload);
      }, delay);
    } else {
      console.error(
        "Viewer upload failed after network timeout with no retries left",
      );
      onError?.(new Error(`Network timeout after ${60_000}ms`));
      reject(new Error(`Network timeout after ${60_000}ms`));
    }
  }, 60_000);

  onProgress?.(bytesUploaded, bytesTotal);
},
```
Timeout retry path also re-resolves. Restart internally and fail via `completeReject`.

Same issue as above within the progress-timeout handler.

Apply:

```diff
               setTimeout(() => {
-                const newUpload = createUpload();
-                resolve(newUpload);
+                createUpload();
               }, delay);
             } else {
               console.error(
                 "Viewer upload failed after network timeout with no retries left",
               );
-              onError?.(new Error(`Network timeout after ${60_000}ms`));
-              reject(new Error(`Network timeout after ${60_000}ms`));
+              const err = new Error(`Network timeout after ${60_000}ms`);
+              onError?.(err);
+              completeReject(err);
             }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
onProgress: (bytesUploaded, bytesTotal) => {
  // Reset timeout on any progress
  if (networkTimeoutId) clearTimeout(networkTimeoutId);

  // Set a new timeout
  networkTimeoutId = setTimeout(() => {
    console.error(
      `Viewer upload network timeout after ${60_000}ms of no progress (attempt ${attempt + 1})`,
    );
    isTimedOut = true;
    upload.abort();

    // Handle retry after timeout
    const shouldRetry = attempt < retryDelays.length - 1;
    if (shouldRetry) {
      attempt++;
      const delay = retryDelays[attempt] || 10000;
      console.log(
        `Retrying viewer upload after network timeout in ${delay}ms (attempt ${attempt + 1}/${retryDelays.length})`,
      );

      setTimeout(() => {
        // Restart internally without re-resolving the outer promise
        createUpload();
      }, delay);
    } else {
      console.error(
        "Viewer upload failed after network timeout with no retries left",
      );
      // Use completeReject for the final failure path
      const err = new Error(`Network timeout after ${60_000}ms`);
      onError?.(err);
      completeReject(err);
    }
  }, 60_000);

  onProgress?.(bytesUploaded, bytesTotal);
},
```
🤖 Prompt for AI Agents
In lib/files/viewer-tus-upload.ts around lines 129-164 the progress-timeout
handler can re-resolve the original promise and also call reject directly —
causing duplicate/incorrect settlement; change the logic so the handler aborts
and marks isTimedOut, clears timeouts, and if a retry is scheduled internally
restart the upload flow and only resolve the outer promise once with the new
upload (ensuring the original promise isn't re-resolved), but when no retries
remain fail via completeReject (not reject/resolve) so the rejection path goes
through the centralized completion handler; ensure you do not call
resolve/reject after completeReject and guard against double-settlement by
checking a settled flag before resolving or rejecting.
```ts
// Set initial timeout
networkTimeoutId = setTimeout(() => {
  console.error(
    `Viewer upload initial network timeout after ${60_000}ms (attempt ${attempt + 1})`,
  );
  isTimedOut = true;
  upload.abort();

  // Handle retry after timeout
  const shouldRetry = attempt < retryDelays.length - 1;
  if (shouldRetry) {
    attempt++;
    const delay = retryDelays[attempt] || 10000;
    console.log(
      `Retrying viewer upload after initial timeout in ${delay}ms (attempt ${attempt + 1}/${retryDelays.length})`,
    );

    setTimeout(() => {
      const newUpload = createUpload();
      resolve(newUpload);
    }, delay);
  } else {
    console.error(
      "Viewer upload failed after initial timeout with no retries left",
    );
    onError?.(new Error("Initial network timeout after 60000ms"));
    reject(new Error("Initial network timeout after 60000ms"));
  }
}, 60_000);
```
Initial-timeout path has the same resolve/reject issues; align with the shared `complete`.

Avoid re-resolving, and reject via `completeReject` when out of retries.

Apply:

```diff
-          setTimeout(() => {
-            const newUpload = createUpload();
-            resolve(newUpload);
-          }, delay);
+          setTimeout(() => {
+            createUpload();
+          }, delay);
         } else {
           console.error(
             "Viewer upload failed after initial timeout with no retries left",
           );
-          onError?.(new Error("Initial network timeout after 60000ms"));
-          reject(new Error("Initial network timeout after 60000ms"));
+          const err = new Error("Initial network timeout after 60000ms");
+          onError?.(err);
+          completeReject(err);
         }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
// Set initial timeout
networkTimeoutId = setTimeout(() => {
  console.error(
    `Viewer upload initial network timeout after ${60_000}ms (attempt ${attempt + 1})`,
  );
  isTimedOut = true;
  upload.abort();

  // Handle retry after timeout
  const shouldRetry = attempt < retryDelays.length - 1;
  if (shouldRetry) {
    attempt++;
    const delay = retryDelays[attempt] || 10000;
    console.log(
      `Retrying viewer upload after initial timeout in ${delay}ms (attempt ${attempt + 1}/${retryDelays.length})`,
    );
    setTimeout(() => {
      createUpload();
    }, delay);
  } else {
    console.error(
      "Viewer upload failed after initial timeout with no retries left",
    );
    const err = new Error("Initial network timeout after 60000ms");
    onError?.(err);
    completeReject(err);
  }
}, 60_000);
```
🤖 Prompt for AI Agents
In lib/files/viewer-tus-upload.ts around lines 183 to 212, the initial-timeout
branch currently calls resolve(newUpload) and reject(...) directly which can
re-resolve/reject outside the shared completion logic; change it to use the
shared completion handlers (call completeResolve(newUpload) for retry path) and
call completeReject(new Error(...)) when out of retries, and ensure you guard
with the same "completed/isTimedOut" check used by the shared complete logic so
no duplicate resolve/reject occurs.
```ts
  // Start the first upload attempt
  try {
    const result = createUpload();
    resolve(result);
  } catch (error) {
    reject(error);
  }
});
```
🛠️ Refactor suggestion
Resolve once with a stable holder; route synchronous errors into `completeReject`.

Ensures callers always get the handle immediately while completion flows through `complete`.

Apply:

```diff
     // Start the first upload attempt
     try {
-      const result = createUpload();
-      resolve(result);
+      createUpload();
+      resolve(resultHolder);
     } catch (error) {
-      reject(error);
+      resolve(resultHolder);
+      completeReject(error as Error);
     }
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In lib/files/viewer-tus-upload.ts around lines 231 to 238, the current code
resolves the promise with the result of createUpload() directly and rejects on
synchronous errors, which causes callers to not always receive a stable handle;
change it to construct a stable holder object (the upload handle) and resolve
immediately with that holder, then invoke createUpload() inside a try/catch and
route any synchronous exception into completeReject(error) so completion flows
through the existing `complete`/`completeReject` path; ensure you only call
resolve once (resolve(holder)) and never resolve with the raw result of
createUpload.
Summary by CodeRabbit
New Features
Chores