Commit bf3a68f
docs: add limitations section to documentation and update README
- Introduced a new documentation page detailing architectural limitations of Pushduck, including type sharing requirements, backend support, and upload constraints.
- Updated README to highlight the unopinionated nature of the library, emphasizing user control over authentication, processing, and storage lifecycle.
1 parent 413637a commit bf3a68f

4 files changed

Lines changed: 182 additions & 1 deletion

.gitignore

Lines changed: 4 additions & 1 deletion
```diff
@@ -16,4 +16,7 @@ public/dist
 **/coverage
 
 # Test projects for CLI testing
-test-projects/
+test-projects/
+
+.claude
+notes/
```

README.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -45,6 +45,7 @@ Upload files directly to S3-compatible storage with just 3 lines of code. No hea
 - **Edge Runtime** - Runs on Vercel Edge, Cloudflare Workers, and more
 - **Progress Tracking** - Real-time progress, upload speed, and ETA estimation
 - **Lifecycle Callbacks** - Complete upload control with `onStart`, `onProgress`, `onSuccess`, and `onError`
+- **Unopinionated** - You control auth, processing, and storage lifecycle
 - **Storage Operations** - Complete file management API (list, delete, metadata)
 - **Production Ready** - Used in production by many applications
 
```

docs/content/docs/limitations.mdx

Lines changed: 176 additions & 0 deletions
---
title: Limitations
description: Real architectural boundaries — read this before adopting to scope your project correctly.
icon: Ruler
---
Pushduck aims to be universal, but there are real architectural boundaries. This page is honest about what it **does not** do, so you can scope your project correctly before adopting it.

## Type sharing requires a shared TypeScript codebase

Pushduck's typesafe router (`InferClientRouter<typeof router>`) works by importing the router's type from your server file directly into your client. This requires:

- Backend and frontend in the **same TypeScript project or monorepo**, or
- A **shared package** both sides can import types from, or
- Manually exporting and versioning the router type alongside your API

**What works:**

- Fullstack Next.js / Remix / SvelteKit / Nuxt — backend route and frontend hook live in the same TS project. `InferClientRouter` works out of the box.
- Monorepos where `packages/api` and `apps/web` share a TS boundary.
- Any setup where you can `import type { AppRouter } from "../server/router"`.

**What doesn't work automatically:**

- **Separate frontend and backend repositories.** You have two choices:
  1. Publish a tiny types-only package from your backend repo and consume it from the frontend.
  2. Use the REST contract directly (Pushduck still works end-to-end — you just lose route-name autocomplete and inferred metadata types).
- **Git-submodule setups** without a TS path alias between them.
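The shared-type pattern can be sketched as follows. To keep the snippet self-contained, `uploadRouter` is a stand-in plain object; in a real app it would come from `s3.createRouter`, and the `createUploadClient` usage appears only in comments.

```typescript
// server/router.ts (sketch) — in a real app this object comes from
// s3.createRouter({ ... }); a stand-in is used here so the pattern is visible.
const uploadRouter = {
  imageUpload: { /* s3.image().maxFileSize("5MB") ... */ },
  documentUpload: { /* s3.file() ... */ },
} as const;

// Export the *type*, not the value — the client only needs the shape.
export type AppRouter = typeof uploadRouter;

// client/upload.ts (sketch) — a type-only import adds zero runtime code,
// so no server code leaks into the browser bundle:
//   import type { AppRouter } from "../server/router";
//   const upload = createUploadClient<AppRouter>({ endpoint: "/api/upload" });

const routeNames = Object.keys(uploadRouter); // ["imageUpload", "documentUpload"]
```

The type-only import is what makes option 1 above (a published types-only package) viable: the package can be empty at runtime.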
## Non-TypeScript backends are not supported for typesafe routes

If your backend is in **Python, Go, Rust, Ruby, PHP, Java, Elixir, or anything else**, Pushduck's server-side API (`createUploadConfig`, `s3.createRouter`, adapters) is not usable. You cannot define routes with a non-TS server.

**What you can still do:**

- Implement the presigned URL endpoints yourself in your backend language. Pushduck's client is pure fetch/XHR against a documented REST contract, so you can point the client at any server that returns the expected JSON shape.
- Use `createUploadClient` on the frontend with `endpoint: "/api/upload"` pointing to your non-TS backend, and handle the `presign` / `complete` actions manually on the server side.

You lose:

- Typesafe route names
- Automatic metadata inference
- The file schema validation built into `s3.image()` / `s3.file()` chains

You keep:

- XHR-based progress tracking
- Multi-file uploads
- The presigned URL flow

Think of Pushduck as a **TypeScript-first library** in this sense — cross-language support is a REST contract, not a code contract.
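As an illustration of the manual REST path, a client-side response check might look like the sketch below. The `{ url, key }` shape and the `presign` / `complete` action names are assumptions for this example, not the documented contract; verify the real shape against Pushduck's protocol docs.

```typescript
// Sketch of consuming a presign endpoint implemented in any backend language.
// ASSUMPTION: the { url, key } response shape is illustrative only.
interface PresignResponse {
  url: string; // presigned PUT URL returned by your backend
  key: string; // object key the backend chose
}

function isPresignResponse(value: unknown): value is PresignResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.url === "string" && typeof v.key === "string";
}

// Manual flow (no typesafe router, works against a Go/Python/Rust backend):
// 1. POST your presign endpoint with { name, type, size }
// 2. Validate the JSON with isPresignResponse before trusting it
// 3. PUT the file bytes to response.url via XHR (for progress events)
// 4. POST your complete endpoint with { key }
```

A runtime guard like this is the cross-language substitute for the inferred types you give up.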
## Non-React frontends have no shipped hook

`useUploadRoute` and `createUploadClient` currently target React. There is no shipped hook for:

- Vue
- Svelte
- Solid
- Angular
- Vanilla JS / Web Components

**What you can still do:**

The underlying logic is in `packages/pushduck/src/client/upload-client.ts` — a small set of functions that presign, upload via XHR, and complete. You can wrap it yourself in any framework's reactive primitive. A Vue composable or a Svelte store would be ~50 lines. Contributions welcome.
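One possible shape for such a wrapper, sketched as a framework-agnostic store with the upload function injected. Nothing here is Pushduck's actual client API; `performUpload` stands in for whatever the upload-client functions expose.

```typescript
// A framework-agnostic upload-state store — roughly what a Vue composable or
// Svelte store would wrap. `performUpload` is injected so the store stays
// portable and testable.
type UploadState =
  | { status: "idle" }
  | { status: "uploading"; progress: number }
  | { status: "done"; url: string }
  | { status: "error"; message: string };

function createUploadStore(
  performUpload: (
    file: { name: string },
    onProgress: (pct: number) => void,
  ) => Promise<{ url: string }>,
) {
  let state: UploadState = { status: "idle" };
  const listeners = new Set<(s: UploadState) => void>();
  const set = (next: UploadState) => {
    state = next;
    listeners.forEach((fn) => fn(state));
  };
  return {
    // Svelte-store-style subscribe: fires immediately, returns unsubscribe.
    subscribe(fn: (s: UploadState) => void) {
      listeners.add(fn);
      fn(state);
      return () => listeners.delete(fn);
    },
    get state() { return state; },
    async upload(file: { name: string }) {
      set({ status: "uploading", progress: 0 });
      try {
        const { url } = await performUpload(file, (pct) =>
          set({ status: "uploading", progress: pct }),
        );
        set({ status: "done", url });
      } catch (err) {
        set({ status: "error", message: (err as Error).message });
      }
    },
  };
}
```

A Vue composable would expose `state` via `ref` and call `subscribe` in `onMounted`; a Svelte component can consume `subscribe` directly.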
## Node server-side streaming uploads are out of scope

Pushduck's core upload flow is **client → presigned URL → S3 directly**. The library is not a Node streaming upload helper.

If you need to:

- Accept a multipart upload in a Node server handler and stream it to S3, or
- Upload a file *from* your Node server *to* S3 (e.g. image processing pipelines)

you want the `Upload` helper from `@aws-sdk/lib-storage` (built on `@aws-sdk/client-s3`) or `aws4fetch` directly. Pushduck's storage API (`s3.put`, `s3.delete`, `s3.list`) supports server-side one-shot operations, but it is not optimized for streaming large files through your Node process.
## No pause, resume, or automatic retry on network interruption

Pushduck uses a **single PUT request per file** to a presigned URL. This is the simplest flow S3 offers and it works everywhere, but it has hard limits:

- **No pause button.** Once an upload starts, there is no API to suspend it and continue later. The only way to stop is to abort the XHR entirely, which discards all bytes already sent.
- **No resume after a dropped connection.** If the network drops mid-upload — Wi-Fi disconnects, the user walks into an elevator, a mobile connection flips to airplane mode — the XHR errors out and the already-uploaded bytes are lost on the server side. The next attempt starts over from byte 0.
- **No automatic retry.** Pushduck does not retry failed uploads. If an upload errors, `onError` fires and the file is marked failed. Retry logic is yours to build on top (call `uploadFiles` again with the same file).
- **No progress persistence across page reloads.** Refresh the tab mid-upload and the upload state is gone — there is no IndexedDB queue, no background sync worker, no service worker fallback.

**Why this is deliberate:** S3 multipart uploads *do* support resumable transfer, but they require orchestrating ~5 MB chunks, tracking part numbers, committing/aborting the multipart session, and handling partial-state cleanup when a browser tab dies. That's a different library shape from what Pushduck is — it would roughly double the surface area and complicate the auth/middleware story.

**What you can still do:**

- For small-to-medium files (under ~100 MB on a decent connection), the single-PUT flow is fine in practice. A dropped upload is a rare event and "try again" is an acceptable UX.
- Wrap `uploadFiles` in your own retry logic: catch the error in `onError`, wait with backoff, call `uploadFiles` again.
- Show a "your upload was interrupted — tap to retry" UI, since reliable retry requires user intent anyway.
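A minimal sketch of that retry advice, assuming your `uploadFiles` call is wrapped in a zero-argument closure. The attempt count and delays are arbitrary example values, not library defaults.

```typescript
// Retry-with-backoff wrapper around a single-PUT upload. Each retry restarts
// from byte 0 — there is no resumable state in this flow.
async function uploadWithRetry<T>(
  uploadFiles: () => Promise<T>,
  { attempts = 3, baseDelayMs = 500 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await uploadFiles();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // surface the final failure to your onError handling
}
```

Usage: `await uploadWithRetry(() => upload.imageUpload.uploadFiles(files))`, with a "tap to retry" fallback once the wrapper gives up.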
**When to look elsewhere:**

- If you need true resumable uploads for very large files (larger than 500 MB) on flaky connections — e.g. mobile users uploading video — use `tus-js-client` (resumable protocol with server-side state) or S3 multipart uploads directly via the AWS SDK.
- If you need background uploads that survive tab closure — service workers with the Background Sync API, or a native app wrapper.
## No multipart uploads — effective file size ceiling

Pushduck uploads each file as a **single HTTP PUT to a presigned URL**. S3's multipart upload API (which splits a file into chunks uploaded independently and then committed as one object) is **not implemented**.

This puts a practical ceiling on how large a file Pushduck can upload reliably:

- **S3 hard limit for a single PUT:** 5 GB. Anything larger is rejected by S3 itself with `EntityTooLarge`.
- **Practical limit on mobile / flaky networks:** much lower, often 100–500 MB. Larger single PUTs become increasingly unlikely to complete in one uninterrupted attempt.
- **Memory pressure on the client:** on React Native, `fetch(uri).blob()` reads the entire file into memory before uploading. Very large files can OOM the app. Web `File` objects are streamed by the browser, so desktop is less affected, but still bounded by tab memory.

**What multipart uploads would unlock (and what you lose without them):**

- Uploading files larger than 5 GB (up to 5 TB, S3's hard ceiling)
- Parallel chunk uploads for faster throughput on fast links
- Resume-from-last-committed-chunk after a network failure
- Lower peak memory because each chunk is uploaded and released independently

**What to do right now if you need to upload very large files:**

- For files up to a few hundred MB: Pushduck works, but set an explicit `maxFileSize` on your route (`s3.file().maxFileSize("500MB")`) and communicate the limit to users in the UI.
- For files larger than that: you'll need to implement S3 multipart yourself using the `Upload` helper from `@aws-sdk/lib-storage` on the server, or use a resumable protocol like tus. Pushduck is not the right tool for terabyte-class uploads.
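Mirroring the route's limit client-side avoids starting a PUT that the server will reject anyway. The parser below is an illustrative helper, not Pushduck's implementation, and the decimal-unit convention (1 MB = 10^6 bytes) is an assumption; verify it against the library's actual `maxFileSize` semantics.

```typescript
// Pre-flight size check mirroring a route's maxFileSize("500MB") limit.
// ASSUMPTION: decimal units — confirm against the library before relying on it.
function parseSize(limit: string): number {
  const match = /^(\d+(?:\.\d+)?)\s*(B|KB|MB|GB)$/i.exec(limit.trim());
  if (!match) throw new Error(`Unparseable size limit: ${limit}`);
  const units: Record<string, number> = { B: 1, KB: 1e3, MB: 1e6, GB: 1e9 };
  return Number(match[1]) * units[match[2].toUpperCase()];
}

function exceedsLimit(fileSizeBytes: number, limit: string): boolean {
  return fileSizeBytes > parseSize(limit);
}
```

Check `file.size` with `exceedsLimit` before calling `uploadFiles`, and show the limit in the picker UI.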
Multipart support is on the roadmap but not yet shipped. If this is a blocker for your use case, open an issue — it helps prioritize.
## Service-worker / tab-close uploads are not supported

Uploads run in the same JS context that called `uploadFiles`. Closing the tab, navigating away, losing a mobile app to the background (iOS/Android suspend JS execution on app switch or screen lock), or force-quitting the app will cancel the in-flight XHR. Pushduck does not:

- Register a service worker to continue uploads in the background
- Persist upload queues in IndexedDB across page reloads
- Use the Background Fetch or Background Sync APIs

If you need uploads that survive tab closure, Pushduck is not the right tool — look at a service-worker-based upload queue or a native app.

## Upload progress requires XHR, not fetch

Progress tracking uses `XMLHttpRequest.upload.onprogress`. This means:

- Upload progress works in all browsers and React Native (XHR is polyfilled in RN 0.68+)
- Upload progress does **not** work in environments without XHR (some edge runtimes, Deno server-side, Node without a polyfill)
- You cannot swap the transport to `fetch` without losing progress — `fetch` has no upload progress API on any runtime today

This is a deliberate tradeoff. Progress is more valuable than transport flexibility for this library's use case.
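The arithmetic behind percent, speed, and ETA is simple enough to sketch. The function below is illustrative, not Pushduck's internal implementation; the XHR wiring appears only as a comment since no DOM is assumed here.

```typescript
// Deriving percent / speed / ETA from ProgressEvent-style byte counts.
// Browser wiring sketch:
//   const startedAt = Date.now();
//   xhr.upload.onprogress = (e) =>
//     render(progressStats(e.loaded, e.total, startedAt, Date.now()));
function progressStats(
  loadedBytes: number,
  totalBytes: number,
  startedAtMs: number,
  nowMs: number,
) {
  // Clamp elapsed time to avoid division by zero on the first event.
  const elapsedSec = Math.max((nowMs - startedAtMs) / 1000, 0.001);
  const bytesPerSec = loadedBytes / elapsedSec;
  const remaining = totalBytes - loadedBytes;
  return {
    percent: Math.round((loadedBytes / totalBytes) * 100),
    bytesPerSec,
    etaSec: bytesPerSec > 0 ? remaining / bytesPerSec : Infinity,
  };
}
```

`fetch` never delivers the `loaded` counter these numbers depend on, which is exactly why the transport is XHR.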
## Storage providers are S3-compatible only

Pushduck supports **AWS S3, Cloudflare R2, DigitalOcean Spaces, and MinIO** — all of which speak the S3 API. It does **not** support:

- Google Cloud Storage (different API surface)
- Azure Blob Storage (different API surface)
- Backblaze B2 native API (use their S3-compatible endpoint instead)
- Local filesystem / on-disk storage

If your provider speaks the S3 API, it will probably work with the generic S3 provider config. If it doesn't, Pushduck is not the right tool.

## No built-in authentication or authorization

Pushduck ships `middleware` hooks on the router, but the **auth logic itself is yours to write**. It does not:

- Ship integrations with Clerk, Auth.js, BetterAuth, Lucia, or any auth library
- Verify tokens or sessions on your behalf
- Enforce per-user quotas or rate limits

Every example in the docs shows the same pattern: `middleware: async ({ req }) => { const user = await yourAuth(req); if (!user) throw new Error("Unauthorized"); return { userId: user.id }; }` — that's the entire auth story. You wire it, Pushduck trusts you.
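That one-liner can be pulled out into a plain function with the auth check injected, which keeps it testable and reusable across routes. `yourAuth` is a placeholder for whatever session or token verification your app uses (Clerk, Auth.js, a JWT verify, ...).

```typescript
// The docs' middleware pattern as a factory with an injectable auth check.
// `yourAuth` is a placeholder, not a Pushduck API.
type AuthFn = (req: unknown) => Promise<{ id: string } | null>;

function makeUploadMiddleware(yourAuth: AuthFn) {
  return async ({ req }: { req: unknown }) => {
    const user = await yourAuth(req);
    if (!user) throw new Error("Unauthorized"); // reject the presign request
    // Whatever you return becomes the route's metadata (per the docs' example).
    return { userId: user.id };
  };
}
```

Pass the result as the route's `middleware` option; the thrown error is how an unauthenticated presign attempt is refused.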
## No admin dashboard, no hosted service, no managed anything

Pushduck is a library, not a product. There is no:

- Web dashboard to view uploaded files
- Hosted bucket / managed storage
- Analytics on upload activity
- Billing / quotas / multi-tenant management

You bring the bucket, you bring the server, you bring the auth. Pushduck handles the presign dance and the client upload loop. That is the entire product.

---

If any of these limitations are blockers for you, please [open an issue](https://github.com/abhay-ramesh/pushduck/issues) — some of them (Vue/Svelte hooks, a non-TS client contract spec, a GCS adapter) are on the roadmap and community input helps prioritize.

docs/content/docs/meta.json

Lines changed: 1 addition & 0 deletions
```diff
@@ -16,6 +16,7 @@
   "--- Resources ---",
   "philosophy",
   "comparisons",
+  "limitations",
   "roadmap",
   "ai-integration"
 ]
```
