Fast, simple file sharing with optional end-to-end encryption. No accounts required.
Bolter is a self-hostable file sharing app with optional end-to-end encryption. Share files with a link that automatically expires — no signups, no accounts. When encryption is enabled, files are encrypted in your browser before they ever leave your device, and the encryption key lives in the share link's hash fragment (never sent to the server).
- Optional E2E encryption — toggle on per-upload; AES-GCM with HKDF key derivation, entirely client-side via the Web Crypto API
- Zero knowledge when encrypted — the server never sees plaintext files or encryption keys
- Files up to 1 TB — multipart uploads with adaptive part sizing and resumability
- Self-destructing links — configurable expiration (5 min to 6 months) and download limits
- No accounts required — generate a link, share it, done
- Resilient uploads — stall detection, offline awareness, progress-based retries, IndexedDB-backed resume on page reload, and Safari/WebKit empty-chunk filtering for HEIC/HEVC compatibility
- Adaptive speed — preflight speed test measures your connection and picks optimal part sizes
- Multi-provider S3 — dynamic storage provider management via API; seamlessly migrate between S3-compatible services (Cloudflare R2, Railway, AWS S3, etc.) while existing files remain accessible on their original provider
- Self-hostable — Docker Compose, or run directly with Bun
- Fully customizable — white-label with your own branding, limits, and expiration options via environment variables
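As a rough illustration of the adaptive part-sizing idea, a preflight speed measurement can be turned into a part size by targeting a fixed duration per part, clamped to S3's limits. All names and thresholds below are illustrative, not Bolter's actual values:

```typescript
// Hypothetical sketch: choose a multipart part size from a measured upload
// speed so each part takes roughly a target duration. Thresholds are
// illustrative, not Bolter's actual configuration.

const MIB = 1024 * 1024;
const MIN_PART = 5 * MIB;       // S3 minimum part size (except the last part)
const MAX_PART = 512 * MIB;     // illustrative upper bound
const TARGET_SECONDS = 15;      // aim for ~15s of upload time per part

function pickPartSize(bytesPerSecond: number): number {
  // Ideal size so one part uploads in TARGET_SECONDS at the measured speed.
  const ideal = bytesPerSecond * TARGET_SECONDS;
  // Clamp into the allowed range.
  return Math.min(MAX_PART, Math.max(MIN_PART, ideal));
}

function partSizeFor(fileSize: number, bytesPerSecond: number): number {
  const bySpeed = pickPartSize(bytesPerSecond);
  // S3 caps a multipart upload at 10,000 parts, so the part size must also
  // be at least fileSize / 10,000.
  const byCount = Math.ceil(fileSize / 10_000);
  return Math.max(bySpeed, byCount);
}
```

Slower connections get smaller parts (cheaper to retry on a stall), while very large files force larger parts so the upload stays under the 10,000-part ceiling.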
```mermaid
sequenceDiagram
    participant User as Browser
    participant Backend as Bolter Backend<br/>(Elysia + Bun)
    participant S3 as S3 / Cloudflare R2
    participant Redis as Redis

    Note over User: 1. User drops file(s)
    Note over User: 2. (Optional) Enable encryption
    alt Encryption enabled
        Note over User: 3. Generate AES-GCM key via HKDF
        Note over User: 4. Encrypt file in 64KB records
    end
    User->>Backend: Request pre-signed upload URL
    Backend->>S3: Generate pre-signed URL
    S3-->>Backend: Pre-signed URL
    Backend-->>User: Pre-signed URL
    User->>S3: Upload file directly (encrypted or plaintext)
    S3-->>User: Upload complete
    User->>Backend: Confirm upload
    Backend->>Redis: Store metadata (TTL, download limit)
    Backend-->>User: Share link
    alt Encryption enabled
        Note over User: Share link contains encryption key<br/>in hash fragment (#) — never sent to server
    end
```
Files are always uploaded directly to S3/R2 via pre-signed URLs — the server never handles file data. When encryption is enabled, the encryption key is embedded in the URL hash fragment (`#`), which browsers never include in HTTP requests. The server orchestrates uploads and tracks metadata (expiration, download count) but has zero access to file contents.
- Bun v1.x
- Redis (or use Docker)
- An S3-compatible object store (Cloudflare R2, MinIO, AWS S3, etc.)
```bash
# Clone the repository
git clone https://github.com/slingshot/bolter.git
cd bolter

# Install dependencies
bun install

# Copy and configure environment variables
cp .env.example .env.local
# Edit .env.local with your S3/R2 credentials and Redis URL

# Start development (frontend + backend concurrently)
bun run dev
```

The frontend runs at http://localhost:3000 and the backend at http://localhost:3001.
```bash
# Copy and configure environment variables
cp .env.example .env

# Start all services (frontend, backend, Redis)
docker compose up
```

This starts:

- Frontend on port `3000` (Nginx serving the built SPA)
- Backend on port `3001` (Bun + Elysia)
- Redis on port `6379` (persistent, AOF-enabled)

You still need to provide S3/R2 credentials in your `.env` file — Redis is included in the Compose stack, but object storage is not.
Bolter is a Turborepo monorepo with three workspaces:
```text
bolter/
├── apps/
│   ├── frontend/              # Vite + React 18 + Tailwind CSS
│   │   ├── src/
│   │   │   ├── components/    # Radix UI-based components
│   │   │   ├── lib/           # Crypto, API client, upload state
│   │   │   ├── pages/         # Home (upload) + Download pages
│   │   │   └── stores/        # Zustand state management
│   │   └── Dockerfile         # Multi-stage: Bun build → Nginx
│   │
│   └── backend/               # Elysia (Bun-native web framework)
│       ├── src/
│       │   ├── routes/        # Upload + download endpoints
│       │   ├── storage/       # S3 + Redis adapters
│       │   └── config.ts      # Convict-based env validation
│       └── Dockerfile         # Multi-stage: Bun slim
│
├── packages/
│   └── shared/                # Constants shared across workspaces
│       └── config.ts          # BYTES, UPLOAD_LIMITS, TIME_LIMITS, etc.
│
├── turbo.json                 # Task pipeline (build, dev, typecheck)
├── biome.json                 # Linter + formatter config
├── lefthook.yml               # Git hooks (pre-commit, commit-msg)
└── docker-compose.yml         # Full stack deployment
```
| Decision | Rationale |
|---|---|
| Bun runtime | Native TypeScript execution, fast startup, built-in S3 compatibility |
| Elysia framework | Bun-optimized, end-to-end type safety, minimal overhead |
| Direct S3 uploads | Server never touches file data — pre-signed URLs let the browser upload directly |
| Optional encryption | Users choose per-upload; unencrypted shares are simpler, encrypted shares are zero-knowledge |
| Web Crypto API | Standards-based, hardware-accelerated encryption available in all modern browsers |
| HKDF key derivation | Derives separate keys for content and metadata from a single secret |
| 64KB record encryption | Streaming-friendly — encrypt/decrypt without loading the entire file into memory |
| IndexedDB resume state | Multipart upload state survives page reloads; users can resume interrupted uploads |
| Safari/WebKit compat | Handles empty stream chunks from iOS HEIC/HEVC transcoding; pre-resolves transcoded file sizes for accurate part allocation |
All configuration is done via environment variables. See `.env.example` for the full list.
| Variable | Description |
|---|---|
| `S3_BUCKET` | S3/R2 bucket name |
| `S3_ENDPOINT` | S3/R2 endpoint URL |
| `AWS_ACCESS_KEY_ID` | S3/R2 access key |
| `AWS_SECRET_ACCESS_KEY` | S3/R2 secret key |
| Variable | Default | Description |
|---|---|---|
| `REDIS_URL` | `redis://localhost:6379` | Redis connection string |
| `PORT` | `3001` | Backend server port |
| `BASE_URL` | `http://localhost:3001` | Public-facing base URL |
| `DETECT_BASE_URL` | `false` | Auto-detect base URL from request headers |
| `MAX_FILE_SIZE` | `1000000000000` (1 TB) | Maximum upload size in bytes |
| `MAX_FILES_PER_ARCHIVE` | `64` | Max files per upload |
| `MAX_EXPIRE_SECONDS` | `15552000` (6 months) | Maximum link expiration time |
| `DEFAULT_EXPIRE_SECONDS` | `86400` (1 day) | Default expiration |
| `MAX_DOWNLOADS` | `100` | Maximum download limit |
| `DEFAULT_DOWNLOADS` | `1` | Default download limit |
| Variable | Default | Description |
|---|---|---|
| `PROVIDER_ENCRYPTION_KEY` | (none) | 32-byte hex key for AES-256-GCM encryption of provider secrets in Redis |
| `PROVIDER_CACHE_TTL_SECONDS` | `60` | How often to refresh the in-memory provider cache |
| `ADMIN_API_KEY` | (none) | Bearer token for authenticating provider CRUD API requests |
| Variable | Default | Description |
|---|---|---|
| `CUSTOM_TITLE` | `Slingshot Send` | App title (runtime, served via `/config`) |
| `CUSTOM_DESCRIPTION` | `Encrypt and send files...` | App description (runtime) |
| `VITE_APP_TITLE` | `Slingshot Send` | HTML `<title>` tag (build-time) |
| `VITE_APP_DESCRIPTION` | `Encrypt and send files...` | HTML `<meta>` description (build-time) |
Build-time vs runtime: `VITE_*` variables are baked into the frontend at build time. `CUSTOM_*` variables are served by the backend's `/config` endpoint and override the build-time values at runtime.
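The override behavior can be illustrated with a small merge helper (hypothetical, not the frontend's actual code): runtime values from `/config` win when present, while empty or missing values fall back to the compiled-in defaults.

```typescript
// Hypothetical sketch of merging build-time defaults with runtime
// /config overrides. Shape and names are illustrative.

interface AppConfig {
  title: string;
  description: string;
}

function mergeConfig(buildTime: AppConfig, runtime: Partial<AppConfig>): AppConfig {
  // Drop null/undefined/empty runtime values so they don't clobber defaults.
  const overrides = Object.fromEntries(
    Object.entries(runtime).filter(([, v]) => v != null && v !== ""),
  );
  return { ...buildTime, ...overrides };
}
```

The frontend would fetch `/config` on load and apply the result over its baked-in `VITE_*` values.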
| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/health` | Full health check (Redis + S3 connectivity) |
| `GET` | `/config` | Client configuration (limits, defaults, branding) |
| `POST` | `/upload/url` | Request a pre-signed upload URL |
| `POST` | `/upload/multipart/:id` | Initiate a multipart upload |
| `POST` | `/upload/multipart/:id/resume` | List completed parts (for resuming uploads) |
| `POST` | `/upload/speedtest` | Generate pre-signed URLs for speed test |
| `POST` | `/upload/speedtest/cleanup` | Clean up speed test objects |
| `GET` | `/download/url/:id` | Get a pre-signed download URL |
| `GET` | `/providers` | List all storage providers (admin) |
| `GET` | `/providers/:id` | Get storage provider details (admin) |
| `POST` | `/providers` | Add a new storage provider (admin) |
| `PUT` | `/providers/:id` | Update a storage provider (admin) |
| `DELETE` | `/providers/:id` | Remove a storage provider (admin) |
| `POST` | `/providers/:id/ping` | Health-check a provider (admin) |
| `POST` | `/providers/:id/activate` | Set provider as active upload target (admin) |
Bolter supports multiple S3-compatible storage providers simultaneously. This allows you to migrate between providers (e.g., Cloudflare R2 to Railway) without downtime — existing files remain accessible on their original provider while new uploads go to the new one.
- On startup, the backend registers a default provider from environment variables (`S3_BUCKET`, `S3_ENDPOINT`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`). This is automatic and requires no configuration beyond the existing env vars.
- Every uploaded file records which provider it was uploaded to (`providerId` field in Redis metadata).
- Downloads resolve the correct provider from the file's metadata. Files uploaded before multi-provider support (no `providerId` field) fall back to the default provider.
- Additional providers can be added at runtime via the `/providers` API — no redeployment needed.
- Provider configurations are stored in Redis with secrets encrypted via AES-256-GCM (when `PROVIDER_ENCRYPTION_KEY` is set).
- Provider configs are cached in memory and refreshed from Redis on a configurable interval (default: 60 seconds).
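The download-time lookup with its legacy fallback might look roughly like this (types and names beyond `providerId` are illustrative):

```typescript
// Illustrative sketch of resolving a file's storage provider from its
// Redis metadata, falling back to the default provider for files
// uploaded before multi-provider support.

interface FileMeta {
  key: string;
  providerId?: string; // absent on pre-multi-provider files
}

interface Provider {
  id: string;
  endpoint: string;
  bucket: string;
}

const DEFAULT_PROVIDER_ID = "default"; // registered from env vars at startup

function resolveProvider(meta: FileMeta, providers: Map<string, Provider>): Provider {
  const id = meta.providerId ?? DEFAULT_PROVIDER_ID;
  const provider = providers.get(id);
  if (!provider) throw new Error(`unknown storage provider: ${id}`);
  return provider;
}
```

This is why a migration is downtime-free: old metadata keeps pointing at the old provider, while new uploads record the new one.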
All `/providers/*` endpoints require the `ADMIN_API_KEY` environment variable to be set. Requests must include the key as a Bearer token:

```
Authorization: Bearer <your-admin-api-key>
```

If `ADMIN_API_KEY` is not set, all provider management endpoints return `503 Service Unavailable`. This is by design — provider management is opt-in.
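The gate described above can be sketched as a tiny guard function (illustrative, not Bolter's actual Elysia middleware):

```typescript
// Illustrative sketch of the opt-in admin guard: 503 when management is
// disabled, 401 on a bad token, 200 when the Bearer token matches.

function checkAdmin(
  authHeader: string | undefined,
  adminKey: string | undefined,
): { status: number } {
  if (!adminKey) return { status: 503 };                      // management not enabled
  if (authHeader !== `Bearer ${adminKey}`) return { status: 401 }; // bad or missing token
  return { status: 200 };
}
```

Checking for the configured key first is what makes the feature opt-in: without `ADMIN_API_KEY`, no token can ever succeed.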
The `PROVIDER_ENCRYPTION_KEY` encrypts provider credentials (secret access keys) at rest in Redis. Generate one with:

```bash
openssl rand -hex 32
```

If not set, secrets are stored in plaintext (a warning is logged at startup). This is acceptable for local development, but the key should be set in production.
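Encrypting a secret at rest with such a 32-byte hex key can be sketched with Node's `crypto` module; the `iv || tag || ciphertext` layout here is an assumption for illustration, not necessarily Bolter's actual storage format:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative sketch of AES-256-GCM encryption of a provider secret.
// Stored blob layout (an assumption): 12-byte IV || 16-byte tag || ciphertext.

function encryptSecret(hexKey: string, plaintext: string): string {
  const key = Buffer.from(hexKey, "hex"); // 32 bytes from `openssl rand -hex 32`
  const iv = randomBytes(12);             // fresh IV per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
}

function decryptSecret(hexKey: string, blob: string): string {
  const buf = Buffer.from(blob, "base64");
  const key = Buffer.from(hexKey, "hex");
  const decipher = createDecipheriv("aes-256-gcm", key, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28)); // GCM rejects tampered data
  return Buffer.concat([
    decipher.update(buf.subarray(28)),
    decipher.final(),
  ]).toString("utf8");
}
```

GCM's auth tag means a corrupted or tampered blob fails to decrypt rather than yielding garbage credentials.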
List all providers:

```bash
curl -H "Authorization: Bearer $ADMIN_API_KEY" http://localhost:3001/providers
```

Add a new provider:

```bash
curl -X POST http://localhost:3001/providers \
  -H "Authorization: Bearer $ADMIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Railway S3",
    "bucket": "my-railway-bucket",
    "endpoint": "https://s3.railway.app",
    "accessKeyId": "...",
    "secretAccessKey": "...",
    "region": "auto",
    "pathStyle": true,
    "isActive": true
  }'
```

Setting `isActive: true` makes this provider the target for all new uploads and deactivates the previously active provider.

Activate an existing provider:

```bash
curl -X POST -H "Authorization: Bearer $ADMIN_API_KEY" \
  http://localhost:3001/providers/railway-s3/activate
```

Health-check a provider:

```bash
curl -X POST -H "Authorization: Bearer $ADMIN_API_KEY" \
  http://localhost:3001/providers/railway-s3/ping
# Returns: { "healthy": true, "latencyMs": 45 }
```

Delete a provider (only if no active files reference it):

```bash
curl -X DELETE -H "Authorization: Bearer $ADMIN_API_KEY" \
  http://localhost:3001/providers/railway-s3
# Returns 409 if files still reference it. Use ?force=true to override.
```

Note: The default provider (registered from env vars) cannot be deleted.
- Deploy with existing env vars — the default provider (R2) is auto-registered. Zero behavior change.
- Add the Railway provider via `POST /providers` with `"isActive": true`.
- All new uploads now go to Railway. Existing R2 files continue to be served from R2.
- R2 files naturally drain as they hit their TTL or download limits.
- Once no files reference R2, the provider can be removed via `DELETE /providers/default`.
Secrets are never returned in API responses. The `accessKeyId` is masked (e.g., `AKIA****WXYZ`) and `secretAccessKey` is omitted entirely.
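A masking rule matching the example above (first four and last four characters kept) could look like this hypothetical helper:

```typescript
// Illustrative sketch of access-key masking for API responses:
// keep the first and last four characters, hide the middle.

function maskAccessKeyId(id: string): string {
  if (id.length <= 8) return "****"; // too short to reveal anything safely
  return `${id.slice(0, 4)}****${id.slice(-4)}`;
}
```

The guard on short inputs matters: without it, slicing would echo most of a short key back verbatim.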
```bash
# Install dependencies
bun install

# Run both frontend and backend
bun run dev

# Run individually
turbo run dev --filter=@bolter/frontend
turbo run dev --filter=@bolter/backend

# Type checking
bun run typecheck

# Lint + format (Biome)
bun run check

# Production build (Turborepo-cached)
bun run build
```

This project uses Conventional Commits enforced by commitlint and lefthook. Use the interactive commit helper:

```bash
bun run commit
```

```bash
docker compose up -d
```

Includes health checks for all services. Customize limits and branding via environment variables in your `.env` file.
```bash
# Build all workspaces
bun run build

# Start the backend
cd apps/backend && bun run start

# Serve the frontend (apps/frontend/dist) with any static file server
```

- Object storage: Any S3-compatible service (Cloudflare R2, AWS S3, MinIO, etc.)
- Redis: For metadata storage with TTL-based expiration (v7+ recommended)
- Reverse proxy: Recommended for production (Nginx, Caddy, etc.) to terminate TLS and serve the frontend
Bolter's security model is documented in detail in SECURITY.md. The key points:
- Encryption is opt-in per upload — users toggle it on when needed
- When enabled, files are encrypted client-side with AES-128-GCM before upload
- Keys are derived via HKDF from a random 128-bit secret
- The encryption key lives in the URL hash fragment — never sent to the server
- The server only stores and serves ciphertext (when encrypted)
- Files auto-expire based on time or download count regardless of encryption
To report a vulnerability, see SECURITY.md.
Contributions are welcome. Please read CONTRIBUTING.md for guidelines on development setup, code style, and the pull request process.
Mozilla Public License 2.0 — you can use, modify, and distribute Bolter freely. Modifications to MPL-covered files must remain open source; larger works can use any license.