"Fast, resumable, auth-flexible file transfer over QUIC/SSH, written in Go."
## Build

```powershell
go build -o bin/goflux-server.exe ./cmd/goflux-server
go build -o bin/goflux.exe ./cmd/goflux
```

## Run the server

```powershell
# With web UI (default)
.\bin\goflux-server.exe

# Then open http://localhost in your browser
# Or access via your domain/IP: http://yourdomain.com
```

The server uses `goflux.json` for configuration (created automatically if missing).
## Quick start

Upload a file:

```powershell
.\bin\goflux.exe put ./myfile.txt /remote/path/myfile.txt
```

Download a file:

```powershell
.\bin\goflux.exe get /remote/path/myfile.txt ./downloaded.txt
```

List files:

```powershell
.\bin\goflux.exe ls /remote/path
```

## Configuration

goflux uses JSON configuration files instead of command-line flags for cleaner usage.
Default config (`goflux.json`):

```json
{
  "server": {
    "address": "0.0.0.0:80",
    "storage_dir": "./data",
    "webui_dir": "./web",
    "meta_dir": "./.goflux-meta",
    "tokens_file": ""
  },
  "client": {
    "server_url": "http://localhost",
    "chunk_size": 1048576,
    "token": ""
  }
}
```

Usage with config:
```powershell
# Uses goflux.json by default
.\bin\goflux.exe ls

# Use a different config file
.\bin\goflux.exe --config prod.json put file.txt /file.txt

# Server also uses config
.\bin\goflux-server.exe --config goflux-production.json
```

Environment variable for tokens:

```powershell
$env:GOFLUX_TOKEN = "tok_your_token_here"
.\bin\goflux.exe ls
```

Config priority: Config file → Environment variable (tokens only)
## Resumable uploads

goflux automatically resumes interrupted uploads. If an upload is interrupted (network failure, client crash, etc.), simply run the same `put` command again:

```powershell
# Initial upload (interrupted at 50%)
.\bin\goflux.exe put largefile.zip /largefile.zip

# ... network disconnects ...

# Resume upload (automatically skips already-uploaded chunks)
.\bin\goflux.exe put largefile.zip /largefile.zip
# Output: Resuming upload: 127/250 chunks already uploaded
```

How it works:

- The server tracks upload sessions in metadata files (`.goflux-meta/`)
- The client queries the server before uploading to check for existing sessions
- Only missing chunks are uploaded, saving time and bandwidth
- Sessions are automatically cleaned up after successful uploads
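The skip-existing-chunks step can be sketched as follows. This is a minimal illustration, not goflux's `pkg/resume` code: in the real client the set of completed chunks would come from querying the server's session metadata, whereas here it is supplied directly:

```go
package main

import "fmt"

// missingChunks returns the chunk indices that still need uploading,
// given the total chunk count and the set the server reports as done.
func missingChunks(total int, done map[int]bool) []int {
	var missing []int
	for i := 0; i < total; i++ {
		if !done[i] {
			missing = append(missing, i)
		}
	}
	return missing
}

func main() {
	// Example: a 250-chunk upload where the server already has 127 chunks.
	done := map[int]bool{}
	for i := 0; i < 127; i++ {
		done[i] = true
	}
	missing := missingChunks(250, done)
	fmt.Printf("Resuming upload: %d/250 chunks already uploaded, %d to go\n",
		250-len(missing), len(missing))
}
```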
## Authentication

Enable authentication on the server by setting `tokens_file` in your config file:

```json
{
  "server": {
    "tokens_file": "tokens.json"
  }
}
```

Manage tokens with `goflux-admin`:

```powershell
# Create a token
.\bin\goflux-admin.exe create --user alice --permissions upload,download,list --days 30

# List tokens
.\bin\goflux-admin.exe list

# Revoke a token
.\bin\goflux-admin.exe revoke tok_abc123def456
```

Use tokens with the client by setting `token` in the config file or using the environment variable:

```powershell
$env:GOFLUX_TOKEN = "tok_your_token_here"
.\bin\goflux.exe put file.txt /file.txt
```

Permissions:

- `upload` - Upload files
- `download` - Download files
- `list` - List files
- `*` - All permissions
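The permission model above, including the `*` wildcard, boils down to a small membership check. A sketch of the idea, not goflux's actual `pkg/auth` code (the `hasPermission` name is hypothetical):

```go
package main

import "fmt"

// hasPermission reports whether a token's permission list allows an
// action. "*" grants everything, matching the permission table above.
func hasPermission(perms []string, action string) bool {
	for _, p := range perms {
		if p == "*" || p == action {
			return true
		}
	}
	return false
}

func main() {
	// Permissions as granted by: goflux-admin create --permissions upload,download,list
	alice := []string{"upload", "download", "list"}
	fmt.Println(hasPermission(alice, "upload"))         // true
	fmt.Println(hasPermission(alice, "admin"))          // false
	fmt.Println(hasPermission([]string{"*"}, "upload")) // true: wildcard
}
```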
## Roadmap

✅ Implemented (v0.3.0):

- HTTP transport for file transfer
- Chunked uploads with integrity verification (SHA-256)
- Automatic chunk reassembly on server
- Resume interrupted uploads automatically
  - Server tracks upload sessions with persistent metadata
  - Client automatically detects and resumes partial uploads
  - Skips already-uploaded chunks to save bandwidth
  - Session cleanup after successful uploads
- Real-time progress bars
  - Visual upload progress with speed and ETA
  - Color-coded progress indicators
  - Resume progress shows new vs. existing chunks
  - Spinner for downloads
- JSON configuration system
  - Simple config file management
  - No messy command-line flags
  - Environment variable support for tokens
- Local filesystem storage backend
- Simple put/get/ls commands
- Web UI with drag-and-drop upload and file browser (Material Design dark mode)
- Token-based authentication with permission control
- Admin CLI tool for token management
- Token revocation support
🚧 Planned:

- QUIC transport
- SSH transport
- Parallel chunk uploads
- S3 storage backend
- Capability negotiation
## Project layout

```
goflux/
  cmd/
    goflux-server/   # Server binary
    goflux/          # Client CLI
    goflux-admin/    # Token management CLI
  pkg/
    auth/            # Token-based authentication
    server/          # HTTP server and handlers
    storage/         # Storage backends (local filesystem)
    transport/       # HTTP client
    chunk/           # Chunking and integrity verification
    resume/          # Upload session management
    config/          # Configuration file support
  web/               # Web UI (HTML/CSS/JS)
  docs/              # Documentation
  examples/          # Usage examples
```
See docs/architecture.md for detailed architecture diagrams and deployment guides.

See docs/coreidea.md for design philosophy.
## Release notes

### v0.4.1 - Memory Efficiency Fix

- 🔧 Fixed a critical memory issue where the client loaded entire files into RAM
- 🔧 Fixed server memory exhaustion with large file uploads
- ✨ The client now streams files in 1 MB chunks (constant memory usage)
- ✨ The server writes chunks to disk immediately (no memory buffering)
- ✅ Successfully tested with 6 GB+ files
- See RELEASE_NOTES_v0.4.1.md

### v0.4.0 - Configuration Simplification

- ✨ Simplified the CLI to JSON config only (removed flag clutter)
- ✨ Changed the default port from 8080 to 80
- ✨ Dark mode Material Design web UI
- 🔧 Fixed crypto.subtle errors for HTTP uploads
- ⚠️ Known issue: memory problems with large files (fixed in v0.4.1)
- See RELEASE_NOTES_v0.4.0.md
### v0.3.0

- Resume support for interrupted transfers
- Token-based authentication
- Web UI for browser-based uploads
- Chunked transfer with integrity verification