The n2s simplified architecture enables robust disaster recovery with minimal dependencies. The key principle: each blob contains complete recovery information, so files can be rebuilt from blob storage and the encryption passphrase alone.
Recovery capabilities:
- Complete data recovery using only standard Unix tools + small Go binary
- No database required for basic file recovery
- Platform-independent tools for any disaster scenario
- Verified working end-to-end recovery workflow
```bash
cd recovery
./build.sh
```

Creates platform-specific binaries in `bin/`:

- `decrypt-linux-amd64`, `decrypt-linux-arm64`
- `decrypt-windows-amd64.exe`
- `decrypt-macos-amd64`, `decrypt-macos-arm64`
Note: Binaries are excluded from the repo to keep it small. Pre-built binaries may be included in the future for true zero-dependency recovery.
```bash
# Test blob decryption with hash verification
./test_decrypt.sh /path/to/blob_file passphrase

# With verbose debugging
./test_decrypt.sh --verbose /path/to/blob_file passphrase
```

```bash
# Get blob (however blobs are stored); the storage filename is the blob ID
BLOBID=abc123def456
cp "/storage/$BLOBID" blob.json

# View metadata (no decryption needed)
jq '.metadata' blob.json

# Decrypt and recover file
ENCRYPTED=$(jq -r '.encrypted_content' blob.json | tr -d '\n\r ')
./bin/decrypt-linux-amd64 "$BLOBID" "passphrase" "$ENCRYPTED" | lz4 -d > recovered_file.txt
```

What happened: Local `.n2s/<backend>-manifest.db` corrupted, but blobs intact.
Recovery process:
- Enumerate blobs from storage
- Extract metadata from each blob (no decryption needed)
- Selectively recover files as needed
- Rebuild database if required
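The last step, rebuilding the database, has no dedicated command in this document; a minimal Python sketch of the idea, assuming the `files(path, file_id, upload_finish_tm)` schema implied by the SQL query shown elsewhere in this document (adjust to match your n2s version):

```python
import glob
import json
import os
import sqlite3


def rebuild_manifest(blob_dir: str, db_path: str) -> int:
    """Recreate a minimal manifest DB from blob metadata (no decryption needed).

    Assumes blob_dir contains only blob files. The schema is a guess based on
    the query used for Scenario 2; the real n2s schema may differ.
    """
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS files ("
        " path TEXT PRIMARY KEY, file_id TEXT, upload_finish_tm REAL)"
    )
    count = 0
    for blob in sorted(glob.glob(os.path.join(blob_dir, "*"))):
        with open(blob) as f:
            meta = json.load(f)["metadata"]  # plaintext, no passphrase needed
        con.execute(
            "INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
            # blob mtime stands in for the unknown original upload time
            (meta["path"], os.path.basename(blob), os.path.getmtime(blob)),
        )
        count += 1
    con.commit()
    con.close()
    return count
```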
```bash
# List blobs and extract paths
for blob in /storage/*; do
    echo "=== $(basename "$blob") ==="
    jq '.metadata | {path, size, timestamp}' "$blob"
done

# Recover specific files
./test_decrypt.sh blob_abc123def456 mypassword
```

What happened: Backend storage lost, database intact.
Recovery process:
- Query database for successfully uploaded files
- Re-read original files from filesystem
- Recreate blobs using deterministic blob creation
- Re-upload to new/restored backend
```bash
# Find uploaded files
sqlite3 .n2s/backend-manifest.db \
  "SELECT path, file_id FROM files WHERE upload_finish_tm IS NOT NULL;"

# Re-run n2s backup to recreate blobs
n2s backup --source /data --backend new-backend
```

What happened: Complete system failure.
Recovery process:
- Restore from filesystem backups (ZFS snapshots, Git history)
- Re-run complete backup to recreate everything
- Verify integrity by comparing file hashes
```bash
# Restore filesystem
zfs rollback tank/data@backup-2025-01-15

# Re-run backup
n2s backup --source /data --backend s3-prod --changeset "disaster-recovery"
```

What happened: Some blobs corrupted in storage.
Recovery process:
- Test blob integrity using recovery tools
- Identify corruption via decrypt failures or hash mismatches
- Re-read source files and recreate affected blobs
- Replace corrupted blobs
```bash
# Test blob integrity
./test_decrypt.sh suspect_blob_file passphrase
# Hash verification will catch corruption

# Recreate from source if available
n2s upload-file /original/path/file.txt --backend s3-prod
```

```bash
# 1. Extract metadata
jq '.metadata' blob.json
# Shows: {"path": "docs/file.txt", "size": 1234, "timestamp": 1234567890, "file_hash": "abc123..."}

# 2. Decrypt content
BLOBID=abc123def456   # the blob's original storage filename (its ID), not "blob.json"
ENCRYPTED=$(jq -r '.encrypted_content' blob.json | tr -d '\n\r ')
FILENAME=$(jq -r '.metadata.path' blob.json | xargs basename)
MTIME=$(jq -r '.metadata.timestamp' blob.json)

# 3. Recover file with correct metadata
./bin/decrypt-linux-amd64 "$BLOBID" "$PASSPHRASE" "$ENCRYPTED" | lz4 -d > "$FILENAME"
touch -d "@$MTIME" "$FILENAME"

# 4. Verify integrity
EXPECTED=$(jq -r '.metadata.file_hash' blob.json)
ACTUAL=$(b3sum "$FILENAME" | cut -d' ' -f1)
[ "$EXPECTED" = "$ACTUAL" ] && echo "✓ Verified" || echo "✗ Corruption"
```

```bash
#!/bin/bash
# Recover all blobs in a directory
for blob in /storage/*; do
    echo "Processing $(basename "$blob")..."

    # Extract metadata
    path=$(jq -r '.metadata.path' "$blob")
    mtime=$(jq -r '.metadata.timestamp' "$blob")

    # Create output directory
    mkdir -p "recovered/$(dirname "$path")"

    # Decrypt and recover
    blobid=$(basename "$blob")
    encrypted=$(jq -r '.encrypted_content' "$blob" | tr -d '\n\r ')
    if ./bin/decrypt-linux-amd64 "$blobid" "$PASSPHRASE" "$encrypted" | lz4 -d > "recovered/$path"; then
        touch -d "@$mtime" "recovered/$path"
        echo "✓ Recovered: $path"
    else
        echo "✗ Failed: $path"
    fi
done
```

Required for disaster recovery:
- `jq` - JSON processing
- `lz4` - LZ4 decompression
- `b3sum` - BLAKE3 hash verification (Ubuntu package)
- `touch` - Set file timestamps
- Decrypt binary - ChaCha20 decryption utility (see Build section)
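Before starting a recovery, it is worth confirming the toolchain is actually on PATH; a small preflight sketch (a hypothetical helper, not part of n2s):

```python
import shutil

# Tools needed for manual blob recovery (see the list above)
REQUIRED = ["jq", "lz4", "b3sum", "touch"]


def check_tools(tools=REQUIRED):
    """Report which recovery tools are on PATH; return the missing ones."""
    missing = [t for t in tools if shutil.which(t) is None]
    for t in tools:
        print(("MISSING: " if t in missing else "ok: ") + t)
    return missing
```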
For building decrypt binary:
- Go 1.21+ compiler
- Internet access for Go module downloads
Note: Once built, the decrypt binary is self-contained and requires no Go runtime.
- Algorithm: ChaCha20-Poly1305 AEAD cipher
- Key derivation: PBKDF2-HMAC-SHA256 (100k iterations)
- Salt/nonce: Deterministic, derived from `BLAKE3(path:file_hash)`
- Base64 encoding: For JSON compatibility
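The derivation above can be sketched in Python. This is an illustration, not the exact n2s implementation: `hashlib.blake2b` stands in for BLAKE3 (which is not in the Python standard library), and the exact salt construction is assumed from the `BLAKE3(path:file_hash)` description.

```python
import hashlib

ITERATIONS = 100_000  # PBKDF2 iteration count from the spec above


def derive_key(passphrase: str, path: str, file_hash: str) -> bytes:
    """Derive a 32-byte key deterministically from passphrase + blob identity."""
    # Deterministic salt from the blob identity. The real scheme uses
    # BLAKE3(path:file_hash); blake2b is a stdlib stand-in for illustration.
    salt = hashlib.blake2b(f"{path}:{file_hash}".encode(), digest_size=32).digest()
    # PBKDF2-HMAC-SHA256 -> 32-byte ChaCha20-Poly1305 key
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               ITERATIONS, dklen=32)


key = derive_key("correct horse", "docs/file.txt", "abc123")
assert len(key) == 32
# Determinism: the same inputs always yield the same key
assert key == derive_key("correct horse", "docs/file.txt", "abc123")
```

Because both salt and key are derived deterministically, re-encrypting an unchanged file reproduces the same blob, which is what makes the "same file, same blob" property below possible.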
```json
{
  "encrypted_content": "base64_encoded_encrypted_compressed_data",
  "metadata": {
    "path": "relative/path/to/file.txt",
    "size": 12345,
    "timestamp": 1749388804.3256009,
    "file_hash": "blake3_hash_of_original_content"
  }
}
```

Security properties:
- Metadata plaintext: Paths/sizes visible without passphrase
- Content encrypted: File data requires passphrase + correct blob ID
- Deterministic: Same file always produces same blob
- Tamper-evident: Hash verification detects corruption
- Passphrase storage: Separate from storage credentials
- Key rotation: Consider for long-term archives
- Access control: Recovery requires both storage access AND passphrase
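Because the metadata is plaintext, a blob inventory can be built without the passphrase. A minimal sketch, using the field names from the blob format above:

```python
import json


def blob_summary(blob_json: str) -> dict:
    """Extract the plaintext metadata from a blob - no passphrase needed."""
    blob = json.loads(blob_json)
    meta = blob["metadata"]
    return {
        "path": meta["path"],
        "size": meta["size"],
        "file_hash": meta["file_hash"],
        "encrypted_bytes": len(blob["encrypted_content"]),
    }


example = json.dumps({
    "encrypted_content": "QUJD",
    "metadata": {"path": "docs/file.txt", "size": 12345,
                 "timestamp": 1749388804.3256009, "file_hash": "abc123"},
})
print(blob_summary(example)["path"])  # docs/file.txt
```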
- `decrypt.go` - Go source for the decrypt tool
- `go.mod` - Go module dependencies
- `build.sh` - Build script for all platforms
- `test_decrypt.sh` - Single blob test with verification
- `disaster_recovery.sh` - Recovery script (needs `rget` command)
- `bin/` - Built binaries (created by build.sh, git-ignored)
- Minimal dependencies: Standard Unix tools + small Go binary
- Self-contained blobs: Complete recovery info in each blob
- No database required: Metadata readable without decryption
- Platform independent: Works on any Unix-like system
- Deterministic: Same files always produce same blobs
- Tamper-evident: Hash verification built-in
The architecture prioritizes recoverability over storage efficiency - you can always get your data back with minimal tooling.