Self-hosted sync stack for Synology NAS (DSM 7.x) or any Windows/Linux/Mac machine running Docker.
| Service | What it does | Schedule |
|---|---|---|
| vdirsyncer | CalDAV VEVENT ↔ Google Calendar (bidirectional, GCal wins on conflict) | every 15 min |
| carddav-google-contacts | CardDAV ↔ Google Contacts (bidirectional via People API) | every 30 min |
| vtodo-notion | CalDAV VTODO ↔ Notion database (bidirectional) | every 10 min |
| notion-backup | Dual-track Notion backup: JSON via API + HTML ZIP via native export · hardlink snapshots · git versioning | daily (configurable) |
| caldav-backup | Full CalDAV backup (VEVENT + VTODO) exported as .ics files | every 60 min |
All services self-schedule via supercronic — no external cron or Task Scheduler needed.
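For orientation, supercronic reads standard crontab files baked into each image; an illustrative schedule file (the real ones ship inside the images and may differ):

```
# Illustrative supercronic crontab — not the actual file shipped in the images
*/15 * * * * /app/sync.sh      # e.g. vdirsyncer: every 15 minutes
0 21 * * *   /app/backup.sh    # e.g. notion-backup: daily at 21:00 UTC
```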
- Docker Engine ≥ 24 and Docker Compose v2 (`docker compose`)
- Windows: install Docker Desktop — make sure it is running before any `docker` command
- Synology NAS (DSM 7.x): install Container Manager from Package Center
- A Google Cloud project with Google Calendar API and Google People API enabled
- A Notion account with an internal integration token
Copy the example file and fill in every value:

```
# Linux / Mac / Synology SSH
cp .env.example .env

# Windows PowerShell
Copy-Item .env.example .env
```

The sections below explain where to find each value. Do not commit `.env` to git — it contains secrets.
vdirsyncer syncs CalDAV calendars to Google Calendar, and carddav-google-contacts syncs contacts. Both require OAuth 2.0. This is a one-time interactive step that must be done on a machine with a browser (not via SSH).
- Go to console.cloud.google.com
- Create or select a project
- Enable the Google Calendar API and the Google People API (APIs & Services → Library)
- APIs & Services → Credentials → Create Credentials → OAuth client ID
- Application type: Desktop app — give it any name
- Click Create — copy the `client_id` and `client_secret` into `.env`:

```
GOOGLE_CLIENT_ID=your_client_id_here.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=GOCSPX-...
```
If prompted to configure the OAuth consent screen, set it to External, add your Google account as a test user, and add the scopes
`https://www.googleapis.com/auth/calendar` and `https://www.googleapis.com/auth/contacts`.
We provide a helper script (authorize-google.py) to generate tokens for both Calendar and Contacts. Because Docker Desktop on Windows/Mac does not bridge random container ports to the host natively, run the script directly on the host machine (not inside Docker).
```
# Install requirements locally
pip install "vdirsyncer[google]" google-auth-oauthlib
```

Run the authorization script:

```
python authorize-google.py
```

The script will open a browser to authorize Google Calendar (for vdirsyncer), and then open a second prompt to authorize Google Contacts (People API). Log in, click Allow for both, and the terminal will confirm success.
It generates two files in your home directory:
- `google.json`
- `google_contacts.json`
Synology NAS / SSH sessions: SSH has no browser. Run the python script on your Windows/Mac machine first to get the tokens, then follow step 2.4 to copy them to the NAS volume.
Load the tokens into the Docker volume so the containers can use them:
```
# Replace 'syncer' with your actual project folder prefix if different

# 1. Calendar token
docker run --rm \
  -v syncer_vdirsyncer_token:/data/token \
  -v "$HOME/google.json":/src/google.json \
  alpine cp /src/google.json /data/token/google.json

# 2. Contacts token
docker run --rm \
  -v syncer_vdirsyncer_token:/data/token \
  -v "$HOME/google_contacts.json":/src/google_contacts.json \
  alpine cp /src/google_contacts.json /data/token/google_contacts.json
```

Create a new full-page database in Notion with this exact schema:
| Property name | Type | Options (exact spelling) |
|---|---|---|
| Name | Title | — |
| UID CalDAV | Text | — |
| Descrizione | Text | — |
| Scadenza | Date | — |
| Priorità | Select | Alta, Media, Bassa, Nessuna |
| Luogo | Text | — |
| URL | URL | — |
| Lista | Select | — (auto-populated from CalDAV list names) |
| Periodicità | Text | — |
| Ultima sync | Date | — |
| Completato | Status | Done, In progress, Not started |
How Completato works:
- Set it to Done in Notion → propagates `STATUS:COMPLETED` to CalDAV on the next sync
- Non-recurring tasks: the page is archived automatically on the next sync cycle
- Recurring tasks (field `Periodicità` not empty): the status resets to `Not started` automatically and the due date advances to the next occurrence — the CalDAV server (Synology) manages the recurrence series
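The schema above can be sanity-checked programmatically. A minimal sketch, assuming a `properties` dict shaped like the official Notion API's database object (the helper and `REQUIRED_SCHEMA` names are ours, not part of the stack; note the API spells the Text type `rich_text`):

```python
# Hypothetical helper: verify a Notion database's 'properties' dict
# (as returned by the official API) matches the schema in this README.
REQUIRED_SCHEMA = {
    "Name": "title",
    "UID CalDAV": "rich_text",
    "Descrizione": "rich_text",
    "Scadenza": "date",
    "Priorità": "select",
    "Luogo": "rich_text",
    "URL": "url",
    "Lista": "select",
    "Periodicità": "rich_text",
    "Ultima sync": "date",
    "Completato": "status",
}

def missing_properties(db_properties: dict) -> list[str]:
    """Return schema entries that are absent or mistyped."""
    problems = []
    for name, expected_type in REQUIRED_SCHEMA.items():
        prop = db_properties.get(name)
        if prop is None:
            problems.append(f"missing: {name}")
        elif prop.get("type") != expected_type:
            problems.append(f"wrong type: {name} ({prop.get('type')} != {expected_type})")
    return problems
```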
Synology Calendar — correct VTODO URL:
`CALDAV_URL` for `vtodo-notion` must point to the tasks endpoint, not the calendar endpoint. Example: `https://nas.example.com/caldav.php/username/home_todo/` (use `/home_todo/` for VTODO lists, not `/home/`, which is for VEVENT).
Open the database in a browser. The URL looks like:
https://www.notion.so/yourworkspace/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx?v=...
The 32-character hex string before `?v=` is your `NOTION_DATABASE_ID`.
In the database, click ... (top-right) → Connections → select your Notion integration.
Without this step the API token cannot read or write the database.
NOTION_API_TOKEN can be the same value as NOTION_TOKEN. Both are the same Notion integration secret (ntn_... or secret_...). Track 1 uses the official API, whose token never expires.
Track 2 exports the full Notion workspace as an HTML ZIP via Notion's internal API. It needs two browser session cookies that expire periodically (weeks to a few months).
If Track 2 fails, logs will show `[Track2] FAILED`. Track 1 always runs independently and is never blocked by Track 2. Renew the cookies as described below and restart the service.
- Open notion.so in Chrome/Firefox — log in
- Open DevTools (F12) → Application tab (Chrome) or Storage tab (Firefox)
- Left panel: Cookies → https://www.notion.so
- Find `token_v2` → copy its value → paste as `NOTION_TOKEN_V2` in `.env`
file_token is not in the static cookie list — it only appears in file download network requests:
- Open notion.so — log in
- Open DevTools (F12) → Network tab
- Navigate to a Notion page that has an image, PDF, or file attachment
- In the Network tab, filter by `notion.so/f/`
- Click one of those requests → Headers → Request Headers
- Find the `cookie:` header → copy the `file_token=...` value (between `file_token=` and the next `;`)
- Paste it as `NOTION_FILE_TOKEN` in `.env`
Alternative: DevTools → Network → trigger an export from the Notion UI (Settings → General → Export all workspace content) → find the `enqueueTask` request → Request Headers → `cookie:` → extract `file_token`.
- DevTools → Network tab → reload notion.so
- Filter requests by `api/v3` → click any request (e.g. `getSpaces`)
- Response JSON → find key `"space"` → the first key inside is your space ID (32-char hex)
- Paste it as `NOTION_SPACE_ID` in `.env`
```
docker compose restart notion-backup
docker compose logs notion-backup --tail=40
```

Set `NOTION_BACKUP_PATH` to an absolute path on the host and create the directory first:
```
# Synology NAS (SSH)
mkdir -p /volume1/docker/syncer/notion-backup

# Linux / Mac
mkdir -p /opt/notion-backup

# Windows PowerShell
New-Item -ItemType Directory -Force "C:\notion-backup"
```

Then set in `.env`:
```
# Synology
NOTION_BACKUP_PATH=/volume1/docker/syncer/notion-backup

# Windows (forward slashes required in .env)
NOTION_BACKUP_PATH=C:/notion-backup
```
The directory must exist before starting the container — Docker will fail to mount a non-existent bind path.
caldav-backup writes .ics files to CALDAV_BACKUP_PATH. Create the directory and set the path:
```
# Synology NAS (SSH)
mkdir -p /volume1/docker/syncer/caldav-backup

# Linux / Mac
mkdir -p /opt/caldav-backup

# Windows PowerShell
New-Item -ItemType Directory -Force "C:\caldav-backup"
```

Then in `.env`:

```
# Synology
CALDAV_BACKUP_PATH=/volume1/docker/syncer/caldav-backup

# Windows
CALDAV_BACKUP_PATH=C:/caldav-backup
```
```
# Build all images and start services in the background
docker compose up -d --build

# Follow all logs in real time
docker compose logs -f
```

Each service runs an initial sync/backup immediately on startup, then on its schedule.
```
# Start all services
docker compose up -d

# Stop all services (containers removed, data volumes kept)
docker compose down

# Restart a single service (e.g. after updating .env)
docker compose restart vtodo-notion

# Rebuild and restart after code changes
docker compose up -d --build vtodo-notion
```

```
# All services, live
docker compose logs -f

# Single service, last 100 lines + live
docker compose logs -f --tail=100 vtodo-notion
docker compose logs -f --tail=100 vdirsyncer
docker compose logs -f --tail=100 carddav-google-contacts
docker compose logs -f --tail=100 notion-backup
docker compose logs -f --tail=100 caldav-backup

# Container health status
docker compose ps
```

| Service | Healthy output | Warning signs |
|---|---|---|
| vtodo-notion | `Sync complete · errors=0` | `✗ ERROR` · `Circuit breaker triggered` · `Fatal sync error` |
| carddav-google-contacts | `Sync complete: Google (+0, ~0)...` | `Error updating Google contact` · `Circuit breaker triggered` |
| vdirsyncer | `Syncing caldav_gcal/...` (no `error:` lines) | `error:` · 401 / 403 · name resolution |
| notion-backup | `Tracks complete — JSON backup: OK` | `[Track1] Fatal` · `[Track2] FAILED` · token_v2 or file_token may have expired |
| caldav-backup | `Backup complete! Calendars: N` | `Error exporting` · `Required environment variable` |
vtodo-notion (CalDAV ↔ Notion tasks):
- In Notion, the `Ultima sync` column is updated on every processed task — sort by it descending to confirm
- Successful sync log lines:

```
CalDAV → Notion: created=0, updated=2, skipped=45, archived=0, errors=0
Notion → CalDAV: updated=1, skipped=46, archived=0, recurring_completed=0, errors=0
```
carddav-google-contacts (CardDAV ↔ Google Contacts):
- Open Google Contacts and check that they match your CardDAV address book.
- Successful sync log line: `Sync complete: Google (+0, ~1), CardDAV (+0, ~0), skipped 150, errors 0`
vdirsyncer (CalDAV ↔ Google Calendar):
- Open Google Calendar and check that events match your CalDAV server
- Telegram notification: you receive a `✅ vdirsyncer sync OK` heartbeat every 24 h (configurable via `NOTIFY_OK_EVERY_HOURS`)
- To force a manual sync: `docker compose exec vdirsyncer vdirsyncer sync`
- Sync runs every 15 minutes (override in `docker-compose.yml` via `SYNC_INTERVAL_MINUTES`)
notion-backup:
- Check `NOTION_BACKUP_PATH/json/manifest.json` — it contains `timestamp` and `total_pages`
- A new git commit appears in `NOTION_BACKUP_PATH/.git` after each successful Track 1 run: `git -C /your/backup/path log --oneline -5`
- Backup runs at `BACKUP_SCHEDULE` (default `0 21 * * *` = 22:00 CET)
caldav-backup:
- Check `CALDAV_BACKUP_PATH/manifest.json` for `timestamp` and item counts
- `.ics` files are updated in-place every hour
If TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID are set, all four services send alerts:
| Event | Who sends it |
|---|---|
| Sync errors > 20% of items | vtodo-notion, carddav-google-contacts |
| Circuit breaker activated | vtodo-notion, carddav-google-contacts |
| Fatal crash | vtodo-notion, carddav-google-contacts |
| Sync error (DNS, auth, etc.) | vdirsyncer |
| Daily heartbeat (sync OK) | vdirsyncer |
| Track 1 or Track 2 failed | notion-backup |
| | Track 1 — JSON (official API) | Track 2 — HTML ZIP (native export) |
|---|---|---|
| Auth | NOTION_API_TOKEN — never expires | Browser cookies — may expire |
| Output | Structured JSON per page/database | Full HTML, human-readable |
| Use case | Programmatic restore, diffs | Manual reading, disaster recovery |
Both tracks run concurrently. A failure in Track 2 (e.g. cookies expired) does not block Track 1.
Hardlink snapshots of the JSON backup are created after each successful Track 1 run:
| Tier | Kept | Folder name | Policy |
|---|---|---|---|
| Daily | last 7 | `YYYY-MM-DD` | Overwritten every day |
| Weekly | last 8 | `YYYY-Www` | Created once per ISO week, never overwritten |
Hardlinks mean unchanged files share the same inode — extra disk usage per snapshot equals only what actually changed in Notion that day.
```
NOTION_BACKUP_PATH/
├── json/                ← always current (latest Track 1 output)
│   ├── manifest.json    ← timestamp, page/db counts, all IDs + titles
│   ├── {page-id}/
│   │   ├── content.json ← page metadata
│   │   └── blocks.json  ← full block tree
│   └── {database-id}/
│       ├── content.json
│       ├── blocks.json
│       └── rows.json    ← all database rows
├── html/
│   ├── latest/          ← latest Track 2 export, unzipped
│   └── archives/        ← last 3 ZIPs (notion-export-YYYY-MM-DDTHH-MM-SSZ.zip)
├── snapshots/
│   ├── daily/
│   │   ├── 2025-01-17/  ← hardlink snapshot of json/ on that day
│   │   └── ...          (last 7 days)
│   └── weekly/
│       ├── 2025-W03/
│       └── ...          (last 8 weeks)
└── .git/                ← git repo — one commit per successful backup
```
The path at NOTION_BACKUP_PATH is a plain directory on your host filesystem:
| Platform | How to access |
|---|---|
| Synology File Station | Browse to docker/syncer/notion-backup |
| Windows (SMB) | \\NAS-NAME\docker\notion-backup |
| Mac / Linux (SMB) | smb://NAS-NAME/docker/notion-backup |
| Windows local | Explorer → C:\notion-backup |
| SCP / rsync | rsync user@nas:/volume1/docker/syncer/notion-backup ./ |
```
CALDAV_BACKUP_PATH/
├── calendar_<CalendarName>.ics  ← all VEVENT for that calendar
├── tasks_<ListName>.ics         ← all VTODO for that task list
└── manifest.json                ← timestamp, calendar list, item counts
```
Files are overwritten in-place on every backup (hourly). There is no rotation — the backup is a snapshot of the current CalDAV server state.
| Variable | Service | Required | Default | Description |
|---|---|---|---|---|
| `CALDAV_URL` | vdirsyncer, vtodo-notion, caldav-backup | ✓ | — | CalDAV server root URL |
| `CARDDAV_URL` | carddav-google-contacts | ✓ | — | CardDAV server root URL |
| `CALDAV_USERNAME` | all | ✓ | — | CalDAV / CardDAV username |
| `CALDAV_PASSWORD` | all | ✓ | — | CalDAV / CardDAV password |
| `GOOGLE_CLIENT_ID` | vdirsyncer, contacts | ✓ | — | Google OAuth client ID |
| `GOOGLE_CLIENT_SECRET` | vdirsyncer, contacts | ✓ | — | Google OAuth client secret |
| `GOOGLE_TOKEN_FILE` | vdirsyncer | ✓ | `/data/token/google.json` | Do not change |
| `GOOGLE_CONTACTS_TOKEN_FILE` | carddav-google-contacts | ✓ | `/data/token/google_contacts.json` | Generated by auth script |
| `SYNC_INTERVAL_MINUTES` | vdirsyncer, vtodo-notion, contacts | — | 60 / 10 / 30 | Sync interval in minutes |
| `NOTION_TOKEN` | vtodo-notion | ✓ | — | Notion integration token (`ntn_...`) |
| `NOTION_DATABASE_ID` | vtodo-notion | ✓ | — | Target Notion database ID |
| `NOTION_SYNC_LOG_PATH` | vtodo-notion | — | `./logs-vtodo` | Host path for sync log file |
| `NOTION_API_TOKEN` | notion-backup | ✓ | — | Notion integration token (can equal `NOTION_TOKEN`) |
| `NOTION_TOKEN_V2` | notion-backup | — | — | Browser cookie for native HTML export |
| `NOTION_FILE_TOKEN` | notion-backup | — | — | Browser cookie for file downloads |
| `NOTION_SPACE_ID` | notion-backup | — | — | Notion workspace ID for native export |
| `NOTION_BACKUP_PATH` | notion-backup | ✓ | — | Absolute host path for backup storage (must exist) |
| `BACKUP_SCHEDULE` | notion-backup | — | `0 21 * * *` | Cron expression (UTC) — default = 22:00 CET |
| `GIT_REMOTE_URL` | notion-backup | — | — | Git remote to push backup repo after each commit |
| `CALDAV_BACKUP_PATH` | caldav-backup | — | `./caldav-backup-output` | Host path for CalDAV .ics backup files |
| `TELEGRAM_BOT_TOKEN` | all | — | — | Telegram bot token from @BotFather |
| `TELEGRAM_CHAT_ID` | all | — | — | Your Telegram user or chat ID |
| `TARGETARCH` | all (build) | ✓ | `amd64` | `amd64` (Intel/AMD) or `arm64` |
- All services schedule themselves via supercronic — no external cron needed
- `vtodo-notion` is bidirectional: conflict resolution is based on the `last-modified` timestamp (most recent write wins)
- `carddav-google-contacts` is bidirectional: uses Google's People API, mapping the CardDAV `UID` to Google's `resourceName` via `externalIds`. Conflicts are resolved using a local SQLite cache and ETag matching.
- `vdirsyncer` is bidirectional: new/changed events propagate in both directions; when both sides differ simultaneously, GCal wins (`conflict_resolution = "b wins"`) — correct for shared meeting invitations where you are not the organizer. `My Calendar` (l.manca03@gmail.com) is excluded from sync to avoid 403 errors on read-only events.
- `notion-backup` Track 1 respects the Notion API rate limit (3 req/s, token-bucket)
- Snapshots use unlink-before-write: future writes to `json/` never corrupt the inodes of older snapshots
- Git commits happen only when Track 1 completes successfully
vCard BDAY fields can arrive in multiple formats (`19900115`, `--0115`, `1990-01-15`). Some third-party apps confuse DD/MM vs MM/DD, producing swapped birthdays. The sync service includes an automatic diagnostic that logs a WARNING for every ambiguous date (e.g. `02-01`, which could be Jan 2nd or Feb 1st) and always normalizes output to the People API format `{year, month, day}`.
Events created or modified via Apple Reminders (iOS/macOS) on a CalDAV calendar are stored with a non-standard UID of the form `x-apple-reminderkit://REMCDReminder/<UUID>`. vdirsyncer 0.20.x constructs a malformed Google Calendar API URL from this UID and fails with `Unknown error occurred`.
Fix: run the following one-time cleanup script to rewrite all Apple Reminders UIDs in-place to the plain UUID, then restart the container:
```
docker compose exec vdirsyncer python3 - << 'EOF'
import os, re, requests
URL = os.environ["CALDAV_URL"].rstrip("/")
USER = os.environ["CALDAV_USERNAME"]
PASS = os.environ["CALDAV_PASSWORD"]
# ... (see project history for full script)
EOF
```

After fixing, vdirsyncer will sync those events to GCal on the next run.
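The core of the cleanup is a simple UID rewrite; a sketch of the transformation only (the full script also fetches each event and PUTs the modified .ics back to CalDAV):

```python
import re

# Matches the non-standard Apple Reminders UID and captures the bare UUID.
APPLE_UID = re.compile(r"^x-apple-reminderkit://REMCDReminder/([0-9A-Fa-f-]+)$")

def clean_uid(uid: str) -> str:
    """Rewrite an Apple Reminders UID to the plain UUID; other UIDs pass through."""
    m = APPLE_UID.match(uid)
    return m.group(1) if m else uid

print(clean_uid("x-apple-reminderkit://REMCDReminder/8E35D9A4-1111-2222-3333-444455556666"))
# 8E35D9A4-1111-2222-3333-444455556666
```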
Outlook meeting invitations sometimes produce base64-encoded UIDs containing `/` (e.g. `TOThHNg0/EOUGF2rrxm+0w==`). The slash splits the CalDAV API URL path and causes `Unknown error occurred`.
Fix: these are always empty "Busy" blocks — safe to delete from CalDAV. They will be automatically removed from GCal on the next sync.
Goal: a lightweight service (notion-export-fetcher) that monitors a Gmail label for Notion export-ready emails, extracts the download link, downloads the ZIP, and stores it alongside the Track 2 backup — fully automating what currently requires a manual step.
Background: since late 2024, Notion's internal export API no longer returns a direct exportURL in the task result. Instead, Notion sends an email to the account owner with a short-lived file.notion.so/... download link. Track 2 of notion-backup triggers the export but cannot capture the file automatically.
Design:
- Read-only Gmail access via OAuth 2.0 (scope: `gmail.readonly`) — never writes or deletes
- Poll a specific Gmail label (e.g. `Notion/exports`) every few minutes
- Parse email body (HTML or plain text) for links matching `https://file.notion.so/...`
- Download the ZIP using the same `file_token` cookie already in `.env`
- Save to `notion-backup/backup/html/archives/` with an ISO timestamp filename
- Extract into `notion-backup/backup/html/latest/` (replacing the previous latest)
- Send a Telegram notification on success or download failure
- Mark processed emails to avoid re-downloading (via a local state file, not Gmail modification)
Implementation notes:
- Reuse `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` with scope `https://www.googleapis.com/auth/gmail.readonly`
- Separate OAuth token file for Gmail (`gmail.json` alongside `google.json` in `vdirsyncer/token/`)
- Label to watch: configurable via `GMAIL_NOTION_LABEL` in `.env`
- Download links expire after ~24 h — the polling interval should be ≤ 1 h after the first backup run of the day
Goal: a vtodo-gtasks container that mirrors VTODO state from CalDAV → Google Tasks (one-way, CalDAV is authoritative).
Design:
- CalDAV and Notion remain the two bidirectional sources of truth
- Google Tasks is a read-only mirror, useful for visibility in the Google ecosystem
- No sync back from Google Tasks (to avoid three-way conflict resolution)
Implementation notes:
- Use the Google Tasks REST API (Google rejects VTODO over CalDAV)
- Reuse `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` with scope `https://www.googleapis.com/auth/tasks`
- One CalDAV VTODO list → one Google Tasks list; use the VTODO `UID` as an idempotent anchor in task notes
- Fields without a Tasks equivalent (priority, RRULE, location): store as structured text in the `notes` field
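The mapping described above might look like the following sketch (a hypothetical helper operating on an already-parsed VTODO dict; field and key names are illustrative):

```python
def vtodo_to_gtask(vtodo: dict) -> dict:
    """Map a parsed VTODO to a Google Tasks insert payload (sketch).

    The UID is embedded in notes as an idempotent anchor; fields with no
    Tasks equivalent are appended as structured text, per the notes above.
    """
    notes_lines = [f"caldav-uid: {vtodo['uid']}"]
    for field in ("priority", "rrule", "location"):
        if vtodo.get(field):
            notes_lines.append(f"{field}: {vtodo[field]}")
    task = {
        "title": vtodo.get("summary", "(no title)"),
        "notes": "\n".join(notes_lines),
        "status": "completed" if vtodo.get("status") == "COMPLETED" else "needsAction",
    }
    if vtodo.get("due"):  # Tasks expects RFC 3339; only the date part is honored
        task["due"] = vtodo["due"]
    return task
```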