Todofy is a self-hosted task management tool designed to help you organize and prioritize your tasks efficiently. It's built as a collection of microservices communicating over gRPC, with email-driven task creation powered by Google Gemini LLM summarization.
```mermaid
graph TB
    %% External systems and users
    User[👤 User<br/>HTTP Client]
    Email[📧 Email System<br/>Cloudmailin Webhook]

    %% External services
    Gemini[🤖 Google Gemini<br/>LLM API]
    Mailjet[📬 Mailjet<br/>Email Service]
    Notion[📝 Notion<br/>Database API]
    Todoist[✅ Todoist<br/>Task API v1]

    %% Main HTTP server
    Main[🌐 Todofy Main Server<br/>HTTP REST API<br/>Port: 8080<br/>Basic Auth + Rate Limiting]

    %% Microservices
    LLM[🧠 LLM Service<br/>gRPC Server<br/>Port: 50051]
    Todo[📋 Todo Service<br/>gRPC Server<br/>Port: 50052]
    DB[🗄️ Database Service<br/>gRPC Server<br/>Port: 50053<br/>SQLite Backend]

    %% API endpoints
    Summary[📊 /api/summary<br/>GET endpoint]
    UpdateTodo[📝 /api/v1/update_todo<br/>POST endpoint]
    Recommend[🏆 /api/recommendation<br/>GET endpoint<br/>?top=N]

    %% Data flow connections
    User -->|HTTPS GET| Summary
    User -->|HTTPS GET| Recommend
    Email -->|Webhook POST| UpdateTodo
    Summary --> Main
    UpdateTodo --> Main
    Recommend --> Main
    Main -->|gRPC Health Check| LLM
    Main -->|gRPC Health Check| Todo
    Main -->|gRPC Health Check| DB

    %% Dedup cache flow: check DB first, then conditionally call LLM
    Main -->|CheckExist<br/>hash_id lookup| DB
    Main -.->|LLMSummaryRequest<br/>only on cache miss| LLM
    Main -->|TodoRequest| Todo
    Main -->|Write<br/>with hash_id| DB
    LLM -->|API Calls| Gemini
    Todo -->|Email Send| Mailjet
    Todo -->|Task Creation| Notion
    Todo -->|Task Creation| Todoist

    %% Container images
    Main -.->|Container| MainContainer[🐳 ghcr.io/ziyixi/todofy:latest]
    LLM -.->|Container| LLMContainer[🐳 ghcr.io/ziyixi/todofy-llm:latest]
    Todo -.->|Container| TodoContainer[🐳 ghcr.io/ziyixi/todofy-todo:latest]
    DB -.->|Container| DBContainer[🐳 ghcr.io/ziyixi/todofy-database:latest]

    %% Styling
    classDef external fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    classDef service fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    classDef endpoint fill:#e8f5e8,stroke:#388e3c,stroke-width:2px
    classDef container fill:#fff3e0,stroke:#f57c00,stroke-width:2px,stroke-dasharray: 5 5
    class User,Email,Gemini,Mailjet,Notion,Todoist external
    class Main,LLM,Todo,DB service
    class Summary,UpdateTodo,Recommend endpoint
    class MainContainer,LLMContainer,TodoContainer,DBContainer container
```
- Task Management: Core functionality for creating, updating, and managing tasks.
- LLM Integration: Leverages Google Gemini models for email summarization with automatic model fallback (via the `todofy-llm` service).
- Cost Controls: Daily token limit with a 24-hour sliding window (default: 3M tokens) to prevent runaway API costs, plus email content truncation (50K-character hard limit).
- Dedup Cache: SHA-256 hash-based deduplication — identical emails skip the expensive LLM call and reuse the cached summary from the database.
- Task Recommendations: `GET /api/recommendation?top=N` queries tasks from the last 24 hours, asks the LLM to pick the top-N most important ones (default 3, max 10), and returns structured JSON with a rank, title, and reason for each.
- Email/API Task Population: Allows tasks to be populated or managed via email or API interactions (via the `todofy-todo` service).
- Persistent Storage: Uses SQLite for storing task data with hash-indexed lookups (via the `todofy-database` service).
- Containerized Services: All components are containerized with Docker for easy deployment and scaling.
- Comprehensive Testing: Unit tests, e2e tests with mock Gemini client injection, and Docker-based integration tests.
The LLM service uses Google Gemini for email summarization with several cost-control and reliability features:
Models are tried in priority order for automatic fallback:

1. `gemini-2.5-flash-lite` (fastest, cheapest)
2. `gemini-2.5-flash` (balanced)
3. `gemini-3-flash-preview` (latest)

Additionally, `gemini-2.5-pro` is available when explicitly requested.
| Feature | Default | Description |
|---|---|---|
| Daily token limit | 3,000,000 | 24-hour sliding window; configurable via the `--daily-token-limit` flag (0 = unlimited) |
| Email content limit | 50,000 chars | Hard truncation of the email body before LLM processing |
| Token counting | Per-request | Content is iteratively truncated (to 90% of its length) until under the per-model token limit (1M tokens) |
| Dedup cache | Always on | SHA-256 hash of prompt + email content; duplicate emails return the cached summary without an LLM call |
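The iterative truncation in the table above can be sketched as a loop that shaves content to 90% of its current length until the token counter reports it fits. The `countTokens` function here is a toy stand-in (the real service counts tokens via the Gemini API):

```go
package main

import "fmt"

// truncateToLimit repeatedly cuts content to 90% of its current length
// until countTokens reports it is under the limit. The slice here is
// byte-based for simplicity; a real implementation should cut on rune
// boundaries to avoid splitting UTF-8 sequences.
func truncateToLimit(content string, limit int, countTokens func(string) int) string {
	for len(content) > 0 && countTokens(content) > limit {
		content = content[:len(content)*9/10]
	}
	return content
}

func main() {
	// Toy counter: one token per 4 characters (an assumption for the demo).
	count := func(s string) int { return len(s) / 4 }
	long := make([]byte, 10000)
	for i := range long {
		long[i] = 'a'
	}
	out := truncateToLimit(string(long), 100, count)
	fmt.Println(count(out) <= 100) // true
}
```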
| Flag | Default | Description |
|---|---|---|
| `--port` | `50051` | gRPC server port |
| `--gemini-api-key` | (required) | Google Gemini API key |
| `--daily-token-limit` | `3000000` | Max tokens per 24h sliding window (0 = unlimited) |
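These flags could be wired up with Go's standard `flag` package along the following lines. This is a sketch, not the service's actual entry point; the function form (rather than a package-level `flag.Parse`) just keeps the parsing testable:

```go
package main

import (
	"errors"
	"flag"
	"fmt"
)

// parseFlags registers the three flags from the table above on a
// dedicated FlagSet and validates that the API key was provided.
func parseFlags(args []string) (port int, apiKey string, dailyLimit int64, err error) {
	fs := flag.NewFlagSet("todofy-llm", flag.ContinueOnError)
	p := fs.Int("port", 50051, "gRPC server port")
	k := fs.String("gemini-api-key", "", "Google Gemini API key (required)")
	d := fs.Int64("daily-token-limit", 3000000, "Max tokens per 24h sliding window (0 = unlimited)")
	if err := fs.Parse(args); err != nil {
		return 0, "", 0, err
	}
	if *k == "" {
		return 0, "", 0, errors.New("--gemini-api-key is required")
	}
	return *p, *k, *d, nil
}

func main() {
	port, _, limit, err := parseFlags([]string{"--gemini-api-key", "demo", "--port", "6000"})
	if err != nil {
		panic(err)
	}
	fmt.Println(port, limit) // 6000 3000000
}
```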
The application is composed of the following services:

- **Todofy (Main App)**
  - Description: The primary user-facing application and HTTP API gateway.
  - Dockerfile: `./Dockerfile`
  - Default Port: `8080` (configurable via the `PORT` env var)
  - Image: `ghcr.io/ziyixi/todofy:latest`
- **LLM Service (`todofy-llm`)**
  - Description: Email summarization via Google Gemini, with model fallback and daily token tracking.
  - Dockerfile: `llm/Dockerfile`
  - Default Port: `50051` (configurable via the `--port` flag)
  - Image: `ghcr.io/ziyixi/todofy-llm:latest`
- **Todo Service (`todofy-todo`)**
  - Description: Manages task creation via Todoist (API v1), Notion, and email (Mailjet).
  - Dockerfile: `todo/Dockerfile`
  - Default Port: `50052` (configurable via the `--port` flag)
  - Image: `ghcr.io/ziyixi/todofy-todo:latest`
- **Database Service (`todofy-database`)**
  - Description: Provides database access and management using SQLite. Supports the `Write`, `QueryRecent`, and `CheckExist` (hash-based dedup lookup) RPCs.
  - Dockerfile: `database/Dockerfile`
  - Default Port: `50053` (configurable via the `PORT` env var)
  - Image: `ghcr.io/ziyixi/todofy-database:latest`
The CI/CD pipeline uses GitHub Actions with reusable workflows organized as a dependency graph:
```mermaid
graph LR
    T[Test] --> B[Build]
    L[Lint] --> B
    S[Security] --> B
    I[Integration Test] --> B
    T --> N[Notify]
    L --> N
    S --> N
    I --> N
```
| Workflow | Description |
|---|---|
| Test | Runs `go test -race` with coverage, uploads results to Codecov |
| Lint | Runs `golangci-lint` |
| Security | Runs `gosec` with SARIF upload to GitHub Security |
| Integration Test | Builds all 4 Docker images and validates them with health checks |
| Build | Pushes Docker images to GHCR on `main`, only when build-relevant files change (or on manual dispatch) |
| Notify | Reports pass/fail status |
Docker images for each service are automatically built and pushed to GitHub Container Registry (GHCR) by the CI/CD pipeline. You can pull them using:
```bash
docker pull ghcr.io/ziyixi/todofy:latest
docker pull ghcr.io/ziyixi/todofy-llm:latest
docker pull ghcr.io/ziyixi/todofy-todo:latest
docker pull ghcr.io/ziyixi/todofy-database:latest
```
Run all tests:

```bash
go test ./...
```

Run with coverage:

```bash
go test -race -coverprofile=coverage.out -covermode=atomic ./...
go tool cover -func=coverage.out
```

The LLM service includes e2e tests with a mock Gemini client (no real API calls or costs), covering:
- Full summarization flow and model fallback
- Daily token limit enforcement and sliding window expiry
- Token usage tracking (with `UsageMetadata` and `CountTokens` fallback)
- Content truncation for oversized inputs
- Error handling (empty responses, client failures, missing API key)
The database service includes tests for:
- `CheckExist` RPC — cache hit, cache miss, empty hash validation, uninitialized DB
- Full integration workflow: create → write (with `hash_id`) → query → `CheckExist` verification
The recommendation handler includes tests for:
- No tasks / database error / LLM error handling
- Valid JSON parsing with correct ranks, titles, and reasons
- Markdown code fence stripping (`` ```json ... ``` ``)
- Fallback when the LLM returns plain text instead of JSON
- `?top=N` parameter validation (default 3, range 1-10, invalid values)
- Prompt content verification (correct format string interpolation)
- `task_count` reflects DB entries, not the recommendation count
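The fence stripping and JSON parsing tested above can be sketched as follows. The struct field names are an assumption for illustration (the source only specifies that each item carries a rank, title, and reason):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Recommendation mirrors the rank/title/reason shape described above.
type Recommendation struct {
	Rank   int    `json:"rank"`
	Title  string `json:"title"`
	Reason string `json:"reason"`
}

// stripFences removes a surrounding ```json ... ``` fence, which LLMs
// often wrap around JSON output.
func stripFences(s string) string {
	s = strings.TrimSpace(s)
	if strings.HasPrefix(s, "```") {
		s = strings.TrimPrefix(s, "```json")
		s = strings.TrimPrefix(s, "```")
		s = strings.TrimSuffix(strings.TrimSpace(s), "```")
	}
	return strings.TrimSpace(s)
}

// parseRecommendations returns an error on non-JSON input so the caller
// can fall back to plain-text handling.
func parseRecommendations(raw string) ([]Recommendation, error) {
	var recs []Recommendation
	if err := json.Unmarshal([]byte(stripFences(raw)), &recs); err != nil {
		return nil, err
	}
	return recs, nil
}

func main() {
	raw := "```json\n[{\"rank\":1,\"title\":\"Pay rent\",\"reason\":\"Due tomorrow\"}]\n```"
	recs, err := parseRecommendations(raw)
	fmt.Println(len(recs), recs[0].Title, err) // 1 Pay rent <nil>
}
```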