A complete logging and tracing solution with SDK, server, and web UI for real-time monitoring. Similar to Datadog, but self-hosted.
- 📦 SDK - JavaScript/TypeScript logger and tracer for Node.js, Browser, and Tauri
- 🚀 Server - High-performance API with WebSocket real-time streaming
- 🖥️ Web UI - Beautiful dark-themed dashboard with live updates
- 📖 Swagger UI - Interactive API documentation with OpenAPI 3.0 spec
- 🚨 Error Tracking - Automatic error grouping with fingerprinting (like Sentry)
- 🔍 Distributed Tracing - Full tracing support with waterfall visualization (like Datadog APM)
- 🗑️ Data Retention - Configurable retention periods with automatic cleanup
- 🐳 Docker - One-command deployment with Docker Compose
```
/trace-dock
├── packages/
│   ├── sdk/      # TypeScript SDK for logging
│   ├── server/   # Hono API server with WebSocket
│   └── web/      # Vue 3 + Vite web interface
├── docker-compose.yml
└── package.json
```
- Node.js 20+
- pnpm 8+
```bash
# Clone the repository
git clone https://github.com/JeepayJipex/trace-dock.git
cd trace-dock

# Install dependencies
pnpm install

# Start both server and web UI
pnpm dev

# Or start individually
pnpm dev:server   # Server on http://localhost:3001
pnpm dev:web      # Web UI on http://localhost:5173
```

The easiest way to deploy Trace-Dock is using the unified Docker image that includes both the web UI and server:
```bash
# Quick start with SQLite
docker run -d \
  --name trace-dock \
  -p 8080:80 \
  -v trace-dock-data:/app/data \
  jeepayjipex/trace-dock:latest
```

Access the application at http://localhost:8080.
For more options (PostgreSQL, MySQL, etc.), see the Docker README.
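One such option, sketched here on the assumption that the unified image reads the same `DB_TYPE` and `DATABASE_URL` variables documented for the server below (the Docker README is authoritative, and note that the database section below describes SQLite as the only fully implemented backend today):

```bash
# Hypothetical: unified image backed by an external PostgreSQL instance
docker run -d \
  --name trace-dock \
  -p 8080:80 \
  -e DB_TYPE=postgresql \
  -e DATABASE_URL=postgres://user:password@db-host:5432/tracedock \
  jeepayjipex/trace-dock:latest
```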
```bash
# Build and start all services
pnpm docker:up

# Or using docker-compose directly
docker-compose up -d --build

# View logs
docker-compose logs -f

# Stop services
pnpm docker:down
```

Access the application:
- Web UI: http://localhost:8080
- API: http://localhost:3001
- Swagger UI: http://localhost:3001/ui
- OpenAPI Spec: http://localhost:3001/doc
- WebSocket: ws://localhost:3001/live
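To verify the stack is up, both services answer on their root paths (see the health-checks section below):

```bash
# Quick smoke test: both should return HTTP 200
curl -i http://localhost:3001/   # API server status
curl -i http://localhost:8080/   # Web UI served by nginx
```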
```bash
npm install @trace-dock/sdk
# or
pnpm add @trace-dock/sdk
# or
yarn add @trace-dock/sdk
```

```ts
import { createLogger } from '@trace-dock/sdk';
const logger = createLogger({
endpoint: 'http://localhost:3001/ingest', // Dev: direct to server
// endpoint: 'http://localhost:8080/api/ingest', // Docker: through nginx proxy
appName: 'my-app',
});
// Log messages
logger.debug('Debug message', { extra: 'data' });
logger.info('User logged in', { userId: 123 });
logger.warn('High memory usage', { usage: '85%' });
logger.error('Database connection failed', { error: new Error('Connection refused') });
```

```ts
import { createTracer } from '@trace-dock/sdk';
const tracer = createTracer({
endpoint: 'http://localhost:3001/ingest', // Dev: direct to server
// endpoint: 'http://localhost:8080/api/ingest', // Docker: through nginx proxy
appName: 'my-app',
});
// Trace an entire operation
const result = await tracer.withTrace('process-order', async () => {
// Track sub-operations with spans
const user = await tracer.withSpan('fetch-user', async () => {
return await db.users.findById(userId);
}, { operationType: 'db' });
const payment = await tracer.withSpan('charge-payment', async () => {
return await stripe.charges.create({ amount, customer: user.stripeId });
}, { operationType: 'http' });
return { user, payment };
});
```

```ts
const logger = createLogger({
// Required
endpoint: 'http://localhost:3001/ingest',
appName: 'my-app',
// Optional
sessionId: 'custom-session-id', // Auto-generated if not provided
enableWebSocket: true, // Enable real-time streaming
wsEndpoint: 'ws://localhost:3001/live',
batchSize: 10, // Batch logs before sending
flushInterval: 5000, // Flush interval in ms
maxRetries: 3, // Max retry attempts
debug: false, // Console log in development
// Global metadata added to all logs
metadata: {
version: '1.0.0',
environment: 'production',
},
// Error handler
onError: (error) => {
console.error('Logger error:', error);
},
});
```

```ts
const tracer = createTracer({
// Required
endpoint: 'http://localhost:3001/ingest',
appName: 'my-app',
// Optional
sessionId: 'custom-session-id',
debug: false,
metadata: {},
spanTimeout: 300000, // Auto-end spans after 5 minutes
onError: (error) => console.error('Tracer error:', error),
});
```

```ts
// Create a child logger with additional context
const userLogger = logger.child({
userId: 123,
module: 'auth',
});
userLogger.info('User action'); // Includes userId and module
```
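A common use is one child logger per request so every entry carries request context. A minimal sketch, assuming an Express-style app (`express`, the `requestId` field, and the `/health` route are illustrative, not part of this repo):

```ts
import express from 'express';
import { randomUUID } from 'node:crypto';
import { createLogger } from '@trace-dock/sdk';

const app = express();
const logger = createLogger({
  endpoint: 'http://localhost:3001/ingest',
  appName: 'my-app',
});

// Give every request its own child logger with a correlation ID
app.use((req, res, next) => {
  res.locals.log = logger.child({ requestId: randomUUID(), path: req.path });
  res.locals.log.info('Request received');
  next();
});

app.get('/health', (req, res) => {
  res.locals.log.info('Health check');
  res.json({ ok: true });
});

app.listen(3000);
```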
```ts
// Get current session ID
const sessionId = logger.getSessionId();
// Set new session ID (e.g., after user login)
logger.setSessionId('new-session-id');
```

```html
<script type="module">
import { createLogger } from '@trace-dock/sdk';
const logger = createLogger({
endpoint: '/api/ingest',
appName: 'web-app',
});
window.onerror = (message, source, line, col, error) => {
logger.error('Uncaught error', { message, source, line, col, error });
};
logger.info('App initialized');
</script>
```

```ts
import { createLogger } from '@trace-dock/sdk';
const logger = createLogger({
endpoint: 'http://localhost:3001/ingest',
appName: 'node-app',
});
process.on('uncaughtException', (error) => {
logger.error('Uncaught exception', { error });
process.exit(1);
});
process.on('unhandledRejection', (reason) => {
logger.error('Unhandled rejection', { reason });
});
logger.info('Server started', { port: 3000 });
```

```ts
import { createLogger } from '@trace-dock/sdk';
const logger = createLogger({
endpoint: 'http://localhost:3001/ingest',
appName: 'tauri-app',
});
// Tauri environment is auto-detected
logger.info('Tauri app started');
```

Trace Dock provides interactive API documentation via Swagger UI, powered by OpenAPI 3.0.
- Swagger UI: http://localhost:3001/ui - Interactive API explorer to test endpoints directly
- OpenAPI Spec: http://localhost:3001/doc - Raw OpenAPI 3.0 JSON specification
The Swagger UI allows you to:
- Browse all available endpoints organized by category (Logs, Traces, Error Groups, Settings)
- View request/response schemas with examples
- Test API calls directly from the browser
- Download the OpenAPI spec for code generation
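For example, the spec can be downloaded and fed into a generator; `@openapitools/openapi-generator-cli` is one illustrative tool choice, not something the repo ships:

```bash
# Download the OpenAPI 3.0 spec
curl http://localhost:3001/doc -o openapi.json

# Hypothetical: generate a TypeScript client from it
npx @openapitools/openapi-generator-cli generate \
  -i openapi.json -g typescript-fetch -o ./generated-client
```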
Ingest a new log entry.
```bash
curl -X POST http://localhost:3001/ingest \
  -H "Content-Type: application/json" \
  -d '{
    "id": "uuid",
    "timestamp": "2024-01-01T00:00:00.000Z",
    "level": "info",
    "message": "Test log",
    "appName": "test-app",
    "sessionId": "session-123",
    "environment": { "type": "node" }
  }'
```

Fetch logs with pagination and filtering.
```bash
# Get all logs
curl http://localhost:3001/logs

# With filters
curl "http://localhost:3001/logs?level=error&appName=my-app&limit=100&offset=0"
```

Query Parameters (see the TypeScript example after this list):

- `level` - Filter by log level (debug, info, warn, error)
- `appName` - Filter by application name
- `sessionId` - Filter by session ID
- `search` - Full-text search
- `startDate` - Filter by start date (ISO format)
- `endDate` - Filter by end date (ISO format)
- `limit` - Number of results (default: 50, max: 1000)
- `offset` - Pagination offset
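A minimal sketch of calling this endpoint from TypeScript; only the query parameters above are assumed here, and the exact response schema is documented in the Swagger UI:

```ts
// Fetch the last 100 error logs for one app
const params = new URLSearchParams({
  level: 'error',
  appName: 'my-app',
  limit: '100',
  offset: '0',
});

const res = await fetch(`http://localhost:3001/logs?${params}`);
if (!res.ok) throw new Error(`Request failed: ${res.status}`);

const body = await res.json();
console.log(body);
```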
Get a single log entry by ID.
Get log statistics.
```json
{
  "total": 1234,
  "byLevel": { "debug": 100, "info": 800, "warn": 200, "error": 134 },
  "byApp": { "my-app": 1000, "other-app": 234 }
}
```

Get list of unique application names.
Get list of session IDs.
Get error groups with pagination and filtering.
```bash
curl "http://localhost:3001/error-groups?status=unreviewed&appName=my-app&limit=20"
```

Query Parameters:

- `appName` - Filter by application name
- `status` - Filter by status (unreviewed, reviewed, ignored, resolved)
- `search` - Search in error messages
- `sortBy` - Sort by field (last_seen, first_seen, occurrence_count)
- `sortOrder` - Sort order (asc, desc)
- `limit` - Number of results (default: 20)
- `offset` - Pagination offset
Get error group statistics.
```json
{
  "totalGroups": 42,
  "totalOccurrences": 1234,
  "byStatus": { "unreviewed": 10, "reviewed": 20, "ignored": 5, "resolved": 7 },
  "byApp": { "my-app": 30, "other-app": 12 },
  "recentTrend": [{ "date": "2024-01-01", "count": 15 }]
}
```

Get a single error group by ID.
Update error group status.
```bash
curl -X PATCH http://localhost:3001/error-groups/uuid/status \
  -H "Content-Type: application/json" \
  -d '{ "status": "resolved" }'
```

Get all log occurrences for an error group.
Get traces with pagination and filtering.
```bash
curl "http://localhost:3001/traces?appName=my-app&status=completed&minDuration=100"
```

Query Parameters:

- `appName` - Filter by application name
- `sessionId` - Filter by session ID
- `status` - Filter by status (running, completed, error)
- `name` - Search by trace name
- `minDuration` - Minimum duration in ms
- `maxDuration` - Maximum duration in ms
- `startDate` - Filter by start date
- `endDate` - Filter by end date
- `limit` - Number of results (default: 20)
- `offset` - Pagination offset
Get trace statistics.
```json
{
  "totalTraces": 500,
  "avgDurationMs": 245.5,
  "byStatus": { "running": 2, "completed": 480, "error": 18 },
  "byApp": { "my-app": 400, "other-app": 100 },
  "recentTrend": [{ "date": "2024-01-01", "count": 50, "avgDuration": 230 }]
}
```

Get a single trace with all spans and associated logs.
```json
{
  "trace": { "id": "...", "name": "HTTP GET /users", "durationMs": 245, ... },
  "spans": [
    { "id": "...", "name": "db.query", "durationMs": 45, "parentSpanId": null, ... },
    { "id": "...", "name": "cache.get", "durationMs": 2, "parentSpanId": "...", ... }
  ],
  "logs": [
    { "id": "...", "message": "Fetching users", "traceId": "...", "spanId": "...", ... }
  ]
}
```

Create a new trace.
```bash
curl -X POST http://localhost:3001/traces \
  -H "Content-Type: application/json" \
  -d '{
    "name": "process-order",
    "appName": "my-app",
    "sessionId": "session-123"
  }'
```

Update a trace (end it or change status).
```bash
curl -X PATCH http://localhost:3001/traces/uuid \
  -H "Content-Type: application/json" \
  -d '{
    "endTime": "2024-01-01T00:01:00.000Z",
    "durationMs": 60000,
    "status": "completed"
  }'
```

Create a new span within a trace.
```bash
curl -X POST http://localhost:3001/spans \
  -H "Content-Type: application/json" \
  -d '{
    "traceId": "trace-uuid",
    "name": "db.query.users",
    "operationType": "db",
    "parentSpanId": "parent-span-uuid"
  }'
```

Update a span (end it or change status).
```bash
curl -X PATCH http://localhost:3001/spans/uuid \
  -H "Content-Type: application/json" \
  -d '{
    "endTime": "2024-01-01T00:00:01.000Z",
    "durationMs": 45,
    "status": "completed"
  }'
```

Get current retention and cleanup settings.
```json
{
  "logsRetentionDays": 7,
  "tracesRetentionDays": 14,
  "spansRetentionDays": 14,
  "errorGroupsRetentionDays": 30,
  "cleanupEnabled": true,
  "cleanupIntervalHours": 1
}
```

Update retention and cleanup settings.
```bash
curl -X PATCH http://localhost:3001/settings \
  -H "Content-Type: application/json" \
  -d '{
    "logsRetentionDays": 14,
    "cleanupEnabled": true,
    "cleanupIntervalHours": 2
  }'
```

Get storage statistics.
```json
{
  "totalLogs": 12345,
  "totalTraces": 500,
  "totalSpans": 2500,
  "totalErrorGroups": 42,
  "databaseSizeBytes": 10485760,
  "oldestLog": "2024-01-01T00:00:00.000Z",
  "oldestTrace": "2024-01-01T00:00:00.000Z"
}
```

Trigger manual cleanup based on current retention settings.
```json
{
  "logsDeleted": 150,
  "tracesDeleted": 25,
  "spansDeleted": 100,
  "errorGroupsDeleted": 5,
  "durationMs": 45
}
```

Real-time log streaming.
```ts
const ws = new WebSocket('ws://localhost:3001/live');

ws.onmessage = (event) => {
  const { type, data } = JSON.parse(event.data);
  if (type === 'log') {
    console.log('New log:', data);
  }
};
```

- Live Mode - Real-time log streaming via WebSocket
- Filtering - Filter by level, app, session, date range, and text search
- Advanced Search - Datadog-like search syntax (`level:error app:myapp key:value`)
- Detail View - Expandable log entries with full metadata and stack traces
- Error Tracking - Automatic error grouping with:
- Fingerprint-based deduplication
- Status management (unreviewed, reviewed, ignored, resolved)
- Occurrence history with charts
- Bulk actions for triaging
- Option to hide ignored errors from the main feed
- Distributed Tracing - Full APM-like tracing with:
- Waterfall timeline visualization
- Nested span hierarchy
- Duration breakdown
- Associated logs per trace
- Status indicators (running, completed, error)
- Data Retention & Cleanup - Automatic data management:
- Configurable retention periods per data type (logs, traces, spans, error groups)
- Automatic cleanup job (runs hourly by default)
- Manual cleanup trigger
- Storage statistics (database size, record counts, oldest data)
  - Set retention to 0 to disable cleanup for specific types (see the example after this list)
- Dark Theme - Beautiful dark UI optimized for readability
- Responsive - Works on desktop and mobile
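For example, assuming the `PATCH /settings` endpoint shown earlier, cleanup can be disabled for logs alone by setting their retention to 0:

```bash
# Keep logs forever, but still clean up traces after 14 days
curl -X PATCH http://localhost:3001/settings \
  -H "Content-Type: application/json" \
  -d '{ "logsRetentionDays": 0, "tracesRetentionDays": 14 }'
```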
```bash
pnpm build
```

Or build packages individually:

```bash
pnpm build:sdk
pnpm build:server
pnpm build:web
```

The project includes comprehensive tests for all packages using Vitest.
```bash
# Run all tests across the monorepo
pnpm test:run

# Run tests in watch mode
pnpm test

# Run tests for specific packages
pnpm test:sdk      # SDK tests (44 tests)
pnpm test:server   # Server tests (53 tests)
pnpm test:web      # Web tests (19 tests)
```

- SDK: Uses MSW (Mock Service Worker) for API mocking
- Server: Uses in-memory SQLite (`:memory:`) for test isolation
- Web: Uses happy-dom for Vue component testing
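To illustrate the SDK testing approach, an MSW handler can intercept ingest calls so tests never touch a real server. A minimal sketch using the MSW v2 API; the handler below is illustrative, not copied from the repo's suite:

```ts
import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';

// Intercept SDK ingest calls and reply with a canned success
const server = setupServer(
  http.post('http://localhost:3001/ingest', async ({ request }) => {
    const body = await request.json();
    console.log('Intercepted log:', body);
    return HttpResponse.json({ ok: true });
  })
);

server.listen();   // Start intercepting (e.g., in beforeAll)
// ... exercise code that uses the SDK logger ...
server.close();    // Stop intercepting (e.g., in afterAll)
```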
```bash
# Generate test logs (manual testing)
node -e "
const { createLogger } = require('./packages/sdk/dist');
const logger = createLogger({
  endpoint: 'http://localhost:3001/ingest',
  appName: 'test'
});
logger.info('Test log');
logger.error('Test error', { error: new Error('Test') });
"
```

| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | Server port |
| `DB_TYPE` | `sqlite` | Database type (sqlite, postgresql, mysql) |
| `DATABASE_URL` | `./data/trace-dock.sqlite` | Database connection URL |
| `DATA_DIR` | `./data` | SQLite database directory (legacy) |
| `DB_PATH` | `${DATA_DIR}/trace-dock.sqlite` | Database file path (legacy) |
| `DB_DEBUG` | `false` | Enable database debug logging |
| `CORS_ORIGINS` | `http://localhost:5173,...` | Comma-separated allowed origins |
| `CORS_ALLOW_ALL` | `false` | Allow all origins (use with caution) |
| Variable | Default | Description |
|---|---|---|
| `VITE_API_URL` | `/api` | API base URL |
| `VITE_WS_URL` | Auto-detected | WebSocket URL |
Trace Dock uses Drizzle ORM and supports multiple database backends. The database is automatically initialized on first startup - no manual migration is required.
- Choose your database by setting environment variables:

```bash
# SQLite (default - no setup needed)
DB_TYPE=sqlite
DATABASE_URL=./data/trace-dock.sqlite

# PostgreSQL
DB_TYPE=postgresql
DATABASE_URL=postgres://user:password@localhost:5432/tracedock

# MySQL
DB_TYPE=mysql
DATABASE_URL=mysql://user:password@localhost:3306/tracedock
```

- Start the server - tables are created automatically:

```bash
pnpm dev:server
# or
pnpm docker:up
```

That's it! The server will create all necessary tables on startup.
Switching between database types (e.g., SQLite → PostgreSQL) will result in data loss.
Data lives only in the database you configure. If you change `DB_TYPE`:
- Your existing data stays in the old database
- The new database starts empty
- There is no automatic migration between database types
If you need to switch databases:

- Export your data from the old database, if needed (see the sketch after this list)
- Change the `DB_TYPE` and `DATABASE_URL` environment variables
- Restart the server (new empty tables will be created)
- Import your data (if applicable)
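For the default SQLite backend, a plain dump is one way to export before switching; converting that dump for PostgreSQL or MySQL is left to dedicated tools:

```bash
# Dump the existing SQLite database to SQL before switching
sqlite3 ./data/trace-dock.sqlite .dump > trace-dock-backup.sql
```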
For development and advanced usage, you can use these scripts:

```bash
# Generate Drizzle migrations (for contributing)
pnpm --filter @trace-dock/server db:generate --type=sqlite
pnpm --filter @trace-dock/server db:generate --type=postgresql
pnpm --filter @trace-dock/server db:generate --type=mysql

# Initialize database manually (usually not needed)
pnpm --filter @trace-dock/server db:setup --type=sqlite
```

Trace Dock supports multiple database backends via Drizzle ORM:
SQLite is the default database, perfect for development and small deployments.

```bash
# Default configuration - no setup needed
DB_TYPE=sqlite
DATABASE_URL=./data/trace-dock.sqlite
```

For production deployments with higher concurrency needs:

```bash
DB_TYPE=postgresql
DATABASE_URL=postgres://user:password@localhost:5432/tracedock
```

Docker Compose example with PostgreSQL:
```yaml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: tracedock
      POSTGRES_USER: tracedock
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres-data:/var/lib/postgresql/data
  server:
    environment:
      - DB_TYPE=postgresql
      - DATABASE_URL=postgres://tracedock:secret@postgres:5432/tracedock
```

```bash
DB_TYPE=mysql
DATABASE_URL=mysql://user:password@localhost:3306/tracedock
```

Note: PostgreSQL and MySQL support requires implementing the respective repository adapters. Currently, only SQLite is fully implemented. The schema definitions for PostgreSQL and MySQL are ready in `server/src/db/schema/`.
```bash
# Build and start all services (SQLite)
pnpm docker:up

# Or using docker-compose directly
docker-compose up -d --build

# View logs
docker-compose logs -f

# Stop services
pnpm docker:down
```

Create a `.env` file in the root directory:
```bash
# Server port (default: 3000)
SERVER_PORT=3000

# Web port (default: 8080)
WEB_PORT=8080

# Database type: sqlite | postgresql | mysql
DB_TYPE=sqlite

# Database URL (for PostgreSQL/MySQL)
# DATABASE_URL=postgres://user:pass@host:5432/tracedock

# CORS origins
CORS_ORIGINS=http://localhost:8080,http://localhost:3001
```

Create a `docker-compose.override.yml` for PostgreSQL:
```yaml
version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    container_name: trace-dock-postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: tracedock
      POSTGRES_USER: tracedock
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U tracedock"]
      interval: 10s
      timeout: 5s
      retries: 5

  server:
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      - DB_TYPE=postgresql
      - DATABASE_URL=postgres://tracedock:${POSTGRES_PASSWORD:-changeme}@postgres:5432/tracedock

volumes:
  postgres-data:
    driver: local
```

Data is persisted using Docker volumes:
```yaml
volumes:
  trace-dock-data:   # SQLite database
    driver: local
  postgres-data:     # PostgreSQL data (if using)
    driver: local
```

Both services include health checks:

- Server: `GET /` - Returns server status
- Web: `GET /` - Returns nginx status
```bash
# Build server image
docker build -f server/Dockerfile -t trace-dock-server .

# Build web image
docker build -f web/Dockerfile -t trace-dock-web \
  --build-arg VITE_API_BASE_URL=/api .
```

MIT License - see LICENSE for details.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Made with ❤️ by the Trace Dock team
