The zero-configuration observability toolkit that makes monitoring actually enjoyable
Stop flying blind. Start shipping with confidence.
Get Started โข Examples โข Dashboard โข Docs
You've built something amazing. But when it breaks at 2 AM, you're debugging blind:
$ tail -f production.log
# ๐ญ Nothing useful here...
$ ps aux | grep node
# ๐คทโโ๏ธ Is it even running?
$ top
# ๐ 85% CPU but why?!Enterprise monitoring is overkill. Datadog costs more than your server. Setting up Prometheus + Grafana takes a weekend. You just want to see what's happening in your app.
Add one line to your code. Get a gorgeous dashboard showing exactly what your app is doing:
const obs = require('lite-observability');
// โจ This is literally it
obs.init({ dashboard: true });
// Your existing app works unchanged
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.json({ message: 'Hello World!' });
});
app.listen(3000, () => {
console.log('๐ Server: http://localhost:3000');
console.log('๐ Dashboard: http://localhost:3001');
});from lite_observability import init_observability
from fastapi import FastAPI
app = FastAPI()
@app.on_event("startup")
async def startup():
# โจ This is literally it
await init_observability(dashboard=True)
@app.get("/")
async def root():
return {"message": "Hello World"}
# Start: uvicorn main:app --port 8000
# ๐ Server: http://localhost:8000
# ๐ Dashboard: http://localhost:8001That's it. Open the dashboard and watch your app come to life with real-time metrics, beautiful charts, and actionable insights.
๐ Request Volume ๐ Response Times ๐จ Error Rates
247 req/min P95: 23ms 0.3% errors
๐ป Resource Usage ๐ Event Loop ๐ฆ Memory
CPU: 12% Lag: 2ms RSS: 145MB
See exactly where time is spent in each request:
GET /api/users (127ms)
โโโ ๐ Auth middleware (3ms)
โโโ ๐๏ธ Database query (98ms) โ bottleneck found!
โโโ ๐ External API call (24ms)
โโโ ๐ค Response serialization (2ms)
Catch issues before your users do:
โ UnhandledPromiseRejectionWarning
TypeError: Cannot read property 'id' of undefined
at UserService.getUser (user.service.js:42)
๐ 2 minutes ago โข ๐ Trace ID: 7f3a2b1c- Live Charts: CPU, memory, request rates updating in real-time
- Request Inspector: Click any request to see its complete trace
- Error Browser: Filter, search, and debug exceptions
- Performance Insights: Automatic detection of slow endpoints
- Health Checks: One-click diagnostics and system analysis
Left: Real-time metrics with beautiful charts and key performance indicators
Right: Distributed tracing showing request flow and timing breakdown
Try it yourself in 30 seconds:
# Node.js
git clone https://github.com/observability-kit/lite-observability
cd lite-observability/examples/nodejs
npm install && npm start
# Python
cd examples/python
pip install -r requirements.txt
python fastapi_app.pyThen visit the dashboard and make some requests. Watch the magic happen! โจ
Node.js
// Trace database operations
await obs.createSpan('fetch-user-data', async (span) => {
span.setAttributes({ userId: 123, operation: 'read' });
const user = await db.users.findById(123);
// Record custom metrics
obs.recordMetric('cache_hit_rate', 0.85);
obs.recordMetric('db_query_time', 45, { table: 'users' });
return user;
});Python
# Elegant decorators
@trace_function('process_payment')
@monitor_function('payment_duration')
async def process_payment(amount: float):
async with create_span('validate_card'):
await validate_card()
async with create_span('charge_card'):
result = await charge_card(amount)
record_metric('payment_processed', amount)
return result// Errors are automatically captured with full context
app.post('/api/orders', async (req, res) => {
try {
const order = await createOrder(req.body);
res.json(order);
} catch (error) {
// ๐ฏ Error appears in dashboard with:
// - Full stack trace
// - Request details
// - Distributed trace
// - User context
throw error;
}
});- No budget for expensive monitoring
- Need immediate insights without complexity
- Want to catch issues before users complain
- Building local development confidence
- Understanding application behavior
- Learning observability best practices
- Bridging the gap to enterprise monitoring
- Proving observability value to stakeholders
- Training developers on telemetry concepts
- Lightweight enough for production use
- Scales with configurable sampling
- Easy migration to enterprise solutions
Start with zero config, customize as you grow:
// Start simple
obs.init({ dashboard: true });
// Customize for your needs
obs.init({
serviceName: 'my-awesome-api',
environment: 'production',
dashboard: process.env.NODE_ENV === 'development',
sampleRate: 0.1, // Sample 10% in production
enablePrometheus: true, // Export to Prometheus
otlpEndpoint: 'https://...', // Send to enterprise system
persistence: true, // Save data to disk
customThresholds: {
cpuWarning: 80,
memoryWarning: 512,
latencyWarning: 1000
}
});"Finally! Observability that doesn't require a PhD in DevOps"
โ Sarah Chen, Full-Stack Developer
"Added one line, immediately found our N+1 query problem"
โ Marcus Johnson, Backend Engineer
"The dashboard is actually beautiful. I keep it open all day"
โ Alex Rivera, Site Reliability Engineer
"From zero observability to production monitoring in 5 minutes"
โ Priya Patel, Engineering Manager
cd examples/nodejs
npm start
# ๐ http://localhost:3000 ๐ http://localhost:3001Features: Auto-instrumentation, custom spans, metrics, error handling
cd examples/python
python fastapi_app.py
# ๐ http://localhost:8000 ๐ http://localhost:8001Features: Async tracing, decorators, context managers, automatic FastAPI integration
# Generate traffic to see metrics in action
curl -X POST http://localhost:3000/api/users \
-H "Content-Type: application/json" \
-d '{"name":"John","email":"[email protected]"}'Watch the dashboard light up with real-time metrics! ๐
- < 5% CPU overhead in typical applications
- < 1ms latency added per request
- 10-50MB memory depending on retention settings
- Configurable sampling for high-traffic apps
- Localhost-only dashboard by default
- No external dependencies required
- Configurable data retention and persistence
- OpenTelemetry standard ensures vendor neutrality
// Production configuration
obs.init({
environment: 'production',
dashboard: false, // Disable UI in production
sampleRate: 0.01, // Sample 1% of traces
enablePrometheus: true, // Export to monitoring system
persistence: true, // Persist important data
maxTraces: 1000, // Limit memory usage
maxErrors: 500 // Keep recent errors only
});- More Frameworks: Django, Flask, Koa, NestJS support
- Advanced Alerting: Slack, email, webhook notifications
- AI Insights: Automatic anomaly detection and suggestions
- Cloud Dashboard: Hosted service for team collaboration
- Mobile SDKs: React Native and Flutter support
- Browser Monitoring: Frontend performance tracking
We'd love your help making observability accessible to everyone!
git clone https://github.com/observability-kit/lite-observability
cd lite-observability
npm install
npm run devWays to contribute:
- ๐ Bug Reports: Found an issue? Let us know!
- ๐ก Feature Ideas: What would make this even better?
- ๐ Documentation: Help others get started faster
- ๐ง Code: Add support for new frameworks or features
- โญ Star: Show your support and help others discover this
- ๐ Getting Started Guide - Complete setup and usage
- ๐ฏ API Reference - Full API documentation
- ๐ Production Guide - Deploy with confidence
- ๐ง Troubleshooting - Common issues and solutions
- ๐ช Examples - Working code you can run today
MIT License - use it anywhere, modify it however you want, build amazing things!
Ready to stop flying blind?
โญ Star this repo โข ๐ Share with your team โข ๐ Start monitoring in 30 seconds


