Real-time hardware monitoring for Kubernetes clusters with dynamic configuration and beautiful web interface
- 🔥 Real-time Monitoring: Live hardware sensor data from all cluster nodes
- 🎨 Modern UI: Terminal-style web interface with a dark theme
- ⚙️ Dynamic Configuration: ConfigMap-based node management with hot-reload
- 🚀 Cloud Native: Kubernetes-first design built around a DaemonSet
- 📱 Responsive: Works on desktop and mobile devices
- 🔍 Smart Discovery: Configurable auto-discovery with manual node control
- 🌐 Multi-Node: Monitor temperature, voltage, and system info across the entire cluster
- 🛠️ Management Tools: CLI tools for easy configuration management
- ⚡ Lightweight: Optimized containers with a minimal resource footprint
This project uses a microservice architecture with dynamic, ConfigMap-driven configuration:
| Component | Purpose | Image | Deployment |
|---|---|---|---|
| Sensor DaemonSet | Hardware data collection | `ghcr.io/michaeltrip/lmsensors-daemonset-container` | Runs on every node |
| Web Dashboard | Modern web interface | `ghcr.io/michaeltrip/lmsensors-web` | Centralized deployment |
| ConfigMap | Dynamic node configuration | Built-in Kubernetes | Configuration storage |
### Sensor DaemonSet

- Purpose: Collects hardware sensor data from each node
- Technology: Ubuntu + lm_sensors + fastfetch
- Deployment: Runs on every node via DaemonSet
- Data: Temperature, voltage, fan speeds, system information
- Schedule: Updates every 60 seconds
- Output: Standardized file format (`lmsensors-{node}.txt`, `fastfetch-{node}.txt`)
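For illustration, the per-node collection loop could look like the sketch below. This is an assumption, not the actual container entrypoint; `NODE_NAME` and `OUTPUT_DIR` are hypothetical variables standing in for the node's name and the shared volume mount.

```bash
#!/bin/sh
# Hypothetical sketch of the collection loop; the real script may differ.
# NODE_NAME and OUTPUT_DIR are assumed to come from the Downward API
# and the shared PVC mount, respectively.
NODE_NAME="${NODE_NAME:-$(hostname)}"
OUTPUT_DIR="${OUTPUT_DIR:-/data}"

while true; do
  # Write the standardized per-node files the dashboard reads
  sensors   > "${OUTPUT_DIR}/lmsensors-${NODE_NAME}.txt"
  fastfetch > "${OUTPUT_DIR}/fastfetch-${NODE_NAME}.txt"
  sleep 60
done
```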
### Web Dashboard

- Purpose: Modern web interface with dynamic configuration
- Technology: nginx + responsive HTML/CSS/JS + ConfigMap integration
- Features: Real-time updates, configurable nodes, mobile-friendly layout
- Configuration: Loads node definitions from the ConfigMap-backed endpoint (see the check below)
- Access: Single deployment exposed through a Service
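You can inspect the same endpoint the dashboard consumes; a quick check, assuming a local port-forward and `jq` installed:

```bash
# Forward the dashboard service locally, then print the node list
# the web UI loads from the ConfigMap-backed endpoint.
kubectl port-forward service/sensordash-service 8080:80 -n sensordash &
sleep 2  # give the port-forward a moment to establish
curl -s http://localhost:8080/config/nodes.json | jq '.nodes[].name'
```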
### ConfigMap

- Purpose: Centralized node management without code changes
- Technology: Kubernetes ConfigMap + custom nginx endpoints
- Features: Hot-reload, CLI management, backup/restore
- Control: Define which nodes to display, with per-node metadata
Get up and running in under 2 minutes with dynamic configuration:
```bash
# Clone the repository
git clone https://github.com/MichaelTrip/lmsensors-container.git
cd lmsensors-container

# Deploy with dynamic configuration
./deploy-dynamic.sh

# Access the dashboard
kubectl port-forward service/sensordash-service 8080:80 -n sensordash
```

Then open http://localhost:8080 in your browser! 🎉
Define your nodes in the ConfigMap with rich metadata:
```json
{
"nodes": [
{
"name": "virt1",
"displayName": "Virtual Node 1",
"description": "Primary virtual machine",
"status": "online"
},
{
"name": "worker-01",
"displayName": "Production Worker 01",
"description": "Main production workload node",
"status": "online"
}
],
"settings": {
"refreshInterval": 30000,
"fallbackNodes": ["node-001", "node-002"],
"autoDiscovery": true,
"displayMode": "terminal"
}
}
```

```bash
# View current configuration
./config-manager.sh view
# Add a new node
./config-manager.sh add worker-02 "Production Worker 02"
# Remove a node
./config-manager.sh remove old-node
# Edit configuration interactively
./config-manager.sh edit
# Backup configuration
./config-manager.sh backup
# Show example configuration
./config-manager.sh example
```

```bash
# View current node configuration
kubectl get configmap sensordash-config -n sensordash -o jsonpath='{.data.nodes\.json}' | jq .
# Edit configuration directly
kubectl edit configmap sensordash-config -n sensordash
# Apply new configuration
kubectl apply -f deployment-files/configmap.yaml -n sensordash
```

| Setting | Description | Default | Example |
|---|---|---|---|
| `refreshInterval` | Update frequency (ms) | `30000` | `60000` |
| `fallbackNodes` | Placeholder nodes shown when no data is available | `[]` | `["node-001", "node-002"]` |
| `autoDiscovery` | Automatically add discovered nodes | `true` | `false` |
| `displayMode` | UI theme | `"terminal"` | `"terminal"` |
| Property | Required | Description | Example |
|---|---|---|---|
| `name` | ✅ | Node identifier (matches sensor files) | `"worker-01"` |
| `displayName` | ✅ | Human-readable name | `"Production Worker 01"` |
| `description` | ❌ | Node description (shown in tooltips) | `"Main production node"` |
| `status` | ❌ | Default status indicator | `"online"` |
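A quick sanity check that every entry carries the required fields — a hypothetical `jq` one-liner; empty output means the configuration is well-formed:

```bash
# Print any node entries missing name or displayName.
kubectl get configmap sensordash-config -n sensordash \
  -o jsonpath='{.data.nodes\.json}' \
  | jq '.nodes[] | select(.name == null or .displayName == null)'
```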
| Container | Registry | Latest Version |
|---|---|---|
| DaemonSet | `ghcr.io/michaeltrip/lmsensors-daemonset-container` | `latest` |
| Web UI | `ghcr.io/michaeltrip/lmsensors-web` | `latest` |
- Kubernetes cluster (1.19+)
- Persistent volume support (`ReadWriteMany`)
- Privileged container support (for hardware access)
- `jq` command-line tool (for `config-manager.sh`)
- 🌡️ CPU Temperature - Real-time thermal monitoring
- ⚡ Voltage Rails - Power supply monitoring
- 🌀 Fan Speeds - Cooling system status
- 💾 System Info - Hardware specifications
- 📊 Node Status - Health indicators with custom metadata
- 🔄 Live Updates - Configurable auto-refresh intervals
- 🏷️ Custom Labels - User-defined display names and descriptions
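For reference, the temperature data comes straight from `lm_sensors`, so a `lmsensors-{node}.txt` file contains standard `sensors` output along these lines (values illustrative):

```
coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +42.0°C  (high = +80.0°C, crit = +100.0°C)
Core 0:        +40.0°C  (high = +80.0°C, crit = +100.0°C)
Core 1:        +41.0°C  (high = +80.0°C, crit = +100.0°C)
```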
```bash
# Deploy with ConfigMap-based dynamic configuration
./deploy-dynamic.sh

# Manage nodes with the CLI tool
./config-manager.sh view
./config-manager.sh add worker-03 "Worker Node 03"
```

```
├── sensor-container/          # DaemonSet container source
├── web-container/             # Web interface container source
├── deployment-files/          # Kubernetes manifests
│   ├── configmap.yaml         # Dynamic node configuration
│   ├── webserver-modern.yaml  # Web deployment with ConfigMap
│   ├── daemonset.yaml         # Sensor collection DaemonSet
│   └── pvc.yaml               # Persistent volume claim
├── deploy-dynamic.sh          # Quick deployment with ConfigMap
├── config-manager.sh          # Configuration management CLI
├── .github/workflows/         # CI/CD pipelines
└── cleanup.sh                 # Cleanup script
```
<details>
<summary>Click to expand manual deployment steps</summary>
```bash
# 1. Create namespace
kubectl create namespace sensordash

# 2. Deploy dynamic configuration
kubectl apply -f deployment-files/configmap.yaml -n sensordash

# 3. Deploy persistent volume claim
kubectl apply -f deployment-files/pvc.yaml -n sensordash

# 4. Deploy sensor collection DaemonSet
kubectl apply -f deployment-files/daemonset.yaml -n sensordash

# 5. Deploy web dashboard with ConfigMap integration
kubectl apply -f deployment-files/webserver-modern.yaml -n sensordash

# 6. Access the dashboard
kubectl port-forward service/sensordash-service 8080:80 -n sensordash
```

A basic node configuration:

```json
{
"nodes": [
{
"name": "control-plane",
"displayName": "Control Plane",
"description": "Kubernetes master node",
"status": "online"
},
{
"name": "worker-01",
"displayName": "Worker Node 01",
"description": "Production workload node",
"status": "online"
}
]
}
```

An extended example with custom display names and settings:

```json
{
"nodes": [
{
"name": "gpu-node-01",
"displayName": "๐ฎ GPU Worker 01",
"description": "NVIDIA RTX 4090 - ML Training Node",
"status": "online"
},
{
"name": "storage-node",
"displayName": "๐พ Storage Node",
"description": "High-capacity storage with NVMe arrays",
"status": "warning"
}
],
"settings": {
"refreshInterval": 15000,
"fallbackNodes": ["placeholder-01", "placeholder-02"],
"autoDiscovery": false,
"displayMode": "terminal"
}
}
```

Then apply the web deployment and port-forward to verify:

```bash
kubectl apply -f deployment-files/webserver-modern.yaml
kubectl port-forward service/sensordash-service 8080:80
```
</details>
## 🧹 Cleanup
Remove all components safely:
```bash
./cleanup.sh
# Or manual cleanup
kubectl delete namespace sensordash --cascade=foreground
```
The cleanup script will:
- Remove all deployments and services
- Delete the sensordash namespace
- Optionally preserve your sensor data
- Confirm before destructive operations
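For orientation, the confirmation flow likely boils down to something like the following sketch; this is an assumption, not the real `cleanup.sh`:

```bash
#!/bin/sh
# Hypothetical sketch of cleanup.sh's core flow; the actual script
# may differ (e.g. the optional sensor-data preservation step).
printf 'Delete namespace sensordash and all components? [y/N] '
read -r answer
[ "$answer" = "y" ] || exit 0

kubectl delete namespace sensordash --cascade=foreground
```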
Changes to the ConfigMap are automatically picked up by the web interface:
```bash
# Method 1: Use the management tool
./config-manager.sh add new-node "New Node Display Name"
# Method 2: Edit directly
kubectl edit configmap sensordash-config -n sensordash
# Method 3: Apply updated file
kubectl apply -f deployment-files/configmap.yaml -n sensordash
```

```bash
# Backup current configuration
./config-manager.sh backup
# Creates: sensordash-config-backup-YYYYMMDD-HHMMSS.json

# Restore from backup
./config-manager.sh restore sensordash-config-backup-20250910-143022.json
```

This project uses semantic versioning with conventional commits:
- 🎯 Automatic versioning based on commit messages
- 🏗️ Parallel container builds for optimal speed
- 📦 Multi-platform support (linux/amd64)
- 🔄 Automatic deployment-file updates
- 🏷️ Smart tagging with semantic versions
```
feat: add new sensor support   # → Minor version bump
fix: resolve memory leak       # → Patch version bump
feat!: breaking API change     # → Major version bump
```
```bash
# Build containers locally
docker build -t lmsensors-daemonset:dev sensor-container/
docker build -t lmsensors-web:dev web-container/

# Test with docker-compose (if available)
docker-compose up

# Test configuration changes
kubectl apply -f deployment-files/configmap.yaml -n sensordash
./config-manager.sh view
```

- 🍴 Fork the repository
- 🌿 Create a feature branch
- 📝 Use conventional commits
- 🧪 Test your changes
- ⚙️ Test ConfigMap functionality
- 📤 Submit a pull request
```bash
# Test the configuration manager
./config-manager.sh example
./config-manager.sh add test-node "Test Node"
./config-manager.sh view
./config-manager.sh remove test-node

# Test web interface updates
kubectl port-forward service/sensordash-service 8080:80 -n sensordash
# Visit http://localhost:8080 and verify the changes
```

The ConfigMap includes nginx configuration for serving the node configuration:
```nginx
# Custom endpoint defined in the ConfigMap
location /config/nodes.json {
    alias /etc/sensordash/nodes.json;
    add_header Content-Type application/json;
    add_header Cache-Control "no-cache, no-store, must-revalidate";
}
```
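The `Cache-Control` header is what keeps hot-reloaded changes visible immediately; you can verify it with a quick header check (assumes a local port-forward on 8080):

```bash
# Confirm the endpoint is served without caching
curl -sI http://localhost:8080/config/nodes.json | grep -i cache-control
```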
Export configuration for external tools:

```bash
# Get the configuration as JSON
kubectl get configmap sensordash-config -n sensordash -o jsonpath='{.data.nodes\.json}' | jq .

# Export name/display-name pairs (e.g. for Prometheus labels)
kubectl get configmap sensordash-config -n sensordash -o jsonpath='{.data.nodes\.json}' | jq -r '.nodes[] | "\(.name)=\(.displayName)"'
```

Use different ConfigMaps per environment:
```bash
# Development
kubectl apply -f configmaps/dev-config.yaml -n sensordash-dev

# Production
kubectl apply -f configmaps/prod-config.yaml -n sensordash-prod
```

- 📖 Documentation: Check our Wiki
- 🐛 Issues: Report bugs
- 💡 Features: Request features
- 💬 Discussions: Community discussions
This project is licensed under the MIT License - see the LICENSE file for details.
If this project helped you, please consider:
- ⭐ Starring the repository
- 🍴 Forking for your own use
- 📢 Sharing with others
- 🚀 Contributing improvements
Running containers with privileged access can pose security risks. Be cautious where and how you use such configurations.