A monitoring system for file transfers from the Summit to the USDF.
- Docker and Docker Compose
- Python 3.11 or higher
- uv (Python package manager)
uv is a fast Python package installer and resolver written in Rust. Choose one of the following installation methods:

```sh
# Standalone installer
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or via pip
pip install uv

# Or via Homebrew
brew install uv
```

After installation, verify uv is installed:

```sh
uv --version
```

- Clone the repository and navigate to the project directory:

  ```sh
  cd data_transfer_monitoring
  ```

- Install Python dependencies using uv:

  ```sh
  uv sync
  ```

This will create a virtual environment and install all dependencies specified in `pyproject.toml`.
The easiest way to run the entire application stack is using Docker Compose, which will start all required services including:
- DTM application
- Kafka
- PostgreSQL
- LocalStack (AWS S3 emulation)
- Prometheus
- Grafana
- Loki (logging)
- Promtail (log aggregation)
```sh
docker compose up
```

To run in detached mode (background):

```sh
docker compose up -d
```

To stop all services:

```sh
docker compose down
```

To stop and remove all data volumes:

```sh
docker compose down -v
```

To generate test data and send messages to the system, run the local producers script:

```sh
uv run python local_producers.py
```

This script produces sample messages to the Kafka topics that the monitoring system consumes.
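As a rough illustration of what the producers emit, here is a hedged sketch of building and serializing a file-transfer notification. The field names (`filename`, `size_bytes`, `timestamp`) are assumptions for illustration only, not the actual schema used by `local_producers.py`:

```python
import json
from datetime import datetime, timezone

def make_notification(filename: str, size_bytes: int) -> str:
    """Serialize a hypothetical transfer-notification payload as JSON."""
    payload = {
        "filename": filename,
        "size_bytes": size_bytes,
        # Timestamps in UTC keep events from different hosts comparable.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

message = make_notification("OR_12345.fits", 2048)
print(message)
```

A real producer would publish a string like this to one of the Kafka topics the monitoring system consumes.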
Once running, the following services will be available:
- DTM Application: http://localhost:8000
- Grafana: http://localhost:3000 (admin/admin)
- Prometheus: http://localhost:9090
- Kafka UI: http://localhost:8080
- LocalStack (S3): http://localhost:4566
To run the test suite:

```sh
uv run pytest tests/
```

If you want to run the DTM application locally for development:

1. Ensure all dependent services are running via Docker Compose:

   ```sh
   docker compose up kafka postgres localstack
   ```

2. Set required environment variables (see `docker-compose.yml` for the full list).

3. Run the application:

   ```sh
   uv run python main.py
   ```

The project uses Ruff for linting and formatting:
```sh
# Check for linting issues
uv run ruff check .

# Format code
uv run ruff format .
```

Project structure:

```
data_transfer_monitoring/
├── docker-compose.yml      # Docker Compose configuration
├── Dockerfile              # DTM application container
├── main.py                 # Main application entry point
├── local_producers.py      # Test data generator
├── listeners/              # Kafka consumer implementations
├── models/                 # Data models and schemas
├── shared/                 # Shared utilities and configurations
├── tests/                  # Test suite
├── k8s/                    # Kubernetes configurations
├── prometheus.yml          # Prometheus configuration
├── loki-config.yaml        # Loki configuration
└── promtail-config.yaml    # Promtail configuration
```
Key environment variables (configured in docker-compose.yml):
- `POSTGRES_CONNECTION_STRING`: PostgreSQL connection details
- `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`: AWS credentials (test values for LocalStack)
- `LOCAL_S3_ENDPOINT`: LocalStack S3 endpoint
- `FILE_NOTIFICATION_KAFKA_BOOTSTRAP_SERVERS`: Kafka broker address
- `END_READOUT_KAFKA_BOOTSTRAP_SERVERS`: Kafka broker address for end readout events
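When running the application outside Docker, these variables must be present in the environment. A minimal sketch of reading them in Python follows; the fallback values here are illustrative placeholders, not the project's actual defaults (those live in `docker-compose.yml`):

```python
import os

# Illustrative defaults only; consult docker-compose.yml for real values.
config = {
    "postgres": os.environ.get(
        "POSTGRES_CONNECTION_STRING",
        "postgresql://dtm:dtm@localhost:5432/dtm",
    ),
    "s3_endpoint": os.environ.get("LOCAL_S3_ENDPOINT", "http://localhost:4566"),
    "file_notification_brokers": os.environ.get(
        "FILE_NOTIFICATION_KAFKA_BOOTSTRAP_SERVERS", "localhost:9092"
    ),
    "end_readout_brokers": os.environ.get(
        "END_READOUT_KAFKA_BOOTSTRAP_SERVERS", "localhost:9092"
    ),
}
print(config["s3_endpoint"])
```

Exporting the variables in your shell before `uv run python main.py` overrides the fallbacks.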
- Check that the Docker daemon is running:

  ```sh
  docker ps
  ```

- Check for port conflicts: ensure ports 8000, 3000, 9090, 8080, and 4566 are available
- View logs:

  ```sh
  docker compose logs [service-name]
  ```

- Clear the uv cache:

  ```sh
  uv cache clean
  ```

- Recreate the virtual environment:

  ```sh
  rm -rf .venv && uv sync
  ```

- Ensure Kafka is fully started before running producers:

  ```sh
  docker compose logs kafka
  ```

- Check topic creation: topics are auto-created on first use
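For the port-conflict item above, a short Python probe can tell you which of the stack's ports are already taken. This helper is a convenience sketch, not part of the project:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 on a successful connection, i.e. port is taken.
        return sock.connect_ex((host, port)) == 0

# Ports the Docker Compose stack expects to bind.
for port in (8000, 3000, 9090, 8080, 4566):
    status = "IN USE" if port_in_use(port) else "free"
    print(f"port {port}: {status}")
```

Any port reported as in use before `docker compose up` points at a conflicting local service that should be stopped or reconfigured.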