# SEDIMARK REST API for interacting with MLflow in the toolbox

## uv

- uv >= 0.5.0
- python >= 3.11.0

### First run

```bash
uv run src/mlflow_api/main.py
```

### All other runs

```bash
uv run mlflow_api
```

## Docker

The container expects the following environment variables:
- `MLFLOW_TRACKING_USERNAME` - The username for the local MLflow instance
- `MLFLOW_TRACKING_PASSWORD` - The password for the local MLflow instance
- `AWS_ACCESS_KEY_ID` - The access key for the local MinIO/remote S3 instance
- `AWS_SECRET_ACCESS_KEY` - The secret key for the local MinIO/remote S3 instance
- `MLFLOW_S3_ENDPOINT_URL` - The URL of the local MinIO/remote S3 instance
- `MLFLOW_TRACKING_INSECURE_TLS` - Whether to allow insecure (unverified) TLS connections to the local MLflow instance (true/false)
- `MLFLOW_TRACKING_URI` - The URL of the local MLflow instance
```bash
docker build -t mlflow_api .

docker run -itd -p 8000:8000 \
  -e MLFLOW_TRACKING_USERNAME=admin \
  -e MLFLOW_TRACKING_PASSWORD=password \
  -e AWS_ACCESS_KEY_ID=<key> \
  -e AWS_SECRET_ACCESS_KEY=<secret> \
  -e MLFLOW_S3_ENDPOINT_URL=http://localhost:9001 \
  -e MLFLOW_TRACKING_INSECURE_TLS=true \
  -e MLFLOW_TRACKING_URI=http://localhost:5000 \
  mlflow_api
```
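Once the container is up, you can check that the API responds on the mapped port; the root endpoint serves as the health check described in the endpoint list further down:

```bash
# Quick sanity check - the root endpoint acts as a server health check
curl http://localhost:8000/
```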
A powerful REST API for seamless MLflow integration
The MLflow API is a comprehensive REST API service that provides seamless integration with MLflow for machine learning model management. Built with FastAPI, it offers endpoints for model registration, versioning, deployment, and various ML operations including support for multiple frameworks like PyTorch, TensorFlow, Keras, and scikit-learn.
- **Model Discovery**: Browse and search registered models
- **Metrics & Parameters**: Access model parameters, metrics, and metadata
- **Version Management**: Handle model versions and lifecycle stages
- **Dataset Integration**: Access training datasets and artifacts
- **Image Artifacts**: Retrieve and display model-related images
- **Model Import/Export**: Seamlessly transfer models between environments
- **Predictions**: Make predictions using registered models
- **Multi-Framework Support**: Works with PyTorch, TensorFlow, Keras, scikit-learn
- **Framework Tools**: Get optimizers and loss functions for different ML frameworks
- **Model Packaging**: Package models for deployment
| Framework | Import | Export | Predictions | Packaging |
|---|---|---|---|---|
| PyTorch | ✅ | ✅ | ✅ | ✅ |
| TensorFlow | ✅ | ✅ | ✅ | ✅ |
| Keras | ✅ | ✅ | ✅ | ✅ |
| scikit-learn | ✅ | ✅ | ✅ | ✅ |
| PyFunc | ✅ | ✅ | ✅ | ✅ |
- Python >= 3.12.0
- uv >= 0.5.0 (recommended) or pip
- Access to an MLflow tracking server
- S3-compatible storage (MinIO or AWS S3)
```bash
# Clone the repository
git clone <your-repo-url>
cd mlflow_api

# First run - install dependencies and run
uv run src/mlflow_api/main.py

# Subsequent runs
uv run mlflow_api
```

Create a `.env` file with the following variables:

```env
MLFLOW_TRACKING_USERNAME=admin
MLFLOW_TRACKING_PASSWORD=password
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
MLFLOW_S3_ENDPOINT_URL=http://localhost:9001
MLFLOW_TRACKING_INSECURE_TLS=true
MLFLOW_TRACKING_URI=http://localhost:5000
```
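If you prefer not to keep a `.env` file around, the same variables can presumably be exported in the shell before starting the service, since the Docker instructions below pass them the same way via `-e`:

```bash
# Assumption: the service reads its configuration from the process environment,
# as the Docker example below suggests. Values are placeholders.
export MLFLOW_TRACKING_USERNAME=admin
export MLFLOW_TRACKING_PASSWORD=password
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export MLFLOW_S3_ENDPOINT_URL=http://localhost:9001
export MLFLOW_TRACKING_INSECURE_TLS=true
export MLFLOW_TRACKING_URI=http://localhost:5000

uv run mlflow_api
```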
```bash
# Build the Docker image
docker build -t mlflow-api .

# Run the container
docker run -itd -p 8000:8000 \
  -e MLFLOW_TRACKING_USERNAME=admin \
  -e MLFLOW_TRACKING_PASSWORD=password \
  -e AWS_ACCESS_KEY_ID=your_access_key \
  -e AWS_SECRET_ACCESS_KEY=your_secret_key \
  -e MLFLOW_S3_ENDPOINT_URL=http://localhost:9001 \
  -e MLFLOW_TRACKING_INSECURE_TLS=true \
  -e MLFLOW_TRACKING_URI=http://localhost:5000 \
  mlflow-api
```

Once the service is running, access the interactive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
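FastAPI applications also expose the raw OpenAPI schema by default, which is useful for generating clients; this README does not mention it explicitly, so treat the route as an assumption based on standard FastAPI behaviour:

```bash
# Fetch the OpenAPI schema (FastAPI's default route; assumed, not documented here)
curl http://localhost:8000/openapi.json
```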
- `GET /` - Server health check
- `GET /models` - List all registered models
- `GET /model/parameters` - Get model parameters
- `GET /model/metrics` - Get model metrics
- `GET /model/dataset` - Download training dataset
- `GET /model/images` - Get model artifacts (images)
- `GET /model/versions` - Get model versions
- `GET /model/package` - Package model for deployment
- `GET /model/export` - Export model as ZIP
- `POST /model/import` - Import model from ZIP
- `POST /model/predict` - Make predictions
- `POST /model/register` - Register a new model
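A typical read-only workflow chains a few of the model endpoints above. The `name`/`version` query parameters mirror the usage examples further down in this README; the exact parameter names and response shapes should be verified in the Swagger UI:

```bash
# List registered models, then inspect one of them
# (query parameter names follow the usage examples later in this README)
curl "http://localhost:8000/models"
curl "http://localhost:8000/model/versions?name=my_model"
curl "http://localhost:8000/model/metrics?name=my_model&version=1"
```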
- `GET /optimizers/{framework}` - Get available optimizers
- `GET /losses/{framework}` - Get available loss functions
Supported frameworks: torch, keras
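For example, the framework tooling endpoints can be queried directly for each supported framework; the response format is not documented here, so inspect the output or the Swagger UI:

```bash
# Framework tooling lookups (supported framework values: torch, keras)
curl http://localhost:8000/optimizers/torch
curl http://localhost:8000/losses/keras
```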
```
mlflow_api/
├── src/mlflow_api/
│   ├── __init__.py        # Package initialization
│   ├── main.py            # FastAPI application and endpoints
│   ├── mlflow_client.py   # MLflow client wrapper
│   ├── models.py          # Model handlers for different frameworks
│   └── schemas.py         # Pydantic schemas for API responses
├── .github/workflows/     # CI/CD workflows
├── Dockerfile             # Docker configuration
├── pyproject.toml         # Project configuration and dependencies
└── README.md              # This file
```
curl -X POST "http://localhost:8000/model/register" \
-H "Content-Type: application/json" \
-d '{"run_id": "experiment_id/run_id", "model_name": "my_model"}'curl "http://localhost:8000/model/parameters?name=my_model&version=1"curl -X POST "http://localhost:8000/model/predict?name=my_model" \
-F "[email protected]"curl "http://localhost:8000/model/export?name=my_model&version=1" \
The API uses environment variables for configuration. Ensure secure storage of credentials:
- Use strong passwords for MLflow authentication
- Secure S3/MinIO access keys
- Consider using Docker secrets or Kubernetes secrets in production
- Enable TLS in production environments
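In the same spirit, credentials do not have to appear inline in the `docker run` command; Docker's `--env-file` flag can load them from a local file that stays out of version control:

```bash
# Pass credentials from a local .env file instead of inline -e flags
docker run -itd -p 8000:8000 --env-file .env mlflow-api
```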
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License.
Made with ❤️ for the ML community