This project provides a FastAPI-based implementation of the Fields of the World Inference API, following its OpenAPI specification. It runs machine learning inference on satellite imagery using the ftw-tools package.
- Install Pixi:
  curl -fsSL https://pixi.sh/install.sh | sh    # macOS/Linux
  # or: brew install pixi
- Clone and set up the repository:
  git clone https://github.com/fieldsoftheworld/ftw-inference-api
  cd ftw-inference-api
  pixi install
For rapid deployment on AWS EC2 instances using the Ubuntu Deep Learning AMI with NVIDIA drivers:
curl -L https://raw.githubusercontent.com/fieldsoftheworld/ftw-inference-api/main/deploy.sh | bash
To deploy a specific branch:
curl -L https://raw.githubusercontent.com/fieldsoftheworld/ftw-inference-api/main/deploy.sh | bash -s -- -b your-branch-name
This script will:
- Install Pixi package manager
- Clone the repository and checkout the specified branch
- Install dependencies using Pixi production environment
- Download all pre-trained model checkpoints (~800MB total)
- Enable GPU support in configuration
- Configure a systemd service for automatic startup
- Set up log rotation
Service management:
sudo systemctl status ftw-inference-api # Check status
sudo systemctl start ftw-inference-api # Start service
sudo systemctl stop ftw-inference-api # Stop service
sudo systemctl restart ftw-inference-api # Restart service
sudo journalctl -u ftw-inference-api -f # Follow logs
sudo journalctl -u ftw-inference-api --since today # Today's logs
- Docker (required for DynamoDB Local)
- Set up DynamoDB Local (required for development):
  # Copy the example environment file and configure local DynamoDB
  cp .env.example .env
  # Edit .env to uncomment the DynamoDB Local settings:
  # DYNAMODB__DYNAMODB_ENDPOINT="http://localhost:8001"
  # AWS_ACCESS_KEY_ID="fake_key_id"
  # AWS_SECRET_ACCESS_KEY="fake_secret_key"
- Start services:
  pixi run dynamodb-local    # Start DynamoDB Local (port 8001)
  pixi run start             # Start development server (port 8000)
pixi run start    # Development server with debug mode and auto-reload
Or run directly with options:
pixi run python server/run.py --host 127.0.0.1 --port 8080 --debug
Command-line options:
--host HOST       : Host address (default: 0.0.0.0)
--port PORT       : Port number (default: 8000)
--config CONFIG   : Custom config file path
--debug           : Enable debug mode and auto-reload
The server loads configuration from server/config/base.toml by default. Settings can be overridden using environment variables with a double-underscore delimiter (e.g., SECURITY__SECRET_KEY).
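To illustrate how the double-underscore delimiter maps an environment variable onto a nested setting, here is a minimal sketch assuming the settings are modelled with pydantic-settings; the class and field names below are illustrative assumptions, not the project's actual config classes:

# settings_sketch.py - illustrative only; class/field names are assumptions
from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict

class SecuritySettings(BaseModel):
    secret_key: str = "change-me"    # placeholder default

class Settings(BaseSettings):
    # "__" splits the variable name: SECURITY__SECRET_KEY -> settings.security.secret_key
    model_config = SettingsConfigDict(env_nested_delimiter="__")
    security: SecuritySettings = SecuritySettings()

if __name__ == "__main__":
    # Run as: SECURITY__SECRET_KEY="s3cret" python settings_sketch.py
    print(Settings().security.secret_key)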
You can specify a custom configuration file using the --config command-line option:
python run.py --config /path/to/custom_config.toml
The API provides the following versioned endpoints under /v1/ (a usage sketch follows the list):
- GET / : Root endpoint that returns basic API information
- PUT /v1/example : Compute field boundaries for a small area quickly and return as GeoJSON
- POST /v1/projects : Create a new project
- GET /v1/projects : List all projects
- GET /v1/projects/{project_id} : Get details of a specific project
- DELETE /v1/projects/{project_id} : Delete a specific project
- PUT /v1/projects/{project_id}/images/{window} : Upload an image for a project (window can be 'a' or 'b')
- PUT /v1/projects/{project_id}/inference : Run inference on project images
- PUT /v1/projects/{project_id}/polygons : Run polygonization on inference results
- GET /v1/projects/{project_id}/inference : Get inference results for a project
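As a rough illustration of how these endpoints fit together, the following sketch walks a project through creation, image upload, inference, polygonization, and result retrieval with the requests library. The request payloads and the response field names (e.g. the project "id") are assumptions, not taken from the OpenAPI specification, so consult the spec for the exact schemas:

# workflow_sketch.py - illustrative client flow; payload and response shapes are assumptions
import requests

BASE = "http://localhost:8000"
HEADERS = {"Authorization": "Bearer <your_token_here>"}    # see the authentication notes below

# Create a project (body fields are assumed; check the OpenAPI spec)
project = requests.post(f"{BASE}/v1/projects", json={"title": "demo"}, headers=HEADERS).json()
project_id = project["id"]    # assumed response field

# Upload the two image windows 'a' and 'b'
for window, path in [("a", "window_a.tif"), ("b", "window_b.tif")]:
    with open(path, "rb") as f:
        requests.put(f"{BASE}/v1/projects/{project_id}/images/{window}",
                     data=f, headers=HEADERS).raise_for_status()

# Trigger inference, then polygonization (parameter payloads omitted here)
requests.put(f"{BASE}/v1/projects/{project_id}/inference", json={}, headers=HEADERS).raise_for_status()
requests.put(f"{BASE}/v1/projects/{project_id}/polygons", json={}, headers=HEADERS).raise_for_status()

# Fetch the inference results for the project
results = requests.get(f"{BASE}/v1/projects/{project_id}/inference", headers=HEADERS)
print(results.status_code, results.headers.get("content-type"))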
The API uses Bearer token authentication. Include the Authorization header with a valid JWT token:
Authorization: Bearer <your_token_here>
For development and testing, you can disable authentication by setting auth_disabled to true in server/config/base.toml.
You still need to send a Bearer token to the API, but you can create one yourself, for example via jwt.io. The important part is that the secret key used to sign the token matches the secret key in the config file, and that the sub claim is set to guest.
For the default config, the following token can be used:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJndWVzdCIsIm5hbWUiOiJHdWVzdCIsImlhdCI6MTc0ODIxNzYwMCwiZXhwaXJlcyI6OTk5OTk5OTk5OX0.lJIkuuSdE7ihufZwWtLx10D_93ygWUcUrtKhvlh6M8k
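If you prefer to mint a token programmatically instead of using jwt.io, here is a minimal sketch using PyJWT (pip install pyjwt). It mirrors the claims of the default token above; the secret key is a placeholder and must match the one in server/config/base.toml:

# make_token.py - illustrative; the secret key below is a placeholder
import time
import jwt    # PyJWT

SECRET_KEY = "<secret key from server/config/base.toml>"

token = jwt.encode(
    {"sub": "guest", "name": "Guest", "iat": int(time.time()), "expires": 9999999999},
    SECRET_KEY,
    algorithm="HS256",
)
print(token)    # send as: Authorization: Bearer <token>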
The application follows clean architecture principles with clear separation of concerns:
server/
├── app/ # Main application package
│ ├── api/v1/ # API endpoints and dependencies
│ ├── services/ # Business logic layer
│ ├── ml/ # ML pipeline and validation
│ ├── core/ # Infrastructure (auth, config, storage)
│ ├── schemas/ # Pydantic request/response models
│ ├── db/ # Database models and connection
│ └── main.py # FastAPI application setup
├── config/ # Configuration files
├── data/ # ML models, results, temp files
├── tests/ # Test suite
└── run.py # Development server runner
Uses Ruff for linting/formatting and pre-commit hooks for quality checks.
pixi run lint # Run all pre-commit hooks
pixi run format # Format code
pixi run check # Check without fixing
Set up pre-commit:
pixi run pre-commit install
pixi run test # All tests with coverage
See the LICENSE file for details.