Lightweight container orchestration framework for Python environments.
Define environments once, deploy anywhere with Docker containers and secure HTTP communication.
- Simple Environment Definition: Only requires an `env.py` file
- Container Isolation: Isolated Docker containers with automatic cleanup
- Secure Communication: Internal network (no exposed ports) + SSH tunnels for remote access
- Multi-Instance Support: Deploy multiple replicas with load balancing
- Dynamic Method Dispatch: Automatic method exposure via HTTP API
- Zero Burden: Environment developers only write business logic
```python
import asyncio

import affinetes as af_env

async def main():
    # Load environment from Docker image
    env = af_env.load_env(
        image="bignickeye/agentgym:sciworld-v2",
        env_vars={"CHUTES_API_KEY": "your-api-key"}
    )

    # Execute methods
    result = await env.evaluate(
        model="deepseek-ai/DeepSeek-V3",
        base_url="https://llm.chutes.ai/v1",
        task_id=10
    )
    print(f"Score: {result['score']}")

    # Cleanup
    await env.cleanup()

asyncio.run(main())
```

Or connect to a user-deployed environment service (URL mode):

```python
import asyncio

import affinetes as af_env

async def main():
    # Connect to user-deployed environment service
    env = af_env.load_env(
        mode="url",
        base_url="http://your-service.com:8080"
    )

    # Execute methods
    result = await env.evaluate(
        model="deepseek-ai/DeepSeek-V3",
        base_url="https://llm.chutes.ai/v1",
        task_id=10
    )
    print(f"Score: {result['score']}")

    # Cleanup
    await env.cleanup()

asyncio.run(main())
```

Or use the environment as an async context manager for automatic cleanup:

```python
async with af_env.load_env(
    image="bignickeye/agentgym:sciworld-v2",
    env_vars={"CHUTES_API_KEY": "your-api-key"}
) as env:
    result = await env.evaluate(
        model="deepseek-ai/DeepSeek-V3",
        base_url="https://llm.chutes.ai/v1",
        task_id=10
    )
    # Auto cleanup
```

Create env.py with simple calculator functions:
```python
import os

class Actor:
    def __init__(self):
        self.precision = int(os.getenv("PRECISION", "2"))

    async def add(self, a: float, b: float) -> dict:
        """Add two numbers"""
        result = a + b
        return {
            "operation": "add",
            "a": a,
            "b": b,
            "result": round(result, self.precision)
        }

    async def multiply(self, a: float, b: float) -> dict:
        """Multiply two numbers"""
        result = a * b
        return {
            "operation": "multiply",
            "a": a,
            "b": b,
            "result": round(result, self.precision)
        }
```

Build and run:
```python
# Build image
af_env.build_image_from_env(
    env_path="environments/calculator",
    image_tag="calculator:latest"
)

# Load and execute
env = af_env.load_env(
    image="calculator:latest",
    env_vars={"PRECISION": "3"}
)

# Call methods
result = await env.add(a=10.5, b=20.3)
print(result)  # {"operation": "add", "a": 10.5, "b": 20.3, "result": 30.8}

result = await env.multiply(a=3.5, b=4.2)
print(result)  # {"operation": "multiply", "a": 3.5, "b": 4.2, "result": 14.7}
```

Install from source:

```bash
pip install -e .
```

uv is a modern, fast Python package manager written in Rust.

```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Sync dependencies and install affinetes
uv sync
source .venv/bin/activate
```

Requirements:
- Python 3.8+
- Docker daemon running
- (Optional) SSH access for remote deployment
The afs CLI follows the init → build → run → call workflow.

```bash
# 1. Initialize environment directory
afs init my-env --template actor

# 2. Build Docker image
afs build my-env --tag my-env:v1

# 3. Start environment container
afs run my-env:v1 --name my-env --env API_KEY=xxx

# 4. Call environment methods
afs call my-env evaluate --arg task_id=10
```

Create a new environment directory with template files.
Syntax:

```bash
afs init NAME [--type TYPE] [--template TEMPLATE]
```

Parameters:

- `NAME`: Environment name (creates a directory with this name)
- `--type`: Environment type
  - `function` (default): Function/class-based environment
  - `http`: HTTP-based environment
- `--template`: Template type
  - `basic` (default): Module functions
  - `actor`: Actor class
  - `fastapi`: FastAPI application
Examples:

```bash
# Create calculator environment with Actor class
afs init calculator --template actor

# Create basic function-based calculator
afs init calculator --template basic

# Create FastAPI environment
afs init web-env --type http --template fastapi
```

Generated Files:

- `env.py` - Environment implementation (contains add/multiply functions)
- `Dockerfile` - Docker build configuration
Template Content:
- `actor`: Actor class with add, multiply, and batch_calculate methods
- `basic`: Module-level functions add, multiply, and batch_calculate
- `fastapi`: FastAPI application template
Build Docker image from environment directory.
Syntax:

```bash
afs build ENV_DIR --tag TAG [OPTIONS]
```

Parameters:

- `ENV_DIR`: Environment directory path
- `--tag TAG`: Image tag (required), format `name:version`
- `--push`: Push to registry after build
- `--registry URL`: Registry URL (used with `--push`)
- `--no-cache`: Don't use the build cache
- `--quiet`: Suppress build output
- `--build-arg KEY=VALUE`: Docker build arguments (can be specified multiple times)
Examples:

```bash
# Local build
afs build environments/affine --tag affine:v2

# Build and push
afs build my-env --tag my-env:v1 --push --registry docker.io/username

# Build without cache
afs build my-env --tag my-env:v1 --no-cache

# Build with arguments
afs build my-env --tag my-env:v1 --build-arg ENV_NAME=prod
```

Directory Requirements:

- Required: `env.py` - Environment implementation
- Required: `Dockerfile` - Build configuration
- Optional: `requirements.txt` - Python dependencies
- Optional: `config.py` - Configuration file
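For reference, a minimal Dockerfile for a function-based environment might look like the sketch below. The base image and dependency handling are illustrative, not prescribed by the framework:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY env.py .
# No CMD needed: for function-based environments the framework
# injects the HTTP server itself at build time.
```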
Start environment container from image or directory.
Syntax:
```bash
afs run [IMAGE] [--dir ENV_DIR] [OPTIONS]
```

Parameters:

- `IMAGE`: Docker image name
- `--dir ENV_DIR`: Build from directory and start (auto-build)
- `--tag TAG`: Image tag when using `--dir` (default: auto-generated)
- `--name NAME`: Container name (default: derived from image)
- `--env KEY=VALUE`: Environment variables (can be specified multiple times)
- `--pull`: Pull image before starting
- `--mem-limit MEM`: Memory limit (e.g., 512m, 1g, 2g)
- `--no-cache`: Don't use cache when building (only with `--dir`)
Examples:

```bash
# Start from an image
afs run bignickeye/agentgym:webshop-v2 --env CHUTES_API_KEY=xxx

# Specify container name and memory limit
afs run affine:v2 --name affine-prod --mem-limit 2g

# Build from directory and start
afs run --dir environments/my-env --tag my-env:latest

# Pull latest image before starting
afs run my-env:latest --pull
```

After Starting:
- Shows container name
- Lists available methods
- Displays usage examples
Call methods on running environment.
Syntax:
```bash
afs call NAME METHOD [OPTIONS]
```

Parameters:

- `NAME`: Environment/container name
- `METHOD`: Method name
- `--arg KEY=VALUE`: Method arguments (can be specified multiple times)
- `--json STRING`: JSON-formatted arguments
- `--timeout SECS`: Timeout in seconds (default: 300)

Argument Parsing:

- Auto-parse JSON values: `--arg ids=[10,20]` → `{"ids": [10, 20]}`
- String values: `--arg model="gpt-4"` → `{"model": "gpt-4"}`
- `--json` overrides `--arg` for the same keys
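The parsing rules above can be sketched in a few lines. This is an illustrative reimplementation, not the CLI's actual code:

```python
import json

def parse_call_args(arg_pairs, json_string=None):
    """Illustrative sketch of --arg/--json parsing (not the actual CLI code).

    Each --arg KEY=VALUE is tried as JSON first (numbers, lists, objects),
    falling back to a plain string; --json wins for duplicate keys.
    """
    kwargs = {}
    for pair in arg_pairs:
        key, _, raw = pair.partition("=")
        try:
            kwargs[key] = json.loads(raw)  # parse lists, numbers, objects
        except json.JSONDecodeError:
            kwargs[key] = raw              # plain string fallback
    if json_string:
        kwargs.update(json.loads(json_string))  # --json overrides --arg
    return kwargs

print(parse_call_args(["ids=[10,20]", "model=gpt-4"]))
# {'ids': [10, 20], 'model': 'gpt-4'}
```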
Examples:

```bash
# Simple arguments
afs call my-env evaluate --arg task_id=10

# Complex arguments (lists, objects)
afs call webshop evaluate --arg ids=[10,20,30] --arg model="deepseek-ai/DeepSeek-V3"

# JSON arguments
afs call affine evaluate --json '{"task_type": "abd", "num_samples": 5}'

# Custom timeout
afs call my-env long_task --arg task_id=1 --timeout 600

# Combined arguments
afs call agentgym evaluate \
  --arg ids=[10] \
  --arg model="deepseek-ai/DeepSeek-V3" \
  --arg base_url="https://llm.chutes.ai/v1" \
  --arg seed=2717596881
```

Notes:

- The container must be running (started via `afs run`; verify with `docker ps`)
- The method must exist in the environment's `env.py`
- Results are output as JSON
```bash
# 1. Initialize calculator environment
afs init calculator --template actor

# 2. (Optional) Edit calculator/env.py to customize logic
vim calculator/env.py

# 3. Build image
afs build calculator --tag calculator:v1

# 4. Start environment
afs run calculator:v1 --name calc --env PRECISION=3

# 5. Call methods
afs call calc add --arg a=10.5 --arg b=20.3
# Output: {"operation": "add", "a": 10.5, "b": 20.3, "result": 30.8}

afs call calc multiply --arg a=3.5 --arg b=4.2
# Output: {"operation": "multiply", "a": 3.5, "b": 4.2, "result": 14.7}

# 6. Batch calculations
afs call calc batch_calculate --json '{"operations": [{"op": "add", "a": 1, "b": 2}, {"op": "multiply", "a": 3, "b": 4}]}'

# 7. Stop container
docker stop calc
```

Build Docker image from environment directory.
```python
af_env.build_image_from_env(
    env_path: str,                    # Path to environment directory
    image_tag: str,                   # Image tag (e.g., "affine:latest")
    nocache: bool = False,            # Don't use the build cache
    quiet: bool = False,              # Suppress build output
    buildargs: Dict[str, str] = None  # Docker build arguments
) -> str                              # Returns the image tag
```

Requirements:

- `env_path` must contain an `env.py` file
- Optional: `Dockerfile`, `requirements.txt`, other Python files
Behavior:
- Detects environment type (function-based or http-based)
- For function-based: Builds base image, then injects HTTP server (two-stage build)
- For http-based: Uses existing Dockerfile as-is
Load environment from Docker image.
```python
af_env.load_env(
    image: str,                      # Docker image name
    env_vars: Dict[str, str] = None, # Environment variables
    replicas: int = 1,               # Number of instances
    hosts: List[str] = None,         # Remote hosts via SSH
    load_balance: str = "random",    # Load balancing: "random" or "round_robin"
    mem_limit: str = None,           # Memory limit: "512m", "1g", "2g"
    pull: bool = False,              # Pull image before starting
    cleanup: bool = True,            # Auto cleanup on exit
    **kwargs
) -> EnvironmentWrapper
```

Examples:

```python
# Basic usage
env = af_env.load_env(
    image="my-env:latest",
    env_vars={"API_KEY": "xxx"}
)

# Multi-instance with load balancing
env = af_env.load_env(
    image="my-env:latest",
    replicas=3,
    load_balance="round_robin"
)

# Remote deployment via SSH
env = af_env.load_env(
    image="my-env:latest",
    hosts=["ssh://user@host1", "ssh://user@host2"]
)
```

Environment instance methods:

```python
await env.cleanup()                # Stop container(s) and clean up
await env.list_methods()           # List available methods
env.is_ready()                     # Check if ready for execution
await env.<method_name>(**kwargs)  # Call any method from env.py
env.get_stats()                    # Get pool statistics (multi-instance)
```

Call-Level Timeout:

```python
# Set timeout for a specific method call
result = await env.evaluate(
    task_type="sat",
    _timeout=90  # Time out after 90 seconds
)
```

Module-level helpers:

```python
af_env.list_active_environments()  # List all active environment IDs
af_env.cleanup_all_environments()  # Clean up all environments (auto on exit)
af_env.get_environment(env_id)     # Get environment by ID
```
```text
+---------------------------------------------------------------+
|                       User Application                        |
|  +---------------------------------------------------------+  |
|  |  import affinetes as af_env                             |  |
|  |  env = af_env.load_env("affine:latest", replicas=3)     |  |
|  |  result = await env.evaluate(...)                       |  |
|  +---------------------------------------------------------+  |
+---------------------------------------------------------------+
                               |
                               v
+---------------------------------------------------------------+
|                      Affinetes Framework                      |
|   API Layer          Core Layer          Backend              |
|   - build_*     -->  - Wrapper      -->  - Local              |
|   - load_env         - Registry          - Pool               |
|                                                               |
|   Infrastructure                                              |
|   - ImageBuilder                                              |
|   - EnvDetector                                               |
|   - HTTPExecutor                                              |
+---------------------------------------------------------------+
                               |
                               v  Docker Internal Network
+---------------------------------------------------------------+
|                      Docker Container(s)                      |
|  +---------------------------------------------------------+  |
|  |  HTTP Server (Uvicorn) - 172.17.0.x:8000                |  |
|  |  - GET  /health                                         |  |
|  |  - GET  /methods                                        |  |
|  |  - POST /call {"method": "evaluate", "args": [...]}     |  |
|  +---------------------------------------------------------+  |
|                               |                               |
|                               v                               |
|  +---------------------------------------------------------+  |
|  |  User's env.py                                          |  |
|  |  class Actor:                                           |  |
|  |      def __init__(self): ...                            |  |
|  |      async def evaluate(self, ...): ...                 |  |
|  +---------------------------------------------------------+  |
+---------------------------------------------------------------+
```
No Port Exposure: Containers are accessed via Docker's internal network (e.g., 172.17.0.2:8000) instead of exposing ports to the host machine. This prevents unauthorized external access.
SSH Remote Access: Remote Docker daemons are accessed via SSH protocol (ssh://user@host) using public key authentication, providing secure encrypted communication.
Affinetes supports multiple execution modes for different deployment scenarios:
Manages Docker containers locally or remotely via SSH.
```python
# Local deployment
env = af_env.load_env(
    image="my-env:latest",
    mode="docker"  # default mode
)

# Remote deployment via SSH
env = af_env.load_env(
    image="my-env:latest",
    mode="docker",
    hosts=["ssh://user@remote-host"]
)
```

Connect to environment services that users have deployed themselves. The service must implement the standard affinetes HTTP API:

Required Endpoints:

- `GET /health` - Health check
- `GET /methods` - List available methods
- `POST /call` - Call a method with JSON body: `{"method": "...", "args": [...], "kwargs": {...}}`
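To illustrate the `/call` contract, here is a minimal dispatch sketch of what a compatible server does with the JSON body. Names and error handling are illustrative, not the framework's actual implementation:

```python
import asyncio
import inspect

class Actor:
    # Illustrative environment with one method
    async def add(self, a: float, b: float) -> float:
        return a + b

async def handle_call(actor, payload: dict):
    """Dispatch a POST /call body: {"method": ..., "args": [...], "kwargs": {...}}."""
    name = payload["method"]
    method = getattr(actor, name, None)
    if method is None or name.startswith("_"):
        raise ValueError(f"method {name!r} not found")
    result = method(*payload.get("args", []), **payload.get("kwargs", {}))
    if inspect.isawaitable(result):  # support both sync and async methods
        result = await result
    return result

print(asyncio.run(handle_call(Actor(), {"method": "add", "kwargs": {"a": 1, "b": 2}})))
# 3
```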
Usage:

```python
env = af_env.load_env(
    mode="url",
    base_url="http://your-service.com:8080"
)
result = await env.evaluate(task_id=10)
```

Typical Workflow:

1. Deploy the environment container on your infrastructure:

   ```bash
   docker run -d -p 8080:8000 \
     --name my-env-service \
     -e CHUTES_API_KEY=xxx \
     my-env:latest
   ```

2. Connect via URL mode:

   ```python
   env = af_env.load_env(
       mode="url",
       base_url="http://your-server.com:8080"
   )
   ```
Benefits:
- Full control over deployment infrastructure
- No SSH access required
- Works with any hosting provider
- Can be integrated into existing services
See examples/url_backend_demo.py for complete examples.
Reserved for future Basilica service integration. Currently a placeholder.
```python
env = af_env.load_env(
    image="affine",
    mode="basilica",
)
```

```python
# Deploy 3 instances with round-robin load balancing
env = af_env.load_env(
    image="my-env:latest",
    replicas=3,
    load_balance="round_robin"
)

# Concurrent execution (auto-balanced)
tasks = [env.evaluate(task_id=i) for i in range(10)]
results = await asyncio.gather(*tasks)

# Check distribution
stats = env.get_stats()
for inst in stats['instances']:
    print(f"{inst['host']}: {inst['requests']} requests")
```

```python
# Deploy to remote hosts
env = af_env.load_env(
    image="my-env:latest",
    hosts=[
        "ssh://user@host1",
        "ssh://user@host2"
    ]
)
result = await env.evaluate(task_id=10)
```

```python
# Set memory limit (auto-restart on OOM)
env = af_env.load_env(
    image="my-env:latest",
    mem_limit="512m"
)

# Multi-instance with limits
env = af_env.load_env(
    image="my-env:latest",
    replicas=3,
    mem_limit="1g"  # Each instance limited
)
```

```python
# Keep container running for debugging
env = af_env.load_env(
    image="my-env:latest",
    cleanup=False  # Manual cleanup required
)

# Pull latest image before starting
env = af_env.load_env(
    image="my-env:latest",
    pull=True
)

# Custom timeout for method calls
result = await env.evaluate(
    task_id=10,
    _timeout=600  # 10 minutes
)
```

Define env.py with an Actor class or module functions:
```python
import os

class Actor:
    def __init__(self):
        self.precision = int(os.getenv("PRECISION", "2"))

    async def add(self, a: float, b: float) -> dict:
        """Add two numbers"""
        result = a + b
        return {
            "operation": "add",
            "a": a,
            "b": b,
            "result": round(result, self.precision)
        }

    async def multiply(self, a: float, b: float) -> dict:
        """Multiply two numbers"""
        result = a * b
        return {
            "operation": "multiply",
            "a": a,
            "b": b,
            "result": round(result, self.precision)
        }
```

The framework automatically injects the HTTP server; no HTTP code is needed.
Use an existing FastAPI application:

```python
from fastapi import FastAPI

app = FastAPI()

@app.post("/evaluate")
async def evaluate(data: dict):
    return {"score": 1.0}
```

This requires a CMD in the Dockerfile to start the server.
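An http-based environment keeps its own Dockerfile as-is; a sketch of what that CMD might look like, serving on port 8000 as the framework expects (uvicorn and the `env:app` module path are assumptions for this example):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
RUN pip install --no-cache-dir fastapi uvicorn
COPY env.py .
# http-based environments must start their own server on port 8000
CMD ["uvicorn", "env:app", "--host", "0.0.0.0", "--port", "8000"]
```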
- Language-agnostic: JSON over HTTP works with any language
- Simple debugging: Standard HTTP logs and tools
- No version conflicts: Independent of Python version
- Production-ready: Battle-tested protocol
- Security: No exposed ports to internet
- Performance: Direct container-to-container communication
- Simplicity: No port conflicts or management
- SSH tunnels: Secure remote access without exposure
Affinetes automatically creates SSH tunnels for secure remote deployment:
```python
env = af_env.load_env(
    image="my-env:latest",
    hosts=["ssh://user@remote-host"]
)
# Automatic SSH tunnel: local -> encrypted -> remote container
```

Features:
- Zero port exposure on remote host
- Encrypted communication via SSH
- Automatic tunnel management
- No manual configuration needed
Setup:

```bash
# Generate an SSH key
ssh-keygen -t rsa -b 4096

# Copy it to the remote host
ssh-copy-id user@remote-host

# Test
ssh user@remote-host docker ps
```

Container won't start:

```bash
# Check logs
docker logs <container_name>

# Verify the HTTP server on port 8000
docker exec <container_name> curl localhost:8000/health
```

Method not found:

```python
# List available methods
methods = await env.list_methods()
print(methods)
```

SSH connection fails:

```bash
# Test SSH + Docker access
ssh user@remote-host docker ps

# Fix key permissions
chmod 600 ~/.ssh/id_rsa
```

License: MIT
Contributions welcome! Please ensure:
- Code follows existing patterns
- Tests pass (when available)
- Documentation is updated