A Docker-based API for generating images with the FLUX.1-dev model from Black Forest Labs.
- Simple REST API: Generate images by sending HTTP requests
- Docker-based: Easy deployment with Docker and Docker Compose
- GPU Accelerated: Utilizes CUDA for fast image generation
- Persistent Storage: Model files are downloaded once and reused
- Customizable: Configure image size, steps, and other parameters
Prerequisites:
- Docker and Docker Compose
- NVIDIA GPU with CUDA 12.8 drivers
- NVIDIA Container Toolkit installed
- Hugging Face account with access to FLUX.1-dev model
- At least 20GB of free disk space for the model
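Before the first start it can be worth confirming the disk-space requirement programmatically. A minimal sketch: the 20 GB threshold comes from the list above, while `enough_disk` is just an illustrative helper, not part of this project.

```python
import shutil

def enough_disk(path=".", required_gb=20):
    """Check that `path` has at least `required_gb` GB free for the model download."""
    free_gb = shutil.disk_usage(path).free / 1e9  # bytes -> GB
    return free_gb >= required_gb
```

Run it against the directory where the `models` volume will live before pulling the 20+ GB model.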
- Clone this repository:

  ```bash
  git clone https://github.com/yourusername/flux-api.git
  cd flux-api
  ```

- Create an environment file:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and add your Hugging Face token:

  ```
  HUGGINGFACE_TOKEN=your_huggingface_token_here
  ```

  You can create a token at https://huggingface.co/settings/tokens
- Create directories for persistent storage:

  ```bash
  mkdir -p models outputs
  ```
- Build and start the container:

  ```bash
  docker compose up --build
  ```

  The API will be available at http://localhost:2030
If you're using WSL2 and Docker Desktop, make sure:
- WSL integration is enabled in Docker Desktop (Settings > Resources > WSL Integration)
- NVIDIA Container Toolkit is properly configured
POST /generate
Request body:
```json
{
  "prompt": "A stunning mountain landscape at sunset",
  "negative_prompt": "blurry, low quality",
  "width": 1024,
  "height": 1024,
  "num_inference_steps": 50,
  "guidance_scale": 3.5,
  "max_sequence_length": 512,
  "seed": 12345
}
```

Parameters:

- `prompt` (required): Text description of the image to generate
- `negative_prompt` (optional): Text describing what to avoid in the image
- `width` (optional, default: 1024): Width of the generated image in pixels
- `height` (optional, default: 1024): Height of the generated image in pixels
- `num_inference_steps` (optional, default: 50): Number of denoising steps
- `guidance_scale` (optional, default: 3.5): How closely the model follows the prompt
- `max_sequence_length` (optional, default: 512): Maximum token length for prompt processing
- `seed` (optional): Random seed for reproducible results
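As a sketch of how these parameters fit together, the helper below builds a request body with the documented defaults. `build_payload` is a hypothetical client-side convenience, not part of the API; rounding `width`/`height` down to multiples of 16 reflects a common requirement of FLUX pipelines and is an assumption here, not something this API is documented to enforce.

```python
def build_payload(prompt, *, width=1024, height=1024, steps=50,
                  guidance=3.5, seed=None, negative_prompt=None):
    """Return a JSON-serializable body for POST /generate."""
    # Round dimensions down to the nearest multiple of 16 (assumed
    # FLUX pipeline constraint; adjust if the API accepts other sizes).
    width, height = (width // 16) * 16, (height // 16) * 16
    payload = {
        "prompt": prompt,
        "width": width,
        "height": height,
        "num_inference_steps": steps,
        "guidance_scale": guidance,
        "max_sequence_length": 512,
    }
    if negative_prompt is not None:
        payload["negative_prompt"] = negative_prompt
    if seed is not None:
        payload["seed"] = seed  # a fixed seed makes results reproducible
    return payload
```

Omitting `seed` leaves generation non-deterministic; pass the same seed and parameters to regenerate an identical image.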
Response:
```json
{
  "status": "success",
  "filename": "flux-20240408-123456-abcd1234.png",
  "filepath": "/app/outputs/flux-20240408-123456-abcd1234.png",
  "download_url": "/download/flux-20240408-123456-abcd1234.png"
}
```

GET /download/{filename}
Returns the generated image as a PNG file.
GET /status
Returns information about the API, model loading status, and CUDA availability.
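Because the model takes a while to load on first startup, a client can poll GET /status before sending generation requests. The sketch below assumes the status payload exposes boolean fields named `model_loaded` and `cuda_available`; those field names are assumptions, so check an actual /status response and adjust.

```python
import time
import requests

def is_ready(status: dict) -> bool:
    # Field names are assumptions; inspect a real /status response and adjust.
    return bool(status.get("model_loaded")) and bool(status.get("cuda_available"))

def wait_until_ready(base_url="http://localhost:2030", timeout=600):
    """Poll GET /status until the model is loaded or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if is_ready(requests.get(f"{base_url}/status", timeout=5).json()):
                return True
        except requests.RequestException:
            pass  # API not up yet; keep polling
        time.sleep(5)
    return False
```

Calling `wait_until_ready()` once before the first `/generate` request avoids errors while the 20+ GB model is still downloading.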
```python
import requests

# Generate an image
response = requests.post(
    "http://localhost:2030/generate",
    json={
        "prompt": "A beautiful sunset over mountains",
        "negative_prompt": "blurry, low quality",
        "num_inference_steps": 50,
        "guidance_scale": 3.5,
        "width": 1024,
        "height": 1024,
    },
)
response.raise_for_status()

# Get the download URL from the response
result = response.json()
print(f"Image generated: {result['filename']}")

# Download the image
image_response = requests.get(f"http://localhost:2030{result['download_url']}")
with open(f"generated_{result['filename']}", "wb") as f:
    f.write(image_response.content)
```

```bash
# Generate an image
curl -X POST http://localhost:2030/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A beautiful sunset over mountains", "negative_prompt": "blurry, low quality", "num_inference_steps": 50, "guidance_scale": 3.5}'

# Download the image (replace with the filename from the response)
curl -o generated_image.png http://localhost:2030/download/flux-20240408-123456-abcd1234.png
```

Notes:

- The model is downloaded on first startup (20+ GB), which may take some time
- Generated images are saved in the `outputs` directory and persist between container restarts
- The model is stored in the `models` directory and persists between container restarts
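The persistence described in these notes typically comes from bind mounts in `docker-compose.yml`. A sketch of what such a service definition might look like; the service name, the `/app/models` container path, and the GPU reservation block are assumptions based on standard Compose conventions, not a copy of this project's file (only `/app/outputs` and port 2030 appear in the API responses above):

```yaml
services:
  flux-api:
    build: .
    ports:
      - "2030:2030"
    env_file: .env
    volumes:
      - ./models:/app/models     # model weights survive container restarts
      - ./outputs:/app/outputs   # generated images survive container restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```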
This project uses the FLUX.1-dev model, which is licensed under the FLUX.1-dev Non-Commercial License.