A Backend For Frontend (BFF) service written in Go that triggers Kubeflow pipeline runs from template files on the filesystem, with runtime parameters supplied via a REST API. It also integrates with Llama Stack for listing and retrieving AI models.
- REST API: Simple HTTP API for triggering and monitoring Kubeflow pipeline runs
- Llama Stack Integration: List and retrieve AI models from Llama Stack API
- Template-based: Use YAML or JSON pipeline templates stored on the filesystem
- Parameter Injection: Pass runtime parameters to customize pipeline executions
- OAuth/OIDC Support: Pass-through authentication using bearer tokens
- Containerized: Docker support for easy deployment
- Health Checks: Built-in health endpoint for monitoring
```
┌─────────────┐     HTTP/REST      ┌──────────────┐     Kubeflow API     ┌─────────────┐
│  UI/Client  │ ─────────────────> │ BFF Service  │ ───────────────────> │  Kubeflow   │
│             │    OAuth Token     │              │     OAuth Token      │  Pipelines  │
└─────────────┘                    └──────────────┘                      └─────────────┘
                                          │
                                          │ reads
                                          ▼
                                   ┌──────────────┐
                                   │  Templates   │
                                   │  Directory   │
                                   └──────────────┘
```
```
test-ui-bff/
├── cmd/
│   └── server/
│       └── main.go          # Application entry point
├── internal/
│   ├── api/
│   │   ├── handler.go       # HTTP handlers
│   │   └── router.go        # Route definitions
│   ├── config/
│   │   └── config.go        # Configuration management
│   ├── kubeflow/
│   │   ├── client.go        # Kubeflow Pipelines client
│   │   └── pipeline.go      # Pipeline operations
│   ├── llamastack/
│   │   ├── client.go        # Llama Stack client
│   │   └── models.go        # Model operations
│   └── template/
│       └── manager.go       # Template file handling
├── templates/               # Pipeline template files
├── Dockerfile               # Container image definition
├── .dockerignore            # Docker build exclusions
├── go.mod                   # Go module definition
├── README.md                # This file
└── .env.example             # Example environment configuration
```
- Go 1.21 or later
- Access to a Kubeflow Pipelines cluster
- OAuth/OIDC token for authentication
- Clone the repository:

```bash
git clone https://github.com/chrjones-rh/test-ui-bff.git
cd test-ui-bff
```

- Install dependencies:

```bash
go mod download
```

- Create environment configuration:

```bash
cp .env.example .env
# Edit .env with your configuration
```

- Run the server:

```bash
go run cmd/server/main.go
```

The BFF service is configured using environment variables:
| Variable | Description | Default | Required |
|---|---|---|---|
| `SERVER_PORT` | HTTP server port | `8080` | No |
| `KUBEFLOW_API_ENDPOINT` | Kubeflow Pipelines API base URL | - | Yes |
| `LLAMA_STACK_API_ENDPOINT` | Llama Stack API base URL (OpenAI-compatible) | - | No |
| `TEMPLATE_DIR` | Directory containing pipeline templates | `./templates` | No |
| `OIDC_ISSUER_URL` | OIDC issuer URL (for future use) | - | No |
Build the Docker image:

```bash
docker build -t test-ui-bff:latest .
```

Run the container:

```bash
docker run -d \
  -p 8080:8080 \
  -e KUBEFLOW_API_ENDPOINT=https://kubeflow.example.com \
  -v $(pwd)/templates:/app/templates \
  test-ui-bff:latest
```

### POST /api/v1/pipelines/run
Triggers a new pipeline run using a template file.
Headers:

```
Authorization: Bearer <oauth-token>
Content-Type: application/json
```
Request Body:

```json
{
  "template": "example-pipeline.yaml",
  "display_name": "My Pipeline Run",
  "parameters": {
    "param1": "value1",
    "param2": "value2"
  }
}
```

Response (200 OK):

```json
{
  "run_id": "run-xyz-123",
  "status": "running"
}
```

Error Response (4xx/5xx):

```json
{
  "error": "error_code",
  "message": "Detailed error message"
}
```

### GET /api/v1/pipelines/run/:id
Retrieves the status and details of a pipeline run.
Headers:

```
Authorization: Bearer <oauth-token>
```

Response (200 OK):

```json
{
  "run_id": "run-xyz-123",
  "display_name": "My Pipeline Run",
  "state": "SUCCEEDED",
  "created_at": "2026-01-26T10:00:00Z",
  "finished_at": "2026-01-26T10:15:00Z",
  "pipeline_spec": { ... }
}
```

### GET /api/v1/templates
Lists all available pipeline templates.
Response (200 OK):

```json
{
  "templates": [
    "example-pipeline.yaml",
    "training-pipeline.yaml",
    "inference-pipeline.json"
  ]
}
```

### GET /api/v1/models
Lists all available AI models from Llama Stack.
Headers:

```
Authorization: Bearer <oauth-token>
```

Response (200 OK):

```json
{
  "object": "list",
  "data": [
    {
      "id": "meta-llama/Llama-3.2-11B-Vision-Instruct",
      "object": "model",
      "created": 1234567890,
      "owned_by": "meta"
    },
    {
      "id": "meta-llama/Llama-3.2-3B-Instruct",
      "object": "model",
      "created": 1234567890,
      "owned_by": "meta"
    }
  ]
}
```

Note: Requires `LLAMA_STACK_API_ENDPOINT` to be configured. Returns 503 if not configured.
### GET /api/v1/models/:id
Retrieves details about a specific AI model.
Headers:

```
Authorization: Bearer <oauth-token>
```

Response (200 OK):

```json
{
  "id": "meta-llama/Llama-3.2-11B-Vision-Instruct",
  "object": "model",
  "created": 1234567890,
  "owned_by": "meta"
}
```

Note: Requires `LLAMA_STACK_API_ENDPOINT` to be configured. Returns 503 if not configured.
### GET /health
Health check endpoint for monitoring.
Response (200 OK):

```json
{
  "status": "healthy"
}
```

Templates are YAML or JSON files stored in the `templates/` directory. They define Kubeflow pipeline specifications.
```yaml
# templates/example-pipeline.yaml
apiVersion: pipelines.kubeflow.org/v2beta1
kind: PipelineSpec
metadata:
  name: example-pipeline
spec:
  pipelineInfo:
    name: example-pipeline
  root:
    dag:
      tasks:
      - name: hello-world
        componentRef:
          name: comp-hello-world
  components:
    comp-hello-world:
      executorLabel: exec-hello-world
  deploymentSpec:
    executors:
      exec-hello-world:
        container:
          image: alpine:latest
          command:
          - echo
          args:
          - "Hello, {{$.inputs.parameters.message}}"
  runtime_config:
    parameters:
      message:
        stringValue: "World"
```

Parameters can be specified in the template under `runtime_config.parameters` and overridden via the API:
```json
{
  "template": "example-pipeline.yaml",
  "parameters": {
    "message": "Kubeflow!"
  }
}
```

- Trigger a pipeline run:
```bash
curl -X POST http://localhost:8080/api/v1/pipelines/run \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "template": "example-pipeline.yaml",
    "display_name": "Test Run",
    "parameters": {
      "message": "Hello from BFF!"
    }
  }'
```

- Check run status:

```bash
curl -X GET http://localhost:8080/api/v1/pipelines/run/RUN_ID \
  -H "Authorization: Bearer YOUR_TOKEN"
```

- List templates:

```bash
curl -X GET http://localhost:8080/api/v1/templates
```

- List available models:

```bash
curl -X GET http://localhost:8080/api/v1/models \
  -H "Authorization: Bearer YOUR_TOKEN"
```

- Get model details:

```bash
curl -X GET http://localhost:8080/api/v1/models/meta-llama/Llama-3.2-11B-Vision-Instruct \
  -H "Authorization: Bearer YOUR_TOKEN"
```

```bash
# Trigger pipeline
http POST localhost:8080/api/v1/pipelines/run \
  Authorization:"Bearer YOUR_TOKEN" \
  template=example-pipeline.yaml \
  display_name="Test Run" \
  parameters:='{"message":"Hello!"}'

# Get run status
http GET localhost:8080/api/v1/pipelines/run/RUN_ID \
  Authorization:"Bearer YOUR_TOKEN"

# List models
http GET localhost:8080/api/v1/models \
  Authorization:"Bearer YOUR_TOKEN"

# Get model details
http GET localhost:8080/api/v1/models/meta-llama/Llama-3.2-11B-Vision-Instruct \
  Authorization:"Bearer YOUR_TOKEN"
```

The BFF uses a pass-through authentication model:
- Client includes an OAuth/OIDC bearer token in the `Authorization` header
- BFF extracts the token and forwards it to the backend API (Kubeflow or Llama Stack)
- Backend API validates the token and authorizes the request
- No token validation occurs in the BFF layer
This approach:
- Simplifies the BFF architecture
- Reduces dependencies
- Delegates auth to backend services (single source of truth)
- Supports any auth method the backend services use
Run the tests:

```bash
go test ./...
```

Build the binary:

```bash
go build -o bin/test-ui-bff ./cmd/server
```

To add a new pipeline template:

- Create a YAML or JSON file in the `templates/` directory
- Define the pipeline specification following the Kubeflow v2beta1 API format
- Optionally include default parameters under `runtime_config.parameters`
- The template will be automatically available via the API
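Conceptually, the parameter override described above is a map merge: template defaults first, API-supplied values on top. A sketch with an illustrative `mergeParams` helper (the service's actual merge logic may differ):

```go
package main

import "fmt"

// mergeParams overlays API-supplied parameters on top of the template's
// runtime_config defaults. Keys present in overrides win.
func mergeParams(defaults, overrides map[string]string) map[string]string {
	merged := make(map[string]string, len(defaults)+len(overrides))
	for k, v := range defaults {
		merged[k] = v
	}
	for k, v := range overrides {
		merged[k] = v
	}
	return merged
}

func main() {
	// Template default says "World"; the API request overrides it.
	defaults := map[string]string{"message": "World"}
	overrides := map[string]string{"message": "Kubeflow!"}
	fmt.Println(mergeParams(defaults, overrides)["message"])
}
```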
- Path Traversal Protection: The template manager validates file paths to prevent directory traversal attacks
- Token Handling: OAuth tokens are forwarded only via the `Authorization` header
- Container Security: The container runs as a non-root user
- Input Validation: Request bodies are validated before processing
**"KUBEFLOW_API_ENDPOINT environment variable is required"**

- Ensure `KUBEFLOW_API_ENDPOINT` is set in your environment

**"template file not found"**

- Verify the template exists in the templates directory
- Check the template name matches the file name exactly

**"unauthorized"**

- Verify your OAuth token is valid
- Check the Authorization header format: `Bearer <token>`

**"kubeflow API error"**

- Check the Kubeflow API endpoint is accessible
- Verify the OAuth token has proper permissions in Kubeflow
- Review Kubeflow logs for detailed error messages
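When debugging "kubeflow API error", a quick reachability probe helps separate network problems from auth problems. A diagnostic sketch; the URL in `main` is a placeholder for your `KUBEFLOW_API_ENDPOINT`:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkEndpoint does a quick reachability probe against a base URL.
// A transport error means the host is unreachable; any HTTP status at all
// means the endpoint is up (auth problems show as 401/403, not errors here).
func checkEndpoint(base string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(base)
	if err != nil {
		return fmt.Errorf("endpoint unreachable: %w", err)
	}
	defer resp.Body.Close()
	fmt.Printf("%s responded with HTTP %d\n", base, resp.StatusCode)
	return nil
}

func main() {
	// Placeholder endpoint; substitute your configured KUBEFLOW_API_ENDPOINT.
	_ = checkEndpoint("https://kubeflow.example.com")
}
```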
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
[Add your license here]
[Add contact information or links]