Kubeflow Pipeline BFF

A Backend for Frontend (BFF) service, written in Go, that triggers Kubeflow pipeline runs from template files on the filesystem, with runtime parameters accepted via a REST API. It also integrates with Llama Stack for AI model management.

Features

  • REST API: Simple HTTP API for triggering and monitoring Kubeflow pipeline runs
  • Llama Stack Integration: List and retrieve AI models from Llama Stack API
  • Template-based: Use YAML or JSON pipeline templates stored on the filesystem
  • Parameter Injection: Pass runtime parameters to customize pipeline executions
  • OAuth/OIDC Support: Pass-through authentication using bearer tokens
  • Containerized: Docker support for easy deployment
  • Health Checks: Built-in health endpoint for monitoring

Architecture

┌─────────────┐      HTTP/REST      ┌──────────────┐      Kubeflow API     ┌─────────────┐
│   UI/Client │ ──────────────────> │  BFF Service │ ────────────────────> │  Kubeflow   │
│             │   OAuth Token       │              │     OAuth Token       │  Pipelines  │
└─────────────┘                     └──────────────┘                       └─────────────┘
                                           │
                                           │ reads
                                           ▼
                                    ┌──────────────┐
                                    │  Templates   │
                                    │  Directory   │
                                    └──────────────┘

Project Structure

test-ui-bff/
├── cmd/
│   └── server/
│       └── main.go              # Application entry point
├── internal/
│   ├── api/
│   │   ├── handler.go           # HTTP handlers
│   │   └── router.go            # Route definitions
│   ├── config/
│   │   └── config.go            # Configuration management
│   ├── kubeflow/
│   │   ├── client.go            # Kubeflow Pipelines client
│   │   └── pipeline.go          # Pipeline operations
│   ├── llamastack/
│   │   ├── client.go            # Llama Stack client
│   │   └── models.go            # Model operations
│   └── template/
│       └── manager.go           # Template file handling
├── templates/                   # Pipeline template files
├── Dockerfile                   # Container image definition
├── .dockerignore                # Docker build exclusions
├── go.mod                       # Go module definition
├── README.md                    # This file
└── .env.example                 # Example environment configuration

Getting Started

Prerequisites

  • Go 1.21 or later
  • Access to a Kubeflow Pipelines cluster
  • OAuth/OIDC token for authentication

Installation

  1. Clone the repository:
git clone https://github.com/chrjones-rh/test-ui-bff.git
cd test-ui-bff
  2. Install dependencies:
go mod download
  3. Create environment configuration:
cp .env.example .env
# Edit .env with your configuration
  4. Run the server:
go run cmd/server/main.go

Configuration

The BFF service is configured using environment variables:

Variable                  Description                                   Default      Required
SERVER_PORT               HTTP server port                              8080         No
KUBEFLOW_API_ENDPOINT     Kubeflow Pipelines API base URL               -            Yes
LLAMA_STACK_API_ENDPOINT  Llama Stack API base URL (OpenAI-compatible)  -            No
TEMPLATE_DIR              Directory containing pipeline templates       ./templates  No
OIDC_ISSUER_URL           OIDC issuer URL (for future use)              -            No

Docker Deployment

Build the Docker image:

docker build -t test-ui-bff:latest .

Run the container:

docker run -d \
  -p 8080:8080 \
  -e KUBEFLOW_API_ENDPOINT=https://kubeflow.example.com \
  -v $(pwd)/templates:/app/templates \
  test-ui-bff:latest

API Documentation

Endpoints

1. Trigger Pipeline Run

POST /api/v1/pipelines/run

Triggers a new pipeline run using a template file.

Headers:

Authorization: Bearer <oauth-token>
Content-Type: application/json

Request Body:

{
  "template": "example-pipeline.yaml",
  "display_name": "My Pipeline Run",
  "parameters": {
    "param1": "value1",
    "param2": "value2"
  }
}

Response (200 OK):

{
  "run_id": "run-xyz-123",
  "status": "running"
}

Error Response (4xx/5xx):

{
  "error": "error_code",
  "message": "Detailed error message"
}

2. Get Pipeline Run Status

GET /api/v1/pipelines/run/:id

Retrieves the status and details of a pipeline run.

Headers:

Authorization: Bearer <oauth-token>

Response (200 OK):

{
  "run_id": "run-xyz-123",
  "display_name": "My Pipeline Run",
  "state": "SUCCEEDED",
  "created_at": "2026-01-26T10:00:00Z",
  "finished_at": "2026-01-26T10:15:00Z",
  "pipeline_spec": { ... }
}

3. List Available Templates

GET /api/v1/templates

Lists all available pipeline templates.

Response (200 OK):

{
  "templates": [
    "example-pipeline.yaml",
    "training-pipeline.yaml",
    "inference-pipeline.json"
  ]
}

4. List Available Models

GET /api/v1/models

Lists all available AI models from Llama Stack.

Headers:

Authorization: Bearer <oauth-token>

Response (200 OK):

{
  "object": "list",
  "data": [
    {
      "id": "meta-llama/Llama-3.2-11B-Vision-Instruct",
      "object": "model",
      "created": 1234567890,
      "owned_by": "meta"
    },
    {
      "id": "meta-llama/Llama-3.2-3B-Instruct",
      "object": "model",
      "created": 1234567890,
      "owned_by": "meta"
    }
  ]
}

Note: Requires LLAMA_STACK_API_ENDPOINT to be configured. Returns 503 if not configured.

5. Get Model Details

GET /api/v1/models/:id

Retrieves details about a specific AI model.

Headers:

Authorization: Bearer <oauth-token>

Response (200 OK):

{
  "id": "meta-llama/Llama-3.2-11B-Vision-Instruct",
  "object": "model",
  "created": 1234567890,
  "owned_by": "meta"
}

Note: Requires LLAMA_STACK_API_ENDPOINT to be configured. Returns 503 if not configured.

6. Health Check

GET /health

Health check endpoint for monitoring.

Response (200 OK):

{
  "status": "healthy"
}

Pipeline Templates

Templates are YAML or JSON files stored in the templates/ directory. They define Kubeflow pipeline specifications.

Example Template (YAML)

# templates/example-pipeline.yaml
apiVersion: pipelines.kubeflow.org/v2beta1
kind: PipelineSpec
metadata:
  name: example-pipeline
spec:
  pipelineInfo:
    name: example-pipeline
  root:
    dag:
      tasks:
        - name: hello-world
          componentRef:
            name: comp-hello-world
  components:
    comp-hello-world:
      executorLabel: exec-hello-world
  deploymentSpec:
    executors:
      exec-hello-world:
        container:
          image: alpine:latest
          command:
            - echo
          args:
            - "Hello, {{$.inputs.parameters.message}}"
runtime_config:
  parameters:
    message:
      stringValue: "World"

Template Parameters

Parameters can be specified in the template under runtime_config.parameters and overridden via the API:

{
  "template": "example-pipeline.yaml",
  "parameters": {
    "message": "Kubeflow!"
  }
}

Usage Examples

Using curl

  1. Trigger a pipeline run:
curl -X POST http://localhost:8080/api/v1/pipelines/run \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "template": "example-pipeline.yaml",
    "display_name": "Test Run",
    "parameters": {
      "message": "Hello from BFF!"
    }
  }'
  2. Check run status:
curl -X GET http://localhost:8080/api/v1/pipelines/run/RUN_ID \
  -H "Authorization: Bearer YOUR_TOKEN"
  3. List templates:
curl -X GET http://localhost:8080/api/v1/templates
  4. List available models:
curl -X GET http://localhost:8080/api/v1/models \
  -H "Authorization: Bearer YOUR_TOKEN"
  5. Get model details:
curl -X GET http://localhost:8080/api/v1/models/meta-llama/Llama-3.2-11B-Vision-Instruct \
  -H "Authorization: Bearer YOUR_TOKEN"

Using httpie

# Trigger pipeline
http POST localhost:8080/api/v1/pipelines/run \
  Authorization:"Bearer YOUR_TOKEN" \
  template=example-pipeline.yaml \
  display_name="Test Run" \
  parameters:='{"message":"Hello!"}'

# Get run status
http GET localhost:8080/api/v1/pipelines/run/RUN_ID \
  Authorization:"Bearer YOUR_TOKEN"

# List models
http GET localhost:8080/api/v1/models \
  Authorization:"Bearer YOUR_TOKEN"

# Get model details
http GET localhost:8080/api/v1/models/meta-llama/Llama-3.2-11B-Vision-Instruct \
  Authorization:"Bearer YOUR_TOKEN"

Authentication

The BFF uses a pass-through authentication model:

  1. Client includes OAuth/OIDC bearer token in Authorization header
  2. BFF extracts the token and forwards it to the backend API (Kubeflow or Llama Stack)
  3. Backend API validates the token and authorizes the request
  4. No token validation occurs in the BFF layer

This approach:

  • Simplifies the BFF architecture
  • Reduces dependencies
  • Delegates auth to backend services (single source of truth)
  • Supports any auth method the backend services use

Development

Running Tests

go test ./...

Building

go build -o bin/test-ui-bff ./cmd/server

Adding New Templates

  1. Create a YAML or JSON file in the templates/ directory
  2. Define the pipeline specification following Kubeflow v2beta1 API format
  3. Optionally include default parameters under runtime_config.parameters
  4. The template will be automatically available via the API

Security Considerations

  • Path Traversal Protection: Template manager validates file paths to prevent directory traversal attacks
  • Token Handling: OAuth tokens are passed securely via Authorization headers
  • Container Security: Non-root user in Docker container
  • Input Validation: Request bodies are validated before processing

Troubleshooting

Common Issues

  1. "KUBEFLOW_API_ENDPOINT environment variable is required"

    • Ensure KUBEFLOW_API_ENDPOINT is set in your environment
  2. "template file not found"

    • Verify the template exists in the templates directory
    • Check the template name matches the file name exactly
  3. "unauthorized"

    • Verify your OAuth token is valid
    • Check the Authorization header format: Bearer <token>
  4. "kubeflow API error"

    • Check Kubeflow API endpoint is accessible
    • Verify OAuth token has proper permissions in Kubeflow
    • Review Kubeflow logs for detailed error messages

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

License

[Add your license here]

Contact

[Add contact information or links]
