Add e2e test environment setup scripts and configuration files#59

Merged
dmaniloff merged 1 commit into trustyai-explainability:main from dmaniloff:cluster-deployment
Feb 26, 2026
Conversation

@dmaniloff dmaniloff commented Feb 24, 2026

This commit introduces a comprehensive setup for the llama-stack-provider-ragas e2e test environment, including:

  • A Containerfile for building the test image.
  • Deployment and teardown scripts for managing the test environment on OpenShift.
  • Configuration files for Kubernetes resources, including ConfigMaps, Secrets, and DataSciencePipelinesApplication manifests.
  • MinIO deployment for results storage and necessary operator configurations.

These additions facilitate automated testing and deployment of the llama-stack provider in a Kubernetes environment.

Summary by Sourcery

Add automation and Kubernetes resources to provision an OpenShift-based e2e test environment for the llama-stack-provider-ragas distribution, including image build, deployment, and teardown.

New Features:

  • Provide a deploy-e2e script that builds or consumes a container image, installs required operators, configures namespaces and secrets, and deploys the llama-stack distribution for testing.
  • Introduce a teardown-e2e script to cleanly remove the e2e test namespace and its resources.
  • Add a Containerfile for building a dedicated llama-stack-provider-ragas distribution image used in e2e tests.
  • Define a LlamaStackDistribution custom resource and associated ConfigMap to configure the test distribution instance.
  • Deploy a MinIO instance and bucket-initialization job for storing evaluation results in the test cluster.
  • Configure Open Data Hub DataScienceCluster and DataSciencePipelinesApplication resources to provide Kubeflow Pipelines v2 for remote evaluations.
  • Add ConfigMaps, Secrets, and NetworkPolicy resources to wire up Kubeflow pipelines, S3 credentials, and llama-stack access within the e2e namespace.

Tests:

  • Establish a reusable OpenShift-based cluster deployment setup under tests/cluster-deployment for running end-to-end tests against the llama-stack-provider-ragas distribution.


sourcery-ai bot commented Feb 24, 2026

Reviewer's Guide

Adds a complete OpenShift-based e2e test environment for llama-stack-provider-ragas, including a buildable test image, deployment/teardown scripts, LlamaStackDistribution config, Open Data Hub / DSPA / Kubeflow integration, and a MinIO-backed S3-compatible results store wired into the provider config.

File-Level Changes

Introduce deploy/teardown automation for running the llama-stack-provider-ragas e2e stack on OpenShift.
  • Add deploy-e2e.sh to build or consume a container image, push it to the OpenShift internal registry, install required operators, and apply all test manifests in a dedicated namespace
  • Implement environment secret creation from repo-level .env into a single ragas-env Kubernetes secret
  • Add teardown-e2e.sh to delete the ragas-test namespace and clean up all test resources
tests/cluster-deployment/deploy-e2e.sh
tests/cluster-deployment/teardown-e2e.sh
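The ".env into a single ragas-env secret" step could be sketched in shell roughly as follows. This is a hypothetical sketch: only the `.env`, `ragas-env`, and `ragas-test` names come from this PR; the helper name and parsing logic are illustrative.

```shell
#!/usr/bin/env bash
# Sketch of turning repo-level .env entries into oc secret arguments.
set -euo pipefail

build_secret_args() {
  # Emit one --from-literal=KEY=VALUE argument per non-comment .env line.
  local env_file="$1" key value
  while IFS='=' read -r key value; do
    # Skip blank lines and comments.
    [[ -z "$key" || "$key" == \#* ]] && continue
    printf -- '--from-literal=%s=%s\n' "$key" "$value"
  done < "$env_file"
}

# Assumed invocation (applied idempotently via a client-side dry run):
#   oc -n ragas-test create secret generic ragas-env \
#     $(build_secret_args .env) --dry-run=client -o yaml | oc apply -f -
```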
Provide a dedicated container image definition for the e2e llama-stack distribution including model pre-download.
  • Define a Containerfile that installs uv, creates a virtualenv, pre-installs torch and sentence-transformers, pre-downloads the nomic-ai/nomic-embed-text-v1.5 model, installs the project in editable mode with remote/distro extras, and runs the distribution via llama stack run
  • Expose port 8321 as the LlamaStack service port
tests/cluster-deployment/Containerfile
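Based on the description above, the Containerfile might look roughly like this. A sketch only: the base image, paths, and config filename are assumptions, not the PR's actual values.

```dockerfile
# Hypothetical sketch of the e2e Containerfile described above.
FROM registry.access.redhat.com/ubi9/python-312

# Install uv and create a virtualenv for the distribution.
RUN pip install uv
WORKDIR /app
RUN uv venv /app/.venv
ENV PATH="/app/.venv/bin:$PATH"

# Pre-install heavy dependencies and pre-download the embedding model so the
# container starts quickly in the cluster.
RUN uv pip install torch sentence-transformers
RUN python -c "from sentence_transformers import SentenceTransformer; \
    SentenceTransformer('nomic-ai/nomic-embed-text-v1.5', trust_remote_code=True)"

# Install the provider itself with the remote/distro extras in editable mode.
COPY . /app
RUN uv pip install -e ".[remote,distro]"

EXPOSE 8321
CMD ["llama", "stack", "run", "/app/config.yaml"]
```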
Configure a LlamaStackDistribution and runtime configuration for the e2e environment.
  • Create a ConfigMap carrying the llama-stack config.yaml wiring eval, inference, files, datasetio providers, and storage backends with env-based configuration
  • Define a LlamaStackDistribution CR that injects env vars from the kubeflow-ragas-config ConfigMap and ragas-env Secret, sets resource requests/limits, server port, and references the test image placeholder for templating
  • Bind the userConfig ConfigMap into the distribution so the e2e LlamaStack instance uses the test-specific config
tests/cluster-deployment/manifests/llama-stack-distribution.yaml
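The LlamaStackDistribution CR described above might be shaped roughly as follows. Field names follow the llama-stack-k8s-operator CRD as commonly documented, but all concrete values (image placeholder, resource sizes, env wiring) are assumptions, not the PR's actual manifest.

```yaml
# Hypothetical sketch of the e2e LlamaStackDistribution CR.
apiVersion: llamastack.io/v1alpha1
kind: LlamaStackDistribution
metadata:
  name: ragas-e2e
  namespace: ragas-test
spec:
  server:
    containerSpec:
      image: ${E2E_IMAGE}            # placeholder templated by deploy-e2e.sh
      port: 8321
      resources:
        requests: { cpu: "1", memory: 4Gi }
        limits: { cpu: "2", memory: 8Gi }
      # Env vars injected from the kubeflow-ragas-config ConfigMap and the
      # ragas-env Secret (exact mechanism depends on the CRD version).
      envFrom:
        - configMapRef: { name: kubeflow-ragas-config }
        - secretRef: { name: ragas-env }
    userConfig:
      configMapName: llama-stack-config   # ConfigMap carrying config.yaml
```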
Stand up MinIO as an S3-compatible results store and bootstrap its bucket for evaluations.
  • Add a MinIO Deployment and Service in the ragas-test namespace with fixed credentials and readiness probe
  • Define a Job using the MinIO mc client to wait for MinIO readiness and create the ragas-results bucket if it does not exist
tests/cluster-deployment/manifests/minio.yaml
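The bucket-initialization Job could look roughly like this. A sketch: the Service name and image tag are assumptions (the review below notes that the PR uses `minioadmin` defaults and `:latest` tags).

```yaml
# Hypothetical sketch of the MinIO bucket-initialization Job.
apiVersion: batch/v1
kind: Job
metadata:
  name: minio-bucket-init
  namespace: ragas-test
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: mc
          image: quay.io/minio/mc:latest
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Retry until MinIO answers, then create the bucket idempotently.
              until mc alias set local http://minio:9000 minioadmin minioadmin; do
                sleep 2
              done
              mc mb --ignore-existing local/ragas-results
```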
Set up Open Data Hub / Data Science Pipelines and Kubeflow-related resources required by the remote eval provider.
  • Subscribe to the Open Data Hub operator in openshift-operators via an OperatorHub Subscription
  • Create DSCInitialization and DataScienceCluster resources enabling the datasciencepipelines component and associated CRDs
  • Add a DataSciencePipelinesApplication instance in ragas-test to deploy a KFP v2 pipeline server with MariaDB and MinIO object storage
  • Create a kubeflow-ragas-config ConfigMap providing default inference, embedding, Kubeflow, and MinIO configuration used by the LlamaStackDistribution
  • Add an aws-credentials Secret for pipeline S3 access and a NetworkPolicy allowing the llama-stack pod to reach the DSPA API server on the required ports
tests/cluster-deployment/manifests/operators/opendatahub-operator.yaml
tests/cluster-deployment/manifests/operators/datasciencecluster.yaml
tests/cluster-deployment/manifests/datasciencepipelinesapplication.yaml
tests/cluster-deployment/manifests/configmap-and-secrets.yaml
tests/cluster-deployment/manifests/kubeflow-pipeline-resources.yaml
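The NetworkPolicy described above might be shaped as follows. The pod label selectors and port are illustrative assumptions (the DSPA API server commonly listens on 8888), not the PR's actual values.

```yaml
# Hypothetical sketch of the llama-stack -> DSPA NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-llama-stack-to-dspa
  namespace: ragas-test
spec:
  podSelector:
    matchLabels:
      app: ds-pipeline-dspa          # DSPA API server pods (assumed label)
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: llama-stack       # llama-stack pod (assumed label)
      ports:
        - protocol: TCP
          port: 8888
```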

@sourcery-ai sourcery-ai bot left a comment

Hey - I've left some high level feedback:

  • In deploy-e2e.sh, consider adding more robust handling/logging for timeouts (e.g., printing current conditions or relevant events for DSC/DSPA/operator resources) to make diagnosing failures easier when a wait loop exits with an error.
  • Several manifests hard-code the ragas-test namespace and fixed resource names (e.g., DataSciencePipelinesApplication, MinIO, LlamaStackDistribution); if you expect to run multiple environments or reuse these manifests, parameterizing namespace and names via kustomize or envsubst would make the setup more flexible.
  • MinIO and mc images are pinned to :latest and use fixed default credentials (minioadmin); for reproducibility and security in long‑lived clusters, it would be better to pin explicit versions and, where feasible, allow overriding credentials via secrets or environment variables rather than inlined values.
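The namespace parameterization suggested above could be done with kustomize, `envsubst`, or plain `sed`. An illustrative sketch (the `render` helper and `${NAMESPACE}` token are hypothetical, not part of this PR):

```shell
# Render a manifest template, substituting the literal token ${NAMESPACE}.
# (envsubst '${NAMESPACE}' < "$1" would do the same with gettext installed.)
render() {
  sed "s/\${NAMESPACE}/${NAMESPACE}/g" "$1"
}

# Usage: NAMESPACE=ragas-test render manifests/minio.yaml | oc apply -f -
```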

@dmaniloff dmaniloff merged commit 95882df into trustyai-explainability:main Feb 26, 2026
4 checks passed
dmaniloff added a commit to dmaniloff/llama-stack-provider-ragas that referenced this pull request Mar 12, 2026
These files were superseded by tests/cluster-deployment/ in PR trustyai-explainability#59.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>