@@ -37,7 +37,6 @@ include::modules/configuring-the-opentelemetry-exporter.adoc[leveloffset=+1]
include::modules/using-hugging-face-models-with-guardrails-orchestrator.adoc[leveloffset=+1]
include::modules/configuring-the-guardrails-detector-hugging-face-serving-runtime.adoc[leveloffset=+1]
include::modules/using-a-hugging-face-prompt-injection-detector-with-the-guardrails-orchestrator.adoc[leveloffset=+1]
include::modules/using-guardrails-orchestrator-with-llama-stack.adoc[leveloffset=+1]



@@ -23,7 +23,7 @@ This example demonstrates how to use the built-in link:https://github.com/trusty
ifdef::upstream[]
* You have installed {productname-long}, version 2.29 or later.
endif::[]
ifdef::upstream[]
ifndef::upstream[]
* You have installed {productname-long}, version 2.20 or later.
endif::[]

167 changes: 167 additions & 0 deletions modules/using-llama-stack-external-eval-provider-with-lm-evaluation-harness-in-TrustyAI.adoc
@@ -0,0 +1,167 @@
:_module-type: PROCEDURE

ifdef::context[:parent-context: {context}]
[id="using-llama-stack-external-eval-provider-with-lm-evaluation-harness-in-TrustyAI_{context}"]
= Using Llama Stack external eval provider with lm-evaluation-harness in TrustyAI
[role='_abstract']

This example demonstrates how to evaluate a language model in {productname-long} with the LMEval Llama Stack external eval provider, using Python in a workbench. To do this, you configure a Llama Stack server to use the LMEval eval provider, register a benchmark dataset, and run a benchmark evaluation job on a language model.

.Prerequisites

ifdef::upstream[]
* You have installed {productname-long}, version 2.29 or later.
endif::[]
ifndef::upstream[]
* You have installed {productname-long}, version 2.20 or later.
endif::[]

* You have cluster administrator privileges for your {productname-short} cluster.
* You have downloaded and installed the {productname-short} command-line interface (CLI). For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/{ocp-latest-version}/html/cli_tools/openshift-cli-oc[Installing the OpenShift CLI^].
* You have a large language model (LLM) for chat generation or text classification, or both, deployed in your namespace.
* You have installed TrustyAI Operator in your {OpenShift} cluster.
* You have set KServe to Raw Deployment mode in your cluster.
Comment on lines 19 to 27
🛠️ Refactor suggestion

Attribute usage: CLI and platform names are inconsistent.

The CLI is for {openshift-platform} (oc), not {productname-short}. Also keep attribute names consistent with other modules.

-* You have cluster administrator privileges for your {productname-short} cluster.
+* You have cluster administrator privileges for your {openshift-platform} cluster.
@@
-* You have downloaded and installed the {productname-short}  command-line interface (CLI). For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/{ocp-latest-version}/html/cli_tools/openshift-cli-oc[Installing the OpenShift CLI^].
+* You have downloaded and installed the {openshift-platform} command-line interface (CLI). For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/{ocp-latest-version}/html/cli_tools/openshift-cli-oc[Installing the OpenShift CLI^].
@@
-* You have installed TrustyAI Operator in your {OpenShift} cluster.
+* You have installed the TrustyAI Operator in your {openshift-platform} cluster.
🤖 Prompt for AI Agents
In modules/using-llama-stack-external-eval-provider-with-lm-evaluation-harness-in-TrustyAI.adoc around lines 19 to 27, the second bullet incorrectly uses the {productname-short} attribute for the OpenShift CLI; change that instance to {openshift-platform} (oc) so the CLI reference is accurate and matches other modules, and review the surrounding bullets to ensure attribute names are consistent across the file (replace any other {productname-short} uses that refer to the platform/CLI with {openshift-platform}).

.Procedure

. Configure a Python virtual environment for this tutorial in your `DataScienceCluster`:
+
[source,bash]
----
python3 -m venv .venv
source .venv/bin/activate
----
. Install the link:https://pypi.org/project/llama-stack/[Llama Stack provider] from the Python Package Index (PyPI):
+
[source,bash]
----
pip install llama-stack-provider-lmeval
----
. Configure the Llama Stack server. Set the variables that configure the runtime endpoint and namespace: the `VLLM_URL` value should be the `v1/completions` endpoint of your model route, and the `TRUSTYAI_LM_EVAL_NAMESPACE` value should be the namespace where your model is deployed. For example:
Comment on lines +39 to +45
🛠️ Refactor suggestion

Missing required packages for server and client.

You install only the provider. The server CLI and client library aren’t installed, causing later steps to fail.

-. Install the link:https://pypi.org/project/llama-stack/[Llama Stack provider] from the Python Package Index (PyPI):
+. Install the required packages from PyPI:
@@
---- 
-pip install llama-stack-provider-lmeval
+pip install \
+  llama-stack \
+  llama-stack-client \
+  llama-stack-provider-lmeval
---- 
🤖 Prompt for AI Agents
In modules/using-llama-stack-external-eval-provider-with-lm-evaluation-harness-in-TrustyAI.adoc around lines 39 to 45, the instructions only install the Llama Stack provider but omit required server and client packages; update the installation step to also install the llama-stack server CLI and client library by adding their package names to the pip install command (or separate pip install lines) and mention that both server and client must be installed before configuring VLLM_URL and TRUSTYAI_LM_EVAL_NAMESPACE so subsequent steps don't fail.

+
[source,bash]
----
export VLLM_URL=https://$(oc get $(oc get ksvc -o name | grep predictor) --template='{{.status.url}}')/v1/completions
export TRUSTYAI_LM_EVAL_NAMESPACE=$(oc project | cut -d '"' -f2)
----
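+
Optionally, print the resolved values as a quick sanity check; both must be non-empty before you start the Llama Stack server:
+
[source,bash]
----
# Echo the variables exported in the previous step.
echo "VLLM_URL=${VLLM_URL}"
echo "TRUSTYAI_LM_EVAL_NAMESPACE=${TRUSTYAI_LM_EVAL_NAMESPACE}"
----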
. Download the `providers.d` provider configuration directory and the `run.yaml` execution file:
+
[source, bash]
----
curl --create-dirs --output providers.d/remote/eval/trustyai_lmeval.yaml https://raw.githubusercontent.com/trustyai-explainability/llama-stack-provider-lmeval/refs/heads/main/providers.d/remote/eval/trustyai_lmeval.yaml

curl --create-dirs --output run.yaml https://raw.githubusercontent.com/trustyai-explainability/llama-stack-provider-lmeval/refs/heads/main/run.yaml
----
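+
Optionally, confirm that both files are in the locations used by the `--output` arguments above:
+
[source,bash]
----
ls run.yaml providers.d/remote/eval/trustyai_lmeval.yaml
----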
. Start the Llama Stack server in a virtual environment, which uses port `8321` by default:
+
[source,bash]
----
llama stack run run.yaml --image-type venv
----
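+
Optionally, verify from another terminal that the server is reachable before you run the client steps. This is a minimal sketch that assumes the default port and the `/v1/health` route exposed by recent Llama Stack releases; adjust it if your deployment differs:
+
[source,bash]
----
curl -s http://localhost:8321/v1/health
----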
. Create a Python script in a Jupyter workbench and import the following libraries and modules to interact with the server and run an evaluation:
+
[source,python]
----
import os
import subprocess
import logging
import time
import pprint
. Start the Llama Stack Python client to interact with the running Llama Stack server:
+
[source,python]
----
BASE_URL = "http://localhost:8321"

def create_http_client():
    from llama_stack_client import LlamaStackClient
    return LlamaStackClient(base_url=BASE_URL)

client = create_http_client()
----
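+
Optionally, confirm that the `trustyai_lmeval` eval provider from `providers.d` was loaded. This is a minimal sketch that assumes the `providers.list()` call available in recent `llama-stack-client` releases:
+
[source,python]
----
# Each entry reports the API it implements and its provider ID;
# trustyai_lmeval is expected to appear under the eval API.
for provider in client.providers.list():
    print(provider.api, provider.provider_id)
----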
+
Print a list of the currently available benchmarks:
+
[source,python]
----
benchmarks = client.benchmarks.list()

pprint.print(f"Available benchmarks: {benchmarks}")
----
⚠️ Potential issue

Bug: pprint.print does not exist.

Use pprint.pprint(...) or built-in print(...).

-pprint.print(f"Available benchmarks: {benchmarks}")
+pprint.pprint(f"Available benchmarks: {benchmarks}")
🤖 Prompt for AI Agents
In modules/using-llama-stack-external-eval-provider-with-lm-evaluation-harness-in-TrustyAI.adoc around lines 90 to 97, the snippet calls pprint.print(...) which does not exist; replace it with either pprint.pprint(benchmarks) or simply print(benchmarks), and if choosing pprint.pprint ensure the module is imported (import pprint) or reference via from pprint import pprint so the call resolves correctly.

. LMEval provides access to over 100 preconfigured evaluation datasets. Register the ARC-Easy benchmark, a dataset of grade-school level, multiple-choice science questions:
+
[source,python]
----
client.benchmarks.register(
    benchmark_id="trustyai_lmeval::arc_easy",
    dataset_id="trustyai_lmeval::arc_easy",
    scoring_functions=["string"],
    provider_benchmark_id="string",
    provider_id="trustyai_lmeval",
    metadata={
        "tokenizer": "google/flan-t5-small",
        "tokenized_requests": False,
    }
)
----
. Verify that the benchmark has been registered successfully:
+
[source,python]
----
benchmarks = client.benchmarks.list()
pprint.pprint(f"Available benchmarks: {benchmarks}")
----
. Run a benchmark evaluation job on your deployed model using the following input. Replace `phi-3` with the name of your deployed model:
+
[source,python]
----
job = client.eval.run_eval(
    benchmark_id="trustyai_lmeval::arc_easy",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "phi-3",
            "provider_id": "trustyai_lmeval",
            "sampling_params": {
                "temperature": 0.7,
                "top_p": 0.9,
                "max_tokens": 256
            },
        },
        "num_examples": 1000,
    },
)

print(f"Starting job '{job.job_id}'")
----
. Monitor the status of the evaluation job using the following code. The job runs asynchronously, so you can check its status periodically:
+
[source,python]
----
def get_job_status(job_id, benchmark_id):
    return client.eval.jobs.status(job_id=job_id, benchmark_id=benchmark_id)

while True:
    job = get_job_status(job_id=job.job_id, benchmark_id="trustyai_lmeval::arc_easy")
    print(job)

    if job.status in ['failed', 'completed']:
        print(f"Job ended with status: {job.status}")
        break

    time.sleep(20)
----
. Retrieve the evaluation job results once the job status reports back as `completed`:
+
[source,python]
----
pprint.pprint(client.eval.jobs.retrieve(job_id=job.job_id, benchmark_id="trustyai_lmeval::arc_easy").scores)
----
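+
If you only want the aggregated metrics rather than the full score object, the following sketch iterates over the returned scores. The `aggregated_results` attribute on each scoring result is an assumption based on recent llama-stack-client releases and may differ in your version:
+
[source,python]
----
results = client.eval.jobs.retrieve(job_id=job.job_id, benchmark_id="trustyai_lmeval::arc_easy")

# Print only the aggregated metrics (for ARC-Easy, typically accuracy values).
for name, score in results.scores.items():
    print(name, score.aggregated_results)
----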
21 changes: 21 additions & 0 deletions modules/using-llama-stack-with-trustyai.adoc
@@ -0,0 +1,21 @@
:_module-type: ASSEMBLY
💡 Verification agent

🧩 Analysis chain

Verify assembly location vs. type

File sits under modules/ but declares ASSEMBLY. Confirm that upstream assemblies include this path correctly (or relocate to assemblies/).


🏁 Script executed:

#!/usr/bin/env bash
# Find all occurrences of using-llama-stack-with-trustyai.adoc and who includes them
fd -a 'using-llama-stack-with-trustyai.adoc'
rg -n 'include::.*using-llama-stack-with-trustyai\.adoc' -C2

Length of output: 574


Sync file location with includes
Move modules/using-llama-stack-with-trustyai.adoc into the assemblies/ directory (or update the include in monitoring-data-science-models.adoc:34 to reference modules/using-llama-stack-with-trustyai.adoc).

🤖 Prompt for AI Agents
In modules/using-llama-stack-with-trustyai.adoc around line 1, the file location doesn't match the include path used from monitoring-data-science-models.adoc:34; either move this file into the assemblies/ directory so the existing include resolves, or update the include statement in monitoring-data-science-models.adoc (line 34) to reference modules/using-llama-stack-with-trustyai.adoc; ensure the include path is correct relative to the including file and update any build/CI references if necessary.


ifdef::context[:parent-context: {context}]
[id="modules/using-llama-stack-with-trustyai_{context}"]
= Using llama stack with Trusty AI

This section contains tutorials for working with Llama Stack in Trusty AI. These tutorials demonstrate how to use various Llama Stack components and providers to evaluate and work with language models.

The following sections describe how to work with llama stack and provide example use cases:

* Using the Llama Stack external eval provider with lm-evaluation-harness in Trusty AI
* Running custom evaluations with LMEval Llama Stack external eval provider
* Use the trustyai-fms Guardrails Orchestrator with Llama-stack
include::../modules/using-guardrails-orchestrator-with-llama-stack.adoc[leveloffset=+1]



ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
3 changes: 3 additions & 0 deletions monitoring-data-science-models.adoc
@@ -31,4 +31,7 @@ include::assemblies/evaluating-large-language-models.adoc[leveloffset=+1]

include::assemblies/configuring-the-guardrails-orchestrator-service.adoc[leveloffset=+1]

include::assemblies/using-llama-stack-with-trustyai.adoc[leveloffset=+1]

// currently bias-monitoring is only in ODH
Comment on lines +34 to +36
💡 Verification agent

🧩 Analysis chain

Verify include target exists and path is correct (assemblies vs modules).

The include points to assemblies/using-llama-stack-with-trustyai.adoc, but this PR shows modules/using-llama-stack-with-trustyai.adoc. If the assemblies/ file doesn’t exist, the build will fail. Update the path or add the assembly file accordingly.

Run to confirm presence and pick the correct path:


🏁 Script executed:

#!/bin/bash
# List any matching files so we can choose the right include path.
fd -a 'using-llama-stack-with-trustyai.adoc' -t f -H

Length of output: 116


Fix include path to modules/using-llama-stack-with-trustyai.adoc
assemblies/using-llama-stack-with-trustyai.adoc doesn’t exist; update the include in monitoring-data-science-models.adoc (line 34) to:

include::modules/using-llama-stack-with-trustyai.adoc[leveloffset=+1]
🤖 Prompt for AI Agents
In monitoring-data-science-models.adoc around lines 34 to 36 the include path points to a non-existent assemblies/using-llama-stack-with-trustyai.adoc; change the include to reference the correct file under modules by replacing that include with modules/using-llama-stack-with-trustyai.adoc[leveloffset=+1] so the document pulls the correct module file.

include::assemblies/bias-monitoring-tutorial.adoc[leveloffset=+1]