# `trustyai_fms`: out-of-tree remote safety provider for llama stack
This repo implements [FMS Guardrails Orchestrator](https://github.com/foundation-model-stack/fms-guardrails-orchestrator) together with community detectors:
- [Hugging Face content detectors](https://github.com/trustyai-explainability/guardrails-detectors)
## Running demos
To run the demos in full, you need to deploy the orchestrator and detectors on OpenShift, unless you already have access to the routes of the deployed services. If you do not have access to these routes, follow [Part A below](#part-a-openshift-setup-for-the-orchestrator-and-detectors) to set them up.
Subsequently, to create a local llama stack distribution, follow [Part B below](#part-b-setup-to-create-a-local-llama-stack-distribution-with-external-trustyai_fms-remote-safety-provider).
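If the services are already deployed, the demos only need their route URLs. As a sketch, the two environment variables used later in this README can be set directly (the hosts below are placeholders, not real routes):

```shell
# Placeholder hosts -- real values come from `oc get routes`, as shown later.
export FMS_ORCHESTRATOR_URL="https://orchestrator.apps.example.com"
export FMS_CHAT_URL="http://detector.apps.example.com"
echo "orchestrator at: $FMS_ORCHESTRATOR_URL"
```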
### Part A. OpenShift setup for the orchestrator and detectors
The demos require deploying the orchestrator and detectors on OpenShift.
The following operators are required in the OpenShift cluster:
__GPU__ -- follow [this guide](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/steps-overview.html) and install:
- Node Feature Discovery Operator (4.17.0-202505061137 provided by Red Hat):
  - create an instance of NodeFeatureDiscovery using the NodeFeatureDiscovery tab
- NVIDIA GPU Operator (25.3.0 provided by NVIDIA Corporation)
  - create an instance of ClusterPolicy using the ClusterPolicy tab
__Model Serving__:
- Red Hat OpenShift Service Mesh 2 (2.6.7-0 provided by Red Hat, Inc.)
- Red Hat OpenShift Serverless (1.35.1 provided by Red Hat)
__Authentication__:
- Red Hat - Authorino Operator (1.2.1 provided by Red Hat)
__AI Platform__:
```yaml
name: knative-serving
```
Once the above steps are completed:
1. Create a new project
```bash
oc new-project test
```
2. Apply the manifests in the `openshift-manifests/` directory to deploy the orchestrator and detectors.
```bash
oc apply -k openshift-manifests/
```
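The `-k` flag applies a kustomization directory, so `openshift-manifests/` must contain a `kustomization.yaml` listing the manifests. As a sketch of how such a directory is typically wired (the resource file names below are illustrative, not the repo's actual manifest names):

```yaml
# Hypothetical kustomization.yaml; the repo's actual file lists its own manifests.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - orchestrator.yaml
  - detectors.yaml
```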

```bash
source .venv/bin/activate
pip install -e .
```
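As background for the next step: a llama stack run configuration generally declares providers per API. A purely hypothetical sketch of how an external safety provider might appear (field names and values are illustrative; the authoritative versions are the files in `runtime_configurations/`):

```yaml
# Hypothetical sketch of the general shape; not a copy of the repo's files.
apis:
  - safety
providers:
  safety:
    - provider_id: trustyai_fms
      provider_type: remote::trustyai_fms
      config:
        orchestrator_url: ${env.FMS_ORCHESTRATOR_URL}
```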
6. Pick a runtime configuration file from `runtime_configurations/` and run the stack:
a. __for the orchestrator API__:
```bash
llama stack run runtime_configurations/orchestrator_api.yaml --image-type=venv
```
Note that you might need to export the following environment variables:
```bash
export FMS_ORCHESTRATOR_URL="https://$(oc get routes guardrails-orchestrator-http -o jsonpath='{.spec.host}')"
```

b. __for the detector API__:

```bash
llama stack run runtime_configurations/detector_api.yaml --image-type=venv
```
Note that you might need to export the following environment variables:
```bash
export FMS_CHAT_URL="http://$(oc get routes granite-2b-detector-route -o jsonpath='{.spec.host}')"
```
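The command substitution above resolves to the route's `.spec.host` field, which then composes into a plain URL. The same composition, simulated locally with a placeholder host:

```shell
# Simulating the substitution result with a placeholder host (not a real route).
host="granite-2b-detector-route-test.apps.example.com"
FMS_CHAT_URL="http://${host}"
echo "$FMS_CHAT_URL"
```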