Commit c5a8894

Merge branch 'main' into source_label_mcp

2 parents 5230d95 + b28f0ed commit c5a8894
File tree

7 files changed: +600 −7 lines

tests/cluster_health/README.md

Lines changed: 56 additions & 0 deletions
# Cluster Health Tests

This directory contains foundational health check tests for OpenDataHub/RHOAI clusters. These tests serve as prerequisites to ensure the cluster and operators are in a healthy state before running more complex integration tests.

## Directory Structure

```text
cluster_health/
├── test_cluster_health.py    # Cluster node health validation
└── test_operator_health.py   # Operator and pod health validation
```

### Current Test Suites

- **`test_cluster_health.py`** - Validates that all cluster nodes are healthy and schedulable
- **`test_operator_health.py`** - Validates that the DSCInitialization and DataScienceCluster resources are ready and that all pods in the operator/application namespaces are running

## Test Markers

Tests use the following markers, defined in `pytest.ini`:

- `@pytest.mark.cluster_health` - Tests that verify the cluster is healthy enough to begin testing
- `@pytest.mark.operator_health` - Tests that verify OpenDataHub/RHOAI operators are healthy and functioning correctly

## Test Details

### Cluster Node Health (`test_cluster_health.py`)

- **`test_cluster_node_healthy`** - Asserts that all cluster nodes have the `KubeletReady: True` condition and are schedulable (not cordoned)
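The node check above boils down to a predicate over each node's conditions and spec. A plain-Python sketch of that logic (illustrative only; `node_is_healthy` is a hypothetical helper, not the repo's actual test code), assuming nodes shaped like Kubernetes API objects:

```python
def node_is_healthy(node: dict) -> bool:
    """True when the node's Ready condition reports KubeletReady and it is schedulable."""
    conditions = node.get("status", {}).get("conditions", [])
    kubelet_ready = any(
        c.get("type") == "Ready"
        and c.get("reason") == "KubeletReady"
        and c.get("status") == "True"
        for c in conditions
    )
    # Cordoning a node sets spec.unschedulable to True
    schedulable = not node.get("spec", {}).get("unschedulable", False)
    return kubelet_ready and schedulable


ready = {"type": "Ready", "reason": "KubeletReady", "status": "True"}
healthy = {"status": {"conditions": [ready]}, "spec": {}}
cordoned = {"status": {"conditions": [ready]}, "spec": {"unschedulable": True}}
print(node_is_healthy(healthy), node_is_healthy(cordoned))  # → True False
```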
### Operator Health (`test_operator_health.py`)

- **`test_data_science_cluster_initialization_healthy`** - Validates that the DSCInitialization resource reaches `READY` status (120s timeout)
- **`test_data_science_cluster_healthy`** - Validates that the DataScienceCluster resource reaches `READY` status (120s timeout)
- **`test_pods_cluster_healthy`** - Validates that all pods in the operator and application namespaces reach Running/Completed state (180s timeout). Parametrized across `operator_namespace` and `applications_namespace` from the global config
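The readiness checks above all follow a poll-with-timeout pattern. A generic sketch of such a wait loop (a hypothetical `wait_for` helper, not the repo's actual implementation):

```python
import time


class TimeoutExpiredError(Exception):
    pass


def wait_for(condition, timeout: float, interval: float = 1.0):
    """Poll `condition` until it returns a truthy value or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutExpiredError(f"condition not met within {timeout}s")


# Example: waits until a stubbed resource reports READY
states = iter(["PENDING", "PENDING", "READY"])
status = wait_for(lambda: next(states) == "READY", timeout=120, interval=0.01)
print(status)  # → True
```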
## Running Tests

### Run All Cluster Health Tests

```bash
uv run pytest tests/cluster_health/
```

### Run by Marker

```bash
# Run cluster node health tests
uv run pytest -m cluster_health

# Run operator health tests
uv run pytest -m operator_health

# Run both
uv run pytest -m "cluster_health or operator_health"
```

tests/fixtures/README.md

Lines changed: 74 additions & 0 deletions
# Shared Test Fixtures

This directory contains shared pytest fixtures that are used across multiple test modules. These fixtures are automatically loaded via pytest's plugin mechanism, registered in `/tests/conftest.py`.

## Directory Structure

```text
fixtures/
├── files.py        # File storage provider fixtures
├── guardrails.py   # Guardrails orchestrator infrastructure fixtures
├── inference.py    # Inference service and serving runtime fixtures
├── trustyai.py     # TrustyAI operator and DSC configuration fixtures
└── vector_io.py    # Vector database provider deployment fixtures
```

### Fixture Modules

- **`files.py`** - Factory fixture for configuring file storage providers (local, S3/MinIO)
- **`guardrails.py`** - Fixtures for deploying and configuring the Guardrails Orchestrator, including pods, routes, health checks, and gateway configuration
- **`inference.py`** - Fixtures for vLLM CPU serving runtimes, InferenceServices (Qwen), the LLM-d inference simulator, and KServe controller configuration
- **`trustyai.py`** - Fixtures for TrustyAI operator deployment and DataScienceCluster LMEval configuration
- **`vector_io.py`** - Factory fixture for deploying vector database providers (Milvus, Faiss, PGVector, Qdrant) with their backing services and configuration

## Registration

All fixture modules are registered as pytest plugins in `/tests/conftest.py`:

```python
pytest_plugins = [
    "tests.fixtures.inference",
    "tests.fixtures.guardrails",
    "tests.fixtures.trustyai",
    "tests.fixtures.vector_io",
    "tests.fixtures.files",
]
```

## Usage

Fixtures are automatically available to all tests. Factory fixtures accept parameters via `pytest.mark.parametrize` with `indirect=True`.

### Vector I/O Provider Example

```python
@pytest.mark.parametrize(
    "vector_io_provider_deployment_config_factory",
    ["milvus", "pgvector", "qdrant-remote"],
    indirect=True,
)
def test_with_vector_db(vector_io_provider_deployment_config_factory):
    # Fixture deploys the provider and returns env var configuration
    ...
```

### Supported Vector I/O Providers

| Provider        | Type   | Description                                 |
| --------------- | ------ | ------------------------------------------- |
| `milvus`        | Local  | In-memory Milvus (no external dependencies) |
| `milvus-remote` | Remote | Milvus standalone with etcd backend         |
| `faiss`         | Local  | Facebook AI Similarity Search (in-memory)   |
| `pgvector`      | Local  | PostgreSQL with pgvector extension          |
| `qdrant-remote` | Remote | Qdrant vector database                      |

### Supported File Providers

| Provider | Description                        |
| -------- | ---------------------------------- |
| `local`  | Local filesystem storage (default) |
| `s3`     | S3/MinIO remote object storage     |

## Adding New Fixtures

When adding shared fixtures, place them in the appropriate module file (or create a new one), and register the new module in `/tests/conftest.py` under `pytest_plugins`. Follow the project's fixture conventions: use noun-based names, narrowest appropriate scope, and context managers for resource lifecycle.
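As an illustration of those conventions, a minimal sketch of a yield-style fixture wrapping a context-managed resource (`FakeResource`, `deployed_resource`, and `vector_db` are hypothetical names for illustration, not fixtures from this repo):

```python
from contextlib import contextmanager

import pytest


class FakeResource:
    """Stand-in for a deployable resource (hypothetical, for illustration only)."""

    def __init__(self, name: str):
        self.name = name
        self.deployed = False

    def deploy(self) -> "FakeResource":
        self.deployed = True
        return self

    def clean_up(self) -> None:
        self.deployed = False


@contextmanager
def deployed_resource(name: str):
    # The context manager owns the resource lifecycle: deploy on enter, clean up on exit
    resource = FakeResource(name).deploy()
    try:
        yield resource
    finally:
        resource.clean_up()


@pytest.fixture(scope="module")  # narrowest scope that still covers all users
def vector_db(request):
    # Noun-based fixture name; provider selected via indirect parametrization
    with deployed_resource(getattr(request, "param", "milvus")) as resource:
        yield resource
```

The `yield` inside the `with` block guarantees teardown runs even when a test fails, which is the point of pairing fixtures with context managers.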

tests/llama_stack/README.md

Lines changed: 5 additions & 7 deletions
@@ -88,23 +88,23 @@ LLS_FILES_S3_AUTO_CREATE_BUCKET=true # Optional
 To run all tests in the `/tests/llama_stack` directory:

 ```bash
-pytest tests/llama_stack/
+uv run pytest tests/llama_stack/
 ```

 ### Run Tests by Component/Team

 To run tests for a specific team (e.g. rag):

 ```bash
-pytest -m rag tests/llama_stack/
+uv run pytest -m rag tests/llama_stack/
 ```

 ### Run Tests for a llama-stack API

 To run tests for a specific API (e.g., vector_io):

 ```bash
-pytest tests/llama_stack/vector_io
+uv run pytest tests/llama_stack/vector_io
 ```

 ### Run Tests with Additional Markers
@@ -113,10 +113,10 @@ You can combine team markers with other pytest markers:

 ```bash
 # Run only smoke tests for rag
-pytest -m "rag and smoke" tests/llama_stack/
+uv run pytest -m "rag and smoke" tests/llama_stack/

 # Run all rag tests except the ones requiring a GPU
-pytest -m "rag and not gpu" tests/llama_stack/
+uv run pytest -m "rag and not gpu" tests/llama_stack/
 ```

 ## Related Testing Repositories

@@ -145,5 +145,3 @@ For information about the APIs and Providers available in the Red Hat LlamaStack
 ## Additional Resources

 - [Llama Stack Documentation](https://llamastack.github.io/docs/)
-- [OpenDataHub Documentation](https://opendatahub.io/docs)
-- [OpenShift AI Documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed)
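The `-m` expressions in the hunks above (`"rag and smoke"`, `"rag and not gpu"`) select tests whose marker set satisfies a boolean expression over marker names. A rough plain-Python illustration of that selection semantics (simplified; pytest's actual `-m` parser supports more, and the test names here are made up):

```python
# Hypothetical test names and marker sets, for illustration only
tests = {
    "test_rag_ingest": {"rag", "smoke"},
    "test_rag_gpu_embed": {"rag", "gpu"},
    "test_eval_basic": {"lm_eval"},
}

all_markers = {name for markers in tests.values() for name in markers}


def matches(markers: set, expression: str) -> bool:
    """Evaluate a simple and/or/not marker expression against one test's markers."""
    # Bind every known marker name to True/False, then reuse Python's own operators
    names = {name: (name in markers) for name in all_markers}
    return bool(eval(expression, {"__builtins__": {}}, names))


selected = [name for name, m in tests.items() if matches(m, "rag and not gpu")]
print(selected)  # → ['test_rag_ingest']
```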
Lines changed: 123 additions & 0 deletions
# Model Explainability Tests

This directory contains tests for AI/ML model explainability, trustworthiness, evaluation, and safety components in OpenDataHub/RHOAI. It covers TrustyAI Service, Guardrails Orchestrator, LM Eval, EvalHub, and the TrustyAI Operator.

## Directory Structure

```text
model_explainability/
├── conftest.py                    # Shared fixtures (PVC, TrustyAI configmap)
├── utils.py                       # Image validation utilities
│
├── evalhub/                       # EvalHub service tests
│   ├── conftest.py
│   ├── constants.py
│   ├── test_evalhub_health.py     # Health endpoint validation
│   └── utils.py
│
├── guardrails/                    # AI Safety Guardrails tests
│   ├── conftest.py                # Detectors, Tempo, OpenTelemetry fixtures
│   ├── constants.py
│   ├── test_guardrails.py         # Built-in, HuggingFace, autoconfig tests
│   ├── upgrade/
│   │   └── test_guardrails_upgrade.py  # Pre/post-upgrade tests
│   └── utils.py
│
├── lm_eval/                       # Language Model Evaluation tests
│   ├── conftest.py                # LMEvalJob fixtures (HF, local, vLLM, S3, OCI)
│   ├── constants.py               # Task definitions (UNITXT, LLMAAJ)
│   ├── data/                      # Test data files
│   ├── test_lm_eval.py            # HuggingFace, offline, vLLM, S3 tests
│   └── utils.py
│
├── trustyai_operator/             # TrustyAI Operator validation
│   ├── test_trustyai_operator.py  # Operator image validation
│   └── utils.py
│
└── trustyai_service/              # TrustyAI Service core tests
    ├── conftest.py                # MariaDB, KServe, ISVC fixtures
    ├── constants.py               # Storage configs, model formats
    ├── trustyai_service_utils.py  # TrustyAI REST client, metrics validation
    ├── utils.py                   # Service creation, RBAC, MariaDB utilities
    │
    ├── drift/                     # Drift detection tests
    │   ├── model_data/            # Test data batches
    │   └── test_drift.py          # Meanshift, KSTest, ApproxKSTest, FourierMMD
    │
    ├── fairness/                  # Fairness metrics tests
    │   ├── conftest.py
    │   ├── model_data/            # Fairness test data
    │   └── test_fairness.py       # SPD, DIR fairness metrics
    │
    ├── service/                   # Core service tests
    │   ├── conftest.py
    │   ├── test_trustyai_service.py  # Image validation, DB migration, DB cert tests
    │   ├── utils.py
    │   └── multi_ns/              # Multi-namespace tests
    │       └── test_trustyai_service_multi_ns.py
    │
    └── upgrade/                   # Upgrade compatibility tests
        └── test_trustyai_service_upgrade.py
```

### Current Test Suites

- **`evalhub/`** - EvalHub service health endpoint validation via kube-rbac-proxy
- **`guardrails/`** - Guardrails Orchestrator tests with built-in regex detectors (PII), HuggingFace detectors (prompt injection, HAP), auto-configuration, and gateway routing. Includes OpenTelemetry/Tempo trace integration
- **`lm_eval/`** - Language Model Evaluation tests covering HuggingFace models, local/offline tasks, vLLM integration, S3 storage, and OCI registry artifacts
- **`trustyai_operator/`** - TrustyAI operator container image validation (SHA256 digests, CSV relatedImages)
- **`trustyai_service/`** - TrustyAI Service tests for drift detection (4 metrics), fairness metrics (SPD, DIR), database migration, multi-namespace support, and upgrade scenarios. Tests run against both PVC and database storage backends
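As a sense of what a KSTest-style drift metric (listed in the drift suite above) computes — the maximum gap between the empirical CDFs of reference and live data — here is a from-scratch sketch, illustrative only and not TrustyAI's implementation:

```python
def ks_statistic(reference: list, observed: list) -> float:
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""

    def ecdf(sample, x):
        # Fraction of the sample at or below x
        return sum(v <= x for v in sample) / len(sample)

    points = sorted(set(reference) | set(observed))
    return max(abs(ecdf(reference, x) - ecdf(observed, x)) for x in points)


baseline = [0.1, 0.2, 0.3, 0.4, 0.5]
same = [0.1, 0.2, 0.3, 0.4, 0.5]
drifted = [0.6, 0.7, 0.8, 0.9, 1.0]

print(ks_statistic(baseline, same))     # → 0.0
print(ks_statistic(baseline, drifted))  # → 1.0
```

A drift test then compares the statistic against a threshold: identical distributions score near 0, disjoint ones near 1.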
## Test Markers

```python
@pytest.mark.model_explainability   # Module-level marker
@pytest.mark.smoke                  # Critical smoke tests
@pytest.mark.tier1                  # Tier 1 tests
@pytest.mark.tier2                  # Tier 2 tests
@pytest.mark.pre_upgrade            # Pre-upgrade tests
@pytest.mark.post_upgrade           # Post-upgrade tests
@pytest.mark.rawdeployment          # KServe raw deployment mode
@pytest.mark.skip_on_disconnected   # Requires internet connectivity
```

## Running Tests

### Run All Model Explainability Tests

```bash
uv run pytest tests/model_explainability/
```

### Run Tests by Component

```bash
# Run TrustyAI Service tests
uv run pytest tests/model_explainability/trustyai_service/

# Run Guardrails tests
uv run pytest tests/model_explainability/guardrails/

# Run LM Eval tests
uv run pytest tests/model_explainability/lm_eval/

# Run EvalHub tests
uv run pytest tests/model_explainability/evalhub/
```

### Run Tests with Markers

```bash
# Run only smoke tests
uv run pytest -m "model_explainability and smoke" tests/model_explainability/

# Run drift detection tests
uv run pytest tests/model_explainability/trustyai_service/drift/

# Run fairness tests
uv run pytest tests/model_explainability/trustyai_service/fairness/
```

## Additional Resources

- [TrustyAI Documentation](https://github.com/trustyai-explainability)
