[pytest]
testpaths = tests
markers =
# General
polarion: Store Polarion test ID
jira: Store Jira bug ID
skip_on_disconnected: Mark tests that can only be run in deployments with Internet access, i.e. not on disconnected clusters.
parallel: Mark tests that can run in parallel with pytest-xdist
# CI
smoke: Mark tests as smoke tests; highest-priority tests covering core functionality of the product. Aims to ensure that the build is stable enough for further testing.
sanity: <<DEPRECATION WARNING: to be superseded by tier1>> Mark tests as sanity tests. Aims to verify that specific functionality is working as expected.
tier1: Mark tests as tier1. High-priority tests.
tier2: Mark tests as tier2. Medium/low-priority positive tests.
tier3: Mark tests as tier3. Negative and destructive tests.
slow: Mark tests which take more than 10 minutes as slow tests.
pre_upgrade: Mark tests which should be run before upgrading the product.
post_upgrade: Mark tests which should be run after upgrading the product.
fuzzer: Mark tests that use fuzzing and are probably going to generate unanticipated failures.
ocp_interop: Interop testing with OpenShift.
downstream_only: Tests that are specific to downstream.
cluster_health: Tests that verify that the cluster is healthy enough to begin testing.
operator_health: Tests that verify that OpenDataHub/RHOAI operators are healthy and functioning correctly.
component_health: Tests that verify that OpenDataHub/RHOAI components are healthy and functioning correctly.
skip_must_gather: Tests that do not require must-gather for triaging.
install: Tests that are relevant for the install scenario. To be used with the upgrade marker, to indicate tests that are valid for both install and upgrade.
# Model server
modelmesh: Mark tests which are model mesh tests
rawdeployment: Mark tests which are raw deployment tests
minio: Mark tests which are using MinIO storage
tls: Mark tests which are testing TLS
metrics: Mark tests which are testing metrics
kueue: Mark tests which are testing Kueue
model_server_gpu: Mark tests which are testing model server with GPU resources
gpu: Mark tests which require GPU resources
vllm_nvidia_single_gpu: Mark tests which require a single GPU for vLLM NVIDIA deployment
vllm_nvidia_multi_gpu: Mark tests which require multiple GPUs for vLLM NVIDIA deployment
vllm_amd_gpu: Mark tests which require GPU resources for vLLM AMD deployment
multinode: Mark tests which require multiple nodes
keda: Mark tests which are testing KEDA scaling
llmd_cpu: Mark tests which are testing LLMD (LLM Deployment) with CPU resources
llmd_gpu: Mark tests which are testing LLMD (LLM Deployment) with GPU resources
model_validation: Mark tests which are testing model validation functionality
# Model Registry:
custom_namespace: Mark tests that are to be run with a custom namespace
# Component markers: used to document test suite ownership and to facilitate their execution
# Example: "pytest -m rag" will run all tests owned by the RAG team, which might include
# multiple subfolders at /tests/llama_stack
model_explainability
llama_stack
rag
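# Example marker invocations (illustrative only; marker expressions follow
# standard pytest "-m" syntax and can be combined with "and", "or", "not"):
#   pytest -m smoke                   -> run only smoke-marked tests
#   pytest -m "tier1 and not slow"    -> tier1 tests, excluding slow ones
#   pytest -m rag                     -> all tests owned by the RAG team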
addopts =
-s
-p no:logging
--basetemp=/tmp/pytest
--strict-markers
--show-progress
--tc-file=tests/global_config.py
--tc-format=python
--jira