Commit f3dc97a

wlai2seanlaii authored and committed
[doc] Add KubeRay ecosystem
Signed-off-by: seanlaii <[email protected]>
1 parent 1e97e9c commit f3dc97a

File tree

1 file changed

+124
-0
lines changed

+++
title = "KubeRay on Volcano"

date = 2025-12-22
lastmod = 2025-12-22

draft = false # Is this a draft? true/false
toc = true # Show table of contents? true/false
type = "docs" # Do not modify.

# Add menu entry to sidebar.
linktitle = "KubeRay"
[menu.docs]
  parent = "ecosystem"
  weight = 9

+++

### KubeRay Introduction

[Ray](https://docs.ray.io/en/latest/ray-overview/getting-started.html) is a unified distributed computing framework designed for AI/ML applications. Ray provides:

- **Distributed Training**: Scale machine learning workloads from a single machine to thousands of nodes
- **Hyperparameter Tuning**: Run parallel experiments with Ray Tune for efficient model optimization
- **Distributed Data Processing**: Process large datasets with Ray Data for batch inference and data preprocessing
- **Reinforcement Learning**: Train RL models at scale with Ray RLlib
- **Serving**: Deploy and scale ML models in production with Ray Serve
- **General Purpose Distributed Computing**: Build any distributed application with Ray Core APIs

[KubeRay](https://docs.ray.io/en/latest/cluster/kubernetes/index.html) is an open-source Kubernetes operator that simplifies running Ray on Kubernetes. It provides automated deployment, scaling, and management of Ray clusters through Kubernetes-native tools and APIs.

### KubeRay Integration with Volcano

As of KubeRay v1.5.1, both RayJob and RayCluster resources integrate with Volcano to support gang scheduling and network topology-aware scheduling. This integration enables more efficient resource allocation and improved performance for distributed AI/ML workloads.

#### Supported Labels

To configure RayJob and RayCluster resources for Volcano scheduling, you can use the following labels in the metadata section:

| Label | Description | Required |
|-------|-------------|----------|
| `ray.io/priority-class-name` | Assigns a [Kubernetes](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) priority class for pod scheduling | No |
| `volcano.sh/queue-name` | Specifies the Volcano queue for resource submission | No |
| `volcano.sh/network-topology-mode` | Configures network topology-aware scheduling mode | No |
| `volcano.sh/network-topology-highest-tier-allowed` | Sets the highest network tier allowed for scheduling | No |

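As an illustration of how these labels tie into Volcano, the sketch below creates a placeholder `Queue` that a `volcano.sh/queue-name` label could point to, and shows (as a comment) the metadata a RayCluster or RayJob might carry. The queue name, capacities, priority class, and topology values are illustrative assumptions, not values taken from the KubeRay samples.

```bash
# Create a small Volcano queue for Ray workloads (name and capacity are illustrative).
$ cat <<EOF | kubectl apply -f -
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: ray-demo-queue
spec:
  weight: 1
  capability:
    cpu: 4
    memory: 6Gi
EOF

# A RayCluster or RayJob would then reference the queue (and, optionally, the
# other labels from the table) in its metadata, for example:
#
#   metadata:
#     labels:
#       volcano.sh/queue-name: ray-demo-queue
#       ray.io/priority-class-name: high-priority              # optional
#       volcano.sh/network-topology-mode: hard                 # optional
#       volcano.sh/network-topology-highest-tier-allowed: "1"  # optional
```

When no `volcano.sh/queue-name` label is set, the workload falls back to Volcano's `default` queue.
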
Below are setup examples with detailed explanations. For comprehensive configuration options, please refer to the [KubeRay Volcano Scheduler Documentation](https://docs.ray.io/en/latest/cluster/kubernetes/k8s-ecosystem/volcano.html#kuberay-integration-with-volcano).

#### Autoscaling Behavior

KubeRay's integration with Volcano handles gang scheduling differently based on whether autoscaling is enabled:

- **When autoscaling is enabled**: `minReplicas` is used for gang scheduling
- **When autoscaling is disabled**: The desired replica count is used for gang scheduling

This ensures that the gang scheduling constraints are properly maintained while allowing for flexible scaling behaviors based on your workload requirements.

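One way to see which count was applied is to inspect the PodGroup that KubeRay creates for the workload: its `spec.minMember` records the gang size that Volcano waits for before it schedules any pods. The PodGroup name below is a placeholder; use the name reported by the first command.

```bash
# List the Volcano PodGroups created for Ray workloads in the current namespace.
$ kubectl get podgroup

# Show the recorded gang size for a specific PodGroup (name is a placeholder).
$ kubectl get podgroup <podgroup-name> -o jsonpath='{.spec.minMember}{"\n"}'
```
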
### Setup Guide

#### Prerequisites

##### 1. Create a Kubernetes Cluster
```bash
$ kind create cluster
```

##### 2. Install Volcano
Follow the instructions in the [Volcano installation guide](https://volcano.sh/en/docs/installation/) to install Volcano on your Kubernetes cluster.

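For a quick local setup, one option from that guide is to apply the all-in-one installer manifest; the commands below are a sketch that uses the development manifest from the `master` branch, so prefer a released version that matches your environment.

```bash
# Install Volcano from the installer manifest (pick a released tag for production use).
$ kubectl apply -f https://raw.githubusercontent.com/volcano-sh/volcano/master/installer/volcano-development.yaml

# Wait until the Volcano components are running.
$ kubectl get pods -n volcano-system
```
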
##### 3. Install KubeRay Operator
Deploy the KubeRay operator with Volcano batch scheduling enabled. Setting the Helm value `batchScheduler.name=volcano` configures the operator to start with the `--batch-scheduler=volcano` flag:
```bash
# Add the KubeRay Helm chart repository if it is not already configured.
$ helm repo add kuberay https://ray-project.github.io/kuberay-helm/
$ helm install kuberay-operator kuberay/kuberay-operator --version 1.5.1 --set batchScheduler.name=volcano
```

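Optionally, confirm the operator came up before deploying workloads; the deployment name below assumes the default Helm release name used above.

```bash
# The kuberay-operator deployment should report 1/1 ready.
$ kubectl get deployment kuberay-operator
```
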
#### Example Deployments

##### RayCluster Example

Deploy a RayCluster with Volcano scheduling:

```bash
# Download the sample RayCluster configuration with Volcano labels
$ curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.5.1/ray-operator/config/samples/ray-cluster.volcano-scheduler.yaml

# Apply the configuration
$ kubectl apply -f ray-cluster.volcano-scheduler.yaml

# Verify the RayCluster deployment
$ kubectl get pod -l ray.io/cluster=test-cluster-0

# Expected output:
# NAME                        READY   STATUS    RESTARTS   AGE
# test-cluster-0-head-jj9bg   1/1     Running   0          36s
```

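To confirm the pods were handed to Volcano rather than the default scheduler, you can also check the `schedulerName` recorded on the cluster's pods; this reuses the `ray.io/cluster` label from the verification step above and should print `volcano` for each pod.

```bash
# Each pod of a Volcano-scheduled RayCluster reports "volcano" as its scheduler.
$ kubectl get pod -l ray.io/cluster=test-cluster-0 -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.schedulerName}{"\n"}{end}'
```
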
##### RayJob Example

RayJob support with Volcano is available since KubeRay v1.5.1:

```bash
# Download the sample RayJob configuration with Volcano queue integration
$ curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.5.1/ray-operator/config/samples/ray-job.volcano-scheduler-queue.yaml

# Apply the configuration
$ kubectl apply -f ray-job.volcano-scheduler-queue.yaml

# Monitor the job execution
$ kubectl get pod

# Expected output:
# NAME                                             READY   STATUS      RESTARTS   AGE
# rayjob-sample-0-k449j-head-rlgxj                 1/1     Running     0          93s
# rayjob-sample-0-k449j-small-group-worker-c6dt8   1/1     Running     0          93s
# rayjob-sample-0-k449j-small-group-worker-cq6xn   1/1     Running     0          93s
# rayjob-sample-0-qmm8s                            0/1     Completed   0          32s
```

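Beyond the pod view, the RayJob resource itself reports job progress, and the sample can be removed once it finishes; both commands below operate on the manifest downloaded above.

```bash
# Check the status reported on the RayJob resource.
$ kubectl get rayjob

# Clean up the sample resources when you are done.
$ kubectl delete -f ray-job.volcano-scheduler-queue.yaml
```
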
### Learn More

For detailed configuration options, advanced scheduling strategies, network topology configurations, and best practices, visit the [KubeRay Volcano Scheduler Documentation](https://docs.ray.io/en/latest/cluster/kubernetes/k8s-ecosystem/volcano.html#kuberay-integration-with-volcano).
