Commit 7e0416c — Merge pull request #433 from seanlaii/kuberay-eco
[doc] Add KubeRay ecosystem
2 parents 1e97e9c + 2a1c482

2 files changed: +438 −0 lines

content/en/docs/ray_on_volcano.md (219 additions, 0 deletions)
+++
title = "Ray on Volcano"

date = 2025-12-22
lastmod = 2025-12-22

draft = false  # Is this a draft? true/false
toc = true  # Show table of contents? true/false
type = "docs"  # Do not modify.

# Add menu entry to sidebar.
linktitle = "Ray"
[menu.docs]
  parent = "ecosystem"
  weight = 9
+++
### Ray Introduction

[Ray](https://docs.ray.io/en/latest/ray-overview/getting-started.html) is a unified distributed computing framework designed for AI/ML applications. Ray provides:

- **Distributed Training**: Scale machine learning workloads from a single machine to thousands of nodes
- **Hyperparameter Tuning**: Run parallel experiments with Ray Tune for efficient model optimization
- **Distributed Data Processing**: Process large datasets with Ray Data for batch inference and data preprocessing
- **Reinforcement Learning**: Train RL models at scale with Ray RLlib
- **Serving**: Deploy and scale ML models in production with Ray Serve
- **General-Purpose Distributed Computing**: Build any distributed application with Ray Core APIs
32+
### Running Ray on Volcano
33+
34+
There are two approaches to deploy Ray clusters on Volcano:
35+
36+
1. **KubeRay Operator Approach**: Use the KubeRay operator with Volcano scheduler integration for automated deployment and management of `RayCluster`, `RayJob` and `RayService` resources
37+
2. **Volcano Job (vcjob) Approach**: Deploy Ray clusters directly using Volcano Job with the Ray plugin
38+
39+
Both approaches leverage Volcano's powerful scheduling capabilities including gang scheduling and network topology-aware scheduling for optimal resource allocation.
40+
### Method 1: Using KubeRay Operator

[KubeRay](https://docs.ray.io/en/latest/cluster/kubernetes/index.html) is an open-source Kubernetes operator that simplifies running Ray on Kubernetes. It provides automated deployment, scaling, and management of Ray clusters through Kubernetes-native tools and APIs.

#### KubeRay Integration with Volcano

Starting with KubeRay v1.5.1, all KubeRay resources (RayJob, RayCluster, and RayService) support Volcano's advanced scheduling features, including gang scheduling and network topology-aware scheduling. This integration optimizes resource allocation and enhances performance for distributed AI/ML workloads.
#### Supported Labels

To configure RayJob and RayCluster resources with Volcano scheduling, you can set the following labels in the metadata section:

| Label | Description | Required |
|-------|-------------|----------|
| `ray.io/priority-class-name` | Assigns a Kubernetes [priority class](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) for pod scheduling | No |
| `volcano.sh/queue-name` | Specifies the Volcano queue for resource submission | No |
| `volcano.sh/network-topology-mode` | Configures the network topology-aware scheduling mode | No |
| `volcano.sh/network-topology-highest-tier-allowed` | Sets the highest network tier allowed for scheduling | No |
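For instance, these labels can be combined in a RayCluster's metadata. The following is a minimal sketch; the queue name, priority class, and topology values are placeholders and must exist in your cluster:

```yaml
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: test-cluster-0
  labels:
    ray.io/priority-class-name: high-priority           # assumes this PriorityClass exists
    volcano.sh/queue-name: kuberay-test-queue           # placeholder queue name
    volcano.sh/network-topology-mode: hard              # example mode value
    volcano.sh/network-topology-highest-tier-allowed: "1"
spec:
  # ... head group and worker group specs ...
```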
#### Autoscaling Behavior

KubeRay's integration with Volcano handles gang scheduling differently depending on whether autoscaling is enabled:

- **When autoscaling is enabled**: `minReplicas` is used for gang scheduling
- **When autoscaling is disabled**: The desired replica count (`replicas`) is used for gang scheduling

This ensures that gang scheduling constraints are properly maintained while allowing flexible scaling behavior based on your workload requirements.
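As a sketch, in a RayCluster with autoscaling enabled, gang scheduling would reserve capacity for `minReplicas` rather than `replicas`. The field names below follow the RayCluster CRD; the values are illustrative:

```yaml
spec:
  enableInTreeAutoscaling: true
  workerGroupSpecs:
    - groupName: small-group
      replicas: 2       # desired count; used for gang scheduling when autoscaling is disabled
      minReplicas: 1    # used for gang scheduling when autoscaling is enabled
      maxReplicas: 5
```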
Below are setup examples with detailed explanations. For comprehensive configuration options, refer to the [KubeRay Volcano Scheduler Documentation](https://docs.ray.io/en/latest/cluster/kubernetes/k8s-ecosystem/volcano.html#kuberay-integration-with-volcano).
#### Setup Requirements

Before deploying Ray with KubeRay, ensure you have:

- A running Kubernetes cluster with Volcano installed
- The KubeRay operator installed with Volcano batch scheduler support:

```bash
# Add the KubeRay Helm repository (if not already added)
$ helm repo add kuberay https://ray-project.github.io/kuberay-helm/

# Install the KubeRay operator with Volcano integration
$ helm install kuberay-operator kuberay/kuberay-operator --version 1.5.1 --set batchScheduler.name=volcano
```
#### Example Deployments

##### RayCluster Example

Deploy a RayCluster with Volcano scheduling:

```bash
# Download the sample RayCluster configuration with Volcano labels
$ curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.5.1/ray-operator/config/samples/ray-cluster.volcano-scheduler.yaml

# Apply the configuration
$ kubectl apply -f ray-cluster.volcano-scheduler.yaml

# Verify the RayCluster deployment
$ kubectl get pod -l ray.io/cluster=test-cluster-0

# Expected output:
# NAME                        READY   STATUS    RESTARTS   AGE
# test-cluster-0-head-jj9bg   1/1     Running   0          36s
```
##### RayJob Example

RayJob support with Volcano is available since KubeRay v1.5.1:

```bash
# Download the sample RayJob configuration with Volcano queue integration
$ curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.5.1/ray-operator/config/samples/ray-job.volcano-scheduler-queue.yaml

# Apply the configuration
$ kubectl apply -f ray-job.volcano-scheduler-queue.yaml

# Monitor the job execution
$ kubectl get pod

# Expected output:
# NAME                                             READY   STATUS      RESTARTS   AGE
# rayjob-sample-0-k449j-head-rlgxj                 1/1     Running     0          93s
# rayjob-sample-0-k449j-small-group-worker-c6dt8   1/1     Running     0          93s
# rayjob-sample-0-k449j-small-group-worker-cq6xn   1/1     Running     0          93s
# rayjob-sample-0-qmm8s                            0/1     Completed   0          32s
```
### Method 2: Using Volcano Job with the Ray Plugin

Volcano provides a native Ray plugin that simplifies deploying Ray clusters directly through Volcano Jobs. This approach offers a lightweight alternative to KubeRay, allowing you to manage Ray clusters using Volcano's job management capabilities.

#### How the Ray Plugin Works

The Ray plugin automatically:

- Configures the startup commands for the head and worker nodes in a Ray cluster
- Opens the required ports for Ray services (GCS: 6379, Dashboard: 8265, Client Server: 10001)
- Creates a Kubernetes service mapped to the Ray head node for job submission and dashboard access
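The service created by the plugin would look roughly like the following. This is a sketch inferred from the ports listed above; the actual name and selector are generated by Volcano from the job definition:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ray-cluster-job-head-svc     # name derived from the Volcano Job name
spec:
  selector:
    volcano.sh/job-name: ray-cluster-job   # illustrative selector
  ports:
    - name: gcs
      port: 6379
    - name: dashboard
      port: 8265
    - name: client
      port: 10001
```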
#### Setup Requirements

Before deploying Ray with a Volcano Job, ensure:

- Volcano is installed with the Ray plugin enabled
- The `svc` plugin is also enabled (required for service creation)
#### Example Deployment

Create a Ray cluster with one head node and two worker nodes:

```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: ray-cluster-job
spec:
  minAvailable: 3
  schedulerName: volcano
  plugins:
    ray: []
    svc: []
  policies:
    - event: PodEvicted
      action: RestartJob
  queue: default
  tasks:
    - replicas: 1
      name: head
      template:
        spec:
          containers:
            - name: head
              image: rayproject/ray:latest-py311-cpu
              resources: {}
          restartPolicy: OnFailure
    - replicas: 2
      name: worker
      template:
        spec:
          containers:
            - name: worker
              image: rayproject/ray:latest-py311-cpu
              resources: {}
          restartPolicy: OnFailure
```
Apply the configuration:

```bash
kubectl apply -f ray-cluster-job.yaml
```
#### Accessing the Ray Cluster

Once deployed, you can access the Ray cluster through the automatically created service:

```bash
# Check pod status
kubectl get pod

# Expected output:
# NAME                       READY   STATUS    RESTARTS   AGE
# ray-cluster-job-head-0     1/1     Running   0          106s
# ray-cluster-job-worker-0   1/1     Running   0          106s
# ray-cluster-job-worker-1   1/1     Running   0          106s

# Check the service
kubectl get service

# Expected output includes:
# ray-cluster-job-head-svc   ClusterIP   10.96.184.65   <none>   6379/TCP,8265/TCP,10001/TCP

# Port-forward to access the Ray Dashboard
kubectl port-forward service/ray-cluster-job-head-svc 8265:8265

# Submit a job to the cluster
ray job submit --address http://localhost:8265 -- python -c "import ray; ray.init(); print(ray.cluster_resources())"
```
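A job can also be submitted programmatically through the Ray Jobs REST API that the dashboard exposes on port 8265. The following is a minimal sketch, assuming the port-forward above is active; the `/api/jobs/` endpoint and the `entrypoint` field follow the Ray Jobs REST API, which may differ across Ray versions:

```python
# Sketch: submit a job via the Ray Jobs REST API behind the port-forward.
import json
import urllib.request

DASHBOARD = "http://localhost:8265"  # assumes `kubectl port-forward ... 8265:8265`

def job_request(entrypoint: str) -> urllib.request.Request:
    """Build a POST request for the Ray Jobs submission endpoint."""
    payload = json.dumps({"entrypoint": entrypoint}).encode("utf-8")
    return urllib.request.Request(
        f"{DASHBOARD}/api/jobs/",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = job_request("python -c 'import ray; ray.init(); print(ray.cluster_resources())'")
# urllib.request.urlopen(req)  # uncomment with an active port-forward
```

The Python `ray.job_submission.JobSubmissionClient` offers the same functionality with a higher-level interface if the `ray` package is installed locally.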
### Learn More

- For KubeRay integration details, visit the [KubeRay Volcano Scheduler Documentation](https://docs.ray.io/en/latest/cluster/kubernetes/k8s-ecosystem/volcano.html#kuberay-integration-with-volcano)
- For Volcano Job Ray plugin details, see the [Volcano Ray Plugin Guide](https://github.com/volcano-sh/volcano/blob/master/docs/user-guide/how_to_use_ray_plugin.md)
