If you are experiencing errors in {productname-long} relating to distributed workloads, read this section to understand what could be causing the problem, and how to resolve the problem.
If the problem is not documented here or in the release notes, contact {org-name} Support.
If the Ray cluster head pod or worker pods remain in a suspended state, the resource quota specified in the cluster queue configuration might be insufficient, or the resource flavor might not yet be created. To diagnose and resolve the problem, complete the following steps:
- Check the workload resource:
  - Click Search, and from the Resources list, select Workload.
  - Select the workload resource that is created with the Ray cluster resource, and click the YAML tab.
  - Check the text in the status.conditions.message field, which provides the reason for the suspended state, as shown in the following example:

        status:
          conditions:
            - lastTransitionTime: '2024-05-29T13:05:09Z'
              message: 'couldn''t assign flavors to pod set small-group-jobtest12: insufficient quota for nvidia.com/gpu in flavor default-flavor in ClusterQueue'

- Check the Ray cluster resource:
  - Click Search, and from the Resources list, select RayCluster.
  - Select the Ray cluster resource, and click the YAML tab.
  - Check the text in the status.conditions.message field.
- Check the cluster queue resource:
  - Click Search, and from the Resources list, select ClusterQueue.
  - Check your cluster queue configuration to ensure that the resources that you requested are within the limits defined for the project.
- Either reduce your requested resources (see the sketch after this list), or contact your administrator to request more resources.
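The following is a minimal sketch of reducing the requested resources and resubmitting the Ray cluster from your notebook. The cluster name is a hypothetical placeholder, and the resource field names (num_workers, worker_cpu_requests, and so on) are assumptions that vary between codeflare-sdk versions, so check the ClusterConfiguration fields in your installed version:

    from codeflare_sdk import Cluster, ClusterConfiguration

    # Request fewer resources so that the workload fits within the cluster queue quota.
    # Field names are assumptions based on recent codeflare-sdk releases.
    cluster = Cluster(ClusterConfiguration(
        name="raytest",                  # hypothetical cluster name
        num_workers=1,                   # fewer worker pods
        worker_cpu_requests=1,           # smaller CPU request per worker
        worker_cpu_limits=1,
        worker_memory_requests=2,        # memory values in GiB in recent releases
        worker_memory_limits=2,
        worker_extended_resource_requests={"nvidia.com/gpu": 0},  # drop the GPU request if GPU quota is exhausted
    ))

    cluster.up()   # resubmit the Ray cluster with the smaller resource footprint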
If the Ray cluster head pod or worker pods are not running, you might have insufficient resources. When a Ray cluster is created, it initially enters a failed state. This failed state usually resolves after the reconciliation process completes and the Ray cluster pods are running. If the failed state persists, complete the following steps:
- Click Search, and from the Resources list, select Pod.
- Click your pod name to open the pod details page.
- Click the Events tab, and review the pod events to identify the cause of the problem.
- If you cannot resolve the problem, contact your administrator to request assistance.
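You can also re-check the cluster state from your notebook by using the cluster.status() and cluster.details() commands. The following is a minimal sketch, assuming that the codeflare-sdk Cluster object is still in scope under the hypothetical name cluster:

    # Print the current state reported for the Ray cluster, and a fuller summary
    # that includes the requested resources. Both commands are read-only checks.
    cluster.status()
    cluster.details()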
After you run the cluster.up() command, the following error is shown:
ApiException: (500)
Reason: Internal Server Error
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"mraycluster.ray.openshift.ai\": failed to call webhook: Post \"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"codeflare-operator-webhook-service\"","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"mraycluster.ray.openshift.ai\": failed to call webhook: Post \"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"codeflare-operator-webhook-service\""}]},"code":500}
The CodeFlare Operator pod might not be running.
Contact your administrator to request assistance.
After you run the cluster.up() command, the following error is shown:
ApiException: (500)
Reason: Internal Server Error
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"mraycluster.kb.io\": failed to call webhook: Post \"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"kueue-webhook-service\"","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"mraycluster.kb.io\": failed to call webhook: Post \"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"kueue-webhook-service\""}]},"code":500}
The Kueue pod might not be running.
Contact your administrator to request assistance.
After you run the cluster.up() command, when you run either the cluster.details() command or the cluster.status() command, the Ray cluster remains in the Starting status instead of changing to the Ready status. No pods are created. To diagnose and resolve the problem, complete the following steps:
- Check the workload resource:
  - Click Search, and from the Resources list, select Workload.
  - Select the workload resource that is created with the Ray cluster resource, and click the YAML tab.
  - Check the text in the status.conditions.message field, which provides the reason for the cluster remaining in the Starting state.
- Check the Ray cluster resource:
  - Click Search, and from the Resources list, select RayCluster.
  - Select the Ray cluster resource, and click the YAML tab.
  - Check the text in the status.conditions.message field.
- If you cannot resolve the problem, contact your administrator to request assistance.
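To watch for the status change from your notebook while you investigate, you can poll the cluster status. The following is a minimal sketch, assuming that the Cluster object is still in scope under the hypothetical name cluster and that your codeflare-sdk version returns a (status, ready) tuple from the cluster.status() command:

    import time

    # Poll roughly every 10 seconds for up to 5 minutes, then give up.
    for _ in range(30):
        status, ready = cluster.status()   # ready is True only when the Ray cluster reaches the Ready status
        if ready:
            break
        time.sleep(10)
    else:
        print("Ray cluster is still not Ready; check the Workload and RayCluster resources as described above.")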
After you run the cluster.up() command, the following error is shown:
Default Local Queue with kueue.x-k8s.io/default-queue: true annotation not found please create a default Local Queue or provide the local_queue name in Cluster Configuration.
No default local queue is defined, and a local queue is not specified in the cluster configuration.
- Click Search, and from the Resources list, select LocalQueue.
- Resolve the problem in one of the following ways:
  - If a local queue exists, add it to your cluster configuration as follows (a fuller example is sketched after this list):

        local_queue="<local_queue_name>"

  - If no local queue exists, contact your administrator to request assistance.
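For example, if the console shows an existing LocalQueue resource, a cluster configuration that selects it explicitly might look like the following sketch. The cluster name and local queue name are hypothetical placeholders; replace them with your own values:

    from codeflare_sdk import Cluster, ClusterConfiguration

    cluster = Cluster(ClusterConfiguration(
        name="raytest",                    # hypothetical cluster name
        local_queue="my-local-queue",      # the LocalQueue name shown in the console
    ))

    cluster.up()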
After you run the cluster.up() command, the following error is shown:
local_queue provided does not exist or is not in this namespace. Please provide the correct local_queue name in Cluster Configuration.
An incorrect value is specified for the local queue in the cluster configuration, or an incorrect default local queue is defined. The specified local queue either does not exist, or exists in a different namespace.
- Click Search, and from the Resources list, select LocalQueue.
- Resolve the problem in one of the following ways:
  - If a local queue exists, ensure that you spelled the local queue name correctly in your cluster configuration, and that the namespace value in the cluster configuration matches your project name. If you do not specify a namespace value in the cluster configuration, the Ray cluster is created in the current project. A corrected configuration is sketched after this list.
  - If no local queue exists, contact your administrator to request assistance.
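A corrected configuration might look like the following sketch, assuming that your project is named my-project and that it contains a local queue named my-local-queue (both names are hypothetical placeholders):

    from codeflare_sdk import Cluster, ClusterConfiguration

    cluster = Cluster(ClusterConfiguration(
        name="raytest",                    # hypothetical cluster name
        namespace="my-project",            # must match the project that contains the local queue
        local_queue="my-local-queue",      # must match the LocalQueue name exactly
    ))

    cluster.up()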
After you run the cluster.up() command, an error similar to the following error is shown:
RuntimeError: Failed to get RayCluster CustomResourceDefinition: (403)
Reason: Forbidden
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rayclusters.ray.io is forbidden: User \"system:serviceaccount:regularuser-project:regularuser-workbench\" cannot list resource \"rayclusters\" in API group \"ray.io\" in the namespace \"regularuser-project\"","reason":"Forbidden","details":{"group":"ray.io","kind":"rayclusters"},"code":403}
The correct OpenShift login credentials are not specified in the TokenAuthentication section of your notebook code.
- Identify the correct OpenShift login credentials as follows:
  - In the OpenShift console, click your user name and select Copy login command.
  - In the new tab that opens, log in as the user whose credentials you want to use.
  - Click Display Token.
  - From the Log in with this token section, copy the token and server values.
- In your notebook code, specify the copied token and server values as follows:

      from codeflare_sdk import TokenAuthentication

      auth = TokenAuthentication(
          token = "<token>",
          server = "<server>",
          skip_tls=False
      )
      auth.login()
Kueue waits for a period of time before marking a workload as ready, to enable all of the workload pods to become provisioned and running. By default, Kueue waits for 5 minutes. If the pod image is very large and is still being pulled after the 5-minute waiting period elapses, Kueue fails the workload and terminates the related pods.
- Click Search, and from the Resources list, select Pod.
- Click the Ray head pod name to open the pod details page.
- Click the Events tab, and review the pod events to check whether the image pull completed successfully.

If the pod takes more than 5 minutes to pull the image, contact your administrator to request assistance.