I'm facing persistent CUDA-related issues with my RTX 5090 when attempting to run GPU-accelerated tasks, and I’m hoping to get some guidance from the community.
System Specs:
GPU: NVIDIA RTX 5090 (Zotac Infinity Edition)
Driver Version: [Insert your driver version here, e.g., 551.xx]
CUDA Toolkit: CUDA 12.1
Python Version: 3.10.x (via Anaconda / Miniconda)
Frameworks Installed:
torch, torchvision, torchaudio (CUDA 12.1 versions)
onnxruntime-gpu
Running environments: PyCharm, Anaconda/Miniconda, VSCode
AI Tools:
AUTOMATIC1111 Stable Diffusion WebUI
ComfyUI
OS: Windows 11 Pro
The Problem:
When I run image generation tasks (e.g., Stable Diffusion models loaded from .safetensors checkpoints), the run fails with a flood of CUDA errors. This happens across different environments, whether I'm running PyCharm scripts, Anaconda environments, or launching AUTOMATIC1111 / ComfyUI.
Typical errors include:
CUDA error: no kernel image is available for execution on the device
Torch not compiled with CUDA enabled
RuntimeError: CUDA out of memory (even when VRAM usage is low)
Failed to load CUDA backend for torch
Occasionally, errors related to mismatched CUDA versions or missing kernel images.
Despite these issues, when I run simple matrix calculations or basic PyTorch GPU tests, the RTX 5090 performs perfectly, solving operations in under a second. This suggests the GPU is functioning correctly for certain workloads.
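For context, the kind of basic test I mean looks roughly like this (a minimal sketch; the matrix sizes and timing code are illustrative, not an exact record of what I ran):
import time
import torch
device = torch.device("cuda")
a = torch.randn(4096, 4096, device=device)  # two large matrices allocated on the GPU
b = torch.randn(4096, 4096, device=device)
torch.cuda.synchronize()
start = time.time()
c = a @ b  # GPU matrix multiplication
torch.cuda.synchronize()  # wait for the kernel to finish before reading the clock
print(f"matmul finished in {time.time() - start:.3f} s, mean = {c.mean().item():.4f}")
This runs without any CUDA errors and finishes in well under a second.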
Additionally, when I force the image generation to run on CPU mode, everything works fine—just slower, as expected.
What I’ve Tried So Far:
Verified that PyTorch can see the GPU:
import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
— Returns True and correctly identifies the RTX 5090.
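Since "no kernel image is available" usually points at missing architecture support, a check along these lines should show whether the installed wheel even ships kernels for this card (standard torch calls; I don't know offhand which sm_XX the 5090 is supposed to report, so treat this as a sketch of the check rather than a conclusion):
import torch
print(torch.cuda.get_device_capability(0))  # compute capability the driver reports for the card
print(torch.cuda.get_arch_list())           # sm_XX architectures this PyTorch build ships kernels for
If the reported capability is not covered by the arch list, that would line up with the "no kernel image" error.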
Reinstalled PyTorch with the CUDA 12.1 wheels:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
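Something I still want to rule out is that one of the environments quietly ended up with a CPU-only wheel, which would explain the "Torch not compiled with CUDA enabled" message; as far as I understand, AUTOMATIC1111 keeps its own venv separate from my conda envs, so a check like the following would need to run inside each environment:
import torch
print(torch.__version__)          # CUDA wheels normally carry a "+cu121"-style suffix
print(torch.version.cuda)         # None here would indicate a CPU-only build
print(torch.cuda.is_available())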
Updated NVIDIA drivers to the latest version.
Adjusted webui-user.bat settings in AUTOMATIC1111:
Tried adding --xformers, --precision full, --no-half, etc.
Checked VRAM usage — plenty of free memory available during errors.
Tested both AUTOMATIC1111 and ComfyUI — same CUDA-related crashes.
Ensured my environment paths are properly set for CUDA.
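For the path check in the last item, I did roughly the following from inside each environment (assuming CUDA_PATH is the variable the Windows toolkit installer sets; corrections welcome if there is something better to inspect):
import os
print(os.environ.get("CUDA_PATH"))  # toolkit location registered by the installer
print([p for p in os.environ["PATH"].split(os.pathsep) if "cuda" in p.lower()])  # CUDA entries on PATH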
Suspicions:
Possible CUDA 12.1 compatibility issues with newer GPUs like RTX 5090.
Mismatch between PyTorch, CUDA, and AI tool versions.
Potential bugs in AUTOMATIC1111 or ComfyUI not yet fully supporting the newest-generation (Blackwell) cards such as the 5090.
Driver instability or missing low-level support for certain AI workflows.
Looking for Help On:
Recommended CUDA Toolkit version for RTX 5090 in AI tasks.
The best matching PyTorch build for Stable Diffusion and other image generation frameworks.
Any known fixes or configs for AUTOMATIC1111 / ComfyUI with 5090 GPUs.
Whether downgrading or upgrading specific libraries could resolve this.
Alternative settings or environment tweaks to stabilize CUDA performance.
Any advice, shared experiences, or guidance would be highly appreciated. I’ve spent days troubleshooting this, and I’d love to get the RTX 5090 fully operational for my AI projects!
Thank you in advance!