Unable to register cuDNN factory, Unable to register cuFFT factory, Unable to register cuBLAS factory, TF-TRT Warning: Could not find TensorRT #2495
Replies: 1 comment
-
UPDATE: after verifying the installation with
!python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
I was able to fix 3 out of 4 errors. I'm still getting the "TF-TRT Warning: Could not find TensorRT" error. I read an article about a similar issue, and the author wrote at the end that he was able to fix it by uninstalling libtiff. Not sure how to uninstall libtiff on Colab, though.
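In case it helps, here is a rough sketch of how one might check for and remove libtiff from a Colab cell. The package names are guesses, and whether libtiff is really the cause here is only an assumption taken from that article:
!dpkg -l | grep -i tiff   # list installed apt packages that provide libtiff
!apt-get remove -y libtiff-dev   # remove the libtiff development package, if present
!pip uninstall -y libtiff   # remove the pylibtiff Python package, if present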
-
Hey friends,
Would love some help fixing this problem, please:
Update failed.
No module named 'pygit2'
Update succeeded.
[System ARGV] ['/content/drive/MyDrive/Fooocus/entry_with_update.py', '--preset', 'turbo', '--theme', 'dark', '--share', '--always-high-vram', '--all-in-fp16']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.2.1
Error checking version for torchsde: No package metadata was found for torchsde
Installing requirements
Loaded preset: /content/drive/MyDrive/Fooocus/presets/turbo.json
Total VRAM 16151 MB, total RAM 52218 MB
Forcing FP16.
Set vram state to: HIGH_VRAM
Always offload VRAM
Device: cuda:0 Tesla V100-SXM2-16GB : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-03-10 07:24:36.461730: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-03-10 07:24:36.461776: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-03-10 07:24:36.463163: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-03-10 07:24:37.655981: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
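Side note: the "No module named 'pygit2'" line near the top of the log presumably just means the auto-updater couldn't import pygit2. If so, installing it in a Colab cell before launching should make that message go away (this is an assumption on my part, not something from the Fooocus docs):
!pip install pygit2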
UPDATE: after running this:
!python3 -m pip install tensorflow[and-cuda]
!python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
I was able to fix 3 out of 4 errors: Unable to register cuDNN factory, Unable to register cuFFT factory, and Unable to register cuBLAS factory.
I'm still getting this one:
"W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT"
Here is the log from the run after that fix:
Already up-to-date
Update succeeded.
[System ARGV] ['/content/drive/MyDrive/Fooocus/entry_with_update.py', '--preset', 'turbo', '--theme', 'dark', '--share', '--always-high-vram', '--all-in-fp16']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.2.1
Loaded preset: /content/drive/MyDrive/Fooocus/presets/turbo.json
Total VRAM 16151 MB, total RAM 52218 MB
Forcing FP16.
Set vram state to: HIGH_VRAM
Always offload VRAM
Device: cuda:0 Tesla V100-SXM2-16GB : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-03-10 08:11:22.317985: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
For reference, this is the command I use to launch Fooocus:
!python /content/drive/MyDrive/Fooocus/entry_with_update.py --preset turbo --theme dark --share --always-high-vram --all-in-fp16
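One thing I haven't tried yet is installing TensorRT itself so that TensorFlow can find it. Something like the following might silence the remaining warning, though the package name and whether it actually helps are just my assumptions (Fooocus runs on PyTorch, so the warning may well be harmless for image generation):
!pip install tensorrt
!python3 -c "import tensorrt; print(tensorrt.__version__)"   # check the module is importable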
Thanks for the help!