Unable to register cuDNN factory, Could not find TensorRT #2501
Unanswered · Amit30swgoh asked this question in Q&A
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--preset', 'turbo', '--theme', 'dark', '--share', '--disable-offload-from-vram', '--always-high-vram', '--all-in-fp16']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.2.1
Load preset [/content/Fooocus/presets/turbo.json] failed
Downloading: "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors" to /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
100% 6.62G/6.62G [00:22<00:00, 322MB/s]
Downloading: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" to /content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors
100% 47.3M/47.3M [00:00<00:00, 296MB/s]
Total VRAM 16151 MB, total RAM 52218 MB
Forcing FP16.
Set vram state to: HIGH_VRAM
Device: cuda:0 Tesla V100-SXM2-16GB : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-03-10 17:08:03.508252: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-03-10 17:08:03.508318: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-03-10 17:08:03.509892: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-03-10 17:08:04.997155: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
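An aside on the four lines above: the `Unable to register ... factory` errors and the `Could not find TensorRT` warning are emitted by TensorFlow's C++ backend (preinstalled in Colab), not by Fooocus itself, and the log below shows the PyTorch-based pipeline loading normally regardless. If the noise is unwanted, one commonly used knob is TensorFlow's `TF_CPP_MIN_LOG_LEVEL` environment variable; a minimal sketch (this hides the messages, it does not install TensorRT):

```python
import os

# Must be set before TensorFlow is first imported anywhere in the process;
# "3" suppresses INFO, WARNING, and ERROR output from TF's C++ layer.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
```

In a Colab notebook this would go in a cell run before `entry_with_update.py` is launched, or be exported in the shell that starts it.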
Refiner unloaded.
Running on local URL: http://127.0.0.1:7865/
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Running on public URL: https://1dd791670cbbf3731c.gradio.live/
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run
gradio deploy
from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.62 seconds
Started worker with PID 4410
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://1dd791670cbbf3731c.gradio.live/
Keyboard interruption in main thread... closing server.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2199, in block_thread
time.sleep(0.1)
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/content/Fooocus/entry_with_update.py", line 46, in
from launch import *
File "/content/Fooocus/launch.py", line 127, in
from webui import *
File "/content/Fooocus/webui.py", line 680, in
shared.gradio_root.launch(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2115, in launch
self.block_thread()
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2203, in block_thread
self.server.close()
File "/usr/local/lib/python3.10/dist-packages/gradio/networking.py", line 49, in close
self.thread.join()
File "/usr/lib/python3.10/threading.py", line 1064, in join
def join(self, timeout=None):
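For what it's worth, the traceback above is not a crash in Fooocus: the log already says "Keyboard interruption in main thread... closing server.", and the chained traceback shows Gradio's `block_thread` handling the `KeyboardInterrupt` (e.g. the Colab cell being stopped) by calling `self.server.close()` and joining the server thread. A minimal sketch of that control flow, where `FakeServer` and the tick count are illustrative stand-ins rather than Gradio's actual API:

```python
import time

class FakeServer:
    """Stand-in for gradio's internal server object (hypothetical)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def block_thread(server, ticks=3):
    # Stand-in for gradio's blocking loop (`while True: time.sleep(0.1)`).
    try:
        for _ in range(ticks):
            time.sleep(0.01)
        raise KeyboardInterrupt  # simulate Ctrl+C / the notebook cell stopping
    except KeyboardInterrupt:
        server.close()  # cleanup runs while the interrupt is being handled,
                        # which is why the log shows a chained traceback

server = FakeServer()
block_thread(server)
assert server.closed
```

So the session above started cleanly and was simply shut down; the only open question is whether the cuDNN/TensorRT messages matter, and the successful model load suggests they are cosmetic here.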