RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select) #2478
Unanswered
aleixr1997 asked this question in Q&A
After running Fooocus, once the interface appears in Chrome I enter a prompt, and when I click Generate this error appears:
D:\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.2.1
Total VRAM 8191 MB, total RAM 32710 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL: http://127.0.0.1:7865
To create a public link, set share=True in launch().
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: D:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [D:\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
loading in lowvram mode 64.0
lowvram: loaded module regularly Embedding(49408, 768)
lowvram: loaded module regularly Embedding(77, 768)
lowvram: loaded module regularly Embedding(49408, 1280)
lowvram: loaded module regularly Embedding(77, 1280)
[Fooocus Model Management] Moving model(s) has taken 0.22 seconds
Started worker with PID 34772
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 7640253055175883182
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] car, cinematic, atmosphere, gorgeous, full, crisp, light, clear, focus, extremely detailed, intricate, very sleek, complex, highly color, advanced, professional, elegant, luxury, dramatic, determined, cool, inspiring, amazing, attractive, smart, cute, confident, passionate, vibrant, iconic, epic, best, contemporary, futuristic, trendy, enhanced
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] car, highly detailed, intricate, sharp focus, beautiful dynamic light, vivid colors, symmetry, full color, cinematic, refined, elegant, deep aesthetic, magical, appealing, very inspirational, inspiring, original, fine detail, clear background, professional, ambient, magic, epic, best, winning, fair, pretty, perfect, artistic, positive, thoughtful, pure, rational
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.16 seconds
Traceback (most recent call last):
File "D:\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 900, in worker
handler(task)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 480, in handler
t['c'] = pipeline.clip_encode(texts=t['positive'], pool_top_k=t['positive_top_k'])
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 191, in clip_encode
cond, pooled = clip_encode_single(final_clip, text)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 149, in clip_encode_single
result = clip.encode_from_tokens(tokens, return_pooled=True)
File "D:\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sd.py", line 128, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
File "D:\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sdxl_clip.py", line 54, in encode_token_weights
g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
File "D:\Fooocus_win64_2-1-831\Fooocus\modules\patch_clip.py", line 39, in patched_encode_token_weights
out, pooled = self.encode(to_encode)
File "D:\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sd1_clip.py", line 190, in encode
return self(tokens)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\Fooocus\modules\patch_clip.py", line 125, in patched_SDClipModel_forward
outputs = self.transformer(input_ids=tokens, attention_mask=attention_mask,
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
return self.text_model(
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 229, in forward
position_embeddings = self.position_embedding(position_ids)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
return F.embedding(
File "D:\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
Total time: 3.68 seconds
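
For reference, the exception at the bottom is PyTorch's generic cross-device indexing error: torch.embedding is handed an embedding weight that lives on cuda:0 while the index tensor (here CLIP's position_ids) is still on the CPU. The "lowvram: loaded module regularly Embedding(...)" lines in the startup log suggest the text-encoder embeddings went through the low-VRAM offload path, which is a plausible spot for such a mismatch to arise. Below is a minimal sketch, independent of Fooocus internals, that reproduces the same failure and shows the usual fix:

import torch
import torch.nn as nn

# Embedding weights on the GPU, mirroring the CLIP position_embedding in the trace.
emb = nn.Embedding(77, 1280).to("cuda:0")

# Index tensor accidentally left on the CPU.
position_ids = torch.arange(77)  # device: cpu

try:
    emb(position_ids)
except RuntimeError as e:
    # RuntimeError: Expected all tensors to be on the same device,
    # but found at least two devices, cuda:0 and cpu!
    print(e)

# The usual fix: move the indices to the same device as the weights.
position_ids = position_ids.to(emb.weight.device)
out = emb(position_ids)  # works; out.device is cuda:0

To see which submodules actually ended up on which device after loading, one quick check is to iterate the parameters of the loaded text encoder (model below is a placeholder, not a Fooocus name):

for name, p in model.named_parameters():
    print(name, p.device)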
Replies: 1 comment

Have you resolved it? I have the same question.