[Bug]: Only detected 1GB of vram #3990

@Userunknownn000

Description

@Userunknownn000

Checklist

  • The issue has not been resolved by following the troubleshooting guide
  • The issue exists on a clean installation of Fooocus
  • The issue exists in the current version of Fooocus
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I have an AMD 9070 XT GPU (16 GB VRAM) and I am completely new to AI, so please explain it to me like an idiot.
I have already followed the steps for using Fooocus with an AMD GPU. However, every time I launch run.bat it detects only 1 GB of VRAM, alongside all 32 GB of my system RAM.
Every time I try to generate an image, Fooocus tends to use all my RAM, making my system unresponsive for about two minutes. I think Fooocus is using my integrated GPU instead of my dedicated GPU.
Can someone help me with this?

Another problem: when I use the default model in Fooocus everything works, but when I download other models (SDXL) from Civitai and try to process any image, it goes straight to a connection error.
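A connection error right after swapping in a Civitai checkpoint is often caused by a corrupted or incomplete download (for example, an HTML error page saved with a `.safetensors` extension), which crashes the worker when Fooocus tries to load it. As a minimal sketch using only the standard library, the safetensors header layout (an 8-byte little-endian header length followed by a JSON header) can be sanity-checked before pointing Fooocus at the file; the path is hypothetical:

```python
import json
import struct

def looks_like_safetensors(path):
    """Heuristic check that a file begins with a valid safetensors header:
    an unsigned 64-bit little-endian header length, then that many bytes of
    JSON. Catches truncated files and HTML error pages saved as models."""
    try:
        with open(path, "rb") as f:
            (header_len,) = struct.unpack("<Q", f.read(8))
            # An implausibly large header length means the first 8 bytes
            # are not a real safetensors length field (cap is arbitrary).
            if header_len <= 0 or header_len > 100 * 1024 * 1024:
                return False
            header = json.loads(f.read(header_len))
            return isinstance(header, dict)
    except Exception:
        return False

# Hypothetical path to a downloaded checkpoint:
# print(looks_like_safetensors(r"D:\...\models\checkpoints\mymodel.safetensors"))
```

If this returns `False`, re-download the model; if it returns `True` and Fooocus still fails, check the console window for the actual error (often out-of-memory) rather than the browser's connection-error page.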

Steps to reproduce the problem

I tried different models with the same result, and also tried different command-line options.

What should have happened?

Fooocus should detect the full 16 GB of VRAM on the dedicated GPU and process my image.

What browsers do you use to access Fooocus?

Google Chrome

Where are you running Fooocus?

Locally

What operating system are you using?

Windows 11

Console logs

D:\GAMES\Foocus\Fooocus_win64_2-5-0>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.5
[Cleanup] Attempting to delete content of temp dir C:\Users\camer\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Traceback (most recent call last):
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\entry_with_update.py", line 46, in <module>
    from launch import *
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\launch.py", line 152, in <module>
    from webui import *
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\webui.py", line 10, in <module>
    import modules.async_worker as worker
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 3, in <module>
    from extras.inpaint_mask import generate_mask_from_image, SAMOptions
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\extras\inpaint_mask.py", line 6, in <module>
    from extras.GroundingDINO.util.inference import default_groundingdino
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\extras\GroundingDINO\util\inference.py", line 3, in <module>
    import ldm_patched.modules.model_management as model_management
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 121, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 90, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 878, in current_device
    _lazy_init()
  File "D:\GAMES\Foocus\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 305, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

D:\GAMES\Foocus\Fooocus_win64_2-5-0>.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
Found existing installation: torch 2.4.1
Uninstalling torch-2.4.1:
  Successfully uninstalled torch-2.4.1
Found existing installation: torchvision 0.19.1
Uninstalling torchvision-0.19.1:
  Successfully uninstalled torchvision-0.19.1
WARNING: Skipping torchaudio as it is not installed.
WARNING: Skipping torchtext as it is not installed.
WARNING: Skipping functorch as it is not installed.
WARNING: Skipping xformers as it is not installed.

D:\GAMES\Foocus\Fooocus_win64_2-5-0>.\python_embeded\python.exe -m pip install torch-directml
Requirement already satisfied: torch-directml in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (0.2.5.dev240914)
Collecting torch==2.4.1 (from torch-directml)
  Using cached torch-2.4.1-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision==0.19.1 (from torch-directml)
  Using cached torchvision-0.19.1-cp310-cp310-win_amd64.whl.metadata (6.1 kB)
Requirement already satisfied: filelock in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (3.12.2)
Requirement already satisfied: typing-extensions>=4.8.0 in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (4.13.2)
Requirement already satisfied: sympy in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (1.12)
Requirement already satisfied: networkx in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (3.1)
Requirement already satisfied: jinja2 in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (3.1.2)
Requirement already satisfied: fsspec in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (2023.6.0)
Requirement already satisfied: numpy in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.19.1->torch-directml) (1.26.4)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.19.1->torch-directml) (10.4.0)
Requirement already satisfied: MarkupSafe>=2.0 in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from jinja2->torch==2.4.1->torch-directml) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in d:\games\foocus\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from sympy->torch==2.4.1->torch-directml) (1.3.0)
Using cached torch-2.4.1-cp310-cp310-win_amd64.whl (199.4 MB)
Using cached torchvision-0.19.1-cp310-cp310-win_amd64.whl (1.3 MB)
Installing collected packages: torch, torchvision
  WARNING: The scripts convert-caffe2-to-onnx.exe, convert-onnx-to-caffe2.exe and torchrun.exe are installed in 'D:\GAMES\Foocus\Fooocus_win64_2-5-0\python_embeded\Scripts' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed torch-2.4.1 torchvision-0.19.1

[notice] A new release of pip is available: 24.1.2 -> 25.1.1
[notice] To update, run: D:\GAMES\Foocus\Fooocus_win64_2-5-0\python_embeded\python.exe -m pip install --upgrade pip

D:\GAMES\Foocus\Fooocus_win64_2-5-0>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--directml']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.5
[Cleanup] Attempting to delete content of temp dir C:\Users\camer\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Using directml with device:
Total VRAM 1024 MB, total RAM 31861 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
IMPORTANT: You are using gradio version 3.41.2, however version 4.44.1 is available, please upgrade.
--------
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\GAMES\Foocus\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Started worker with PID 9912
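The blank name after `Using directml with device:` and the fixed `Total VRAM 1024 MB` figure suggest either that DirectML picked the wrong adapter, or that it cannot report adapter memory (a known DirectML limitation), so the backend falls back to a 1 GB default. As a quick diagnostic sketch, assuming the `torch-directml` package is installed in `python_embeded` (its `device_count`/`device_name` helpers are part of its public API; the fallback branch here is my addition):

```python
def list_directml_devices():
    """Return [(index, adapter_name), ...] for every adapter DirectML can
    see, or an empty list when torch-directml is not importable."""
    try:
        import torch_directml
    except ImportError:
        return []
    return [(i, torch_directml.device_name(i))
            for i in range(torch_directml.device_count())]

if __name__ == "__main__":
    for idx, name in list_directml_devices():
        print(f"device {idx}: {name}")
```

If the 9070 XT shows up at an index other than 0, the Fooocus/ldm_patched backend accepts an adapter index after the flag (e.g. `--directml 1`); it is worth confirming this against your build's `--help` output. The 1024 MB reading itself may persist even with the right adapter, since it is the backend's fallback when DirectML does not expose total memory.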

Additional information

No response

Assignees

    Labels

    bug (Something isn't working), triage (This needs an (initial) review)
