There is a solution in #624 (comment), but for me it is a bit unstable: I get memory crashes while generating and need to find the sweet spot. For me, at the moment, it's only 8192 MB. I say "only" because my card has 16 GB of VRAM. I tried 12 GB, 15 GB, and 16 GB, but it kept failing. It's probably an AMD card compatibility issue, since DirectML support is still in beta.
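For anyone trying that workaround: torch-directml can't report free or total VRAM to the application, so the usable amount has to be pinned by hand, which is why finding a stable value is trial and error. Here is a minimal sketch of the idea, assuming a hard-coded memory estimate in the model management code (the function name and placement are illustrative, not the actual patch from #624):

```python
# Illustrative sketch only -- torch-directml exposes no equivalent of
# torch.cuda.mem_get_info(), so the VRAM budget must be hard-coded.
import torch_directml

dml_device = torch_directml.device()

def get_total_memory_estimate():
    # Manual cap in bytes. 8192 MB was the stable value reported above,
    # even on a 16 GB card; larger values crashed during generation.
    return 8192 * 1024 * 1024
```

The takeaway is that this number is a budget the loader trusts blindly; setting it near the card's physical 16 GB leaves no headroom for the OS and DirectML's own allocations, which may explain the crashes seen at 12-16 GB.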
Has anyone gotten it to work on a 7900 XTX? I edited run.bat with the AMD commands and I get the web interface, but nothing works. It seems to not even see my card.
Batch file output:
```
Using directml with device:
Total VRAM 1024 MB, total RAM 32690 MB
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Refiner unloaded.
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: C:\Users\Shane\Downloads\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
```
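Worth noting: `Total VRAM 1024 MB` and `Device: privateuseone` don't necessarily mean the card is invisible. torch-directml registers itself under PyTorch's `privateuseone` backend and can't query real VRAM, so a 1 GB placeholder is commonly used when nothing else is configured. To check whether torch-directml actually enumerates the 7900 XTX, here is a quick test to run with the embedded interpreter (the script name is my own; only standard torch-directml calls are used):

```python
# check_dml.py -- run with: .\python_embeded\python.exe check_dml.py
import torch
import torch_directml

# List every adapter DirectML can see; the 7900 XTX should appear here.
print("device count:", torch_directml.device_count())
for i in range(torch_directml.device_count()):
    print(f"  device {i}: {torch_directml.device_name(i)}")

# A tiny tensor op confirms the device is actually usable.
x = torch.ones(2, 2, device=torch_directml.device())
print((x + x).cpu())
```

If the card doesn't show up in that list, the problem is at the driver/torch-directml level rather than in Fooocus itself.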