```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 4096, 1, 512) (torch.float32)
     key         : shape=(1, 4096, 1, 512) (torch.float32)
     value       : shape=(1, 4096, 1, 512) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`flshattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    max(query.shape[-1] != value.shape[-1]) > 128
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    max(query.shape[-1] != value.shape[-1]) > 128
    Operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 512
```
The problem occurs during inpainting. I run with these command-line arguments: `--medvram --allow-code --xformers --no-half --api --ckpt-dir A:\stable-diffusion-checkpoints\ --ignore-config --skip-torch-cuda-test`
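For context on why every backend is rejected: the xformers fused kernels listed in the traceback (`cutlassF`, `flshattF`, `tritonflashattF`) are CUDA-only, and the flash/triton paths additionally require fp16/bf16 and a head dimension of at most 128 — but this run combines `--skip-torch-cuda-test` (CPU), `--no-half` (float32), and a 512-dim head, so no operator matches. Dropping `--xformers` falls back to ordinary scaled-dot-product attention, which has no such device or dtype restrictions. Below is a hedged pure-Python sketch (names like `attention` are my own, not webui's) of the math that fallback path computes:

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def attention(q, k, v):
    """Plain scaled dot-product attention on nested lists (seq_len x dim).

    This is the computation the non-xformers attention path performs;
    it needs no CUDA device, no half precision, and no head-dim limit.
    """
    dim = len(q[0])
    scale = 1.0 / math.sqrt(dim)  # the usual 1/sqrt(d_head) scaling
    out = []
    for qi in q:
        # similarity of this query against every key
        scores = [scale * sum(a * b for a, b in zip(qi, kj)) for kj in k]
        weights = softmax(scores)
        # weighted average of the value rows
        out.append([sum(w * vj[d] for w, vj in zip(weights, v))
                    for d in range(len(v[0]))])
    return out
```

With identical keys the weights are uniform, so the output is simply the mean of the value rows — a quick sanity check that the fallback math is well defined on CPU float32.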