requires device with capability < (8, 0) but your GPU has capability (12, 0) (too new) #1356

@lucasjinreal

Description

```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(8, 1, 8, 64) (torch.float32)
     key         : shape=(8, 192, 8, 64) (torch.float32)
     value       : shape=(8, 192, 8, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
[email protected] is not supported because:
    requires device with capability < (8, 0) but your GPU has capability (12, 0) (too new)
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    requires device with capability == (8, 0) but your GPU has capability (12, 0) (too new)
[email protected] is not supported because:
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
cutlassF-pt is not supported because:
    requires device with capability < (5, 0) but your GPU has capability (12, 0) (too new)
```

My RTX 5070 cannot run this function. Any help?
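One possible workaround (a sketch, not confirmed on Blackwell/sm120 hardware): skip xformers' `memory_efficient_attention` entirely and call PyTorch's built-in `torch.nn.functional.scaled_dot_product_attention`, which accepts float32 and dispatches to whatever kernel the installed PyTorch build supports. Note the layout difference: xformers takes tensors as `(batch, seq, heads, head_dim)`, while PyTorch's SDPA expects `(batch, heads, seq, head_dim)`, so a transpose is needed on the way in and out. The shapes below are taken from the error message above.

```python
import torch
import torch.nn.functional as F

# Same shapes as in the traceback: (batch, seq_len, num_heads, head_dim)
q = torch.randn(8, 1, 8, 64)
k = torch.randn(8, 192, 8, 64)
v = torch.randn(8, 192, 8, 64)

# xformers layout is (B, M, H, K); PyTorch SDPA wants (B, H, M, K),
# so transpose the sequence and head dimensions before and after.
out = F.scaled_dot_product_attention(
    q.transpose(1, 2),
    k.transpose(1, 2),
    v.transpose(1, 2),
).transpose(1, 2)

print(out.shape)  # torch.Size([8, 1, 8, 64])
```

Alternatively, since every listed backend rejects `torch.float32`, casting the model (or just q/k/v) to `torch.float16` or `torch.bfloat16` may also clear the dtype complaint, though the capability checks suggest the installed xformers wheel simply has no kernels built for sm120 and would need a newer build.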
