Releases: Dao-AILab/flash-attention

v2.3.1.post1

04 Oct 05:21

[CI] Use official PyTorch 2.1, add CUDA 11.8 for PyTorch 2.1

v2.3.1

04 Oct 02:57

Bump to v2.3.1

v2.3.0

27 Sep 05:09

Bump to v2.3.0

v2.2.5

24 Sep 08:05

Bump to v2.2.5

v2.2.4.post1

22 Sep 06:57

Re-enable compilation for Hopper

v2.2.4

21 Sep 06:40

Bump to v2.2.4

v2.2.3.post2

18 Sep 05:17

Don't compile for PyTorch 2.1 on CUDA 12.1 due to nvcc segfaults

v2.2.3.post1

17 Sep 23:40

Set block size to 64 x 64 for kvcache to avoid nvcc segfaults

v2.2.3

16 Sep 08:47

Bump to v2.2.3

v2.2.2

11 Sep 06:48

Bump to v2.2.2