Releases: Dao-AILab/flash-attention

v2.7.4.post1

v2.7.4

29 Jan 21:34
Bump to v2.7.4

v2.7.3

10 Jan 18:01 · 89c5a7d
Change version to 2.7.3 (#1437)

Signed-off-by: Kirthi Shankar Sivamani <[email protected]>

v2.7.2.post1

07 Dec 18:43
[CI] Use MAX_JOBS=1 with nvcc 12.3, don't need OLD_GENERATOR_PATH

v2.7.2

07 Dec 16:34
Bump to v2.7.2

v2.7.1.post4

07 Dec 06:15
[CI] Don't include <ATen/cuda/CUDAGraphsUtils.cuh>

v2.7.1.post3

07 Dec 05:41
[CI] Change torch #include to make it work with torch 2.1 Philox

v2.7.1.post2

07 Dec 01:13
[CI] Use torch 2.6.0.dev20241001, reduce torch #include

v2.7.1.post1

06 Dec 01:53
[CI] Fix CUDA version for torch 2.6

v2.7.1

06 Dec 01:43
Bump to v2.7.1