Releases · Dao-AILab/flash-attention

v2.3.1.post1 (04 Oct 05:21)
[CI] Use official PyTorch 2.1, add CUDA 11.8 for PyTorch 2.1
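
The note above only names the wheel matrix (official PyTorch 2.1, with CUDA 11.8 added). As a minimal sketch, the check below reports whether the local environment matches that PyTorch 2.1 / CUDA 11.8 pairing before reaching for a prebuilt wheel; the version strings come from the note, everything else is illustrative.

```python
import torch

# Report the local torch / CUDA pairing and compare it against the
# PyTorch 2.1 + CUDA 11.8 combination named in the v2.3.1.post1 note.
torch_version = torch.__version__.split("+")[0]  # e.g. "2.1.0"
cuda_version = torch.version.cuda                # e.g. "11.8"; None for CPU-only builds

matches = torch_version.startswith("2.1") and cuda_version == "11.8"
print(f"torch {torch_version}, CUDA {cuda_version}: "
      f"{'matches' if matches else 'does not match'} the PyTorch 2.1 / CUDA 11.8 wheels")
```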

v2.3.1 (04 Oct 02:57)

v2.3.0 (27 Sep 05:09)

v2.2.5 (24 Sep 08:05)

v2.2.4.post1 (22 Sep 06:57)
Re-enable compilation for Hopper

v2.2.4 (21 Sep 06:40)

v2.2.3.post2 (18 Sep 05:17)
Don't compile for PyTorch 2.1 on CUDA 12.1 due to nvcc segfaults

v2.2.3.post1 (17 Sep 23:40)
Set block size to 64 x 64 for kvcache to avoid nvcc segfaults
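
The kvcache path referenced here is the one exposed as flash_attn_with_kvcache in the flash_attn package. A minimal sketch of a single decoding step follows; the shapes, sizes, and cache lengths are illustrative assumptions, not values taken from the release.

```python
import torch
from flash_attn import flash_attn_with_kvcache

# Illustrative sizes only.
batch, nheads, headdim = 2, 8, 64
max_seqlen = 256

# Pre-allocated KV cache; the kernel expects fp16/bf16 CUDA tensors.
k_cache = torch.zeros(batch, max_seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
v_cache = torch.zeros_like(k_cache)

# One decoding step: a single new query token per sequence, plus the new key/value
# that flash_attn_with_kvcache writes into the cache at position cache_seqlens.
q = torch.randn(batch, 1, nheads, headdim, dtype=torch.float16, device="cuda")
k_new = torch.randn_like(q)
v_new = torch.randn_like(q)
cache_seqlens = torch.full((batch,), 10, dtype=torch.int32, device="cuda")  # current cache lengths

out = flash_attn_with_kvcache(q, k_cache, v_cache, k=k_new, v=v_new,
                              cache_seqlens=cache_seqlens, causal=True)
print(out.shape)  # (batch, 1, nheads, headdim)
```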

v2.2.3 (16 Sep 08:47)

v2.2.2 (11 Sep 06:48)