diff --git a/README.md b/README.md
index e5f4ee8f8..62a4502c2 100644
--- a/README.md
+++ b/README.md
@@ -207,6 +207,17 @@ def flash_attn_with_kvcache(
To see how these functions are used in a multi-head attention layer (which
includes QKV projection, output projection), see the MHA [implementation](https://github.com/Dao-AILab/flash-attention/blob/main/flash_attn/modules/mha.py).
+## FlashAttention and FlashAttention-2 Animations
+
+Explore the inner workings of attention with this animated series! The visualizations illustrate three algorithms: standard attention, FlashAttention, and FlashAttention-2. For short sequences, the animations build intuition for how each algorithm proceeds; as the sequence length grows, it becomes clear that FlashAttention and FlashAttention-2 perform asymptotically less IO than standard attention.
+
+| Sequence Length | Standard Attention | FlashAttention | FlashAttention-2 |
+| --- | -------- | -------- | -------- |
+| Short | (animation) | (animation) | (animation) |
+| Medium | (animation) | (animation) | (animation) |
+| Long | (animation) | (animation) | (animation) |
+
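+To complement the animations, here is a minimal PyTorch sketch (not the library's CUDA kernels) contrasting standard attention, which materializes the full N×N score matrix, with a tiled pass that uses a running softmax so only one block of scores exists at a time. The function names, `block_size` argument, and tensor shapes below are illustrative, not part of the flash-attn API.
+
+```python
+import torch
+
+def standard_attention(q, k, v):
+    # Materializes the full (N, N) attention score matrix -- the quadratic
+    # intermediate whose reads/writes dominate IO for long sequences.
+    scores = (q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5
+    return torch.softmax(scores, dim=-1) @ v
+
+def tiled_attention(q, k, v, block_size=128):
+    # Walks over K/V in blocks with a running (online) softmax, so only one
+    # block-sized tile of scores exists at a time -- the idea the FlashAttention
+    # kernels implement on-chip. Readability sketch only, not the CUDA code.
+    scale = q.shape[-1] ** -0.5
+    out = torch.zeros_like(q)
+    row_max = torch.full(q.shape[:-1] + (1,), float("-inf"))
+    row_sum = torch.zeros(q.shape[:-1] + (1,))
+    for start in range(0, k.shape[-2], block_size):
+        kb = k[..., start:start + block_size, :]
+        vb = v[..., start:start + block_size, :]
+        s = (q @ kb.transpose(-2, -1)) * scale
+        new_max = torch.maximum(row_max, s.max(dim=-1, keepdim=True).values)
+        correction = torch.exp(row_max - new_max)  # rescale earlier partial sums
+        p = torch.exp(s - new_max)
+        out = out * correction + p @ vb
+        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
+        row_max = new_max
+    return out / row_sum
+
+q, k, v = (torch.randn(2, 8, 1024, 64) for _ in range(3))
+print(torch.allclose(standard_attention(q, k, v), tiled_attention(q, k, v), atol=1e-4))
+```
+
+The tiled version never forms the full score matrix, which is exactly what the animations above visualize for growing sequence lengths.
+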
## Changelog
### 2.0: Complete rewrite, 2x faster