Commit c9625ad

Update README.md
1 parent b85b796 commit c9625ad

File tree

1 file changed (+2, -2 lines changed)

1 file changed

+2
-2
lines changed

README.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@
[Siyuan Li](https://lupin1998.github.io)<sup>1,3*</sup>, [Luyuan Zhang](https://openreview.net/profile?id=~Luyuan_Zhang1)<sup>2*</sup>, [Zedong Wang](https://jacky1128.github.io)<sup>4</sup>, [Juanxi Tian](https://tianshijing.github.io)<sup>3</sup>, [Cheng Tan](https://chengtan9907.github.io)<sup>1,3</sup>, [Zicheng Liu](https://pone7.github.io)<sup>1,3</sup>, [Chang Yu](https://openreview.net/profile?id=~Chang_Yu1)<sup>3</sup>, [Qingsong Xie](https://openreview.net/profile?id=~Qingsong_Xie1)<sup>5†</sup>, [Haoqian Wang](https://www.sigs.tsinghua.edu.cn/whq_en/main.htm)<sup>2</sup>, [Zhen Lei](http://www.cbsr.ia.ac.cn/users/zlei/)<sup>6,7,8†</sup>

- <sup>1</sup> Zhejiang University &emsp; <sup>2</sup> Tsinghua University &emsp; <sup>3</sup> Westlake University &emsp; <sup>4</sup> HKUST &emsp; <sup>5</sup> OPPO AI Center &emsp; <sup>6</sup> CAIR, HKISI-CAS &emsp; <sup>7</sup> MAIS CASIA &emsp; <sup>8</sup> University of Chinese Academy of Sciences
+ <sup>1</sup> Zhejiang University &emsp; <sup>2</sup> Tsinghua University &emsp; <sup>3</sup> Westlake University &emsp; <sup>4</sup> HKUST &emsp; <sup>5</sup> OPPO AI Center &emsp; <sup>6</sup> CAIR, HKISI-CAS &emsp; <sup>7</sup> MAIS CASIA &emsp; <sup>8</sup> University of Chinese Academy of Sciences

<sup>*</sup> Equal Contributions. <sup>†</sup> Corresponding Authors.

@@ -29,7 +29,7 @@ Masked Image Modeling (MIM) with Vector Quantization (VQ) has achieved great suc
To push the limits of this paradigm, we propose MergeVQ, which incorporates token merging techniques into VQ-based autoregressive generative models to bridge the gap between visual generation and representation learning in a unified architecture. During pre-training, MergeVQ decouples top-k semantics from the latent space with a token merge module after the self-attention blocks in the encoder for subsequent Look-up Free Quantization (LFQ) and global alignment, and recovers their fine-grained details through cross-attention in the decoder for reconstruction. For the second-stage generation, we introduce MergeAR, which performs KV Cache compression for efficient raster-order prediction.
Experiments on ImageNet verify that MergeVQ, as an AR generative model, achieves competitive performance in both representation learning and image generation tasks while maintaining favorable token efficiency and inference speed.

- HuggingFace: [https://huggingface.co/papers](https://huggingface.co/papers/2504.00999) (#1 Paper of the day⬆️)
+ HuggingFace: [https://huggingface.co/papers](https://huggingface.co/papers/2504.00999) ("#1 Paper of the day" ⬆️)
## Catalog

We plan to release implementations of MergeVQ in a few months (before CVPR 2025 takes place). Please watch this repository for the latest release, and feel free to open issues for discussion!
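
The "decouple top-k semantics, then merge the remaining tokens before quantization" step described in the README text above can be sketched in a few lines. This is a hypothetical illustration only: the function name `merge_tokens`, the parameter `k_keep`, the per-token importance `scores`, and the averaging-by-similarity merging rule are all assumptions for exposition, not the authors' released MergeVQ implementation.

```python
# Minimal sketch (assumed names and merging rule, not the official MergeVQ code).
import torch
import torch.nn.functional as F


def merge_tokens(x: torch.Tensor, scores: torch.Tensor, k_keep: int):
    """Keep the k_keep highest-scoring tokens and fold the rest into them.

    x:      (B, N, D) token features after a self-attention block
    scores: (B, N)    per-token importance, e.g. mean attention received
    Returns merged tokens of shape (B, k_keep, D) and the kept indices (B, k_keep).
    """
    B, N, D = x.shape
    keep_idx = scores.topk(k_keep, dim=1).indices                        # (B, k_keep)
    kept = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))  # (B, k_keep, D)

    # Assign every token to its most similar kept token (cosine similarity),
    # then average each group so dropped tokens are merged rather than discarded.
    sim = F.normalize(x, dim=-1) @ F.normalize(kept, dim=-1).transpose(1, 2)  # (B, N, k_keep)
    assign = F.one_hot(sim.argmax(dim=-1), num_classes=k_keep).to(x.dtype)    # (B, N, k_keep)
    merged = assign.transpose(1, 2) @ x                                   # per-group sums
    counts = assign.sum(dim=1, keepdim=True).transpose(1, 2).clamp(min=1)
    return merged / counts, keep_idx


# Toy usage: 196 patch tokens reduced to 36 before quantization.
x = torch.randn(2, 196, 256)
scores = torch.rand(2, 196)
merged, kept = merge_tokens(x, scores, k_keep=36)
print(merged.shape)  # torch.Size([2, 36, 256])
```

In such a pipeline, `merged` would feed the Look-up Free Quantizer and the global-alignment objective, while `kept` (the retained indices) would let the decoder's cross-attention place recovered fine-grained details back at the right positions; the actual MergeVQ modules may differ.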

0 commit comments
