<div align="center">

<h2><a href="https://arxiv.org/abs/2504.00999">MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization (CVPR 2025)</a></h2>

[Siyuan Li](https://lupin1998.github.io)<sup>1,3*</sup>, [Luyuan Zhang](https://openreview.net/profile?id=~Luyuan_Zhang1)<sup>2*</sup>, [Zedong Wang](https://jacky1128.github.io)<sup>4</sup>, [Juanxi Tian](https://tianshijing.github.io)<sup>3</sup>, [Cheng Tan](https://chengtan9907.github.io)<sup>1,3</sup>, [Zicheng Liu](https://pone7.github.io)<sup>1,3</sup>, [Chang Yu](https://openreview.net/profile?id=~Chang_Yu1)<sup>3</sup>, [Qingsong Xie](https://openreview.net/profile?id=~Qingsong_Xie1)<sup>5†</sup>, [Haoqian Wang](https://www.sigs.tsinghua.edu.cn/whq_en/main.htm)<sup>2</sup>, [Zhen Lei](http://www.cbsr.ia.ac.cn/users/zlei/)<sup>6,7,8†</sup>

<sup>1</sup> Zhejiang University &emsp; <sup>2</sup> Tsinghua University &emsp; <sup>3</sup> Westlake University &emsp; <sup>4</sup> HKUST &emsp; <sup>5</sup> OPPO AI Center &emsp;
<sup>6</sup> CAIR, HKISI-CAS &emsp; <sup>7</sup> MAIS, CASIA &emsp; <sup>8</sup> University of Chinese Academy of Sciences

<sup>*</sup> Equal Contributions; <sup>†</sup> Corresponding Authors.

<!-- IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025 -->

</div>
<p align="center">
<a href="https://arxiv.org/abs/2504.00999" alt="arXiv">
<img src="https://img.shields.io/badge/arXiv-2504.00999-b31b1b.svg?style=flat" /></a>
<a href="https://github.com/ApexGen-X/MergeVQ/blob/main/LICENSE" alt="license">
<img src="https://img.shields.io/badge/license-Apache--2.0-%23B7A800" /></a>
<!-- <a href="https://colab.research.google.com/github/Westlake-AI/MogaNet/blob/main/demo.ipynb" alt="Colab">
<img src="https://colab.research.google.com/assets/colab-badge.svg" /></a> -->
<!-- <a href="https://huggingface.co/MogaNet" alt="Huggingface">
<img src="https://img.shields.io/badge/huggingface-MogaNet-blueviolet" /></a> -->
</p>

![mergevq_framework](https://github.com/user-attachments/assets/a3e22ba0-6f0d-43bb-bf38-cf628ec1aa41)

Masked Image Modeling (MIM) with Vector Quantization (VQ) has achieved great success in both self-supervised pre-training and image generation. However, most existing methods struggle with the trade-off in a shared latent space between generation quality on the one hand and representation learning and efficiency on the other. To push the limits of this paradigm, we propose MergeVQ, which incorporates token merging techniques into VQ-based autoregressive generative models to bridge the gap between visual generation and representation learning in a unified architecture. During pre-training, MergeVQ decouples top-k semantics from the latent space with a token merging module after the self-attention blocks in the encoder for subsequent Look-up Free Quantization (LFQ) and global alignment, and recovers their fine-grained details through cross-attention in the decoder for reconstruction. For second-stage generation, we introduce MergeAR, which performs KV-cache compression for efficient raster-order prediction. Experiments on ImageNet verify that MergeVQ as an AR generative model achieves competitive performance on both representation learning and image generation tasks while maintaining favorable token efficiency and inference speed.
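
For intuition only, here is a minimal PyTorch sketch of the idea described above. It is not the released implementation, and the function names (`merge_tokens`, `lfq`, `recover`) and the choice of which tokens to keep are illustrative: it merges the encoder tokens down to k kept tokens while remembering the source-to-target assignment, quantizes the merged tokens with a sign-based look-up-free quantizer, and recovers a full-length sequence from the assignment. The actual model instead recovers fine-grained details with cross-attention in the decoder and adds MergeAR for second-stage generation.

```python
# Illustrative sketch of the "merge -> quantize -> recover" pipeline (not the official code).
import torch
import torch.nn.functional as F


def merge_tokens(x: torch.Tensor, k: int):
    """Merge L encoder tokens down to k kept tokens by cosine similarity.

    x: (B, L, C) tokens after the encoder's self-attention blocks.
    Returns merged tokens (B, k, C) and an assignment (B, L) mapping every
    source token to one of the k kept slots.
    """
    B, L, C = x.shape
    keep = x[:, :k, :]  # toy choice: keep the first k tokens as merge targets
    sim = F.normalize(x, dim=-1) @ F.normalize(keep, dim=-1).transpose(1, 2)  # (B, L, k)
    assign = sim.argmax(dim=-1)  # (B, L): target slot for each source token
    merged = torch.zeros_like(keep)
    counts = torch.zeros(B, k, 1, device=x.device)
    merged.scatter_add_(1, assign.unsqueeze(-1).expand(-1, -1, C), x)
    counts.scatter_add_(1, assign.unsqueeze(-1), torch.ones(B, L, 1, device=x.device))
    return merged / counts.clamp(min=1), assign  # average the tokens merged into each slot


def lfq(z: torch.Tensor) -> torch.Tensor:
    """Look-up-free quantization: each channel is quantized to +-1 by its sign."""
    q = torch.where(z > 0, torch.ones_like(z), -torch.ones_like(z))
    return z + (q - z).detach()  # straight-through estimator for training


def recover(merged: torch.Tensor, assign: torch.Tensor) -> torch.Tensor:
    """Un-merge by copying each quantized slot back to its source positions
    (the paper recovers fine-grained details with cross-attention instead)."""
    B, L = assign.shape
    C = merged.shape[-1]
    return torch.gather(merged, 1, assign.unsqueeze(-1).expand(B, L, C))


x = torch.randn(2, 256, 64)              # e.g. 16x16 encoder tokens with dim 64
merged, assign = merge_tokens(x, k=144)  # keep 144 of 256 tokens
tokens = recover(lfq(merged), assign)    # (2, 256, 64) tokens passed on to the decoder
```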
🤗 HuggingFace Daily Papers Top-1: [https://huggingface.co/papers/2504.00999](https://huggingface.co/papers/2504.00999)
## Catalog
We plan to release the implementations of MergeVQ within a few months (before CVPR 2025 takes place). Please watch this repository for the latest releases, and feel free to open issues for discussion! Currently, we have released the basic implementations of the MergeVQ tokenizers.
## 📖 Implementations
### 🛠️ Installation
#### GPU
- **Environments**: We have tested `Python 3.10.0` + `torch==2.1.0+cuda12.1` and `Python 3.8.8` + `torch==1.13.0+cuda11.8`; other versions may also work.
- **Dependencies**: `pip install -r requirements.txt`
Here is an example of installing `torch==2.4.0` with `cuda12.4` from scratch:
```sh
conda create -n mergevq python=3.10.0
conda activate mergevq
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
```
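
After installation, a quick optional check (our suggestion, not part of the repository) confirms that the installed wheel actually sees the GPU:

```python
# Optional sanity check for the PyTorch + CUDA installation.
import torch

print(torch.__version__)          # e.g. 2.4.0
print(torch.version.cuda)         # e.g. 12.4 for the cu124 wheel above
print(torch.cuda.is_available())  # should print True on a CUDA machine
```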
#### NPU
- **Environment**: `Python 3.9.16` and [`CANN 8.0.T13`](https://www.hiascend.com/en/software/cann)
- **Main Dependencies**: `torch==2.1.0+cpu` + `torch-npu==2.1.0.post3-20240523` + [`Lightning`](https://github.com/hipudding/pytorch-lightning/tree/npu_support)
- **Other Dependencies**: see `requirements.txt` (a quick device check is sketched below)
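
Analogously to the GPU setup, a short optional check (ours, assuming `torch` and `torch-npu` are installed as above) verifies that the Ascend NPU is visible:

```python
# Optional sanity check for the Ascend NPU setup.
import torch
import torch_npu  # registers the "npu" device type with PyTorch

print(torch.__version__)
print(torch_npu.npu.is_available())  # should print True once CANN and the NPU driver are configured
```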
#### Datasets Preparation
We use ILSVRC2012 ImageNet with the [training set](https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_train.tar) and [validation set](https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar) at the root directory, which can be downloaded and untarred to the following layout:
```
.cache/imagenet
└── train/
    ├── n01440764
    │   ├── n01440764_10026.JPEG
    │   ├── n01440764_10027.JPEG
    │   ├── ...
    ├── n01443537
    ├── ...
└── val/
    ├── n01440764
    ├── n01443537
    ├── ...
```
When training or evaluation starts, the meta files will be generated under `.cache/imagenet/train` and `.cache/imagenet/val`, including `filelist.txt`, `imagenet_idx_to_synset.yaml`, `synset_human.txt`, and `validation_synset.txt`. If you want to use a custom dataset or ImageNet at a different path, please specify `cachedir` for `taming.data.imagenet.ImageNetTrain` in the training config file.
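
Before launching a run, a tiny optional script (ours, using the default `.cache/imagenet` path shown above) can verify the layout by counting the synset folders:

```python
# Optional: check that ImageNet is laid out as expected before training.
from pathlib import Path

root = Path(".cache/imagenet")
for split in ("train", "val"):
    synsets = [d for d in (root / split).iterdir() if d.is_dir()]
    print(f"{split}: {len(synsets)} synset folders")  # ImageNet-1k should report 1000 per split
```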
#### Pre-trained Models
If you cannot access `https://huggingface.co/` smoothly, we have two solutions:
* Set `HF_ENDPOINT` to the mirror website (`https://hf-mirror.com`) and start training directly:
```sh
export HF_ENDPOINT=https://hf-mirror.com
```
* Manually download the following pre-trained models from the official or mirror websites and copy them to the cache folder as follows, or modify the config file with the path to the local Hugging Face models.
```
/root/.cache/huggingface/hub
├── models--facebook--dinov2-base
├── models--laion--CLIP-ViT-B-16-laion2B-s34B-b88K
└── models--timm--vit_base_patch14_dinov2.lvd142m
```
```python
from timm import create_model
from transformers import AutoModel

# DINOv2 (ViT-B/14) teacher weights, cached under ~/.cache/huggingface/hub as listed above.
teacher_weights = create_model("vit_base_patch14_dinov2.lvd142m", pretrained=True).state_dict()
# CLIP ViT-B/16 (LAION-2B) teacher weights; this overwrites the assignment above, so pick the one you need.
teacher_weights = create_model("vit_base_patch16_clip_224.laion2b", pretrained=True).state_dict()

# DINOv2 model loaded via transformers.
dist_model = AutoModel.from_pretrained("facebook/dinov2-base")
```
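
Alternatively, as a minimal sketch of our own (not a repository utility), the same three checkpoints can be pre-fetched into the local cache with `huggingface_hub`, which also respects the `HF_ENDPOINT` mirror setting:

```python
# Optional: pre-fetch the teacher checkpoints into ~/.cache/huggingface/hub.
# Repo ids follow the cache listing above; export HF_ENDPOINT=https://hf-mirror.com first if needed.
from huggingface_hub import snapshot_download

for repo_id in (
    "facebook/dinov2-base",
    "laion/CLIP-ViT-B-16-laion2B-s34B-b88K",
    "timm/vit_base_patch14_dinov2.lvd142m",
):
    snapshot_download(repo_id=repo_id)
```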
### Stage I: Training of Visual Tokenizer
#### 🚀 Training Scripts
* $256\times 256$ MergeVQ-d64 (G+R) Tokenizer Training with multiple nodes:
```sh
bash scripts/train_tokenizer/run_256_GR_d64_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK
```
Or you can start training and evaluation on a single node, taking 8xA100-80G with a batch size of 16 and 2 gradient accumulation steps as an example:
```sh
bash scripts/train_tokenizer/run_256_GR_d64_single.sh
```

* $256\times 256$ MergeVQ-d96 (G+R) Tokenizer Training with multiple nodes:
```sh
bash scripts/train_tokenizer/run_256_GR_d96_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK
```
Or you can start training and evaluation on a single node, taking 8xA100-80G with a batch size of 16 and 2 gradient accumulation steps as an example:
```sh
bash scripts/train_tokenizer/run_256_GR_d96_single.sh
```

* $256\times 256$ MergeVQ-d64 (G) Tokenizer Training with multiple nodes:
```sh
bash scripts/train_tokenizer/run_256_G_d64_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK
```
Or you can start training and evaluation on a single node, taking 8xA100-80G with a batch size of 8 and 4 gradient accumulation steps as an example:
```sh
bash scripts/train_tokenizer/run_256_G_d64_single.sh
```
#### Evaluation Scripts
We gather the evaluation scripts for the experiments above into one bash file, which can be executed after modifying the paths to the config files, results, and checkpoints:
```sh
bash scripts/evaluation/evaluation_mergevq.sh
```
#### Notes on Errors
If errors occur during training, you may solve them with the following steps:
* The version of `timm`: low versions of `timm` such as `0.6.13` will cause errors when building Transformer blocks, which can be solved by `pip install timm==0.9.11`.
* Errors in building the ImageNet dataset: although the meta files of ImageNet will be generated automatically, you may copy our preprocessed meta files manually if they cannot be generated.
<!-- * The assertion error of `accumulate_grad_batches` from `lightning`. Since we manually use `accumulate_grad_batches` in config files to set up gradient accumulation, please replace the source file `configuration_validator.py` in lightning with our modified version:
```sh
cp -r scripts/.modify_lightning/configuration_validator.py /root/anaconda3/envs/maskgit/lib/python3.10/site-packages/lightning/pytorch/trainer/configuration_validator
``` -->
<!--
#### 🚀 Evaluation Scripts
* $128\times 128$ Tokenizer Evaluation
```
bash scripts/evaluation/evaluation_128.sh
```

* $256\times 256$ Tokenizer Evaluation
```
bash scripts/evaluation/evaluation_256.sh
``` -->
#### 🍺 Performance and Models (Updating)
**Tokenizer**
| Method | Type | #Tokens | Train Size | Epoch | Codebook Size | rFID (Full) | rFID (Merge) | Checkpoint |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Open-MAGVIT2 | 2D | $16^2$ | $256^2$ | 270 | $2^{18}$ | 1.53 (256) | - | [ckpt](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/imagenet_256_L.ckpt) |
| MergeVQ-d32 (G) | 1D | [256, 1024] | $256^2$ | 200 | $2^{18}$ | 0.48 (1024) | 0.80 (256) | TODO |
| MergeVQ-d64 (G) | 1D | [256, 1024] | $256^2$ | 100 | $2^{18}$ | 0.49 (1024) | 0.91 (256) | TODO |
| MergeVQ-d64 (G) | 1D | [256, 1024] | $256^2$ | 200 | $2^{18}$ | 0.43 (1024) | 0.83 (256) | TODO |
| MergeVQ-d32 (G+R) | 1D | [144, 256] | $256^2$ | 270 | $2^{18}$ | 1.27 (256) | 1.74 (144) | TODO |
| MergeVQ-d64 (G+R) | 1D | [144, 256] | $256^2$ | 270 | $2^{18}$ | 1.12 (256) | 1.48 (144) | TODO |
| MergeVQ-d96 (G+R) | 1D | [144, 256] | $256^2$ | 200 | $2^{18}$ | 1.03 (256) | 1.33 (144) | TODO |
### Stage II: Training of Auto-Regressive Models
#### 🚀 Training Scripts
169+
Please see `scripts/train_autogressive/run.sh` for different model configurations.
```sh
bash scripts/train_autogressive/run.sh MASTER_ADDR MASTER_PORT NODE_RANK
```
#### 🚀 Sample Scripts
Please see `scripts/train_autogressive/run.sh` for the sampling hyper-parameters of models at different scales.
```sh
bash scripts/evaluation/sample_npu.sh Your_Total_Rank
# or
bash scripts/evaluation/sample_gpu.sh Your_Total_Rank
```
<!-- #### 🍺 Performance and Models
| Method | Params | #Tokens | FID | IS | Checkpoint |
|:------:|:-----:|:-------:|:---:|:--:|:----------:|
| Open-MAGVIT2 | 343M | 16 $\times$ 16 | 3.08 | 258.26 | [AR_256_B](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/AR_256_B.ckpt) |
| Open-MAGVIT2 | 804M | 16 $\times$ 16 | 2.51 | 271.70 | [AR_256_L](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/AR_256_L.ckpt) |
| Open-MAGVIT2 | 1.5B | 16 $\times$ 16 | 2.33 | 271.77 | [AR_256_XL](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/AR_256_XL.ckpt) | -->
## License
This project is released under the [Apache 2.0 license](LICENSE).
## Acknowledgement
Our implementation is mainly based on the following codebases. We gratefully thank the authors for their wonderful works.
- [VQGAN](https://github.com/CompVis/taming-transformers): Taming Transformers for High-Resolution Image Synthesis.
- [ToMe](https://github.com/facebookresearch/ToMe): Token Merging: Your ViT but Faster.
- [LlamaGen](https://github.com/FoundationVision/LlamaGen): Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation.
- [SEED-Voken (OpenMAGVIT2)](https://github.com/TencentARC/SEED-Voken): SEED-Voken: A Series of Powerful Visual Tokenizers.
- [pytorch-image-models](https://github.com/rwightman/pytorch-image-models): PyTorch image models, scripts, pretrained weights.
## Citation
If you find this repository helpful, please consider citing:
```bibtex
@inproceedings{cvpr2025mergevq,
  title={MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization},
  author={Li, Siyuan and Zhang, Luyuan and Wang, Zedong and Tian, Juanxi and Tan, Cheng and Liu, Zicheng and Yu, Chang and Xie, Qingsong and Lu, Haonan and Wang, Haoqian and Lei, Zhen},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}
```
<p align="right">(<a href="#top">back to top</a>)</p>
