

SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference
Peking University, UC Berkeley, Panasonic Holdings Corporation

SparseVLM+: Visual Token Sparsification with Improved Text-Visual Attention Pattern
Peking University, Tsinghua University

📜 News

🔥 [2025/12/11] We released the new version, SparseVLM+! It brings stronger performance with an improved text-visual attention pattern!

🔥 [2025/06/04] The sparsification code for VideoLLaVA is now open source! Please check the video branch.

🔥 [2025/05/01] Our SparseVLM is accepted by ICML 2025!

🔥 [2025/03/06] We released SparseVLM v1.5! Higher accuracy, a more flexible pruning scheme, and compatibility with FlashAttention 2!

🔥 [2024/10/15] We released SparseVLM and its Project Page! The Code is now open-source! Please check the v1.5 branch for the latest version.



👀 Overview

In vision-language models (VLMs), visual tokens usually account for a significant share of the computational overhead, despite carrying lower information density than text tokens. To address this, existing methods extract more compact image representations by modifying the image encoder or projector. While some recent works further sparsify vision tokens during decoding, they still ignore the guidance from the language tokens, which contradicts the multimodal paradigm. We argue that visual tokens should be sparsified adaptively based on the question prompt, since the model may focus on different parts of the image (e.g., foreground or background) for different questions, as shown in the figure below. Unlike previous methods with text-agnostic visual sparsification (c), e.g., the recent FastV, our SparseVLM (b) is guided by question prompts to select relevant visual patches.

[Figure: comparison of SparseVLM's text-guided visual token sparsification (b) with text-agnostic sparsification methods (c)]

👨‍💻 Preparation

  1. Clone this repository and navigate to the SparseVLMs folder
git clone https://github.com/Gumpest/SparseVLMs.git
cd SparseVLMs
  2. Install the necessary packages
conda create -n SparseVLMs python=3.10 -y
conda activate SparseVLMs
pip install -e .
pip install transformers==4.37.0
pip install flash_attn==2.3.3
  3. Download the multimodal benchmarks

Please follow the detailed instructions in LLaVA-Evaluation.

🎯 Basic Usage

The RETAIN_TOKN environment variable sets the number of visual tokens retained after the SparseVLM algorithm. Four token budgets are supported out of the box: 192, 128, 96, and 64. If a different number of tokens is required, please modify ./llava/model/language_model/score.py (see the sketch after the examples below).

  1. Example for evaluating MME results (retain 192 tokens):
RETAIN_TOKN=192 bash scripts/v1_5/eval/mme.sh
  2. Example for evaluating TextVQA results (retain 128 tokens):
RETAIN_TOKN=128 bash scripts/v1_5/eval/textvqa.sh
  3. Example for evaluating ScienceQA results (retain 96 tokens):
RETAIN_TOKN=96 bash scripts/v1_5/eval/sqa.sh
  4. Example for evaluating MMBench results (default 64 tokens):
RETAIN_TOKN=64 bash scripts/v1_5/eval/mmbench.sh
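
For reference, here is a minimal Python sketch of how a token budget passed through RETAIN_TOKN could be read and validated on the Python side. The helper name and validation logic are illustrative assumptions for this README, not the actual contents of score.py.

# Illustrative sketch only -- not the repository's actual score.py logic.
import os

SUPPORTED_BUDGETS = (192, 128, 96, 64)  # token counts supported out of the box

def get_retained_token_budget(default: int = 192) -> int:
    """Read the visual-token budget from the RETAIN_TOKN environment variable."""
    budget = int(os.environ.get("RETAIN_TOKN", default))
    if budget not in SUPPORTED_BUDGETS:
        # Other budgets require editing ./llava/model/language_model/score.py,
        # as noted above; here we simply fail loudly.
        raise ValueError(
            f"Unsupported RETAIN_TOKN={budget}; expected one of {SUPPORTED_BUDGETS}"
        )
    return budget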

🛠️ One-Click Enabling of SparseVLM+ (V2.0 Mode)

You can boost the performance of SparseVLM by switching to the V2.0 mode, which is enabled seamlessly via an environment variable, without modifying any code (see the sketch after the examples below).

  1. Example for evaluating MME results (retain 192 tokens):
USE_VERSION=2_0 RETAIN_TOKN=192 bash scripts/v1_5/eval/mme.sh
  2. Example for evaluating TextVQA results (retain 128 tokens):
USE_VERSION=2_0 RETAIN_TOKN=128 bash scripts/v1_5/eval/textvqa.sh
  3. Example for evaluating MMBench results (retain 96 tokens):
USE_VERSION=2_0 RETAIN_TOKN=96 bash scripts/v1_5/eval/mmbench.sh
  4. Example for evaluating GQA results (retain 64 tokens):
USE_VERSION=2_0 RETAIN_TOKN=64 bash scripts/v1_5/eval/gqa.sh
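
As a rough illustration of how such a one-click switch can work, the sketch below shows an environment-variable check in Python. The function name and default value are assumptions for illustration, not taken from the repository's code.

# Illustrative sketch only -- not the repository's actual implementation.
import os

def use_sparsevlm_plus() -> bool:
    """Return True when USE_VERSION requests the V2.0 (SparseVLM+) mode."""
    return os.environ.get("USE_VERSION", "1_5") == "2_0"

# The sparsification code can then branch on this flag at runtime, e.g.
# selecting the improved text-visual attention pattern when it returns True.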

License

This project is released under the Apache 2.0 license.

Citation

If you use SparseVLM in your research, please cite our work using the following BibTeX entries:

@inproceedings{zhang2024sparsevlm,
  title={SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference},
  author={Zhang, Yuan and Fan, Chun-Kai and Ma, Junpeng and Zheng, Wenzhao and Huang, Tao and Cheng, Kuan and Gudovskiy, Denis and Okuno, Tomoyuki and Nakata, Yohei and Keutzer, Kurt and others},
  booktitle={International Conference on Machine Learning},
  year={2025}
}
@article{zhangsparsevlm+,
  title={SparseVLM+: Visual Token Sparsification with Improved Text-Visual Attention Pattern},
  author={Zhang, Yuan and Ma, Junpeng and Zhang, Qizhe and Fan, Chun-Kai and Zheng, Wenzhao and Cheng, Kuan and Lu, Jiwen and Zhang, Shanghang}
}

Acknowledgment

We extend our gratitude to the open-source efforts of TCFormer, LLaVA, MiniGemini and VideoLLaVA.
