AbductiveMLLM: Boosting Visual Abductive Reasoning within MLLMs

  📖 ArXiv    │   📖 Appendix    │   🤗 Models

🔥 News

  • [2025/11/11] We release our code on GitHub.
  • [2025/11/08] Our work is accepted to AAAI 2026 as Oral presentation 🎉!

🔎 Overview

Visual abductive reasoning (VAR) is a challenging task that requires AI systems to infer the most likely explanation for incomplete visual observations. While recent MLLMs have developed strong general-purpose multimodal reasoning capabilities, they still fall short of humans in abductive inference. To bridge this gap, we draw inspiration from the interplay between verbal and pictorial abduction in human cognition, and propose to strengthen the abduction of MLLMs by mimicking this dual-mode behavior.

Concretely, we introduce AbductiveMLLM, comprising two synergistic components: REASONER and IMAGINER. The REASONER operates in the verbal domain: it first explores a broad space of possible explanations using a blind LLM, then prunes visually incongruent hypotheses based on cross-modal causal alignment. The remaining hypotheses are introduced into the MLLM as targeted priors, steering its reasoning toward causally coherent explanations. The IMAGINER, on the other hand, further guides the MLLM by emulating human-like pictorial thinking: it conditions a text-to-image diffusion model on both the input video and the REASONER's output embeddings to "imagine" plausible visual scenes corresponding to the verbal explanations, thereby enriching the MLLM's contextual grounding. The two components are trained jointly in an end-to-end manner. Experiments on standard VAR benchmarks show that AbductiveMLLM achieves state-of-the-art performance, consistently outperforming traditional solutions and advanced MLLMs.
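To illustrate the pruning idea in the REASONER, here is a toy sketch (not the paper's implementation; `prune_hypotheses`, the embeddings, and the cosine criterion are all illustrative assumptions): candidate explanation embeddings are scored against a video embedding, and only the best-aligned hypotheses are kept as priors.

```python
import numpy as np

# Toy illustration of pruning visually incongruent hypotheses:
# score each candidate explanation embedding against a video embedding
# and keep only the best-aligned ones. This is NOT the repo's code;
# all names and the cosine criterion are assumptions for illustration.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def prune_hypotheses(video_emb, hypothesis_embs, keep=2):
    scores = [cosine(video_emb, h) for h in hypothesis_embs]
    order = np.argsort(scores)[::-1]  # highest alignment first
    return [int(i) for i in order[:keep]]

rng = np.random.default_rng(1)
video = rng.normal(size=8)
hyps = [
    video + 0.01 * rng.normal(size=8),  # well aligned with the video
    rng.normal(size=8),                 # unrelated hypothesis
    -video,                             # directly opposed hypothesis
]
print(prune_hypotheses(video, hyps, keep=1))
```

In the actual model, the surviving hypotheses would then be injected into the MLLM prompt as targeted priors rather than returned as indices.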

🛠️ Set up

Installation

git clone https://github.com/ChangPtR/AbdMLLM.git
cd AbdMLLM

conda create -n abdmllm python=3.10
conda activate abdmllm

pip install -r requirements.txt

After that, you can optionally install flash-attention from a prebuilt wheel matching your CUDA and PyTorch versions.

Dataset

  • For the VAR dataset, download it from the original VAR repo.
  • For the YouCookII dataset, download it from the official page or Huggingface. We re-partition the original training and validation sets and adapt YouCookII to the same format as VAR.

The train/val split files for both datasets and other model input files are under /resources.

📈 Inference & Evaluation

Inference

We release LoRA adapters fine-tuned on the two datasets on Huggingface. First download the weights of the base model Qwen/Qwen2-VL-7B-Instruct from Huggingface, then follow the example on the Model Card to merge the adapters into the base model.
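Conceptually, merging a LoRA adapter folds the low-rank update into the frozen base weights, W' = W + (α/r)·BA, so the merged model needs no extra matmul at inference time. A minimal numerical sketch of this identity (a toy check with NumPy, not the Model Card's merge procedure):

```python
import numpy as np

# Conceptual LoRA merge: W' = W + (alpha / r) * B @ A
# Shapes: W is (d_out, d_in); A is (r, d_in); B is (d_out, r).
# Toy dimensions for illustration only.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 4, 6, 2, 16

W = rng.normal(size=(d_out, d_in))  # frozen base weight
A = rng.normal(size=(r, d_in))      # LoRA down-projection
B = rng.normal(size=(d_out, r))     # LoRA up-projection

W_merged = W + (alpha / r) * (B @ A)  # fold the adapter into the base

# The merged weight reproduces the adapted forward pass exactly:
x = rng.normal(size=(d_in,))
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
assert np.allclose(W_merged @ x, y_adapter)
```

In practice the Model Card example handles this per-layer for the real checkpoint; the sketch above only shows why merging is lossless.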

Inference code:

python eval_qwen2vl.py

Evaluation Procedure

We follow the evaluation procedure in VAR. Run the command below:

python -m eval_kit.evaluate_models path/to/your/inference_result.json

🙏 Acknowledgements

We gratefully acknowledge the contributions of the open-source community, particularly VAR and SimDA.

📚 Citations

If you find this work helpful, please consider citing:

@article{chang2026abductivemllm,
  title={AbductiveMLLM: Boosting Visual Abductive Reasoning Within MLLMs},
  author={Chang, Boyu and Wang, Qi and Guo, Xi and Nan, Zhixiong and Yao, Yazhou and Zhou, Tianfei},
  journal={arXiv preprint arXiv:2601.02771},
  year={2026}
}
