diff --git a/README.md b/README.md
index 741d205..75317e0 100644
--- a/README.md
+++ b/README.md
@@ -120,6 +120,7 @@ This is the first work to correct hallucination in multimodal large language mod
| [**Emerging Properties in Unified Multimodal Pretraining**](https://arxiv.org/pdf/2505.14683) | arXiv | 2025-05-23 | [Github](https://github.com/bytedance-seed/BAGEL) | [Demo](https://demo.bagel-ai.org/) |
| [**MMaDA: Multimodal Large Diffusion Language Models**](https://arxiv.org/pdf/2505.15809) | arXiv | 2025-05-21 | [Github](https://github.com/Gen-Verse/MMaDA) | [Demo](https://huggingface.co/spaces/Gen-Verse/MMaDA) |
| [**UniGen: Enhanced Training & Test-Time Strategies for Unified Multimodal Understanding and Generation**](https://arxiv.org/pdf/2505.14682) | arXiv | 2025-05-20 | - | - |
+| [**MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO**](https://arxiv.org/pdf/2505.13031) | arXiv | 2025-05-19 | [Github](https://github.com/TencentARC/MindOmni) | [Demo](https://huggingface.co/spaces/stevengrove/MindOmni) |
| [**BLIP3-o: A Family of Fully Open Unified Multimodal Models-Architecture, Training and Dataset**](https://arxiv.org/pdf/2505.09568) | arXiv | 2025-05-14 | [Github](https://github.com/JiuhaiChen/BLIP3o) | Local Demo |
| [**Seed1.5-VL Technical Report**](https://arxiv.org/pdf/2505.07062) | arXiv | 2025-05-11 | - | - |
| [**Perception, Reason, Think, and Plan: A Survey on Large Multimodal Reasoning Models**](https://arxiv.org/pdf/2505.04921) | arXiv | 2025-05-08 | [Github](https://github.com/HITsz-TMG/Awesome-Large-Multimodal-Reasoning-Models) | - |