Commit c538590
committed: add readme and webpage
1 parent 26b2bcc commit c538590

File tree

2 files changed (+446, −420 lines)


README.md

Lines changed: 7 additions & 1 deletion
@@ -3,11 +3,17 @@
 Welcome to the official repository for **LLM2CLIP**! This project leverages large language models (LLMs) as powerful textual teachers for CLIP's visual encoder, enabling more nuanced and comprehensive multimodal learning.
 
 [![Paper](https://img.shields.io/badge/Paper-arXiv-red)](https://arxiv.org/abs/2411.04997) [![Project Homepage](https://img.shields.io/badge/Project-Homepage-blue)](https://aka.ms/llm2clip) [![HuggingFace Collection](https://img.shields.io/badge/HuggingFace-Collection-orange)](https://huggingface.co/collections/microsoft/llm2clip-672323a266173cfa40b32d4c)
-**Paper:** Accepted to NeurIPS 2024 Workshop: Self-Supervised Learning - Theory and Practice and AAAI 2026
+**Paper:** Accepted to NeurIPS 2024 Workshop: Self-Supervised Learning – Theory and Practice, and AAAI 2026 (**Outstanding Paper Award**)
+
 
 ---
 
 ## News 🚀🚀🚀
+- **[2026-01-23]** 🎉 **LLM2CLIP received the AAAI 2026 Outstanding Paper Award!**
+  Our work was recognized by AAAI for its contribution to multimodal representation learning, highlighting the effectiveness of leveraging large language models as textual teachers to significantly enhance CLIP-style visual representations.
+  👉 [AAAI 2026 Conference Paper Awards and Recognition](https://aaai.org/about-aaai/aaai-awards/aaai-conference-paper-awards-and-recognition/)
+- **[2025-03-25]** 🔥 **SigLIP2 models updated with LLM2CLIP training.**
+  The new SigLIP2-based checkpoints show **substantial improvements** in both **short- and long-text image retrieval**, as well as **multilingual text–image retrieval**, further validating the scalability and generality of the LLM2CLIP framework.
 - **[2024-11-18]** Our Caption-Contrastive finetuned Llama3-8B-CC is released on [HuggingFace](https://huggingface.co/microsoft/LLM2CLIP-Llama-3-8B-Instruct-CC-Finetuned); we will try to release more versions.
 - **[2024-11-08]** We are currently training a **scaled-up** version with ten times the training dataset, along with upcoming updates: EVA ViT-E, InternVL-300M, SigCLIP-SO-400M, and more VLLM results trained with LLM2CLIP. Stay tuned for the most powerful CLIP models, and thank you for your star!
 - **[2024-11-06]** OpenAI's CLIP and EVA02's ViT base and large models are now available on [HuggingFace](https://huggingface.co/collections/microsoft/llm2clip-672323a266173cfa40b32d4c).
