We address two questions: how to create a poster from a paper, and how to evaluate a poster.
PosterAgent is a top-down, visual-in-the-loop multi-agent system that turns a paper.pdf into an editable poster.pptx.
Parallelization is now supported! Simply pass the --max_workers hyperparameter, e.g. --max_workers=5.
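For instance, the flag can be combined with the generation command shown later in this README (the paths and model choices here mirror the examples below; adjust them to your setup):

```shell
python -m PosterAgent.new_pipeline \
--poster_path="${dataset_dir}/${paper_name}/paper.pdf" \
--model_name_t="4o" \
--model_name_v="4o" \
--poster_width_inches=48 \
--poster_height_inches=36 \
--max_workers=5
```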
Our Paper2Poster supports both local deployment (via vLLM) and API-based access (e.g., GPT-4o).
Python Environment
pip install -r requirements.txt

Install LibreOffice

sudo apt install libreoffice

Or, if you do not have sudo access, download the soffice executable directly from https://www.libreoffice.org/download/download-libreoffice/ and add the executable directory to your $PATH.
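If you take the no-sudo route, updating $PATH is a one-line shell sketch; the install location below is a placeholder (use wherever you actually extracted LibreOffice):

```shell
# Placeholder location: point this at the directory containing the soffice executable.
export PATH="$HOME/libreoffice/program:$PATH"
```

Add the line to your shell profile (e.g. ~/.bashrc) to make it persistent.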
Install poppler
conda install -c conda-forge poppler

API Key
Create a .env file in the project root and add your OpenAI API key:
OPENAI_API_KEY=<your_openai_api_key>

Create a folder named {paper_name} under {dataset_dir}, and place your paper inside it as a PDF file named paper.pdf:
📁 {dataset_dir}/
└── 📁 {paper_name}/
└── 📄 paper.pdf
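The layout above can be set up from the command line; a minimal sketch with placeholder names (replace the final touch with a copy of your real PDF):

```shell
dataset_dir="my_dataset"   # placeholder
paper_name="my_paper"      # placeholder

# Create the per-paper folder.
mkdir -p "${dataset_dir}/${paper_name}"

# In practice: cp /path/to/your/paper.pdf "${dataset_dir}/${paper_name}/paper.pdf"
touch "${dataset_dir}/${paper_name}/paper.pdf"
```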
To use open-source models, first deploy them with vLLM, making sure the port is correctly specified in the get_agent_config() function in utils/wei_utils.py.
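As an illustration only (the model ID and port are assumptions, not project defaults), a Qwen-2.5-7B-Instruct server with an OpenAI-compatible endpoint can be launched with vLLM's CLI; match the port to whatever get_agent_config() expects:

```shell
# Assumed model ID and port; align the port with get_agent_config() in utils/wei_utils.py.
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
```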
- [High Performance] Generate a poster with GPT-4o:
python -m PosterAgent.new_pipeline \
--poster_path="${dataset_dir}/${paper_name}/paper.pdf" \
--model_name_t="4o" \
--model_name_v="4o" \
--poster_width_inches=48 \
--poster_height_inches=36

(--model_name_t selects the LLM; --model_name_v selects the VLM.)

- [Economic] Generate a poster with Qwen-2.5-7B-Instruct and GPT-4o:
python -m PosterAgent.new_pipeline \
--poster_path="${dataset_dir}/${paper_name}/paper.pdf" \
--model_name_t="vllm_qwen" \
--model_name_v="4o" \
--poster_width_inches=48 \
--poster_height_inches=36 \
--no_blank_detection

(--no_blank_detection disables blank detection.)

- [Local] Generate a poster with Qwen-2.5-7B-Instruct:
python -m PosterAgent.new_pipeline \
--poster_path="${dataset_dir}/${paper_name}/paper.pdf" \
--model_name_t="vllm_qwen" \
--model_name_v="vllm_qwen_vl" \
--poster_width_inches=48 \
--poster_height_inches=36

PosterAgent supports flexible combinations of LLMs and VLMs; feel free to try other options, or customize your own settings in get_agent_config() in utils/wei_utils.py.
Download the Paper2Poster evaluation dataset via:

python -m PosterAgent.create_dataset

During evaluation, papers are stored under a directory called Paper2Poster-data.
To evaluate a generated poster with PaperQuiz:
python -m Paper2Poster-eval.eval_poster_pipeline \
--paper_name="${paper_name}" \
--poster_method="${model_t}_${model_v}_generated_posters" \
--metric=qa # PaperQuiz

To evaluate a generated poster with VLM-as-Judge:
python -m Paper2Poster-eval.eval_poster_pipeline \
--paper_name="${paper_name}" \
--poster_method="${model_t}_${model_v}_generated_posters" \
--metric=judge # VLM-as-Judge

To evaluate a generated poster with other statistical metrics (such as visual similarity, PPL, etc.):
python -m Paper2Poster-eval.eval_poster_pipeline \
--paper_name="${paper_name}" \
--poster_method="${model_t}_${model_v}_generated_posters" \
--metric=stats # statistical measures

If you want to create a PaperQuiz for your own paper:
python -m Paper2Poster-eval.create_paper_questions \
--paper_folder="Paper2Poster-data/${paper_name}"

We extend our gratitude to 🐫CAMEL, 🦉OWL, Docling, and PPTAgent for providing their codebases.
Please kindly cite our paper if you find this project helpful.
@misc{paper2poster,
title={Paper2Poster: Towards Multimodal Poster Automation from Scientific Papers},
author={Wei Pang and Kevin Qinghong Lin and Xiangru Jian and Xi He and Philip Torr},
year={2025},
eprint={2505.21497},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.21497},
}
