Announcement on Compatibility between MinerU 2.5 and the New vLLM Version #3548
myhloli announced in Announcements
(User comment, translated from Chinese:) Hey, I'd like to know: inside a container from the quay.io/ascend/vllm-ascend:v0.10.2rc1 image, after installing mineru[core], how do I get VLM acceleration? Previously I launched it with FastAPI in Huawei's MindIE image, using pipeline mode: mineru-api --host 0.0.0.0 --port 8009. Now, as I understand it, the backend should be set to vlm-pipeline.
Recently, we officially released the MinerU 2.5 model with significantly improved performance and migrated the inference acceleration framework from sglang to vLLM. This upgrade leverages vLLM's richer ecosystem to improve compatibility with mainstream platforms, enabling more users to easily access state-of-the-art document parsing capabilities.

In the latest vLLM 0.10.2 release, the V1 engine has finally added support for Turing-architecture and earlier GPUs, an important step forward in hardware compatibility. However, this release raises the required PyTorch version to 2.8.0. We previously observed compatibility issues between PyTorch 2.8.0 and our pipeline backend (see GitHub Discussion #3337), which led us to cap the PyTorch version at <2.8.0.

To let as many users as possible experience the powerful performance of MinerU 2.5, we have now removed the upper limit on the PyTorch version in our latest release. We have also invested significant effort in adapting the pipeline backend to PyTorch 2.8.0, aiming to minimize performance degradation while maintaining functional stability. Nevertheless, some compatibility issues remain unavoidable.
To help you achieve the best possible experience, we provide the following recommendations based on your deployment method:
🔧 Users Installing via uv/pip

If you primarily use the pipeline backend and encounter the issues above, we recommend downgrading PyTorch to version 2.7.1 to avoid compatibility problems:
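A minimal sketch of the downgrade, assuming a pip-based install (use `uv pip install` if you manage the environment with uv); the torchvision pin is an assumption about the release that matches torch 2.7.1:

```shell
# Pin PyTorch back to 2.7.1 for best pipeline-backend compatibility.
# torchvision 0.22.1 is assumed to be the companion release for torch 2.7.1.
pip install "torch==2.7.1" "torchvision==0.22.1"
```

After reinstalling, confirm the active version with `python -c "import torch; print(torch.__version__)"`.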
This version offers optimal compatibility with the current pipeline backend and restores previous performance levels.
🐳 Docker Users
The default base image in the current Dockerfile is vllm/vllm-openai:v0.10.1.1, which ships PyTorch 2.7.1, so it is not affected by the PyTorch 2.8.0 compatibility issues, and we recommend most users continue using this version. If you need the new vLLM 0.10.2 features (such as V1-engine support for Turing and earlier GPUs), update your base image to:
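A sketch of the corresponding Dockerfile change; the exact newer tag is whatever the MinerU release notes specify, shown here only as a placeholder:

```dockerfile
# Default base image (PyTorch 2.7.1, unaffected by the 2.8.0 issues):
FROM vllm/vllm-openai:v0.10.1.1

# To pick up vLLM 0.10.2 features, swap in the newer tag instead
# (placeholder -- check the MinerU release notes for the exact tag):
# FROM vllm/vllm-openai:<newer-tag>
```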
We are committed to achieving the best possible balance among performance, compatibility, and stability. Thank you for your understanding and support. We welcome your feedback via GitHub to help us further improve the project.
— The MinerU Team