
Commit 4d57515

albanD authored and facebook-github-bot committed
Add caution message to readme before the repo gets archived. (#330)
Summary: This clarifies the maintenance status of multipy: in particular, that it has been unmaintained for quite some time and there are no plans to invest in it going forward. This is especially true with Free Threaded CPython becoming a reality, which makes the multithreading limitations that multipy was trying to solve moot. For users looking at LLM-like workloads, we also have much better solutions today (in particular vLLM), as they are more efficient and simpler than multipy. You can see the rendering of the caution message at https://github.com/pytorch/multipy/tree/warn_dead

Pull Request resolved: #330
Reviewed By: malfet
Differential Revision: D77989684
Pulled By: albanD
fbshipit-source-id: e24e1249da351a8415ec50819767372a50a5e065
1 parent 0b9d624 commit 4d57515
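
The core technical claim in the summary is that a free-threaded (no-GIL) CPython removes the serialization that torch::deploy's multiple interpreters were built to work around. As a rough illustration only (not part of this commit), multi-threaded inference on a free-threaded CPython 3.13+ build (e.g. `python3.13t`) looks like ordinary threading; the model, sizes, and thread count below are made up, and PyTorch's support for free-threaded builds may still be experimental depending on the version you use:

```python
# Illustrative sketch only (not from this commit): multi-threaded inference
# on a free-threaded CPython 3.13+ build. Model, batch sizes, and thread
# count are placeholders.
import sys
import threading

import torch


def run_inference(model: torch.nn.Module, batch: torch.Tensor, out: list, i: int) -> None:
    # On a free-threaded build these threads are not serialized by a GIL,
    # which is the limitation torch::deploy's per-interpreter GILs avoided.
    with torch.inference_mode():
        out[i] = model(batch)


def main() -> None:
    # sys._is_gil_enabled() exists on CPython 3.13+; on older versions we
    # conservatively assume the GIL is on.
    gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
    print(f"GIL enabled: {gil_enabled}")

    model = torch.nn.Linear(16, 4).eval()
    batches = [torch.randn(8, 16) for _ in range(4)]
    results: list = [None] * len(batches)

    threads = [
        threading.Thread(target=run_inference, args=(model, b, results, i))
        for i, b in enumerate(batches)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print([tuple(r.shape) for r in results])


if __name__ == "__main__":
    main()
```

On a standard GIL build the same code runs, but the Python-level work in the threads is serialized, which is the bottleneck multipy's multiple independent interpreters were designed to sidestep.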

File tree

1 file changed (+6 -0 lines)


README.md

Lines changed: 6 additions & 0 deletions
@@ -3,6 +3,12 @@
 
 # `torch::deploy` (MultiPy)
 
+> [!CAUTION]
+> MultiPy has been unmaintained for some time and is going to be archived soon. We recommend
+> that users look at the new [Free Threaded CPython](https://docs.python.org/3/howto/free-threading-python.html) version, available starting with CPython 3.13, as a long-term solution for efficient multi-threaded inference in CPython.
+> For users looking to serve LLMs on servers, we recommend higher-level solutions such as [vLLM](https://docs.vllm.ai/en/latest/) as a good alternative.
+
+
 `torch::deploy` (MultiPy for non-PyTorch use cases) is a C++ library that enables you to run eager-mode PyTorch models in production without any modifications to your model to support tracing. `torch::deploy` provides a way to run code using multiple independent Python interpreters in a single process without a shared global interpreter lock (GIL). For more information on how `torch::deploy` works
 internally, please see the related [arXiv paper](https://arxiv.org/pdf/2104.00254.pdf).
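
The unchanged paragraph in the diff above describes torch::deploy's model: several independent CPython interpreters in one process, each loading the same packaged model. As a purely illustrative aside (not part of this commit), a model is typically packaged for multipy with `torch.package`, matching the `"model"`/`"model.pkl"` naming used in multipy's examples; the archive name and model below are placeholders:

```python
# Illustrative sketch only (not from this commit): packaging a model with
# torch.package so multipy's C++ runtime can load it. The archive name and
# model are placeholders.
import torch
from torch.package import PackageExporter

model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
).eval()

with PackageExporter("my_model.pt") as exporter:
    # torch itself is provided by the loading environment; any custom Python
    # dependencies would need explicit intern()/extern() rules here.
    exporter.save_pickle("model", "model.pkl", model)
```

On the C++ side, multipy's examples then load such an archive into a pool of interpreters (e.g. via `torch::deploy::InterpreterManager` and the package's `loadPickle`), which is exactly the setup the new caution message steers users away from.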
