This repository was archived by the owner on Jan 23, 2026. It is now read-only.
[](https://github.com/huggingface/optimum-tpu/actions/workflows/test-pytorch-xla-tpu-tgi.yml)
</div>
> [!CAUTION]
> **🚧 Optimum-TPU is now in maintenance mode.**
>
> We'll continue to welcome community contributions for minor bug fixes, documentation improvements, and lightweight maintenance tasks.
>
> Optimum-TPU was created to make it easier to train and run inference on TPUs using 🤗 Transformers and 🤗 Accelerate. Thanks to everyone who has contributed and supported the project! ❤️
>
> While this repository is no longer under active development, you can continue exploring TPU solutions with:
>
> • [tpu-inference](https://github.com/vllm-project/tpu-inference) for inference
> • [🤗 Accelerate](https://github.com/huggingface/accelerate) for training
>
> Thank you for being part of the journey! 🚀

[Tensor Processing Units (TPU)](https://cloud.google.com/tpu) are AI accelerators made by Google to optimize
performance and cost from AI training to inference.