This repository was archived by the owner on Jan 23, 2026. It is now read-only.

Commit 5a775c5

chore: announce maintenance mode
1 parent 0f6d70e commit 5a775c5

File tree

1 file changed: +13 −0 lines changed


README.md

Lines changed: 13 additions & 0 deletions
@@ -9,6 +9,19 @@ Optimum-TPU
  [![Optimum TPU / Test TGI on TPU](https://github.com/huggingface/optimum-tpu/actions/workflows/test-pytorch-xla-tpu-tgi.yml/badge.svg)](https://github.com/huggingface/optimum-tpu/actions/workflows/test-pytorch-xla-tpu-tgi.yml)
  </div>

+ > [!CAUTION]
+ > **🚧 Optimum-TPU is now in maintenance mode.**
+ >
+ > We’ll continue to welcome community contributions for minor bug fixes, documentation improvements, and lightweight maintenance tasks.
+ >
+ > Optimum-TPU was created to make it easier to train and run inference on TPUs using 🤗 Transformers and 🤗 Accelerate. Thanks to everyone who has contributed and supported the project! ❤️
+ >
+ > While this repository is no longer under active development, you can continue exploring TPU solutions with:
+ > [tpu-inference](https://github.com/vllm-project/tpu-inference) for inference
+ > [🤗 Accelerate](https://github.com/huggingface/accelerate) for training
+ >
+ > Thank you for being part of the journey! 🚀
+
  [Tensor Processing Units (TPU)](https://cloud.google.com/tpu) are AI accelerators made by Google to optimize
  performance and cost from AI training to inference.
