TT-Forge-FE is a graph compiler designed to optimize and transform computational graphs for deep learning models, enhancing their performance and efficiency.
- Getting Started / How to Run a Model
- Build - Use these instructions if you plan to do development work.
TT-Forge-FE is a front-end component within the broader TT-Forge ecosystem, which is designed to compile and execute machine learning models on Tenstorrent hardware platforms like Wormhole and Blackhole. TT-Forge-FE can ingest models from various machine learning frameworks, including ONNX and TensorFlow, through the TVM Intermediate Representation (IR). It can also ingest models from PyTorch; however, it is recommended that you use TT-XLA for PyTorch. TT-Forge-FE does not support multi-chip configurations; it is for single-chip projects only.
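To give a concrete sense of the flow, here is a minimal sketch along the lines of the Getting Started demo; the module and tensor shapes are illustrative, and the `forge.compile` / `sample_inputs` API should be checked against the current docs:

```python
import torch
import forge

# A trivial PyTorch module, used purely for illustration.
class Add(torch.nn.Module):
    def forward(self, a, b):
        return a + b

a = torch.rand(2, 32, 32)
b = torch.rand(2, 32, 32)

# Compile the module for Tenstorrent hardware; the sample inputs let
# the compiler trace shapes and data types.
compiled_model = forge.compile(Add(), sample_inputs=[a, b])

# Run the compiled model.
out = compiled_model(a, b)
```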
The TT-Forge ecosystem includes the following frontends:

TT-XLA
- The primary frontend for running PyTorch and JAX models. It leverages a PJRT interface to integrate JAX (and, in the future, other frameworks), TT-MLIR, and Tenstorrent hardware. It supports ingestion of JAX models via jit compile, providing a StableHLO (SHLO) graph to the TT-MLIR compiler. TT-XLA can be used for single- and multi-chip projects.
- See the TT-XLA docs pages for an overview and getting started guide.
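As a rough sketch of what JAX ingestion looks like, assuming the tt-xla PJRT plugin is installed (the exact platform name and setup steps are assumptions; see the TT-XLA docs):

```python
import jax
import jax.numpy as jnp

# With the tt-xla PJRT plugin installed, Tenstorrent devices appear
# through JAX's standard device API (the exact platform name may
# differ; consult the TT-XLA docs).
print(jax.devices())

# jax.jit traces the function to a StableHLO (SHLO) graph, which the
# plugin hands to the TT-MLIR compiler.
@jax.jit
def scaled_matmul(a, b):
    return (a @ b) * 0.5

a = jnp.ones((32, 32))
b = jnp.ones((32, 32))
print(scaled_matmul(a, b))
```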
TT-Forge-FE
- A TVM-based graph compiler designed to optimize and transform computational graphs for deep learning models. It supports ingestion of ONNX, TensorFlow, PaddlePaddle, and similar ML frameworks via TVM (TT-TVM). It also supports ingestion of PyTorch; however, it is recommended that you use TT-XLA for PyTorch. Single-chip projects only; multi-chip configurations are not supported.
- See the TT-Forge-FE docs pages for an overview and getting started guide.
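For example, an exported ONNX model can be fed through the TVM ingestion path. This is a sketch only: the model path is a placeholder, and the assumption that `forge.compile` accepts a loaded ONNX model directly should be verified against the TT-Forge-FE docs:

```python
import onnx
import torch
import forge

# Load a previously exported ONNX model (placeholder path).
onnx_model = onnx.load("model.onnx")

# Assumption: forge.compile routes a loaded ONNX model through the
# TVM (TT-TVM) ingestion path when given sample inputs.
inputs = [torch.rand(1, 3, 224, 224)]
compiled_model = forge.compile(onnx_model, sample_inputs=inputs)
output = compiled_model(*inputs)
```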
TT-Torch (deprecated)
- An MLIR-native, open-source, PyTorch 2.X and torch-mlir based frontend. It provides StableHLO (SHLO) graphs to TT-MLIR. It supports ingestion of PyTorch models via PT2.X compile and ONNX models via torch-mlir (ONNX -> SHLO).
- See the TT-Torch docs pages for an overview and getting started guide.
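For reference, PT2.X ingestion looked roughly like the following; this is a sketch only, since the project is deprecated, and the backend name is an assumption:

```python
import torch

model = torch.nn.Linear(32, 32)

# torch.compile captures the FX graph and dispatches it to the
# TT-Torch backend, which lowers it to StableHLO for TT-MLIR. The
# backend name "tt" is an assumption; check the TT-Torch docs.
compiled = torch.compile(model, backend="tt")
out = compiled(torch.rand(8, 32))
```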
You can run a demo using the TT-Forge-FE Getting Started page. The projects that make up the TT-Forge ecosystem are:
- TT-XLA - (single and multi-chip) For use with PyTorch and JAX
- TT-Forge-FE - (single chip only) For use with TensorFlow, ONNX, and PaddlePaddle. It also runs PyTorch; however, TT-XLA is recommended for PyTorch
- TT-MLIR - Open source compiler framework for compiling and optimizing machine learning models for Tenstorrent hardware
- TT-Metal - Low-level programming model, enabling kernel development for Tenstorrent hardware
- TT-TVM - A compiler stack for deep learning systems, designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends
- TT-Torch - (Deprecated) Previously for use with PyTorch. It is recommended that you use TT-XLA for PyTorch.
This repo is a part of Tenstorrent’s bounty program. If you are interested in helping to improve tt-forge, please make sure to read the Tenstorrent Bounty Program Terms and Conditions before heading to the issues tab. Look for issues tagged with both “bounty” and a difficulty level!