Mixing PyTorch and TensorFlow #835
-
Apologies for the basic question. I gather it is possible to use both PyTorch and TensorFlow models with DJL, but it isn't clear to me whether the corresponding engine needs to be used for each. For example, if I train a model in PyTorch and export it via TorchScript, can I predict in Java using the MXNet/TensorFlow engine? Or does each type of model have to be run on the corresponding engine? Would it be possible to predict with a PyTorch model and a TensorFlow model on the same engine at the same time? Many thanks
Replies: 1 comment 1 reply
-
Hi @pyhh It is possible, but you can only use the PyTorch engine to run inference with a TorchScript model. It is not possible to use the MXNet or TensorFlow engine to run a PyTorch model.
As you mentioned, you can use models from two different engines, say one from MXNet and the other from PyTorch. All you need to do is import both the MXNet and PyTorch dependencies and declare `.optEngine` on each model's Criteria. Once this is done, DJL will load each model with the right engine and you can run inference with both at the same time. Here is an example using multiple engines at the same time: https://github.com/aws-samples/djl-demo/tree/master/multi-engine. It demonstrates how to run MXNet + PyTorch models at the same time.
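For reference, here is a minimal sketch of what this could look like: a TorchScript model loaded with the PyTorch engine and an MXNet model loaded with the MXNet engine in the same JVM, with both engine dependencies (e.g. `ai.djl.pytorch:pytorch-engine` and `ai.djl.mxnet:mxnet-engine` plus their native artifacts) on the classpath. The model paths, translator setup, and image URL are placeholders, so adjust them to your own models:

```java
import java.nio.file.Paths;

import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.modality.cv.transform.Resize;
import ai.djl.modality.cv.transform.ToTensor;
import ai.djl.modality.cv.translator.ImageClassificationTranslator;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;
import ai.djl.translate.Translator;

public class MultiEngineExample {

    public static void main(String[] args) throws Exception {
        // A simple image-classification translator shared by both models
        // (assumes a synset.txt with class labels next to each model).
        Translator<Image, Classifications> translator =
                ImageClassificationTranslator.builder()
                        .addTransform(new Resize(224, 224))
                        .addTransform(new ToTensor())
                        .build();

        // TorchScript model: must run on the PyTorch engine.
        Criteria<Image, Classifications> ptCriteria = Criteria.builder()
                .setTypes(Image.class, Classifications.class)
                .optEngine("PyTorch")                             // pick the engine explicitly
                .optModelPath(Paths.get("models/resnet18.pt"))    // placeholder path
                .optTranslator(translator)
                .build();

        // MXNet model: must run on the MXNet engine.
        Criteria<Image, Classifications> mxCriteria = Criteria.builder()
                .setTypes(Image.class, Classifications.class)
                .optEngine("MXNet")                               // pick the engine explicitly
                .optModelPath(Paths.get("models/resnet18-mxnet")) // placeholder path
                .optTranslator(translator)
                .build();

        Image img = ImageFactory.getInstance()
                .fromUrl("https://resources.djl.ai/images/kitten.jpg");

        // Both engines are loaded in the same JVM; each model runs on its own engine.
        try (ZooModel<Image, Classifications> ptModel = ptCriteria.loadModel();
             ZooModel<Image, Classifications> mxModel = mxCriteria.loadModel();
             Predictor<Image, Classifications> ptPredictor = ptModel.newPredictor();
             Predictor<Image, Classifications> mxPredictor = mxModel.newPredictor()) {

            System.out.println("PyTorch prediction: " + ptPredictor.predict(img));
            System.out.println("MXNet prediction:   " + mxPredictor.predict(img));
        }
    }
}
```

Depending on your DJL version you may need `ModelZoo.loadModel(criteria)` instead of `criteria.loadModel()`; the multi-engine demo linked above shows the complete project setup.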