How can I use the generated pte file to process my own data and predict the results? #8220
-
`auto train_loader = torch::data::make_data_loader(` Is this correct? And then how do we process the data with the model?
Is it the right approach to write this code the libtorch way?
Replies: 3 comments
-
A PTE file is the model represented in a way the ExecuTorch runtime can consume. The runtime accepts data / input tensors at runtime and runs "forward" on the device. Modify this to pass your inputs to the method.
-
I mean, before converting to the pte file, my model's input is processed by PyTorch's DataLoader: `data_loader = DataLoader(dataset=dataset,` So in executor_runner.cpp, can I include libtorch and process the data with PyTorch's C++ API, like `#include <torch/torch.h>`? If that is possible, how do I use libtorch inside ExecuTorch successfully?
-
Hi,
I solved it in the following way. You can also have a look here: https://github.com/ChristophKarlHeck/mbed-torch-fusion-os/tree/main