-
How do I set up the XNNPACK backend in React Native?

Hey ExecuTorch community! I'm slowly working out how to use ExecuTorch in a React Native app for Android and iOS. While experimenting with XNNPACK (following the Llama demo readme and the react-native example app), I've been trying to piece together what's needed to integrate XNNPACK into React Native. I'm aware of the react-native-executorch library, but its docs don't specify how to use the XNNPACK backend inside that project structure. How can I achieve this? Currently I have exported my Llama 3.2 1B model to .pte and have the tokenizer, but I'm unsure what else I need in order to include the XNNPACK backend.
Replies: 2 comments
-
Also asked on the ExecuTorch Discord channel for React Native: https://discord.com/channels/1334270993966825602/1337888330162897048
-
Hi @sskarz, XNNPACK is used out of the box as long as your model was exported for XNNPACK; no further steps are needed on your side. If you want to integrate Llama into your React Native app, we highly recommend using our useLLM hook. The initializing section is all you need to get started, and if you use the constants shipped with the library, the models will run on XNNPACK. Let me know if you have any further questions!
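For reference, here is a minimal sketch of what that looks like in a component. The useLLM hook is what the reply above recommends; the specific constant names (LLAMA3_2_1B, LLAMA3_2_TOKENIZER) and the option/return field names are assumptions based on the library's documented pattern, so verify them against the react-native-executorch docs for your installed version:

```typescript
import React from 'react';
import { Text } from 'react-native';
// NOTE: the constant names below are assumptions; check the exports of
// your installed react-native-executorch version.
import { useLLM, LLAMA3_2_1B, LLAMA3_2_TOKENIZER } from 'react-native-executorch';

export function ChatScreen() {
  // The bundled constants point at XNNPACK-lowered .pte files, so the
  // XNNPACK backend is picked up automatically when the model loads.
  // A self-exported .pte (e.g. your Llama 3.2 1B) should work the same
  // way, provided it was lowered to XNNPACK at export time.
  const llm = useLLM({
    modelSource: LLAMA3_2_1B,
    tokenizerSource: LLAMA3_2_TOKENIZER,
  });

  // Field names (isReady, response) are also assumptions for illustration.
  return <Text>{llm.isReady ? llm.response : 'Loading model...'}</Text>;
}
```

The key point is that there is no separate native-side XNNPACK setup in the React Native project: whether the delegate runs is decided at export time, when the model is lowered to XNNPACK, not at app configuration time.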