React Native ExecuTorch is a declarative way to run AI models in React Native on device, powered by ExecuTorch 🚀.
ExecuTorch is a novel framework created by Meta that enables running AI models on devices such as mobile phones or microcontrollers. React Native ExecuTorch bridges the gap between React Native and native platform capabilities, allowing developers to run AI models locally on mobile devices with state-of-the-art performance, without requiring deep knowledge of native code or machine learning internals.
Table of contents:
- Compatibility
- Ready-made models 🤖
- Documentation 📚
- Quickstart - Running Llama 🦙
- Minimal supported versions
- Examples 📲
- License
- What's next?
## Compatibility

React Native ExecuTorch supports only the New React Native architecture.

If your app still runs on the old architecture, please consider upgrading to the New Architecture.
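For example, in an Expo app the New Architecture can be switched on with a single flag in app.json (a minimal sketch; bare React Native projects instead set `newArchEnabled=true` in `android/gradle.properties` and install iOS pods with `RCT_NEW_ARCH_ENABLED=1 pod install`):

```json
{
  "expo": {
    "newArchEnabled": true
  }
}
```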
## Ready-made models 🤖

To run any AI model in ExecuTorch, you need to export it to a `.pte` format. If you're interested in experimenting with your own models, we highly encourage you to check out the Python API. If you prefer focusing on developing your React Native app, we will cover several common use cases. For more details, please refer to the documentation.
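A model you exported yourself can then be plugged into the same hooks used for the ready-made constants. A minimal sketch, assuming the source props also accept bundled `require(...)` assets (as described in the docs) and that your Metro config bundles `.pte` files; the asset paths below are hypothetical:

```tsx
import { useLLM } from 'react-native-executorch';

function CustomModelComponent() {
  // Swap the ready-made constants for your own exported files.
  // These asset paths are hypothetical examples.
  const llm = useLLM({
    modelSource: require('../assets/my_model.pte'),
    tokenizerSource: require('../assets/tokenizer.json'),
    tokenizerConfigSource: require('../assets/tokenizer_config.json'),
  });
  // ... rest of your component
}
```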
## Documentation 📚

Take a look at how our library can help you build your React Native AI features in our docs:
https://docs.swmansion.com/react-native-executorch
## Quickstart - Running Llama 🦙

Get started with AI-powered text generation in 3 easy steps!
```bash
# Install the package
yarn add react-native-executorch
cd ios && pod install && cd ..
```
Add this to your component file:
```tsx
import {
  useLLM,
  LLAMA3_2_1B,
  LLAMA3_2_TOKENIZER,
  LLAMA3_2_TOKENIZER_CONFIG,
} from 'react-native-executorch';

function MyComponent() {
  // Initialize the model 🚀
  const llama = useLLM({
    modelSource: LLAMA3_2_1B,
    tokenizerSource: LLAMA3_2_TOKENIZER,
    tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
  });

  // Run a chat completion and read back the response
  const handleGenerate = async () => {
    const chat = [
      { role: 'system', content: 'You are a helpful assistant' },
      { role: 'user', content: 'What is the meaning of life?' },
    ];

    // Chat completion
    await llama.generate(chat);
    console.log('Llama says:', llama.response);
  };

  // ... rest of your component
}
```
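To surface this in your UI, you can render the response and gate the button on the model's state. A sketch, assuming the hook also exposes `isReady` and `isGenerating` flags (check the docs for the exact shape of the returned object):

```tsx
import { Button, Text, View } from 'react-native';

// Inside MyComponent, after handleGenerate:
return (
  <View>
    <Button
      title="Ask Llama"
      onPress={handleGenerate}
      disabled={!llama.isReady || llama.isGenerating}
    />
    {/* llama.response updates as the model generates */}
    <Text>{llama.response}</Text>
  </View>
);
```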
## Minimal supported versions

The minimal supported versions are:
- iOS 17.0
- Android 13
## Examples 📲

We currently host a few example apps demonstrating use cases of our library:

- `apps/llm` - chat application showcasing use of LLMs
- `apps/speech-to-text` - Whisper and Moonshine models ready for transcription tasks
- `apps/computer-vision` - computer vision related tasks
- `apps/text-embeddings` - computing text representations for semantic search

If you would like to run one of them, navigate to its project directory (for example, `apps/llm`) from the repository root and install dependencies with:

```bash
yarn
```
And then, if you want to run on Android:

```bash
yarn expo run:android
```

or iOS:

```bash
yarn expo run:ios
```
Running LLMs requires a significant amount of RAM. If you are encountering unexpected app crashes, try to increase the amount of RAM allocated to the emulator.
## License

This library is licensed under The MIT License.
## What's next?

To learn about our upcoming plans and developments, please visit our discussion page.
Since 2012 Software Mansion is a software agency with experience in building web and mobile apps. We are Core React Native Contributors and experts in dealing with all kinds of React Native issues. We can help you build your next dream product. Hire us.