A powerful React Native OCR (Optical Character Recognition) module powered by Google ML Kit. Supports multiple languages and scripts with selective model loading for optimized app size.
- 🌍 Multi-language support: Latin, Chinese, Devanagari, Japanese, and Korean scripts
- 📦 Selective model loading: Include only the languages you need to minimize app size
- ⚡ High performance: Powered by Google ML Kit's on-device text recognition
- 🔄 Flexible deployment: Choose between bundled models (offline) or unbundled models (download on demand)
- 📱 Cross-platform: Works on both iOS and Android
> Don't forget to hit the star ⭐ button
- iOS 15.5+ (Note: ML Kit iOS APIs run only on 64-bit devices.)
- Android API 23+
```sh
npm install rn-mlkit-ocr
# or
yarn add rn-mlkit-ocr
```

Run `pod install`:

```sh
cd ios && pod install
```

No additional setup required for Android.
By default, all language models are included. To optimize your app size, you can specify which models to include.
Add the plugin to your `app.json` or `app.config.js`:

```json
{
  "expo": {
    "plugins": [
      [
        "rn-mlkit-ocr",
        {
          "ocrModels": ["latin", "chinese", "devanagari", "japanese", "korean"],
          "ocrUseBundled": true
        }
      ]
    ]
  }
}
```

Add the following to your `android/build.gradle` file inside the `buildscript { ext { ... } }` block:
```groovy
buildscript {
  ext {
    // ... other configurations
    ocrModels = ["latin", "chinese", "devanagari", "japanese", "korean"]
    ocrUseBundled = true
  }
}
```

Add the following to your `ios/Podfile` before the `use_react_native!` call:
```ruby
# --- RN-MLKIT-OCR CONFIG ---
$ReactNativeOcrSubspecs = ['latin', 'chinese', 'devanagari', 'japanese', 'korean']
# --- END RN-MLKIT-OCR CONFIG ---
```

- `ocrModels`: Array of language models to include
  - Available options: `'latin'`, `'chinese'`, `'devanagari'`, `'japanese'`, `'korean'`, or `'all'`
  - Default: `['all']`
- `ocrUseBundled` (Android only): Whether to use bundled models
  - `true`: Models are bundled with the app (larger app size, works offline immediately)
  - `false`: Models are downloaded on first use (smaller app size, requires internet on first use)
  - Default: `false`
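To sanity-check a custom configuration, the option values above can be validated in plain TypeScript. The helper below is an illustrative sketch, not part of rn-mlkit-ocr's API: it expands `'all'` into the five concrete scripts and rejects unknown model names.

```typescript
// Illustrative helper (not part of rn-mlkit-ocr): normalize an ocrModels
// config value by expanding 'all' and rejecting unknown model names.
const ALL_MODELS = ['latin', 'chinese', 'devanagari', 'japanese', 'korean'] as const;
type OcrModel = (typeof ALL_MODELS)[number];

function normalizeOcrModels(models: string[]): OcrModel[] {
  // 'all' expands to every supported script
  if (models.includes('all')) return [...ALL_MODELS];
  for (const m of models) {
    if (!ALL_MODELS.includes(m as OcrModel)) {
      throw new Error(`Unknown OCR model: ${m}`);
    }
  }
  // Deduplicate while preserving order
  return [...new Set(models)] as OcrModel[];
}
```

A helper like this can run in a config plugin or a pre-build script, so a typo in the model list fails the build instead of silently shipping every model.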
```ts
import MlkitOcr from 'rn-mlkit-ocr';

const imageUri = 'file:///path/to/image.jpg'; // Local file path or remote URL

try {
  const result = await MlkitOcr.recognizeText(imageUri);
  console.log('Recognized text:', result.text);

  // Access detailed information
  result.blocks.forEach((block) => {
    console.log('Block:', block.text);
    block.lines.forEach((line) => {
      console.log('  Line:', line.text);
      line.elements.forEach((element) => {
        console.log('    Element:', element.text);
      });
    });
  });
} catch (error) {
  console.error('OCR Error:', error);
}
```

```ts
import MlkitOcr from 'rn-mlkit-ocr';

// Recognize Chinese text
const chineseResult = await MlkitOcr.recognizeText(imageUri, 'chinese');

// Recognize Japanese text
const japaneseResult = await MlkitOcr.recognizeText(imageUri, 'japanese');
```

```ts
import MlkitOcr from 'rn-mlkit-ocr';

const languages = await MlkitOcr.getAvailableLanguages();
console.log('Available languages:', languages);
// Output: ['latin', 'chinese', 'devanagari', 'japanese', 'korean']
```

`recognizeText(imageUri, detectorType?)`: Performs OCR on the specified image.
Parameters:

- `imageUri`: Path to the image (file path, content URI, or HTTP/HTTPS URL)
- `detectorType`: Optional language detector type (`'latin'`, `'chinese'`, `'devanagari'`, `'japanese'`, `'korean'`). Defaults to `'latin'`

Returns: Promise resolving to `OcrResult`
`getAvailableLanguages()`: Returns the list of language models available in the app based on your configuration.

Returns: Promise resolving to an array of detector types
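Together, the two methods support a run-time fallback: query the compiled-in models, then choose a detector accordingly. The `pickDetector` helper below is an illustrative sketch, not a library function; it assumes the `'latin'` default documented above.

```typescript
type DetectorType = 'latin' | 'chinese' | 'devanagari' | 'japanese' | 'korean';

// Illustrative helper (not part of rn-mlkit-ocr): choose the requested
// detector if its model is available, otherwise fall back to 'latin'.
function pickDetector(
  requested: DetectorType,
  available: DetectorType[]
): DetectorType {
  return available.includes(requested) ? requested : 'latin';
}

// Usage sketch:
// const available = await MlkitOcr.getAvailableLanguages();
// const result = await MlkitOcr.recognizeText(imageUri, pickDetector('korean', available));
```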
```ts
interface OcrResult {
  text: string; // Full recognized text
  blocks: OcrBlock[]; // Text blocks
}

interface OcrBlock {
  text: string;
  frame: OcrFrame;
  lines: OcrLine[];
}

interface OcrLine {
  text: string;
  frame: OcrFrame;
  elements: OcrElement[];
}

interface OcrElement {
  text: string;
  frame: OcrFrame;
}

interface OcrFrame {
  x: number;
  y: number;
  width: number;
  height: number;
}

type DetectorType = 'latin' | 'chinese' | 'devanagari' | 'japanese' | 'korean';
```

For a complete list of supported languages, see Google ML Kit Text Recognition Languages.
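Since every block, line, and element carries a `frame`, simple layout computations can be built on these types alone. The `unionFrames` helper below is an illustrative sketch, not a library function: it computes the smallest frame enclosing a set of frames, e.g. for highlighting all detected blocks at once.

```typescript
interface OcrFrame {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Illustrative helper (not part of rn-mlkit-ocr): compute the smallest
// frame that encloses all of the given frames, e.g. block.frame values.
function unionFrames(frames: OcrFrame[]): OcrFrame {
  if (frames.length === 0) throw new Error('No frames given');
  const left = Math.min(...frames.map((f) => f.x));
  const top = Math.min(...frames.map((f) => f.y));
  const right = Math.max(...frames.map((f) => f.x + f.width));
  const bottom = Math.max(...frames.map((f) => f.y + f.height));
  return { x: left, y: top, width: right - left, height: bottom - top };
}
```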
Check out the example app in the `example/` directory for a complete working implementation.
```sh
cd example
yarn install

# For iOS
cd ios && pod install && cd ..
yarn ios

# For Android
yarn android
```

If you encounter this error when running `pod install`:
```
command `pod install` failed.
└─ Cause: CocoaPods could not find compatible versions for pod "RnMlkitOcr":
  In Podfile:
    RnMlkitOcr (from `../node_modules/rn-mlkit-ocr`)
  Specs satisfying the `RnMlkitOcr (from `../node_modules/rn-mlkit-ocr`)`
  dependency were found, but they required a higher minimum deployment target.
```

Cause: This package requires iOS 15.5 or higher as the minimum deployment target.
Solution 1: For Expo Projects (Recommended)

Install and configure `expo-build-properties`:

```sh
npx expo install expo-build-properties
```

Add to your `app.json` or `app.config.js`:
```json
{
  "expo": {
    "plugins": [
      [
        "expo-build-properties",
        {
          "ios": {
            "deploymentTarget": "15.5"
          }
        }
      ]
    ]
  }
}
```

Solution 2: For React Native CLI Projects
Update your `ios/Podfile`:

```ruby
platform :ios, '15.5' # Update this line
```

Then run:

```sh
cd ios && pod install
```

If you encounter this error when building for the iOS simulator:
```
building for 'iOS-simulator', but linking in object file
(.../Pods/MLImage/Frameworks/MLImage.framework/MLImage[arm64][2](...))
built for 'iOS'
```
Cause: This occurs when the ML Kit framework ships an arm64 binary built for physical devices while you are building for the simulator.
Solution:
Add the following to your `ios/Podfile` inside the `post_install` hook:
```ruby
post_install do |installer|
  # ... other configurations
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = "arm64"
    end
  end
end
```

Then run:

```sh
cd ios && pod install
```

Contributions are welcome! Please feel free to submit a Pull Request.