VoxIgnota is an Android application that translates American Sign Language (ASL) gestures into text in real time using the device's camera. It leverages on-device machine learning to provide a seamless translation experience.
Note
This is not a complete app yet, but it does work.
Sign detection currently fires too fast to be usable (predictions change almost every frame), though the recognition itself works.
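One common way to tame overly fast per-frame detection is to commit a character only after the model predicts the same label for several consecutive frames. The sketch below is illustrative only (the class name, threshold, and API are assumptions, not the app's actual code):

```java
// Sketch: stabilize rapid per-frame predictions by requiring the same
// label for N consecutive frames before committing it.
// Class name and threshold are illustrative, not taken from the app.
public class PredictionDebouncer {
    private final int requiredStreak;
    private String lastLabel = null;
    private int streak = 0;

    public PredictionDebouncer(int requiredStreak) {
        this.requiredStreak = requiredStreak;
    }

    /**
     * Feed one per-frame prediction. Returns the label exactly once,
     * when it has been seen requiredStreak times in a row; otherwise null.
     */
    public String offer(String label) {
        if (label.equals(lastLabel)) {
            streak++;
        } else {
            lastLabel = label;
            streak = 1;
        }
        return (streak == requiredStreak) ? label : null;
    }
}
```

With a streak of, say, 5 frames at ~30 fps, a sign must be held for roughly a sixth of a second before a character is emitted, which makes the output far easier to read.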
- Real-time Sign to Text: Translates ASL signs captured from the camera into text instantly.
- Camera Switching: Easily switch between front and back cameras.
- History: Save the translated text for future reference.
- On-Device Inference: All processing happens locally on the device, ensuring privacy and offline functionality.
The application uses the CameraX library to access the camera feed. Each frame is preprocessed (resized, converted to grayscale) and fed into a TensorFlow Lite model (asl_model.tflite). The model predicts the ASL sign, and the corresponding character is displayed on the screen. The recognized text can be saved into a local Room database.
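The resize-and-grayscale step described above can be sketched in plain Java as below. The input size (28x28), pixel layout, and normalization are illustrative assumptions; in the actual app this runs on CameraX frames before they reach the TensorFlow Lite interpreter:

```java
// Sketch of the frame preprocessing step: convert packed RGB pixels to
// grayscale and resize to the model's input size with nearest-neighbor
// sampling. Dimensions and normalization are illustrative assumptions.
public class FramePreprocessor {
    /** Luminance (0..255) of one packed 0xRRGGBB pixel. */
    static int gray(int rgb) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        return (int) (0.299 * r + 0.587 * g + 0.114 * b);
    }

    /** Nearest-neighbor resize of a row-major grayscale image, normalized to [0,1]. */
    static float[] resize(int[] src, int srcW, int srcH, int dstW, int dstH) {
        float[] out = new float[dstW * dstH];
        for (int y = 0; y < dstH; y++) {
            int sy = y * srcH / dstH;               // nearest source row
            for (int x = 0; x < dstW; x++) {
                int sx = x * srcW / dstW;           // nearest source column
                out[y * dstW + x] = src[sy * srcW + sx] / 255f;
            }
        }
        return out;
    }
}
```

The resulting float array (e.g. 28x28 values in [0,1]) is the kind of tensor a small TFLite classifier typically expects as input.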
- Android SDK (Java)
- CameraX: For camera operations and image analysis.
- TensorFlow Lite: For on-device machine learning inference.
- Room Persistence Library: For storing translation history.
- Material Components: For UI elements.
To build and run this project, follow these steps:
- Clone the repository:
git clone https://github.com/Akshay-86/Sign_Translator.git
- Open in Android Studio:
- Open Android Studio.
- Click on File > Open and select the cloned project directory.
- Build and Run:
- Let Android Studio sync the Gradle files.
- Connect an Android device or start an emulator.
- Click the Run 'app' button.
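If you prefer the command line, the standard Gradle wrapper tasks should also work (assuming the default Android project layout and a configured Android SDK):

```shell
# Build the debug APK (output lands under app/build/outputs/apk/debug/)
./gradlew assembleDebug

# Install it on a connected device or running emulator
./gradlew installDebug
```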
- Ch. Vindhya - UI design.
- ChatGPT, Gemini 2.5 - general coding assistance.
- All my teammates - data gathering.
- Pretrained model (asl_model.tflite) used in this project is taken from the original repo by Idara-Abasi Udoh.
Contributions are welcome! Feel free to submit a pull request or open an issue, but please credit the original authors.