Language barriers remain a daily challenge for many people, especially those who cannot speak or hear. DetectX aims to bridge this gap by translating sign language into text in real time, providing an accessible communication tool.
DetectX uses the TensorFlow Object Detection API, OpenCV, and Python to build an end-to-end solution for real-time sign language detection.
- Real-time detection and translation of sign language into text.
- Built using TensorFlow, OpenCV, and Python.
- End-to-end pipeline from image collection to model deployment.
- Image Collection:
- Collect images of sign language gestures using a webcam and OpenCV.
- Data Labeling:
- Label the collected images with LabelImg to prepare the dataset.
- Model Training:
- Configure the TensorFlow Object Detection pipeline.
- Use transfer learning to train the model for accurate detection.
- Real-Time Detection:
- Integrate the trained model with OpenCV for real-time sign language detection.
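The labeling and training steps above assume a label map that assigns each gesture class an integer id, as required by the TensorFlow Object Detection API. A minimal sketch (the gesture names are placeholders, not labels from this repo):

```python
# Sketch: generate a label_map.pbtxt for the TensorFlow Object Detection API.
# The gesture names below are hypothetical; use the labels from your dataset.
GESTURES = ["hello", "thanks", "yes", "no"]

def make_label_map(names):
    """Return label_map.pbtxt text; ids start at 1 (0 is reserved)."""
    entries = []
    for idx, name in enumerate(names, start=1):
        entries.append(
            "item {\n"
            f"  id: {idx}\n"
            f"  name: '{name}'\n"
            "}\n"
        )
    return "\n".join(entries)

with open("label_map.pbtxt", "w") as f:
    f.write(make_label_map(GESTURES))
```

The same label map file is referenced by both the training configuration and the real-time detection script.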
- Clone the repository:
git clone https://github.com/your-username/DetectX.git
- Navigate to the project directory:
cd DetectX
- Install dependencies:
pip install -r requirements.txt
- Collect Images: Run the script to capture images of gestures using your webcam:
python collect_images.py
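A collection script of this kind typically loops over gesture classes and saves webcam frames under uniquely named files. The contents of `collect_images.py` are not shown here; the following is a hedged sketch of the usual pattern, assuming a webcam on device 0:

```python
# Sketch of a gesture-image collection loop (not the repo's actual script).
import os
import time
import uuid

LABELS = ["hello", "thanks", "yes", "no"]  # placeholder gesture names
IMAGES_PER_LABEL = 15

def image_path(root, label, unique_id):
    """Build a per-label path like root/hello/hello.<id>.jpg."""
    return os.path.join(root, label, f"{label}.{unique_id}.jpg")

def collect(root="collected_images"):
    import cv2  # imported here so image_path stays usable without OpenCV
    cap = cv2.VideoCapture(0)  # webcam device 0 (an assumption)
    try:
        for label in LABELS:
            os.makedirs(os.path.join(root, label), exist_ok=True)
            print(f"Collecting images for {label!r}; get ready...")
            time.sleep(3)  # time to move into position
            for _ in range(IMAGES_PER_LABEL):
                ok, frame = cap.read()
                if not ok:
                    raise RuntimeError("webcam read failed")
                cv2.imwrite(image_path(root, label, uuid.uuid4().hex), frame)
                time.sleep(1)  # pause so poses can vary between captures
    finally:
        cap.release()

# collect()  # uncomment to run with a webcam attached
```

Varying hand position and lighting between captures tends to improve how well the trained detector generalizes.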
- Label Data: Use the LabelImg tool to annotate your images for model training.
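LabelImg can export annotations in Pascal VOC XML format, and flattening those files into label/box tuples is a common precursor to generating TFRecords. A sketch of parsing one annotation (the XML layout follows the Pascal VOC convention, not a file from this repo):

```python
# Parse a Pascal VOC annotation (as written by LabelImg) into (label, box) tuples.
import xml.etree.ElementTree as ET

def parse_voc(xml_text):
    """Return a list of (class_name, xmin, ymin, xmax, ymax) from VOC XML."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((
            obj.findtext("name"),
            int(bb.findtext("xmin")),
            int(bb.findtext("ymin")),
            int(bb.findtext("xmax")),
            int(bb.findtext("ymax")),
        ))
    return boxes

# Example annotation of the kind LabelImg produces:
SAMPLE = """<annotation>
  <filename>hello.0001.jpg</filename>
  <object>
    <name>hello</name>
    <bndbox><xmin>48</xmin><ymin>30</ymin><xmax>260</xmax><ymax>310</ymax></bndbox>
  </object>
</annotation>"""
```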
- Train the Model: Train the TensorFlow Object Detection model with the labeled dataset:
python train_model.py
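Transfer learning with the TensorFlow Object Detection API is usually driven by a `pipeline.config` file rather than training code alone. The fragment below illustrates the fields most often edited; the paths and the base checkpoint are assumptions for illustration, not values from this repo:

```
model {
  ssd {
    num_classes: 4  # one per gesture in your label map
    ...
  }
}
train_config {
  batch_size: 4
  fine_tune_checkpoint: "pre-trained/ssd_mobilenet_v2/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
}
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader { input_path: "annotations/train.record" }
}
```

Starting from a pre-trained checkpoint lets the model reach usable accuracy with far fewer labeled gesture images than training from scratch would require.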
- Real-Time Detection: Run the real-time detection script:
python detect_sign_language.py
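The core of a real-time detection loop reads a frame, runs the detector, and keeps only detections above a confidence threshold before drawing them. That thresholding step can be isolated as a pure function; the names here are illustrative, not taken from `detect_sign_language.py`:

```python
# Filter raw detector outputs down to confident (label, score, box) results.
def filter_detections(scores, classes, boxes, category_index, threshold=0.5):
    """Keep detections with score >= threshold.

    scores, classes, and boxes are parallel sequences as returned by a
    TF object detector; category_index maps class id -> readable label.
    """
    results = []
    for score, cls, box in zip(scores, classes, boxes):
        if score >= threshold:
            results.append((category_index.get(cls, "unknown"), score, box))
    return results
```

In the full loop, each kept box would then be drawn on the frame (e.g. with `cv2.rectangle`) along with its label text before the frame is displayed.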
- TensorFlow: For building and training the object detection model.
- OpenCV: For capturing images and performing real-time detection.
- Python: Backend programming and scripting.
- LabelImg: Data annotation tool for creating labeled datasets.
- Add support for more sign language gestures.
- Integrate with text-to-speech for enhanced accessibility.
- Expand the model to support multiple sign languages.
Contributions are welcome! Please fork the repository, make your changes, and submit a pull request.
Special thanks to the open-source community and resources that made this project possible. Inspired by the idea of breaking down communication barriers one step at a time.