Hand gestures have become an increasingly important input modality as Artificial Intelligence and Machine Learning have matured, with applications ranging from assistive communication for the deaf to robot control and medical work. This report presents a modern approach to recognizing American Sign Language (ASL) with computers, covering both static and dynamic hand signs. Google's MediaPipe framework is used to locate the hand, a custom dataset was collected for training and evaluation, and a Long Short-Term Memory (LSTM) network built with TensorFlow classifies the signs with strong accuracy. This is a meaningful step toward making people who cannot speak better understood.

Communication is essential for people who cannot hear or speak. They rely on sign language, but few hearing people understand it; computers can help by learning to recognize sign language and translate it into words. Systems of this kind already exist for Indian Sign Language (ISL), but they are slow. This paper describes a system that recognizes American Sign Language in real time: a regular camera captures the signer, and the model is trained quickly using transfer learning. Even with a limited number of examples, the system performed well, which is promising for improving communication accessibility.
Sign language recognition aims to bridge communication gaps between the deaf and hearing communities by translating sign language into text. This technology detects and interprets the intricate hand movements and gestures inherent in sign language. Its purpose is to facilitate real-time communication for the deaf, enabling them to interact more seamlessly with others, access educational resources, and participate fully in society. Sign language recognition systems strive for accuracy and efficiency, empowering individuals with hearing impairments to express themselves fluently and be understood by a wider audience.
● Emergency Situations: In emergency situations, clear communication is essential for ensuring the safety and well-being of all individuals involved. Sign language recognition allows emergency responders to communicate with deaf individuals quickly and accurately, providing instructions, assistance, and reassurance during crises such as natural disasters or medical emergencies.
● Education: In education, sign language recognition facilitates teaching and learning for both deaf and hearing students, providing accessible resources such as online tutorials, interactive lessons, and digital textbooks.
● Personal Development: Sign language recognition encourages personal development and self-expression among deaf individuals by providing them with tools to communicate confidently and effectively in various contexts. It empowers them to pursue educational, professional, and personal goals with independence and agency.
● Public Services: Sign language recognition technology enhances access to public services such as transportation, government offices, and social welfare programs. It ensures that deaf individuals can communicate effectively with service providers, access information, and avail themselves of essential services without facing communication barriers or discrimination.
● Accessibility: Sign language recognition enhances accessibility for the deaf and hard of hearing, allowing them to engage with digital content, communicate through technology, and access various services independently.
● Healthcare: In the healthcare sector, accurate communication is crucial for providing quality care. Sign language recognition assists healthcare professionals in communicating with deaf patients, ensuring that medical information is conveyed accurately and patients' needs are understood and addressed properly.
The dataset was created for American Sign Language, where the signs correspond to the letters of the English alphabet, following the data acquisition method described in Section 8 (Result and Discussion). The experimentation was carried out on a system with an AMD Ryzen 5 5500H 3.30 GHz processor, 8 GB of memory, and a webcam (HP TrueVision HD camera, 0.31 MP, 640x480 resolution), running the Windows 11 operating system. The programming environment includes Python (version 3.9.0), Jupyter Notebook, OpenCV (version 4.2.0), and the TensorFlow Object Detection API.
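To make the acquisition step concrete, here is a minimal sketch of how per-frame hand keypoints can be extracted from the webcam with MediaPipe and OpenCV. The report only says MediaPipe is used, so the choice of the Holistic solution, the function name `extract_keypoints`, and the hands-only landmark selection are illustrative assumptions, not the project's exact code:

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic  # pose, face, and hand landmark models

def extract_keypoints(results):
    """Flatten left/right hand landmarks into one fixed-length vector.
    Hands that are out of frame are zero-filled so every frame has the
    same shape (21 landmarks x 3 coordinates per hand = 126 values)."""
    lh = (np.array([[p.x, p.y, p.z] for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z] for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([lh, rh])

cap = cv2.VideoCapture(0)  # default webcam
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    ret, frame = cap.read()
    if ret:
        # MediaPipe expects RGB input; OpenCV captures frames in BGR
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        keypoints = extract_keypoints(results)
        print(keypoints.shape)  # (126,)
cap.release()
```

Repeating this over a short burst of frames yields one keypoint sequence per sign, which is what the LSTM consumes.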
- Long Short-Term Memory (LSTM) (see the model sketch after this list)
- Python: Version 3.9.0
- Jupyter Notebook
- OpenCV
- Time
- Scikit-learn
- Mlxtend
- TensorFlow API
- MediaPipe
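The LSTM listed above can be assembled in TensorFlow/Keras roughly as follows. This is a sketch: the sequence length, layer widths, and 26-class output (one per ASL alphabet sign) are assumptions for illustration, not the project's verified configuration.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 30   # frames per gesture clip (assumed)
NUM_KEYPOINTS = 126    # 2 hands x 21 landmarks x (x, y, z)
NUM_CLASSES = 26       # one class per ASL alphabet sign

model = Sequential([
    # return_sequences=True passes the full frame-by-frame output onward
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQUENCE_LENGTH, NUM_KEYPOINTS)),
    LSTM(128, return_sequences=True, activation='relu'),
    # the last LSTM collapses the sequence into a single summary vector
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(NUM_CLASSES, activation='softmax'),  # probability per sign
])
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
model.summary()
```

Training then reduces to calling `model.fit` on the collected keypoint sequences and their one-hot sign labels.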
Let's start with the installation needed to run this project.
Below are the steps to install and run it:
- Clone the repo
git clone https://github.com/IMMORTAL-blip/SignLanguageDetectionUsingLSTM.git
- Install the required packages
pip install -r requirements.txt
- Change directory
cd SignLanguageDetectionUsingLSTM
- Run app.py
python app.py
Enjoy!!!
## Demonstration
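For a sense of what the demo does at runtime, below is a minimal sketch of the real-time loop, reusing the assumed names (`holistic`, `extract_keypoints`, `model`) from the sketches above plus a hypothetical label list `actions`. It keeps a sliding window of the last 30 keypoint frames and classifies the window on every new frame:

```python
import cv2
import numpy as np
# Assumes `holistic`, `extract_keypoints`, a trained `model`, and the
# label list `actions` from the sketches above are already defined.

sequence = []  # sliding window of per-frame keypoint vectors
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    sequence.append(extract_keypoints(results))
    sequence = sequence[-30:]  # keep only the most recent 30 frames
    if len(sequence) == 30:
        # the model expects a batch: shape (1, 30, 126)
        probs = model.predict(np.expand_dims(sequence, axis=0), verbose=0)[0]
        cv2.putText(frame, actions[np.argmax(probs)], (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('ASL recognition', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```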
## License

Distributed under the MIT License. See the MIT License for more information.
## Contact

Paresh Gupta - [email protected]
Project Link: https://github.com/IMMORTAL-blip/SignLanguageDetectionUsingLSTM

