
🦖 DyNaGO – Dynamic Natural Gesture Operations

DyNaGO is a real-time, AI-powered human-computer interface driven by gesture recognition. It uses computer vision and machine learning to let users control their machines with natural, dynamic hand gestures—no special hardware required.

Whether for accessibility, low-interaction environments, or futuristic UI prototyping, DyNaGO delivers a lightweight, modular, and efficient solution for gesture-based computing.


✨ Features

  • 🔧 SVM + MediaPipe–based gesture classification
  • Dynamic velocity vector analysis for real-time gesture detection
  • 🎮 System command mapping: volume control, tab switching, app launch, and more (see the mapping sketch after this list)
  • 🖥️ Fully functional on standard webcams
  • 🧱 Modular architecture – easily expandable with new gestures or models
  • 🧪 Trained on 4,200+ gesture samples across 6 static classes
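
As a rough illustration of the command-mapping layer, the sketch below shows how recognized gestures might be dispatched to system actions. The key bindings and the pyautogui backend are assumptions for illustration, not DyNaGO's actual configuration.

# Hypothetical gesture-to-action mapping (illustrative bindings, not DyNaGO's real config)
import pyautogui  # assumed input-synthesis backend; DyNaGO may use different OS hooks

ACTIONS = {
    "point_up": lambda: pyautogui.press("volumeup"),        # raise system volume
    "point_down": lambda: pyautogui.press("volumedown"),     # lower system volume
    "two_fingers_right": lambda: pyautogui.hotkey("ctrl", "tab"),          # next tab
    "two_fingers_left": lambda: pyautogui.hotkey("ctrl", "shift", "tab"),  # previous tab
}

def dispatch(gesture: str, direction: str) -> None:
    """Fire the system command mapped to this (gesture, direction) pair, if any."""
    action = ACTIONS.get(f"{gesture}_{direction}")
    if action is not None:
        action()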

🧠 Dataset & Training Summary

  • Total Samples: 4291
  • Gestures: fist, two_fingers, three_fingers (2 types), pinch, point
  • Normalization: wrist-centered + scaled to unit sphere (illustrated in the sketch after this list)
  • Accuracy: 92.3%
  • Best Class: point (99.4%)
  • Weakest Class: pinch (72.3%)
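
The normalization step described above can be sketched as follows; the function name and array layout are assumptions, but the transform (wrist-centering, then scaling so the farthest landmark lies on the unit sphere) matches the description.

import numpy as np

def normalize_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """Wrist-center 21 MediaPipe hand landmarks and scale them to the unit sphere.

    landmarks: array of shape (21, 3), one (x, y, z) row per landmark,
    with index 0 being the wrist (MediaPipe convention).
    """
    centered = landmarks - landmarks[0]             # translate so the wrist sits at the origin
    scale = np.linalg.norm(centered, axis=1).max()  # distance of the farthest landmark from the wrist
    return centered / scale if scale > 0 else centered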

Confusion Matrix Preview:


🏗 System Architecture

  1. Initialization – Load webcam, environment, set base gesture
  2. Static Gesture Detection – Classify using MediaPipe landmarks + SVM
  3. Motion Vector Analysis – Track gesture trajectory using velocity between frames
  4. Action Mapping – Trigger system functions via OS hooks / APIs
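
Putting the four stages together, a minimal end-to-end loop might look like the sketch below. It assumes MediaPipe Hands and a scikit-learn SVM loaded with joblib; the model path is hypothetical, and it reuses the normalize_landmarks and dispatch helpers sketched earlier in this README.

import cv2
import joblib
import mediapipe as mp
import numpy as np

model = joblib.load("models/static_svm.joblib")    # assumed path to the trained SVM artifact
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)                          # 1. Initialization: open the default webcam

prev_wrist, prev_time = None, None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        continue
    pts = np.array([(lm.x, lm.y, lm.z) for lm in result.multi_hand_landmarks[0].landmark])
    gesture = model.predict(normalize_landmarks(pts).reshape(1, -1))[0]   # 2. Static gesture detection

    now = cv2.getTickCount() / cv2.getTickFrequency()
    if prev_wrist is not None:
        velocity = (pts[0] - prev_wrist) / (now - prev_time)    # 3. Motion vector between frames
        direction = "right" if velocity[0] > 0 else "left"      # crude horizontal direction estimate
        dispatch(gesture, direction)                            # 4. Action mapping via OS hooks
    prev_wrist, prev_time = pts[0], now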

🛠 Usage

Installation

git clone https://github.com/KreativeThinker/DyNaGO
cd DyNaGO
python -m venv .venv
source .venv/bin/activate
pip install poetry
poetry install

Commands

Command                    Task
poetry run capture         Capture training samples with label
poetry run normalize       Normalize and prepare dataset for training
poetry run train_static    Train SVM model
poetry run dev             Launch dynamic gesture predictor

See the full command list in pyproject.toml


📈 Experiment Highlights

Gesture          Accuracy   AUC    Confusions
point            99.4%      1.00   minor confusion with fist
pinch            72.3%      0.95   major confusion with palm and point
three_fingers    87.3%      1.00   some confusion with two_fingers

📊 See full report: Experiment Analysis
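
For reference, metrics like these can be reproduced with scikit-learn on a held-out split. The sketch below assumes file names and artifacts that are illustrative, not the project's actual evaluation script.

import joblib
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

model = joblib.load("models/static_svm.joblib")   # assumed output of `poetry run train_static`
X_test = np.load("data/X_test.npy")               # assumed held-out features (flattened landmarks)
y_test = np.load("data/y_test.npy")               # assumed held-out labels

y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred))      # per-class precision, recall, and accuracy
print(confusion_matrix(y_test, y_pred))           # e.g. where pinch gets confused with point

# Per-class AUC additionally needs probability scores, e.g. SVC(probability=True) + predict_proba.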


🎥 Demo

Coming soon — recording in progress. The demo will showcase real-time gesture use for volume control and workspace switching.


🌱 Future Work

  • Improved configuration file support
  • Hybrid dynamic gesture detection combining a lightweight SVM with velocity vector analysis
  • Complete cursor control
  • Real-time inference optimization (GPU support)
  • Multi-gesture chaining (command macros)
  • Browser-based version via TensorFlow.js
  • Integrated Audio Agent with custom function execution (branch voice)

👨‍💻 Author

Built by Anumeya Sehgal
✉ Email: [email protected]
🌐 LinkedIn: anumeya-sehgal


📜 License

MIT License – Free for use, distribution, and enhancement.
