<p align="center">
  <img src="https://github.com/FNBUBBLES420-ORG/game-vision-aid/blob/main/banner/Game_Vision_Aid.png" alt="Game Vision Aid Banner" width="300"/>
</p>

## 🎯 Real-Time Object Detection Overlay with YOLO, TensorRT, and BetterCam

This project enables **real-time object detection** using **YOLOv5/YOLOv8**, with support for **PyTorch**, **ONNX**, and **TensorRT** models. Bounding boxes are rendered using a **transparent overlay** that displays on top of your game or any screen — perfect for assistive tools, AI research, and accessibility-focused applications.

---

## 🧠 Components

| File | Description |
|-------------|-------------|
| `main.py` | Core logic that captures frames, runs detection using the selected model, and displays bounding boxes via an overlay. |
| `config.py` | User-configurable settings: choose model type, set paths, screen dimensions, GPU support, overlay transparency, and more. |
| `overlay.py`| Creates a transparent, always-on-top, click-through overlay window that displays detection bounding boxes over your game or screen (see the sketch below). |
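
The overlay described above is a transparent, always-on-top, click-through window. As a rough illustration only (not the project's actual `overlay.py`, and it relies on `pywin32`, which is not in the dependency list below), such a window can be built with tkinter plus the Win32 extended window styles:

```python
# Minimal sketch of a transparent, always-on-top, click-through overlay (assumption:
# tkinter + pywin32; the real overlay.py may be implemented differently).
import tkinter as tk
import win32con
import win32gui

root = tk.Tk()
root.overrideredirect(True)                      # no title bar or borders
root.geometry("1920x1080+0+0")                   # cover the primary monitor
root.attributes("-topmost", True)                # stay above the game window
root.config(bg="magenta")
root.attributes("-transparentcolor", "magenta")  # magenta pixels become see-through
root.update_idletasks()

# Make the window click-through so mouse events pass to the game underneath.
hwnd = win32gui.GetParent(root.winfo_id())
style = win32gui.GetWindowLong(hwnd, win32con.GWL_EXSTYLE)
win32gui.SetWindowLong(hwnd, win32con.GWL_EXSTYLE,
                       style | win32con.WS_EX_LAYERED | win32con.WS_EX_TRANSPARENT)

canvas = tk.Canvas(root, bg="magenta", highlightthickness=0)
canvas.pack(fill="both", expand=True)
canvas.create_rectangle(100, 100, 300, 250, outline="red", width=2)  # example detection box

root.mainloop()
```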

---

## ⚙️ Features

✅ Supports **YOLOv5 / YOLOv8** via Ultralytics
✅ Use **PyTorch (.pt)**, **ONNX (.onnx)**, or **TensorRT (.engine)** models
✅ Automatic GPU detection: **NVIDIA (CUDA)**, **AMD (DirectML)**, or **CPU fallback**
✅ Seamless integration with **BetterCam** for real-time screen capture
✅ Transparent overlay using **Win32 API** (minimal FPS impact)
✅ Multi-monitor support and full resolution control
✅ Smooth, real-time bounding box rendering
✅ Fully configurable through `config.py`

---

## 🛠️ Requirements

Install dependencies:

```bash
pip install ultralytics opencv-python numpy torch torchvision torchaudio torch-directml onnx onnxruntime onnxruntime-directml onnx-simplifier pycuda tensorrt colorama customtkinter requests pandas cupy bettercam
```
- ⚠️ `BetterCam` must be installed or available in your environment. If it's a custom module, ensure it's in your Python path.
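
To confirm BetterCam can capture your screen before wiring it into the detector, you can run a quick standalone check. This is a sketch based on BetterCam's DXcam-style API; option names may differ in your version:

```python
# Minimal BetterCam capture check (assumes the DXcam-style API: create() / grab()).
import bettercam

camera = bettercam.create()   # defaults to the primary GPU and primary monitor
frame = camera.grab()         # numpy array of shape (H, W, 3), or None if nothing on screen changed
if frame is not None:
    print("Captured frame:", frame.shape)
else:
    print("No frame captured - move a window and call grab() again.")
```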

## 🧩 Configuration (`config.py`)
### ✨ Model Type

- Choose the model engine to use:
```python
modelType = 'torch'  # Options: 'torch', 'onnx', 'engine'
```

### 🧠 Model Paths

- Point to your desired model files:

```python
torchModelPath = 'models/yolov8n.pt'
onnxModelPath = 'models/yolov8n.onnx'
tensorrtModelPath = 'models/yolov8n.engine'
```
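
For context, here is one way the selected `modelType` and path could map to a loader. This is an illustrative sketch, not the actual code in `main.py`, and it assumes `config.py` is importable from the working directory:

```python
# Illustrative backend selection based on config.py (not the project's actual loader).
import config

if config.modelType == 'torch':
    from ultralytics import YOLO
    model = YOLO(config.torchModelPath)                      # Ultralytics loads .pt weights directly
elif config.modelType == 'onnx':
    import onnxruntime as ort
    model = ort.InferenceSession(config.onnxModelPath,
                                 providers=ort.get_available_providers())
elif config.modelType == 'engine':
    import tensorrt as trt
    logger = trt.Logger(trt.Logger.WARNING)
    with open(config.tensorrtModelPath, 'rb') as f, trt.Runtime(logger) as runtime:
        model = runtime.deserialize_cuda_engine(f.read())    # returns an ICudaEngine
```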

### 🎥 Screen & Overlay Settings

- Adjust the capture resolution, overlay size, transparency, and monitor index:

```python
screenWidth = 640
screenHeight = 640
overlayWidth = 1920
overlayHeight = 1080
overlayAlpha = 200  # 0–255 (higher = more opaque)
```
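
Note that detections are computed on the captured frame (`screenWidth` × `screenHeight`) but drawn on the overlay (`overlayWidth` × `overlayHeight`), so box coordinates have to be mapped between the two. Whether the project scales or crops the capture is not shown here; the sketch below assumes a simple proportional scale:

```python
# Assumed proportional mapping from capture resolution to overlay resolution.
screenWidth, screenHeight = 640, 640
overlayWidth, overlayHeight = 1920, 1080

scale_x = overlayWidth / screenWidth     # 3.0
scale_y = overlayHeight / screenHeight   # 1.6875

def to_overlay_coords(x1, y1, x2, y2):
    """Map a box from the captured frame onto the overlay window."""
    return (int(x1 * scale_x), int(y1 * scale_y),
            int(x2 * scale_x), int(y2 * scale_y))

print(to_overlay_coords(100, 100, 300, 250))  # -> (300, 168, 900, 421)
```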

### ⚡ GPU Support

- These flags are informational; the program detects your GPU automatically at runtime:
```python
useCuda = True       # CUDA (NVIDIA)
useDirectML = True   # DirectML (AMD)
```
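
Automatic detection typically boils down to probing CUDA first, then DirectML, then falling back to CPU. A hedged sketch of that order (the project's exact logic may differ):

```python
# Rough device-selection order: NVIDIA CUDA -> AMD DirectML -> CPU fallback.
import torch

def pick_device():
    if torch.cuda.is_available():
        return torch.device("cuda")        # NVIDIA GPU via CUDA
    try:
        import torch_directml              # only present when torch-directml is installed
        return torch_directml.device()     # AMD (or other DX12) GPU via DirectML
    except ImportError:
        return torch.device("cpu")         # CPU fallback

print("Using device:", pick_device())
```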

## 🚀 Running the Program

- Once your configuration is set, launch the overlay:

```bash
python main.py
```

### You’ll be prompted to start your game. After pressing Enter:

- Your screen will begin real-time capture

- The selected model will run detection

- Bounding boxes will appear in the transparent overlay

- Press `Q` anytime to exit safely

## 📸 Supported Hardware

| GPU Type | Engine Used |
|-------------|------------------------------|
| NVIDIA | CUDA / TensorRT |
| AMD Radeon | DirectML (ONNX + PyTorch DML)|
| CPU Only | ONNX CPU Execution |
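
To see which of these engines ONNX Runtime can actually use on your machine, you can list and rank the execution providers. This is a generic check, not project code; provider availability depends on whether you installed `onnxruntime`, `onnxruntime-gpu`, or `onnxruntime-directml`:

```python
# List ONNX Runtime execution providers and build a preferred order matching the table above.
import onnxruntime as ort

available = ort.get_available_providers()
preferred = [p for p in ("TensorrtExecutionProvider",   # NVIDIA + TensorRT
                         "CUDAExecutionProvider",       # NVIDIA CUDA
                         "DmlExecutionProvider",        # AMD / DirectML
                         "CPUExecutionProvider")        # always available
             if p in available]

print("Available providers:", available)
print("Preferred order:", preferred)
# session = ort.InferenceSession("models/yolov8n.onnx", providers=preferred)
```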

# ❤️ Credits
- Created with purpose by `Bubbles The Dev` 🫧
- Supporting accessible gaming, AI innovation, and empowering every player.
- Need extra features like FPS display, audio alerts, model switching GUI, or OBS integration?

### Hit me up — I got you! 😎

---
# 🚀 NVIDIA CUDA Installation Guide

### 1. **Download the NVIDIA CUDA Toolkit 11.8**

First, download the CUDA Toolkit 11.8 from the official NVIDIA website:

👉 [Nvidia CUDA Toolkit 11.8 - DOWNLOAD HERE](https://developer.nvidia.com/cuda-11-8-0-download-archive)

### 2. **Install the CUDA Toolkit**

- After downloading, open the installer (`.exe`) and follow the instructions provided by the installer.
- Make sure to select the following components during installation:
  - CUDA Toolkit
  - CUDA Samples
  - CUDA Documentation (optional)

### 3. **Verify the Installation**

- After the installation completes, open a `cmd.exe` terminal and run the following command to ensure that CUDA has been installed correctly:
```
nvcc --version
```
This will display the installed CUDA version.

### 4. **Install CuPy**
Run the following command in your terminal to install CuPy:
```
pip install cupy-cuda11x
```

Alternatively, you can install CuPy from the prebuilt wheel with a small batch script:

```bat
@echo off
echo MAKE SURE TO HAVE THE WHL DOWNLOADED BEFORE YOU CONTINUE!!!
pause
echo Click the link to download the WHL: press ctrl then left click with mouse
echo https://github.com/cupy/cupy/releases/download/v13.4.1/cupy_cuda11x-13.4.0-cp311-cp311-win_amd64.whl
pause

echo Installing CuPy from WHL...
pip install https://github.com/cupy/cupy/releases/download/v13.4.1/cupy_cuda11x-13.4.0-cp311-cp311-win_amd64.whl
pause

echo All packages installed successfully!
pause
```
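
Either way, you can quickly confirm that CuPy sees your CUDA installation:

```python
# Quick CuPy sanity check: import, query the CUDA runtime, and run a tiny GPU computation.
import cupy as cp

print("CuPy version:", cp.__version__)
print("CUDA runtime:", cp.cuda.runtime.runtimeGetVersion())  # e.g. 11080 for CUDA 11.8
x = cp.arange(5) ** 2            # computed on the GPU
print(cp.asnumpy(x))             # copied back to the host: [ 0  1  4  9 16]
```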

### 5. **cuDNN Installation** 🧩
Download cuDNN (CUDA Deep Neural Network library) from the NVIDIA website:

👉 [Download CUDNN](https://developer.nvidia.com/downloads/compute/cudnn/secure/8.9.6/local_installers/11.x/cudnn-windows-x86_64-8.9.6.50_cuda11-archive.zip/). (Requires an NVIDIA account – it's free).

### 6. **Unzip and Relocate** 📁➡️
Open the `.zip` cuDNN file and move all the folders/files to the location where the CUDA Toolkit is installed on your machine, typically:

```
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
```
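
After copying the files, you can check that a CUDA-enabled PyTorch build picks up cuDNN (this requires the CUDA build of PyTorch, not the CPU-only one):

```python
# Confirm PyTorch can see cuDNN after the files are copied into the CUDA folder.
import torch

print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN version:", torch.backends.cudnn.version())  # e.g. 8906 for cuDNN 8.9.6
```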

### 7. **Get TensorRT 8.6 GA** 🔽
Download [TensorRT 8.6 GA](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/zip/TensorRT-8.6.1.6.Windows10.x86_64.cuda-11.8.zip).

### 8. **Unzip and Relocate** 📁➡️
Open the `.zip` TensorRT file and move all the folders/files to the CUDA Toolkit folder, typically located at:

```
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
```

### 9. **Python TensorRT Installation** 🎡
Once all the files are copied, run the following command to install TensorRT for Python:

```
pip install "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python\tensorrt-8.6.1-cp311-none-win_amd64.whl"
```

🚨 **Note:** If this step doesn’t work, double-check that the `.whl` file matches your Python version (e.g., `cp311` is for Python 3.11). Just locate the correct `.whl` file in the `python` folder and replace the path accordingly.
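
A one-liner confirms that the Python bindings installed correctly:

```python
# Verify the TensorRT Python bindings import and report the expected version.
import tensorrt as trt

print("TensorRT version:", trt.__version__)  # expect 8.6.x
```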

### 10. **Set Your Environment Variables** 🌎
Add the following paths to your environment variables:

```
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin
```

# Setting Up CUDA 11.8 with cuDNN on Windows

Once you have CUDA 11.8 installed and cuDNN properly configured, you need to set up your environment via `cmd.exe` to ensure that the system uses the correct version of CUDA (especially if multiple CUDA versions are installed).

## Steps to Set Up CUDA 11.8 Using `cmd.exe`

### 1. Set the CUDA Path in `cmd.exe`

You need to add the CUDA 11.8 binaries to the environment variables in the current `cmd.exe` session.

Open `cmd.exe` and run the following commands:

```
set PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin;%PATH%
```
```
set PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp;%PATH%
```
```
set PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\CUPTI\lib64;%PATH%
```
These commands add the CUDA 11.8 `bin`, `libnvvp`, and CUPTI paths to your current session. Adjust the paths as necessary depending on your installation directory.

### 2. Verify the CUDA Version
After setting the paths, you can verify that your system is using CUDA 11.8 by running:
```
nvcc --version
```
This should display the details of CUDA 11.8. If it shows a different version, check the paths and ensure the proper version is set.

### 3. Set the Environment Variables for a Persistent Session
If you want to ensure CUDA 11.8 is used every time you open `cmd.exe`, you can add these paths to your system environment variables permanently:

1. Open `Control Panel` -> `System` -> `Advanced System Settings`.
2. Click on `Environment Variables`.
3. Under `System variables`, select `Path` and click `Edit`.
4. Add the following entries at the top of the list:
```
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\CUPTI\lib64
```
This ensures that CUDA 11.8 is prioritized when running CUDA applications, even on systems with multiple CUDA versions.

### 4. Set CUDA Environment Variables for cuDNN
If you extracted cuDNN to a separate folder (for example `C:\tools\cuda`) instead of copying it into the CUDA Toolkit directory, make sure the folder containing `cudnn64_8.dll` is also in your system path:
```
set PATH=C:\tools\cuda\bin;%PATH%
```
This should properly set up CUDA 11.8 to be used for your projects via `cmd.exe`.

#### Additional Information
- Ensure that your GPU drivers are up to date.
- You can check CUDA compatibility with other software (e.g., PyTorch or TensorFlow) by referring to their documentation for specific versions supported by CUDA 11.8.
| 261 | + |
| 262 | +``` |
| 263 | +import torch |
| 264 | +
|
| 265 | +print(torch.cuda.is_available()) # This will return True if CUDA is available |
| 266 | +print(torch.version.cuda) # This will print the CUDA version being used |
| 267 | +print(torch.cuda.get_device_name(0)) # This will print the name of the GPU, e.g., 'NVIDIA GeForce RTX GPU Model' |
| 268 | +``` |
| 269 | +run the `get_device.py` to see if you installed it correctly |