
Hi, I'm Om Prakash Sahu 👋

Machine Perception • Autonomous Systems • Embedded Automotive


  • ✅ Focus areas: Perception (LiDAR/Camera), 3D detection & segmentation, SLAM/localization, control (MPC), and automotive systems engineering (ISO 26262, V-Model).
  • ✅ Stack: Python, C++, ROS1/2, TensorFlow/PyTorch, PointPillars, OpenCV, Docker, CI/CD, Plotly, JupyterLab.

Key skills & technologies

Python C++ ROS LiDAR Deep Learning Docker TensorFlow PyTorch OpenCV MPC

The notebook and Docker markers on each project indicate that it can be reproduced quickly for a live demo.


Projects

Legend: 📒 = Notebook(s) • 🐳 = Docker • ⚙️ = C++/embedded • 🤖 = ROS • 🔬 = Research/experiments • 🏁 = Demo/reproducible


Visual Lane Following Robot (ACDC) – ROS on Jetson Nano 🤖🏁

Short: ROS-based lane-following system for a 1:10-scale autonomous vehicle (course project at ika, RWTH Aachen): lane detection, localization, and MPC-based steering running on an NVIDIA Jetson Nano, with both simulation and real-world tests.

Link: https://github.com/infinityengi/visual-lane-following-robot-acdc

Tags: ROS C++ Python Jetson Nano MPC Simulation Robotics.
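
The repository uses MPC for steering; as a rough illustration of the interface between lane perception and control, here is a minimal sketch (all names and gains are hypothetical, not the repo's API) that turns a fitted lane polynomial into a steering command with a simple proportional law instead of MPC:

```python
import numpy as np

def steering_from_lane_fit(coeffs, lookahead_m=1.0, k_p=0.8):
    """Toy lateral controller: evaluate the fitted lane polynomial x = f(y)
    at a lookahead distance and steer proportionally to the lateral offset.
    (The actual project uses MPC; this only shows the perception-to-control handoff.)"""
    lateral_offset = np.polyval(coeffs, lookahead_m)   # metres left/right of vehicle centre
    steering_angle = -k_p * lateral_offset             # simple proportional law
    return float(np.clip(steering_angle, -0.5, 0.5))   # clamp to roughly ±30° in radians

# Example: lane centreline fitted as x = 0.02*y^2 + 0.1*y - 0.15 in the vehicle frame
print(steering_from_lane_fit([0.02, 0.1, -0.15]))
```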


goal-driven-td3-nav – TD3 + ROS Noetic + Gazebo 🤖🐳🔬🏁

Short: TD3-based deep reinforcement learning for goal-driven mobile robot navigation in ROS Noetic + Gazebo. Trains policies from Velodyne LiDAR data with PyTorch; Docker-ready, with TensorBoard logging.

Link: https://github.com/infinityengi/goal-driven-td3-nav

Tags: ROS Gazebo PyTorch TD3 LiDAR Docker TensorBoard Noetic Research.
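
A minimal sketch of what a TD3-style actor for this task can look like in PyTorch, assuming the state is a small vector of downsampled LiDAR ranges plus goal features and the action is (linear, angular) velocity; the layer sizes and 24-dimensional state are assumptions, not the repo's exact architecture:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Minimal TD3-style deterministic actor: LiDAR bins + goal features -> (v, omega)."""
    def __init__(self, state_dim=24, action_dim=2, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),   # actions squashed to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

# Example: 20 downsampled LiDAR ranges + distance/angle to goal + previous action
actor = Actor(state_dim=24)
action = actor(torch.randn(1, 24))   # scale to the robot's velocity limits downstream
print(action.shape)                  # torch.Size([1, 2])
```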


A. Perception & 3D Sensing

1. semantic-image-segmentation – image segmentation starter kit 📒🐳

Short: Reproducible starter kit: data pipelines, color→class mapping, U-Net baseline, augmentations, training & export (SavedModel/ONNX/TFLite).

Link: https://github.com/infinityengi/semantic-image-segmentation

Tags: U-Net Data Pipeline Augmentation TensorFlow PyTorch Experiment Tracking Export.
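
The color→class mapping step typically reduces to a lookup from RGB-coded label images to integer class IDs. A minimal sketch with an illustrative palette (the actual class colors depend on the dataset):

```python
import numpy as np

# Illustrative palette: RGB colour in the label image -> integer class id
PALETTE = {(128, 64, 128): 0,   # road
           (220, 20, 60): 1,    # person
           (0, 0, 142): 2}      # vehicle

def rgb_label_to_class_ids(label_rgb, ignore_id=255):
    """Convert an HxWx3 colour-coded label image into an HxW class-id mask."""
    ids = np.full(label_rgb.shape[:2], ignore_id, dtype=np.uint8)
    for colour, class_id in PALETTE.items():
        ids[np.all(label_rgb == colour, axis=-1)] = class_id
    return ids

dummy = np.zeros((4, 4, 3), dtype=np.uint8)
dummy[:2] = (128, 64, 128)               # top half painted as "road"
print(rgb_label_to_class_ids(dummy))
```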


2. point-cloud-semantic-segmentation – LiDAR semantic segmentation 📒🐳🔬

Short: TensorFlow-based pipelines for point-cloud segmentation, cross-modal label transfer, and interactive Plotly visualizations.

Link: https://github.com/infinityengi/point-cloud-semantic-segmentation

Notebooks: 1_Semantic_Point_Cloud_Segmentation.ipynb • 2_Boosting_Semantic_Point_Cloud_Segmentation.ipynb

Tags: Point Cloud Semantic Segmentation TensorFlow Augmentation Plotly Docker.
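
The interactive Plotly views boil down to a 3D scatter of points colored by predicted class. A self-contained sketch on synthetic data (not the repo's plotting code):

```python
import numpy as np
import plotly.graph_objects as go

# Synthetic labelled cloud: N points with an integer class id each
points = np.random.randn(500, 3)
labels = np.random.randint(0, 3, size=500)

fig = go.Figure(go.Scatter3d(
    x=points[:, 0], y=points[:, 1], z=points[:, 2],
    mode="markers",
    marker=dict(size=2, color=labels, colorscale="Viridis"),
))
fig.update_layout(scene_aspectmode="data")
fig.show()   # opens an interactive 3D view in the notebook or browser
```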


3. 3D-object-detection – LiDAR-based detection & visualization 📒🔬

Short: Reproducible 3D detection pipeline (PointPillars) on KITTI with end-to-end notebooks for dataset prep, training/inference, and visualization.

Link: https://github.com/infinityengi/3D-object-detection

Tags: 3D Object Detection PointPillars KITTI LiDAR Visualization Python Notebooks.

Quick-run: Preprocessing notebook, anchor & hyperparameter inspection, 2D/BEV visualizers.
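
For the BEV visualizers, the key piece is projecting each labeled 3D box onto the ground plane. A small sketch of that footprint computation, with simplified axis conventions (KITTI stores boxes in the camera frame, so axes may need swapping in practice):

```python
import numpy as np

def bev_corners(x, z, length, width, yaw):
    """Four corners of a 3D box footprint in the bird's-eye-view (ground) plane."""
    c, s = np.cos(yaw), np.sin(yaw)
    half = np.array([[ length / 2,  width / 2],
                     [ length / 2, -width / 2],
                     [-length / 2, -width / 2],
                     [-length / 2,  width / 2]])
    rot = np.array([[c, -s], [s, c]])          # rotate footprint by the box yaw
    return half @ rot.T + np.array([x, z])     # then translate to the box centre

print(bev_corners(x=10.0, z=20.0, length=4.2, width=1.8, yaw=0.3))
```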


4. Semantic Grid Mapping – BEV & occupancy grid mapping 📒🔬🐳

Short: Reproducible framework for grid-based environment representation using camera and LiDAR; demonstrates semantic segmentation → BEV and occupancy grid mapping (PointPillars baseline).

Link: https://github.com/infinityengi/Semantic-Grid-Mapping

Highlights: pillarization, evidential prediction head, IPM + multi-camera stitching for 360° BEV.

Tags: LiDAR BEV Semantic Segmentation PointPillars Computer Vision Python TensorFlow Docker Notebooks.

Quick-run: notebooks/01_pointcloud_grid_mapping.ipynb • notebooks/02_camera_grid_mapping.ipynb
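
As a rough sketch of the grid-mapping idea, here is a minimal binary occupancy grid built by binning LiDAR points into cells; the actual pipeline predicts evidential/semantic cell states rather than a plain hit mask, and the cell size and extent below are assumptions:

```python
import numpy as np

def occupancy_grid(points_xy, cell_size=0.2, extent=20.0):
    """Binary occupancy grid: mark every cell containing at least one LiDAR return."""
    n = int(2 * extent / cell_size)
    grid = np.zeros((n, n), dtype=np.uint8)
    idx = ((points_xy + extent) / cell_size).astype(int)      # world (x, y) -> cell index
    valid = np.all((idx >= 0) & (idx < n), axis=1)            # drop points outside the map
    grid[idx[valid, 1], idx[valid, 0]] = 1                    # row = y cell, column = x cell
    return grid

cloud = np.random.uniform(-20, 20, size=(1000, 2))
print(occupancy_grid(cloud).sum(), "occupied cells")
```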


5. inverse-perspective-mapping-cpp – IPM in C++ / OpenCV ⚙️

Short: C++ implementation of Inverse Perspective Mapping for BEV generation, with configurable parameters and an OpenCV backend; well suited to embedded/real-time use.

Link: https://github.com/infinityengi/inverse-perspective-mapping-cpp

Tags: C++ OpenCV IPM BEV Real-time Embedded.

Quick-start: conda env or pip + Jupyter notebooks (notebooks/ quick-run cell).
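
The repository is C++, but the core IPM operation is the same as a planar homography warp; a short Python/OpenCV sketch of that idea (the point correspondences below are illustrative, not calibration values from the repo):

```python
import cv2
import numpy as np

# Four points on the road plane in the camera image (pixels) and their
# corresponding bird's-eye-view coordinates (pixels) -- values are illustrative.
src = np.float32([[420, 480], [860, 480], [1180, 700], [100, 700]])
dst = np.float32([[300, 0],   [500, 0],   [500, 400],  [300, 400]])

H = cv2.getPerspectiveTransform(src, dst)            # 3x3 homography
image = np.zeros((720, 1280, 3), dtype=np.uint8)     # stand-in for a camera frame
bev = cv2.warpPerspective(image, H, (800, 400))      # top-down (BEV) image
print(bev.shape)
```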


B. Localization & Mapping

6. AutoSeg-Localization – segmentation + localization research framework 📒🐳🔬

Short: Reproducible framework combining semantic segmentation with vehicle localization; Dockerized, curated notebooks, and experiment-tracking ready.

Link: https://github.com/infinityengi/AutoSeg-Localization

Tags: Localization Semantic Segmentation Docker Experiment Tracking Notebooks Research.


C. Control, Systems Engineering & Safety

7. control-perception-hubs – MPC, Robust Control, Sensor Fusion learning hub 📒🔬

Short: Knowledge hub with tutorials, reference implementations and small reproducible projects across MPC, robust & networked control, and sensor fusion.

Link: https://github.com/infinityengi/control-perception-hubs

Tags: MPC Control Theory Sensor Fusion Tutorials Notebooks.
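
As a taste of the sensor-fusion material, a scalar Kalman filter that fuses noisy position measurements into a single estimate is about the smallest self-contained example; this sketch is illustrative and not taken from the hub:

```python
import numpy as np

def kalman_1d(z_measurements, process_var=1e-3, meas_var=0.25):
    """Scalar Kalman filter fusing noisy position measurements into one estimate."""
    x, p = 0.0, 1.0                     # state estimate and its variance
    estimates = []
    for z in z_measurements:
        p += process_var                # predict: uncertainty grows
        k = p / (p + meas_var)          # Kalman gain
        x += k * (z - x)                # update with the measurement residual
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

noisy = 1.0 + np.random.randn(50) * 0.5
print(kalman_1d(noisy)[-1])             # converges towards the true value 1.0
```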


8. v-model-automotive-portfolio – V-Model for automotive projects 📒⚙️🏁

Short: Systems-engineering toolkit that maps artifacts to V-Model phases; includes lane-keep-assist case study with traceability, tests, and firmware examples.

Link: https://github.com/infinityengi/v-model-automotive-portfolio

Tags: V-Model ISO 26262 AUTOSAR HIL/SIL Systems Engineering Traceability.
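
Traceability in this context means every requirement can be followed to at least one verifying test. A toy sketch of such a check (IDs and data layout are hypothetical):

```python
# Toy traceability check: every requirement must be covered by at least one test case.
requirements = {"REQ-001": "Keep lane centre within ±0.2 m",
                "REQ-002": "Warn driver before takeover"}
test_cases = {"TC-010": ["REQ-001"],
              "TC-011": ["REQ-001", "REQ-002"]}

covered = {req for refs in test_cases.values() for req in refs}
uncovered = set(requirements) - covered
print("Uncovered requirements:", uncovered or "none")
```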


9. functional-safety-iso26262 – practical ISO 26262 notes & templates 📒

Short: Concise notes, tutorials, and worked case studies for ISO 26262, including a HARA exercise, safety management, HW/SW guidance, and templates for safety cases.

Link: https://github.com/infinityengi/functional-safety-iso26262

Tags: ISO 26262 Functional Safety HARA Safety Case Automotive.
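
ASIL classification in a HARA combines severity (S), exposure (E), and controllability (C). The sketch below uses the commonly cited additive shortcut for the ISO 26262 risk table, with the usual S1–S3, E1–E4, C1–C3 class indices; always cross-check against the normative table in the standard:

```python
def asil(severity, exposure, controllability):
    """Commonly cited shortcut for the ISO 26262 ASIL table:
    sum the S1-S3, E1-E4, C1-C3 class indices; 7 -> A, 8 -> B, 9 -> C, 10 -> D, else QM.
    Verify any result against the normative table in ISO 26262-3."""
    total = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

print(asil(severity=3, exposure=4, controllability=3))   # worst case -> ASIL D
print(asil(severity=1, exposure=2, controllability=1))   # low-risk case -> QM
```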


D. ADAS

10. ADAS-HandsOn-Repo – ACC, AEB, LKA mini-projects & CI-friendly examples 📒🏁

Short: Hands-on mini-projects for common ADAS features (ACC, AEB, LKA) with reproducible notebooks and CI examples.

Link: https://github.com/infinityengi/ADAS-HandsOn-Repo

Tags: ACC AEB LKA CI Notebooks ADAS.
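
A typical minimal AEB example is a time-to-collision trigger: brake when the gap divided by the closing speed drops below a threshold. A small sketch (the threshold value is an assumption, not taken from the repo):

```python
def aeb_should_brake(gap_m, relative_speed_mps, ttc_threshold_s=1.5):
    """Trigger automatic emergency braking when time-to-collision drops below a threshold.
    relative_speed_mps > 0 means the ego vehicle is closing in on the lead vehicle."""
    if relative_speed_mps <= 0:          # opening gap: no collision course
        return False
    ttc = gap_m / relative_speed_mps     # seconds until impact at constant speeds
    return ttc < ttc_threshold_s

print(aeb_should_brake(gap_m=12.0, relative_speed_mps=10.0))   # TTC = 1.2 s -> True
print(aeb_should_brake(gap_m=40.0, relative_speed_mps=10.0))   # TTC = 4.0 s -> False
```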


11. Lane Detection Using K-Means Clustering – OpenCV pipeline ⚙️

Short: Color clustering + polynomial fitting pipeline for robust yellow lane detection under challenging conditions.

Link: https://github.com/infinityengi/Lane-Detection-Using-K-Means-Clustering

Tags: OpenCV Computer Vision K-Means Lane Detection.
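
A rough sketch of the colour-clustering idea with OpenCV's k-means: cluster pixel colours, keep the cluster nearest to yellow, and fit a polynomial through the surviving lane pixels (thresholds, k, and the input file name are assumptions, not the repo's values):

```python
import cv2
import numpy as np

def yellow_lane_mask(bgr_image, k=4):
    """Cluster pixel colours with k-means and keep the cluster closest to yellow."""
    pixels = bgr_image.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    yellow_bgr = np.array([0, 220, 220], dtype=np.float32)        # approximate yellow in BGR
    target = np.argmin(np.linalg.norm(centers - yellow_bgr, axis=1))
    return (labels.ravel() == target).reshape(bgr_image.shape[:2]).astype(np.uint8) * 255

frame = cv2.imread("road.jpg")            # hypothetical input frame
if frame is not None:
    ys, xs = np.nonzero(yellow_lane_mask(frame))
    if len(xs) > 2:
        coeffs = np.polyfit(ys, xs, 2)    # fit x = f(y) polynomial through lane pixels
        print("Lane polynomial:", coeffs)
```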


12. Curved Lane Detection (Sliding Window) – histogram + sliding window method ⚙️

Short: Classic sliding-window lane detection with polynomial fitting, tested under varying curvature & lighting conditions.

Link: https://github.com/infinityengi/curved-lane-detection-sliding-window

Tags: Lane Detection Sliding Window Polynomial Fit.
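
The sliding-window method starts from the column histogram peak of a binary BEV mask, walks fixed-height windows upwards while re-centering on the detected pixels, and fits a polynomial through everything collected. A compact sketch on a synthetic mask (window counts and margins are assumptions):

```python
import numpy as np

def sliding_window_lane(binary_mask, n_windows=9, margin=50, min_pixels=40):
    """Classic sliding-window search: histogram start, window walk, polynomial fit x = f(y)."""
    h, w = binary_mask.shape
    histogram = binary_mask[h // 2:].sum(axis=0)      # bottom-half column histogram
    x_current = int(np.argmax(histogram))             # starting column
    ys, xs = np.nonzero(binary_mask)
    lane_idx, win_h = [], h // n_windows
    for i in range(n_windows):
        y_lo, y_hi = h - (i + 1) * win_h, h - i * win_h
        x_lo, x_hi = x_current - margin, x_current + margin
        inside = (ys >= y_lo) & (ys < y_hi) & (xs >= x_lo) & (xs < x_hi)
        lane_idx.append(np.flatnonzero(inside))
        if inside.sum() > min_pixels:                 # re-centre the next window
            x_current = int(xs[inside].mean())
    lane_idx = np.concatenate(lane_idx)
    return np.polyfit(ys[lane_idx], xs[lane_idx], 2)  # second-order lane polynomial

mask = np.zeros((400, 300), dtype=np.uint8)
mask[:, 148:152] = 1                                  # synthetic straight lane
print(sliding_window_lane(mask))
```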


E. Knowledge & Productivity Tools

13. Professional Workflow Notes – workflow templates & docs 📒

Short: Templates and a repeatable workflow from ideation → delivery: docs, diagrams, pseudocode drafts, and AI-assisted engineering docs.

Link: https://github.com/infinityengi/professional-workflownotes

Tags: Workflow Templates Docs Engineering.


📫 Contact

