The Hierarchical Quantum Pathfinding Algorithm (HQP) finds optimal paths through a maze represented as a 2D grid (0s for navigable cells, 1s for walls). It is designed to handle both static walls and dynamic obstacles, with a focus on efficiency and adaptability.
The algorithm uses a two-level planning strategy:
- The maze is divided into smaller regions (typically 4x4 cells) by the `_build_regions` method.
- Each region has an ID and stores:
  - Cells
  - Center
  - Bounds
- Region-level planning uses `_region_level_search`, which applies the A* algorithm at the region level.
- It finds a sequence of regions connecting the start region to the goal region, using:
  - Precomputed connections via `_identify_region_connections`
  - A heuristic based on region-center distances
- This reduces complexity by planning first at a coarser level.
- After a region path is found, detailed paths are built within regions using `_local_astar` and `_full_astar`.
- These implement A*, considering:
  - Static walls
  - Dynamic obstacles
  - Points to avoid
- `_build_path_through_regions` chains local paths into a full route from start to goal.
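The decomposition step above can be sketched as follows. This is a minimal illustration, not the project's code: `build_regions` is a hypothetical stand-in for `_build_regions`, and the stored fields mirror the ID/cells/center/bounds description.

```python
def build_regions(grid, region_size=4):
    """Split a 2D grid (0 = navigable, 1 = wall) into region_size x
    region_size blocks. Each region stores its navigable cells, its
    geometric center, and its bounding box."""
    rows, cols = len(grid), len(grid[0])
    regions = {}
    region_id = 0
    for r0 in range(0, rows, region_size):
        for c0 in range(0, cols, region_size):
            r1 = min(r0 + region_size, rows)
            c1 = min(c0 + region_size, cols)
            # Keep only navigable cells; walls never belong to a path.
            cells = [(r, c) for r in range(r0, r1) for c in range(c0, c1)
                     if grid[r][c] == 0]
            regions[region_id] = {
                "cells": cells,
                "center": ((r0 + r1 - 1) / 2, (c0 + c1 - 1) / 2),
                "bounds": (r0, c0, r1 - 1, c1 - 1),
            }
            region_id += 1
    return regions

# An 8x8 open grid decomposes into four 4x4 regions.
grid = [[0] * 8 for _ in range(8)]
regions = build_regions(grid)
```

Region-level A* then searches over these region IDs instead of individual cells, using the precomputed centers for its heuristic.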
Inspired by:
*Hierarchical Pathfinding and AI-Based Learning in Strategy Game Design*, which shows efficiency gains in large graphs achieved by planning over subgraphs.
The "quantum-inspired" part refers to parallel path exploration, inspired by quantum superposition. It is implemented in `_explore_quantum_paths`:
- Generates multiple candidate paths (default = 3, configurable via `num_parallel_paths`)
- Uses varying:
  - Region entry/exit points
  - Local A* variations
- If fewer paths than expected are generated, it uses `_create_path_variation` to modify existing paths
Note: "Parallel" is metaphorical, not actual parallel processing. It means logically exploring multiple options simultaneously.
Inspired by:
TransPath: Learning Heuristics For Grid-Based Pathfinding via Transformers
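The variation step can be sketched like this. It is a hedged reimplementation of the idea behind `_create_path_variation`, not the project's code: one intermediate cell is swapped for a different navigable cell that keeps the path 4-connected.

```python
import random

def create_path_variation(path, grid, rng):
    """Return a variant of `path` that detours through a different cell.

    Picks one intermediate position and tries to replace it with a
    navigable cell adjacent to both of its neighbours in the path; if no
    such cell exists (e.g. on a straight corridor), the original path is
    returned unchanged."""
    if len(path) < 3:
        return path
    i = rng.randrange(1, len(path) - 1)
    prev, nxt = path[i - 1], path[i + 1]
    rows, cols = len(grid), len(grid[0])
    for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        alt = (prev[0] + dr, prev[1] + dc)
        ar, ac = alt
        if not (0 <= ar < rows and 0 <= ac < cols):
            continue
        if grid[ar][ac] != 0 or alt in path:
            continue
        # The replacement must still touch the next cell in the path.
        if abs(ar - nxt[0]) + abs(ac - nxt[1]) == 1:
            return path[:i] + [alt] + path[i + 1:]
    return path

# On an open 2x2 grid, a corner path can detour around the other corner.
open_grid = [[0, 0], [0, 0]]
variant = create_path_variation([(0, 0), (0, 1), (1, 1)], open_grid,
                                random.Random(0))
# variant takes the other corner: [(0, 0), (1, 0), (1, 1)]
```

Repeating this on the best known path yields the extra candidates when fewer than `num_parallel_paths` distinct paths were found.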
- Each path is scored by:
  - Length (number of steps)
  - Penalties for obstacles: `penalty = 100 * (1.0 - i / len(path))`, where `i` is the index of the cell in the path
- The lowest-scoring path is selected.
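The scoring rule can be sketched as follows. `score_path` is a hypothetical stand-in for the project's scoring code; the penalty term is the formula given above, so obstacles near the start of a path (small `i`) are penalised more heavily than ones near the goal.

```python
def score_path(path, dynamic_obstacles):
    """Score = path length + obstacle penalties; lower is better."""
    score = float(len(path))
    for i, cell in enumerate(path):
        if cell in dynamic_obstacles:
            # Early obstacles hurt more: i = 0 gives the full penalty of 100.
            score += 100 * (1.0 - i / len(path))
    return score

paths = [
    [(0, 0), (0, 1), (0, 2), (1, 2)],  # passes through an obstacle
    [(0, 0), (1, 0), (1, 1), (1, 2)],  # obstacle-free detour
]
obstacles = {(0, 1)}
# Scores: 4 + 100 * (1 - 1/4) = 79.0 vs. 4.0, so the detour wins.
best = min(paths, key=lambda p: score_path(p, obstacles))
```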
Caching:
- Paths are cached using `_generate_cache_key`, built from (start, goal, dynamic obstacles)

Cache Invalidation:
- `update_dynamic_obstacles` checks affected regions
- It calls `invalidate_cache_region` to clear paths involving changed areas

Dynamic Obstacle Consideration:
- A* search and path scoring avoid cells occupied by obstacles
This is crucial for real-time robotics and video games, where environments change unexpectedly.
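The cache-key and invalidation behaviour described above can be sketched as follows. This is an illustrative sketch: `generate_cache_key`, `PathCache`, and `invalidate_region` are hypothetical stand-ins for `_generate_cache_key` and `invalidate_cache_region`.

```python
def generate_cache_key(start, goal, dynamic_obstacles):
    """Hashable key from start, goal, and a canonical (sorted) obstacle set."""
    return (start, goal, tuple(sorted(dynamic_obstacles)))

class PathCache:
    def __init__(self):
        self._cache = {}

    def get(self, key):
        return self._cache.get(key)

    def put(self, key, path):
        self._cache[key] = path

    def invalidate_region(self, region_cells):
        """Drop every cached path that touches a changed region."""
        region = set(region_cells)
        stale = [key for key, path in self._cache.items()
                 if region & set(path)]
        for key in stale:
            del self._cache[key]

cache = PathCache()
k1 = generate_cache_key((0, 0), (0, 2), set())
cache.put(k1, [(0, 0), (0, 1), (0, 2)])
k2 = generate_cache_key((2, 0), (2, 2), set())
cache.put(k2, [(2, 0), (2, 1), (2, 2)])
# An obstacle appears around (0, 1): only the first path is invalidated.
cache.invalidate_region([(0, 1), (1, 1)])
```

Sorting the obstacle set keeps the key stable regardless of insertion order, so two queries with the same obstacles always hit the same cache entry.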
| Approach | Method | Efficiency | Optimality | Dynamic Obstacle Handling |
|---|---|---|---|---|
| Traditional A* | Single heuristic search | Moderate | Guaranteed | Limited (recompute) |
| Hierarchical Quantum-Inspired | Hierarchical, multiple paths | High | Configurable | Adaptive (cache update) |
| Feature | Description |
|---|---|
| Efficiency | Hierarchical planning reduces complexity by avoiding a full maze search. |
| Adaptability | Caching & invalidation allow rapid adjustments to changing environments. |
| Robustness | Multiple path exploration improves reliability, especially with dynamic obstacles. |
| Applications | Robotics: autonomous navigation in environments like warehouses<br>Video games: AI opponent pathfinding and procedural levels<br>Logistics: real-time route planning under changing conditions |
- Generalization: region size and `num_parallel_paths` need to be tuned per use case.
- Optimality: the shortest path is not always guaranteed, depending on the heuristic and obstacles.
- Computational Cost: generating multiple candidate paths can be expensive; caching mitigates this.
The Hierarchical Quantum-Inspired Pathfinding Algorithm is a powerful blend of hierarchical planning and quantum-inspired exploration. It is especially effective in dynamic environments, outperforming traditional A* in adaptability and efficiency. This makes it ideal for robotics, gaming, and logistics applications.
Note: This algorithm was developed by my group members and me during project-based learning sessions. We created a maze game and realized traditional pathfinding algorithms were lacking in adaptability and robustness.