Jiayao Mai*
Bangyan Liao*
Zhenjun Zhao†
Yingping Zeng
Haoang Li
Javier Civera
Tailin Wu
Yi Zhou✉
Peidong Liu✉
*Equal contribution · †Project lead · ✉Corresponding authors
Official implementation of the ICLR 2026 paper: "Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning"
NPC reveals that robust optimization, global optimization, polynomial root-finding, and sampling all share a common predictor-corrector structure, and learns efficient solver policies via reinforcement learning.
Homotopy methods are ubiquitous across scientific computing, from Graduated Non-Convexity (GNC) in robust optimization to annealed Langevin dynamics in sampling. Despite their apparent diversity, these methods all follow a common predictor-corrector (PC) structure. Yet practical solvers rely on hand-crafted heuristics for step size selection and termination criteria, which are often suboptimal and require tedious per-task tuning.
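To make the predictor-corrector structure concrete, here is a minimal, self-contained illustration of Graduated Non-Convexity on a toy robust-mean problem with a Geman-McClure loss. This is an illustrative sketch only, not the repo's implementation; the function names (`gm_weight`, `gnc_mean`) and the geometric `mu` schedule are hypothetical stand-ins for the hand-crafted heuristics the paper replaces.

```python
import numpy as np

def gm_weight(r, mu):
    # IRLS weight of the Geman-McClure loss at residual r,
    # with convexity surrogate mu (large mu => nearly quadratic)
    return (mu / (mu + r ** 2)) ** 2

def gnc_mean(x, mu0=1e3, gamma=1.4, tol=1e-8, max_inner=100):
    """Robust mean via GNC: the predictor shrinks mu (homotopy level),
    the corrector (an IRLS fixed point) re-solves at each level."""
    est = x.mean()            # start from the convex least-squares solution
    mu = mu0
    while mu > 1.0:
        for _ in range(max_inner):          # corrector: IRLS to tolerance
            w = gm_weight(x - est, mu)
            new = (w * x).sum() / w.sum()
            converged = abs(new - est) < tol
            est = new
            if converged:
                break
        mu /= gamma                         # predictor: hand-crafted schedule
    return est
```

Note that the step-size schedule (`gamma`) and corrector tolerance (`tol`) are exactly the per-task knobs that NPC learns instead of hand-tuning.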
NPC (Neural Predictor-Corrector) is the first unified framework that:
- Reveals the shared predictor-corrector structure underlying these diverse homotopy problems
- Replaces hand-crafted heuristics with learned policies trained via reinforcement learning (PPO)
At each homotopy level, NPC:
- Observes the current homotopy level, corrector statistics, and convergence velocity
- Decides the predictor step size and corrector tolerance
- Learns to optimally balance accuracy and efficiency across problem classes
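The loop above can be sketched as follows. All interfaces here (`run_npc`, `corrector`, `policy`, the observation layout) are illustrative assumptions, not the repo's API; the fixed policy is a stand-in for the PPO-trained one.

```python
import numpy as np

def run_npc(x0, corrector, policy, t_end=1.0):
    """Minimal predictor-corrector loop driven by a policy.
    `corrector(x, t, tol)` refines x at homotopy level t and returns the
    refined iterate plus its iteration count; `policy(obs)` maps the
    observation to (predictor step size, corrector tolerance)."""
    t, x, last_iters = 0.0, float(x0), 0
    while t < t_end:
        obs = np.array([t, last_iters], float)   # level + corrector statistics
        step, tol = policy(obs)
        t = min(t + step, t_end)                 # predictor: advance the level
        x, last_iters = corrector(x, t, tol)     # corrector: re-solve at level t
    return x

# Toy instance: track the root of f(x; t) = x - t (true solution path x(t) = t)
def corrector(x, t, tol, lr=0.5):
    n = 0
    while abs(x - t) > tol:          # gradient steps on (x - t)^2 / 2
        x -= lr * (x - t)
        n += 1
    return x, n

fixed_policy = lambda obs: (0.1, 1e-6)   # stand-in for the learned policy
```

In the paper, `fixed_policy` is replaced by a neural network trained with PPO, whose reward trades off corrector iterations spent against final solution accuracy.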
- 🔗 First unified framework for homotopy methods spanning robust optimization, global optimization, polynomial root-finding, and sampling

- 🤖 RL-based policy learning via PPO replaces all hand-crafted predictor-corrector heuristics
- ✅ No per-instance tuning: trains once on a problem class and generalizes to unseen instances
- 🚀 State-of-the-art efficiency with superior stability across all benchmarks
Clone the repository and install dependencies:
```shell
git clone git@github.com:maijiayao1/NPC.git
cd NPC
pip install -r requirements.txt
```

Train a new NPC model on a target problem class:

```shell
python script/GNC_PPO_training.py --model-save-path="model/your_model_name"
```

Monitor training with TensorBoard:

```shell
tensorboard --logdir=./logs/ppo_gnc_tensorboard
```

Then open http://localhost:6006 in your browser.

Evaluate a trained model:

```shell
python script/GNC_PPO_inference.py --model-save-path="model/your_model_name"
```

NPC achieves consistent speedups across all benchmark tasks. See the paper for the full evaluation.
Rotation error (log E_R) and translation error (log E_t) are reported on a log₁₀ scale. NPC matches classical accuracy while reducing iterations and runtime by 4–10×.
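For reference, a common way to compute such a rotation-error metric is the geodesic angle between estimated and ground-truth rotation matrices, reported on a log₁₀ scale. This is a standard definition, not necessarily the exact metric used in the paper; `log_rotation_error` is a hypothetical helper.

```python
import numpy as np

def log_rotation_error(R_est, R_gt):
    """log10 of the geodesic rotation error (radians) between two 3x3
    rotation matrices, clamped away from zero to keep the log finite."""
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    angle = np.arccos(np.clip(cos, -1.0, 1.0))
    return np.log10(max(angle, 1e-12))
```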
| Sequence | Method | log(E_R) ↓ | log(E_t) ↓ | Iter | Time (s) |
|---|---|---|---|---|---|
| bunny | Classic GNC | -0.85 | -2.76 | 783 | 161.00 |
| bunny | IRLS GNC | -0.85 | -2.75 | 309 | 61.59 |
| bunny | NPC + GNC | -0.85 | -2.71 | 169 | 19.15 |
| cube | Classic GNC | -1.12 | -2.89 | 486 | 89.34 |
| cube | IRLS GNC | -1.10 | -2.90 | 141 | 26.13 |
| cube | NPC + GNC | -1.11 | -2.86 | 86 | 7.86 |
| dragon | Classic GNC | -0.80 | -2.82 | 859 | 177.11 |
| dragon | IRLS GNC | -0.80 | -2.82 | 486 | 95.93 |
| dragon | NPC + GNC | -0.80 | -2.80 | 201 | 26.42 |
NPC is trained on the Aquarius sequence and evaluated on unseen sequences (zero-shot generalization).
NPC consistently achieves a better trade-off between efficiency (fewer iterations) and precision across all four task domains.
```
NPC/
├── assets/          # Figures and teaser image
├── files/           # Slides and poster (PDF)
├── Environment/
│   ├── GNC_CostFactor_PointCloudRegistration.py
│   ├── GNC_Env.py
│   └── point_cloud_registration_utils.py
├── script/
│   ├── GNC_PPO_training.py     # Training script
│   └── GNC_PPO_inference.py    # Evaluation script
├── model/           # Saved model checkpoints
└── requirements.txt
```
If you find this work useful, please consider citing:
```bibtex
@article{mai2026neural,
  title={Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning},
  author={Mai, Jiayao and Liao, Bangyan and Zhao, Zhenjun and Zeng, Yingping and Li, Haoang and Civera, Javier and Wu, Tailin and Zhou, Yi and Liu, Peidong},
  journal={arXiv preprint arXiv:2602.03086},
  year={2026}
}
```

For questions or feedback, feel free to reach out: