[Paper] [Project Page] [Jittor Version] [Demo]
- Source code of AR123.
- Evaluation code.
- Training code.
- Rendering code.
- Pretrained weights of AR123.
- Rendered dataset under the Zero123++ setting.
If you have any questions about our AR-1-to-3, feel free to contact us via [email protected].
If our work is helpful to you or gives you some inspiration, please star this project and cite our paper. Thank you!
@inproceedings{zhang2025ar,
  title={AR-1-to-3: Single Image to Consistent 3D Object via Next-View Prediction},
  author={Zhang, Xuying and Zhou, Yupeng and Wang, Kai and Wang, Yikai and Li, Zhen and Jiao, Shaohui and Zhou, Daquan and Hou, Qibin and Cheng, Ming-Ming},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={26273--26283},
  year={2025}
}
We recommend using Python>=3.10, PyTorch>=2.1.0, and CUDA>=12.1.
conda create --name ar123 python=3.10
conda activate ar123
pip install -U pip
# Ensure Ninja is installed
conda install Ninja
# Install the correct version of CUDA
conda install cuda -c nvidia/label/cuda-12.1.0
# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.22.post7
# For Linux users: Install Triton
pip install triton
# Install other requirements
pip install -r requirements.txt
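After installation, you can sanity-check the environment with a quick script (our own suggestion, not part of the official setup):

```python
import torch
import xformers

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("xformers:", xformers.__version__)
# The inference and training commands below assume a visible CUDA device.
assert torch.cuda.is_available(), "CUDA device not visible to PyTorch"
```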
We provide our rendered Objaverse subset under the Zero123++ configuration to facilitate reproducibility and further research. Please download it and place it into zero123plus_renders.
😃😃😃 We render and assemble this dataset using Blender. For beginners who are not familiar with Blender, we also provide mesh-rendering scripts that run automatically from the command line. Please refer to the render README for more details.
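For reference, the Zero123++ setting fixes six target viewpoints around the object. Below is a minimal sketch of how such camera poses can be computed; this is our own illustration with a z-up, look-at-origin convention, and the exact angles, radius, and conventions are defined by the render README, so treat the values here as assumptions:

```python
import numpy as np

def spherical_camera_pose(azimuth_deg, elevation_deg, radius=1.5):
    """Camera-to-world matrix for a camera on a sphere, looking at the origin (z-up)."""
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    # Camera position on the sphere.
    cam = radius * np.array([np.cos(el) * np.cos(az),
                             np.cos(el) * np.sin(az),
                             np.sin(el)])
    forward = -cam / np.linalg.norm(cam)          # look at the origin
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    c2w = np.eye(4)
    # OpenGL-style columns: [right, up, back, position].
    c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = right, up, -forward, cam
    return c2w

# Zero123++-style fixed target views: azimuths in 60-degree steps starting at 30 degrees,
# elevations alternating. The exact elevation values depend on the model version;
# verify them against the render README.
azimuths = [30, 90, 150, 210, 270, 330]
elevations = [20, -10, 20, -10, 20, -10]
poses = [spherical_camera_pose(a, e) for a, e in zip(azimuths, elevations)]
```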
Download checkpoints and put them into ckpts.
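If the weights are hosted on Hugging Face, they can also be fetched programmatically; the repo id below is a placeholder, so substitute the actual repository from the checkpoints link:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id: replace with the actual AR123 weights repository.
snapshot_download(repo_id="<AR123-WEIGHTS-REPO>", local_dir="ckpts")
```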
To synthesize multiple novel-view images from a single-view image, please run:
CUDA_VISIBLE_DEVICES=0 python run.py --base configs/ar123_infer.yaml --input_path examples/c912d471c4714ca29ed7cf40bc5b1717.png --mode itomv
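The bundled examples are objects on clean backgrounds. If your own inputs have cluttered backgrounds, a typical preprocessing step is background removal, e.g. with rembg; this is our assumption about input preparation, and run.py may already handle it internally:

```python
from PIL import Image
from rembg import remove

# "my_object.jpg" is a hypothetical input file.
img = Image.open("my_object.jpg")
rgba = remove(img)                    # RGBA image with the background removed
rgba.save("examples/my_object.png")   # pass this to run.py via --input_path
```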
To generate a 3D asset from the synthesized novel views, please run:
CUDA_VISIBLE_DEVICES=0 python run.py --base configs/ar123_infer.yaml --input_path examples/c912d471c4714ca29ed7cf40bc5b1717.png --mode mvto3d
You can also obtain a 3D asset directly from a single-view image by running:
CUDA_VISIBLE_DEVICES=0 python run.py --base configs/ar123_infer.yaml --input_path examples/c912d471c4714ca29ed7cf40bc5b1717.png --mode ito3d
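To process a whole folder of images, a simple wrapper can loop over run.py; this is a convenience sketch, not an official script:

```python
import glob
import subprocess

for img in sorted(glob.glob("examples/*.png")):
    subprocess.run(
        ["python", "run.py",
         "--base", "configs/ar123_infer.yaml",
         "--input_path", img,
         "--mode", "ito3d"],
        check=True,  # stop on the first failure
    )
```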
To train the default model, please run:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py \
--base configs/ar123_train.yaml \
--gpus 0,1,2,3,4,5,6,7 \
--num_nodes 1
Parameter description:
- --base: path to the configuration file
- --gpus: GPU device IDs to use
- --num_nodes: number of nodes to use
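For multi-node training, increase --num_nodes and launch the same command on every node (a hypothetical two-node example; adapt it to your cluster launcher):
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py \
--base configs/ar123_train.yaml \
--gpus 0,1,2,3,4,5,6,7 \
--num_nodes 2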
For 2D (novel-view) evaluation, please refer to eval_2d.py.
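Novel-view quality is typically measured with PSNR and SSIM (often alongside LPIPS). Below is a minimal sketch of the first two, assuming aligned 8-bit renders and ground-truth images; eval_2d.py is the authoritative implementation:

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical file names: a predicted view and its ground-truth render.
pred = np.asarray(Image.open("pred_view.png").convert("RGB"))
gt = np.asarray(Image.open("gt_view.png").convert("RGB"))

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```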
For 3D (geometry) evaluation, please refer to eval_3d.py.
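Geometry is commonly evaluated with the Chamfer distance between points sampled from the predicted and ground-truth meshes. A minimal sketch follows; the metric choice and normalization here are our assumptions, and eval_3d.py defines the exact protocol:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    """Symmetric squared Chamfer distance between two (N, 3) point clouds."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # nearest neighbor in B for each point in A
    d_ba, _ = cKDTree(points_a).query(points_b)  # nearest neighbor in A for each point in B
    return (d_ab ** 2).mean() + (d_ba ** 2).mean()

# Random stand-in point clouds; in practice, sample points from the predicted
# and ground-truth meshes after alignment and scale normalization.
a = np.random.rand(2048, 3)
b = np.random.rand(2048, 3)
print("Chamfer distance:", chamfer_distance(a, b))
```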
We thank the authors of the following projects for their excellent contributions to 3D generative AI!
In addition, we would like to express our sincere thanks to Jiale Xu for his invaluable assistance.

