🌲 TreeMeshGPT: Artistic Mesh Generation with Autoregressive Tree Sequencing

Stefan Lionar · Jiabin Liang · Gim Hee Lee
Sea AI Lab · Garena · National University of Singapore

CVPR 2025


💡 About


This is the official repository of 🌲 TreeMeshGPT: Artistic Mesh Generation with Autoregressive Tree Sequencing.

TreeMeshGPT is an autoregressive Transformer designed to generate high-quality artistic meshes from input point clouds. Unlike conventional autoregressive models that rely on next-token prediction, TreeMeshGPT retrieves the next token from a dynamically growing tree structure, enabling localized mesh extensions and enhanced generation quality. Our novel Autoregressive Tree Sequencing method introduces an efficient face tokenization strategy, achieving a 22% compression rate compared to naive tokenization. This approach reduces training difficulty while allowing for the generation of meshes with finer details and consistent normal orientation. With 7-bit discretization, TreeMeshGPT supports meshes with up to 5,500 faces, while the 9-bit model extends this capability to 11,000 faces.
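The traversal idea can be sketched in a few lines. This is a toy illustration only, not the actual implementation in tokenizer.py; the function name `tree_sequence` and the token format are hypothetical. A stack of half-edges drives a depth-first walk over the face-adjacency graph: each popped half-edge either yields one new vertex token (growing the tree by one triangle) or a stop token, so an interior face costs roughly 2 tokens instead of the 9 coordinate tokens of naive per-face tokenization (2/9 ≈ 22%).

```python
from collections import defaultdict

def tree_sequence(faces):
    """Toy sketch of Autoregressive Tree Sequencing (not the real tokenizer).

    faces: list of (a, b, c) vertex-index triples.
    Returns a token list: ("v", index) for each emitted vertex and
    ("stop",) for each half-edge that cannot be extended further.
    """
    # Map each undirected edge to the faces that contain it.
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for e in [(a, b), (b, c), (c, a)]:
            edge_to_faces[frozenset(e)].append(fi)

    visited, tokens = set(), []
    for seed in range(len(faces)):
        if seed in visited:
            continue
        a, b, c = faces[seed]
        visited.add(seed)
        tokens += [("v", a), ("v", b), ("v", c)]   # seed triangle: 3 tokens
        stack = [(b, c), (c, a), (a, b)]           # frontier half-edges
        while stack:
            u, v = stack.pop()
            nxt = next((f for f in edge_to_faces[frozenset((u, v))]
                        if f not in visited), None)
            if nxt is None:
                tokens.append(("stop",))           # this edge cannot grow
                continue
            visited.add(nxt)
            w = next(x for x in faces[nxt] if x not in (u, v))
            tokens.append(("v", w))                # one new vertex per face
            stack += [(u, w), (w, v)]              # two new frontier edges
    return tokens
```

For two triangles sharing an edge, the walk emits four vertex tokens and four stop tokens; as the mesh grows, most faces cost one vertex token plus a share of the stop tokens, which is where the compression comes from.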

Table of Contents
  1. Code availability
  2. Getting started
  3. Inference
  4. Training
  5. Acknowledgement
  6. Citation

📌 Code Availability

  • Google Colab demo – Run TreeMeshGPT in your browser.
  • Inference - Generate an artistic mesh conditioned on a point cloud sampled from a dense mesh (inference.py)
  • Tokenizer - Create input-output pairs for Autoregressive Tree Sequencing (tokenizer.py)
  • Training script - Training script with dummy data

🚀 Getting Started

To set up TreeMeshGPT, follow the steps below:

1. Clone the repository and create a conda environment

git clone https://github.com/sail-sg/TreeMeshGPT.git
cd TreeMeshGPT

conda create -n tmgpt python=3.11
conda activate tmgpt

2. Install PyTorch≥2.5.0 with CUDA support

pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124

PyTorch ≥ 2.5.0 is required because we use FlexAttention for the point cloud condition during training. Older versions should work for inference.

3. Install additional dependencies

pip install -r requirements.txt

4. Download pre-trained models

Download and arrange the pre-trained models using the following commands:

mkdir checkpoints

# 7-bit model
gdown 1IERM76szBQq9oAMoFw1Sbgp3kx97_Kib
mv treemeshgpt_7bit.pt checkpoints/

# 9-bit model
gdown 19Auv48x7kgoODRS7dij8QijWzDqRjq97
mv treemeshgpt_9bit.pt checkpoints/

Alternatively, you can download them manually from the following links and place them inside the checkpoints folder:

🎨 Inference

We provide a demo of artistic mesh generation conditioned on a point cloud sampled from a dense mesh. The dense meshes are generated using the text-to-3D model from Luma AI. Our demo can also be run on Google Colab.

🔹 Run the Inference Script

Use the following command to generate a mesh:

python inference.py

The output will be saved to the generation folder.

🔹 Arguments Summary

| Argument | Type | Default | Description |
|---|---|---|---|
| `--version` | str | `7bit` | Select model version: `7bit` or `9bit`. |
| `--ckpt_path` | str | `./checkpoints/treemeshgpt_7bit.pt` (7-bit) / `./checkpoints/treemeshgpt_9bit.pt` (9-bit) | Path to the model checkpoint. |
| `--mesh_path` | str | `demo/luma_cat.glb` | Path to the input mesh file. |
| `--decimation` | bool | `True` | Enable or disable mesh decimation. Recommendation: `True` if the input is a dense mesh, `False` if it is an Objaverse mesh. |
| `--decimation_target_nfaces` | int | `6000` | Target number of faces after decimation. Use a smaller number if the generated mesh contains too many small triangles. |
| `--decimation_boundary_deletion` | bool | `True` (7-bit) / `False` (9-bit) | Allow deletion of boundary vertices in the decimated mesh. Set to `True` if the generated mesh contains too many small triangles. |
| `--sampling` | str | `uniform` (7-bit) / `fps` (9-bit) | Point sampling method: `uniform` for 7-bit, `fps` for 9-bit. |

🔹 Other example usages

  • Run with 9-bit model and a mesh specified in --mesh_path:

    python inference.py --version 9bit --mesh_path demo/luma_bunny.glb
  • Set --decimation_boundary_deletion to True and optionally use a lower --decimation_target_nfaces if the default configuration results in meshes with too many small triangles:

    python inference.py --version 9bit --mesh_path demo/luma_box.glb --decimation_boundary_deletion True --decimation_target_nfaces 2000
  • Run without decimation (e.g., for Objaverse evaluation):

    python inference.py --decimation False --mesh_path demo/objaverse_pig.obj

🛠️ Training

We provide a training script with dummy data. First, install wandb:

pip install wandb

The dummy data is located in dummy_data/mesh. To further help you get started, we also provide the subset of our dataset containing meshes with fewer than 500 faces, which was used for the tokenization-effectiveness experiment in our paper. The dataset can be downloaded here:

📦 mesh_500.zip.

Launch the script below and keep it running in a separate process to apply data augmentation and tokenize the meshes. It creates .pkl files that are loaded by the training dataloader.

python train_create_pkl.py

Ideally, iterating through the entire dataset in train_create_pkl.py should be as fast as, or faster than, a single training epoch. To ensure this, consider running multiple instances of the script in parallel, each handling a different subset of the data, or implementing a custom queue-based pipeline that processes meshes immediately as they are needed during training.
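The queue-based variant can be sketched as a producer/consumer pipeline. This is a minimal illustration under assumptions: `tokenize_mesh`, `producer`, and `run_pipeline` are hypothetical names, `tokenize_mesh` stands in for the augmentation and tokenization that train_create_pkl.py performs, and a real pipeline would use processes rather than threads to sidestep the GIL for CPU-bound tokenization.

```python
import queue
import threading

def tokenize_mesh(path):
    """Stand-in for the augmentation + tokenization in train_create_pkl.py."""
    return {"path": path}  # dummy payload

def producer(paths, q):
    # Each worker tokenizes its own shard and signals completion
    # with a None sentinel.
    for p in paths:
        q.put(tokenize_mesh(p))
    q.put(None)

def run_pipeline(paths, n_workers=2):
    q = queue.Queue(maxsize=64)  # bounded queue applies backpressure
    shards = [paths[i::n_workers] for i in range(n_workers)]
    workers = [threading.Thread(target=producer, args=(s, q)) for s in shards]
    for w in workers:
        w.start()
    done, items = 0, []
    while done < n_workers:
        item = q.get()
        if item is None:
            done += 1
        else:
            items.append(item)  # in training, hand this to the dataloader
    for w in workers:
        w.join()
    return items
```

The bounded queue keeps tokenization from running arbitrarily far ahead of the consumer, which matches the goal of keeping data preparation in step with the training epoch.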

To train the model, launch:

accelerate launch --mixed_precision=fp16 train.py

Note: This script only works with batch size 1. Adjust MESH_IN and PKL_OUT in train_create_pkl.py, as well as TRAIN_PATH and VAL_PATH in train.py, to your own paths.

Acknowledgement

Our code is built on top of the PivotMesh codebase. Our work is also inspired by several related projects.

Citation

If you find our code or paper useful, please consider citing us:

@article{lionar2025treemeshgpt,
  title={TreeMeshGPT: Artistic Mesh Generation with Autoregressive Tree Sequencing},
  author={Lionar, Stefan and Liang, Jiabin and Lee, Gim Hee},
  journal={arXiv preprint arXiv:2503.11629},
  year={2025}
}
