Build the Docker image:
make build
You may need to install the NVIDIA Container Toolkit so that the GPUs work with Docker.
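To check that the GPUs are actually visible from inside the container, here is a minimal sketch (this assumes the image ships with PyTorch, which the project uses):

```python
import torch

# Should print True and at least one device name if the
# NVIDIA Container Toolkit is configured correctly.
print(torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_name(i))
```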
We provide one leaf point cloud with its extracted skeleton. The same leaf-skeleton pair is replicated inside the 'data/val' and 'data/train' folders; this is needed because the distribution losses require more than one sample to work.
With this leaf you can test data loading, training, and testing with the provided network weights. To perform a full training, you can download the BonnBeetClouds or the Pheno4D dataset and extract the leaf skeletons.
We also provide the checkpoints needed to compute the generative metrics used in the paper.
Download the following:
- the data: data.zip
- the network weights: best.ckpt
- the PointNet checkpoint: pointnet_on_single_view.pth
- the 3D+CLIP embedding checkpoint: pointmlp_8k.pth
Unzip data.zip, copy the data folder and the weights into the main folder, and copy the checkpoints into the src/metrics folder.
Execute
make download
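After these steps, the repository should look roughly like this (a sketch assembled from the paths mentioned in this README; files not mentioned here are omitted):

```
.
├── best.ckpt
├── config/
│   ├── config_bbc.yaml
│   ├── test_config.yaml
│   └── generate_config.yaml
├── data/
│   ├── train/
│   ├── val/
│   └── test/
└── src/
    └── metrics/
        ├── pointnet_on_single_view.pth
        └── pointmlp_8k.pth
```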
You can test the point cloud generation by running
make generate
You can train the network by running
make train
By default, this uses the leaves in data, which contains three folders: train, val, and test.
You can compute the metrics between the leaves in data and the generated leaves (you need to generate the samples first!) by running
make compute_gen_metrics
By default, this compares up to 1,000 leaves, using the 3D+CLIP model for all three metrics: FID, CMMD, and improved precision and recall. The realism threshold for precision and recall is set to 0.5.
All these parameters can be changed by passing them to compute_generative_metrics.py.
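For example, an invocation might look like this (the flag names are hypothetical; check the script's argument parser for the actual options):
python compute_generative_metrics.py --num_samples 500 --realism 0.7

As a reminder of what the FID part measures, here is a self-contained sketch of the standard Fréchet distance between Gaussian fits of two embedding sets; this is the textbook formula, not necessarily the exact implementation used by the script:

```python
import numpy as np
from scipy import linalg

def frechet_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Standard FID formula between two embedding sets of shape (n_samples, dim)."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    # Matrix square root of the covariance product.
    covmean = linalg.sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from sqrtm
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(cov_x + cov_y - 2.0 * covmean))
```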
We also provide a script to compute and save the embeddings extracted from the training data; these are used during training to compute the distribution losses. You can compute them by running
make compute_target
This automatically saves a tensor named after the dataset in the metrics folder.
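To inspect the saved target embeddings afterwards, here is a minimal sketch (the file name below is an assumption; it depends on your dataset name and on the extension the script uses):

```python
import torch

# Hypothetical path: the tensor is named after the dataset,
# so adjust the file name to match yours.
target_embeddings = torch.load("src/metrics/bbc.pt")
print(target_embeddings.shape)
```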
Notice that in the config folder there are three configuration files:
- config_bbc.yaml: dataset configuration file for training the network; it specifies information about the real-world data, the network parameters, and the size of the skeletons used
- test_config.yaml: configuration used to compute the generative metrics; it specifies the locations of the real-world data and of the generated samples
- generate_config.yaml: configuration for the generation procedure; it specifies information about the type of leaf skeleton and about the network
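To inspect any of these programmatically, here is a minimal sketch assuming PyYAML is installed (the actual keys are defined by the files themselves):

```python
import yaml

# Load the training configuration and list its top-level keys.
with open("config/config_bbc.yaml") as f:
    cfg = yaml.safe_load(f)
print(list(cfg.keys()))
```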
In general, we follow the Python PEP 8 style guidelines. Please install black to format your Python code properly. To run the black code formatter, use the following command:
black -l 120 path/to/python/module/or/package/
To optimize and clean up your imports, feel free to have a look at this solution for PyCharm.
