Project Page | Paper | Data
This repository contains an implementation for the SIGGRAPH Asia 2024 paper A Simple Approach to Differentiable Rendering of SDFs.
We present a simple algorithm for differentiable rendering by expanding the lower-dimensional boundary integral into a thin band that is easy to sample. This in turn makes it easy to integrate rendering into gradient-based optimization pipelines, enabling simple, robust, accurate, and efficient 3D reconstruction and inverse rendering.
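As a rough intuition for the thin-band relaxation, the sketch below assigns a nonzero weight to every sample whose SDF value lies within a band of half-width `eps` around the zero level set. The specific bump function and the `band_weight` helper are illustrative assumptions, not the paper's exact kernel; the point is that the band has nonzero measure, so ordinary Monte Carlo samples can land on it.

```python
import numpy as np

def band_weight(sdf_vals, eps=0.02):
    """Relaxed boundary indicator: nonzero only inside the band |f| < eps.

    Illustrative smooth bump (NOT the exact kernel from the paper).
    """
    t = np.clip(np.abs(sdf_vals) / eps, 0.0, 1.0)
    return (1.0 - t) ** 2  # falls smoothly to zero at the band edge

# Example: sphere SDF f(x) = |x| - 0.5, sampled uniformly in [-1, 1]^3
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(100_000, 3))
f = np.linalg.norm(pts, axis=1) - 0.5
w = band_weight(f, eps=0.05)

# Only samples near the surface contribute, but they form a set of nonzero
# measure, so a plain volume estimator picks them up.
print(f"{(w > 0).mean():.3%} of samples land inside the band")
```

Compare this with the unrelaxed case: the zero level set itself has measure zero, so uniform volume samples would hit it with probability zero.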
First, create a new conda environment:
conda create -n relax python=3.10
conda activate relax
Then install the following packages:
pip install mitsuba==3.5
pip install numpy matplotlib fastsweep scikit-fmm scikit-image lpips
conda install -c conda-forge ffmpeg
Lastly, download the data files and unzip them under the project folder.
To start running the code, simply try
python train.py --scene ficus
More generally, users can specify the configuration, experiment name, and many other parameters (see the code for details) through
python train.py --scene <SCENE> --config <CONFIG> --name <EXP_NAME> --lr <LEARNING_RATE> --batch_size <BATCH_SIZE> --spp <SPP>
The required --scene argument specifies the target object, which should have the same name as the subfolder under data/. When --config is not specified, the code will run the default.json configuration. We also provide the turbo.json configuration for faster optimization on simpler objects. The --name argument specifies the experiment name, which is by default %Y-%m-%d_%H-%M-%S.
After running train.py, we can evaluate the final results by
python evaluate/eval_all.py --scene <SCENE> --exp_dir <EXP_DIR>
Here the --exp_dir should be the path to the subfolder under exp/.
Here we provide a brief walkthrough of the code files in this project.
`data/`
- For this codebase, we limit the data to be synthetic and within the unit cube.
- Each subdirectory should contain the data needed for that one scene, which requires the following parts:
  - The shapes, usually as `obj` or `ply` files.
  - The materials of the shapes, specified in the `XML` file.
  - The light source, specified in the `XML` file.
  - The `XML` file that specifies how the shapes, materials, and lighting are put together as a scene. Please see the XML file format for more info.
`configs/`
- `default.json` is the default configuration.
- `fast.json` provides a faster optimization configuration, often more suitable for only optimizing the geometry (e.g. vbunny).
`evaluate/`
- `eval_2d.py` evaluates novel view synthesis and relighting.
- `eval_3d.py` evaluates the Chamfer distance.
- `eval_inverse.py` evaluates the normal and albedo maps.
- `eval_all.py` calls the above three files to evaluate them all.
`integrators/`
- `base.py` contains the base integrator that is the superclass of all other custom integrators.
- `direct.py` contains non-differentiable direct integrators.
- `direct_diff.py` contains differentiable direct integrators.
- `albedo.py`, `geometry.py`, and `normal.py` contain specialized integrators for inverse rendering evaluation.
- `utils.py` contains global parameters and utility functions.
- `sdf.py` implements the custom `SDF` class and sphere tracing functions.
- `train.py` is the main entry point for optimization. It loads the ground-truth synthetic scene and the cameras to render reference images, runs the main optimization loop, and saves parameters accordingly.
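For readers unfamiliar with sphere tracing, the sketch below shows the standard algorithm on an analytic SDF. It is a simplified, scalar illustration under our own naming (`sphere_trace` is a hypothetical helper); the repository's `sdf.py` traces grid-based SDFs in batched fashion instead.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, t_max=4.0, eps=1e-4, max_steps=128):
    """March a ray by repeatedly stepping the distance the SDF guarantees is free.

    Returns the hit distance along the ray, or None if the ray escapes.
    """
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:      # close enough to the zero level set: report a hit
            return t
        t += d           # safe step: no surface lies within distance d
        if t > t_max:
            break
    return None

# Example: sphere of radius 0.5 at the origin, ray shot from z = -2 toward it
sphere = lambda p: np.linalg.norm(p) - 0.5
t_hit = sphere_trace(sphere, np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(t_hit)  # ≈ 1.5 (first intersection at z = -0.5)
```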
In addition to the usual shape files and scene XML files, users should double-check that the target shapes are within the unit cube. We provide render.py for a quick check: it loads a scene and renders a single image from the front. If the textures are jointly optimized, the XML file should also contain a dummy placeholder similar to the one below, and the dummy_sdf should be the first shape in the XML file.
<bsdf type="principled" id="main-bsdf">
    <texture type="volume" name="base_color">
        <volume type="gridvolume" name="volume">
            <string name="filename" value="./data/textures/red.vol"/>
        </volume>
    </texture>
    <texture type="volume" name="roughness">
        <volume type="gridvolume" name="volume">
            <string name="filename" value="./data/textures/gray.vol"/>
        </volume>
    </texture>
    <float name="specular" value="1.000000"/>
</bsdf>
<shape type="sphere" id="dummy_sdf">
    <transform name="to_world">
        <scale value="0.00001"/>
        <translate z="0"/>
        <translate y="20"/>
        <translate x="0"/>
    </transform>
    <ref id="main-bsdf" name="bsdf"/>
</shape>
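The unit-cube requirement mentioned above can also be checked numerically before rendering. The helper below is a hypothetical sketch, not part of the repository; it assumes the cube convention [-0.5, 0.5]^3, so adjust the bounds if your data uses [0, 1]^3 instead.

```python
import numpy as np

def check_unit_cube(vertices, margin=0.0):
    """Return True if all vertices lie within the unit cube [-0.5, 0.5]^3.

    Hypothetical helper; the cube convention is an assumption.
    """
    v = np.asarray(vertices)
    lo, hi = v.min(axis=0), v.max(axis=0)
    inside = bool(np.all(lo >= -0.5 - margin) and np.all(hi <= 0.5 + margin))
    if not inside:
        print(f"bounding box {lo} .. {hi} exceeds the unit cube")
    return inside
```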
Here we list a few noteworthy parameters that are up to users' design choices:
- `hide_env` decides whether to hide the environment map background.
- `num_sensor_theta` and `num_sensor_phi` specify the sensor distribution. By default the sensors spread across the unit hemisphere, with `num_sensor_theta` sensors on the first row and `num_sensor_phi` rows of sensors. The number of sensors decreases as the row index increases.
- `spp_grad` specifies how many samples are backpropagated for gradient computation.
- `sdf_mode` determines either `linear` or `cubic` interpolation of the SDF grid. The `cubic` interpolation gives smooth geometry and is theoretically more robust, but it is also much slower than the `linear` interpolation.
- `sdf_eps` is the `eps` parameter in the paper, which controls the extent of relaxation.
- `sdf_deriv_eps` is the threshold on the SDF directional derivative.
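One possible reading of the sensor-distribution description is sketched below: each row sits at a fixed elevation, the row nearest the equator gets `num_sensor_theta` azimuthal positions, and higher rows get proportionally fewer. The cosine-based thinning rule and the `hemisphere_sensors` name are assumptions for illustration; the codebase's exact spacing may differ.

```python
import numpy as np

def hemisphere_sensors(num_sensor_theta=8, num_sensor_phi=3, radius=1.0):
    """Place candidate camera positions on the upper hemisphere (y up).

    Illustrative sketch: row 0 (near the equator) gets num_sensor_theta
    azimuthal positions; rows closer to the pole get fewer.
    """
    positions = []
    for row in range(num_sensor_phi):
        elevation = (row + 0.5) / num_sensor_phi * (np.pi / 2)  # 0 = equator
        n = max(1, round(num_sensor_theta * np.cos(elevation)))  # thin out per row
        for k in range(n):
            azimuth = 2 * np.pi * k / n
            positions.append([radius * np.cos(elevation) * np.cos(azimuth),
                              radius * np.sin(elevation),
                              radius * np.cos(elevation) * np.sin(azimuth)])
    return np.array(positions)

cams = hemisphere_sensors(num_sensor_theta=8, num_sensor_phi=3)
print(cams.shape)  # (16, 3): rows of 8, 6, and 2 sensors
```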
Well, looks like you have reached the bottom of this README; have fun playing with the code!
If you find our work useful in your research, please consider citing:
@inproceedings{zichen2024relaxedboundary,
author = {Wang, Zichen and Deng, Xi and Zhang, Ziyi and Jakob, Wenzel and Marschner, Steve},
title = {A Simple Approach to Differentiable Rendering of SDFs},
booktitle = {ACM SIGGRAPH Asia 2024 Conference Proceedings},
year = {2024},
}
