Florian Hofherr1,2 Bjoern Haefner1,2,† Daniel Cremers1,2
1Technical University of Munich
2Munich Center for Machine Learning
WACV 2025
The bidirectional reflectance distribution function (BRDF) is an essential tool to capture the complex interaction of light and matter. Recently, several works have employed neural methods for BRDF modeling, following various strategies, ranging from utilizing existing parametric models to purely neural parametrizations. While all methods yield impressive results, a comprehensive comparison of the different approaches is missing in the literature. In this work, we present a thorough evaluation of several approaches, including results for qualitative and quantitative reconstruction quality and an analysis of reciprocity and energy conservation. Moreover, we propose two extensions that can be added to existing approaches: A novel additive combination strategy for neural BRDFs that split the reflectance into a diffuse and a specular part, and an input mapping that ensures reciprocity exactly by construction, while previous approaches only ensure it by soft constraints.
```bibtex
@inproceedings{hofherr2025neuralBRDF,
  title     = {On Neural BRDFs: A Thorough Comparison of State-of-the-Art Approaches},
  author    = {Hofherr, Florian and Haefner, Bjoern and Cremers, Daniel},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2025}
}
```
```shell
conda env create -f environment.yml
conda activate n-brdf
```
```shell
bash download_data.sh
```
This downloads both the DiLiGenT-MV real-world dataset (8 GB) and the synthetic dataset (37 GB). The real-world data contains the simplified meshes (see Section B.2 in the appendix of the paper); all parts of the DiLiGenT-MV real-world dataset that are not relevant to this work have been removed. Please refer to the official DiLiGenT-MV website for the full dataset, including the meshes at the original resolution. Both datasets follow the folder structure of the DiLiGenT-MV dataset.
Training is started by running `train.py`. The model and the object to be fitted are specified in the configs (see the next section). Training should take between 30 minutes and 1 hour.
The files in `configs/` are used to set the training parameters. The main file is `train.yaml`. Use `-/model_config/<name_model_config_file>@_here_` to choose a model specified by one of the files in `configs/model_config`. The object to be trained on is specified by setting `name_experiment` to the name of one of the object folders in the datasets; the dataloader automatically determines whether the real or the synthetic dataset is used. Use `dir` in `train.yaml` to specify the output folder for the experiment. Besides the results (see the Inspecting the Results section), Hydra also copies the selected configs for this experiment into this folder. Please refer to the paper for a more detailed discussion of the parameters.
For each object, several quantities, such as the Laplace-Beltrami eigenfunctions, ray-mesh intersections, and shadow maps, can be precomputed once and reused across all experiments involving that object to accelerate training. Precomputation is triggered automatically if `use_precomputed_quantities: True` is set in `configs/train.yaml` and no existing files are found. The results are stored in `_precomputed_quantities/` within the respective object's folder in `data/`. Depending on the mesh size, precomputation may take up to an hour.
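In `configs/train.yaml` this is controlled by a single flag (a minimal fragment; surrounding keys are omitted):

```yaml
# precompute (or reuse) Laplace-Beltrami eigenfunctions, ray-mesh
# intersections, and shadow maps; results are stored in
# _precomputed_quantities/ inside the object's folder in data/
use_precomputed_quantities: True
```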
When you run the training, Hydra creates the folder specified under `dir:` in `train.yaml`, where all results are saved. Metrics and visualizations are logged to TensorBoard (view them with `tensorboard --logdir <path/to/experiment>`). TensorBoard recursively shows all experiments in the specified directory, so you can compare multiple experiments. Model checkpoints are saved periodically during training.
For the experiments with the Fresnel Microfacet BRDF (FMBRDF), we adapted the code from the official repository into our framework. The FMBRDF implementation is located in `src/fmbrdf.py`, and we provide the original checkpoints in the `fmbrdf_models/` directory.
The real-world dataset is a subset of the DiLiGenT-MV dataset, where parts unnecessary for this work (such as the estimated meshes) have been removed. Additionally, the meshes have been simplified using the qslim implementation in libigl (see Section B.2 in the appendix of the paper).
The (semi-)synthetic dataset uses a publicly available collection of meshes for the object geometries. For the materials, 25 BRDFs were randomly drawn from the MERL BRDF database and rendered on randomly selected meshes.
Parts of the code have been adapted from the implementation of the 2022 ECCV paper Intrinsic Neural Fields.