
Discrepancy between paper and code in the normal consistency loss #248

@lopeLH

Description


In the original 2DGS paper, the normal consistency loss for a given pixel is defined as:

$\mathcal{L}_n = \sum_i w_i \big( 1 - n_i^{\top} N \big)$

However, in the code, the loss is:

normal_error = (1 - (rend_normal * surf_normal).sum(dim=0))[None]

with surf_normal and rend_normal defined as:

surf_normal = depth_to_normal(viewpoint_camera, surf_depth)
surf_normal = surf_normal.permute(2,0,1)
# remember to multiply with accum_alpha since render_normal is unnormalized.
surf_normal = surf_normal * (render_alpha).detach()

# get normal map
# transform normal from view space to world space
render_normal = allmap[2:5]
render_normal = (render_normal.permute(1,2,0) @ (viewpoint_camera.world_view_transform[:3,:3].T)).permute(2,0,1)

If I understand correctly, this implies the following per-pixel normal consistency loss:

$\mathcal{L}_n = 1 - \big( \sum_i w_i n_i \big) ^{\top} \bigg( \lfloor \sum_i w_i \rfloor ~ N \bigg)$

where $N$ is surf_normal, $\sum_i w_i n_i$ is rend_normal, $\sum_i w_i$ is render_alpha, and $\lfloor \cdot \rfloor$ denotes the stop gradient operator.

However, this is not exactly the same as the formula in the paper.

Is this discrepancy intentional? What are the effects or differences in behaviour between the two formulas? It would be great to clarify this, since many papers adopt a similar loss term, inspired by and citing 2DGS, but it is unclear whether they use the paper or this implementation as the reference.
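For concreteness, here is a small NumPy sketch of the two formulas as I read them (the weights, normals, and symbol names are made up for illustration; stop-gradient is irrelevant for the forward value, so it is omitted). It also shows that the two losses coincide exactly when the accumulated alpha $\sum_i w_i$ equals 1, and differ otherwise:

```python
import numpy as np

# Hypothetical per-pixel data: blending weights w_i, per-splat normals n_i,
# and a surface normal N derived from the depth map (all normals unit-length).
rng = np.random.default_rng(0)
w = np.array([0.5, 0.3, 0.1])                    # alpha = sum(w) = 0.9
n = rng.normal(size=(3, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)    # rows are unit normals n_i
N = np.array([0.0, 0.0, 1.0])                    # depth-derived normal N

alpha = w.sum()

# Paper:  L_n = sum_i w_i * (1 - n_i^T N)
L_paper = np.sum(w * (1.0 - n @ N))

# Code:   L_n = 1 - (sum_i w_i n_i)^T (alpha * N)
rend_normal = (w[:, None] * n).sum(axis=0)       # unnormalized rendered normal
L_code = 1.0 - rend_normal @ (alpha * N)

# Writing S = sum_i w_i n_i^T N:
#   paper loss = alpha - S
#   code  loss = 1 - alpha * S
# so the two agree exactly when alpha == 1, and drift apart as alpha drops.
```

With `alpha = 0.9` the two values differ; renormalizing `w` so that `alpha = 1` makes them identical.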
