How to extend Ogre2DepthCamera to output normals #1236

@TheusStremens

Description

Hi there, I'm trying to extend the Ogre2DepthCamera in gz-rendering8 to also provide the normals as an output. My first idea was to have a simple Ogre camera and set a material on it, similar to the custom_shaders example; however, as pointed out in this issue, that is no longer possible due to the big changes introduced by Ogre2 compared with Ogre 1. So, I decided to try to tweak the Ogre2DepthCamera to also output the normals, similar to the gazebo-classic version.

The goal is to enable a future sonar simulation in Gazebo, porting the OSG sonar simulation to Gazebo.

However, I have no experience with Ogre2/gz-rendering, and it has been really hard to develop this without documentation or references, even using AI. I hope that through this issue the community can help me evaluate the directions I'm taking and clarify some Ogre2 concepts tied to gz-rendering.

My current implementation is the fork: https://github.com/TheusStremens/gz-rendering/tree/feat/add_normals_to_depth_camera

Desired behavior

The minimal desired behavior is to output the XYZ normals. There are also these extra features, which are geared more towards the sonar simulation's needs:

  • If the model has a normal map, it should be used; otherwise, use the geometric normals. This enhances the sonar simulation.
  • The normalized dot product between the view-space normal and the direction from the fragment back to the camera. This represents how much "reflection" is received back at the sonar.

Alternatives considered

I started by extending the base DepthCamera to also have a connection for the normals point cloud, and created a new example derived from the depth_camera one, with a base scene containing a pump model that has a custom brick-wall normal map. This way I can tweak the Ogre2DepthCamera and see the results in terms of depth and normal output.

My first attempt was adding new shaders for the depth camera material:

// Material for Normal
material DepthCameraNormal
{
  technique
  {
    pass
    {
      vertex_program_ref DepthCameraNormalVS { }
      fragment_program_ref DepthCameraNormalFS { }
    }
  }
}

And the vertex and fragment shaders:

#version ogre_glsl_ver_330

vulkan_layout( OGRE_POSITION ) in vec4 vertex;
vulkan_layout( OGRE_NORMAL ) in vec3 normal;

vulkan( layout( ogre_P0 ) uniform Params { )
  uniform mat4 worldViewProj;
  uniform mat4 viewMatrix;
vulkan( }; )

out gl_PerVertex
{
  vec4 gl_Position;
};

vulkan_layout( location = 0 )
out vec3 viewNormal;

void main()
{
  gl_Position = worldViewProj * vertex;
  // Transform normal to view (camera) space.
  // Caveat: this skips the world (model) transform and ignores non-uniform
  // scaling; strictly, normals should be transformed by the inverse
  // transpose of the worldView matrix. Also, Ogre 2.x v2 meshes may encode
  // normals as QTangents, in which case this attribute is not a plain normal.
  viewNormal = normalize((viewMatrix * vec4(normal, 0.0)).xyz);
}

And the fragment shader:

#version ogre_glsl_ver_330

vulkan_layout( location = 0 )
in vec3 viewNormal;

vulkan_layout( location = 0 )
out vec4 fragColor;

void main()
{
  fragColor = vec4(normalize(viewNormal), 1.0);
}

Inside the Ogre2DepthCamera I created the new texture, scene pass, and output texture handling, plus a
material switcher to apply this new material with the new shaders:

void Ogre2DepthCameraNormalMaterialSwitcher::cameraPreRenderScene(
    Ogre::Camera * /*_cam*/)
{
  auto engine = Ogre2RenderEngine::Instance();
  auto ogreRoot = engine->OgreRoot();
  Ogre::HlmsManager *hlmsManager = ogreRoot->getHlmsManager();

  this->materialMap.clear();
  this->datablockMap.clear();

  // Construct one now so that datablock->setBlendblock is as fast as possible
  const Ogre::HlmsBlendblock *noBlend =
      hlmsManager->getBlendblock(Ogre::HlmsBlendblock());

  // Get the normal material
  Ogre::MaterialPtr normalMaterial =
      Ogre::MaterialManager::getSingleton().getByName("DepthCameraNormal",
          Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);

  if (!normalMaterial)
  {
    gzerr << "DepthCameraNormal material not found" << std::endl;
    hlmsManager->destroyBlendblock(noBlend);
    return;
  }

  // Ensure material is loaded
  if (normalMaterial->getLoadingState() == Ogre::Resource::LOADSTATE_UNLOADED)
    normalMaterial->load();

  // Iterate through all items in the scene
  auto itor = this->scene->OgreSceneManager()->getMovableObjectIterator(
      Ogre::ItemFactory::FACTORY_TYPE_NAME);

  while (itor.hasMoreElements())
  {
    Ogre::MovableObject *object = itor.peekNext();
    Ogre::Item *item = static_cast<Ogre::Item *>(object);

    const size_t numSubItems = item->getNumSubItems();
    for (size_t i = 0; i < numSubItems; ++i)
    {
      Ogre::SubItem *subItem = item->getSubItem(i);

      if (!subItem->getMaterial().isNull())
      {
        // Store the original material
        this->materialMap.push_back({subItem, subItem->getMaterial()});
        // Set the normal material
        subItem->setMaterial(normalMaterial);
      }
      else
      {
        // This item uses an HLMS (High Level Material System) datablock.
        // Store the original datablock so it can be restored after the
        // pass, then force the low-level normal material.
        // NOTE: datablockRestoreList is an extra switcher member:
        //   std::vector<std::pair<Ogre::SubItem *, Ogre::HlmsDatablock *>>
        Ogre::HlmsDatablock *datablock = subItem->getDatablock();
        this->datablockRestoreList.push_back({subItem, datablock});

        // Force the low-level normal material
        subItem->setMaterial(normalMaterial);

        // Disable blending (transparency) to avoid artifacts
        const Ogre::HlmsBlendblock *blendblock = datablock->getBlendblock();
        if (blendblock->mSourceBlendFactor != Ogre::SBF_ONE ||
            blendblock->mDestBlendFactor != Ogre::SBF_ZERO ||
            blendblock->mBlendOperation != Ogre::SBO_ADD ||
            (blendblock->mSeparateBlend &&
             (blendblock->mSourceBlendFactorAlpha != Ogre::SBF_ONE ||
              blendblock->mDestBlendFactorAlpha != Ogre::SBF_ZERO ||
              blendblock->mBlendOperationAlpha != Ogre::SBO_ADD)))
        {
          hlmsManager->addReference(blendblock);
          this->datablockMap[datablock] = blendblock;
          datablock->setBlendblock(noBlend);
        }
      }
    }
    itor.moveNext();
  }

  // Remove the reference count on noBlend we created
  hlmsManager->destroyBlendblock(noBlend);
}

//////////////////////////////////////////////////
void Ogre2DepthCameraNormalMaterialSwitcher::cameraPostRenderScene(
    Ogre::Camera * /*_cam*/)
{
  auto engine = Ogre2RenderEngine::Instance();
  Ogre::HlmsManager *hlmsManager = engine->OgreRoot()->getHlmsManager();

  // Restore original blending to modified datablocks
  for (const auto &[datablock, blendblock] : this->datablockMap)
  {
    datablock->setBlendblock(blendblock);
    // Remove the reference we added
    hlmsManager->destroyBlendblock(blendblock);
  }
  this->datablockMap.clear();

  // Restore original low-level materials
  for (auto &subItemMat : this->materialMap)
  {
    subItemMat.first->setMaterial(subItemMat.second);
  }
  this->materialMap.clear();

  // Restore original HLMS datablocks. Note: re-setting the item's
  // *current* datablock here would be a no-op, because after setMaterial()
  // it points at the low-level wrapper, not the original datablock.
  for (auto &subItemDatablock : this->datablockRestoreList)
  {
    subItemDatablock.first->setDatablock(subItemDatablock.second);
  }
  this->datablockRestoreList.clear();
}

However, when I display the colored normals, they seem to be wrong: a lot of artifacts, and problems with
black faces of the cube even when the face is towards the camera, as shown in the picture below.

Image

After debugging and checking that the space was correct, it seems that the normals Ogre provides are
already weird. If, instead of using the Ogre normals, I calculate them directly in the fragment shader:

void main()
{
  // Original code from OSG.
  /*
    vec3 nViewPos = normalize(viewPos);
    vec3 nViewNormal = normalize(viewNormal);
    result.z = abs(dot(nViewPos, nViewNormal));
  */

  // The first attempt was to use the view normal from the vertex shader, but as
  // explained above, those normals are incorrect. So we compute the normal
  // per-fragment here using derivatives of the view position.

  vec3 dpdx = dFdx(viewPos);
  vec3 dpdy = dFdy(viewPos);
  vec3 nViewNormal = normalize(cross(dpdx, dpdy)); // geometric normal in view space.

  // Make it consistent for visualization.
  if (!gl_FrontFacing) {
    nViewNormal = -nViewNormal;
  }

  // In typical view space, the camera looks down -Z, and points in front of the camera
  // have negative z, so viewPos points from camera to fragment, and the direction from
  // fragment to camera is -viewPos.
  vec3 nViewPos = normalize(-viewPos);
  // Calculate how much the surface faces the camera (like the reference shader).
  float normalDotView = max(dot(nViewPos, nViewNormal), 0.0);
  // RGB = normal, A = dot product.
  fragColor = vec4(nViewNormal, normalDotView);
}

It works, despite not being smooth, as shown in the picture below.

Image

So, the AI made the following claim, which I don't know is true; I would be glad if
someone could confirm or refute it:

"You cannot use a material switcher to obtain correct normals because replacing the material bypasses
the original HLMS PBS shading pipeline—losing per-material data like normal maps, tangents,
deformation, and TBN reconstruction—so the shader no longer has the geometry and material context
needed to compute accurate normals."

And it suggested going in an HLMS direction. I took a look at some of the templates in gz-rendering and saw that 800.PixelShader_piece_ps.any has a LoadNormalData, which is essentially what I want: the normal map if there is one, the geometric normal otherwise.

So, my last try was to add an HLMS listener in the Ogre2DepthCamera and modify
800.PixelShader_piece_ps.any, inside the hlms_render_depth_only property, to include:

@property( gz_depthcam_normals )
    // view-space direction from fragment -> camera (camera is at origin in view space)
    float3 V = normalize( -inPs.pos );

    // Normal must be in the same space as V.
    // In this template, pixelData.normal is used with viewDir computed from -inPs.pos,
    // so pixelData.normal is in view/camera space at this stage.
    float3 N = normalize( pixelData.normal );

    float normalDotView = saturate( dot( N, V ) ); // = max(dot(N,V),0)

    outPs_colour0 = float4( N * 0.5 + 0.5, normalDotView );
@else
... same as before

In the Ogre2DepthCamera the listener:

class NormalsHlmsListener final : public Ogre::HlmsListener
{
public:
  explicit NormalsHlmsListener(Ogre::uint32 passIdentifier)
  : passIdentifier_(passIdentifier) {}

  void preparePassHash(const Ogre::CompositorShadowNode* shadowNode,
                       bool casterPass, bool dualParaboloid,
                       Ogre::SceneManager* sceneManager,
                       Ogre::Hlms* hlms) override
  {
    (void)shadowNode;
    (void)casterPass;
    (void)dualParaboloid;

    bool enable = false;

    const Ogre::CompositorPass* pass = sceneManager->getCurrentCompositorPass();
    if (pass)
    {
      const Ogre::CompositorPassDef* def = pass->getDefinition();
      if (def && def->mIdentifier == passIdentifier_)
        enable = true;
    }

    // IMPORTANT: set both ways so it doesn't "leak" across passes
    hlms->_setProperty("gz_depthcam_normals", enable ? 1 : 0);
  }

private:
  Ogre::uint32 passIdentifier_{0};
};

However, there is a caveat about listener conflicts: setting the listener this way may cause
problems, even between multiple depth cameras.

// Add Hlms listener to set the correct normal output when rendering the normals target.
auto hlmsManager = ogreRoot->getHlmsManager();
Ogre::Hlms* hlmsPbs = hlmsManager->getHlms(Ogre::HLMS_PBS);
this->dataPtr->normalsHlmsListener =
std::make_unique<NormalsHlmsListener>(kDepthCamNormalsPassId);

// NOTE: this overwrites any existing listener on PBS.
// If gz-rendering already set one, you'll need a small "multiplexer" listener.
hlmsPbs->setListener(this->dataPtr->normalsHlmsListener.get());

gzdbg << "Depth texture created" << std::endl;

this->CreateWorkspaceInstance();

Despite the potential issue with the HLMS listener, the result is exactly what I need (the pump model has a brick normal map; all the other objects don't):

Image

I also confirmed that the depth output is still valid, and both outputs are OK.

I saw that there is an Ogre2GzHlmsSphericalClipMinDistance, which is also an HLMS listener, and that Ogre2GpuRays uses it. So I thought that one possibility is:

  1. Create an Ogre2GzNormalOutput HLMS listener, similar to Ogre2GzHlmsSphericalClipMinDistance, but one that enables the property gz_depthcam_normals, with a public method to enable/disable it.
  2. Add Ogre2GzNormalOutput to the Ogre2GzHlmsPbs and expose it.
  3. Add a method to the RenderEngine to obtain the normal-output customization.
  4. In the Ogre2DepthCamera, the normal pass should be in a different workspace, and its Render() should do something similar to GPU rays:
  Ogre2GzNormalOutput &normalCustomization =
      engine->normalOutput();


  this->UpdateDepthRenderTarget();
  normalCustomization.enable();
  this->UpdateNormalRenderTarget();
  normalCustomization.disable();

So, in the end, I tried a material switcher approach that did not work, and an HLMS listener approach
that works but could be architecturally wrong. Could someone shed some light on both paths?
Thanks in advance.
