
Lightcalibration fitting box #2089

Open
Ibujah wants to merge 26 commits into dev_FittingBox from lightcalibration_FittingBox

Conversation


@Ibujah Ibujah commented Mar 19, 2026

Lighting calibration (implements Self-calibrated Near-light Photometric Stereo using a Geometric Proxy)

Uncalibrated near-light photometric stereo method, where the parameters of a physics-based illumination model for LEDs are estimated on the fly from a geometric proxy obtained, e.g., by MVS. A graduated optimisation procedure progressively refines the illumination model in a scale-aware manner.

Algorithms:

  • Directional lighting: linear estimation
  • Directional lighting: robust estimation with Huber loss
  • Directional lighting RGB intensity: robust estimation with Huber loss
  • Punctual lighting: robust estimation with extra weighting on the terminator
  • Punctual lighting RGB intensity: robust estimation with Huber loss

Classes:

  • Generic lighting data, directional lighting data, punctual lighting data, with saving and loading methods
  • Sphere data for lighting calibration, built from center, radius, calibration and image

@Ibujah Ibujah requested a review from jmelou March 19, 2026 09:58
@Ibujah Ibujah self-assigned this Mar 19, 2026
@fabiencastan fabiencastan added this to the 3.4.0 milestone Mar 19, 2026

Copilot AI left a comment

Pull request overview

This PR introduces an on-the-fly lighting calibration workflow (directional / point / LED) and associated data models/IO, and wires “lightId” into SfM views so multiple images can share a light during calibration.

Changes:

  • Add View::lightId support and load it from SfM JSON.
  • Add new lighting estimation module types (calibration spheres, lighting models + JSON IO, Ceres-based minimisations) and update the lighting calibration CLI accordingly.
  • Add a new pipeline tool to convert lighting parameters into a specific camera/view frame.

Reviewed changes

Copilot reviewed 23 out of 23 changed files in this pull request and generated 13 comments.

Summary per file:

  • src/software/utils/main_split360Images.cpp: Passes lightId into newly created View instances.
  • src/software/pipeline/main_lightingConversionPerCamera.cpp: New CLI tool to convert light parameters to a given view frame.
  • src/software/pipeline/main_lightingCalibration.cpp: Updates calibration CLI to new sphere/light data flow and outputs lighting JSON.
  • src/software/pipeline/CMakeLists.txt: Registers the new aliceVision_lightingConversionPerCamera executable.
  • src/aliceVision/sphereDetection/sphereDetection.cpp: Changes preprocessing path for ONNX inference (currently breaks tensor layout).
  • src/aliceVision/sfmDataIO/jsonIO.cpp: Loads lightId for views (but does not save it yet).
  • src/aliceVision/sfmData/View.hpp: Adds lightId field, ctor param, getter/setter, and equality update.
  • src/aliceVision/lightingEstimation/sphereData.hpp / .cpp: New calibration-sphere container + JSON loader.
  • src/aliceVision/lightingEstimation/lightingMinimisations.hpp / .cpp: New Ceres-based minimisation routines for lighting fitting.
  • src/aliceVision/lightingEstimation/lightingData.hpp / .cpp: New lighting model types + JSON save/load + frame conversion.
  • src/aliceVision/lightingEstimation/lightingCalibration.hpp / .cpp: Refactors calibration pipeline around new data structures and minimisers.
  • src/aliceVision/lightingEstimation/ellipseGeometry.hpp / .cpp: Adds function to sample sphere points/normals over the image.
  • src/aliceVision/lightingEstimation/CMakeLists.txt: Adds new sources/headers and links against Ceres.
  • src/aliceVision/keyframe/KeyframeSelector.cpp: Passes lightId into created views.
  • pyTests/* + pyTests/constants.py: Updates Python tests and constants to account for lightId.
Comments suppressed due to low confidence (2)

src/aliceVision/sphereDetection/sphereDetection.cpp:128

  • cv::dnn::blobFromImage() was commented out, but the input tensor is still declared as NCHW {1,3,H,W} and filled by flattening the cv::Mat memory (HWC). This will feed the model with incorrectly ordered data (and possibly wrong shape), breaking detection. Either restore blobFromImage() (and read from the returned blob), or change the tensor shape/order to match the current imageOpencv layout.
    // uint8 -> float32
    imageOpencv.convertTo(imageOpencv, CV_32FC3, 1 / 255.0);

    // HWC to CHW
    //cv::dnn::blobFromImage(imageOpencv, imageOpencv);

    // Inference on CPU
    // TODO: use GPU
    Ort::MemoryInfo memoryInfo = Ort::MemoryInfo::CreateCpu(OrtAllocatorType::OrtArenaAllocator, OrtMemType::OrtMemTypeDefault);

    // Initialize input tensor
    std::vector<int64_t> inputShape = {1, 3, imageAlice.height(), imageAlice.width()};
    const size_t inputSize = std::accumulate(begin(inputShape), end(inputShape), 1, std::multiplies<size_t>());
    std::vector<float> inputTensor(inputSize);
    inputTensor.assign(imageOpencv.begin<float>(), imageOpencv.end<float>());

src/software/pipeline/main_lightingCalibration.cpp:60

  • Several CLI parameters/variables are now unused (method, doDebug, saveAsModel, ellipticEstimation). This is confusing for users and can trigger unused-variable warnings. Either wire these options into the new calibration path, or remove them from the CLI and variable list.
    std::string inputPath;
    std::string inputDetection;
    std::string outputJSON;
    std::string method;
    std::string outLightType;
    bool doDebug;
    bool saveAsModel;
    bool ellipticEstimation;



Comment on lines 82 to 87
void loadView(sfmData::View& view, bpt::ptree& viewTree)
{
    view.setViewId(viewTree.get<IndexT>("viewId", UndefinedIndexT));
    view.setPoseId(viewTree.get<IndexT>("poseId", UndefinedIndexT));
    view.setLightId(viewTree.get<IndexT>("lightId", UndefinedIndexT));

Copilot AI Mar 19, 2026

loadView() now reads lightId, but saveView() never serializes it. This will drop View::_lightId on save/load round-trips. Please add a lightId field in saveView() (ideally only when not UndefinedIndexT, consistent with other IDs).

Comment on lines +137 to +144
Eigen::VectorXf delta = deltaFull(subIndices);
Eigen::VectorXf a = aFull(subIndices);
Eigen::VectorXf b = bFull(subIndices);
Eigen::MatrixX3f directions = directionsFull(subIndices, Eigen::placeholders::all);

Eigen::VectorXf factors = (-b - delta.cwiseSqrt()).cwiseQuotient(2.0 * a);

pixels = pixelsFull(subIndices, Eigen::placeholders::all);

Copilot AI Mar 19, 2026

This uses Eigen advanced indexing with std::vector indices and Eigen::placeholders::all (e.g. deltaFull(subIndices) and directionsFull(subIndices, ...)). The project requires Eigen 3.3, which does not support this API, so this will not compile. Please replace with explicit gathering into new Eigen vectors/matrices (or use an Eigen::VectorXi with a manual loop).

Suggested change
-    Eigen::VectorXf delta = deltaFull(subIndices);
-    Eigen::VectorXf a = aFull(subIndices);
-    Eigen::VectorXf b = bFull(subIndices);
-    Eigen::MatrixX3f directions = directionsFull(subIndices, Eigen::placeholders::all);
-    Eigen::VectorXf factors = (-b - delta.cwiseSqrt()).cwiseQuotient(2.0 * a);
-    pixels = pixelsFull(subIndices, Eigen::placeholders::all);
+    const std::size_t nb = subIndices.size();
+    Eigen::VectorXf delta(nb);
+    Eigen::VectorXf a(nb);
+    Eigen::VectorXf b(nb);
+    Eigen::MatrixX3f directions(nb, 3);
+    pixels.resize(nb, 2);
+    points.resize(nb, 3);
+    normals.resize(nb, 3);
+    for (std::size_t k = 0; k < nb; ++k)
+    {
+        const unsigned int idx = subIndices[k];
+        delta(static_cast<Eigen::Index>(k)) = deltaFull(idx);
+        a(static_cast<Eigen::Index>(k)) = aFull(idx);
+        b(static_cast<Eigen::Index>(k)) = bFull(idx);
+        directions.row(static_cast<Eigen::Index>(k)) = directionsFull.row(idx);
+        pixels.row(static_cast<Eigen::Index>(k)) = pixelsFull.row(idx);
+    }
+    Eigen::VectorXf factors = (-b - delta.cwiseSqrt()).cwiseQuotient(2.0 * a);

Comment on lines +64 to +70
    ALICEVISION_LOG_DEBUG(" - " << imagePath.string());

    std::shared_ptr<std::vector<CalibrationSphere>> sphereList = calibrationSpheres.at(viewId);
    if (!sphereList)
    {
        ALICEVISION_LOG_WARNING("No detected sphere found for '" << imagePath << "'.");
        return false;

Copilot AI Mar 19, 2026

calibrationSpheres.at(viewId) will throw if the view has no entry in the map (e.g. views without detected spheres). Since this function already handles "no sphere" cases, use find() (or count()) before accessing, to avoid an exception that aborts the whole calibration.

Comment on lines +134 to +137
normalsList.push_back(normals_cur(keptIntensities, Eigen::placeholders::all));
pixelsRGBfList.push_back(pixelsRGBf_cur(keptIntensities, Eigen::placeholders::all));
if (usePose)
    pointsList.push_back(points_cur(keptIntensities, Eigen::placeholders::all));

Copilot AI Mar 19, 2026

This uses Eigen advanced indexing (e.g. normals_cur(keptIntensities, Eigen::placeholders::all)). With Eigen 3.3 (the repo requirement), this API is unavailable and will not compile. Please replace with manual row selection into new matrices (e.g. loop over indices and copy rows).

Suggested change
-normalsList.push_back(normals_cur(keptIntensities, Eigen::placeholders::all));
-pixelsRGBfList.push_back(pixelsRGBf_cur(keptIntensities, Eigen::placeholders::all));
-if (usePose)
-    pointsList.push_back(points_cur(keptIntensities, Eigen::placeholders::all));
+// Manually select rows corresponding to keptIntensities (Eigen 3.3 compatible)
+const std::size_t nbKept = keptIntensities.size();
+Eigen::MatrixX3f normals_kept(nbKept, 3);
+Eigen::MatrixX3f pixelsRGBf_kept(nbKept, 3);
+for (std::size_t i = 0; i < nbKept; ++i)
+{
+    const unsigned int rowIdx = keptIntensities[i];
+    normals_kept.row(static_cast<Eigen::Index>(i)) = normals_cur.row(static_cast<Eigen::Index>(rowIdx));
+    pixelsRGBf_kept.row(static_cast<Eigen::Index>(i)) = pixelsRGBf_cur.row(static_cast<Eigen::Index>(rowIdx));
+}
+normalsList.push_back(normals_kept);
+pixelsRGBfList.push_back(pixelsRGBf_kept);
+if (usePose)
+{
+    Eigen::MatrixX3f points_kept(nbKept, 3);
+    for (std::size_t i = 0; i < nbKept; ++i)
+    {
+        const unsigned int rowIdx = keptIntensities[i];
+        points_kept.row(static_cast<Eigen::Index>(i)) = points_cur.row(static_cast<Eigen::Index>(rowIdx));
+    }
+    pointsList.push_back(points_kept);
+}

Comment on lines +199 to 205
+    std::shared_ptr<CalibrationData> calibrationData = std::make_shared<CalibrationData>();
 
-    image::Image<float> patch;
-    patch = imageFloat.block(minISphere, minJSphere, 2 * radius, 2 * radius);
-
-    int nbPixelsPatch = 4 * radius * radius;
-    Eigen::VectorXf imSphere(nbPixelsPatch);
-    Eigen::MatrixXf normalSphere(nbPixelsPatch, lightSize);
-
-    int currentIndex = 0;
-
-    for (size_t j = 0; j < patch.cols(); ++j)
+    if (!calibrationData->prepareView(viewIt.first, sfmData, calibrationSpheres, usePose))
     {
-        for (size_t i = 0; i < patch.rows(); ++i)
-        {
-            float distanceToCenter = sqrt((i - radius) * (i - radius) + (j - radius) * (j - radius));
-            if (distanceToCenter < 0.95 * radius && (patch(i, j) > 0.2) && (patch(i, j) < 0.8))
-            {
-                imSphere(currentIndex) = patch(i, j);
-
-                normalSphere(currentIndex, 0) = (float(j) - radius) / radius;
-                normalSphere(currentIndex, 1) = (float(i) - radius) / radius;
-                normalSphere(currentIndex, 2) = -sqrt(1 - normalSphere(currentIndex, 0) * normalSphere(currentIndex, 0) -
-                                                      normalSphere(currentIndex, 1) * normalSphere(currentIndex, 1));
-                normalSphere(currentIndex, 3) = 1;
-                if (lightSize > 4)
-                {
-                    normalSphere(currentIndex, 4) = normalSphere(currentIndex, 0) * normalSphere(currentIndex, 1);
-                    normalSphere(currentIndex, 5) = normalSphere(currentIndex, 0) * normalSphere(currentIndex, 2);
-                    normalSphere(currentIndex, 6) = normalSphere(currentIndex, 1) * normalSphere(currentIndex, 2);
-                    normalSphere(currentIndex, 7) = normalSphere(currentIndex, 0) * normalSphere(currentIndex, 0) -
-                                                    normalSphere(currentIndex, 1) * normalSphere(currentIndex, 1);
-                    normalSphere(currentIndex, 8) = 3 * normalSphere(currentIndex, 2) * normalSphere(currentIndex, 2) - 1;
-                }
-                ++currentIndex;
-            }
-        }
+        ALICEVISION_LOG_ERROR("Could not prepare view");
+        return false;
     }

Copilot AI Mar 19, 2026

prepareView() returns false for legitimate "skip" cases (e.g. filenames containing "ambient", missing sphere entries), but lightCalibration() treats any false as a fatal error and aborts the whole run. Consider distinguishing "skipped" from "error" (e.g. return an enum/status), or treat these cases as non-fatal and continue to the next view.

av_to_vis(2, 2) = -1.;
Eigen::Matrix4f RT = pose->getTransform().getHomogeneous().cast<float>() * av_to_vis;

Eigen::Vector3f newLightPosition = RT.block<3, 3>(0, 0) * lightPosition + RT.block<3,1>(3,0);

Copilot AI Mar 19, 2026

Translation extraction is incorrect: RT.block<3,1>(3,0) starts at row 3 and reads past the 4x4 matrix (Eigen will assert in debug and is UB in release). This should be the translation column RT.block<3,1>(0,3).

Suggested change
-Eigen::Vector3f newLightPosition = RT.block<3, 3>(0, 0) * lightPosition + RT.block<3,1>(3,0);
+Eigen::Vector3f newLightPosition = RT.block<3, 3>(0, 0) * lightPosition + RT.block<3,1>(0,3);

av_to_vis(2, 2) = -1.;
Eigen::Matrix4f RT = pose->getTransform().getHomogeneous().cast<float>() * av_to_vis;

Eigen::Vector3f newLightPosition = RT.block<3, 3>(0, 0) * lightPosition + RT.block<3,1>(3,0);

Copilot AI Mar 19, 2026

Same translation-block bug for LED conversion: RT.block<3,1>(3,0) is out-of-bounds/UB. Use the translation column RT.block<3,1>(0,3) when transforming positions.

Suggested change
-Eigen::Vector3f newLightPosition = RT.block<3, 3>(0, 0) * lightPosition + RT.block<3,1>(3,0);
+Eigen::Vector3f newLightPosition = RT.block<3, 3>(0, 0) * lightPosition + RT.block<3,1>(0,3);

auto& lights = fileTree.get_child("lights");
for (auto itLight = lights.get_child("").begin(); itLight != lights.get_child("").end(); itLight++)
{
    IndexT lightId = stoi(itLight->second.get("lightId", "0"));

Copilot AI Mar 19, 2026

stoi is used without std:: (IndexT lightId = stoi(...)), which will not compile. Use std::stoi/std::stoul (or ptree.get<IndexT>(...)) to parse the ID.

Suggested change
-IndexT lightId = stoi(itLight->second.get("lightId", "0"));
+IndexT lightId = itLight->second.get<IndexT>("lightId", IndexT(0));


ceres::LossFunction* loss = new ceres::HuberLoss(epsilonHuberLoss);

problem.AddResidualBlock(dynamic_cost, nullptr, params, 1);

Copilot AI Mar 19, 2026

A Huber loss is allocated but not used: loss is created, then AddResidualBlock is called with nullptr as the loss function. This disables the intended robust fitting and also leaks the loss object. Pass loss to AddResidualBlock (Ceres will take ownership).

Suggested change
-problem.AddResidualBlock(dynamic_cost, nullptr, params, 1);
+problem.AddResidualBlock(dynamic_cost, loss, params, 1);

Comment on lines +104 to +107
std::cout << "viewId" << viewId << std::endl;
auto& view = sfmData.getViews().at(viewId);
auto& pose = sfmData.getPoses().at(view->getPoseId());

Copilot AI Mar 19, 2026

sfmData.getViews().at(viewId) / getPoses().at(...) will throw on invalid/missing IDs, causing an unhandled exception and abrupt exit. Since this is a CLI tool, consider checking existence (find) and returning EXIT_FAILURE with a clear error message when the viewId/poseId is not found.

Suggested change
-std::cout << "viewId" << viewId << std::endl;
-auto& view = sfmData.getViews().at(viewId);
-auto& pose = sfmData.getPoses().at(view->getPoseId());
+std::cout << "viewId " << viewId << std::endl;
+// Safely retrieve the view corresponding to the requested viewId
+const auto& views = sfmData.getViews();
+auto itView = views.find(viewId);
+if (itView == views.end())
+{
+    ALICEVISION_LOG_ERROR("View with id " << viewId << " not found in SfM data.");
+    return EXIT_FAILURE;
+}
+auto& view = itView->second;
+// Safely retrieve the pose associated with the view
+const auto& poses = sfmData.getPoses();
+const auto poseId = view->getPoseId();
+auto itPose = poses.find(poseId);
+if (itPose == poses.end())
+{
+    ALICEVISION_LOG_ERROR("Pose with id " << poseId << " (for view " << viewId << ") not found in SfM data.");
+    return EXIT_FAILURE;
+}
+auto& pose = itPose->second;
