10 changes: 5 additions & 5 deletions meshroom/aliceVision/CameraInit.py
@@ -340,12 +340,12 @@ class CameraInit(desc.AVCommandLineNode, desc.InitNode):
 The software can support images without any metadata but it is recommended to have them for robustness.
 
 ### Metadata
-Metadata allow images to be grouped together and provide an initialization of the focal length (in pixel unit).
+Metadata allows images to be grouped together and provides an initialization of the focal length (in pixel units).
 The needed metadata are:
 * **Focal Length**: the focal length in mm.
-* **Make** & **Model**: this information allows to convert the focal in mm into a focal length in pixels using an
+* **Make** & **Model**: this information allows converting the focal in mm into a focal length in pixels using an
 embedded sensor database.
-* **Serial Number**: allows to uniquely identify a device so multiple devices with the same Make, Model can be
+* **Serial Number**: allows uniquely identifying a device so multiple devices with the same Make, Model can be
 differentiated and their internal parameters are optimized separately (in the photogrammetry case).
 """
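The mm-to-pixel conversion that the Make/Model lookup enables is the plain pinhole relation. A minimal sketch, where the sensor width is a hypothetical stand-in for the value the embedded sensor database would return:

```python
def focal_mm_to_pixels(focal_mm: float, sensor_width_mm: float, image_width_px: int) -> float:
    """Convert a focal length in mm to pixel units.

    In the real pipeline the sensor width comes from a database keyed by the
    Make/Model metadata; here it is passed in directly (hypothetical values).
    """
    return focal_mm * image_width_px / sensor_width_mm

# Hypothetical example: 35 mm lens, 36 mm-wide full-frame sensor, 6000 px image
focal_px = focal_mm_to_pixels(35.0, 36.0, 6000)  # ~5833.3 px
```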

@@ -420,7 +420,7 @@ class CameraInit(desc.AVCommandLineNode, desc.InitNode):
 desc.ChoiceParam(
 name="rawColorInterpretation",
 label="RAW Color Interpretation",
-description="Allows to choose how RAW data are color processed:\n"
+description="Allows you to choose how RAW data are color processed:\n"
 " - None: Debayering without any color processing.\n"
 " - LibRawNoWhiteBalancing: Simple neutralization.\n"
 " - LibRawWhiteBalancing: Use internal white balancing from libraw.\n"
@@ -449,7 +449,7 @@ class CameraInit(desc.AVCommandLineNode, desc.InitNode):
 desc.ChoiceParam(
 name="viewIdMethod",
 label="ViewId Method",
-description="Allows to choose the way the viewID is generated:\n"
+description="Allows you to choose the way the viewID is generated:\n"
 " - metadata : Generate viewId from image metadata.\n"
 " - filename : Generate viewId from filename using regex.",
 value="metadata",
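A minimal sketch of the `filename` method above, assuming a hypothetical default regex that captures the first digit run (the actual pattern is a node parameter):

```python
import re

def view_id_from_filename(filename: str, pattern: str = r"(\d+)") -> int:
    """Generate a viewId from a filename using a regex.

    The default pattern here is a hypothetical illustration; the real node
    exposes the regex as a user parameter.
    """
    match = re.search(pattern, filename)
    if match is None:
        raise ValueError(f"no viewId found in {filename!r}")
    return int(match.group(1))

view_id_from_filename("GOPR0042.JPG")  # -> 42
```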
4 changes: 2 additions & 2 deletions meshroom/aliceVision/ColorCheckerDetection.py
@@ -16,8 +16,8 @@ class ColorCheckerDetection(desc.AVCommandLineNode):

 Outputs:
 - the detected color charts position and colors
-- the associated transform matrix from "theoric" to "measured"
-assuming that the "theoric" Macbeth chart corners coordinates are:
+- the associated transform matrix from "theoretical" to "measured"
+assuming that the "theoretical" Macbeth chart corners coordinates are:
 (0, 0), (1675, 0), (1675, 1125), (0, 1125)
 
 Dev notes:
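The "theoretical to measured" transform mentioned above is the 3x3 perspective transform between the fixed chart corners and the detected ones. A self-contained numpy sketch of that solve (AliceVision delegates it to `cv::getPerspectiveTransform`; the `measured` corners below are made up):

```python
import numpy as np

# "Theoretical" Macbeth chart corners quoted in the docstring above
THEORETICAL_CORNERS = np.array([(0, 0), (1675, 0), (1675, 1125), (0, 1125)], dtype=float)

def perspective_transform(src, dst):
    """3x3 homography H mapping 4 src points onto 4 dst points (plain DLT
    with h33 fixed to 1, the same problem cv::getPerspectiveTransform solves)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical "measured" corners of a detected chart
measured = np.array([(10, 20), (1700, 15), (1690, 1140), (5, 1130)], dtype=float)
H = perspective_transform(THEORETICAL_CORNERS, measured)
# H maps theoretical (0, 0) onto measured (10, 20), and so on
```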
4 changes: 2 additions & 2 deletions meshroom/aliceVision/ExportMaya.py
@@ -88,10 +88,10 @@ def processChunk(self, chunk):
 chunk.logManager.end()
 raise RuntimeError()
 
-#Check that we have Only one intrinsic
+# Check that we have only one intrinsic
 intrinsics = data.getIntrinsics()
 if len(intrinsics) > 1:
-chunk.logger.error("Only project with a single intrinsic are supported")
+chunk.logger.error("Only projects with a single intrinsic are supported")
 chunk.logManager.end()
 raise RuntimeError()
 
2 changes: 1 addition & 1 deletion meshroom/aliceVision/KeyframeSelection.py
@@ -62,7 +62,7 @@ class KeyframeSelection(desc.AVCommandLineNode):

 category = "Utils"
 documentation = """
-Allows to extract keyframes from a video and insert metadata.
+Allows extracting keyframes from a video and inserting metadata.
 It can extract frames from a synchronized multi-cameras rig.
 
 You can extract frames at regular interval by configuring only the min/maxFrameStep.
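The regular-interval mode described above amounts to sampling frame indices with a fixed step. A simplified sketch (the real node bounds the step between minFrameStep and maxFrameStep, which is omitted here):

```python
def regular_keyframes(num_frames: int, step: int) -> list[int]:
    """Indices of keyframes taken every `step` frames.

    Simplified illustration of regular-interval extraction; the actual
    min/maxFrameStep clamping logic of the node is not reproduced.
    """
    return list(range(0, num_frames, step))

regular_keyframes(100, 30)  # -> [0, 30, 60, 90]
```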
2 changes: 1 addition & 1 deletion src/aliceVision/feature/PointFeature.hpp
@@ -54,7 +54,7 @@ class PointFeature
 Vec2f getOrientationVector() const { return Vec2f(std::cos(orientation()), std::sin(orientation())); }
 
 /**
-* @brief Return the orientation of the feature as a vector scaled to the the scale of the feature.
+* @brief Return the orientation of the feature as a vector scaled to the scale of the feature.
 * @return a vector corresponding to the orientation of the feature scaled at the scale of the feature.
 */
 Vec2f getScaledOrientationVector() const { return scale() * getOrientationVector(); }
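What the corrected comment describes fits in a few lines; a dependency-free sketch mirroring `getOrientationVector` / `getScaledOrientationVector`:

```python
import math

def orientation_vector(orientation: float, scale: float = 1.0) -> tuple[float, float]:
    """Direction (cos, sin) of a feature orientation angle, optionally
    multiplied by the feature scale, mirroring the two accessors above."""
    return (scale * math.cos(orientation), scale * math.sin(orientation))

vx, vy = orientation_vector(math.pi / 2, scale=3.0)  # vx ~ 0.0, vy == 3.0
```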
2 changes: 1 addition & 1 deletion src/aliceVision/matching/guidedMatching.hpp
@@ -92,7 +92,7 @@ struct distanceRatio
 */
 inline bool update(std::size_t index, DistT dist)
 {
-if (dist < bd) // best than any previous
+if (dist < bd) // better than any previous
 {
 idx = index;
 // update and swap
2 changes: 1 addition & 1 deletion src/aliceVision/multiview/triangulation/Triangulation.hpp
@@ -85,7 +85,7 @@ void TriangulateNViewAlgebraicSpherical(const std::vector<Vec3> &xs,
 * @brief Compute a 3D position of a point from several images of it. In particular,
 * compute the projective point X in R^4 such that x ~ PX.
 * Algorithm is Lo-RANSAC
-* It can return the the list of the cameras set as intlier by the Lo-RANSAC algorithm.
+* It can return the list of the cameras set as inlier by the Lo-RANSAC algorithm.
 *
 * @param[in] x are 2D coordinates (x,y,1) in each image
 * @param[in] Ps is the list of projective matrices for each camera
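The x ~ PX relation in the docstring can be solved algebraically by stacking cross-product constraints and taking the smallest singular vector. A numpy sketch of that DLT step only (the Lo-RANSAC inlier selection is omitted, and the two cameras below are made up):

```python
import numpy as np

def triangulate_nview(xs, Ps):
    """Algebraic N-view triangulation: find X in R^4 with x_i ~ P_i X.

    Stacks the constraints u*P[2]-w*P[0] and v*P[2]-w*P[1] for each view
    and returns the smallest right singular vector. Plain DLT only; the
    robust Lo-RANSAC loop described above is not reproduced.
    """
    A = []
    for (u, v, w), P in zip(xs, Ps):
        A.append(u * P[2] - w * P[0])
        A.append(v * P[2] - w * P[1])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1]  # homogeneous 4-vector X

# Two hypothetical cameras observing the 3D point (1, 2, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # [I | 0]
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated camera
X_true = np.array([1.0, 2.0, 5.0, 1.0])
xs = [P1 @ X_true, P2 @ X_true]  # noise-free 2D observations (x, y, 1)-style
X = triangulate_nview(xs, [P1, P2])
X = X / X[3]  # X ~ (1, 2, 5, 1)
```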
4 changes: 2 additions & 2 deletions src/aliceVision/mvsUtils/mapIO.cpp
@@ -606,13 +606,13 @@ unsigned long getNbDepthValuesFromDepthMap(int rc, const MultiViewParams& mp, in
 bool fromTiles = false;
 
 // get nbDepthValues from metadata
-if (utils::exists(depthMapPath)) // untilled
+if (utils::exists(depthMapPath)) // untiled
 {
 fileExists = true;
 const oiio::ParamValueList metadata = image::readImageMetadata(depthMapPath);
 nbDepthValues = metadata.get_int("AliceVision:nbDepthValues", -1);
 }
-else // tilled
+else // tiled
 {
 std::vector<std::string> mapTilePathList;
 getTilePathList(rc, mp, EFileType::depthMapFiltered, customSuffix, mapTilePathList);
@@ -19,7 +19,7 @@ namespace sfm {
 * @param tracksMap the input map of tracks
 * @param tracksPerView tracks grouped by views
 * @param viewId the view of interest identifier
-* @return false if an error occured
+* @return false if an error occurred
 */
 bool buildSfmDataFromDepthMap(sfmData::SfMData & output,
 const sfmData::SfMData & sfmData,
8 changes: 4 additions & 4 deletions src/aliceVision/track/tracksUtils.cpp
@@ -233,16 +233,16 @@ void tracksToIndexedMatches(const TracksMap& tracks, const std::vector<IndexT>&
 }
 }
 
-void tracksLength(const TracksMap& tracks, std::map<std::size_t, std::size_t>& occurenceTrackLength)
+void tracksLength(const TracksMap& tracks, std::map<std::size_t, std::size_t>& occurrenceTrackLength)
 {
 for (TracksMap::const_iterator iterT = tracks.begin(); iterT != tracks.end(); ++iterT)
 {
 const std::size_t trLength = iterT->second.featPerView.size();
 
-if (occurenceTrackLength.end() == occurenceTrackLength.find(trLength))
-occurenceTrackLength[trLength] = 1;
+if (occurrenceTrackLength.end() == occurrenceTrackLength.find(trLength))
+occurrenceTrackLength[trLength] = 1;
 else
-occurenceTrackLength[trLength] += 1;
+occurrenceTrackLength[trLength] += 1;
 }
 }
 
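The renamed function above is a histogram of track lengths; in Python the same find-then-insert-or-increment logic collapses to a `collections.Counter` (the track data below is hypothetical):

```python
from collections import Counter

def tracks_length(tracks: dict) -> Counter:
    """Occurrence count of each track length, mirroring tracksLength above.

    `tracks` maps trackId -> list of (viewId, featureId) observations,
    a simplified stand-in for the C++ TracksMap.
    """
    return Counter(len(observations) for observations in tracks.values())

tracks = {0: [(0, 5), (1, 9)], 1: [(0, 2), (1, 3), (2, 7)], 2: [(2, 1), (3, 4)]}
tracks_length(tracks)  # -> Counter({2: 2, 3: 1})
```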
4 changes: 2 additions & 2 deletions src/aliceVision/track/tracksUtils.hpp
@@ -124,9 +124,9 @@ void tracksToIndexedMatches(const TracksMap& tracks, const std::vector<IndexT>&
 /**
 * @brief Return the occurrence of tracks length.
 * @param[in] tracks all tracks of the scene as a map {trackId, track}
-* @param[out] occurenceTrackLength : the occurrence length of each trackId in the scene
+* @param[out] occurrenceTrackLength : the number of tracks for each track length in the scene
 */
-void tracksLength(const TracksMap& tracks, std::map<std::size_t, std::size_t>& occurenceTrackLength);
+void tracksLength(const TracksMap& tracks, std::map<std::size_t, std::size_t>& occurrenceTrackLength);
 
 /**
 * @brief Return a set containing the image Id considered in the tracks container.
2 changes: 1 addition & 1 deletion src/software/pipeline/main_depthMapEstimation.cpp
@@ -220,7 +220,7 @@ int aliceVision_main(int argc, char* argv[])
 // clang-format on
 
 CmdLine cmdline("Dense Reconstruction.\n"
-"This program estimate a depth map for each input calibrated camera using Plane Sweeping, a multi-view stereo algorithm notable "
+"This program estimates a depth map for each input calibrated camera using Plane Sweeping, a multi-view stereo algorithm notable "
 "for its efficiency on modern graphics hardware (GPU).\n"
 "AliceVision depthMapEstimation");
 cmdline.add(requiredParams);
4 changes: 2 additions & 2 deletions src/software/utils/main_colorCheckerDetection.cpp
@@ -164,7 +164,7 @@ void drawSVG(const cv::Ptr<cv::mcc::CChecker>& checker, const std::string& outpu
 // Push back the quad representing the color checker
 quadsToDraw.push_back(QuadSVG(checker->getBox()));
 
-// Transform matrix from 'theoric' to 'measured'
+// Transform matrix from 'theoretical' to 'measured'
 cv::Matx33f tMatrix = cv::getPerspectiveTransform(MACBETH_CCHART_CORNERS_POS, checker->getBox());
 
 // Push back quads representing color checker cells
@@ -306,7 +306,7 @@ struct MacbethCCheckerQuad : Quad
 _imgOpt(imgOpt),
 _cellMasks(std::vector<cv::Mat>(24))
 {
-// Transform matrix from 'theoric' to 'measured'
+// Transform matrix from 'theoretical' to 'measured'
 _transformMat = cv::getPerspectiveTransform(MACBETH_CCHART_CORNERS_POS, _cchecker->getBox());
 
 // Create an image boolean mask for each cchecker cell