
Commit 4cd680d

Update dev documentation
1 parent 543e744 commit 4cd680d

42 files changed (+10120 additions, -6076 deletions)

_sources/cameras.rst.txt

Lines changed: 10 additions & 1 deletion
@@ -24,6 +24,15 @@ model that is complex enough to model the distortion effects:
   for fisheye lenses and note that all other models are not really capable of
   modeling the distortion effects of fisheye lenses. The ``FOV`` model is used by
   Google Project Tango (make sure to not initialize ``omega`` to zero).
+- ``SIMPLE_FISHEYE``, ``FISHEYE``: Use these camera models for fisheye
+  lenses with equidistant projection where distortion can be ignored
+  or has been pre-corrected. These models use the equidistant projection
+  (theta = atan(r)) without any distortion parameters. ``SIMPLE_FISHEYE``
+  has a single focal length (f), while ``FISHEYE`` has two (fx, fy).
+- ``SIMPLE_DIVISION``, ``DIVISION``: Use these camera models, if you know the
+  calibration parameters a priori. Similar to ``SIMPLE_RADIAL`` and ``RADIAL``
+  models, they can model simple radial distortion effects. The two models
+  have first-order local equivalence for small distortions.
 
 You can inspect the estimated intrinsic parameters by double-clicking specific
 images in the model viewer or by exporting the model and opening the
@@ -44,4 +53,4 @@ fix the intrinsic parameters during the reconstruction
 
 Please, refer to the camera models header file for information on the parameters
 of the different camera models:
-https://github.com/colmap/colmap/blob/main/src/colmap/sensor/models.h
+https://github.com/colmap/colmap/blob/main/src/colmap/sensor/models.h
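For reference (an illustrative sketch, not part of the diff above): forcing one of the
new fisheye models at feature extraction time via the standard ``--ImageReader.*``
options; the intrinsics are placeholders and the ``SIMPLE_FISHEYE`` parameter order is
assumed to be ``f, cx, cy``::

    # Assumed parameter order for SIMPLE_FISHEYE: f, cx, cy (placeholder values).
    colmap feature_extractor \
        --database_path path/to/database.db \
        --image_path path/to/images \
        --ImageReader.single_camera 1 \
        --ImageReader.camera_model SIMPLE_FISHEYE \
        --ImageReader.camera_params "500,320,240"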

_sources/faq.rst.txt

Lines changed: 63 additions & 2 deletions
@@ -175,7 +175,10 @@ Alternatively, you can also produce a dense model without a sparse model as::
 Since the sparse point cloud is used to automatically select neighboring images
 during the dense stereo stage, you have to manually specify the source images,
 as described :ref:`here <faq-dense-manual-source>`. The dense stereo stage
-now also requires a manual specification of the depth range::
+now also requires a manual specification of the depth range.
+
+Finally, in this case, fusion will fail to match points if ``min_num_pixels`` is
+left at its default value (greater than 1), so also set that parameter, as below::
 
     colmap patch_match_stereo \
         --workspace_path path/to/dense/workspace \
@@ -184,6 +187,7 @@ now also requires a manual specification of the depth range::
 
     colmap stereo_fusion \
         --workspace_path path/to/dense/workspace \
+        --StereoFusion.min_num_pixels 1 \
        --output_path path/to/dense/workspace/fused.ply
 
 
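For reference (an illustrative sketch, not part of the diff): the same two commands with
the depth range spelled out, assuming the ``--PatchMatchStereo.depth_min`` /
``--PatchMatchStereo.depth_max`` options; the depth values are placeholders::

    colmap patch_match_stereo \
        --workspace_path path/to/dense/workspace \
        --PatchMatchStereo.depth_min 0.1 \
        --PatchMatchStereo.depth_max 100.0

    colmap stereo_fusion \
        --workspace_path path/to/dense/workspace \
        --StereoFusion.min_num_pixels 1 \
        --output_path path/to/dense/workspace/fused.ply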
@@ -371,7 +375,7 @@ If you encounter the following error message::
 or the following:
 
     ERROR: Feature matching failed. This probably caused by insufficient GPU
-    memory. Consider reducing the maximum number of features.
+    memory. Consider reducing the maximum number of features.
 
 during feature matching, your GPU runs out of memory. Try decreasing the option
 ``--FeatureMatching.max_num_matches`` until the error disappears. Note that this
@@ -387,6 +391,63 @@ required GPU memory will be around 400MB, which are only allocated if one of
 your images actually has that many features.
 
 
+Speeding up bundle adjustment
+-----------------------------
+
+The following describes practical ways to reduce bundle adjustment runtime.
+
+- **Reduce the problem size**
+
+  Limit the number of correspondences so that BA solves a smaller problem:
+
+  - Reduce features by decreasing ``--SiftExtraction.max_image_size`` and/or
+    ``--SiftExtraction.max_num_features``.
+  - Reduce matching pairs (and avoid ``exhaustive_matcher`` when possible) by
+    decreasing ``--SequentialMatching.overlap``,
+    ``--SpatialMatching.max_num_neighbors``, or ``--VocabTreeMatching.num_images``.
+  - Reduce matches by decreasing ``--FeatureMatching.max_num_matches``.
+  - Enable experimental landmark pruning to drop redundant 3D points using
+    ``--Mapper.ba_global_ignore_redundant_points3D 1``.
+
+- **Utilize GPU acceleration**
+
+  Enable GPU-based Ceres solvers for bundle adjustment by setting
+  ``--Mapper.ba_use_gpu 1`` for the ``mapper`` and ``--BundleAdjustmentCeres.use_gpu 1``
+  for the standalone ``bundle_adjuster``. Several parameters control when and which
+  GPU solver is used:
+
+  - The GPU solver is activated only when the number of images exceeds
+    ``--BundleAdjustmentCeres.min_num_images_gpu_solver``.
+  - Select between the direct dense, direct sparse, and iterative sparse GPU solvers
+    using ``--BundleAdjustmentCeres.max_num_images_direct_dense_gpu_solver`` and
+    ``--BundleAdjustmentCeres.max_num_images_direct_sparse_gpu_solver``.
+
+  .. Attention:: COLMAP's official CUDA-enabled binaries are not distributed with
+     ceres[cuda] until Ceres 2.3 is officially released. To use the GPU solvers, you
+     must compile Ceres with CUDA/cuDSS support and link that build to COLMAP.
+
+  **Note:** Low GPU utilization for the Schur-based sparse solver (cuDSS) can occur
+  when the Schur-complement matrix becomes less sparse (i.e., exhibits more fill-in).
+  Typical causes include:
+
+  - High image covisibility.
+  - Shared camera intrinsics.
+
+- **Additional practical tips**
+
+  - Improve initial conditions by tuning observation-filtering parameters so that BA
+    receives more inliers and fewer outliers, or by supplying accurate priors
+    (e.g., intrinsics, poses).
+  - Fix or restrict refinement of parameters when possible (e.g., hold intrinsics
+    fixed if they are known) to reduce the number of optimized variables.
+  - Reduce LM iterations or relax convergence tolerances to trade a small amount of
+    accuracy for runtime: ``--Mapper.ba_global_max_num_iterations``,
+    ``--Mapper.ba_global_function_tolerance``.
+  - Reduce the frequency of expensive global BA passes with the mapper options
+    ``--Mapper.ba_global_frames_freq``, ``--Mapper.ba_global_points_freq``,
+    ``--Mapper.ba_global_frames_ratio``, and ``--Mapper.ba_global_points_ratio``.
+
+
 Trading off completeness and accuracy in dense reconstruction
 -------------------------------------------------------------
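For readers tuning these options (an illustrative sketch, not part of the commit): the
flag names below come from the section above, while the numeric values and paths are
placeholders that need to be adapted to the project::

    # Smaller global BA budget plus GPU-backed Ceres solver (illustrative values).
    colmap mapper \
        --database_path path/to/database.db \
        --image_path path/to/images \
        --output_path path/to/sparse \
        --Mapper.ba_use_gpu 1 \
        --Mapper.ba_global_max_num_iterations 30 \
        --Mapper.ba_global_function_tolerance 1e-5

    # Standalone refinement with the GPU solver enabled.
    colmap bundle_adjuster \
        --input_path path/to/sparse/0 \
        --output_path path/to/sparse/0_ba \
        --BundleAdjustmentCeres.use_gpu 1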

_sources/index.rst.txt

Lines changed: 39 additions & 25 deletions
@@ -21,31 +21,7 @@ About
 COLMAP is a general-purpose Structure-from-Motion (SfM) and Multi-View Stereo
 (MVS) pipeline with a graphical and command-line interface. It offers a wide
 range of features for reconstruction of ordered and unordered image collections.
-The software is licensed under the new BSD license. If you use this project for
-your research, please cite::
-
-    @inproceedings{schoenberger2016sfm,
-        author={Sch\"{o}nberger, Johannes Lutz and Frahm, Jan-Michael},
-        title={Structure-from-Motion Revisited},
-        booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
-        year={2016},
-    }
-
-    @inproceedings{schoenberger2016mvs,
-        author={Sch\"{o}nberger, Johannes Lutz and Zheng, Enliang and Pollefeys, Marc and Frahm, Jan-Michael},
-        title={Pixelwise View Selection for Unstructured Multi-View Stereo},
-        booktitle={European Conference on Computer Vision (ECCV)},
-        year={2016},
-    }
-
-If you use the image retrieval / vocabulary tree engine, please also cite::
-
-    @inproceedings{schoenberger2016vote,
-        author={Sch\"{o}nberger, Johannes Lutz and Price, True and Sattler, Torsten and Frahm, Jan-Michael and Pollefeys, Marc},
-        title={A Vote-and-Verify Strategy for Fast Spatial Verification in Image Retrieval},
-        booktitle={Asian Conference on Computer Vision (ACCV)},
-        year={2016},
-    }
+The software is licensed under the new BSD license.
 
 The latest source code is available at `GitHub
 <https://github.com/colmap/colmap>`_. COLMAP builds on top of existing works and
@@ -79,6 +55,44 @@ for questions and the `GitHub issue tracker <https://github.com/colmap/colmap>`_
 for bug reports, feature requests/additions, etc.
 
 
+Citation
+--------
+
+If you use this project for your research, please cite::
+
+    @inproceedings{schoenberger2016sfm,
+        author={Sch\"{o}nberger, Johannes Lutz and Frahm, Jan-Michael},
+        title={Structure-from-Motion Revisited},
+        booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
+        year={2016},
+    }
+
+    @inproceedings{schoenberger2016mvs,
+        author={Sch\"{o}nberger, Johannes Lutz and Zheng, Enliang and Pollefeys, Marc and Frahm, Jan-Michael},
+        title={Pixelwise View Selection for Unstructured Multi-View Stereo},
+        booktitle={European Conference on Computer Vision (ECCV)},
+        year={2016},
+    }
+
+If you use the global SfM pipeline (GLOMAP), please cite::
+
+    @inproceedings{pan2024glomap,
+        author={Pan, Linfei and Barath, Daniel and Pollefeys, Marc and Sch\"{o}nberger, Johannes Lutz},
+        title={{Global Structure-from-Motion Revisited}},
+        booktitle={European Conference on Computer Vision (ECCV)},
+        year={2024},
+    }
+
+If you use the image retrieval / vocabulary tree engine, please cite::
+
+    @inproceedings{schoenberger2016vote,
+        author={Sch\"{o}nberger, Johannes Lutz and Price, True and Sattler, Torsten and Frahm, Jan-Michael and Pollefeys, Marc},
+        title={A Vote-and-Verify Strategy for Fast Spatial Verification in Image Retrieval},
+        booktitle={Asian Conference on Computer Vision (ACCV)},
+        year={2016},
+    }
+
+
 Acknowledgments
 ---------------
 

_sources/install.rst.txt

Lines changed: 27 additions & 12 deletions
@@ -81,7 +81,8 @@ Dependencies from the default Ubuntu repositories::
         libboost-graph-dev \
         libboost-system-dev \
         libeigen3-dev \
-        libfreeimage-dev \
+        libopenimageio-dev \
+        openimageio-tools \
         libmetis-dev \
         libgoogle-glog-dev \
         libgtest-dev \
@@ -93,9 +94,14 @@ Dependencies from the default Ubuntu repositories::
         libqt6openglwidgets6 \
         libcgal-dev \
         libceres-dev \
+        libsuitesparse-dev \
         libcurl4-openssl-dev \
         libssl-dev \
         libmkl-full-dev
+    # Fix issue in Ubuntu's openimageio CMake config.
+    # We don't depend on any of openimageio's OpenCV functionality,
+    # but it still requires the OpenCV include directory to exist.
+    sudo mkdir -p /usr/include/opencv4
 
 Alternatively, you can also build against Qt 5 instead of Qt 6 using::
 
@@ -151,13 +157,14 @@ Dependencies from `Homebrew <http://brew.sh/>`__::
         ninja \
         boost \
         eigen \
-        freeimage \
+        openimageio \
         curl \
         libomp \
         metis \
         glog \
         googletest \
         ceres-solver \
+        suitesparse \
         qt \
         glew \
         cgal \
@@ -170,7 +177,7 @@ Configure and compile COLMAP::
     cd colmap
     mkdir build
     cd build
-    cmake -GNinja
+    cmake .. -GNinja
     ninja
     sudo ninja install
 
@@ -259,6 +266,7 @@ Install miniconda and run the following commands::
         glog \
         gtest \
         ceres-solver \
+        suitesparse \
         qt \
         glew \
         sqlite \
@@ -359,15 +367,22 @@ meaningful traces for reported issues.
 Documentation
 -------------
 
-You need Python and Sphinx to build the HTML documentation::
+1. Install the latest pycolmap for up-to-date pycolmap API documentation.
+2. Build the documentation::
+
+    cd path/to/colmap/doc
+    pip install -r requirements.txt
+    make html
+    open _build/html/index.html  # preview results
 
-    cd path/to/colmap/doc
-    sudo apt-get install python
-    pip install sphinx
-    make html
-    open _build/html/index.html
+Alternatively, you can build the documentation as PDF, EPUB, etc.::
 
-Alternatively, you can build the documentation as PDF, EPUB, etc.::
+    make latexpdf
+    open _build/pdf/COLMAP.pdf
 
-    make latexpdf
-    open _build/pdf/COLMAP.pdf
+3. Clone the website repository `colmap/colmap.github.io <https://github.com/colmap/colmap.github.io>`__.
+4. Copy the contents of the generated ``_build/html`` directory to the cloned repository root.
+5. Create a pull request to the `colmap/colmap.github.io <https://github.com/colmap/colmap.github.io>`__
+   repository with the updated files.
+6. (Optional, if main release) Copy the previous release into the ``legacy`` folder, under a
+   sub-folder named after the release number (`see here <https://github.com/colmap/colmap.github.io/tree/master/legacy>`__).

_sources/pycolmap/index.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ To build PyCOLMAP from source, follow these steps:
 * On Windows, after installing COLMAP via VCPKG, run in powershell::
 
     python -m pip install . `
-        --cmake.define.CMAKE_TOOLCHAIN_FILE="$VCPKG_INSTALLATION_ROOT/scripts/buildsystems/vcpkg.cmake" `
+        --cmake.define.CMAKE_TOOLCHAIN_FILE="$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake" `
         --cmake.define.VCPKG_TARGET_TRIPLET="x64-windows"
 
 Some features, such as cost functions, require that `PyCeres

_sources/tutorial.rst.txt

Lines changed: 7 additions & 8 deletions
@@ -163,14 +163,13 @@ Data Structure
 
 COLMAP assumes that all input images are in one input directory with potentially
 nested sub-directories. It recursively considers all images stored in this
-directory, and it supports various different image formats (see `FreeImage
-<http://freeimage.sourceforge.net/documentation.html>`_). Other files are
-automatically ignored. If high performance is a requirement, then you should
-separate any files that are not images. Images are identified uniquely by their
-relative file path. For later processing, such as image undistortion or dense
-reconstruction, the relative folder structure should be preserved. COLMAP does
-not modify the input images or directory and all extracted data is stored in a
-single, self-contained SQLite database file (see :doc:`database`).
+directory, and it supports various image formats via OpenImageIO. Other
+files are automatically ignored. If high performance is a requirement, then you
+should separate any files that are not images. Images are identified uniquely by
+their relative file path. For later processing, such as image undistortion or
+dense reconstruction, the relative folder structure should be preserved. COLMAP
+does not modify the input images or directory and all extracted data is stored
+in a single, self-contained SQLite database file (see :doc:`database`).
 
 The first step is to start the graphical user interface of COLMAP by running the
 pre-built binaries (Windows: ``COLMAP.bat``, Mac: ``COLMAP.app``) or by executing
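To illustrate the paragraph above (a hypothetical layout, not from the commit), images
are identified by their relative paths, such as ``north/img_0001.jpg``::

    path/to/images/
    ├── north/
    │   ├── img_0001.jpg
    │   └── img_0002.jpg
    ├── south/
    │   └── img_0001.jpg
    └── notes.txt   # not an image: ignored, but better stored elsewhere for performance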
