docs/tutorial/ReconstructionSystem/make_fragments.rst
2 additions & 2 deletions
@@ -8,7 +8,7 @@ The first step of the scene reconstruction system is to create fragments from sh
Input arguments
``````````````````````````````````````
-The script runs with ``python run_system.py [config] --make``. In ``[config]``, ``["path_dataset"]`` should have subfolders *image* and *depth* to store the color images and depth images respectively. We assume the color images and the depth images are synchronized and registered. In ``[config]``, the optional argument ``["path_intrinsic"]`` specifies path to a json file that stores the camera intrinsic matrix (See :ref:`reading_camera_intrinsic` for details). If it is not given, the PrimeSense factory setting is used instead.
+The script runs with ``python run_system.py [config] --make``. In ``[config]``, ``["path_dataset"]`` should have subfolders ``image`` and ``depth`` to store the color images and depth images respectively. We assume the color images and the depth images are synchronized and registered. In ``[config]``, the optional argument ``["path_intrinsic"]`` specifies the path to a json file that stores the camera intrinsic matrix (See :ref:`reading_camera_intrinsic` for details). If it is not given, the PrimeSense factory setting is used instead.
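As an illustration, a minimal ``[config]`` file along these lines could look as follows (the paths are hypothetical placeholders, and a real config carries additional parameters beyond the two keys discussed here):

```json
{
    "path_dataset": "dataset/016",
    "path_intrinsic": "dataset/016/intrinsic.json"
}
```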
.. _make_fragments_register_rgbd_image_pairs:
@@ -21,7 +21,7 @@ Register RGBD image pairs
:lines: 5,35-60
:linenos:
-The function reads a pair of RGBD images and registers the ``source_rgbd_image`` to the ``target_rgbd_image``. Open3D function ``compute_rgbd_odometry`` is called to align the RGBD images. For adjacent RGBD images, an identity matrix is used as initialization. For non-adjacent RGBD images, wide baseline matching is used as an initialization. In particular, function ``pose_estimation`` computes OpenCV ORB feature to match sparse features over wide baseline images, then performs 5-point RANSAC to estimate a rough alignment. It is used as the initialization of ``compute_rgbd_odometry``.
+The function reads a pair of RGBD images and registers the ``source_rgbd_image`` to the ``target_rgbd_image``. Open3D function ``compute_rgbd_odometry`` is called to align the RGBD images. For adjacent RGBD images, an identity matrix is used as the initialization. For non-adjacent RGBD images, wide baseline matching is used as the initialization. In particular, the function ``pose_estimation`` computes OpenCV ORB features to match sparse features over wide baseline images, then performs 5-point RANSAC to estimate a rough alignment, which is used as the initialization of ``compute_rgbd_odometry``.
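The branching between adjacent and non-adjacent pairs can be sketched as follows. This is a pure-Python sketch of the control flow, not the script itself: ``orb_pose_estimation`` is a hypothetical stand-in for the ORB + 5-point RANSAC step, and the returned matrix is what would be passed to ``compute_rgbd_odometry`` as its initialization.

```python
import numpy as np

def orb_pose_estimation(source_rgbd_image, target_rgbd_image):
    # Hypothetical stand-in for the OpenCV ORB + 5-point RANSAC step:
    # match sparse features over the wide baseline and return a rough
    # 4x4 alignment. This placeholder simply reports failure with an
    # identity guess.
    return False, np.identity(4)

def choose_odometry_init(s, t, source_rgbd_image, target_rgbd_image):
    """Pick the initialization passed to compute_rgbd_odometry for
    the pair of frames (s, t)."""
    if t == s + 1:
        # Adjacent RGBD images: identity matrix as initialization.
        return np.identity(4)
    # Non-adjacent RGBD images: wide baseline matching as initialization.
    success, rough_transform = orb_pose_estimation(
        source_rgbd_image, target_rgbd_image)
    return rough_transform if success else np.identity(4)
```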
docs/tutorial/ReconstructionSystem/refine_registration.rst
3 additions & 3 deletions
@@ -6,7 +6,7 @@ Refine registration
Input arguments
``````````````````````````````````````
-This script runs with ``python run_system.py [config] --refine``. In ``[config]``, ``["path_dataset"]`` should have subfolders *fragments* which stores fragments in .ply files and a pose graph in a .json file.
+This script runs with ``python run_system.py [config] --refine``. In ``[config]``, ``["path_dataset"]`` should have a subfolder ``fragments``, which stores fragments in ``.ply`` files and a pose graph in a ``.json`` file.
The main function runs ``local_refinement`` and ``optimize_posegraph_for_scene``. The first function performs pairwise registration on the pairs detected by :ref:`reconstruction_system_register_fragments`. The second function performs multiway registration.
@@ -20,7 +20,7 @@ Fine-grained registration
:lines: 5,39-92
:linenos:
-Two options are given for the fine-grained registration. The ``color`` is recommended since it uses color information to prevent drift. Details see [Park2017]_.
+Two options are given for the fine-grained registration. The ``color`` option is recommended since it uses color information to prevent drift. See [Park2017]_ for details.
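In the reconstruction system's config this switch is a plain string option; assuming the key name ``icp_method`` used by the template config (verify the exact key against your own config file), it would look like:

```json
{
    "icp_method": "color"
}
```

with ``"point_to_plane"`` as the alternative value.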
Multiway registration
@@ -32,7 +32,7 @@ Multiway registration
:lines: 5,17-36
:linenos:
-This script uses the technique demonstrated in :ref:`multiway_registration`. Function ``update_posegrph_for_refined_scene`` builds a pose graph for multiway registration of all fragments. Each graph node represents a fragments and its pose which transforms the geometry to the global space.
+This script uses the technique demonstrated in :ref:`multiway_registration`. Function ``update_posegrph_for_refined_scene`` builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the geometry to the global space.
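Conceptually, a node's pose is a 4x4 rigid transform applied to the fragment's geometry. A minimal numpy sketch of that mapping (not the Open3D pose graph API; the function name and pose value are illustrative):

```python
import numpy as np

def transform_to_global(points, node_pose):
    """Apply a pose graph node's 4x4 pose to fragment points (N x 3),
    mapping them from the fragment's local frame to the global space."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (node_pose @ homogeneous.T).T[:, :3]

# Example node pose: a pure translation by (1, 2, 3).
pose = np.identity(4)
pose[:3, 3] = [1.0, 2.0, 3.0]
local = np.array([[0.0, 0.0, 0.0]])
```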
Once a pose graph is built, function ``optimize_posegraph_for_scene`` is called for multiway registration.
docs/tutorial/ReconstructionSystem/register_fragments.rst
4 additions & 6 deletions
@@ -8,7 +8,7 @@ Once the fragments of the scene are created, the next step is to align them in a
Input arguments
``````````````````````````````````````
-This script runs with ``python run_system.py [config] --register``. In ``[config]``, ``["path_dataset"]`` should have subfolders *fragments* which stores fragments in .ply files and a pose graph in a .json file.
+This script runs with ``python run_system.py [config] --register``. In ``[config]``, ``["path_dataset"]`` should have a subfolder ``fragments``, which stores fragments in ``.ply`` files and a pose graph in a ``.json`` file.
The main function runs ``make_posegraph_for_scene`` and ``optimize_posegraph_for_scene``. The first function performs pairwise registration. The second function performs multiway registration.
@@ -62,7 +62,7 @@ Multiway registration
:lines: 5,85-104
:linenos:
-This script uses the technique demonstrated in :ref:`multiway_registration`. Function ``update_posegrph_for_scene`` builds a pose graph for multiway registration of all fragments. Each graph node represents a fragments and its pose which transforms the geometry to the global space.
+This script uses the technique demonstrated in :ref:`multiway_registration`. Function ``update_posegrph_for_scene`` builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the geometry to the global space.
Once a pose graph is built, function ``optimize_posegraph_for_scene`` is called for multiway registration.
@@ -75,16 +75,14 @@ Once a pose graph is built, function ``optimize_posegraph_for_scene`` is called
Main registration loop
``````````````````````````````````````
-The function ``make_posegraph_for_scene`` below calls all the functions introduced above.
+The function ``make_posegraph_for_scene`` below calls all the functions introduced above. The main workflow is: pairwise global registration -> multiway registration.
-The main workflow is: pairwise global registration -> multiway registration.
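That loop structure can be sketched as follows. This is a pure-Python sketch of the control flow (``register_point_cloud_pair`` stands in for the pairwise global registration described above, and the edge classification follows the odometry vs. loop-closure convention of :ref:`multiway_registration`):

```python
def make_pose_graph_edges(n_fragments, register_point_cloud_pair):
    """Enumerate fragment pairs and classify the resulting pose graph
    edges. Adjacent fragments yield odometry edges; all other pairs
    are loop-closure candidates, which multiway registration may later
    prune as false positives."""
    edges = []
    for s in range(n_fragments):
        for t in range(s + 1, n_fragments):
            success, transformation = register_point_cloud_pair(s, t)
            if not success:
                continue
            edges.append({
                "source": s,
                "target": t,
                "transformation": transformation,
                # Odometry edges (adjacent fragments) are trusted;
                # loop closures are marked uncertain so that pose
                # graph optimization can reject false positives.
                "uncertain": t != s + 1,
            })
    return edges
```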
Results
``````````````````````````````````````
@@ -112,4 +110,4 @@ The following is messages from pose graph optimization.
CompensateReferencePoseGraphNode : reference : 0
-There are 14 fragments and 52 valid matching pairs between fragments. After 23 iteration, 11 edges are detected to be false positive. After they are pruned, pose graph optimization runs again to achieve tight alignment.
+There are 14 fragments and 52 valid matching pairs among the fragments. After 23 iterations, 11 edges are detected as false positives. After they are pruned, pose graph optimization runs again to achieve tight alignment.
docs/tutorial/ReconstructionSystem/system_overview.rst
5 additions & 5 deletions
@@ -3,7 +3,7 @@
System overview
-----------------------------------
-The system has three main steps:
+The system has four main steps:
**Step 1**. :ref:`reconstruction_system_make_fragments`: build local geometric surfaces (referred to as
fragments) from short subsequences of the input RGBD sequence. This part uses :ref:`rgbd_odometry`, :ref:`multiway_registration`, and :ref:`rgbd_integration`.
@@ -12,7 +12,7 @@ fragments) from short subsequences of the input RGBD sequence. This part uses :r
**Step 3**. :ref:`reconstruction_system_refine_registration`: the rough alignments are aligned more tightly. This part uses :ref:`icp_registration`, and :ref:`multiway_registration`.
-**Step 3**. :ref:`reconstruction_system_integrate_scene`: integrate RGB-D images to generate a mesh model for
+**Step 4**. :ref:`reconstruction_system_integrate_scene`: integrate RGB-D images to generate a mesh model for
the scene. This part uses :ref:`rgbd_integration`.
.. _reconstruction_system_dataset:
@@ -22,14 +22,14 @@ Example dataset
We use `the SceneNN dataset <http://people.sutd.edu.sg/~saikit/projects/sceneNN/>`_ to demonstrate the system in this tutorial. Alternatively, there are lots of excellent RGBD datasets such as `Redwood data <http://redwood-data.org/>`_, `TUM RGBD data <https://vision.in.tum.de/data/datasets/rgbd-dataset>`_, `ICL-NUIM data <https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html>`_, and `SUN3D data <http://sun3d.cs.princeton.edu/>`_.
-The tutorial uses the 016 sequence from the SceneNN dataset. The sequence is from `SceneNN oni file archieve<https://drive.google.com/drive/folders/0B-aa7y5Ox4eZUmhJdmlYc3BQSG8>`_. The oni file can be extracted into color and depth image sequence using `OniParser from the Redwood reconstruction system <http://redwood-data.org/indoor/tutorial.html>`_. Alternatively, any tool that can convert an .oni file into a set of synchronized RGBD images will work. This is a `quick link <https://drive.google.com/open?id=11U8jEDYKvB5lXsK3L1rQcGTjp0YmRrzT>`_ to download the rgbd sequence used in this tutorial. Some helper scripts can be found from ``ReconstructionSystem/scripts``.
+The tutorial uses sequence ``016`` from the SceneNN dataset. This is a `quick link <https://drive.google.com/open?id=11U8jEDYKvB5lXsK3L1rQcGTjp0YmRrzT>`_ to download the RGBD sequence used in this tutorial. Alternatively, you can download the original dataset from the `SceneNN oni file archive <https://drive.google.com/drive/folders/0B-aa7y5Ox4eZUmhJdmlYc3BQSG8>`_, and then extract the ``.oni`` file into color and depth image sequences using `OniParser from the Redwood reconstruction system <http://redwood-data.org/indoor/tutorial.html>`_ or other conversion tools. Some helper scripts can be found in ``ReconstructionSystem/scripts``.
-Put all color images in the *image* folder, and all depth images in the *depth* folder. Run following commands from the root folder.
+Put all color images in the ``image`` folder, and all depth images in the ``depth`` folder. Run the following commands from the root folder.
.. code-block:: sh
@@ -44,7 +44,7 @@ Put all color images in the *image* folder, and all depth images in the *depth*
:lines: 1-
:linenos:
-We assume the color images and the depth images are synchronized and registered. ``"path_intrinsic"`` specifies path to a json file that stores the camera intrinsic matrix (See :ref:`reading_camera_intrinsic` for details). If it is not given, the PrimeSense factory setting is used. For your own dataset, use an appropriate camera intrinsic and visualize a depth image (likewise :ref:`rgbd_redwood`) prior to use the system.
+We assume that the color images and the depth images are synchronized and registered. ``"path_intrinsic"`` specifies the path to a json file that stores the camera intrinsic matrix (see :ref:`reading_camera_intrinsic` for details). If it is not given, the PrimeSense factory setting is used. For your own dataset, use an appropriate camera intrinsic and visualize a depth image (as in :ref:`rgbd_redwood`) before using the system.
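For reference, an Open3D-style intrinsic json for a 640x480 PrimeSense camera looks roughly like the following (the ``intrinsic_matrix`` is stored column-major; the 525 / 319.5 / 239.5 values are the commonly quoted PrimeSense factory defaults, so treat them as an assumption and verify against your own calibration):

```json
{
    "width": 640,
    "height": 480,
    "intrinsic_matrix": [525.0, 0, 0, 0, 525.0, 0, 319.5, 239.5, 1]
}
```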
.. note:: ``"python_multi_threading": true`` utilizes ``joblib`` to parallelize the system using every CPU core. With this option, Mac users may encounter an unexpected program termination. To avoid this issue, set this flag to ``false``.