README.md: 1 addition, 1 deletion
@@ -38,4 +38,4 @@ Most chapters have a short YouTube video explaining the key concepts, and some c
 ## Open material
-This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License.
+This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
acquisition/acquisition.tex: 13 additions, 13 deletions
@@ -84,7 +84,7 @@ \subsection{Georeferencing the range measurements}
 \marginnote{inertial measurement unit (IMU)}\index{inertial measurement unit (IMU)}
 Only when we accurately know the orientation of the laser scanner, can we know the direction (in a global coordinate system) in which a laser pulse is emitted from the aircraft.
-By combining the global position and the global orientation of the laser scanner with the range measurement from the laser scanner, the georeferenced 3D position of the point on the target object that reflected the lase pulse can be computed.
+By combining the global position and the global orientation of the laser scanner with the range measurement from the laser scanner, the georeferenced 3D position of the point on the target object that reflected the laser pulse can be computed.
 %%%
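The combination of position, orientation, and range described in this hunk can be sketched as a simplified direct-georeferencing equation. The notation below is illustrative and not taken from the book, and lever-arm and boresight offsets between GNSS antenna, IMU, and scanner are omitted:

```latex
\[
  \mathbf{x} \;=\; \mathbf{x}_{\mathrm{GNSS}} \;+\; \mathbf{R}_{\mathrm{IMU}}\,\rho\,\mathbf{u}
\]
% x      : georeferenced 3D position of the reflecting point
% x_GNSS : global position of the scanner (from GNSS)
% R_IMU  : rotation from the scanner frame to the global frame (from the IMU)
% rho    : measured range;  u : unit direction of the emitted pulse in the scanner frame
```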
@@ -93,7 +93,7 @@ \subsection{Echo detection}
 A lidar system performs ranging measurements using the time-of-flight principle that allows us to compute range from a time measurement using the known speed of light in the air.
 The time measurement starts when the laser pulse is emitted and is completed when a backscattered echo of that signal is detected.
 In practice one emitted pulse can even lead to multiple echoes in the case when an object reflects part of the laser pulse, but also allows part of the pulse to continue past the object.
-Notice that lidar pulses are typically emitted in a slightly divergent manner. As a result the footprint of the pules at ground level is several centimetres in diameter, which increases the likelihood of multiple echoes.
+Notice that lidar pulses are typically emitted in a slightly divergent manner. As a result the footprint of the pulse at ground level is several centimetres in diameter, which increases the likelihood of multiple echoes.
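The time-of-flight ranging mentioned in this hunk reduces to a one-line relation (a sketch; the symbols are ours, not the book's). The factor two accounts for the pulse travelling to the target and back:

```latex
\[
  \rho \;=\; \frac{c\,\Delta t}{2}
\]
% rho : range to the reflecting object
% c   : speed of light in air (~3 x 10^8 m/s)
% dt  : time between pulse emission and echo detection
% e.g. dt = 6.67 microseconds  ->  rho = (3e8 * 6.67e-6) / 2 ~ 1000 m
```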
 \begin{figure}
 \centering
@@ -106,9 +106,9 @@ \subsection{Echo detection}
 With the direct detection lidar systems that we focus on in this book, the echoes are derived from the backscattered waveform by using a thresholding technique. This essentially means that an echo is recorded whenever the power of the waveform exceeds a fixed threshold (see Figure~\ref{fig:lidar-multipulse}b).
 An echo can also be referred to as a \emph{return}.
-For each return the return count is recorded,
+For each return the return count is recorded,
 \marginnote{return}\index{return}
-\eg\ the first return is the first echo received from an emitted laser pules and the last return is the last received echo (see Figure~\ref{fig:lidar-multipulse}). The return count can in some cases be used to determine if an echo was reflected on vegetation or ground (ground should then be the last return).
+\eg\ the first return is the first echo received from an emitted laser pulse and the last return is the last received echo (see Figure~\ref{fig:lidar-multipulse}). The return count can in some cases be used to determine if an echo was reflected on vegetation or ground (ground should then be the last return).
 %%%
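The fixed-threshold echo detection described in this hunk can be sketched in a few lines. This is a toy illustration of the idea, not the detection electronics of any real scanner; the function name and the sample waveform are ours:

```python
def detect_echoes(waveform, threshold):
    """Return the sample indices where the waveform power rises above the threshold.

    Each rising edge through the threshold is counted as one echo (return).
    """
    echoes = []
    above = False
    for i, power in enumerate(waveform):
        if power >= threshold and not above:
            echoes.append(i)   # leading edge of an echo
            above = True
        elif power < threshold:
            above = False
    return echoes

# Two echoes of one pulse, e.g. a canopy return followed by a ground return
waveform = [0.1, 0.2, 0.9, 1.4, 0.8, 0.2, 0.1, 0.7, 1.1, 0.4, 0.1]
print(detect_echoes(waveform, 0.6))  # -> [2, 7]
```

With this waveform the first return is at index 2 and the last return at index 7, matching the first/last-return terminology in the hunk above.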
@@ -129,9 +129,9 @@ \subsection{Anatomy of a lidar system}%
 It then passes through a set of optics (lenses and mirrors) so that it leaves the scanner in an appropriate direction.
 After the pulse interacts with the scattering medium, it is reflected back into the scanning optics which then directs the signal into a telescope.
 The telescope converges the signal through a field diaphragm (essentially a tiny hole around the point of convergence).
-The field diaphragm blocks stray light rays (\eg\ sunlight reflected into the optics from any angle) from proceeding in the optical pipeline.
-Next, the light signal is recollimated so that it again consists only of parallel light rays.
-The final step of the optical part is the inference filter which blocks all wavelengths except for the wavelength of the laser source.
+The field diaphragm blocks stray light rays (\eg\ sunlight reflected into the optics from any angle) from proceeding in the optical pipeline.
+Next, the light signal is recollimated so that it again consists only of parallel light rays.
+The final step of the optical part is the interference filter which blocks all wavelengths except for the wavelength of the laser source.
 This is again needed to block stray light rays from distorting the measurement.
 The electronic part consists of a photodetector, which first transforms the light signal into an electrical current, which is then converted to a digital signal using the analogue-to-digital converter.
@@ -150,7 +150,7 @@ \subsection{Laser wavelength}
 \subsection{Scanning patterns}
 In order to improve the capacity to quickly scan large areas, a number of rotating optical elements are typically present in a lidar system. Using these optical elements, \ie\ mirrors or prisms, the emitted laser pulse is guided in a cross-track direction (\ie\ perpendicular to the along-track direction in which the aircraft moves, see Figure~\ref{fig:acqLidar}), thereby greatly increasing the scanned ground area per travelled meter of the aircraft.
-Figure~\ref{fig:lidar-patterns} depicts a number of possible configurations of rotating optics and shows the resulting scanning patterns. It is clear that density of points on the ground is affected by the scanning pattern. The top example for example, yields much higher densities on edges of the scanned area. In practice more uniform patterns, such as the bottom two examples are often preferred.
+Figure~\ref{fig:lidar-patterns} depicts a number of possible configurations of rotating optics and shows the resulting scanning patterns. It is clear that density of points on the ground is affected by the scanning pattern. The top example for example, yields much higher densities on edges of the scanned area. In practice more uniform patterns, such as the bottom two examples, are often preferred.
 Apart from lidar there are also other sensor techniques that can be used to acquire elevation data. Some of these are active sensors just like lidar (a signal is generated and emitted from the sensor), whereas others are passive (using the sun as light source). And like lidar, these sensors themselves only do range measurements, and need additional hardware such as a GPS receiver and an IMU to georeference the measurements. What follows is a brief description of the three other important acquisition techniques used in practice.
 \caption{Strip adjustment for lidar point clouds}%
 \label{fig:lidarStripAdj}
 \end{figure}
-If the strip adjustments process fails or is omitted, a `ghosting' effect can occur as illustrated in Figure~\ref{fig:lidarGableRoof} (top).
+If the strip adjustment process fails or is omitted, a `ghosting' effect can occur as illustrated in Figure~\ref{fig:lidarGableRoof} (top).
 Photogrammetry knows a similar process called aerial triangulation, in which camera positions and orientation parameters (one set for each image) are adjusted to fit with each other. Errors in the aerial triangulation can lead to a noisy result for the dense matching as seen in Figure~\ref{fig:dim}.
-Photogrammetry suffers from a few other problems as well, such as surfaces that have a homogeneous texture that make it impossible to find distinguishing features that can be used for matching.
-This may also happen in poor lightning conditions, for example in the shadow parts of an image.
+Photogrammetry suffers from a few other problems as well, such as surfaces that have a homogeneous texture that makes it impossible to find distinguishing features that can be used for matching.
+This may also happen in poor lighting conditions, for example in the shadow parts of an image.
 An example of moving objects are flocks of birds flying in front of the scanner. These can cause outliers high above the ground, as illustrated in Figure~\ref{fig:outliers}.
appendices/pcformats/pcformats.tex: 5 additions, 6 deletions
@@ -78,7 +78,7 @@ \section{LAS format}%
 The LAS standard, currently at version 1.4\marginnote{LAS v1.4-R15 is the latest version}, is maintained by the American Society for Photogrammetry and Remote Sensing (ASPRS) and, as the name implies, was designed for datasets that originate from (airborne) lidar scanners.
 However, in practice it is also used for other types of point cloud, \eg\ those derived from dense image matching.
-LAS files are binary and unlike the PLY format the fields are prescribed, \ie\ the attributes for each point record are their types (number of bits) cannot be modified.
+LAS files are binary and unlike the PLY format the fields are prescribed, \ie\ the attributes for each point record and their types (number of bits) cannot be modified.
 Table~\ref{tab:las-record} shows the composition of the base record type for the latest version of LAS (v1.4).
 \begin{table*}
 \centering
@@ -94,7 +94,7 @@ \section{LAS format}%
 \texttt{return\_number} & unsigned int & 4 & total pulse return number for a given output pulse \\
 \texttt{number\_of\_returns} & unsigned int & 4 & total number of returns for a given pulse \\
 \texttt{synthetic} & boolean & 1 & flag to indicate whether the point is synthetic, \ie\ if it has been artificially generated rather measured by the lidar sensor \\
-\texttt{key\_point} & boolean & 1 & flag to mark the point that should not be removed or thinned during data processing \\
+\texttt{key\_point} & boolean & 1 & flag to mark points that should not be removed or thinned during data processing \\
 \texttt{overlap} & boolean & 1 & flag to identify points that are part of an overlap region between different lidar flight lines\\
 \texttt{scanner\_channel} & unsigned int & 2 & used to distinguish between the different channels of a multi-channel lidar system, where each channel might correspond to a different laser or beam angle \\
 \texttt{scan\_direction\_flag} & boolean & 1 & direction at which the scanner mirror was travelling at the time of the output pulse. A bit value of 1 is a positive scan direction, and a bit value of 0 is a negative scan direction (where positive scan direction is a scan moving from the left side of the in-track direction to the right side and negative the opposite) \\
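The bit widths in the rows above pack into two bytes of a LAS 1.4 point record (formats 6 and up). As a sketch of how such a record is decoded, assuming the bit layout of the LAS 1.4 specification (this is an illustration, not a full LAS reader; the function name is ours):

```python
def unpack_las14_flags(return_byte, flag_byte):
    """Decode the return byte and classification-flag byte of a LAS 1.4
    point record (format 6+), following the field widths in the table above."""
    return {
        "return_number":        return_byte & 0x0F,        # bits 0-3
        "number_of_returns":   (return_byte >> 4) & 0x0F,  # bits 4-7
        "synthetic":       bool(flag_byte & 0x01),         # bit 0
        "key_point":       bool(flag_byte & 0x02),         # bit 1
        "withheld":        bool(flag_byte & 0x04),         # bit 2
        "overlap":         bool(flag_byte & 0x08),         # bit 3
        "scanner_channel": (flag_byte >> 4) & 0x03,        # bits 4-5
        "scan_direction_flag": bool(flag_byte & 0x40),     # bit 6
    }

# Second return out of three; key_point set; positive scan direction
flags = unpack_las14_flags(0b0011_0010, 0b0100_0010)
print(flags["return_number"], flags["number_of_returns"], flags["key_point"])  # 2 3 True
```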
@@ -166,8 +166,8 @@ \section{LAZ format}%
 \index{LAZ format}
 A compressed variant of the LAS format, dubbed ``LAZ'', exists.
-\marginnote{\citet{Isenburg13} describes in details the LAS format}
-While it is not maintained by an `official' organisation like the LAS standard, it is an open standard and it is widely used, especially for very big dataset.
+\marginnote{\citet{Isenburg13} describes in detail the LAS format}
+While it is not maintained by an `official' organisation like the LAS standard, it is an open standard and it is widely used, especially for very big datasets.
 Through the use of lossless compression algorithms that are specialised for point cloud data, a LAZ file can be compressed into a fraction of the storage space required for the equivalent LAS file, without any loss of information.
 This makes it more effective than simply using ZIP compression on a LAS file.
 In addition, support for LAZ is typically built into point cloud reading and writing software, so to the user it is no different than opening a LAS file (although the compression and decompression operations do take extra time).
@@ -181,7 +181,6 @@ \section{LAZ format}%
 Typically information is quite similar for points that are close to each other in space.
 Therefore, a greater compression factor can often be achieved after spatially sorting the points.
-In practice, for the AHN4 dataset, the LAZ file of a given area is about 10X more compact than its LAS counterpart.
+In practice, for the AHN5 dataset, the LAZ file of a given area is about 10X more compact than its LAS counterpart.
 \marginnote{LAZ = about 10X compacter than LAS}
 However, the main disadvantage is that reading and writing a LAZ file is slower than a LAS file, since more operations need to be performed.
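The effect of spatial sorting mentioned in this hunk can be demonstrated with a toy stand-in for LAZ: delta-encode integer coordinates and compress them with a generic lossless compressor. This is our own illustration of the principle, not the actual LAZ algorithm:

```python
import random
import struct
import zlib

random.seed(0)
# 10 000 synthetic points in a 1 km x 1 km tile at centimetre resolution
pts = [(random.randrange(100_000), random.randrange(100_000)) for _ in range(10_000)]

def compressed_size(points):
    """zlib-compress delta-encoded integer coordinates (a toy stand-in for LAZ)."""
    prev_x = prev_y = 0
    buf = bytearray()
    for x, y in points:
        buf += struct.pack("<ii", x - prev_x, y - prev_y)  # store deltas, not absolutes
        prev_x, prev_y = x, y
    return len(zlib.compress(bytes(buf)))

unsorted_size = compressed_size(pts)
sorted_size = compressed_size(sorted(pts))  # spatial sort (here simply by x, then y)
print(sorted_size < unsorted_size)
```

After sorting, the deltas between neighbouring points are small numbers with many zero high-order bytes, which the compressor exploits, so the sorted stream compresses to fewer bytes.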
-An hydrographic chart is a map of the underwater world specifically intended for the safe navigation of ships, see Figure~\ref{fig:enc} for an example.
+A hydrographic chart is a map of the underwater world specifically intended for the safe navigation of ships, see Figure~\ref{fig:enc} for an example.
 In its digital form, it is often called an electronic navigational chart, or ENC\@.%
 \index{electronic navigational chart (ENC)}
-The information appearing on an ENC are standardised, and there are open formats.
+The information appearing on an ENC is standardised, and there are open formats.
 %
@@ -108,7 +108,7 @@ \subsection{Generalisation is required to obtain good depth contours}%
 %
-Also, because of the safety constraint, depth-contours can only be modified such that the safety is respected at all times: contours can only be pushed towards the deeper side during generalisation, as illustrated in Fig~\ref{fig:genvalidornot}.
+Also, because of the safety constraint, depth-contours can only be modified such that the safety is respected at all times: contours can only be pushed towards the deeper side during generalisation, as illustrated in Figure~\ref{fig:genvalidornot}.