How to guarantee the quality of survey data
Many people fly with cameras that are externally triggered by a 3.3V or 5V pulse on a port of the camera. In some cameras this actuates the shutter immediately; in others it triggers the process for taking a picture, which actually has a delay of up to hundreds of ms before the shutter is actioned.
If you fly a multirotor at 5 m/s, then a 100ms delay in taking a picture is a difference of half a meter. Not even talking about a fixed wing flying at 12 m/s or more. This delay should probably be taken into account, but it may be variable.
Another issue is that the readout frequency of many vehicle GPS units is only up to 5Hz. This means that a new position is only available every 200 ms. For navigation that's no problem, but combined with the shutter delay above you can be 300 ms off in the worst case, which is 1.5m on a multirotor travelling at 5 m/s, on top of the accuracy of the GPS itself.
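As a back-of-the-envelope check, the positional uncertainty is just speed times delay. A minimal sketch using the figures above:

```python
# Positional uncertainty caused by trigger delay and stale GPS readouts.
# The speeds and delays are the example figures from the text above.

def position_error_m(speed_m_s: float, delay_ms: float) -> float:
    """Distance travelled by the vehicle during a timing delay."""
    return speed_m_s * delay_ms / 1000.0

# 100 ms shutter delay on a multirotor at 5 m/s:
print(position_error_m(5.0, 100))        # 0.5 m

# Worst case: 100 ms shutter delay + 200 ms stale GPS fix (5 Hz readout):
print(position_error_m(5.0, 100 + 200))  # 1.5 m
```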
If you're using a camera with built-in GPS, it's usually mounted under or within the vehicle, where GPS reception is pretty bad, so the positions derived from it are usually not good. The other issue is that such GPS readings use a filtering method calibrated for pedestrians: you'll see large deviations in the final processing just after the vehicle has made a turn, because the GPS position appears unable to follow the turn.
Tip
For these cameras with GPS, although the position isn't very accurate, the camera usually gets synchronized with GPS time, which is incredibly accurate. So if you fly at 5 m/s and the timestamp in the photo is accurate to the second, you can relocate your photos to 5m accuracy using the autopilot log: just compare the timestamps in the log and interpolate between the positions.
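A minimal sketch of that interpolation, assuming a hypothetical autopilot log of (timestamp, lat, lon) tuples; the log format and field order here are made up:

```python
# Relocate a geotagged photo by interpolating the autopilot log at the
# photo's GPS timestamp. The log layout is a hypothetical example.
from bisect import bisect_left

def interpolate_position(log, t):
    """log: list of (timestamp_s, lat, lon) sorted by time; t: photo time."""
    times = [entry[0] for entry in log]
    i = bisect_left(times, t)
    if i == 0:
        return log[0][1:]
    if i == len(log):
        return log[-1][1:]
    (t0, lat0, lon0), (t1, lat1, lon1) = log[i - 1], log[i]
    f = (t - t0) / (t1 - t0)  # linear interpolation factor between fixes
    return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))

log = [(100.0, 52.0000, 4.0000), (100.2, 52.0001, 4.0000)]  # 5 Hz log
print(interpolate_position(log, 100.1))  # halfway, roughly (52.00005, 4.0)
```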
It all sounds like bad news up to now, but as indicated before, many error components in the signal have a normal distribution (they are randomly scattered around the real value). When you take many, many samples and use those samples to calculate your estimate, that estimate asymptotically converges to the real value as the number of samples grows. This is valid for situations where the sensor and the world don't change, so they're both fixed.
In the worst case, nothing is fixed, so the readings have a certain accuracy associated with them around the real value. In the best case, both the world and the sensor are fixed and over many samples you converge.
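The convergence over many samples can be illustrated with simulated noisy readings of a fixed position (the noise figures are made up for illustration):

```python
# Random errors average out: the mean of many noisy readings of a fixed
# position converges toward the true value as the sample count grows.
import random

random.seed(42)
true_position = 10.0
# 10000 readings with normally distributed noise (sigma = 2 m, made up):
readings = [true_position + random.gauss(0.0, 2.0) for _ in range(10000)]

mean_10 = sum(readings[:10]) / 10
mean_all = sum(readings) / len(readings)
print(abs(mean_10 - true_position))   # typically decimeters off
print(abs(mean_all - true_position))  # typically centimeters off
```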
Photogrammetry is an in-between case. The world is assumed not to change, but the sensor itself moves over the terrain. Some elements in the world do change (cars coming and going, moving vegetation), but their number is considered too low to have a significant impact.
Photogrammetry builds up a model from measurements and the number of 3D points in that model is huge in comparison to the positions of each camera (photo) that you have. So there's lots of room to figure out what the real position of the camera is. This is an iterative process:
- Figure out the position of cameras and specify a 'camera matrix' with a bunch of parameters specifying position, lens distortion and focal lengths.
- Determine visible matching 2D points and triangulate 3D points through the matrices.
- Figure out the error, improve the parameters
- From improved matrices, retriangulate the 3D points.
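The error being minimized in this loop is the reprojection error: the distance between where the model says a 3D point should appear in a photo and where it was actually detected. A minimal sketch with made-up numbers, using a simple pinhole camera model:

```python
# Reprojection error: project known 3D points (camera coordinates) through
# a pinhole camera and compare with the observed 2D detections.
# All numbers are made up for illustration.
import math

FX, FY = 1000.0, 1000.0   # focal lengths in pixels
CX, CY = 640.0, 360.0     # principal point

def project(x, y, z):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    return (FX * x / z + CX, FY * y / z + CY)

points_3d = [(0.0, 0.0, 10.0), (1.0, 0.5, 12.0)]
observed_2d = [(641.0, 361.0), (723.0, 402.0)]  # noisy feature detections

residuals = []
for p, (u_obs, v_obs) in zip(points_3d, observed_2d):
    u, v = project(*p)
    residuals += [u - u_obs, v - v_obs]

# The iterative process tweaks the camera parameters to shrink this value:
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print(round(rmse, 3))
```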
This is basically a process of model fitting, where the camera positions and distortions are allowed to change. Eventually the model settles, when the iterative process perceives there are no further improvements that can be made by changing the parameters.
So if you assume the world is static, the cameras get repositioned using the data from those thousands of measured points, which improves their positions significantly. This removes a large part of the random errors. However, if there are consistent errors (for example, all positions are 2 meters to the east), this iterative process will not be able to cancel them.
If you have consistent errors related to another parameter such as flying direction (for example, a filter that overestimates the position in the direction of travel), you can deal with them by not flying every lane in the same direction.
When the photogrammetry software prints "0.7m error", this means that by considering all the points together, the root mean square error of the data was found to be 0.7m. This error doesn't refer to the absolute location of the model in the real world, but only to the consistency of the model itself. So it can be considered a measure of the quality of the model, whether that is the 2D orthomosaic or the 3D point cloud.
0.7m sounds great, but it's not an upper limit. An RMSE is an aggregate: alongside the many points with better than 0.7m precision, there are also many points that deviate by more than 0.7m, or the overall figure would never come out at 0.7m.
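A small illustration with made-up per-point errors:

```python
# An RMSE is an aggregate: some residuals are smaller, some larger.
import math

errors = [0.2, 0.4, 0.5, 0.6, 1.3]  # per-point errors in meters (made up)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(round(rmse, 2))                   # around 0.71
print([e for e in errors if e > rmse])  # points deviating more than the RMSE
```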
You should nowadays be able to achieve about 1x GSD RMSE in your model. So if you have 2cm / pixel, your model error shouldn't be larger than 2cm horizontally. Vertically it's about 3x GSD.
Absolute accuracy is related to how your model is located correctly in real world coordinates. How does this work?
In photogrammetry, you collect real world data in two ways:
- The photos. What you snap there is really there and you can post-process it.
- The ground control points. You make attempts to acquire these points with as high accuracy as possible.
If the photos are sharp, you'd usually get really good results for the model. However, if the ground control points you collect deviate a lot, then the model may be located incorrectly in real-world space.
One example: altitude tends to be inaccurate in GPS readings. GPS altitude accuracy is usually 2-3x worse than the horizontal position accuracy.
This means that besides the model being somewhat offset horizontally, it may also be slightly tilted if one of the positions had a higher elevation reading. Such tilts can cause large deviations in contour lines, for example.
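A rough illustration, with made-up numbers, of how a single bad elevation reading on one control point tilts the whole model:

```python
# One GCP with a 30 cm elevation error, 200 m from the opposite GCP,
# tilts the model; the tilt grows into a large error further out.
# All figures are made up for illustration.
import math

elevation_error_m = 0.30   # one GCP read 30 cm too high
baseline_m = 200.0         # distance to the opposite control point
tilt_rad = math.atan2(elevation_error_m, baseline_m)

# Elevation error this tilt introduces 500 m beyond the bad GCP:
print(500.0 * math.tan(tilt_rad))  # roughly 0.75 m, enough to shift contours
```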
There are two main concerns in photogrammetry:
- Correct scale
- Correct absolute positioning in the world
Guaranteeing correct scale is a lot easier. You can use measuring bars and indicate what the real distance should be between two points.
As indicated before, the absolute positioning including "attitude" of the model is a lot harder, especially because the elevation has a larger error component.
It's therefore better to use methods that use a "relative" method for determining position (like RTK), instead of taking individual isolated readings from a single GPS.
The error for the base station (the reference) may still be as high as the expected GPS accuracy itself, but all other points established in reference to that station will typically be accurate to within 10cm, even in elevation (assuming you get a proper fix and don't get caught in an integer ambiguity).
So using the vehicle's GPS is great for a quick "reconnaissance run", but I doubt whether it can be used for serious engineering work, because the elevation and horizontal position need to be pretty precise.
There are expensive GPS units on the market ($25k and upwards) which can take position measurements to really good accuracies in the centimeter range. There are "RTK networks" for example that broadcast the corrections to be applied and some of these expensive receivers can connect to those.
These receivers are more expensive because they're dual band (L1+L2), which allows them to compensate for ionospheric delays almost instantly, and they have very sensitive, high-quality antennas that are better able to reject multipath effects. They can achieve highly accurate positions in about 10 minutes.
RTK (Real Time Kinematic) is a method to derive high-quality points by not only receiving the signal from the satellite (the "data"), but also measuring the carrier phase of the signal to improve the accuracy of the measurements. It can be applied to both L1 and L1+L2 receivers. However, dual-band receivers can cancel some errors instantly because they measure at two different frequencies, which means that L1 receivers typically take more time to establish a good fix.
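The reason phase measurements are so precise comes down to the carrier wavelength. The frequencies below are from the GPS signal specification; the 1%-of-a-cycle resolution is just an illustrative assumption:

```python
# Carrier-phase precision: the L1 carrier wavelength is about 19 cm, so
# measuring the phase to a small fraction of a cycle gives millimeter-level
# range measurements (once the whole-cycle integer ambiguity is resolved).
C = 299_792_458.0    # speed of light, m/s
L1_HZ = 1575.42e6    # GPS L1 carrier frequency
L2_HZ = 1227.60e6    # GPS L2 carrier frequency

l1_wavelength = C / L1_HZ
l2_wavelength = C / L2_HZ
print(round(l1_wavelength, 3))  # ~0.19 m
print(round(l2_wavelength, 3))  # ~0.244 m

# Phase measured to 1% of a cycle on L1 (illustrative assumption):
print(round(0.01 * l1_wavelength * 1000, 1))  # ~1.9 mm
```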
There are RTK networks and base stations with a known reference position that can be used. In such cases even L1 receivers are a good match, assuming the L1 receiver is statically positioned.
RTK is a good method for collecting static ground control points. You simply connect to one of the networks or the reference station and you should get a good fix in about 2 minutes, assuming a good view of the sky. L1 receivers, once again, take more time and it's easy to lose the fix for these modules.
RTK therefore usually works with two modules: one is the reference and may or may not be part of your setup. The other one is called the "rover" and is statically positioned in different locations for some time or actually kinematically moving.
Because RTK uses the phase of the signal to improve the accuracy, you need modules that output the raw observation data describing the signals received from the satellites. Very few GPS modules actually do this. The modules also need a more sensitive antenna than is normally commercially available, to reduce the time-to-fix and get better data from the available satellites.
Assuming you have access to such a module and an antenna, you can use "RTKLIB" as a means to collect raw data (and store it). It can process the data to provide insight into the current position in "real-time", but I found that post-processing the raw data after the fact gives a much better eventual precision. I'm only using RTKLIB to collect static ground control points. As indicated before though, you need access to a reference station that has a known position.
If there's no reference station you can connect to in order to receive a stream of correction data, you can establish one yourself. If you have a dual-band module, then this doesn't take as much time as an L1 one.
The reference station is established by collecting hours and hours of GPS data in a single location; the more the better. Then you run the data through a "static PPP" processing run, which in the end gives you the final calculated position: "static" because you're not supposed to move this base reference station, and PPP stands for Precise Point Positioning.
In post processing you can also derive the positions of other modules around the field, but you can only do this if the reference station is receiving and recording data at the same time as the rover modules.
The idea here is that the reference station records for at least 2 hours, which should already converge to a position accurate to about 50cm. Then you read out the reference position and reuse the same file for a processing run in which that position serves as the reference for the error corrections applied to the other rover readings.
Because you use a reference station, RTK readings and post-processing sessions are relative. You apply an estimate of a position to the base station (which, yes, may be slightly wrong), but all other rover positions are relative to that, so they share the same absolute error while being accurate to centimeter level in relation to the base station (minus potential large errors that can creep in due to integer ambiguities; beware!).
In other words, with these positions from simple L1 RTK modules you will be able to get correct scale for the entire model, but the entire model may be off by the absolute error of the reference base station. However, your model will never be tilted, nor have varying scale. This is very different when you collect the ground control points using independent, inaccurate GPS readings.
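A small sketch of that property, with made-up local coordinates: every rover position inherits the base station's absolute offset, but the distances between rovers (the model's scale) are preserved exactly:

```python
# Rover positions measured relative to a base station all inherit the
# base station's absolute error, but distances between them do not.
# Coordinates are local east/north meters, made up for illustration.
import math

base_estimate = (0.8, -0.5)  # base position off by ~0.94 m from truth (0, 0)

# Centimeter-accurate rover vectors measured relative to the base:
rover_vectors = [(100.0, 0.0), (100.0, 50.0)]
rovers = [(base_estimate[0] + e, base_estimate[1] + n)
          for e, n in rover_vectors]

# Every rover carries the same absolute offset as the base...
for (e, n), (ve, vn) in zip(rovers, rover_vectors):
    print(e - ve, n - vn)  # roughly (0.8, -0.5) each time

# ...but the distance between rovers (the scale) is exact:
(e0, n0), (e1, n1) = rovers
print(math.hypot(e1 - e0, n1 - n0))  # 50.0
```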
Cross validation is the method used to verify that the model built from the collected data is correct, and to measure how the error varies between the ground control points. Of particular interest is the error that can be expected between control points. When you build the model, it's pretty obvious that the error close to a control point will be very small. What's not often well quantified is the error to expect between the points, or at a reasonable distance away from them.
Most photogrammetric post-processing software only requires 5 control points to initiate processing. This doesn't mean that 5 GCPs is always enough for every mission.
How many points are really needed depends on a number of factors:
- the quality of the camera sensor, the optics and the amount of distortion of the lens
- the size of the area to be surveyed
- the accuracy of the method for determining the ground control points
- the expected accuracy
Here's a great post on diydrones about this issue:
http://diydrones.com/profiles/blogs/how-accurate-are-your-maps-and-models
Notice especially how GCPs near the borders and near vegetation have larger error components.
When you post-process survey data, it becomes apparent that the generated points and data at the borders of the survey area have much less precision than the data that has many more matches. In fact, you may see large distortions near the borders. These distortions seem to appear from the boundary flight lanes outward.
As GPS can be 5m off and there is wind and some tilt to remain on track, I use the following method:
- Determine the exact boundaries of the area to be surveyed
- Plan the flight and verify that the legs and coverage is reasonable and actually covers that area. Adjust the flying direction to align the legs with one border
- Grow the area by 10 meters
- The expected horizontal accuracy of your model is about 1x the GSD.
- The expected vertical accuracy of your model is about 3x the GSD.
- The accuracy is a mean, so between ground control points you should expect the error to increase up to 3x the GSD, depending on the distance. This means you need to establish a correct distance between ground control points, which depends on camera quality, whether you have obliqueness in the images, etc.
- Survey beyond the exact borders of the area to be surveyed to guarantee you get decent coverage and no significant distortions within your survey area due to poor photo coverage.
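The rules of thumb above can be put together for a hypothetical survey plan; the 300 x 200 m area and 2 cm GSD are made-up example figures:

```python
# Expected model accuracy and survey margin for a hypothetical mission.
gsd_m = 0.02  # ground sample distance: 2 cm / pixel (example figure)

expected_horizontal_rmse = 1 * gsd_m  # rule of thumb: ~1x GSD
expected_vertical_rmse = 3 * gsd_m    # rule of thumb: ~3x GSD
worst_between_gcps = 3 * gsd_m        # error can grow to ~3x GSD between GCPs

print(expected_horizontal_rmse)           # 0.02 m
print(round(expected_vertical_rmse, 3))   # 0.06 m
print(round(worst_between_gcps, 3))       # 0.06 m

# Grow a rectangular survey area by a 10 m margin on every side:
width_m, height_m = 300.0, 200.0
margin_m = 10.0
print(width_m + 2 * margin_m, height_m + 2 * margin_m)  # 320.0 220.0
```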