The Evolution of Accurate Drone Mapping

Over the past decade, the process of creating accurate 3D maps and models from drone photos has come a long way. Drone capture capabilities, together with photogrammetry processing software and services, have dropped in cost and improved in efficiency to the point where 3D digital twins are a staple of surveying and inspection.

Making these models accurate and proving that accuracy – the foundation of any survey – has also come a long way.

In a typical drone capture we know the location of the drone when each photo was taken to 3–5 m accuracy from the onboard GNSS (GPS and the other global satellite positioning constellations). Smash those photos together in photogrammetry software, and we have a 3D model! However, this model is floating in space – it may look impressive, but it is not necessarily a good representation of the real earth.

Photogrammetry software does its best to match pixels across overlapping photos, estimating camera focal length and lens distortion, camera orientations, camera positions, and whether a pixel in one photo represents the same point as a pixel in another. Many errors creep in through this process, and they are managed statistically to achieve the best possible result – the 3D model.
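
To make that matching step a little more concrete, here is a minimal, illustrative sketch of the reprojection-error idea underneath it: given matched image observations and a simple pinhole camera model, the software nudges camera parameters until projected points land on the observed pixels. The pinhole model, synthetic data and single-camera refinement below are assumptions for illustration – a real bundle adjustment solves for all cameras, tie points and lens distortions simultaneously.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal pinhole projection: world points -> pixel coordinates for one camera.
# f = focal length (px), c = principal point (px), R/t = world-to-camera pose.
def project(points_w, f, c, R, t):
    p_cam = (R @ points_w.T).T + t        # world frame -> camera frame
    uv = p_cam[:, :2] / p_cam[:, 2:3]     # perspective division
    return f * uv + c                     # scale to pixels

# Synthetic "true" scene: a patch of ground points and one nadir camera.
rng = np.random.default_rng(0)
pts_world = rng.uniform([-20, -20, 0], [20, 20, 5], size=(30, 3))
f_true, c_true = 3000.0, np.array([2000.0, 1500.0])
R_true = np.diag([1.0, -1.0, -1.0])       # camera looking straight down
C_true = np.array([0.0, 0.0, 80.0])       # camera centre ~80 m above the points
obs = project(pts_world, f_true, c_true, R_true, -R_true @ C_true)
obs += rng.normal(0, 0.5, obs.shape)      # half-pixel matching noise

# Start from a GNSS-quality guess of the camera position (a few metres off)
# and refine it by minimising reprojection error - the same criterion a
# bundle adjustment minimises over *all* cameras and tie points at once.
def residuals(C):
    return (project(pts_world, f_true, c_true, R_true, -R_true @ C) - obs).ravel()

C0 = C_true + np.array([3.0, -4.0, 2.0])
sol = least_squares(residuals, C0)
print("initial camera position error (m):", np.linalg.norm(C0 - C_true))
print("refined camera position error (m):", np.linalg.norm(sol.x - C_true))
```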

This model is effectively malleable – flexible – and once we can lock some of its parameters to the real world, we can stretch and bend it to fit whatever real-world constraints we can find. Surveyors then need to be able to confirm that measurements taken on the model match real-world values.
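
One simple way to picture that "lock it to the real world" step is a best-fit scale, rotation and translation (a Helmert-style similarity transform) from model coordinates onto known real-world coordinates. The sketch below uses the closed-form Umeyama least-squares fit and made-up coordinates purely for illustration; photogrammetry packages generally go further and re-run the adjustment itself against the control rather than simply transforming the finished model.

```python
import numpy as np

def fit_similarity(model_xyz, world_xyz):
    """Best-fit s, R, t so that world ≈ s * R @ model + t
    (Umeyama closed-form least-squares similarity transform)."""
    mu_m, mu_w = model_xyz.mean(axis=0), world_xyz.mean(axis=0)
    A, B = model_xyz - mu_m, world_xyz - mu_w
    cov = B.T @ A / len(A)                                    # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # no reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_w - s * R @ mu_m
    return s, R, t

# Illustrative only: model coordinates of four targets, and the surveyed
# real-world coordinates of the same targets.
model = np.array([[0.0, 0.0, 0.0],
                  [50.0, 2.0, 1.0],
                  [48.0, 60.0, -1.0],
                  [-3.0, 55.0, 0.5]])
world = np.array([[512030.1, 6240015.2, 102.3],
                  [512079.8, 6240018.0, 103.1],
                  [512077.2, 6240076.4, 101.5],
                  [512026.9, 6240070.8, 102.9]])

s, R, t = fit_similarity(model, world)
fitted = s * (R @ model.T).T + t
print("per-point residuals (m):", np.linalg.norm(fitted - world, axis=1))
```

The per-point residuals printed at the end are exactly the kind of numbers a surveyor checks to confirm that model measurements line up with real-world values.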

The first way this was done was with GCPs (Ground Control Points) – targets on the ground, typically placed by a surveyor, with known coordinates in a known coordinate system. This was time-consuming on-site, as it required travelling the entire site with survey equipment, or placing and collecting ‘smart’ GCPs. Processing also took longer, both for the operator to pick those GCPs in the imagery and for the software to re-iterate camera positions over and over against the GCPs and tie points. And errors due to lens distortion and surface matching would still creep in between the GCPs.
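
Accuracy is normally then proven against independent check points that were not used as control. Here is a small, illustrative sketch of the usual horizontal and vertical RMSE figures (the coordinates are made up):

```python
import numpy as np

# Surveyed check-point coordinates vs. the same points measured on the
# finished model (E, N, H). Values are illustrative only.
surveyed = np.array([[1000.00, 2000.00, 50.00],
                     [1100.00, 2050.00, 51.20],
                     [1050.00, 2120.00, 49.80]])
measured = np.array([[1000.03, 1999.98, 50.06],
                     [1099.96, 2050.04, 51.14],
                     [1050.02, 2120.01, 49.72]])

diff = measured - surveyed
rmse_h = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))  # horizontal RMSE
rmse_v = np.sqrt(np.mean(diff[:, 2] ** 2))                   # vertical RMSE
print(f"Horizontal RMSE: {rmse_h:.3f} m, vertical RMSE: {rmse_v:.3f} m")
```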

Putting the control at the camera turns the workflow around. By knowing where the camera was for each photo, the AT (aerial triangulation or matching) process in photogrammetry is more
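
Putting the control at the camera generally means attaching a precise – for example RTK- or PPK-corrected – position to each photo. A common step in that workflow is interpolating the corrected trajectory to each image's exposure time; the data layout and values in the sketch below are assumptions for illustration.

```python
import numpy as np

# PPK-corrected trajectory: time (s), easting, northing, height at ~10 Hz.
# Values are illustrative only.
traj_t = np.arange(0.0, 10.0, 0.1)
traj_enh = np.column_stack([
    500000.0 + 2.0 * traj_t,   # flying east at 2 m/s
    6240000.0 + 0.0 * traj_t,
    120.0 + 0.0 * traj_t,
])

# Exposure timestamps recorded for each photo (e.g. from the camera event log).
photo_t = np.array([1.23, 3.71, 6.05, 8.48])

# Interpolating the trajectory at each exposure time gives the camera position
# to attach to that photo (geotag or external orientation file).
photo_enh = np.column_stack([
    np.interp(photo_t, traj_t, traj_enh[:, i]) for i in range(3)
])
for t, (e, n, h) in zip(photo_t, photo_enh):
    print(f"t={t:5.2f} s  E={e:.3f}  N={n:.3f}  H={h:.3f}")
```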

This post was originally published by SUAS News. Please visit the original post to read the complete article.
