We've introduced a report to give you better insight into how your map was processed through DroneDeploy. The report highlights key information, points out flight discrepancies, and surfaces factors that may need to be adjusted or addressed to produce the best possible map.
Access the processing report by visiting the Map Details section of your map.
*Important Note Regarding Availability
The Processing Report was introduced in October 2019. Maps processed before this date will not have a Processing Report. Please contact Support at [email protected] to reprocess the map and populate a new report.
Name given to the project at the time of processing.
Engine used to process the map. DroneDeploy uses its own proprietary algorithm, so this should be the default engine.
Date of Capture
Date flown. The date will be derived from the reported drone EXIF data at the time of flight.
Date the images were processed (or reprocessed) in DroneDeploy.
There are two possible modes for processing: Terrain and Structures. Terrain mode is primarily for nadir imagery, while Structures mode is for oblique imagery and subjects captured in 3D.
Ground Sample Distance
Distance on the ground represented by each pixel of the map.
Area Bounds (Coverage)
Area of the processed map and percent coverage of selected region of interest.
Camera used for the flight.
The quality and accuracy summary will provide insight into possible problem areas in the processing of the map. Expected values will be marked in green, and possible problem areas will be marked in orange.
Represents the image quality for stitching. A high-texture image will have crisp, detailed features that are easily recognizable. A low-texture image will be blurry, overexposed, or smooth. Thermal images are often low texture.
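As an illustrative sketch only (this is not DroneDeploy's actual metric), a common proxy for image texture is the variance of pixel gradients: crisp, detailed images produce large gradient variance, while blurry or smooth images produce values near zero.

```python
import numpy as np

def texture_score(gray):
    """Variance of image gradients: a rough proxy for texture/sharpness.
    Illustrative only -- not DroneDeploy's actual quality metric."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.var(gx) + np.var(gy))

# A checkerboard (high texture) scores far above a flat gray image.
checker = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
flat = np.full((32, 32), 128.0)
high = texture_score(checker)
low = texture_score(flat)   # 0.0 -- no variation at all
```

A flat image has zero gradient everywhere, so its score is exactly zero; thermal imagery typically lands much closer to the "flat" end of this scale than regular RGB imagery.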
Median Shutter Speed
Shutter speed for the camera. A shutter speed slower than 1/80 s is likely to produce motion blur, which can result in incorrect scale or measurements.
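To see why slow shutter speeds matter, you can estimate motion blur from the drone's ground speed, the exposure time, and the ground sample distance. This is a simplified back-of-the-envelope model, not DroneDeploy's internal check; the speed and GSD values below are example assumptions.

```python
def motion_blur_px(ground_speed_ms, shutter_s, gsd_m):
    """Ground distance travelled during the exposure, expressed in pixels.
    Simplified model: assumes level flight at constant speed."""
    return (ground_speed_ms * shutter_s) / gsd_m

# Example: a drone flying 10 m/s with a 1/80 s shutter and a 2.5 cm GSD
# travels 0.125 m during the exposure -- about 5 pixels of smear.
blur = motion_blur_px(10.0, 1 / 80, 0.025)
```

Blur of several pixels smears the keypoints that photogrammetry matches between images, which is why slower shutter speeds can degrade scale and measurement accuracy.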
A more detailed explanation of the mode chosen and whether or not it's appropriate for the dataset uploaded.
Images Uploaded (Aligned %)
The number of uploaded images that provided the data needed for processing. Unaligned images may be lacking EXIF data, or may not provide enough overlap with neighboring images of the subject.
Principal Point: The initial values provided by the EXIF data may be adjusted during photogrammetry if they deviate from the expected values. A high variation from the principal point can be raised to [email protected]
Focal Length: The focal length is estimated during the photogrammetry process. If it varies by more than 5% from the reference value, that could indicate a problem with the lens or camera. Using zoom during mapping, or including zoomed images from the same camera, can cause the focal length to be calculated incorrectly.
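The 5% threshold above is a simple relative-deviation check. As a sketch (the function name and example focal lengths are illustrative, not DroneDeploy's API):

```python
def focal_length_flagged(estimated_mm, reference_mm, tolerance=0.05):
    """Return True when the solved focal length deviates from the
    EXIF reference value by more than the tolerance (5% by default)."""
    deviation = abs(estimated_mm - reference_mm) / reference_mm
    return deviation > tolerance

ok = focal_length_flagged(9.2, 8.8)    # ~4.5% deviation: within tolerance
bad = focal_length_flagged(9.5, 8.8)   # ~8% deviation: flagged
```

A flagged value is worth investigating before trusting measurements: a zoomed frame mixed into the dataset shifts the estimate even when the hardware is fine.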
This provides some of the most important information about the processing potential of your map. In this section, you will see black dots representing each of the images collected for the site.
Photogrammetry works by matching keypoints across multiple photos in order to understand the relative position of all the cameras. Because of this, giving the drone camera several high-quality images of every point on every surface is key to producing a good map and 3D model. This is achieved via "overlap": how much each image overlaps in content with its neighboring images.
This can also be referred to as image density. For the best possible DroneDeploy processing, we recommend an image density of 8-9 images per pixel. (This roughly equates to 75/70 overlap.)
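The relationship between overlap and image density can be sketched with a simple flat-ground model (an idealization; the footprint and spacing numbers below are assumed examples, not recommended flight parameters):

```python
def overlap_fraction(footprint_m, spacing_m):
    """Overlap between neighboring images: 1 - spacing / footprint."""
    return 1.0 - spacing_m / footprint_m

def images_per_point(overlap):
    """Idealized redundancy along one axis: how many consecutive
    images contain the same ground point at a given overlap."""
    return 1.0 / (1.0 - overlap)

# A 100 m image footprint with 25 m between triggers gives 75% frontlap,
# so roughly 4 consecutive images see each ground point along-track.
f = overlap_fraction(100.0, 25.0)
n = images_per_point(f)
```

Note how redundancy grows sharply with overlap: moving from 50% to 75% overlap doubles the number of images that see each point along that axis.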
Flying over buildings and trees can cause major issues for your overlap on the tops of those objects. Learn more using the following articles:
ROI: Region of Interest
Aligned: Images that were successfully geolocated and aligned.
Aligned Cameras: For each image taken, DroneDeploy uses image content to place the camera in 3D space. Additionally, the metadata of the images is used to geolocate each image and thus the final map. 100% alignment means all of the uploaded images were used in the reconstruction of the map. Unaligned cameras (red X's on the Quality Review set) indicate camera locations that could not be located in 3D and were therefore not used in the reconstruction.
Unaligned images often do not provide the visual data needed to stitch successfully (e.g. water, homogeneous surfaces, moving trees, or insufficient overlap with other images).
Root Mean Squared Error (RMSE): The camera location XYZ RMSE is a measure of the spread of the error between the solved camera XYZ positions and the locations specified by the GPS values recorded in the images. For example, a 10 ft (3 m) camera location XYZ RMSE means that, in general, each solved image location should be within 10 ft of the supplied GPS location.
*Please note that camera location error does not correspond to the true accuracy of a map. For example, poor GPS conditions can cause large camera location errors, but if images are properly collected, the processed map will still be highly accurate when you measure distances and volumes in a small part of the map. To truly measure map accuracy, you must include checkpoints or an object with known dimensions that can be measured in the processed map to check for differences.
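The RMSE described above can be sketched as follows (an illustrative computation with made-up coordinates, not DroneDeploy's internal code):

```python
import math

def camera_location_rmse(solved_xyz, gps_xyz):
    """RMSE between solved camera XYZ positions and the GPS
    positions recorded in the image EXIF data (same units in, same out)."""
    sq_errors = [
        sum((a - b) ** 2 for a, b in zip(s, g))
        for s, g in zip(solved_xyz, gps_xyz)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Two cameras: one solved 3 m east of its GPS fix, one 4 m north.
solved = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
gps    = [(3.0, 0.0, 0.0), (10.0, 4.0, 0.0)]
rmse = camera_location_rmse(solved, gps)  # sqrt((9 + 16) / 2), about 3.54 m
```

Because the errors are squared before averaging, a single badly misplaced camera pulls the RMSE up more than several slightly misplaced ones.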
If you have included GCPs on your map, they will also be outlined in the processing report.
DroneDeploy maps that do not use GCPs but capture the same location are automatically aligned with maps from previous days. This section (if it appears) will describe the before and after, along with the required transformation.
Find out more here:
Automatic Map Alignment
For more information reach out to us at [email protected]