
DroneDeploy / mapping tips

I know it's really hard to line up two different pictures in your brain to compare, but for whatever it's worth, here is the "sparse" mesh I came up with using my tool chain. This is a screenshot of a 3D viewer, so there is some oblique perspective in there. I also connected up the points to make a Delaunay mesh. (My tool chain doesn't [yet?] produce a dense mesh ... I figure I'll let the tools that are really good at generating a dense mesh have that feature for themselves.) :p
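For anyone who wants to play along: the Delaunay step is nearly a one-liner with SciPy. A minimal sketch, with random points standing in for real tie-point ground coordinates:

```python
# Triangulate a sparse 2-D point set into a Delaunay mesh.
# Random points stand in for real tie-point x/y coordinates.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((200, 2)) * 100.0   # x/y in metres, say

mesh = Delaunay(points)
# mesh.simplices is an (n_triangles, 3) array of indices into points,
# one row per triangle in the mesh.
print(mesh.simplices.shape)
```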

[Screenshot from 2020-02-02: sparse mesh in a 3D viewer]
@clolsonus

  1. I'll start a support ticket and get you copied.
  2. I will see if I can process again in Metashape, but it may take me a couple of days.
  3. This is partially correct. Panchromatic (grayscale) imagery is used, but a lower-resolution color image is combined for pan-sharpening and for the final point cloud creation and classification. My post-processing does not "color-correct"; it is based solely on exposure correction, more specifically to reacquire information lost to poor exposure at both ends of the spectrum. We see a gain in the number of tie-points, which typically results in 3-5% more points in the final cloud. Scaling images down, as is the default for Pix4D, greatly reduces the resolution and the number of points in the cloud, resulting in a poorer orthorectification.
  4. While it is correct that there are holes in the cloud because of the reduced overlaps, this is a minor cause, as the overlap analysis shows. Only 3-4 images per pixel are required to create tie-points, and the report shows that even at the highest elevations of the site you had 4-6. There are points in the native cloud; we just don't see them in the output data because it was decimated during processing and then again by a gridding method on export.
  5. The more important causes are the point cloud decimation and, as we agree here, the lack of repeatable distinct features. The sheer number of features in this case is a little different from what is sometimes referred to as homogeneous terrain. There are so many distinct objects visible here, just not enough matches from different perspectives to determine what is foreground versus what is background. What the machine runs into with homogeneous subjects is that the subject appears featureless because the details are so small and so repetitive that individual photos cannot even be stitched together. In either case this is exacerbated as flight altitude decreases. Another issue that appears as altitude decreases is surface warping. If GCPs are not used, the relative accuracy of the map will suffer greatly, and it gets rapidly worse as altitude drops and site size increases.
  6. All of this is pretty standard photogrammetry theory and fairly simple geometry and triangulation. Everything we do here is based on triangulation, all the way through our output datasets. For those less technical: an image "pair" would be at least 3 images, and a tie-point most preferably comes from 3 pairs, hence why we would like 9 images per pixel (see the overlap sketch just below).
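As a rough sketch of how point 6 plays out (assuming a standard lawnmower grid over flat terrain, which is my simplification, not anything from the reports above), the overlap settings translate into images per ground point like this:

```python
# Approximate how many images see a given ground point in a standard
# grid mission, from the front/side overlap fractions. Assumes flat
# terrain and a regular "lawnmower" pattern.

def images_per_point(front_overlap: float, side_overlap: float) -> float:
    # Along-track, a point stays in view for 1 / (1 - front_overlap)
    # consecutive photos; across-track, 1 / (1 - side_overlap) flight
    # lines see it. The product is the nominal image count.
    return 1.0 / ((1.0 - front_overlap) * (1.0 - side_overlap))

# 75/65 overlap comfortably clears the ~9 images per pixel preference;
# 60/40 only just clears the 3-4 minimum for tie-points.
for front, side in [(0.75, 0.65), (0.60, 0.40)]:
    print(f"{front:.0%}/{side:.0%} -> "
          f"~{images_per_point(front, side):.1f} images per point")
```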
How does the SRTM fit into all of this?
 
That was one of the reasons that led me down the path of working on my own mapping tool chain. For our project's use case we needed *max* detail. I am constantly being pressured to fly lower, fly lower, fly lower, get more detail! Yet they simultaneously want to cover larger and larger areas, and venture into regions with crazy terrain and tall trees. So when we got some of our early map results back and discovered the detail had been reduced (even in the online map), we were substantially disappointed. Further, when we tried to download the geotiff and discovered it wouldn't let us until we cut the resolution down by a further 8x, we realized it was just a no-go for our project. Pix4D does even worse (in my own experience comparing results with my own data sets).

Our in-house tools serve our own specific use case where we literally draw the original images out in a big pile (after we've carefully computed their placement, warping, and alignment using a process similar to what the big companies use). This gives us a 'perfect' stitch and simultaneously gives us *all* the original max resolution of our images. And as a bonus we can flip around between all the images and perspectives that cover a point of interest ... all live on our stitched ortho map. For us it adds dimension and detail to our maps that a static orthophoto or dense point cloud really can't match.

This doesn't seem to be a use case that any of the expensive tools have considered or covered. And maybe it's not very common to want to hunt through the finest/pickiest details of your image sets, but it is what we need to do for our projects (looking for invasive plants in hard-to-reach forest areas).

Anyway, I agree it is curious that DD could generate a plausible elevation map and plausible ortho mosaic that covers the entire data set, while seemingly not having the underlying point cloud to support that. Perhaps there is a threshold of confidence required for the 3d point cloud and those areas just didn't quite make the cut.

Thanks again for running my data set and sharing the results! This stuff is super interesting and fascinating (at least to me!) :)

Curt.
They're generating the plausible map because the data is there; we just can't see it at the level of detail of the export. If I can run this in Metashape it will probably take 2 days, but I will have the max dense point cloud. Or at least what is max within the limitation of 64 GB of RAM.
 
Since this thread has morphed a bit, does anyone have experience with Carlson Photocapture for processing?
I went through the trial and it worked very well. I would say it's middle of the road: not as much control as Pix4D or Metashape, but more than DroneDeploy. I never got to try the collaboration pieces. Their pricing model could be very attractive for smaller shops, but with the amount of data we push it would cost 50% more than DroneDeploy in our scenario.
 
So how does the "Terrain Awareness" actually work? If you set the altitude for, say, 90' AGL, does it use GPS data to help maintain 90' AGL? I'm curious how "reactive" it is. Next time I'm mapping I may give it a test in an area with steep elevation changes and see how well (as well as I can determine with my old Optical Orbs LOL) it can react to terrain changes (not trees and poles LOL).

Terrain following combines your base flight altitude with a check of Shuttle Radar Topography Mission (SRTM) data, based on GPS position and the barometric altimeter, to hold a specific altitude above ground in flight. For example, I set my minimum flight altitude to 330 feet AGL and the Pixhawk will compare this against what the SRTM data says the terrain height is, commanding a climb or dive to maintain 330 feet AGL. If I set the flight altitude at 90 feet AGL and there is a 150-foot cliff face within the area of flight, the aircraft may or may not impact the cliff face because of the abrupt change in height. The rule of thumb is to set an altitude that easily clears terrain across the entire span of your flight area. After all, technology fails when you need it the most. If you plan on testing it in high terrain, or more specifically in an area with rapid terrain changes, I would also recommend the Teflon-lined, stain-resistant shorts in addition to the Optical Orbs. Not the best conditions.
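Boiled down to pseudocode, the climb/dive decision described above looks something like the sketch below. The function names are made up for illustration, not a real Pixhawk API:

```python
# Terrain following in miniature: hold a constant height above the
# SRTM surface rather than above the home point. Illustrative only.

def target_altitude_msl(terrain_elev_msl_ft: float, agl_setpoint_ft: float) -> float:
    """Commanded altitude (MSL) = SRTM terrain height + desired AGL."""
    return terrain_elev_msl_ft + agl_setpoint_ft

def climb_command_ft(current_alt_msl_ft: float,
                     terrain_elev_msl_ft: float,
                     agl_setpoint_ft: float) -> float:
    """Positive -> climb, negative -> dive (feet)."""
    return target_altitude_msl(terrain_elev_msl_ft, agl_setpoint_ft) - current_alt_msl_ft

# Flying at 1,200 ft MSL with a 330 ft AGL setpoint: if the SRTM says
# the terrain ahead rises to 950 ft MSL, the autopilot must climb 80 ft.
print(climb_command_ft(1200.0, 950.0, 330.0))   # -> 80.0
```

The cliff-face caveat is exactly the hole in this logic: if the terrain sample ahead jumps faster than the aircraft can climb, holding the setpoint is physically impossible, hence the rule of thumb about generous minimum altitudes.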
 
Seems like 200 ft would be plenty above a TA surface, but you know your area and provider better than I do, so kudos for keeping safety the first priority.

I'm old and my aircraft is a bit more expensive than the off the shelf prosumer variety. That, and I hate walking around with soiled pants when the technology fails.
 
While your description points to the right characteristics of TA flight, we need to clarify that the altitude held is a specific value above the SRTM surface, which changes along the route, not a constant above the home point as is usually the case.

330' AGL = 0.5 in/px for me with a 42 MP camera
If I plug in 330' AGL then that is what the aircraft maintains throughout the flight relative to the ground. On the base station I watch the aircraft climb and dive to maintain the programmed flight altitude. It works (for me) in areas without large changes in grade over short distances. In mountainous areas the results will probably vary.
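As a sanity check on that 0.5 in/px figure, the standard GSD formula reproduces it if you assume a full-frame 42 MP sensor (7952 x 5304 px, 35.9 mm wide) behind a 35 mm lens. Those sensor and lens numbers are my assumptions; the post doesn't name the camera:

```python
# Ground sample distance from flight altitude and camera geometry:
# GSD = (sensor width x altitude) / (focal length x image width).

def gsd_in_per_px(altitude_ft: float, sensor_width_mm: float,
                  focal_length_mm: float, image_width_px: int) -> float:
    altitude_mm = altitude_ft * 304.8              # feet -> millimetres
    gsd_mm = sensor_width_mm * altitude_mm / (focal_length_mm * image_width_px)
    return gsd_mm / 25.4                           # millimetres -> inches

# Assumed full-frame 42 MP body with a 35 mm lens at 330 ft AGL.
print(gsd_in_per_px(330, 35.9, 35.0, 7952))        # ~0.51 in/px
```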
 
330' AGL = 0.5 in/px for me with a 42 MP camera
If I plug in 330' AGL then that is what the aircraft maintains throughout the flight relative to the ground. On the base station I watch the aircraft climb and dive to maintain the programmed flight altitude. It works (for me) in areas without large changes in grade over short distances. In mountainous areas the results will probably vary.

It would be an interesting coding challenge to factor in the comfortable rate of climb/descent of the vehicle and 'optimize' the elevation changes to keep the AGL as accurate as possible while simultaneously maintaining safe clearances. Possible bonus points for figuring out the best heading to fly the transects to minimize wasted turning effort and minimize the amount of altitude changing (possibly by flying curvy trajectories that follow the terrain contour lines?). For my fixed-wing airplanes, I haven't added any kind of terrain awareness (yet?) but I do estimate the wind on board and automatically pick a transect heading that minimizes wasted energy/time turning back around at the end of the line.
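A toy version of that optimization, as a sketch: chase the ideal AGL along each transect, rate-limit the altitude changes to what the airframe can comfortably do, and clamp against a hard clearance floor. All numbers and names below are illustrative:

```python
# Rate-limited terrain-following altitude plan along one transect.
# Waypoints are assumed equally spaced; units are feet.

def plan_altitudes(terrain_ft, agl_target_ft=330.0,
                   min_clearance_ft=150.0, max_step_ft=40.0):
    """terrain_ft: terrain elevations (MSL) sampled along the line.
    max_step_ft: largest comfortable altitude change between waypoints."""
    ideal = [t + agl_target_ft for t in terrain_ft]       # perfect AGL
    floor = [t + min_clearance_ft for t in terrain_ft]    # hard safety floor
    plan = [ideal[0]]
    for i in range(1, len(terrain_ft)):
        # Chase the ideal altitude, but rate-limit the change...
        step = max(-max_step_ft, min(max_step_ft, ideal[i] - plan[-1]))
        # ...and never dip below the clearance floor.
        plan.append(max(plan[-1] + step, floor[i]))
    return plan

# Terrain stepping up 150 ft mid-line: the plan ramps toward the new
# ideal altitude over several waypoints instead of jumping at once.
print(plan_altitudes([900, 900, 950, 1050, 1050, 1050]))
# -> [1230.0, 1230.0, 1270.0, 1310.0, 1350.0, 1380.0]
```

A real planner would also want a look-ahead (or a backward smoothing pass), so the aircraft starts climbing before a steep rise instead of reacting at it.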
 
330' AGL = 0.5 in/px for me with a 42 MP camera
If I plug in 330' AGL then that is what the aircraft maintains throughout the flight relative to the ground. On the base station I watch the aircraft climb and dive to maintain the programmed flight altitude. It works (for me) in areas without large changes in grade over short distances. In mountainous areas the results will probably vary.
Well said, and that is quite the camera! For those using Phantoms and other mid-level drones like the Yuneec H520 at 20 MP, we are more likely to fly 200-250 ft AGL. With the smoothing that occurs in the SRTM at very large grade breaks, we have never seen it put the drone in danger. One recommendation when TA flying is to run parallel with the contours so that the drone is not continually rising and dropping. Running with the contours (as best as possible) creates a much smoother experience and more reliable vertical data.
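To put the contour-parallel idea in code (my own sketch, not a feature of any of these apps): estimate the site's average slope direction from a DEM and fly at right angles to it:

```python
# Pick a transect heading parallel to the contours: average the
# terrain gradient over the site (central differences) and return
# a heading perpendicular to the uphill direction.
import math

def contour_heading_deg(dem, cell_size_m):
    """dem: 2-D list of elevations, rows running north -> south.
    Returns a compass heading in degrees (0 = north, 90 = east)."""
    rows, cols = len(dem), len(dem[0])
    gx = gy = 0.0   # eastward and northward gradient components
    n = 0
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx += (dem[r][c + 1] - dem[r][c - 1]) / (2 * cell_size_m)
            gy += (dem[r - 1][c] - dem[r + 1][c]) / (2 * cell_size_m)
            n += 1
    # The gradient points uphill; contours run at right angles to it.
    uphill_deg = math.degrees(math.atan2(gx / n, gy / n))
    return (uphill_deg + 90.0) % 360.0

# Tiny made-up DEM rising steadily to the east: contours run
# north-south, so the suggested heading is 0/180 degrees.
dem = [[10, 20, 30],
       [10, 20, 30],
       [10, 20, 30]]
print(contour_heading_deg(dem, 30.0))   # -> 180.0
```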
 
If you have a strong predominant slope or cliff, it is also advised, for the reasons @R Martin mentioned about transitions, to adjust your flight direction to be parallel with the rising terrain rather than perpendicular. You don't want the bird having to make radical corrections on each pass if possible. Rather, gain or lose altitude as gradually as possible on each pass.

Ha, beat me to it!
Here's an example, but with Litchi. I was able to plan in two separate directions in the same flight, but maintain 225 ft AGL. Running parallel with the 160 ft slope on the northeast side provided a far better map than the other contractor who was flying at a static 300 ft.

Sample Litchi Custom TA Plan
 
Well said, and that is quite the camera! For those using Phantoms and other mid-level drones like the Yuneec H520 at 20 MP, we are more likely to fly 200-250 ft AGL. With the smoothing that occurs in the SRTM at very large grade breaks, we have never seen it put the drone in danger. One recommendation when TA flying is to run parallel with the contours so that the drone is not continually rising and dropping. Running with the contours (as best as possible) creates a much smoother experience and more reliable vertical data.

I hear you. My previous choice limited me to roughly 95-110 feet AGL for somewhat worse accuracy, which prompted us to find a better solution. I also understand that not everyone has the option of deeper pockets, and we fly the best that we can afford to get the job done. If I were going to attempt it with the Inspire I would have to split it up into multiple flights that ran parallel to the terrain feature.
 
It would be an interesting coding challenge to factor in the comfortable rate of climb/descent of the vehicle and 'optimize' the elevation changes to keep the AGL as accurate as possible while simultaneously maintaining safe clearances. Possible bonus points for figuring out the best heading to fly the transects to minimize wasted turning effort and minimize the amount of altitude changing (possibly by flying curvy trajectories that follow the terrain contour lines?). For my fixed-wing airplanes, I haven't added any kind of terrain awareness (yet?) but I do estimate the wind on board and automatically pick a transect heading that minimizes wasted energy/time turning back around at the end of the line.

I fly a fixed-wing VTOL configuration, so roger that on the flight plan and wind. I plan the mission in the office, but the actual flight planning is done on site in the field to take local conditions into account. I use a modified ArduPilot interface for planning and control, so it's really easy to get it right with the aid of a personal weather station. The one drawback to the system is that it takes a lot of room to operate in. My lead-in, lead-out, and lane separation are set to 250 feet. I could do less but don't see the need of stressing the airframe.
 
re: terrain awareness.

Not trying to steal DD's thunder, but if you really want Terrain Awareness, you probably should look into Map Pilot.

Map Pilot has had terrain awareness for some years now. It is a paid app vs. DD's flight app, and iOS only. But it is fully mature and is probably where DD is trying to get to.

  • Map Pilot's SRTM tiles are downloaded ahead of time and do not require a live connection in the field (see the tile-sampling sketch after this list).
  • In addition to SRTM, you can use your own created DTM to provide a very accurate terrain-aware flight if needed.
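For the curious, sampling a pre-downloaded SRTM tile offline is simple, because the .hgt format is just a square grid of big-endian 16-bit integers: SRTM1 tiles are 3601 x 3601 samples, rows ordered north to south, named for the tile's south-west corner (e.g. N38W077.hgt). A minimal sketch; the file path and coordinates are hypothetical:

```python
# Nearest-neighbour elevation lookup inside one SRTM1 .hgt tile.
import math
import struct

SAMPLES = 3601   # SRTM1 (1 arc-second); legacy SRTM3 tiles are 1201

def srtm_elevation_m(hgt_path: str, lat: float, lon: float) -> int:
    lat_sw, lon_sw = math.floor(lat), math.floor(lon)
    # Fractional position inside the tile; row 0 is the northern edge.
    row = round((lat_sw + 1 - lat) * (SAMPLES - 1))
    col = round((lon - lon_sw) * (SAMPLES - 1))
    with open(hgt_path, "rb") as f:
        f.seek(2 * (row * SAMPLES + col))     # 2 bytes per sample
        (elev,) = struct.unpack(">h", f.read(2))
    return elev   # -32768 marks a void in the raw SRTM data

# elev = srtm_elevation_m("tiles/N38W077.hgt", 38.25, -76.60)
```

Bilinear interpolation between the four surrounding samples would be smoother, but nearest-neighbour keeps the sketch short.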

Yep, MME has had terrain awareness for about two years now.
 
There are several software packages that have had it for years, which is why it is exciting that DroneDeploy finally delivered on the community feature request. They are using very accurate surfaces, and the solution will continue to improve as we use it through their machine learning.
 
