
Manual Mapping

R.P..R

For the recent asset management survey I flew yesterday for MRC Development, I decided to do the mapping manually. The process is long, but rewarding. The sUAS that flew the mission was a Yuneec Typhoon H Plus, which has no built-in mapping capability; in short, this drone does not have it all. But it's fun to fly for minor tasks.


For manual mapping, I record video while flying over my grid. After the mission, I convert the video with Video to Image Converter (for Mac), exporting a TIFF frame every 4 seconds, which gives me close to 70% overlap. I then edit the photos in Lightroom, save them as JPEGs when finished, and set coordinates on the photos in GeoSetter. I use ReCap Pro to render the materials. I could have relied on stitch alignment alone, but it is easier to add coordinates up front, and even though there are errors, the results are workable for a non-mapping sUAS.
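For anyone who would rather script the frame-extraction step than use a GUI converter, here is a minimal sketch in Python with OpenCV (not the tool used above). The file name and output naming are placeholders, and the 4-second interval only mirrors the overlap target described.

```python
# Minimal sketch: save one frame every 4 seconds from a video as TIFF.
# Requires OpenCV (pip install opencv-python); "flight.mp4" is a placeholder name.
import cv2

INTERVAL_S = 4.0  # seconds between extracted frames (targets ~70% overlap)

cap = cv2.VideoCapture("flight.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = max(1, int(round(fps * INTERVAL_S)))

index, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frame_{saved:04d}.tif", frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} frames at {INTERVAL_S} s intervals")
```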


 
Do you end up with flight data that provides a GPS location at the time of each image, or do you just fudge it with GeoSetter? I messed around with doing something similar with a RunCam HD mounted nadir on an RC airplane, just flying around randomly for 10 minutes. I wrote a little Python script that correlated the movie with the flight data and could then auto-extract frames at some interval and geotag them. I ended up with a decent map, but you could definitely see rolling-shutter issues in the final fit, not to mention that image quality and resolution aren't in the same league as the Sony a6000 I would normally use. Still, it was an interesting and satisfying exercise.
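The script mentioned above isn't posted in the thread. As a rough illustration of the "extract at an interval and geotag" idea, the sketch below interpolates positions from a simple timestamped flight log (an assumed CSV with t, lat, lon, alt columns) and writes GPS tags by calling the ExifTool command-line program. The file names and log format are invented for the example.

```python
# Sketch only: geotag extracted frames by interpolating a flight log.
# Assumes a CSV log with columns t, lat, lon, alt (seconds, degrees, meters)
# and that the ExifTool command-line program is installed and on the PATH.
import csv
import subprocess
import numpy as np

log_t, log_lat, log_lon, log_alt = [], [], [], []
with open("flightlog.csv") as f:
    for row in csv.DictReader(f):
        log_t.append(float(row["t"]))
        log_lat.append(float(row["lat"]))
        log_lon.append(float(row["lon"]))
        log_alt.append(float(row["alt"]))

def geotag(image_path, t):
    """Interpolate the position at time t and write it into the image's EXIF."""
    lat = float(np.interp(t, log_t, log_lat))
    lon = float(np.interp(t, log_t, log_lon))
    alt = float(np.interp(t, log_t, log_alt))
    subprocess.run([
        "exiftool", "-overwrite_original",
        f"-GPSLatitude={abs(lat)}",  f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}", f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        f"-GPSAltitude={alt}",
        image_path,
    ], check=True)

# Example: frames extracted every 4 seconds, starting at t = 0 on the log's clock.
for i in range(10):
    geotag(f"frame_{i:04d}.jpg", i * 4.0)
```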
 

I just geotag them in GeoSetter. The main fix for blurry converted photos is speed, and it's hard for an RC airplane to slow down to a crawl. The next fix is to run the video-to-JPEG conversion twice, once at a 3-second interval and once at a 4-second interval, so you can replace blurry images from one set with sharper ones from the other. And before I forget, Lightroom is also great for fixing photos.
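If you'd rather score sharpness automatically than compare the two sets by eye, a common trick (not part of the workflow described above) is the variance of the Laplacian: frames with a low score are likely motion-blurred. A minimal OpenCV sketch, with an arbitrary threshold:

```python
# Sketch: rank extracted frames by sharpness using variance of the Laplacian.
# Low variance usually means motion blur or defocus; the threshold is arbitrary.
import glob
import cv2

BLUR_THRESHOLD = 100.0  # tune per camera/scene; purely illustrative

for path in sorted(glob.glob("frame_*.jpg")):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    flag = "BLURRY" if score < BLUR_THRESHOLD else "ok"
    print(f"{path}: {score:.1f} {flag}")
```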

ReCap Pro has geotagging capabilities similar to those of Mission Planner, but I never fully explored them, because it tends to crash my system when I try.

There's another workflow: Mission Planner can do it in bulk by setting points according to your TLog.

As a quick-and-dirty supplement, submitting a weekly "asset management" map is a bonus that clients like, and as you said, manual mapping is fun and personally satisfying.
 
I need to check out what Mission Planner does one of these days. My trigger interval is often < 1 second (our typical use case is flying low and gathering maximum detail), and the timestamp on the image (Sony a6000) has one-second resolution, so correlating trigger events with images can sometimes get slightly interesting.
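One generic way to cope with sub-second triggers and one-second EXIF timestamps is to bucket both lists by whole second and pair images with triggers in capture order within each bucket. This is only an illustrative sketch, not the approach actually used; the data structures are invented.

```python
# Sketch: match sub-second trigger times from a flight log to images whose
# EXIF timestamps have one-second resolution, by ordering within each second.
from collections import defaultdict

def match_triggers_to_images(trigger_times, images_by_second):
    """trigger_times: sorted floats (seconds).
    images_by_second: dict of int(second) -> list of image names in capture order."""
    pending = defaultdict(list)
    for t in trigger_times:
        pending[int(t)].append(t)
    matches = []
    for second, names in sorted(images_by_second.items()):
        for name, t in zip(names, pending.get(second, [])):
            matches.append((name, t))
    return matches

# Hypothetical example: three triggers inside the same second.
triggers = [100.2, 100.6, 100.9, 101.4]
images = {100: ["DSC01.JPG", "DSC02.JPG", "DSC03.JPG"], 101: ["DSC04.JPG"]}
print(match_triggers_to_images(triggers, images))
```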
 

Mine is done purely by converting video to JPEG, because the Yuneec Typhoon H Plus does not have a POI capture feature.
 
I had a wild, crazy idea a few years ago (that I unfortunately acted on). I wrote a Python script that goes through any video frame by frame, finds features, matches features, and then computes the affine transform from each frame to the next. This includes rotation and x, y shift. So if your camera is pointed forward, the rotation you observe from frame to frame correlates to roll. If the camera is pointed straight down, the rotation correlates to yaw (divide the amount of rotation in each video frame by the time step and you get rates).

I also collect IMU data on the flights, so I can do a best-fit correlation with the corresponding channel in my flight log (usually the yaw or roll gyro). If it all works (and it usually does), this gives me an exact mapping between a video frame and the corresponding timestamp in the data log. Then when I extract a frame, I can geotag it, not exactly, but pretty well.

One cool thing is that because this is a feature-matching approach, I can see 'through' a prop, or have part of the nose or landing gear in the video. It adds noise, but the correlation is usually still spot on. There are probably far simpler ways to do this, but I had an itch to scratch, so I did it this goofy way. (I originally went down this path to see if I could use feature matching to smooth a video ... and that actually works pretty well if you feed it clean input with lots of good features.) If the Typhoon captures a data log that isn't terrible to parse, I could do some experiments on my end, but it might be way more messing around than it's worth.
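The script itself isn't posted in the thread; the sketch below only illustrates the core idea in Python with OpenCV: estimate the frame-to-frame rotation from matched features, convert it to a rate, and cross-correlate that against a gyro channel to find the time offset. It assumes a constant frame rate and gyro data resampled to the same rate, and the function names are placeholders.

```python
# Sketch of the frame-to-frame rotation idea: ORB features + partial affine fit,
# then cross-correlate the resulting rotation rate against a gyro channel.
import cv2
import numpy as np

def rotation_rates(video_path):
    """Estimate a per-frame rotation rate (rad/s) from matched ORB features."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    prev_kp, prev_des = None, None
    rates = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if prev_des is not None and des is not None:
            matches = matcher.match(prev_des, des)
            rate = 0.0
            if len(matches) >= 10:
                src = np.float32([prev_kp[m.queryIdx].pt for m in matches])
                dst = np.float32([kp[m.trainIdx].pt for m in matches])
                A, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
                if A is not None:
                    # rotation angle of the 2x3 partial affine, scaled by frame rate
                    rate = np.arctan2(A[1, 0], A[0, 0]) * fps
            rates.append(rate)
        prev_kp, prev_des = kp, des
    cap.release()
    return np.array(rates), fps

def best_offset(video_rates, gyro_rates, fps):
    """Lag (seconds) that best aligns the gyro series with the video-derived series.
    Check the sign convention against your own data."""
    v = video_rates - video_rates.mean()
    g = gyro_rates - gyro_rates.mean()
    corr = np.correlate(g, v, mode="full")
    lag = corr.argmax() - (len(v) - 1)
    return lag / fps
```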
 

I ran into a thread similar to what you are talking about, where operators are building geoprocessing tools with Python, using both script tools and Python toolboxes. It's a more advanced approach than basic markers. Can you guide me through the basics?
 

Pretty much everything I've been writing lately is in Python, so the basic prerequisite for my stuff is to have Python installed. I run Linux 99% of the time, so it's already there on my systems, but if you run Windows or Mac you might look at installing Anaconda Python.

After that, there are a couple of extra Python packages that need to be pip-installed.
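The exact packages aren't named in the thread; assuming OpenCV and NumPy (a guess based on the tools described later), a quick post-install sanity check might look like this:

```python
# Quick check that the (assumed) dependencies are importable.
# Install first with: pip install opencv-python numpy
import cv2
import numpy as np

print("OpenCV", cv2.__version__, "| NumPy", np.__version__)
```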

It's really not that hard to get all the pieces installed; it's just a few steps, a little patience, and a willingness to wrangle some command-line commands once in a while. Also, if the Yuneec doesn't save out any data logs with IMU data, then my whole approach would not really be applicable.

If you are up for some experimentation I'm happy to walk you through the steps.

Thanks,

Curt.
 

Thanks, Curt. The Yuneec sUAS is able to save IMU data, but I will confirm. I'm now fascinated with your workflow, as it sounds like a time saver versus creating markers and geotagging by hand.

Thanks,
-Akoni
 
Here is a really crappy video I posted a while back that shows feature tracking from one frame to the next. Even with half the view obscured by the nose and looking through the prop, there is still enough tracking info we can glean to automatically match the video up with the flight log.


And once you have an accurate time match between video frames and the flight data log, you can start doing silly things like drawing a HUD on the video, or drawing sparse mesh points from stitching the nadir camera onto the tail-camera view, and they should mostly match up. (Different flight, different airplane.)


When the preprocessing work is finished, then I know the lon, lat, alt, and roll, pitch, yaw of the camera for every individual video frame. I can go through and grab frames at whatever interval I want.
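As a rough illustration of that last step (not the actual tool), once the video-to-log time offset is known, a per-frame position can be looked up by interpolating the log at each frame's timestamp. The log layout and numbers below are invented for the example.

```python
# Sketch: given the video-to-log time offset, interpolate a pose for any frame.
# Log arrays are assumed to be sorted by time; values below are made up.
# Note: wrapped angles such as yaw should not be naively interpolated across +/-180 deg.
import numpy as np

def pose_for_frame(frame_index, fps, offset_s, log_t, log_cols):
    """log_cols maps a column name (lat, lon, alt, ...) to an array aligned with log_t."""
    t = frame_index / fps + offset_s  # frame time expressed on the log's clock
    return {name: float(np.interp(t, log_t, vals)) for name, vals in log_cols.items()}

log_t = np.array([0.0, 1.0, 2.0])
cols = {"lat": np.array([45.0000, 45.0001, 45.0002]),
        "lon": np.array([-93.0000, -93.0001, -93.0002]),
        "alt": np.array([100.0, 101.0, 102.0])}
print(pose_for_frame(frame_index=30, fps=30.0, offset_s=0.5, log_t=log_t, log_cols=cols))
```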

All of this work is open source (MIT license) and available in my GitHub repository. It's mostly written in Python (plus OpenCV) and should run on any platform that supports Python 3. (But you will have to get a bit familiar with Python and Git in order to play with the tools yourself.) I'm doing much of this work in the context of university research, so the tools are developed "guts out" style; I haven't bothered trying to hide everything behind a polished UI. For the most part I'm the only user, but I'm happy if other people want to check this stuff out and give me some feedback. Certainly many things could be streamlined and improved given some additional testers and feedback.
 


Instant fan of what you do... especially this work. A+.

 
Thanks for the kind words! What I'm trying to do is create a way to present the images in their fullest possible detail and preserve all the different camera perspectives for any specific feature or point on the map. My use case is hunting needles in a haystack (usually invasive plants). The commercial map-stitching tools are amazing (I've only been able to try a few), but I've been frustrated when I run into their limitations for my specific use cases. So I figured I would just DIY my own solution. How hard could it be? Now I'm 5 years into it ... and beginning to figure a few things out.
 
A lot of interesting things here. I just extracted frames from a video because the batteries weren't enough to get a full circular flight around a structure. What I hadn't thought of was geotagging the photos; that's a topic I'm going to investigate.

Thank you
 
After the mission, I convert the video with Video to Image Converter (for Mac), exporting a TIFF frame every 4 seconds, which gives me close to 70% overlap. I then edit the photos in Lightroom, save them as JPEGs when finished, and set coordinates on the photos in GeoSetter.
For anyone else looking for a similar, but automated workflow, Pix4D and PhotoScan/MetaShape both have the ability to import and extract still images from videos. I've only played with it a couple of times, but it works fairly well.
 
It's not accurate, and you don't have control. From a 9-minute video you will end up with 1,000-plus images (for example, roughly 9 × 60 × 2 ≈ 1,080 frames if you extract at 2 frames per second).
When you convert the video to stills, you want to batch-edit for clarity. Importing, yes, all of these services can do; but geotagging has to be done manually, or with a script that closely matches the telemetry.
 
It's not accurate, and you don't have control.
You can add control once you get your model into Pix4D or PhotoScan, just like any other project with GCPs.

From a 9-minute video you will end up with 1,000-plus images.
During import, you can choose how many frames you want to extract.

If you've got ground control (which would make this easier), then the geotagging isn't really necessary. Having geotags will cut down on processing time, but most software can reconstruct the data without knowing where the images were taken from. If you don't have ground control, then you're not really "mapping," so you could just scale and orient the model at a later step, or introduce photo points from Google Earth and be just as "accurate" as the onboard GPS and matched telemetry.
 
I have tried it in Pix4D several times. This last time I decided to do it by hand. I also want to learn how to use GeoSetter to geotag the images; it doesn't hurt to learn different methods. It's something Pix4D does not allow, and at certain times it may be necessary to have the images with coordinates.

For those interested in doing it by hand, here at least is the extraction process using the most popular video player on all platforms, VLC.
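The embedded demonstration isn't reproduced here. As a sketch of the same idea, VLC's scene video filter can dump frames from the command line; the snippet below drives it from Python. The filter options are standard VLC scene-filter flags, but the file name, output path, and frame ratio are only examples.

```python
# Sketch: drive VLC's "scene" video filter from Python to dump every Nth frame.
# Assumes the vlc binary is on the PATH (it may be "cvlc" on some systems);
# the file name, output path, and ratio are only examples.
import subprocess

subprocess.run([
    "vlc", "flight.mp4",
    "--intf", "dummy",          # run without the GUI
    "--vout=dummy",             # no video window
    "--video-filter=scene",     # enable the snapshot filter
    "--scene-format=png",
    "--scene-ratio=120",        # keep one frame out of every 120 (4 s at 30 fps)
    "--scene-path=./frames",
    "--scene-prefix=frame_",
    "vlc://quit",               # exit when playback finishes
], check=True)
```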

 
