
Open-source Image Stitching and Mapping

clolsonus

I wanted to introduce myself and an open-source mapping tool chain I have been working on. My name is Curt Olson. I've been involved in UAVs and avionics development since the mid-2000s and have been experimenting with image stitching and mapping for about 5 years now. My degree is in computer science, but right now I'm working for the U of MN UAS Lab and pretending to be an aerospace engineer.

In support of a couple different projects that have come through our lab, we have been developing an open-source image stitching/mapping solution. Our tool chain is written entirely in python + opencv. We hope this makes the code more accessible for folks who like to poke under the hood. We also hope it makes the code extensible for people who would like to do something similar but find that no existing solution quite does what they want. As a research lab, we continually bump into the limitations of commercial tools with respect to our projects and use cases, so we really like building solutions on top of open-source software, and we try to keep things simple and accessible when we can.

You can find the entire git repository with some sample data sets here: UASLab/ImageAnalysis

I put together a long, rambling YouTube video that shows the stitched results and details here:

I also want to say briefly why we built our own software and what makes it different from other solutions. One of our current projects is essentially searching for needles in a haystack. We are surveying heavily wooded areas (often with steep, inaccessible terrain) for an invasive vine called oriental bittersweet. Oriental bittersweet will eventually take over whole stands of trees, kill them, and leave a big tangled mess of vines. It can be very difficult to eradicate. We found that in the winter we can actually spot the brightly colored berries, which generally stay attached to the vines well into February. Identifying oriental bittersweet requires highly detailed, low-level imagery. We found that commercial tool chains always give up some resolution to make the ortho-mosaic. We have also found that when identifying small things in the map, having access to all the original full-res images, with all the views and angles, can help immensely in deciding if something is or isn't the thing we are looking for. So the end result of our software is a mapping tool that lets you view the original images (and gives you access to every image that covers your center of view).

This toolset isn't intended to compete directly with tools that produce dense meshes and highly detailed DEMs ... I'll leave that to the experts. We produce a 3d surface model of the area, but it is for the purpose of projecting whole images onto the surface, not building a full 3d model of every pixel in the survey area (again, I'll leave that up to the experts and the tools that already do that really well.) So why would you be interested in this package?
  • It is another (hopefully legit) open-source mapping solution, so it doesn't cost money to download and use.
  • Your data is your data. It is kept on your hard drive and you decide who to share it with. (I generally trust cloud solutions, but we had to sign an agreement giving DroneDeploy access to all our data sets to use for their own purposes in order to get an educational license from them. I think Pix4D had similar terms.)
  • It is written entirely in python + opencv (for anyone who likes python.)
  • It is designed to hunt for needles in your haystacks. The map viewer tool shows the original images as 3d textures stretched [almost] perfectly over the ground surface. This is why it needs to be a local app on your PC; as far as I know the same thing can't be done in a web-based viewer. You are looking at the original images, with no lost resolution. The map viewer can apply different enhancement filters (e.g. level equalization and color equalization) to help the details pop out for a human viewer (or help save poorly exposed images.) In the future it could be possible to connect this up with machine vision algorithms or additional custom-use-case filters. The viewer also makes all the images that cover your center of view available to look at.
  • The software is written in a guts-out research lab style. (In my opinion that's a feature) :) This makes it slightly harder for casual users than polished commercial tools, but it also lets you see the process and tune the processing pipeline as you go to achieve better results.
If you would like to try a live demo (Windows), you can download the map viewer here: https://github.com/UASLab/ImageAnalysis/releases/download/v20190215/7a-explore.zip Just extract the zip archive anywhere and run the 7a-explore.exe application inside. Before you run the app, download and unzip a dataset from here: UASLab/ImageAnalysis

The data sets include all the full original imagery (so they are huge!), but this is exactly the point: the visualizer lets you hunt through the full original image set (placed, stretched, and aligned with each other) with no lost detail.

If you are someone who wants to try processing a set of your own images, you probably should contact me. Currently the tool chain expects a CSV file with one line per image that gives the image name, the lat, lon, alt, as well as the roll, pitch, yaw of the aircraft at the time the image was taken. We are set up to do this easily with our own in-house autopilot and camera system, but we might need to build a small bit of glue code to create this file for images collected on other systems.

I'm posting here because I'd love to get some feedback, comments, questions, or maybe just confused looks? If there are a few brave souls interested in trying out the software, that would also be cool. I'd love to connect with other people interested in the nuts and bolts of image stitching tool chains. I really haven't found any places where people are talking shop, so to speak. Most people are either so far out ahead of me that I can't understand what they are talking about, or they are doing this commercially and need to protect their own interests. I just wanted to say hi (so "Hi!") and say here's a thing I've been working on which is useful for us, so it might be useful for someone else too.

Thanks,

Curt (U of MN UAS Lab)
 
Curt, it is true that I enjoyed all 13+ minutes. :)

I would like to pass on your information to our local Noxious Weed Control Board staff. Your software and approach look like an excellent fit for what they are chartered to do, if they can find the funding for flights over areas with reported infestations, and for technical support in the office.

I've downloaded the linked files, and hope to find the time to process a collection of very accurate, georeferenced images processed with a KlauPPK product, data from a local CORS, and Pix4D Mapper. If you wish, feel free to PM me through this forum site, although I am focused more on applying than developing software.

The terrain in your video appears to be fairly flat. Thanks to receding glaciers, our local properties may be somewhat flat, hilly, or a mix. Many properties have areas dense with tall evergreen trees and bushes. With time I can imagine other use-cases emerging for where your software is a good fit. For example, forest health/damage assessment at the individual treetop level, and other applications where examining the details of an object is critical to gaining or improving information.

Thanks for sharing!
 
I have lots of image sets, albeit .jpgs. What do I need to do to try to process them with your kit?

Hi Dave,

Outside of needing better documentation (something that's always true), there are two 'hard' things you need to get started.

1. You need an initial estimate of your lens calibration. This is the part where you take 50 or 100 different pictures of a checkerboard pattern and then run a little script that pops out the calibration (see the first sketch after this list). (The optimizer can later refine this based on your actual images, but you need something sort of close to start with.)

2. For your image set you need a CSV file, with one line per image, in the following format (a glue-code sketch for writing it follows the list):

File Name, Lat (decimal degrees), Lon (decimal degrees), Alt (meters MSL), Roll (decimal degrees), Pitch (decimal degrees), Yaw (decimal degrees)

Pitch, roll, and yaw are the aircraft orientation at the time the image was taken. (Then you also need to know the approximate camera mounting offset angle.) I'm making it sound more difficult than it is. With a quadcopter you probably have your camera on a gimbal, always looking exactly straight down.

I have my own system all set up for generating this file from my flight log data and camera trigger events, but everyone else here would be using a different system, so a little experimentation may be needed to figure out the easiest way to get this done.

3. One more thing that could be harder for casual computer users (who want to process their own image sets) is that you need to follow the README-InstallDev instructions (found at the github repo) to make sure you've installed all the prerequisite packages so you can run the software.
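
For the curious, step 1 boils down to a handful of standard OpenCV calls. A minimal sketch, assuming the checkerboard photos sit in a "calib/" directory and the board has 9x6 inner corners (both of those are my assumptions; the repo ships its own script):

import glob
import cv2
import numpy as np

board = (9, 6)  # inner corners per checkerboard row/column (assumption)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
image_size = None
for name in glob.glob("calib/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# K (3x3 camera matrix) and dist (distortion coefficients) are the
# "something sort of close" that the optimizer later refines.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)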
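
And step 2 is plain CSV writing once you have the pose data. A hypothetical glue-code sketch (the pose values and output file name here are made up for illustration; in practice the numbers come from your flight log and camera trigger events):

import csv

records = [
    # (file name, lat deg, lon deg, alt m MSL, roll deg, pitch deg, yaw deg)
    ("IMG_0001.JPG", 44.9778, -93.2650, 312.4, 1.2, -0.8, 271.5),
    ("IMG_0002.JPG", 44.9781, -93.2648, 312.9, 0.6, -1.1, 272.0),
]

with open("image-metadata.csv", "w", newline="") as f:
    csv.writer(f).writerows(records)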

Much of this can be streamlined going forward, but I'm just stepping out of my development cave right now. It would be helpful to find a few brave guinea pigs who won't get frustrated at the first thing that breaks and are willing to try a few things and work with me. This also helps me figure out what I can do better and what I can do to make things easier for everyone else.

Thanks!

Curt.
 

Hi John,

Thanks for the kind words. I'm working with the noxious weed folks in Minnesota, so this is kind of my primary focus right now. I would call the terrain in the video "hilly" by Minnesota standards. I'm sure that is closer to 'flat' for anyone from a state with mountainous regions. :) I have a couple other data sets in areas that do have locally steep terrain (Winona, MN), but the flights were over private land, so I'm not sure at this point if I can make those data sets publicly available. I had some camera triggering issues in those flights that led to poor overlap, so the data sets themselves don't stitch super well (or at least there are areas with some issues ... Pix4D totally puked on one of them, DroneDeploy did OK, my software also struggled and I had to do some coaxing to get good results out.) Anyway, the big lesson I learned from those flights is that 70% side- and end-lap is pretty important for the stitching process ... more so when you fly over steep wooded terrain, where the system can struggle to find good matching features between images.

There is a group at the UMN ag school looking at some specific tree diseases, so I'm hoping to be able to do some demo flights for them this spring or summer. I missed them last year because I was busy doing so much other flying. We have another invasive project (spotted wing drosophila aka fruit fly) where we are flying airborne insect traps around dusk at different altitudes. Fun project, but no mapping involved there ... it's more like fishing where some evenings you catch some things and some evenings you come up totally empty.

If you have a small or medium-size data set you could share, I could look at processing it here and try to figure out what needs to be accounted for so it's easy for you to run the same data sets yourself through my software. We could PM about that if you are interested.

Thanks again!

Curt.
 
What OS does your system run on? How large a download is the install, and how long does it take to process a set of images, say 100 photos, on a compatible system? Sounds very interesting!
 
Thanks Curt, I will PM you later this weekend about sending a couple of datasets through DropBox. I'll limit the number of images per set. One will be from an Inspire 2 with the KlauPPK kit. The other will be from a Parrot Anafi with remarkable satellite lock that day. Both sets will include an almost-worst-case example (slopes with dense, tall trees; missions at 280 ft AGL; snow patches; winter sun). Looking forward to some exploring and providing user feedback. Meanwhile I will install per the GitHub info. I'm using a Lenovo P70 running Windows 10.
 
What OS does your system run on? How large a download is the install, and how long does it take to process a set of images, say 100 photos, on a compatible system? Sounds very interesting!

I do all my development on Linux, but the code is 100% python so theoretically it will just work on any platform. (That theory is not well tested, but I've tried to do everything the python way to keep the code as portable as possible.)

The python source code .zip file is a 5.7 MB download. That doesn't include installing python itself, opencv, or any of the other prerequisites.

Time to process is a hard number to nail down. Depending on image resolution, some of the processing choices you make, and your computer hardware, I think 100 images could be done in an hour or two without needing a bitcoin-mining-level computer. The largest dataset I've processed so far is about 2400 (24 Mpixel) images. I didn't time it precisely because I was doing several things at once on my computer, but it probably took more than a day and less than two. Right now, with large image sets, the most time-consuming step is matching each image against its m closest neighbors (sketched below).
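
The reason that step dominates is the pair count: matching every image against every other image is O(n^2), so it pays to only compare each image against its nearest neighbors. A sketch of that kind of preselection using the geotag positions (the KDTree approach here is my own illustration, not necessarily how ImageAnalysis implements it):

import numpy as np
from scipy.spatial import cKDTree

n, m = 2400, 8
positions = np.random.rand(n, 2) * 1000.0  # stand-in for image positions (meters)

tree = cKDTree(positions)
_, idx = tree.query(positions, k=m + 1)  # k+1 because each image finds itself

pairs = {tuple(sorted((i, j))) for i, row in enumerate(idx) for j in row[1:]}
print(len(pairs), "candidate pairs instead of", n * (n - 1) // 2)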
 
John, I'd love to have a crack at one or two of your data sets when you get a chance to package them up and share them. Thanks!
 
Hi Curt - I am really interested in this and would like to know more and how to get hold of it, the ability to look ...
Hi Curt, this is just what I am looking for. How can I get hold of the app?
 

What I have written is not really an 'app' in the conventional sense. It is a collection of python scripts that run the steps of feature detection, feature matching, optimization (aka sparse bundle adjustment), optional outlier detection/removal, generating the visual result, and then displaying the final result. The map viewer is an 'app' that can be shared along with the data sets that are created. (So the hard work is in the stitching process, but sharing is relatively easy, except for the large data set sizes.)
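
To give a flavor of the first two steps, here is a bare-bones detection/matching sketch in plain OpenCV (the detector choice and ratio-test threshold are my assumptions; the actual scripts are more involved, with outlier rejection and the optimization on top):

import cv2

img1 = cv2.imread("IMG_0001.JPG", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("IMG_0002.JPG", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test discards ambiguous matches before optimization.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "good matches out of", len(matches))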

You can grab a copy of the entire project here: UASLab/ImageAnalysis

If you poke around the source code you'll see a lot of extra code that I've experimented with (not every idea is a winner, but sometimes there is enough of a seed there to keep it floating around just in case.) You'll also see a script for generating the camera/lens calibration from a set of images (or a movie) of the classic checkerboard pattern. There is code to frame-grab movies and geotag the frames so you can stitch a static map together from an action cam movie (a rough sketch of the frame-grab step is below.) There is also some code there to overlay a hud on top of an action cam movie. There is a bit of commonality in all these things ... from map stitching to aligning flight data with video frames to augmented reality.
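
The frame-grab part is conceptually simple; a hedged sketch (the one-frame-per-second interval and file naming are assumptions, and the real geotagging step correlates frame timestamps with the flight log):

import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("flight.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
step = max(1, int(fps))  # roughly one frame per second

i = n = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % step == 0:
        # i / fps = seconds into the movie, for correlating with the flight log
        cv2.imwrite("frames/frame_%05d.jpg" % n, frame)
        n += 1
    i += 1
cap.release()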

I do mostly fixed-wing flying, so here's an example of some of the augmented reality stuff I've done (it also shows our in-house autopilot hardware and software in action.)

[attached screenshot]
Curt.
 
3. One more thing that could be harder for casual computer users (who want to process their own image sets) is that you need to follow the README-InstallDev instructions (found at the github repo) to make sure you've installed all the prerequisite packages so you can run the software.

Hi Curt,

Yeah, I'm fairly green at working with toolkits in the terminal environment. I did successfully set up everything needed for running ODM but I'll admit it wasn't fun. I also have the Anaconda Python dist. installed with both py 2.7 and 3.5 environments but looking at your README, that is just the beginning.

Currently, I use drone-based photogrammetry for land use planning and some material volume audits. The GeoTIFFs that I get out of MME (MapsMadeEasy) are very close to, if not just as high resolution as, the images used to generate them. So I'm not really missing any detail with this process. (sample below)

I am still interested in what you have going, and I hope you pop in from time to time with your progress. The concept of what you are doing is valid, and for what you are using it for it is perfect. I suspect if you could then get the computer to look, on its own, for instances of what you tell it to look for, you would have a commercial product, not that that is what you are after, just sayin'. In medicine, I believe it is being shown that computers are far better than humans at finding patterns and properties in imagery. Probably astronomy too. So it only makes sense that a program that can search for and find certain plants, footprints, dying trees, etc., would be very valuable in many areas. I have no idea if this is part of your road map, but it seems like it might be.

Dave
 

Attachments
  • compare.JPG (492.2 KB)
I have been playing around with computing a "false" NDVI index (false because it's from an RGB camera) and displaying it in real time using the OpenGL shader language (a rough offline equivalent is sketched below the screenshot.) Trees cast an eerie green glow on a snow-covered field in this example. The interesting thing here is that GLSL enables tuning and changing the vegetation index without touching the original images. It would be possible to create custom indices to bring out some feature of interest in the imagery (maybe Outback Joe was wearing a purple scarf when he disappeared.) It should be possible to select and change indices, or go back and forth with the original imagery, all in real time in the OpenGL map visualizer app. Reminder: my use case is hunting needles (aka invasive plant species.)
[attached screenshot]
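
For anyone who wants to experiment offline, the same kind of index can be computed with numpy. The (G - R) / (G + R) form below is one common RGB vegetation index and an assumption about the exact formula I use; the GLSL version just does the equivalent math per pixel on the GPU:

import cv2
import numpy as np

bgr = cv2.imread("IMG_0001.JPG").astype(np.float32)
b, g, r = cv2.split(bgr)

ndvi = (g - r) / (g + r + 1e-6)  # roughly -1 .. 1
vis = ((ndvi + 1.0) * 127.5).astype(np.uint8)
cv2.imwrite("false_ndvi.png", cv2.applyColorMap(vis, cv2.COLORMAP_JET))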
 
We went out Tuesday to fly our 1/2 scale X-56 project, and in the down time while I was waiting for the aircraft assembly elves to do their magic, I put our Phantom 4 in the air and mapped as much of our flying field as I could in one flight (at 130' AGL.) We collected 531 images. I ran them through my open-source fitting/stitching tool chain. Here are a couple screen grabs from the visualizer. The flight was near solar noon, so you can see the shadow of the Phantom and the surrounding bright spot in just about every picture ... the fit came out really nice though, so I thought I'd share ....

The thing (I think) our system does differently from most other similar software packages is that the visualizer (the end result) draws the original images in their entirety (using OpenGL). As you zoom in, you know you are getting the full detail of the original images because you are seeing the original images (just rotated, stretched, and nudged so they fit [almost] perfectly together.) And because all the original images are there to look at, you can cycle between all the different views that cover a particular point of interest. Our ultimate use case is finding invasive plants, so we needed a tool chain that was optimized for finding needles in haystacks, and open and shareable among our team without having to buy more seats.

[three screenshots from the visualizer]
 
I thought I'd share one more thing about our custom (python) mapping and visualization pipeline. (The system is all open-source, free, and available if anyone is interested in experimenting with it ....)

Our map visualizer is written using an OpenGL scene graph engine (i.e. a 3d graphics engine.) Because the map visualizer draws the original imagery in real time, we can also apply instant visual effects to the imagery in real time, at load time, or both. Some things I like to do include adaptive histogram equalization of the V channel in HSV color space. This adjusts the light/dark/contrast across the full visual range and really helps the details of the images pop out. I have also started applying vignette correction (if needed.) And finally, what I want to show here is that we have the ability to write OpenGL shader language filters to enhance aspects of the imagery in real time. Shaders give you access to the full power of your 3d graphics card to do fancy things (or in my case simplistic things.) What I've done is create a simple code snippet that scans the entire display pixel by pixel as it is rendered to the screen and enhances the 'red' areas while de-emphasizing everything else (an offline sketch follows the screenshots.) For our oriental bittersweet project, the vines hold onto their reddish berries throughout the winter and they are clearly visible, especially set against a snowy background. Here are a couple pictures showing the natural (equalized) image and then the filtered/enhanced view that highlights the areas of oriental bittersweet:

[four screenshots: the natural (equalized) views and the red-enhanced views]
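
Here is an offline sketch of those two effects with plain OpenCV/numpy (the viewer does the red pass in a GLSL shader; the thresholds and blend factors below are illustrative assumptions):

import cv2
import numpy as np

bgr = cv2.imread("IMG_0001.JPG")

# Adaptive histogram equalization (CLAHE) on the V channel in HSV space.
h, s, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq = cv2.cvtColor(cv2.merge((h, s, clahe.apply(v))), cv2.COLOR_HSV2BGR)

# Crude red emphasis: brighten reddish pixels, dim everything else.
b, g, r = cv2.split(eq.astype(np.float32))
mask = ((r - (g + b) / 2.0) > 30).astype(np.float32)
out = eq.astype(np.float32) * (0.4 + 0.6 * mask[..., None])
cv2.imwrite("red_enhanced.png", out.clip(0, 255).astype(np.uint8))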
 
We are working on several projects your software might be of use for: change detection of hard surfaces, and other projects with true NDVI from distinct-band multispectral sensors. However, I am not clear whether your solution will help us; if so, we would like to test your software. Perhaps we can chat offline.

Stephen M.
 
Hi Stephen,

I'm out of town this weekend, but I would definitely be happy to chat and brainstorm a bit to see if anything makes sense.

Curt.
 
Hi there guys, sorry this is my first post, so this may not be the best location for this question, but it seemed appropriate. Can someone tell me what kind of software a video like this would be made with? I'm looking to add buildings to empty hillsides in a video flyover with my Mavic 2. Thank you guys!!
 
That video looks 100% computer graphics (look closely at the trees.) It seems pretty well done. It wouldn't surprise me if they used some fancy 3d modeling tool like Creator/OpenFlight ... or maybe there's something newer and more popular these days. They could image an area and generate a base 3d model with one of the popular commercial stitching tools, then import that and add their own house and tree models, cars, people, road furniture, and other decorations on top.

I opened up that video in its own tab and YouTube suggested I watch this video next:
(From Google Maps to 3d maps in Photoshop.) No idea whether there is a fancy algorithmic connection or YouTube just randomly picked it for no particular reason. (I haven't watched the Photoshop video yet, so no idea ...)

There are fancy Hollywood tricks for blending 3d models mostly seamlessly with video, like this:

I have done some far less fancy blending of 3d models and video like this (drawing sun/moon markers and drawing my flight path and hud in the video). In my work I use the result as an engineering tool, so I'm actually interested in the small discrepancies between the video and the virtual overlay (they show how well or poorly my autopilot is estimating its position and orientation.) So I'm not trying to hide the mismatches ... I'm trying to tune our attitude estimation so the mismatches get smaller and smaller. Here is an autoland at dusk + 30 minutes:

I'm guessing the sort of stuff in your video is doable, but probably with a fair bit of learning curve if you haven't done 3d modeling or computer graphics before. (I'm terrible at 3d modeling ... about as terrible as if you hand a crayon to an infant ... so most things I do are generated by computer code/scripts and not by fancy software and artistry.)

I don't know if any of that helps, but my guess is they grabbed base imagery from Google Earth or similar and never actually flew their own aircraft or captured any real footage.
 
