
Open-source Image Stitching and Mapping

clolsonus

I wanted to introduce myself and an open-source mapping tool chain I have been working on. My name is Curt Olson. I've been involved in UAVs and avionics development since the mid-2000s and have been experimenting with image stitching and mapping for about 5 years now. My degree is in computer science, but right now I'm working for the U of MN UAS Lab and pretending to be an aerospace engineer.

In support of a couple different projects that have come through our lab, we have been developing an open-source image stitching/mapping solution. Our tool chain is written entirely in Python + OpenCV. We hope this makes the code more accessible for folks who like to poke under the hood. We also hope it makes the code extensible for people who would like to do something similar but find that no existing solution quite does what they want. As a research lab, we continually bump into the limitations of commercial tools with respect to our projects and use cases, so we really like building solutions on top of open-source software, and we try to keep things simple and accessible when we can.

You can find the entire git repository with some sample data sets here: UASLab/ImageAnalysis

I put together a long, rambling YouTube video that shows the stitched results and details here:

I also want to say briefly why we built our own software and what makes it different from other solutions. One of our current projects is essentially searching for needles in a haystack. We are surveying heavily wooded areas (often with steep, inaccessible terrain) for an invasive vine called oriental bittersweet. Oriental bittersweet will eventually take over whole stands of trees, kill them, and leave a big tangled mess of vines. It can be very difficult to eradicate. We found that in the winter we can actually spot the brightly colored berries, which generally stay attached to the vines well into February. Identifying oriental bittersweet requires highly detailed, low-level imagery. We found that commercial tool chains always give up some resolution to make the ortho-mosaic. We have also found that when identifying small things in the map, having access to all the original full-res images, with all the views and angles, can help immensely in deciding whether something is or isn't the thing we are looking for. So the end result of our software is a mapping tool that lets you view the original images (and gives you access to every image that covers your center of view).

This toolset isn't intended to compete directly with tools that produce dense meshes and highly detailed DEMs ... I'll leave that to the experts. We produce a 3d surface model of the area, but it is for the purpose of projecting whole images onto the surface, not building a full 3d model of every pixel in the survey area (again, I'll leave that up to the experts and the tools that already do that really well). So why would you be interested in this package?
  • It is another (hopefully legit) open-source mapping solution, so it doesn't cost money to download and use.
  • Your data is your data. It is kept on your hard drive and you decide who to share it with. (I generally trust cloud solutions, but we had to sign an agreement giving DroneDeploy access to all our data sets to use for their own purposes in order to get an educational license from them. I think Pix4D had similar terms.)
  • It is written entirely in Python + OpenCV (for anyone who likes Python.)
  • It is designed to hunt for needles in your haystacks. The map viewer tool shows the original images as 3d textures stretched [almost] perfectly over the ground surface. This is why it needs to be a local app on your PC; as far as I know the same thing can't be done in a web-based viewer. You are looking at the original images with no lost resolution. The map viewer can apply different enhancement filters (e.g. level equalization and color equalization) to help the details pop out for a human viewer (or help save poorly exposed images). In the future it could be possible to connect this up with machine vision algorithms or additional custom-use-case filters. The viewer also makes all the images that cover your center of view available to look at.
  • The software is written in a guts-out research lab style. (In my opinion that's a feature.) :) This makes it slightly harder for casual users than polished commercial tools, but it also lets you see the process and tune the processing pipeline as you go to achieve better results.
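For anyone curious what a "level equalization" style filter actually does, here is a generic histogram-equalization sketch in numpy. This is an illustration of the standard technique, not the viewer's actual GLSL filter code:

```python
import numpy as np

def equalize_levels(channel):
    """Histogram-equalize one uint8 image channel (generic 'level
    equalization'; the map viewer's real filters may differ)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Build a lookup table that flattens the cumulative histogram,
    # spreading the used intensity levels across the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return lut.clip(0, 255).astype(np.uint8)[channel]
```

Applied per-channel (or to the luminance channel only), this is the kind of operation that makes dim berries and shadows "pop" without touching the original files on disk.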
If you would like to try a live demo (Windows), you can download the map viewer here: https://github.com/UASLab/ImageAnalysis/releases/download/v20190215/7a-explore.zip Just extract the zip archive anywhere and run the 7a-explore.exe application inside. Before you run the app, download and unzip a dataset from here: UASLab/ImageAnalysis

The data sets include all the full original imagery (so they are huge!), but that is exactly the point: the visualizer lets you hunt through the full original image set (placed, stretched, and aligned with each other) with no lost detail.

If you are someone who wants to try processing a set of your own images, you probably should contact me. Currently the tool chain expects a CSV file with one line per image that gives the image name, the lon, lat, alt, as well as the roll, pitch, yaw of the aircraft at the time the image was taken. We are set up to do this easily with our own in-house autopilot and camera system, but we might need to build a small bit of glue code to create this file for images collected on other systems.

I'm posting here because I'd love to get some feedback, comments, questions, or maybe just confused looks? If there are a few brave souls interested in trying out the software, that would also be cool. I'd love to connect with other people interested in the nuts and bolts of image stitching tool chains. I really haven't found any places where people are talking shop, so to speak. Most people are either so far out ahead of me I can't understand what they are talking about, or they are doing this commercially and need to protect their own interests. I just wanted to say hi (so "Hi!") and say here's a thing I've been working on that is useful for us, so it might be useful for someone else too.

Thanks,

Curt (U of MN UAS Lab)
 

John Githens

Curt, it is true that I enjoyed all 13+ minutes. :)

I would like to pass on your information to our local Noxious Weed Control Board staff. Your software and approach look like an excellent fit for what they are chartered to do, if they can find the funding for flights over areas with reported infestations, and for technical support in the office.

I've downloaded the linked files, and hope to find the time to process a collection of very accurate, georeferenced images processed with a KlauPPK product, data from a local CORS, and Pix4D Mapper. If you wish, feel free to PM me through this forum site, although I am focused more on applying than developing software.

The terrain in your video appears to be fairly flat. Thanks to receding glaciers, our local properties may be somewhat flat, hilly, or a mix. Many properties have areas dense with tall evergreen trees and bushes. With time I can imagine other use cases emerging where your software is a good fit: for example, forest health/damage assessment at the individual treetop level, and other applications where examining the details of an object is critical to gaining or improving information.

Thanks for sharing!
 

clolsonus

I have lots of image sets, albeit .jpgs. What do I need to do to try and process them with your kit?
Hi Dave,

Outside of needing better documentation (something that's always true), there are two 'hard' things you need to get started.

1. You need an initial estimate of your lens calibration. This is the part where you take 50 or 100 different pictures of a checkerboard pattern and then run a little script that pops out the calibration. (The optimizer can later refine this based on your actual images, but you need something sort of close to start with.)

2. For your image set you need a CSV file with one line per image in the following format:

File Name, Lat (decimal degrees), Lon (decimal degrees), Alt (meters MSL), Roll (decimal degrees), Pitch (decimal degrees), Yaw (decimal degrees)

Pitch, roll, and yaw are the aircraft orientation at the time the image was taken. (Then you also need to know the approximate camera mounting offset angle.) I'm making it sound more difficult than it is. With a quad copter you probably have your camera on a gimbal always looking exactly straight down.

I have my own system all set up for generating this file from my flight log data and camera trigger events, but everyone else here would be using a different system, so there may need to be a little experimentation to figure out the easiest way to get this done.
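For anyone wondering what that glue code might look like, here is a minimal hypothetical sketch. The flight-log records and field names are made up; only the output column order follows the format described above:

```python
import csv

# Hypothetical flight-log records; the field names are illustrative only.
log = [
    {"image": "IMG_0001.JPG", "lat": 44.9778, "lon": -93.2650, "alt_m": 312.4,
     "roll": 1.2, "pitch": -2.8, "yaw": 187.5},
    {"image": "IMG_0002.JPG", "lat": 44.9781, "lon": -93.2652, "alt_m": 312.1,
     "roll": 0.4, "pitch": -3.1, "yaw": 188.0},
]

def write_image_csv(records, path):
    """One line per image: name, lat, lon, alt (m MSL), roll, pitch, yaw (deg)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for r in records:
            writer.writerow([r["image"], r["lat"], r["lon"], r["alt_m"],
                             r["roll"], r["pitch"], r["yaw"]])

write_image_csv(log, "images.csv")
```

The real work is pulling the attitude values out of whatever log format a given autopilot or drone produces; once you have them, the CSV itself is trivial.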

3. One more thing that could be harder for casual computer users (who want to process their own image sets) is that you need to follow the README-InstallDev instructions (found at the github repo) to make sure you've installed all the prerequisite packages so you can run the software.

Much of this can be streamlined going forward, but I'm just stepping out of my development cave right now. It would be helpful to find a few brave guinea pigs who won't get frustrated at the first thing that breaks and are willing to try a few things and work with me. This also helps me figure out what I can do better and what I can do to make things easier for everyone else.

Thanks!

Curt.
 

clolsonus

Hi John,

Thanks for the kind words. I'm working with the noxious weed folks in Minnesota, so this is kind of my primary focus right now. I would call the terrain in the video "hilly" by Minnesota standards. I'm sure that is closer to 'flat' for anyone from a state with mountainous regions. :) I have a couple other data sets in areas that do have locally steep terrain (Winona, MN), but the flights were over private land, so I'm not sure at this point if I can make those data sets publicly available. I had some camera triggering issues in those flights that led to poor overlap, so the data sets themselves don't stitch super well (or at least there are areas with some issues ... Pix4D totally puked on one of them, DroneDeploy did ok, and my software also struggled; I had to do some coaxing to get good results out). Anyway, the big lesson I learned from those flights is that 70% side- and end-lap is pretty important for the stitching process ... more so when you fly over steep wooded terrain where the system can struggle to find good matching features between images.

There is a group at the UMN ag school looking at some specific tree diseases, so I'm hoping to be able to do some demo flights for them this spring or summer. I missed them last year because I was busy doing so much other flying. We have another invasive project (spotted wing drosophila, aka fruit fly) where we are flying airborne insect traps around dusk at different altitudes. Fun project, but no mapping involved there ... it's more like fishing, where some evenings you catch something and some evenings you come up totally empty.

If you had a small or medium-sized data set you could share, I could look at processing it here and try to figure out what needs to be accounted for to make it easy for you to run the same data sets yourself through my software. We could PM about that if you were interested.

Thanks again!

Curt.
 

rvrrat14

What OS does your system run on? How large a download is the install, and how long does it take to process a set of images, say 100 photos, on a compatible system? Sounds very interesting!
 

John Githens

Thanks Curt, I will PM you later this weekend about sending a couple of datasets through Dropbox. I'll limit the number of images per set. One will be from an Inspire 2 with the KlauPPK kit. The other will be from a Parrot Anafi with remarkable satellite lock that day. Both sets will include an almost-worst-case example (slopes with dense, tall trees; missions at 280 ft AGL; snow patches; winter sun). Looking forward to some exploring and providing user feedback. Meanwhile I will download and install per the GitHub instructions. I'm using a Lenovo P70 running Windows 10.
 

clolsonus

What OS does your system run on? How large a download is the install, and how long does it take to process a set of images, say 100 photos, on a compatible system? Sounds very interesting!
I do all my development on Linux, but the code is 100% Python, so theoretically it will just work on any platform. (That theory is not well tested, but I've tried to do everything the Python way to keep the code as portable as possible.)

The Python source code .zip file is a 5.7 MB download. That doesn't include installing Python itself, OpenCV, or any of the other prerequisites.

Time to process is a hard number to nail down. Depending on image resolution, some of the processing choices you can make, and your computer hardware, I think that could be done in an hour or two without needing a bitcoin-mining-level computer. The largest dataset I've processed so far is about 2400 (24-megapixel) images. I didn't time it precisely because I was doing several things at once on my computer, but it probably took more than a day and less than two days. Right now, with large image sets, the most time-consuming step is the n->m_closest image matching step.
 

clolsonus

Member
Joined
Feb 22, 2019
Messages
10
Reaction score
2
Age
50
John, I'd love to have a crack at one or two of your data sets when you get a chance to package them up and share them. Thanks!
 

Hmgmi5

New Member
Joined
Feb 23, 2019
Messages
1
Reaction score
0
Age
49
Hi Curt - I am really interested in this and would like to know more and how to get hold of it.
Hi Curt, this is just what I am looking for. How can I get hold of the app?
 

clolsonus

Member
Joined
Feb 22, 2019
Messages
10
Reaction score
2
Age
50
What I have written is not really an 'app' in the conventional sense. It is a collection of Python scripts that run the steps of feature detection, feature matching, optimization (aka sparse bundle adjustment), optional outlier detection/removal, generating the visual result, and then displaying the final result. The map viewer is an 'app' that can be shared along with the data sets that are created. (So the hard work is in the stitching process, but sharing is relatively easy, aside from the data set sizes.)

You can grab a copy of the entire project here: UASLab/ImageAnalysis

If you poke around the source code you'll see a lot of extra code that I've experimented with (not every idea is a winner, but sometimes there is enough of a seed there to keep it floating around just in case). You'll also see a script for generating the camera/lens calibration from a set of images (or a movie) of the classic checkerboard pattern. There is code to frame-grab movies and geotag the frames so you can stitch a static map together from an action cam movie. There is also some code to overlay a HUD on top of an action cam movie. There is a bit of commonality in all these things ... from map stitching to aligning flight data with video frames to augmented reality.

I do mostly fixed-wing flying, so here's an example of some of the augmented reality stuff I've done (it also shows our in-house autopilot hardware and software in action.)

Curt.
 

Dave Pitman

3. One more thing that could be harder for casual computer users (who want to process their own image sets) is that you need to follow the README-InstallDev instructions (found at the github repo) to make sure you've installed all the prerequisite packages so you can run the software.
Hi Curt,

Yeah, I'm fairly green at working with toolkits in the terminal environment. I did successfully set up everything needed for running ODM, but I'll admit it wasn't fun. I also have the Anaconda Python dist. installed with both py 2.7 and 3.5 environments, but looking at your README, that is just the beginning.

Currently, I use drone-based photogrammetry for land use planning and some material volume audits. The geo-tiffs that I get out of MME (MapsMadeEasy) are very close to, if not just as high resolution as, the images used to generate them. So I'm not really missing any detail with this process. (sample below)

I am still interested in what you have going, and I hope you pop in from time to time with your progress. The concept of what you are doing is valid, and for what you are using it for it is perfect. I suspect if you could then get the computer to look, on its own, for instances of what you tell it to look for, you would have a commercial product; not that that is what you are after, just sayin'. In medicine, I believe it is being shown that the computer is far better than humans at finding patterns and properties in imagery. Probably astronomy too. So it only makes sense that a program that can search for and find certain plants, footprints, dying trees, etc., would be very valuable in many areas. I have no idea if this is part of your road map, but it seems like it might be.

Dave
 

Attachments

clolsonus

I have been playing around with computing a "false" NDVI index (false because it's from an RGB camera) and displaying it in real time using the OpenGL shader language. Trees cast an eerie green glow on a snow-covered field in this example. The interesting thing here is that GLSL enables tuning and changing the vegetation index without touching the original images. It would be possible to create custom indices to bring out some feature of interest in the imagery (maybe Outback Joe was wearing a purple scarf when he disappeared). It should be possible to select and change indices, or go back and forth with the original imagery, all in real time in the OpenGL map visualizer app. Reminder: my use case is hunting needles (aka invasive plant species.)
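The index math itself is tiny. Here is a numpy sketch of one common RGB-only stand-in for NDVI; the exact index the viewer computes in its shader may differ, and this is just an illustration of the idea:

```python
import numpy as np

def false_ndvi(rgb):
    """Pseudo-NDVI from an ordinary RGB image: (G - R) / (G + R).

    A true NDVI needs a near-infrared band; with an RGB camera the green
    channel stands in for NIR, hence a "false" NDVI. Output is in [-1, 1],
    with higher values for greener pixels.
    """
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    return (g - r) / np.maximum(g + r, 1e-6)  # guard against divide-by-zero
```

Because this runs per-pixel on the fly (in the shader, or here in numpy), the index can be retuned or swapped out without ever modifying the original imagery on disk.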
 
