
How to inspect quality of flat roofs

OK - thanks for the clarification about the folder locations - I can work with that once the test stitching has finished (still waiting on the stitching: 293/705? <-- is that the number of pictures in ZTEST??)

Great test pictures in ztest, I must say!

About the FLIR: no, they are not georeferenced unfortunately, so stitching will not be easy, or may not even be feasible (I didn't think of georeferencing as an obviously needed piece of telemetry)

I'll finish the ZTEST stitching first and then try with my own test data (around 200 pics). Once that works I'll post an update.
Oh yes - another question: under ImageAnalysis I see different maps, like SRTM, covering Africa, Australia, ... but not Europe (I live in Belgium, Europe) - is that a problem?
Thanks
Marc
 

293/705 is the number of pair comparisons it is planning to do. There are some options you can give to the processing script to trade quality for time: you can give a smaller image scale so that fewer keypoints are found and comparisons go more quickly, or you can reduce the distance radius for pair comparisons to generate fewer comparisons. There are a few other tricks if you want to bail out early or do an intermediate rendering of what you have so far, but those require running some scripts from the command line, and the tradeoff is usually fewer match pairs found, possibly more missing images in the final stitch, etc.
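To make the distance-radius trade-off concrete, here is a hedged back-of-envelope in plain Python (not the toolchain's actual code; the grid layout and radius are invented for illustration): a full pairwise pass over n images is n(n-1)/2 comparisons, while restricting comparisons to nearby images keeps only a small fraction of those.

```python
import itertools
import math

def pair_comparisons(positions, max_dist=None):
    """Count image pairs to compare; optionally keep only pairs
    whose (x, y) positions lie within max_dist of each other."""
    pairs = 0
    for (x1, y1), (x2, y2) in itertools.combinations(positions, 2):
        if max_dist is None or math.hypot(x2 - x1, y2 - y1) <= max_dist:
            pairs += 1
    return pairs

# 100 images on a 10 x 10 survey grid with 30 m spacing (illustrative)
grid = [(30.0 * i, 30.0 * j) for i in range(10) for j in range(10)]
print(pair_comparisons(grid))               # full pairwise: 4950
print(pair_comparisons(grid, max_dist=60))  # radius-limited: 502
```

So even a generous 60 m radius cuts roughly 90% of the comparisons for this layout, which is why shrinking the radius speeds things up so much.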

Belgium should be covered under Eurasia (that's just how the SRTM folks organized the data), so you should be fine processing local data. I've run a dataset from a farm in Israel and that worked too.

One of the reasons I picked ztest is the mature corn ... it is *really* hard to find proper matching keypoints there and do a good stitch. (Feel free to upload the dataset to dronedeploy or pix4d cloud and see how much of it they manage to fit together.)

If you know you are in a situation (corn, crops, forest, etc.) that confounds the stitching process, I have a different matching strategy available; you can invoke it with the --match-strategy=smart option to the process.py script. This uses the geotag and orientation information in the images, along with the SRTM ground surface, to predict where matches should be. That lets the matcher throw away a boatload of extraneous noise and find true matches where other tools just punt or completely bungle the stitch. As it processes image pairs, it uses the matches it finds to triangulate and estimate the actual ground elevation, and to correct for magnetic compass errors, so the prediction gets better and better as it runs. But this isn't the default because it is slower than the default scheme (and the default scheme performs pretty comparably to DD/pix4d). You can rerun the matcher later with the 'smart' option enabled to find matches that the default scheme missed.
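The core idea of geometry-predicted matching can be sketched in a few lines of plain Python. This is only an illustration of the principle, not the project's implementation: for a nadir-pointing camera, the ground footprint radius follows from the altitude above ground and the field of view, and two images are worth comparing only if their footprints can overlap. The FOV and altitude values are assumptions for the example.

```python
import math

def footprint_radius_m(alt_agl_m, diag_fov_deg):
    """Half-diagonal of the ground footprint for a nadir camera
    at alt_agl_m meters above ground level."""
    return alt_agl_m * math.tan(math.radians(diag_fov_deg / 2))

def likely_overlap(p1, p2, alt_agl_m, diag_fov_deg=78.8):
    """Predict whether two nadir images can overlap, using only
    their (x, y) geotags: compare the distance between image
    centers against the summed footprint radii."""
    r = footprint_radius_m(alt_agl_m, diag_fov_deg)
    return math.dist(p1, p2) < 2 * r

print(likely_overlap((0, 0), (50, 0), alt_agl_m=60))    # True
print(likely_overlap((0, 0), (500, 0), alt_agl_m=60))   # False
```

The real strategy described above goes further (orientation, SRTM elevation, iterative refinement), but this is the filter that lets a matcher skip hopeless pairs up front.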

For the flir images, I have used flight logs + time (on my fixed wing drones) to geotag images after the flight. I don't know if that is possible for you, but sometimes there are creative options if you are willing to work for them.
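The log + time approach amounts to interpolating the aircraft position at each photo's timestamp from the flight-log fixes. A minimal sketch, assuming an invented log layout of time-sorted (t, lat, lon, alt) tuples with timestamps in seconds (real logs will need parsing and clock-offset alignment first):

```python
import bisect

def interpolate_fix(log, t):
    """Linearly interpolate (lat, lon, alt) at time t from a flight
    log given as a time-sorted list of (t, lat, lon, alt) tuples.
    Times outside the log clamp to the first/last fix."""
    times = [row[0] for row in log]
    i = bisect.bisect_left(times, t)
    if i == 0:
        return log[0][1:]
    if i == len(log):
        return log[-1][1:]
    t0, *p0 = log[i - 1]
    t1, *p1 = log[i]
    f = (t - t0) / (t1 - t0)
    return tuple(a + f * (b - a) for a, b in zip(p0, p1))

log = [(0.0, 50.0, 4.0, 100.0), (10.0, 50.001, 4.001, 110.0)]
print(interpolate_fix(log, 5.0))  # midpoint between the two fixes
```

The interpolated position can then be written back into each image's EXIF with a tool such as exiftool.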

Best regards,

Curt.
 
OK - I tried stitching my dataset and got the following message:

"project processed with arguments: Namespace(cam_calibration=False, camera=None, detector='SIFT', filter='gms', force_altitude=None, grid_detect=1, ground=None, group=0, match_ratio=0.75, match_strategy='traditional', max_angle=25.0, max_dist=None, min_chain_length=3, min_dist=None, min_pairs=25, orb_max_features=20000, pitch_deg=-90.0, project='ytest', refine=False, reject_margin=0, roll_deg=0.0, scale=0.4, star_line_threshold_binarized=8, star_line_threshold_projected=10, star_max_size=16, star_response_threshold=30, star_suppress_nonmax_size=5, surf_hessian_threshold=600, surf_noctaves=4, yaw_deg=0.0) <--------------------------------------------- IS THIS ALL OK?
Step 1: setup the project
Creating analysis directory: ytest/ImageAnalysis
project: creating meta directory: ytest/ImageAnalysis/meta
project: creating cache directory: ytest/ImageAnalysis/cache
project: creating state directory: ytest/ImageAnalysis/state
project: project configuration doesn't exist: ytest/ImageAnalysis/config.json
Continuing with an empty project configuration
Created project: ytest
Camera auto-detected: DJI_FC220 DJI FC220 None <-------- CAMERA NOT RECOGNISED
Camera file: ../cameras/DJI_FC220.json
../cameras/DJI_FC220.json: json load error:
[Errno 2] No such file or directory: '../cameras/DJI_FC220.json'
Camera autodetection failed. Consider running the new camera script to create a camera config and then try running this script again. <-------- HELP PLEASE ;-)


--> something with the 1b-set-camera-config.py script, I suppose?
I looked up the specs for the CCD:
Sensor dimensions:
6.17 mm x 3.47 mm (0.243 in x 0.136 in)
The lens is a 28 mm focal length with a 78.8 degree field of view at f/2.2,
capturing photos at 4000 x 3000 pixels.

Do you need anything else here?
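For reference, the specs above are enough for a back-of-envelope check of the lens parameters a camera config typically needs. A hedged sketch in plain Python, assuming the quoted 78.8° is the diagonal field of view (how DJI usually quotes it) and that the 28 mm figure is the 35 mm-equivalent focal length rather than the physical one:

```python
import math

sensor_w_mm, sensor_h_mm = 6.17, 3.47   # sensor dimensions from the specs
width_px, height_px = 4000, 3000        # photo resolution from the specs
diag_fov_deg = 78.8                     # assumed to be the diagonal FOV

diag_px = math.hypot(width_px, height_px)       # 5000 px
diag_mm = math.hypot(sensor_w_mm, sensor_h_mm)  # about 7.08 mm

# Pinhole model: f = (d / 2) / tan(fov / 2)
f_px = (diag_px / 2) / math.tan(math.radians(diag_fov_deg / 2))
f_mm = f_px * diag_mm / diag_px                 # physical focal length

print(round(f_px, 1), round(f_mm, 2))   # roughly 3044 px and 4.3 mm
```

So the physical focal length works out to roughly 4.3 mm, consistent with the 28 mm number being a 35 mm equivalent; feeding 28 mm into a config as the physical focal length would be badly wrong.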
 

You would normally run the new camera script (by specifying the project, it can examine the image metadata):

./99-new-camera.py ytest --ccd-width 6.17

That will probably fail because I forgot to install a python module on the virtual machine. You can do this yourself by running:

sudo dnf install python3-exiv2

(the sudo password for the virtual machine is "uavlab")

After that you should be able to successfully run the new camera script. If it works, can you please send me the config that is created so I can add the camera specs to the project? Oh, and if the lens has radial distortion there's an additional step to help estimate that, which is currently a bit of a chicken/egg thing. This is an area where this project is a bit 'guts out'. But once you have the camera config nailed down you don't have to worry about it after that.
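The chicken/egg nature of distortion estimation is easier to see with the radial model such tools commonly use (Brown-Conrady). A minimal sketch, assuming that model; the coefficient values below are invented for illustration and are not this camera's calibration:

```python
def distort(x, y, k1, k2):
    """Apply Brown-Conrady radial distortion to a normalized image
    coordinate (x, y), i.e. relative to the principal point and
    divided by the focal length."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort(x, y, k1, k2, iters=10):
    """Invert distort() by fixed-point iteration: you need an
    estimate of the undistorted point before you can evaluate the
    distortion at it, hence the chicken/egg loop."""
    xu, yu = x, y
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / scale, y / scale
    return xu, yu

xd, yd = distort(0.3, 0.2, k1=-0.12, k2=0.05)   # invented coefficients
xu, yu = undistort(xd, yd, k1=-0.12, k2=0.05)
print(round(xu, 4), round(yu, 4))   # recovers roughly (0.3, 0.2)
```

Calibration faces the same loop one level up: you need matched features to estimate the coefficients, but good matching benefits from already-undistorted images.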

Curt.
 
OK - creation of the camera file succeeded and I'm trying the process.py script on my ytest set. I do see that the terminal screen regularly just stops (closes) and I have to rerun the scripts so it continues (after the script has checked the previous work) - what do you think is causing this? Not handy, since this lengthy job needs to be done at night or in the background...
The script currently gives me 3051/4562 comparisons (200 images).

But I'm still very enthusiastic about the project!
Greetings
Marc
 

Attachment: DJI_FC220.json.zip (459 bytes)

Hi Marc, thanks for the camera config, I have added it to the repository. For the other issue, I suspect the virtual machine could be running out of memory(?). A lightweight way to monitor memory usage is to right click on the terminal window and select "new window", then inside the new window run the "top" command. This will show you a summary of your memory usage and what resources the top processes are consuming.

If you have more memory available on your mac, you could shut down the virtual machine and then increase the amount of memory allocated to it. By default I picked 4 GB, which works for smaller image sets, but you are starting to get into the medium size range. The largest job I have run through this toolset is 2800 images, but I ran that on a dedicated linux PC with 32 GB of RAM (not using the virtual machine).
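A rough sense of why a 200-image set strains 4 GB: feature descriptors alone add up quickly. A hedged back-of-envelope, using the 20000 max-feature count visible in the argument dump above and SIFT's 128-element descriptors stored as 4-byte floats (an illustration, not the toolchain's measured footprint, which also includes images, matches, and working buffers):

```python
n_images = 200
features_per_image = 20_000    # orb_max_features from the argument dump
descriptor_bytes = 128 * 4     # SIFT: 128 floats of 4 bytes each

total_gb = n_images * features_per_image * descriptor_bytes / 1024**3
print(f"{total_gb:.1f} GB for descriptors alone")  # about 1.9 GB
```

That is already half the default allocation before counting anything else, so running out of memory mid-run is plausible.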

Best regards,

Curt.
 
OK - clear - I'll check this when stitching the full set.
For now I took a couple of the pics and ran the script (around 350 comparisons) - it fully finished.
Then I ran the command ./explorer.py ytesty (that is the small test) and got the following error, plus a lot of text lines:

[uavlab@localhost scripts]$ ./explorer.py ytesty
no histogram templates found...
Known pipe types:
glxGraphicsPipe
(all display modules loaded.)
:device(warning): /dev/input/event4 is not readable, some features will be unavailable.
:device(warning): /dev/input/event6 is not readable, some features will be unavailable.
Loading surface: ytesty/ImageAnalysis/models/surface.bin
Generating Delaunay mesh and interpolator ...
No annotations file found.
driver vendor VMware, Inc.
alpha_scale_via_texture True
color_scale_via_lighting True
copy_texture_inverted False
max_3d_texture_dimension 2048
max_clip_planes 8
max_cube_map_dimension 8192
max_lights 8
max_texture_dimension 8192
max_texture_stages 8
max_vertex_transform_indices 0
max_vertex_transforms 0
shader_model 2
supports_3d_texture True
supports_basic_shaders True
supports_compressed_texture True
supports_cube_map True
supports_depth_stencil True
supports_depth_texture True
supports_generate_mipmap True
supports_shadow_filter True
supports_tex_non_pow2 True
supports_texture_combine True
supports_texture_dot3 True
supports_texture_saved_result True
supports_two_sided_stencil True
max_vertices_per_array 3000
max_vertices_per_primitive 3000
supported_geom_rendering 1006935
supports_multisample False
supports_occlusion_query True
prefers_triangle_strips False
Loading models:
100%|█████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 440.48it/s]
Loading base textures:
100%|██████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 38.51it/s]
Traceback (most recent call last):
  File "./explorer.py", line 692, in <module>
    app.load( os.path.join(proj.analysis_dir, "models") )
  File "./explorer.py", line 306, in load
    self.sortImages()
  File "./explorer.py", line 451, in sortImages
    top_entry = result_list[-1-self.top_image]
IndexError: list index out of range


Any clue?
Thanks
Marc
 
Hi Marc, I've seen this happen when the drone (phantom 3/4) didn't geotag the images with a gps altitude (and I forgot to override the geotagged altitude with a more correct one), but my understanding is that the mavic does this properly. Would you be able to share this subset of images? It might be easier if I just take a look on my end. (The explorer.py script dumps out a bunch of debug info about your graphics card because I was having issues with some systems not accurately self-reporting their own capabilities, and I needed to code around that.) Thanks for your patience; I learn (and often fix) new things with each new data set and each person that uses the software.
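If you want to check the geotagged altitude yourself before sharing, the EXIF encoding is simple: GPSAltitude is an unsigned (numerator, denominator) rational and GPSAltitudeRef flags below sea level (1) versus above (0). A minimal decoding sketch; the helper is illustrative and not part of this toolchain (values as read by exiv2, exiftool, or PIL would be fed into it):

```python
def gps_altitude_m(altitude_rational, altitude_ref=0):
    """Decode EXIF GPSAltitude: an unsigned (numerator, denominator)
    rational, with GPSAltitudeRef == 1 meaning below sea level."""
    num, den = altitude_rational
    alt = num / den
    return -alt if altitude_ref == 1 else alt

print(gps_altitude_m((12345, 100)))     # 123.45 m above sea level
print(gps_altitude_m((500, 10), 1))     # -50.0 m, i.e. below sea level
```

A missing GPSAltitude tag, or a zero value, would explain the symptom described above.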
 
Dataset sent via wetransfer
Thanks a lot already for your help!
 

The first time through is always the hardest, sorry for the learning curve, but you can't beat the price. :)
 
True, and the swift feedback is outstanding as well ;)
 
