
error: Less than 2 images defined. when running "Step-by-step example (individual images)" #14

Open
kakghiroshi opened this issue Apr 11, 2022 · 16 comments · Fixed by #15

@kakghiroshi

Hi!
I was recently following the steps in the "Step-by-step example (individual images)" to run the code. I ran into a problem at the step

external SfM tool and external surface reconstruction tool

1. When I downloaded the official SfM (Structure-from-Motion) reconstruction containing the DSLR images and the cube map images that you posted in

An SfM (Structure-from-Motion) reconstruction containing the DSLR images and the cube map images must be created using an external tool. We provide an example SfM reconstruction here (unzip into the same directory as the input data archives).

the TX, TY and TZ values in images.txt are

-inf -inf inf

I estimated the scale factor using:

mkdir sparse_reconstruction_scaled
${PIPELINE_PATH}/SfMScaleEstimator -s sparse_reconstruction -si . -i scan_clean -o sparse_reconstruction_scaled --cube_map_face_camera_id 0
When I execute the command

${PIPELINE_PATH}/ImageRegistrator \
  --scan_alignment_path scan_clean/scan_alignment.mlp \
  --occlusion_mesh_path surface_reconstruction/surface.ply \
  --occlusion_splats_path surface_reconstruction/splats.ply \
  --multi_res_point_cloud_directory_path multi_res_point_cloud_cache \
  --image_base_path . \
  --state_path sparse_reconstruction_scaled/colmap_model \
  --output_folder_path dslr_calibration_jpg \
  --observations_cache_path observations_cache \
  --camera_ids_to_ignore 0

the error "Less than 2 images defined" occurred. (PS: the input data of this command includes sparse_reconstruction_scaled/colmap_model.)
I found that the problem is with function ReadColmapImages in the file colmap_model.cc.
The line was successfully read into the string, but it looks like std::istringstream cannot parse

-inf -inf inf

in the line, so the data after the first

inf (either positive or negative)

can't be read into

>> new_image->image_T_global.data()[4] >> new_image->image_T_global.data()[5] >> new_image->image_T_global.data()[6] >> new_image->camera_id >> new_image->file_path;

respectively.

By the way, I am running this project on Ubuntu 20.04. The environment was built according to the instructions in the Dockerfile.

Any suggestion about how to deal with this problem would be welcome.

I also want to know how you got the two camera models and

-inf -inf inf

by using COLMAP.

I would really appreciate it if you could answer my question!

@ClementPinard
Collaborator

Hello,

It seems the reader is not working properly. image_id should obviously be only an integer, so I suspect that the expected and actual field orders are different.

Are you able to run COLMAP on your computer? Can you try the same thing with an SfM model created from scratch instead of the provided one? The given model dates from 2017, so it's possible the file format has changed since.

@kakghiroshi
Author

kakghiroshi commented Apr 11, 2022

Thanks for your quick reply! Of course I can run COLMAP on my computer, but I don't know how to import the intrinsic parameters of the two cameras into COLMAP, and I can't set the camera id to 0. It seems that the camera id must be greater than 0. This is the model I created:

[screenshots of the created model]
As you mentioned in your paper

T. Schöps, J. L. Schönberger, S. Galliani, T. Sattler, K. Schindler, M. Pollefeys, A. Geiger, "A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos", Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

I included rendered cube map images for each scan position in the SfM reconstruction, so there are 35 images in total (23 original and 12 rendered cube map images).
You can see that only 28 images were registered successfully, while the provided model outputs 33 images. Maybe there are other operations that need to be done in between?
I would be very grateful if you could give the corresponding steps.
According to the model I created, I changed the command from ${PIPELINE_PATH}/SfMScaleEstimator -s sparse_reconstruction -si . -i scan_clean -o sparse_reconstruction_scaled --cube_map_face_camera_id 0 to ${PIPELINE_PATH}/SfMScaleEstimator -s sparse_reconstruction -si . -i scan_clean -o sparse_reconstruction_scaled --cube_map_face_camera_id 2.

Then an error occurred.
Do you have any suggestions?

@ClementPinard
Collaborator

You can see that only 28 images were registered successfully, while the provided model outputs 33 images. Maybe there are other operations that need to be done in between?

It's not unusual to have missing images when you run COLMAP without refinement. Here is a set of advice for adding more images to the model:
https://colmap.github.io/faq.html (see "Increase number of matches")
I do concede that it can be somewhat empirical; just test different parameters and see how it goes. The good news is that there are very few images, so the trial and error should not be too painful.

Then an error occurred. Do you have any suggestions?

It's strange that the intrinsics file was not created. See this line that is normally run when launching CubeMapRenderer:
https://github.com/ETH3D/dataset-pipeline/blob/master/src/exe/cube_map_renderer.cc#L146

Can you specify the options you used for this command?

@kakghiroshi
Author

Can you specify the options you used for this command?

Sure, I executed the exact same commands as you provided:

${PIPELINE_PATH}/CubeMapRenderer -c scan_clean/scan1.ply -o cube_maps/scan1.ply --size 2048
${PIPELINE_PATH}/CubeMapRenderer -c scan_clean/scan2.ply -o cube_maps/scan2.ply --size 2048

Actually the intrinsics file has been created, but the code can't read it.


At the same time, if I use my own created model, should the ignored camera id be modified according to my model, for example ignored_camera_id set to 2?


@ClementPinard
Collaborator

ClementPinard commented Apr 11, 2022

It's indeed very strange. The error you get is definitely a problem of the file not being opened (it hasn't even checked yet whether it is readable), meaning the file appears to be missing.

One shot in the dark: try replacing . with the actual absolute path in your SfMScaleEstimator command.

${PIPELINE_PATH}/SfMScaleEstimator -s sparse_reconstruction -si ${PWD} -i scan_clean -o sparse_reconstruction_scaled --cube_map_face_camera_id 2

@kakghiroshi
Author

kakghiroshi commented Apr 11, 2022

Thanks for your kind help!
The problem

Actually the intrinsics file has been created, but the code can't read it.

is solved. I am still following the Step-by-step example (individual images) to run the code.

At the same time, I found that although the translation vector (TX, TY, TZ) is numeric in my model, after the SfMScaleEstimator step it becomes

inf

again.

As you can see,

if (opt::GlobalParameters().scale_factor != 0) {
  new_image->image_T_global.translation() *= opt::GlobalParameters().scale_factor;
} else {
  LOG(ERROR) << "Please load point clouds before images";
}

shows that if scale_factor == 0, the error "Please load point clouds before images" occurs.
I do get this error:
[screenshot of the error]
The code

float scale = opt::GlobalParameters().scale_factor;

and

<< " " << colmap_image.image_T_global.data()[4] / scale

shows that if scale_factor equals 0, the code will definitely output

inf

From

scale_factor = 0; // 0 means that it will be replaced by inverse of first point cloud transform scale

the default value of scale_factor is 0, and it should be replaced by the inverse of the first point cloud's transform scale. So maybe the point cloud didn't load correctly?
Do you have any suggestion?

@ClementPinard
Collaborator

I just did some tests on my own using the docker image, and did not stumble upon your problems. Interestingly enough, there was another scale factor problem.

Try to check out #15 and see how it goes; maybe it will solve your problem?

Otherwise, the tests pass on my side, so I'm not sure what your problem is; maybe there are some issues with the standard library and your gcc version.

@kakghiroshi
Author

I have set the global factor to 1 as you suggested in #15.

Running a test with my own model using the docker image:

A Dockerfile is given to build everything from scratch in Ubuntu 20.04.

Still in ImageRegistrator, an error occurs at

void CheckEqualHelper(int a, int b) {
  CHECK_EQ(a, b);
}

When using the model provided by

We provide an example SfM reconstruction here (unzip into the same directory as the input data archives).

this problem doesn't happen. (But I still don't know if the code will work, as it's still iterating.)
This problem arises very strangely, because my model is also generated with COLMAP. What is the key to making a equal to b?
My models are listed below:
cameras.txt
images.txt
points3D.txt

@ClementPinard
Collaborator

(Sorry the issue got automatically fixed, wasn't intentional, I reopened it)

@ClementPinard
Collaborator

Glad to see we are making progress! :)

See COLMAP's thin prism fisheye definition here:
https://github.com/colmap/colmap/blob/master/src/base/camera_models.h#L349

As you can see, it's 12 parameters. ETH3D checks that the camera model described in cameras.txt matches the number of floats in the line, hence the given assertion. For some reason, ETH3D reads 13 parameters, which then fails because it's not 12.

See here for how the camera line is read:

bool ReadColmapCameras(const std::string& cameras_txt_path,

It seems to me the cameras.txt you provided is fine, so something goes wrong during the reading.

I will do tests on my own; it might be worth trying to log what these 13 parameters are, to understand where things went wrong.

@kakghiroshi
Author

kakghiroshi commented Apr 15, 2022

Thanks for your work! 👍
I have found the cause of the problem. I first used COLMAP-3.7-windows-no-cuda.zip on Windows; I guess Windows has a different output format compared with the Linux version (built from source code). Specifically, the line endings they use are different, so the model generated on Windows is incorrectly parsed on Linux, like:
[screenshot]
The actual size of the parameter line is 55, but the code recognizes the string size as 56. I guess the extra character is invisible, so it shows as nothing. Using a model created by the Linux version of COLMAP, the problem is fixed:
[screenshot]
Still running the Step-by-step example 😤, still in ImageRegistrator, using the command
${PIPELINE_PATH}/ImageRegistrator \
  --scan_alignment_path scan_clean/scan_alignment.mlp \
  --occlusion_mesh_path surface_reconstruction/surface.ply \
  --occlusion_splats_path surface_reconstruction/splats.ply \
  --multi_res_point_cloud_directory_path multi_res_point_cloud_cache \
  --image_base_path . \
  --state_path sparse_reconstruction_scaled/colmap_model \
  --output_folder_path dslr_calibration_jpg \
  --observations_cache_path observations_cache \
  --camera_ids_to_ignore 2

a problem occurred:
[screenshot]

I tried to replace --image_base_path . with --image_base_path dslr_images; it doesn't work.

[screenshot]
The code I added:
[screenshot]
Any suggestions would be welcome. Thanks again!

@kakghiroshi
Author

Hi!
Recently I have been trying to apply your code to my project. However, the difference is that the point cloud we obtain does not contain a color property; it is densified by SLAM and stitched together. In the CubeMapRenderer step, cube map face images are rendered from the laser scans with color. I wonder if the intensity property can be used to render the cube map face images instead of color (since our point cloud has no color property) in the CubeMapRenderer step.
Of course, the code would need to be modified to achieve intensity image rendering.
Can you give me some advice? Thanks! 😊

@ClementPinard
Collaborator

Thanks for your insight! Indeed, I forgot about the line ending confusion between Linux and Windows... good to know. I'll try to update the README to warn people about this issue!

For your problem, this is the main limitation of this work: it needs colored point clouds, which are not easy to get, and the scanners that are able to produce them are very slow. You can't use the point cloud from a handheld device like the Zeb Horizon.

Shameless plug, but you might want to have a look at my own project that uses colorless point clouds: https://github.com/ClementPinard/depth-dataset-builder (there's an arXiv article that comes with it).

It uses the functions from this repo as well, but the localisation of frames with respect to the lidar point cloud is mostly done with ICP, and not by trying to incorporate the lidar rendering into the COLMAP reconstruction.

If you look at the original publication (https://www.eth3d.net/data/schoeps2017cvpr.pdf, Figure 4), they did try ICP and got worse results, but it's not so bad given that it allows you to use colorless point clouds. They did not implement it in this repo because it was probably a very manual alignment which could not be converted into a usable tool without some additional work.

@ClementPinard
Collaborator

For your problem with ImageRegistrator, it depends what the file path is in the images.txt of your COLMAP model. In the provided one, the path is in the format dslr_images/DSC_0285.JPG. It means that the image base path must be the path to the parent of the dslr_images folder.

In a sense, your second screenshot is right with that assumption. What is weird is your first screenshot, where the file_path (which should be literally the path written in the COLMAP file) is said to be ./DSC_0275.JPG. It's weird that there is the . in this path.

Did you modify the images.txt file manually? If not, is the file different from the provided one?

@kakghiroshi
Author

This project may indeed be helpful in our work! Thank you for your dedication. I will add this project to my to-do list.

For your problem with ImageRegistrator, it depends what the file path is in the images.txt of your COLMAP model. In the provided one, the path is in the format dslr_images/DSC_0285.JPG. It means that the image base path must be the path to the parent of the dslr_images folder.

I did check my file path in images.txt and solved this problem, thanks again!
Given our camera's low resolution (probably 1024×768), running the Step-by-step example (camera rig images) may be meaningful. The code works perfectly.
Given that I have successfully run both examples, I tried to observe their alignment in ImageRegistrator.
Here come several questions:

1: In the Step-by-step example (individual images), the resulting depth map I get has many holes.
[depth map screenshots]
Compare with your posted depth map:
[image captured from README.md]
Do you know what causes this?

2: There are rarely scan reprojection points on the left wall (annotated by the white box), but the opposite on the facing wall (annotated by the yellow box) and on the left (annotated by the red box; it's weird to see different densities on the same wall).
[annotated screenshot]
What makes this difference?

3: In the Step-by-step example (camera rig images), the depth map only covers the area in the center of the image; the edges of the image do not contain depth information. Is this normal?
[screenshot]
Looking forward to your reply! 😁

@ClementPinard
Collaborator

Most likely, the occlusion model is in front of the actual model. The points are thus deemed occluded and are not shown. A first solution would be to try other options when generating the occlusion mesh so that it has better quality.

You might also want to have a look at the global parameters: https://github.com/ETH3D/dataset-pipeline/blob/master/src/opt/parameters.h

More specifically, in this case, the problem is that occlusion_depth_threshold is too low and thus too strict: if a point has a depth greater than the occlusion depth plus the threshold, it's not visible. Raise the threshold and more points will show.

For the second use case, it's indeed pretty weird. It looks like a problem with the camera model. Somehow the polygons around the center of the image did not render, making the occlusion depth there 0, which means everything is occluded. You can check that intuition by looking at the "occlusion depth map" in the select mode.

If this is true, this might be a bug that we have to look into. I know that some extreme camera models + parameters can give up rendering at the edge of the screen, but I didn't know it occurred for the low-res cameras!
