error: Less than 2 images defined. when running "Step-by-step example (individual images)" #14
Hello, it seems the reader is not working properly. image_id should obviously be an integer, so I suspect that the expected and actual field orders are different. Are you able to run COLMAP on your computer? Can you try the same thing with an SfM model created from scratch instead of the provided one? The given model dates from 2017, so it's possible the file format has changed since.
It's not unusual to have missing images when you run COLMAP without refinement. Here is some advice for adding more images to the model:
It's strange that the intrinsics file was not created. See this line that is normally run at launch. Can you specify the options you used for this command?
Sure, I executed the exact same command as you provided:
Actually, the intrinsics file has been created, but the code can't read it. At the same time, if I use my own model, should the ignored camera id be modified according to my model, for example ignored_camera_id set to 2?
It's indeed very strange. The error you get is definitely a problem of the file not being opened (it still hasn't even checked whether it is readable), meaning the file appears to be missing. One shot in the dark: try to replace it.
Thanks for your kind help!
The previous error is solved. I still followed the step-by-step example. This time, the error "Please load point clouds before images" occurred. As you can see in dataset-pipeline/src/io/colmap_model.cc, lines 130 to 134 (at dc4a106), and in line 166 and line 172 of the same file, if scaled_factor equals 0 the code will definitely output this message. From dataset-pipeline/src/opt/parameters.h, line 64 (at dc4a106): when scaled_factor is 0, it should be replaced by the inverse of the first point cloud transform's scale. So maybe the point cloud didn't load correctly? Do you have any suggestions?
I just did some tests on my own using the docker image, and did not stumble upon your problems. Interestingly enough, there was another scale factor problem. Try to check out #15 and see how it goes; maybe it will solve your problem? Otherwise, the tests pass on my side, so I'm not sure what your problem is; maybe there are some issues with the std lib and your gcc version.
I have set the global factor to 1 as you suggested in #15. Running a test with my own model using the docker image, I still get an error in ImageRegistrator: dataset-pipeline/src/io/colmap_model.cc, lines 784 to 786 (at dc4a106). When using the provided model, this problem doesn't happen. (But I still don't know if the code will work, as it's still iterating.)
(Sorry, the issue got automatically closed; that wasn't intentional. I reopened it.)
Glad to see we are making progress! :) See COLMAP's thin prism fisheye definition here: as you can see, it's 12 parameters. ETH3D checks that the camera model described in the cameras.txt matches the number of floats on the line, hence the given assertion. For some reason, ETH3D reads 13 parameters, which then fails because it's not 12. See here for how the camera line is read: dataset-pipeline/src/io/colmap_model.cc, line 53 (at dc4a106). It seems to me the cameras.txt you provided is fine, so something goes wrong during the reading. I will do tests on my own; it might be worth logging what these 13 parameters are to understand where things went wrong.
Thanks for your work! :thumbsup: The error occurred again, and I tried the replacement you suggested.
Hi!
Thanks for your insight! Indeed, I forgot about the line-ending confusion between Linux and Windows... good to know, I'll try to update the README to warn people about this issue!

For your problem, this is the main limitation of this work: it needs colored point clouds, which are not easy to get, and the scanners that are able to produce them are very slow. You can't use the point cloud from a handheld device like the Zeb Horizon.

Shameless plug, but you might want to have a look at my own project that uses colorless point clouds: https://github.com/ClementPinard/depth-dataset-builder (there's an arXiv article that comes with it). It uses the functions from this repo as well, but the localisation of frames with respect to the lidar point cloud is mostly done with ICP, and not by trying to incorporate the lidar rendering into the COLMAP reconstruction.

If you look at the original publication (https://www.eth3d.net/data/schoeps2017cvpr.pdf, Figure 4), they did try ICP and got worse results, but it's not so bad given that it allows you to use colorless point clouds. They did not implement it in this repo because it was probably a very manual alignment which could not be converted into a usable tool without some additional work.
For your problem with ImageRegistrator, it depends what the file path is in the images.txt. In a sense, your second screenshot is right with that assumption. What is weird is your first screenshot. Did you modify the images.txt?
This project may indeed be helpful in our work! Thank you for your dedication. I will add this project to my to-do list.
I did check my file path in the images.txt and solved this problem, thanks again!

1. In "Step-by-step example (individual images)", the resulting depth map I get has many holes.
2. There are rarely scan reprojection points on the left wall (annotated by the white box), but the opposite on the facing wall (annotated by the yellow box) and on the left (annotated by the red box; weird to see different densities on the same wall).
3. In "Step-by-step example (camera rig images)", the depth map only covers the area in the center of the image; the edges of the image contain no depth information. Is this normal?
Most likely, the occlusion model is in front of the actual model. The points are thus deemed occluded and are not shown. A first solution would be to try other options for generating the occlusion mesh so that it has better quality. You might also want to have a look at the global parameters: https://github.com/ETH3D/dataset-pipeline/blob/master/src/opt/parameters.h More specifically, in this case, the problem is with one of those parameters.

For the second use case, it's indeed pretty weird. It looks like a problem with the camera model. Somehow the polygons around the center of the image did not render, making the occlusion depth there 0, which means everything is occluded. You can check that intuition by looking at the "occlusion depth map" in the select mode. If this is true, this might be a bug that we have to look into. I know that some extreme camera models + parameters can give up rendering at the edge of the screen, but I didn't know it occurred for the low-res cameras!
Hi!
I was recently following the steps in the "Step-by-step example (individual images)" to run the code, and I ran into a problem at one step.
1. When I downloaded the official SfM (Structure-from-Motion) reconstruction containing the DSLR images and the cube map images that you posted, the TX, TY and TZ values in images.txt were not read correctly.
I estimated the scale factor using:

mkdir sparse_reconstruction_scaled
${PIPELINE_PATH}/SfMScaleEstimator -s sparse_reconstruction -si . -i scan_clean -o sparse_reconstruction_scaled --cube_map_face_camera_id 0

When I executed the command:

${PIPELINE_PATH}/ImageRegistrator \
    --scan_alignment_path scan_clean/scan_alignment.mlp \
    --occlusion_mesh_path surface_reconstruction/surface.ply \
    --occlusion_splats_path surface_reconstruction/splats.ply \
    --multi_res_point_cloud_directory_path multi_res_point_cloud_cache \
    --image_base_path . \
    --state_path sparse_reconstruction_scaled/colmap_model \
    --output_folder_path dslr_calibration_jpg \
    --observations_cache_path observations_cache \
    --camera_ids_to_ignore 0
the error "Less than 2 images defined" occurred. (PS: the input data of this command includes sparse_reconstruction_scaled/colmap_model.) I found that the problem is in the function ReadColmapImages in the file colmap_model.cc. The information was successfully read into the string line, but it looks like std::istringstream cannot recognize some character in line, so the data after that first character can't be read into

>> new_image->image_T_global.data()[4] >> new_image->image_T_global.data()[5]
>> new_image->image_T_global.data()[6] >> new_image->camera_id >> new_image->file_path;

respectively. By the way, I am running this project on Ubuntu 20.04; the environment was built according to the instructions in the Dockerfile. Any suggestion about how to deal with this problem would be welcome.
I also want to know how you obtained the two camera models by using COLMAP.
I would really appreciate it if you could answer my question!