
Script test doesn't yield expected results. Could there be a bug? #5

Open · presentp opened this issue Aug 10, 2023 · 7 comments

presentp commented Aug 10, 2023

Dear joreeves

I tried to use your script but did not arrive at the expected result. I generated camera positions from 17 pictures in Meshroom via the StructureFromMotion node and exported them to a JSON file via the ConvertSfMFormat node, per your instructions. I then ran your script to generate the transforms.json file and arrived at the following results.
As an example, one of the 17 source pictures:
IMG_3299

Results:
MeshroomCameras_ToInstantNGP_Result01
MeshroomCameras_ToInstantNGP_Result02
MeshroomCameras_ToInstantNGP_Result03

I got these messages when I ran the script:
[screenshot of the script's console output messages]

The dense point cloud that I generated in Meshroom a few nodes further down the line seems perfectly fine, so I do not really doubt the correctness of the generated camera positions, neither in Meshroom nor in your script, as can be seen in the InstantNGP screenshots above.
MeshroomDensePointcloud_Result01

I then generated the transforms.json file anew via the instantngp-batch procedure described here:
https://github.com/jonstephens85/instantngp-batch
That procedure first runs COLMAP to detect the camera positions anew and then uses the script colmap2nerf.py to generate the transforms.json file. It yields significantly better results. See below.
ColmapCamerasViaColmap2Nerf_ToInstantNGP_Result01
ColmapCamerasViaColmap2Nerf_ToInstantNGP_Result02
ColmapCamerasViaColmap2Nerf_ToInstantNGP_Result03

However, this way I have had to calculate the camera positions twice, once in Meshroom and once via COLMAP. I would prefer to do this only once, and your script looks promising if it could yield the same results as the COLMAP procedure.
I was wondering whether there could be a bug somewhere in your script? For example, I see that the volume the neural network considers is considerably larger when using the transforms.json file generated by your script than with the COLMAP-generated transforms.json file.
Looking forward to any answer you can give. Please let me know if you need any of my test files; I am more than happy to provide those.

For your convenience, I have already attached both transforms.json files, one generated via COLMAP and one via Meshroom and your script:
transforms_FromColmap.json.txt
transforms_FromMeshroom.json.txt
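
The scale difference between the two files can be seen by comparing the spread of the camera positions, i.e. the translation column of each transform_matrix. A minimal sketch of such a check, assuming the standard InstantNGP transforms.json layout and the attachments renamed to plain .json files:

```python
# Sketch: compare the spread of camera positions in the two transforms.json files.
import json
import numpy as np

def camera_extent(path):
    with open(path) as f:
        frames = json.load(f)["frames"]
    # The camera position is the last column of each 4x4 camera-to-world matrix.
    positions = np.array([np.array(fr["transform_matrix"])[:3, 3] for fr in frames])
    return positions.max(axis=0) - positions.min(axis=0)

print("Meshroom:", camera_extent("transforms_FromMeshroom.json"))
print("COLMAP:  ", camera_extent("transforms_FromColmap.json"))
```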

Kind regards
Paulus

joreeves (Owner) commented Aug 11, 2023

@presentp Could you share the sfm.json from Meshroom? Something that immediately sticks out to me is that the scene scale in the first is different from the second. You may need to change --aabb_scale or --scale to correct this.

presentp (Author) commented Aug 11, 2023

@joreeves Hereby the sfm.json file: sfm.json.txt
Indeed, I set --aabb_scale to 64 when using your converter to try to change the results, but it had no effect. The default value of 16 yields the same results.
What is the effect of the --scale value?

joreeves (Owner) commented:

@presentp The --scale will change the size of the scene. The default value is 1; a larger value will increase the size. I will take a look at the sfm file and see if there is a discernible issue that is causing the differences in the solve.
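
To illustrate what such a flag typically does in this kind of converter, a rough sketch under that assumption (not the exact code from this repository): the scale factor is applied to the translation part of each camera-to-world matrix before the frame is written to transforms.json.

```python
import numpy as np

def apply_scale(transform_matrix, scale=1.0):
    """Scale the camera position (translation column) of a 4x4 camera-to-world matrix."""
    c2w = np.array(transform_matrix, dtype=float)
    c2w[:3, 3] *= scale  # scale > 1 spreads the cameras apart, making the scene larger
    return c2w.tolist()
```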

presentp (Author) commented Aug 11, 2023

@joreeves OK, in case you need the 17 reference pictures, I have included them below in two parts. Thanks for your help!
_Images01.zip
_Images02.zip

presentp (Author) commented:

@joreeves Hi, I tested different --scale settings: 0.01, 100 and 10. A value of 10 seemed to provide a decent bound for the rays. However, the unit box and the camera symbols become very small, as you can see in the pictures below, and the problem does not seem to be addressed by this tweaking. I have the impression that there may be a wrong mapping of camera pictures to camera positions. For example, it is very strange that the top row of images generates colored rays from each camera origin instead of contributing to a merged radiance model. I have uploaded the '.ingp' and 'transforms.json' files with which you can see my result in InstantNGP via this WeTransfer link: https://we.tl/t-fIXOHF9GzZ
MeshroomCameras_ToInstantNGP_Scale10_01
MeshroomCameras_ToInstantNGP_Scale10_02

avclubvids commented:

Are the cameras solving correctly in both COLMAP and Meshroom? You took a horizontal line of photos at two distances from the building, yes? If the camera poses look pretty good then the issue is probably elsewhere. I see this error in your run: "PoseId 376357150 not found in transforms, skipping image: IMG_3315" - is it possible that one image is not solving correctly and throwing the others off?
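
To narrow that down, the views without a solved pose can be listed directly from the Meshroom export. A small sketch, assuming the standard AliceVision sfm.json layout with "views" and "poses" arrays (such a mismatch is what triggers the warning quoted above):

```python
# Sketch: list views in the Meshroom sfm.json that have no solved pose.
import json

with open("sfm.json") as f:
    sfm = json.load(f)

solved = {p["poseId"] for p in sfm.get("poses", [])}
for view in sfm.get("views", []):
    if view.get("poseId") not in solved:
        print("No pose solved for", view["path"])
```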


Thomacdebabo commented Nov 17, 2023

OK, so it has to do with the intrinsics from Meshroom not working properly; if I substitute the intrinsics from COLMAP, it works fine. One thing I noticed is that Meshroom always exports "radial3" intrinsics, which only offer k1, k2, k3, while COLMAP exports k1, k2 and p1, p2... not sure if that might cause an issue.

Another issue I found has to do with the calculation of the camera angles, where the * 2 should be inside the bracket and not outside. Corrected lines:

camera_angle_x = math.atan(out["w"] / (out["fl_x"] * 2)) * 2  # the * 2 was outside the brackets
camera_angle_y = math.atan(out["h"] / (out["fl_y"] * 2)) * 2

Also, for some reason my k3 value was way off, so setting k3 = 0 improved the reconstruction a lot.
Overall, the best solution would probably be to use COLMAP to get the intrinsics and use those instead of the Meshroom ones.

Update: after some testing I found that fixing the bug in the camera_angle calculation doesn't seem to change anything, so my guess is that those values are not used by InstantNGP after all.

And I was able to resolve the issue altogether by setting out["is_fisheye"] = True; in that case k3 does not need to be set to 0.
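
For anyone who wants to try this workaround without modifying the converter, the generated transforms.json can also be patched after the fact. A minimal sketch; the k3 line is the alternative workaround and is left commented out:

```python
# Sketch: apply the workaround above to an already generated transforms.json.
import json

with open("transforms.json") as f:
    out = json.load(f)

out["is_fisheye"] = True   # treat the distortion model as fisheye
# out["k3"] = 0.0          # alternative: zero out the third radial coefficient instead

with open("transforms.json", "w") as f:
    json.dump(out, f, indent=2)
```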
