How to use RGB-D depth map with cuRobo? #93
Replies: 14 comments
-
Note that our query spheres in (2) are twice the size of voxel_size; this could be reduced to exactly match the voxel size. Since we voxelize the world and store it as an SDF, we can only render up to the voxel_size. You should still see only a plane of cubes if the camera is pointing at an empty table. Did you change the camera pose in simulation?
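As a quick illustration of why a flat tabletop should come out as a single layer of cubes, here is a minimal numpy sketch (independent of cuRobo/nvblox) that voxelizes a thin, slightly noisy planar point cloud. The plane height and noise level are made-up numbers for the demo:

```python
import numpy as np

# Synthetic "tabletop": a thin plane at z = 0.41 m with ~1 mm of sensor noise.
rng = np.random.default_rng(0)
n = 5000
points = np.column_stack([
    rng.uniform(-0.3, 0.3, n),           # x
    rng.uniform(-0.3, 0.3, n),           # y
    0.41 + rng.normal(0.0, 0.001, n),    # z: plane + 1 mm noise
])

voxel_size = 0.02  # 2 cm voxels
voxel_idx = np.floor(points / voxel_size).astype(int)

# A thin plane should occupy a single voxel layer along z.
z_layers = np.unique(voxel_idx[:, 2])
print(z_layers)  # [20] -- one layer of cubes
```

If the rendered voxels instead fill a thick volume, the problem is in the map construction (or the depth data), not in the cube rendering.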
-
Thanks for the reply. Yes, I changed the camera pose in the simulator so that the converted point cloud covers the tabletop. I understand that the rendering uses voxels; my problem is that the ESDF itself, constructed from the depth map (which should describe a thin plane), looks wrong. It seems that when building the ESDF, the large region behind the thin plane is also included in some form, causing voxels to appear there. If I visualize the latter, I see a large, thick object composed of tiny cubes instead of a plane.
-
Can you share an image of the voxel manager rendering the point cloud?
-
Note that
-
2 & 3. To focus on visualizing the depth map, I did not add the ground and pillar to cuRobo's WorldConfig; I only added them in Isaac Sim.
-
I have dumped the data; you can load it and have a try if you are willing:

```python
import pickle

from curobo.types.camera import CameraObservation

with open('data_info.pkl', 'rb') as f:
    data = pickle.load(f)

depth_image = data['depth_image']
intrinsics = data['intrinsics']
pose = data['pose']

data_camera = CameraObservation(
    depth_image=depth_image, intrinsics=intrinsics, pose=pose
)
```
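Before handing a dump like this to nvblox, the depth map can be sanity-checked by back-projecting it with the standard pinhole model: a view of a tabletop should produce a thin sheet of points. A minimal sketch in plain numpy (not a cuRobo API; the intrinsics values below are hypothetical):

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a metric depth image to camera-frame 3D points (pinhole model)."""
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates, shape (h, w)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Synthetic check: a constant-depth image is a fronto-parallel plane.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 0.5)
pts = depth_to_points(depth, K)
print(pts[:, 2].max() - pts[:, 2].min())  # 0.0: all points lie on the z = 0.5 plane
```

If the back-projected cloud from the real depth image is already thick, the problem is in the depth data (noise, unit mismatch, or wrong intrinsics) rather than in the map integration.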
-
Looking at your voxel visualization, this can happen if your depth is very noisy. Can you share a stream of images in a pickle file? That will help me debug what's going on.
-
One other thing I observed is that your cube is very short compared to the table, so reducing the voxel size in nvblox can help. Try 0.005 for the voxel size here:
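The effect of voxel size on a short object comes down to quick arithmetic: an object of height h spans roughly h / voxel_size voxel layers, so a very short object can be swallowed by a single coarse layer. A tiny sketch (plain Python; the 2 cm cube height is a made-up example):

```python
import math

cube_height = 0.02  # hypothetical: a 2 cm tall cube sitting on the table

for voxel_size in (0.02, 0.005):
    layers = math.ceil(cube_height / voxel_size)
    print(f"voxel_size={voxel_size}: ~{layers} voxel layer(s) for the cube")
# At voxel_size=0.02 the cube occupies a single layer (hard to distinguish
# from the tabletop); at voxel_size=0.005 it occupies 4 layers.
```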
-
Previously, the TSDF integrator did not support dynamic scenes. We recently upgraded cuRobo to use nvblox version 0.0.5, which adds dynamic-scene support for TSDF. We haven't updated the examples yet; they will be updated in the next release. In general, we have found TSDF to work more reliably than occupancy with single-view images. This issue captures some good information on how to debug nvblox + cuRobo. If your issue is resolved, close this issue and we will convert it to a discussion so others can find it.
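As a rough intuition for why TSDF tends to behave better than occupancy on thin surfaces seen from a single view: each voxel along a camera ray stores a signed distance to the observed surface, truncated to a small band, so the surface is localized at the zero crossing instead of being smeared into a thick occupied region. A toy 1-D sketch (plain numpy; this is not the nvblox implementation, and the truncation of 4 voxels is an illustrative choice):

```python
import numpy as np

def tsdf_along_ray(surface_depth, voxel_centers, trunc):
    """Truncated signed distance for voxels along one camera ray.

    Positive in front of the surface, negative behind, clipped to +/- trunc.
    """
    sdf = surface_depth - voxel_centers  # distance from each voxel to the surface
    return np.clip(sdf, -trunc, trunc)

voxel_size = 0.005
centers = np.arange(0.0, 0.6, voxel_size)  # voxels from the camera out to 0.6 m
tsdf = tsdf_along_ray(surface_depth=0.4, voxel_centers=centers, trunc=4 * voxel_size)

# The zero crossing pins down the surface; voxels outside the truncation
# band saturate at +/- trunc instead of being marked "occupied".
surface_idx = int(np.argmin(np.abs(tsdf)))
print(centers[surface_idx])  # ~0.4 m, the observed surface depth
```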
-
I understand. Looking forward to future updates on your great work! |
-
Hi guys, thanks for your great work!!!
I want to use cuRobo with an RGB-D camera, using the API from the examples:
But even when the RGB-D camera points at a tabletop, where the point cloud should be very thin, the generated occupancy is always very bloated (a thick, roughly cube-shaped occupancy is generated instead of a plane).
I'm confused about this, and I haven't found any other API examples or documentation. Is there a more accurate way to model point clouds?