The update of tsdf with dynamic scene is too slow. #97
-
Hello, guys. Thanks for your great work! I want to use an RGB-D depth map with curobo, and I used the tsdf integrator as described in issue #92. The quality of the SDF mapping has improved a lot compared to 'occupancy', but the update for dynamic objects has become very slow. As shown in the video below, the tsdf is updated quickly when objects are removed, but when new objects are added or moved it takes 5-10 seconds to complete the update. This delay is too long and makes the robotic arm unusable in dynamic tasks and scenes.

20231223-200123.mp4

I am not familiar with nvblox and didn't find any adjustable parameters. Is this an inherent flaw of the nvblox algorithm? Do you have any opinions or suggestions? Thanks again.
-
If you check the code here: we only update the esdf every 5 simulation steps. Removing that if condition will make it update at the same rate as the camera window.
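The gating pattern described above can be sketched as follows (the names `should_update_esdf` and `ESDF_UPDATE_PERIOD` are illustrative, not curobo's actual API):

```python
# Sketch of the "update the esdf every N simulation steps" gate.
# ESDF_UPDATE_PERIOD is a hypothetical name for the "every 5 steps" condition.
ESDF_UPDATE_PERIOD = 5

def should_update_esdf(step_index: int, period: int = ESDF_UPDATE_PERIOD) -> bool:
    # True only once every `period` steps; with period == 1 the esdf
    # refreshes at the same rate as the camera window.
    return step_index % period == 0

updated_steps = [s for s in range(10) if should_update_esdf(s)]
print(updated_steps)  # -> [0, 5]
```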
-
In my code I update the esdf at each simulation step. You can see at the 14th second of the video that the tsdf responds immediately when the object disappears. But when I put the object in, the tsdf's response is particularly slow; the correct response takes as long as 5-10 seconds, which already spans dozens of simulation steps.
-
Sorry, I misremembered: I do use a time step to update the esdf, so the update frequency is ten times lower than the simulation step. The simulation-step code:

```python
def run(self):
    while simulation_app.is_running():
        self.world.step(render=True)
        step_index = self.world.current_time_step_index
        if step_index % 10 == 0:
            # get frame data
            data = self.cam.get_frame()
            camera_to_ee = data['camera_pose']
            camera_pose = self.get_camera_pose(camera_to_ee)
            camera_pose = format_pose.from_transforma_mat(camera_pose)
            # update esdf
            voxels = self.planner.update_rgbd(
                depth=data['depth'],
                intrinsics=data['intrinsics'],
                camera_pose=camera_pose,
            )
            # visualization -- simulator
            self.visual_voxels(voxels)
            # visualization -- opencv window
            depth_image = data['raw_depth']
            color_image = data['rgb']
            depth_colormap = cv2.applyColorMap(
                cv2.convertScaleAbs(depth_image, alpha=100), cv2.COLORMAP_JET
            )
            images = np.hstack((color_image, depth_colormap))
            cv2.namedWindow("Align Example", cv2.WINDOW_NORMAL)
            cv2.imshow("Align Example", images)
            key = cv2.waitKey(1)
    print("finished program")
    simulation_app.close()
```
The VoxelManager code:

```python
self.voxel_viewer = VoxelManager(2000, size=0.03)

def visual_voxels(self, voxels):
    if voxels.shape[0] > 0:
        voxels = voxels[voxels[:, 2] > 0]
        if voxels.shape[0] > 2000:
            voxels = voxels[::(voxels.shape[0] // 2000), :]
        voxels = voxels.cpu().numpy()
        self.voxel_viewer.update_voxels(voxels[:, :3])
```
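One small caveat with the stride-based downsampling above: because the stride is computed with integer division, the result can still exceed the 2000-voxel budget. A minimal sketch (using a hypothetical `downsample` helper mirroring the logic in `visual_voxels`):

```python
import numpy as np

# Stride-based downsampling as in visual_voxels above. Integer division
# rounds the stride down, so the output can overshoot the budget.
def downsample(voxels: np.ndarray, budget: int = 2000) -> np.ndarray:
    if voxels.shape[0] > budget:
        voxels = voxels[::(voxels.shape[0] // budget), :]
    return voxels

pts = np.zeros((5000, 3))
# stride = 5000 // 2000 = 2, so 2500 rows survive, not 2000
print(downsample(pts).shape[0])  # -> 2500
```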
You are missing a decay layer call. Add this as the first line of self.planner.update_rgbd():

```python
world_model.decay_layer("world")
```

This lets nvblox know that you are moving forward in time and should account for dynamic changes in the scene.
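A minimal sketch of where that call goes relative to depth integration. `WorldModelStub` and `integrate_depth` are stand-ins for the real curobo/nvblox world model (only `decay_layer("world")` comes from the suggestion above); the point is the ordering: decay first, then integrate the new frame.

```python
# Sketch: decay the "world" layer before integrating each new depth frame,
# so stale voxels from moved objects are aged out over time.
# WorldModelStub is a hypothetical stand-in for the real world model.
class WorldModelStub:
    def __init__(self):
        self.calls = []

    def decay_layer(self, name):
        self.calls.append(("decay_layer", name))

    def integrate_depth(self, depth, intrinsics, camera_pose):
        self.calls.append(("integrate_depth",))

def update_rgbd(world_model, depth, intrinsics, camera_pose):
    # First line: tell nvblox that time has advanced
    world_model.decay_layer("world")
    world_model.integrate_depth(depth, intrinsics, camera_pose)

wm = WorldModelStub()
update_rgbd(wm, depth=None, intrinsics=None, camera_pose=None)
print([c[0] for c in wm.calls])  # -> ['decay_layer', 'integrate_depth']
```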