
Align Depth ROS Topic #31

Open
Dirodgu opened this issue Apr 16, 2021 · 3 comments
Dirodgu commented Apr 16, 2021

Hi everyone!

I'm wondering if it would be possible to use the align_depth topic from the original ROS realsense drivers with this package. I need it to use the color camera image aligned with the depth image in Gazebo.

Thanks in advance for any help!

@Drojas251

I too am interested in this question. If the align_depth topic is not available, do you have any suggestions for aligning the RGB to Depth image ourselves?

Thank you!


Dirodgu commented Apr 25, 2021

Hi again,

I've been searching for a way to use the aligned_depth_to_color topics and I found two solutions. First of all, I have to say that I haven't found any way to publish aligned_depth_to_color topics directly using the pal-robotics/realsense_gazebo_plugin package (please, if anyone knows I'm wrong, feel free to correct me). Nevertheless, the solutions I found are described below:

  1. The first solution is based on hardcoding the files of the realsense-ros packages so that the field of view and optical frame position parameters of the depth camera in the _d435.urdf.gazebo.xacro file match those of the color camera. To apply this solution it is recommended to clone the package from the repo and start your own branch, or you can simply fork the original package. This solution is not really recommended because it replaces the depth topics with fake aligned_depth_to_color topics, but it is the fastest solution I've thought of.

  2. The second solution is based on developing a new ROS node that subscribes to /color/camera_info, /color/image_raw, /depth/camera_info and /depth/image_raw. This way you can obtain the color and unaligned depth images, as well as the extrinsics and camera calibration parameters, from the published ROS information. Then you can implement an algorithm for aligning both images and publish the result on your ROS network as an aligned_depth_to_color topic. I did not test this solution because I had no time to do so. However, it seems to me that it is the most accurate way to obtain an aligned_depth_to_color image topic using the pal-robotics package.
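For the second solution, the core registration step such a node would perform can be sketched with plain numpy: deproject each depth pixel to a 3D point with the depth intrinsics, transform it by the depth-to-color extrinsics, and reproject it with the color intrinsics. This is only a minimal sketch, assuming pinhole intrinsics as published in the camera_info messages and a known rotation/translation between the two optical frames; the function name and arguments are hypothetical, and occlusion handling (keeping the nearest point per pixel) is omitted:

```python
import numpy as np

def align_depth_to_color(depth, K_depth, K_color, R, t, color_shape):
    """Reproject a depth image into the color camera's image plane.

    depth:       (H, W) depth in meters, 0 = no data
    K_depth/K_color: 3x3 pinhole intrinsic matrices (from camera_info)
    R, t:        rotation (3x3) and translation (3,) from depth to color frame
    color_shape: (H_c, W_c) of the color image
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    # Deproject depth pixels to 3D points in the depth optical frame.
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])
    pts = np.linalg.inv(K_depth) @ pix * z
    # Transform into the color optical frame and project with color intrinsics.
    pts_c = R @ pts + t[:, None]
    proj = K_color @ pts_c
    zc = proj[2]
    zc_safe = np.where(zc > 0, zc, 1.0)  # avoid divide-by-zero on empty pixels
    u_c = np.round(proj[0] / zc_safe).astype(int)
    v_c = np.round(proj[1] / zc_safe).astype(int)
    # Scatter depth values into the color-sized image (no occlusion test here).
    aligned = np.zeros(color_shape)
    in_img = (valid & (zc > 0) & (u_c >= 0) & (u_c < color_shape[1])
              & (v_c >= 0) & (v_c < color_shape[0]))
    aligned[v_c[in_img], u_c[in_img]] = zc[in_img]
    return aligned
```

With identity extrinsics and matching intrinsics, the output should simply reproduce the input depth, which is a useful sanity check before plugging in the real calibration.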

I hope this answer helps other users of this package in the future too!

@nickswalker

If you (like me) are thinking "the plugin publishes an RGB point cloud, so surely it aligns depth and color internally": it does not. This plugin is based on the Kinect ROS Gazebo plugin and inherits its code for publishing the point cloud, which, as far as I can tell, does not map color over correctly. It acts as though the color data is in the depth optical frame and maps it straight on top, ignoring the fact that these cameras are offset. It does not reproject the depth data. I couldn't find anyone directly reporting this issue, but this request was close: ros-simulation/gazebo_ros_pkgs#415
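The size of the error from ignoring that offset is easy to estimate: for a pinhole camera, a point at depth z seen through a camera displaced by baseline b appears shifted by roughly f·b/z pixels, so the naive overlay is worst for close objects and nearly unnoticeable far away. A small sketch (f and b here are illustrative values, not the D435's calibrated numbers):

```python
def color_pixel_offset(focal_px, baseline_m, depth_m):
    """Approximate pixel shift caused by treating the color image as if it
    were taken from the depth camera's optical frame (disparity formula)."""
    return focal_px * baseline_m / depth_m

# Illustrative numbers: f = 500 px, baseline = 15 mm.
# ~15 px of misalignment at 0.5 m, but only ~1.5 px at 5 m.
near = color_pixel_offset(500.0, 0.015, 0.5)
far = color_pixel_offset(500.0, 0.015, 5.0)
```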

To get both an aligned_depth_to_color/image_raw and a depth/color/points that has the correct color information, disable the plugin's pointcloud in your URDF, then use depth_image_proc (adjust namespace as appropriate for your setup):

<group ns="camera">
  <node pkg="nodelet" type="nodelet" name="nodelet_manager" args="manager" />

  <node pkg="nodelet" type="nodelet" name="nodelet1"
        args="load depth_image_proc/register nodelet_manager">
    <remap from="rgb/camera_info" to="color/camera_info"/>
    <remap from="depth/camera_info" to="depth/camera_info"/>
    <remap from="depth/image_rect" to="depth/image_raw"/>
    <remap from="depth_registered/camera_info" to="aligned_depth_to_color/camera_info"/>
    <remap from="depth_registered/image_rect" to="aligned_depth_to_color/image_raw"/>
  </node>

  <node pkg="nodelet" type="nodelet" name="nodelet2"
        args="load depth_image_proc/point_cloud_xyzrgb nodelet_manager">
    <remap from="rgb/image_rect_color" to="color/image_raw"/>
    <remap from="rgb/camera_info" to="color/camera_info"/>
    <remap from="depth_registered/camera_info" to="aligned_depth_to_color/camera_info"/>
    <remap from="depth_registered/image_rect" to="aligned_depth_to_color/image_raw"/>
    <remap from="depth_registered/points" to="depth/color/points"/>
  </node>
</group>

The plugin's incorrectly colored point cloud (note the soup can texture mapping onto the background):
[screenshot]

And using depth_image_proc to produce the correct registered depth and point cloud:
[screenshot]

Not point-for-point comparable because the can position is different, but you get the idea.

nickswalker added a commit to hcrlab/hcrl_gazebo that referenced this issue Apr 14, 2022
Disable realsense gazebo plugin pointcloud simulation
The plugin doesn't actually register the depth data to the RGB frame, which causes the color information to misalign with depth boundaries, especially when the object is close to the camera.
depth_image_proc also produces the intermediate "aligned_depth_to_color" topics, like the real camera does.
See pal-robotics/realsense_gazebo_plugin#31