
rs.rs2_deproject_pixel_to_point does not work properly in Gazebo #12739

Closed
zyzgy opened this issue Mar 9, 2024 · 7 comments



zyzgy commented Mar 9, 2024

Required Info
Camera Model: D435i
Firmware Version: 2.54.2.0
Operating System & Version: Linux 22.04
Kernel Version (Linux Only): 6.5.0-21-generic
Platform: PC
SDK Version: { legacy / 2.. }
Language: Python
Segment: Robots

Issue Description

I used a D435i to measure the distance to an object with rs.rs2_deproject_pixel_to_point, and with the help of YOLOv8 it gave me accurate real-world coordinates of the object.

However, when I run the same code in Gazebo, it doesn't work.
For example, there is a person in front of my camera. The output is: person: x: 8.51 y: -0.7365765380859375 z: 1.6941261291503906

[image: Gazebo scene showing the person standing in front of the camera]

When I move the person two cells further away along the red line, the output is: person: x: 8.32 y: -0.48608860015869143 z: 1.2062197875976564

This confuses me; I think the value of x should increase, not decrease.
So my question is: is this a problem with the Gazebo software, or did I do something wrong?

zyzgy changed the title from "rs.rs2_deproject_pixel_to_point does not properly work in Gazebo" to "rs.rs2_deproject_pixel_to_point does not work properly in Gazebo" on Mar 9, 2024
MartyG-RealSense (Collaborator) commented

Hi @zyzgy In the RealSense coordinate system the positive X axis points to the right, as described at the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/d435i.md#sensor-origin-and-coordinate-system

From the perspective of standing behind the camera and looking forwards, instead of the diagram's perspective of looking at the front of the camera, it appears that the person would be stepping back in the -X direction and so reducing the X value.

[image: D435i sensor origin and coordinate-system diagram]
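
As a sanity check on that axis convention, rs2_deproject_pixel_to_point can be called with hand-filled intrinsics; a pixel to the right of the image centre should come back with positive X. A minimal sketch (the intrinsic values below are illustrative, not taken from a real D435i):

    import pyrealsense2 as rs

    # Illustrative pinhole intrinsics for a 640x480 stream.
    intr = rs.intrinsics()
    intr.width, intr.height = 640, 480
    intr.ppx, intr.ppy = 320.0, 240.0   # principal point (image centre)
    intr.fx, intr.fy = 600.0, 600.0     # focal lengths in pixels
    intr.model = rs.distortion.none
    intr.coeffs = [0.0] * 5

    # A pixel right of and above the image centre, 2 m away:
    # expect X positive (right), Y negative (up), Z forward.
    print(rs.rs2_deproject_pixel_to_point(intr, [480.0, 200.0], 2.0))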


zyzgy commented Mar 10, 2024

> Hi @zyzgy In the RealSense coordinate system the positive X axis points to the right, as described at the link below.
>
> https://github.com/IntelRealSense/librealsense/blob/master/doc/d435i.md#sensor-origin-and-coordinate-system
>
> From the perspective of standing behind the camera and looking forwards, instead of the diagram's perspective of looking at the front of the camera, it appears that the person would be stepping back in the -X direction and so reducing the X value.

Thank you so much for your reply! Sorry that I didn't explain my code clearly. Because I want to use the values in RViz, I converted the coordinates.

  real_coords = rs.rs2_deproject_pixel_to_point(self.intr, [center_x, center_y], depth_mm)
  # Remap from the RealSense optical frame (X-right, Y-down, Z-forward)
  # to the ROS convention (X-forward, Y-left, Z-up) for RViz.
  msg.position.x = real_coords[2] * depth_scale
  msg.position.y = -real_coords[0] * depth_scale
  msg.position.z = -real_coords[1] * depth_scale

  self.get_logger().info(f"{label_name}: x: {msg.position.x}  y: {msg.position.y}  z: {msg.position.z}")

So in my case, when x increases, it means z increases in the RealSense coordinate system.


MartyG-RealSense commented Mar 10, 2024

The RealSense ROS wrapper automatically converts the RealSense coordinate system (X-right, Y-down, Z-forward) into the standard ROS axes (X-forward, Y-left, Z-up) by multiplying the rotation quaternions, as described at IntelRealSense/realsense-ros#1099.

In your code, it looks as though you are assigning the values to the appropriate axes using the 0-2 index numbers, achieving the same conversion between the ROS and RealSense coordinate systems.
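
A minimal sketch of that index-based remap as a standalone function (the function itself is illustrative, not part of the wrapper):

    def optical_to_ros(point):
        # RealSense optical frame: X-right,   Y-down,  Z-forward.
        # ROS convention:          X-forward, Y-left,  Z-up.
        x_opt, y_opt, z_opt = point
        return (z_opt, -x_opt, -y_opt)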


zyzgy commented Mar 10, 2024

> The RealSense ROS wrapper automatically converts the RealSense coordinate system (X-right, Y-down, Z-forward) into the standard ROS axes (X-forward, Y-left, Z-up) by multiplying the rotation quaternions, as described at IntelRealSense/realsense-ros#1099.
>
> In your code, it looks as though you are assigning the values to the appropriate axes using the 0-2 index numbers, achieving the same conversion between the ROS and RealSense coordinate systems.

I'm sorry, I didn't understand what you meant.

  real_coords = rs.rs2_deproject_pixel_to_point(self.intr, [center_x, center_y], depth_mm)
  msg.position.x = real_coords[0] * depth_scale
  msg.position.y = real_coords[1] * depth_scale
  msg.position.z = real_coords[2] * depth_scale

  self.get_logger().info(f"{label_name}: x: {msg.position.x}  y: {msg.position.y}  z: {msg.position.z}")

Do I get a coordinate in ROS or a coordinate in the RealSense coordinate system with this code? What I'm trying to accomplish is to pass this coordinate into my Nav2 Costmap plugin to generate a specific costmap shape at the foot of a detected obstacle (e.g. a person).
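
For what it's worth, a common pattern for feeding such a detection into Nav2 is to stamp the point in the camera's optical frame and let tf2 transform it into the costmap's global frame, which handles both the axis convention and the robot's pose. A minimal sketch, assuming a standard tf2 setup and illustrative frame names:

    from rclpy.duration import Duration
    from geometry_msgs.msg import PointStamped
    from tf2_ros import Buffer, TransformListener
    import tf2_geometry_msgs  # noqa: F401 -- registers PointStamped with tf2

    # In the node's __init__ (sketch):
    #   self.tf_buffer = Buffer()
    #   self.tf_listener = TransformListener(self.tf_buffer, self)

    def to_map_frame(self, real_coords, stamp):
        pt = PointStamped()
        pt.header.frame_id = 'camera_color_optical_frame'  # assumed frame name
        pt.header.stamp = stamp
        pt.point.x, pt.point.y, pt.point.z = real_coords
        # tf2 applies the optical-to-ROS rotation and the camera's pose in 'map'.
        return self.tf_buffer.transform(pt, 'map', timeout=Duration(seconds=0.1))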


zyzgy commented Mar 11, 2024

Hi!
I found where the error was. When I use the RealSense camera in the real world, I subscribe to camera/camera/aligned_depth_to_color/image_raw, i.e. the color image and the depth image are aligned, so real_coords = rs.rs2_deproject_pixel_to_point(self.intr, [center_x, center_y], depth_mm) is correct. However, when I experimented in Gazebo, the color image and the depth image were not aligned, so the depth value passed to rs.rs2_deproject_pixel_to_point was wrong, and that's why I didn't get the correct result at all.

I found the following method on the Internet to align the color and depth images:


import pyrealsense2 as rs

# Start both streams on a physical camera.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)

stream_profile_color = profile.get_stream(rs.stream.color)
stream_profile_depth = profile.get_stream(rs.stream.depth)
color_intrs = stream_profile_color.as_video_stream_profile().get_intrinsics()
depth_intrs = stream_profile_depth.as_video_stream_profile().get_intrinsics()
print("color_intrs:", color_intrs)
print("depth_intrs:", depth_intrs)

# Align the depth frame to the color stream so that pixel (x, y)
# refers to the same point in both images.
align_to = rs.stream.color
align = rs.align(align_to)

x = 668
y = 508
frames = pipeline.wait_for_frames()
aligned_frames = align.process(frames)
aligned_depth_frame = aligned_frames.get_depth_frame()
depth_m = aligned_depth_frame.get_distance(x, y)  # depth in metres at (x, y)
point = rs.rs2_deproject_pixel_to_point(color_intrs, [x, y], depth_m)

But I don't think this works for the camera in the simulator. Is there another way?
Thank you!
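
Since the pyrealsense2 pipeline and rs.align can't attach to a simulated camera, one fallback is to do the pinhole deprojection by hand from the Gazebo depth image and its CameraInfo topic. A minimal sketch (names are illustrative; note that the pixel must index the depth image, which is exactly where the alignment problem bites):

    from sensor_msgs.msg import CameraInfo

    def deproject(u, v, depth_m, info: CameraInfo):
        # Inverse of the pinhole projection u = fx*X/Z + cx, v = fy*Y/Z + cy.
        # Assumes depth_m is in metres and (u, v) indexes the depth image.
        fx, fy = info.k[0], info.k[4]
        cx, cy = info.k[2], info.k[5]
        return ((u - cx) * depth_m / fx,
                (v - cy) * depth_m / fy,
                depth_m)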


zyzgy commented Mar 11, 2024

pal-robotics/realsense_gazebo_plugin#31

Hi, I found this. I think it works.

MartyG-RealSense (Collaborator) commented

Hi @zyzgy I'm pleased to hear that you found a solution with the Gazebo plugin. Thanks very much for the update!

zyzgy closed this as completed on Mar 17, 2024