Real robot interface to connect with ManiSkill2
Runs camera capturing and visualization as separate processes, making the setup closer to using ROS
❗ This repo is still under heavy development, so the API may change without notice
From PyPI

```bash
python3 -m pip install -U real-robot
```

If you need the optional xarm dependency, you must install from the GitHub repo:

```bash
python3 -m pip install -U "real-robot[xarm] @ git+https://github.com/KolinGuo/RealRobot.git"
```
Calibrated camera poses are stored in `real_robot/assets/hec_camera_poses/` and loaded in `real_robot/sensors/camera.py`.
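Conceptually, each hand-eye-calibrated camera pose is a 4x4 homogeneous transform from the camera frame into the robot base frame. A minimal sketch of how such a pose is applied (not the repo's actual loading code; the values and variable names are illustrative assumptions):

```python
import numpy as np

# Assumed camera pose: identity rotation, camera mounted at (0.5, 0.0, 0.3)
# in the robot base frame.
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.5, 0.0, 0.3]

# A homogeneous point 1 m in front of the camera ...
point_cam = np.array([0.0, 0.0, 1.0, 1.0])
# ... mapped into the robot base frame by the calibrated transform.
point_base = T_base_cam @ point_cam
print(point_base[:3])  # -> [0.5 0.  1.3]
```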
- To capture Color/Depth/IR images from an RS camera:

  ```bash
  python3 -m real_robot.tools.rs_capture
  ```

- To get detailed device information of a connected RealSense device
  (e.g., supported stream configs, sensor intrinsics, extrinsics between sensors):

  ```bash
  python3 -m real_robot.tools.enumerate_rs_devices > info.log
  ```

- To convert a recorded ROS bag file and save the rgb & depth images into `.npz`:

  ```bash
  python3 -m real_robot.tools.convert_rosbag_to_frames <bag_path>
  ```

  To fix an unindexed rosbag recorded from `RSDevice`:

  ```bash
  apt install -y python3-rosbag
  rosbag reindex <bag_path>
  ```
- When controlling xArm7 with `motion_mode="cartesian_online"` and `wait=True`, sometimes the robot gets stuck indefinitely in `wait_move()` because `get_state()` returns the wrong state (i.e., it returns `state=1`, as if the robot were still in motion). A simple workaround is to briefly control the robot via UFACTORY Studio to get it unstuck.
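A defensive client-side mitigation is to bound the wait with a timeout instead of blocking forever. A minimal sketch, assuming only a callable that reports the robot state (the `get_state` argument here is a hypothetical stand-in, not the SDK's method binding):

```python
import time

def wait_until_idle(get_state, timeout=10.0, poll=0.05, moving_state=1):
    """Poll get_state() until it stops reporting `moving_state`,
    returning False instead of hanging if `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() != moving_state:
            return True   # robot no longer reports "in motion"
        time.sleep(poll)
    return False          # timed out; caller can retry or recover
```

On timeout the caller can, for example, log the stale state and fall back to manual recovery via UFACTORY Studio.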
0.1.0

- Added `SharedObject` to create/mount objects stored in `SharedMemory`
- Enabled `RSDevice` to run as a separate process (now `Camera` creates `RSDevice` as a separate process)
- Enabled `RSDevice` to record camera streams as a rosbag file
- Enabled `XArm7` to run as a separate process (for streaming robot states)
- Enabled `CV2Visualizer` and `O3DGUIVisualizer` to run as separate processes (for visualization)
- Added a default `FileHandler` to all loggers created through `real_robot.utils.logger.get_logger`
- Allowed enabling selected camera streams from `RSDevice` and `sensors.camera.Camera`
- Added the `sensors.simsense_depth.SimsenseDepth` class to generate depth images from stereo IR images
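`SharedObject` is this repo's own abstraction; the underlying create/mount pattern rests on Python's standard `multiprocessing.shared_memory`, which can be sketched as follows (single-process here for brevity):

```python
from multiprocessing import shared_memory

# "Create" side: allocate a named shared-memory block and write into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# "Mount" side: attach to the same block by name
# (in real use this happens in a different process).
mounted = shared_memory.SharedMemory(name=shm.name)
data = bytes(mounted.buf[:5])

mounted.close()   # detach the mounted view
shm.close()       # detach the creator's view
shm.unlink()      # free the block once no process needs it
```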
`real_robot.agents.xarm`

- Changed `XArm7` parameters for clarity (`safety_boundary` => `safety_boundary_mm`, `boundary_clip_eps` => `boundary_clip_mm`)
- Added `get_gripper_position()` to get the gripper opening width in mm or m
- Added a `gripper_speed` parameter to `set_action()` to control the gripper speed
`real_robot.utils.visualization.visualizer`

- Renamed `Visualizer` method `show_observation()` => `show_obs()`
`real_robot.sensors.camera`

- `CameraConfig` now accepts a `config` parameter
- Renamed `CameraConfig` parameter `parent_pose_fn` => `parent_pose_so_name`
`real_robot.utils.realsense`

- `RSDevice` now accepts `device_sn` instead of an `rs.device`
- `RSDevice` now accepts `config` as a parameter (`width`, `height`, `fps`) instead of `rs.config`
- Switched from `gym` to `gymnasium`
- Renamed all `control_mode` by removing the `pd_` prefix for clarity (no PD controller is used)

`real_robot.agents.xarm`

- `XArm7` will not clear the "Safety Boundary Limit" error automatically in `set_action()`
- For `motion_mode == "position"`, switched from using `set_tool_position()` to `set_position()`
- Enabled the gripper and set it to maximum speed in `reset()`
- Removed all loggers created as global variables (they would be created at import time, so their logs might not be saved under `REAL_ROBOT_LOG_DIR`)
- Bugfix in xArm-Python-SDK: enable `wait=True` for modes other than position mode
0.0.2

- Added `motion_mode` to the `XArm7` agent
- Added several `control_mode`: `pd_ee_pos`, `pd_ee_pose_axangle`, `pd_ee_delta_pose_axangle`, `pd_ee_pose_quat`, `pd_ee_delta_pose_quat`
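The `axangle` and `quat` variants differ only in how the end-effector rotation is parameterized. A stdlib sketch of the axis-angle → quaternion conversion (illustrative, not the repo's own implementation):

```python
import math

def axangle_to_quat(axis, angle):
    """Convert an axis-angle rotation (axis need not be normalized,
    angle in radians) to a unit quaternion (w, x, y, z)."""
    norm = math.sqrt(sum(a * a for a in axis))
    ux, uy, uz = (a / norm for a in axis)
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), ux * s, uy * s, uz * s)
```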