This repository implements a dynamic visual servo controller for a wide range of objects. It is the code implementation of the paper at https://drive.google.com/file/d/1aolkaKbTb5v53AI-93A_E8BlWVKve1Hy/view?usp=sharing (under review).
- We use conda to manage the Python environment; create it from `environment.yaml` (e.g., `conda env create -f environment.yaml`);
- Install pybullet-object-models following https://github.com/eleramp/pybullet-object-models;
- For the real-world experiments, install frankapy and franka-interface following https://iamlab-cmu.github.io/frankapy/index.html and https://github.com/iamlab-cmu/franka-interface.
- The models and src folders contain the robot simulation models;
- The trainNN folder contains the scripts that train the NN (controller); run train.py after changing the dataset directory to match yours;
- The scripts folder contains the code that runs the simulator environment and the real-world experiments:
-- run_camera.py launches the RealSense camera and publishes images to a ROS topic (a minimal sketch follows this list);
-- run_model.py runs the trained model and publishes its output to a ROS topic;
-- toRobot_joint.py receives the output from run_model.py and uses frankapy to control the real robot;
-- dynamic_visual_servo.py launches a pybullet simulator that verifies the performance in simulation. Please note that we do not implement the gripper part, but its output can be recorded if needed.
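For reference, here is a minimal sketch of the camera-publishing pattern run_camera.py follows, assuming a RealSense color stream relayed to ROS via cv_bridge; the resolution, frame rate, and topic name below are placeholders, not necessarily the ones the script uses:

```python
# Minimal sketch: grab RealSense color frames and publish them to a ROS topic.
# The stream settings and topic name here are assumptions.
import numpy as np
import pyrealsense2 as rs
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node('run_camera')
pub = rospy.Publisher('/camera/color/image_raw', Image, queue_size=1)
bridge = CvBridge()

# Start the RealSense color stream.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

rate = rospy.Rate(30)
try:
    while not rospy.is_shutdown():
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        if not color:
            continue
        img = np.asanyarray(color.get_data())
        pub.publish(bridge.cv2_to_imgmsg(img, encoding='bgr8'))
        rate.sleep()
finally:
    pipeline.stop()
```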
- The runNN folder contains the code used for NN inference;
- The yolov5 folder contains the code that detects the bounding boxes of target objects (a minimal detection sketch follows this list);
- The kinematics folder contains the frankapy kinematics, such as IK.
They can be found at https://drive.google.com/drive/folders/1lPT0FiArjju4aQTpktE_4pZ7mRmUr6sX?usp=sharing
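For reference, a minimal detection sketch using yolov5's public torch.hub interface with the stock yolov5s weights; the yolov5 folder in this repository may use custom weights and its own entry point:

```python
# Minimal sketch: detect bounding boxes with a stock yolov5 model.
# The weights and image path are placeholders, not the repository's setup.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # downloads weights on first use
results = model('target_scene.jpg')                      # accepts a path, URL, or numpy image
boxes = results.xyxy[0]  # each row: x1, y1, x2, y2, confidence, class index
print(boxes)
```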
- For simulation: just run dynamic_visual_servo.py;
- For the real world: in our case we use two PCs. One controls the Franka; on it we launch run_camera.py and frankapy. On the other PC, with a 1080Ti GPU, we run run_model.py and toRobot_joint.py, which run the NN and send out the control signals (a minimal sketch follows). We use ROS for the communication between the two PCs.
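A minimal sketch of the toRobot_joint.py pattern, under the assumption that run_model.py publishes joint targets as a std_msgs/Float64MultiArray on a topic such as /visual_servo/joint_cmd (both are illustrative names; the actual script may stream commands differently):

```python
# Minimal sketch: receive NN output over ROS and forward it to the robot
# via frankapy. Topic name and message type are assumptions.
import rospy
from std_msgs.msg import Float64MultiArray
from frankapy import FrankaArm

fa = FrankaArm()  # frankapy initializes its own ROS node here

def on_joint_cmd(msg):
    # msg.data holds the target joint angles predicted by the NN.
    # Blocking inside the callback keeps the sketch simple; a real
    # controller would stream commands instead.
    fa.goto_joints(list(msg.data), duration=1.0)

rospy.Subscriber('/visual_servo/joint_cmd', Float64MultiArray, on_joint_cmd, queue_size=1)
rospy.spin()
```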
- To increase the inference speed in the real world, we precompute and cache the output of the goal-image NN branch, since the goal image is fixed;
- In the real world, we halve the velocity in the Z direction to account for real-time latency (both tweaks are sketched below).
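A minimal sketch of both tweaks, using a hypothetical two-branch PyTorch controller; the module names, input sizes, and output convention are illustrative and not the repository's actual API:

```python
# Minimal sketch: cache the goal-image branch output and halve the Z velocity.
# TwoBranchController is a hypothetical stand-in for the repository's NN.
import torch
import torch.nn as nn

class TwoBranchController(nn.Module):
    def __init__(self):
        super().__init__()
        self.goal_branch = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(), nn.Flatten(), nn.LazyLinear(64))
        self.current_branch = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(), nn.Flatten(), nn.LazyLinear(64))
        self.head = nn.Linear(128, 6)  # 6-DoF velocity command

def camera_stream():
    # Placeholder for the live camera feed.
    for _ in range(3):
        yield torch.zeros(1, 3, 64, 64)

model = TwoBranchController().eval()
goal_image = torch.zeros(1, 3, 64, 64)

with torch.no_grad():
    # The goal image is fixed, so its branch runs once and the feature is cached.
    goal_feat = model.goal_branch(goal_image)

with torch.no_grad():
    for current_image in camera_stream():
        cur_feat = model.current_branch(current_image)
        vel = model.head(torch.cat([goal_feat, cur_feat], dim=1))
        vel[0, 2] *= 0.5  # halve the commanded Z velocity, as noted above
```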
Commercial use is prohibited. Drop an email to [email protected] if you are interested.