This project allows reading the orientation of the right arm as estimated by three different sensors: a smartwatch, a Kinect and a Leap Motion. The controller collects these data and computes the linear and angular velocities to be sent to the robot in order to move it.
The hardware consists of a smartwatch, a Kinect, a Leap Motion and the robot we want to move. The modules are written in C++ or Python.
The architecture is composed of three sensors that gather information about the arm and interface with the PC through their respective drivers. The adapter nodes receive the orientation data from their sensors, convert them (each in its own way) into RPY data and send everything to the controller. The controller receives RPY angles, which are easier to interpret than quaternions, converts them into linear and angular velocities, averages them and sends the resulting velocity to the robot.
This module takes as input the tf transforms from the Kinect data preprocessed by the openni_tracker module, and gives as output the corresponding RPY (roll-pitch-yaw) data.
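As an illustration, here is a minimal sketch of such an adapter. The frame names ("right_elbow_1", "right_hand_1") and the output topic "kinect_rpy" are assumptions for this example; the actual node in src/kinect_listener may use different names.

```python
#!/usr/bin/env python
# Minimal sketch of a Kinect adapter node: look up a tf transform of the arm
# published by openni_tracker and republish it as RPY angles.
# Frame and topic names are illustrative assumptions.
import rospy
import tf
from geometry_msgs.msg import Vector3

def main():
    rospy.init_node('kinect_adapter_sketch')
    listener = tf.TransformListener()
    pub = rospy.Publisher('kinect_rpy', Vector3, queue_size=10)
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        try:
            # Orientation of the hand with respect to the elbow, as a quaternion
            _, quat = listener.lookupTransform('right_elbow_1', 'right_hand_1', rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rate.sleep()
            continue
        # Convert the quaternion into roll-pitch-yaw angles
        roll, pitch, yaw = tf.transformations.euler_from_quaternion(quat)
        pub.publish(Vector3(roll, pitch, yaw))
        rate.sleep()

if __name__ == '__main__':
    main()
```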
This module takes as input the images captured by the device's two cameras, which are analyzed to reconstruct a 3D representation of what the Leap Motion sees. Tracking algorithms interpret the 3D data and infer the pitch and the roll, even of partially occluded objects.
The smartwatch module takes as input the data sent by the smartwatch, filtered by the Complementary Filter Node, and gives as output the corresponding RPY (roll-pitch-yaw) data.
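A minimal sketch of this kind of adapter is shown below, assuming the filtered data arrive as sensor_msgs/Imu messages on a hypothetical "imu/data" topic and are republished as RPY on a hypothetical "watch_rpy" topic; the real node may use different names.

```python
#!/usr/bin/env python
# Minimal sketch of the smartwatch adapter: convert the filtered IMU
# orientation (a quaternion) into roll-pitch-yaw angles.
# Topic names are illustrative assumptions.
import rospy
from sensor_msgs.msg import Imu
from geometry_msgs.msg import Vector3
from tf.transformations import euler_from_quaternion

def imu_callback(msg, pub):
    q = msg.orientation
    # The filtered orientation arrives as a quaternion; convert it to RPY
    roll, pitch, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    pub.publish(Vector3(roll, pitch, yaw))

def main():
    rospy.init_node('smartwatch_adapter_sketch')
    pub = rospy.Publisher('watch_rpy', Vector3, queue_size=10)
    rospy.Subscriber('imu/data', Imu, imu_callback, callback_args=pub)
    rospy.spin()

if __name__ == '__main__':
    main()
```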
The controller module takes as input the RPY data sent by the three sensor adapters and converts them into linear and angular velocities. It then computes the weighted average of the velocities available and gives as output the actual velocity command for the robot.
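The core idea can be sketched as follows. The topic names, gains and equal weights are illustrative assumptions, not the values used by the actual controller node; pitch is mapped to the linear velocity and roll to the angular velocity for the sake of the example.

```python
#!/usr/bin/env python
# Minimal sketch of the controller logic: collect RPY from the available
# sensors, derive a velocity from each, take a weighted average and publish
# it as a Twist on cmd_vel. Names, gains and weights are assumptions.
import rospy
from geometry_msgs.msg import Vector3, Twist

K_LIN, K_ANG = 0.5, 1.0                     # illustrative gains
WEIGHTS = {'kinect': 1.0, 'leap': 1.0, 'watch': 1.0}
latest = {}                                 # latest RPY received from each sensor

def make_callback(name):
    def cb(msg):
        latest[name] = msg
    return cb

def main():
    rospy.init_node('controller_sketch')
    for name, topic in [('kinect', 'kinect_rpy'), ('leap', 'leap_rpy'), ('watch', 'watch_rpy')]:
        rospy.Subscriber(topic, Vector3, make_callback(name))
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        if latest:
            total = sum(WEIGHTS[n] for n in latest)
            cmd = Twist()
            # Weighted average of the velocities derived from each available sensor
            cmd.linear.x = sum(WEIGHTS[n] * K_LIN * rpy.y for n, rpy in latest.items()) / total
            cmd.angular.z = sum(WEIGHTS[n] * K_ANG * rpy.x for n, rpy in latest.items()) / total
            pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    main()
```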
During the test phase the simulator listens to the topic /cmd_vel and a simulated hrp_automower moves in Gazebo according to the messages received.
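During a test you can inspect the velocity commands being published with:
rostopic echo /cmd_vel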
The objective of this modules is to create a 3D pointcloud map from the images acquired by a Microsoft Kinect in a ROS environment on Linux, to transmit it to a Windows based Unity project which will tweak and improve the map in order to make it more user-friendly before sending it to the Oculus visor weared by the user. The Kinect could be even mounted on a moving robot in order to create a real-time dynamic map of its surrounding.
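On the ROS side, the point cloud is available as sensor_msgs/PointCloud2 messages. The sketch below only logs the number of valid points; it assumes the topic name /openni/depth_registered/points (matching the camera:=openni namespace used later in this README), while the actual module forwards the cloud to Unity.

```python
#!/usr/bin/env python
# Minimal sketch of reading the Kinect point cloud on the ROS side.
# The topic name is an assumption based on the camera:=openni namespace.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def cloud_callback(cloud):
    # Iterate over the XYZ points, skipping invalid (NaN) depth readings
    points = list(pc2.read_points(cloud, field_names=('x', 'y', 'z'), skip_nans=True))
    rospy.loginfo('Received a cloud with %d valid points', len(points))

def main():
    rospy.init_node('pointcloud_listener_sketch')
    rospy.Subscriber('/openni/depth_registered/points', PointCloud2, cloud_callback)
    rospy.spin()

if __name__ == '__main__':
    main()
```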
Hardware prerequisites:
- LG G Watch R W110
- LG G6 H870
- Husqvarna Automower
Software prerequisites:
- ROS Kinetic, to download it follow this guide.
- Imu Stream, a set of Android applications (mobile and wear) to stream IMU data from the smartwatch to an MQTT broker. For more features follow this guide.
- Mosquitto on Ubuntu, to download it follow this guide.
- GAZEBO robotic simulator for ROS, to download it follow this guide.
- Clone this repository into your workspace with the command
git clone
- Follow the README in the src/kinect_listener folder
- Follow the README in the src/leap_teleop folder
- Follow the README in the src/mqtt_ros_bridge folder
- (ONLY IF YOU WANT TO SIMULATE ON GAZEBO) For the simulation of the Husqvarna Automower on GAZEBO, install all the dependencies (for more info about this part look at this guide):
sudo apt-get install ros-kinetic-gazebo-ros-control
sudo apt-get install ros-kinetic-joint-state-controller
sudo apt-get install ros-kinetic-hector-gazebo-plugins
sudo apt-get install ros-kinetic-hector-gazebo
sudo apt-get install python-pygame
- Set up the model path
export GAZEBO_MODEL_PATH=[your path]/src/haro/am_gazebo/models:$GAZEBO_MODEL_PATH
- Follow the README in the Unity branch
Note: the Kinect-Unity-Oculus interface section is still on the dedicated "Unity" branch since it must be cloned on a different machine than the one above. The complete README can be found on the above-mentioned branch.
- Compile your workspace
catkin_make
- Kinect:
(Terminal 1) roslaunch openni_launch openni.launch camera:=openni
- Smartwatch: check the Mosquitto broker status.
sudo service mosquitto status
- Start the Mosquitto broker (if the broker is already active skip this step).
(Terminal 2) mosquitto
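To verify that the smartwatch IMU data actually reach the broker, you can subscribe to all MQTT topics (the exact topic name depends on how the Imu Stream app is configured):
mosquitto_sub -v -t '#'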
- Leap Motion:
(Terminal 3) LeapControlPanel
(Terminal 4) roslaunch leap_motion sensor_sender.launch
- In another terminal tab launch the controller and all the other nodes (inside the launch file you can comment out the components that are not needed, for example those for the simulation)
(Terminal 5) roslaunch controller controller.launch
- To start the simulation on Gazebo
(Terminal 6) roslaunch am_gazebo am_gazebo_hrp.launch gui:=true
- To run the Kinect-Unity-Oculus side follow the guide in the Unity branch
The three sensors were fully tested and we can conclude that the Husqvarna Automower is fully controllable through them. To look at the simulation developed during the test phase follow this link. All three modules of the Kinect-Unity-Oculus interface have been thoroughly tested and have proven to be fully working. The final implementation allows the user to visualize the entire map in a realistic and dynamic way, while the virtual environment keeps expanding as the robot explores its surroundings. You can see the implemented Kinect-Unity-Oculus architecture working in the following videos:
Kinect - Unity - Oculus interface Video 1
Kinect - Unity - Oculus interface Video 2
During the test phase some issues arose; one of them is that the connection between smartwatch, smartphone and computer introduces a considerable delay, which delays the movement of the robot with respect to the smartwatch orientation.
- Noel Alejandro Avila Campos: [email protected]
- Nicola De Carli: [email protected]
- Angelica Ginnante: [email protected]
- Adam Berka: [email protected]
- Nicolas Dejon: [email protected]
- Enrico Casagrande: [email protected]
- Alberto Ghiotto: [email protected]
- Alberto Grillo: [email protected]
- Claudio Curti: [email protected]
- Francesca Cantoni: [email protected]