Conclusions
The 2018 edition just ended! The team finished in 65th position out of the 133 teams participating in the event. Now is the time to wrap up what has worked in the robot's system and what needs to be refactored, improved or re-thought ⚙️.
- While creating the strategy, make sure it works for any robot size, so that last-minute changes don't lead to a complete refactor of the previous strategies. Also try to keep the robot frame at the physical center of the robot, even if it is away from the odom frame: we had to redo the whole strategy on the PR even though we had a working one on the first prototype (see the navigation_navigator section for solutions).
- Use git tags when a feature is 100% working, so one can quickly troubleshoot whether a bug is a software problem or not.
- Please don't be shy about using loginfo / logwarn instead of logdebug. Crucial information that could point to horrible bugs shouldn't be hidden at debug level (looking at you, pathfinder's 20 seconds to find a path). See the sketch after this list.
- Use rviz to its fullest, it is VERY useful when testing in a rush.
- Follow the Project Roadmap if it exists! Many of these problems could have been avoided as they were written in the roadmap. Anyone can change it if and when it is needed, but a team needs organization.
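As a feel for the logging advice above, a minimal rospy sketch (the path computation is a placeholder for illustration): anything slow or crucial goes to loginfo / logwarn, because logdebug is hidden by default both on the console and in the run logs.

```python
#!/usr/bin/env python
# Minimal sketch of the log-level advice; compute_path is a stand-in.
import rospy

def compute_path():
    rospy.sleep(0.1)   # placeholder for the real (possibly very slow) search
    return []

if __name__ == "__main__":
    rospy.init_node("log_levels_demo")
    start = rospy.get_time()
    compute_path()
    elapsed = rospy.get_time() - start
    rospy.logdebug("path computed in %.1fs", elapsed)   # hidden by default!
    if elapsed > 1.0:
        # This is how a "20 seconds to find a path" problem gets noticed early.
        rospy.logwarn("pathfinding took %.1fs", elapsed)
    else:
        rospy.loginfo("pathfinding took %.1fs", elapsed)
```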
- Make the arm step a blocking process instead of a topic publish
- Wait for AX-12 to scan the motors
- Run a test movement on each actuator
- Make the robot move so we can remove the wedge (the part used to set the robot's exact initial position), then go back to its initial position
- Prevent launching a strategy or arming the robot when the 12V is off
- Brighter and more visible LEDs, messages that ask if the 12V is on
- Label buttons
- Wait a few seconds after the jack is plugged in before listening for the pulled event, so a late plug doesn't cause a surprise start (see the sketch after this list)
- Indicate which team was chosen on the following screens (e.g. green / orange background)
- Add a physical element to restart the ROS system (program independent from ROS)
- Connect all the HMI stuff directly to the Raspberry Pi, because we have more pins available, no serial communication problems and no rosserial
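A minimal sketch of the jack-delay idea from this list, assuming a hypothetical /hmi/jack topic carrying a Bool (True = plugged in):

```python
#!/usr/bin/env python
# Sketch: ignore a "pulled" event that comes too soon after insertion.
import rospy
from std_msgs.msg import Bool

ARM_DELAY = 2.0  # seconds before a "pulled" event is trusted (assumption)

class JackWatcher(object):
    def __init__(self):
        self.plugged_since = None
        rospy.Subscriber("/hmi/jack", Bool, self.on_jack)

    def on_jack(self, msg):
        now = rospy.get_time()
        if msg.data:                      # jack inserted
            self.plugged_since = now
        elif self.plugged_since is None:
            pass                          # never saw the insertion, ignore
        elif now - self.plugged_since < ARM_DELAY:
            rospy.logwarn("jack pulled %.1fs after insertion, ignoring",
                          now - self.plugged_since)
        else:
            rospy.loginfo("jack pulled, starting the match!")
            # start_match()  # hook the real launcher here

if __name__ == "__main__":
    rospy.init_node("jack_watcher")
    JackWatcher()
    rospy.spin()
```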
- Tasks can only be executed one by one, whether they are services or actions, and the code cannot easily be modified to run them in parallel. Supporting this would (probably) require rethinking the whole code structure.
- SOLVED The scheduler can wait for a message to be received on a certain topic, but it cannot yet check that the message has certain attributes set to certain values. Can be easily modified.
- SOLVED For an action to be successful, the response message is checked for a specific bool variable with specific names. Find a better way.
- Add the possibility to fetch data from topic/service and to reuse it on later calls (ex: fetch pos of nearest cube, then send a message to actuators with this pos).
- SOLVED Add default timeouts for when action servers are not responding (=> e.g. otherwise we can't simulate a match with AX-12 calls); see the sketch after this list.
- SOLVED Rethink the spammy pretty-print thingy, and find a way to log info (not using python print) so we can inspect ~/.ros/log when the strategy messed up (started from a linux service) => Don't print the full tree each time, only print one line when actions start and finish.
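For the default-timeout item above, a sketch of what the fix can look like with a standard actionlib client; the action itself is left abstract since any goal works the same way:

```python
#!/usr/bin/env python
# Sketch: never hang on an absent or slow action server.
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus

def call_with_timeout(client, goal, connect_timeout=2.0, result_timeout=10.0):
    """Send a goal with default timeouts; returns the result or None."""
    if not client.wait_for_server(rospy.Duration(connect_timeout)):
        rospy.logerr("action server not available, skipping call (simulation?)")
        return None
    client.send_goal(goal)
    if not client.wait_for_result(rospy.Duration(result_timeout)):
        rospy.logwarn("action timed out after %.1fs, cancelling", result_timeout)
        client.cancel_goal()
        return None
    if client.get_state() != GoalStatus.SUCCEEDED:
        return None
    return client.get_result()

# usage: result = call_with_timeout(client, goal); None means "move on".
```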
- The link with AI is kinda awkward: if for example the robot takes 8 balls from a container, ai/scheduler should move the map's balls into the robot's container in the map DB manually.
- JSON objects are not the right solution for the GET: the pathfinder (C++) took so long to parse them that we disabled the fetch before each goto (nobody modified the objects anyway). It would be nice to add that feature back (but faster), in order to dynamically disable collisions on certain objects.
- The objects are mostly rectangles / circles with some properties; those could be passed via ROS messages (see the sketch after this list).
- Just abandon the map?! Seriously, it stores 90% useless information... A reactive system where each interested node keeps its own internal data seems easier to use...
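To illustrate the ROS-messages point: the map's shapes carry only a handful of fields, so one flat message per object would be trivial to parse compared to the JSON blob. The structs below mirror that layout in plain Python; field names are illustrative, not the real memory_map schema.

```python
# Illustrative only: a rect/circle map object as a flat struct, i.e. what a
# small ROS message per object could carry instead of one big JSON string.
from collections import namedtuple

Rect = namedtuple("Rect", "name x y w h collidable")
Circle = namedtuple("Circle", "name x y radius collidable")

table = [
    Rect("construction_area", x=1.5, y=1.0, w=0.3, h=0.3, collidable=True),
    Circle("pushed_cube", x=0.6, y=0.9, radius=0.04, collidable=False),
]
# Each object carries a collidable flag: exactly the per-object collision
# toggle that was dropped when the JSON fetch was disabled.
```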
- Too strict: no way to move close to an edge because of the pathfinder, and no way to get back into the map when a manual movement (pwm, asserv) pushed us inside the wall margins (no path).
- Add a feature to disable the pathfinder on a given goal (see the sketch after this list)
- The navigator must handle ROS frames when managing both the robot's pose and the destination pose. E.g. when writing the AI strategy (see the general section), give the destination position for the center of the robot (or any other robot feature such as a gripper), not for the wheel axis (because the axis position can change between robot versions).
- Display path-linked rviz markers
- Highlight objects in rviz that prevent us from reaching a goal
- rviz visual indication that the pathfinder found no path
- Pathfinder should give the turning direction for each waypoint
- Put a warning when the images are too big
- More loginfo on time taken, number of points in path
- Logerror when no path found
- Could be remade in C++ for performance boosts.
- Problem: e.g. when pushing cubes, how do we avoid treating them as obstacles? (they moved from their expected position, so they can't be recognized...)
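A minimal sketch combining the per-goal pathfinder bypass and the logging wishes above; find_path and send_waypoints are hypothetical stand-ins for the real pathfinder service and navigator interface:

```python
#!/usr/bin/env python
# Sketch: a goto that can skip the pathfinder, with loud logging either way.
import rospy

def find_path(dest):
    return []          # placeholder: call the real pathfinder service here

def send_waypoints(points):
    pass               # placeholder: forward to the asserv / navigator

def goto(dest, use_pathfinder=True):
    if not use_pathfinder:
        send_waypoints([dest])   # trust the caller, e.g. to leave the wall margins
        return True
    start = rospy.get_time()
    path = find_path(dest)
    if not path:
        rospy.logerr("no path found to (%.2f, %.2f)", dest[0], dest[1])
        return False
    rospy.loginfo("path found: %d waypoints in %.2fs",
                  len(path), rospy.get_time() - start)
    send_waypoints(path)
    return True
```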
- Let us enable/disable certain sensor sources (lidar and/or belt), see the sensors workflow section. Easy to add.
- Highlight objects that block us in rviz
- Let the system control the max speed dynamically (for approaching an object slowly) and adapt navigation_navigator.
- Smart PWM: apply PWM until we are no longer moving (to hug a wall); see the sketch after this list
- Simulation: pwm, blocking against a wall
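A minimal sketch of the smart-PWM idea under assumed topic names (/asserv/pwm, /odom): push with a constant duty cycle and cut power once odometry reports that we stopped moving.

```python
#!/usr/bin/env python
# Sketch: open-loop push until the odometry says we are pressed against the wall.
import rospy
from nav_msgs.msg import Odometry
from std_msgs.msg import Float32  # assumed PWM command message

class WallHug(object):
    STALL_SPEED = 0.01  # m/s under which we consider the robot blocked
    STALL_TIME = 0.3    # s the speed must stay low before cutting power
    GRACE = 1.0         # s to let the robot start moving first

    def __init__(self):
        self.pwm_pub = rospy.Publisher("/asserv/pwm", Float32, queue_size=1)
        rospy.Subscriber("/odom", Odometry, self.on_odom)
        self.start = None
        self.low_since = None
        self.done = False

    def on_odom(self, msg):
        now = rospy.get_time()
        if self.done or self.start is None or now - self.start < self.GRACE:
            return
        if abs(msg.twist.twist.linear.x) > self.STALL_SPEED:
            self.low_since = None
        elif self.low_since is None:
            self.low_since = now
        elif now - self.low_since > self.STALL_TIME:
            self.pwm_pub.publish(Float32(0.0))  # we are hugging the wall: stop
            self.done = True

    def run(self, duty=0.3):
        self.start = rospy.get_time()
        rate = rospy.Rate(20)
        while not rospy.is_shutdown() and not self.done:
            self.pwm_pub.publish(Float32(duty))  # gentle constant push
            rate.sleep()

if __name__ == "__main__":
    rospy.init_node("wall_hug")
    WallHug().run()
```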
- On arm, do a test movement on all actuators
- Connect the AX-12s to actuators_dispatcher instead of directly inputting the action calls in the scheduler.
- Add the roslib generation step (rosserial) to generate_arduino.sh
- Find a way to disable only certain sensors. We had to disable collisions when pushing cubes because the lidar was detecting them, but then we had no belt-sensor detection of enemy robots either.
- Feature to toggle the belt sensors and the lidar on/off individually (see the sketch after this list)
- Make a dynamic/static frame thingy in rviz so we can quickly align the lidar frame and adjust for the physical offsets.
- Make the rviz panels for processing_lidar_objects work again, that would have been useful.
- The objects classifier has to classify an obstacle as map when the obstacle is inside the robot
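The per-source toggle wished for above could be as small as one stock std_srvs/SetBool service per source; the service names below are assumptions:

```python
#!/usr/bin/env python
# Sketch: one SetBool service per sensor source (lidar / belt).
import rospy
from std_srvs.srv import SetBool, SetBoolResponse

enabled = {"lidar": True, "belt": True}

def make_handler(source):
    def handle(req):
        enabled[source] = req.data
        rospy.loginfo("%s source %s", source,
                      "enabled" if req.data else "disabled")
        return SetBoolResponse(success=True, message=source)
    return handle

if __name__ == "__main__":
    rospy.init_node("sensor_switch")
    for source in enabled:
        rospy.Service("sensors/enable_%s" % source, SetBool, make_handler(source))
    # The classifier then simply drops obstacles from a disabled source: cubes
    # seen by the lidar can be ignored while the belt keeps watching for enemies.
    rospy.spin()
```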
We do a lot of badass things, but I think the whole architecture is far too complicated for the cup! We manage a lot of things that never happen, and we don't manage a lot of problematic things that really do happen!
My advice is to make the architecture evolve as a 2.0 version instead of improving it as a 1.1 version. Keep things simpler so we don't need 2 hours during the cup to debug a shitty small thing in a corner of the system 😛
(==> See discussion on Slack)
- No communication between the robots this year
- it works pretty well, but the two robots don't do a lot of actions, so...
- collision avoidance system OK, so no worry about a collision between robots
- the system could be improved with a little communication (position, for instance) but it's not a priority
- idea: only the small robot adapts to the GR's position (one-way communication); see the sketch below.
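A sketch of that one-way idea, assuming the GR broadcasts its pose on a /gr/pose2d topic (name invented for illustration):

```python
#!/usr/bin/env python
# Sketch: the PR only listens to the GR's pose and keeps its distance.
import math
import rospy
from geometry_msgs.msg import Pose2D

gr_pose = None  # last known (x, y) of the big robot

def on_gr_pose(msg):
    global gr_pose
    gr_pose = (msg.x, msg.y)

def too_close_to_gr(x, y, margin=0.5):
    """Check before committing to a goal near the GR's last known position."""
    if gr_pose is None:
        return False  # no comms received: behave exactly like this year
    return math.hypot(x - gr_pose[0], y - gr_pose[1]) < margin

if __name__ == "__main__":
    rospy.init_node("gr_listener")
    rospy.Subscriber("/gr/pose2d", Pose2D, on_gr_pose)
    rospy.spin()
```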
- During the cup, all tests were made by connecting to the robot through an Ethernet cable
- quite annoying for tests (cable vs. moving robot) but rviz worked and there were no communication problems!
- not the best solution because only 1 person can work per robot...
- maybe think about improving Ethernet-tethered tests (removable mast on top of the robot, a winding mechanism, ...)
- no WiFi at the cup is life !
- not the best solution for git repositories: to sync the robots and the computers we created local git servers everywhere (robot, personal computers). Not the best idea, but no choice... Possible solutions:
- Keep the n git servers and make a super clear tutorial for newcomers...
- Robots connect to an internet-enabled WiFi network at Compiègne, so no more local servers; use FTP at the competition
- consider simple FTP everywhere, from any PC to the robots: no commits made on the robots themselves, but a way simpler system.
Why an electronics section in the software conclusions? Just because it's easier to have a single place where all the stuff is written :)
- The Raspberry Pi is powerful enough to manage the whole ROS system ==> no need to change for better boards
- Rosserial's limitation to Arduino Mega or ESP boards is quite annoying, and a Mega takes a lot of space! (and the ESP Mini doesn't have enough GPIO) ==> for next year, try to port some code to the bigger ESP or Teensy boards (+ make custom distribution boards?)
- All the cables were changed this year to use crimped connectors (to avoid repeated short-circuits); the platform is more badass, but not really better... ==> no miracle solution, and a lot of crimped connectors broke (maybe 5 or 10% of all cables), which is quite hard to spot, especially during the competition ==> crimped connectors are still, in my mind, much better than standard pins ==> manufacturing distribution boards could be a solution to the ordering and connection problems
Oh, the wonderful HMI <3 This shit drove me crazy more than once during the whole year, even though at the cup we had a lot of trouble with it... BUT! I'm convinced this way of launching the robot is the right solution! No SSH, no connection: just power on the robot, select all your stuff and banzai! For next year, improve the concept!
- Do not use a dedicated board: connect the HMI directly to the Raspberry Pi (see the sketch after this list)
- Use better buttons !
- Larger screen ? ==> https://goo.gl/fqi7c7
- Integration on the small robot was not really good, but on the big one, it's fucking amazing!
- More importantly: the electronics should be reliable (one robot could not be launched during a match because the buttons were disconnected...)
==> Manufacture our own final PCB!
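A sketch of the HMI-on-the-Pi idea using the stock RPi.GPIO library; pin numbers and topic names are placeholders for whatever the final PCB exposes:

```python
#!/usr/bin/env python
# Sketch: read the start button and team switch straight from the Pi's GPIO,
# no Arduino and no rosserial in the loop.
import rospy
import RPi.GPIO as GPIO
from std_msgs.msg import String

PIN_START = 17   # placeholder pins
PIN_TEAM = 27    # e.g. a two-position switch: green / orange

if __name__ == "__main__":
    rospy.init_node("hmi_gpio")
    GPIO.setmode(GPIO.BCM)
    GPIO.setup([PIN_START, PIN_TEAM], GPIO.IN, pull_up_down=GPIO.PUD_UP)

    team_pub = rospy.Publisher("/hmi/team", String, queue_size=1, latch=True)
    last_team = None
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        team = "green" if GPIO.input(PIN_TEAM) else "orange"
        if team != last_team:           # publish only on change
            team_pub.publish(team)
            last_team = team
        if not GPIO.input(PIN_START):   # active-low button (debounce as needed)
            rospy.loginfo("start button pressed")
        rate.sleep()
    GPIO.cleanup()
```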
- Problems on the small robot because the odometry wheels generate too many ticks (the robot is too fast) ==> change the board for one with a hardware quadrature counter ==> ask the UTT about their boards, since they use the same controller for the small robot's motors and everything is integrated
- Try to move to a new base for the big robot? (not mandatory; for now it works and I think it's powerful enough)
- EXTREMELY IMPORTANT: this year, neither robot did any ball/cube actions because the movement is imprecise, too harsh, non-adaptive or in general too flimsy. Find a way to get exact positioning (implement displacement strategies, check whether the odom system has precision problems), and adapt the PIDs/asserv code to have perfectly-moving robots. Seriously, look at RCVA: their robots always go at either zero or full speed (step 1 for us), the max speed is adapted to the situation (step 2), and they never have movement imprecision (step 3); this is why their actuators don't need to be position-forgiving. This is the first-priority task before doing any special actions ==> the ROS navigation stack also has to be the first thing to work perfectly! (A sketch of such a speed law follows.)
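The promised sketch of an RCVA-style speed law: a plain trapezoidal profile, pure math and independent of the actual asserv code, where v_max is the per-situation cap (slow approach vs. full dash). Function and parameter names are ours, not RCVA's.

```python
# Trapezoidal speed profile: accelerate, cruise at v_max, brake to stop
# exactly at dist_total. All units SI (m, m/s, m/s^2).
def speed_setpoint(dist_done, dist_total, v_max, accel):
    dist_left = max(dist_total - dist_done, 0.0)
    v_accel = (2.0 * accel * max(dist_done, 1e-6)) ** 0.5  # ramp-up limit
    v_brake = (2.0 * accel * dist_left) ** 0.5             # ramp-down limit
    return min(v_max, v_accel, v_brake)

# e.g. 5 cm into a 1 m move at 0.8 m/s cap with 1 m/s^2:
#   speed_setpoint(0.05, 1.0, 0.8, 1.0) ~= 0.32  (still accelerating)
```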
Wiki UTCoupe 2018, Université de Technologie de Compiègne, France.
Any questions or comments? Wanna join our team? Contact us at [email protected]!