Making movebase work #62
OK, so this is the brief summary of what we planned to do. Now it's time to get going on this. Next, we should create the setup for the test cases, e.g. have a shared topological map that we only slightly adapt to our own environments, keeping the names consistent. I suggest UOL (@Jailander, myself, @cdondrup) create a first prototype of the test setup. Then we should have a hangout to discuss the next steps. Here's a doodle I ask you all to fill in to arrange this. Many thanks |
forgot @hawesie... sorry |
I forgot we can't check what params we were using on Bob during the deployment because his hard drive died. I'm pretty sure we were using the current strands_movebase ones, though. However, @kunzel made some changes to the dwa params to make the robot follow the global path more closely. They were only added to scitos_2d_nav, because they make navigation in simulation work much better. The differences are
Maybe @cdondrup has some insight on these params, since he has been looking at dwa? |
Actually setting The |
Yes, from what I understand, the short sim time is what makes the big difference here: since both those values only look at the endpoint of the simulated trajectories, increasing the sim time increases the number of trajectories that don't go along the global plan but have an endpoint close to it. |
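As a toy illustration of the endpoint-only scoring discussed above (my own sketch, not the actual dwa_local_planner code): each candidate (v, w) pair is rolled out for sim_time and only the final pose is compared against the global path, so a trajectory that swings far off the path but happens to end near it scores just as well as one that follows it.

```python
import math

def endpoint(v, w, sim_time):
    """Endpoint of a constant-velocity arc (unicycle model, starting at
    the origin facing +x) after sim_time seconds."""
    if abs(w) < 1e-9:
        return (v * sim_time, 0.0)
    r = v / w
    theta = w * sim_time
    return (r * math.sin(theta), r * (1.0 - math.cos(theta)))

def path_distance_score(v, w, sim_time, path_y=0.0):
    """Toy 'path distance' critic: scores only the endpoint's lateral
    offset from a straight global path along y = path_y."""
    _, y = endpoint(v, w, sim_time)
    return abs(y - path_y)

# A straight rollout stays on the path regardless of sim_time:
straight = path_distance_score(0.5, 0.0, sim_time=1.6)
# A full-circle arc (w * sim_time = 2*pi) leaves the path completely,
# yet ends back on it, so the endpoint-only critic scores it well too:
circle = path_distance_score(0.5, math.pi, sim_time=2.0)
```

With a longer sim_time, more of these "ends near the path but doesn't follow it" arcs become feasible, which matches the observation above.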
Isn't that what you want for obstacle avoidance? |
Depends. If we increase the rate of the global planner, it might be better to keep the sim time smaller? With this version we had the robot rotating a lot more in place, at least in simulation. |
We are indeed using some custom parameters. We had our old hiwi run around and optimize the parameters in such a way that the behaviour would be 'optimal', and we have the following in our "site parameter" file:

```yaml
local_costmap:
  width: 4
  height: 4
  resolution: 0.02
  # Improves navigation in tight corridors tremendously.
  inflation_layer:
    inflation_radius: 0.5

DWAPlannerROS:
  max_vel_y: 0.0
  min_vel_y: 0.0
```

I can't really say what the idea was behind these changes, just that it felt a bit more stable after he played around with the parameters. In general navigation seems okayish, apart from of course the driving towards obstacles and the slightly awkward behaviour in our kitchen that I described during the GA. According to our launch file we are supposedly running the human aware navigation, but he doesn't stare at people, so maybe something went wrong there. I'll check more accurately on our next run with Karl. |
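The "site parameter file" idea above amounts to overlaying a per-site dict on top of the stock parameters, the way rosparam merges a site YAML loaded after the default one. A minimal sketch (the stock default values shown here are hypothetical placeholders, not the real strands_movebase defaults):

```python
def overlay(defaults, site):
    """Recursively overlay a site parameter dict on top of the defaults,
    mimicking how rosparam merges a site YAML loaded after the stock one."""
    merged = dict(defaults)
    for key, value in site.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = overlay(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical stock values, for illustration only.
defaults = {"local_costmap": {"width": 6, "height": 6, "resolution": 0.05},
            "DWAPlannerROS": {"max_vel_y": 0.2, "min_vel_y": -0.2}}

# The site overrides from the comment above.
site = {"local_costmap": {"width": 4, "height": 4, "resolution": 0.02,
                          "inflation_layer": {"inflation_radius": 0.5}},
        "DWAPlannerROS": {"max_vel_y": 0.0, "min_vel_y": 0.0}}

params = overlay(defaults, site)
```

Keys not mentioned in the site file keep their defaults, which is why such a file can stay short.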
Could you please submit a link to the "whole" set of files, as this already assumes there would be a standard you deviate from ;-) |
You can check out the strands karl repo. This should have everything we use, including the above file. We took most of the stuff from what we had before in the marathon system and updated it with everything new and relevant from the review system. |
@Pandoro @Jailander @cdondrup Can I ask you to fill the doodle poll at https://doodle.com/poll/56f7c5ydtqny8p78 as well, please? |
For the record, I link the current versions of params here:
- BHAM
- RWTH
- UOL
Missing: KTH
|
@bfalacerda do you remember why we moved to our own NavFn? 5618362 |
Yes, so we could reconfigure the default_tolerance param at runtime. This makes the robot not go to the exact pose of the intermediary waypoints. |
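The effect of a positive tolerance can be sketched like this (a simplification of my own, not the actual navfn code): an intermediary waypoint counts as reached anywhere within the tolerance radius, so the robot never has to hit the exact pose.

```python
import math

def goal_reached(pose, goal, tolerance):
    """True if pose (x, y) lies within `tolerance` metres of the goal
    position. Mirrors the observable effect of a default_tolerance > 0:
    intermediary waypoints are satisfied without reaching the exact pose."""
    return math.hypot(pose[0] - goal[0], pose[1] - goal[1]) <= tolerance

# Exact-pose check vs. relaxed waypoint check, 0.3 m away from the goal:
exact = goal_reached((1.0, 0.3), (1.0, 0.0), tolerance=0.0)    # not reached
relaxed = goal_reached((1.0, 0.3), (1.0, 0.0), tolerance=0.5)  # reached
```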
right, I remember... |
A git diff to the branch we are using reveals no changes to the parameters. |
@marc-hanheide in Linda we use the standard parameters, namely the ones at https://github.com/strands-project/strands_movebase/tree/indigo-devel/strands_movebase/strands_movebase_params |
simulation env in strands-project/strands_morse#138 |
This makes me want to investigate how the robot behaves if we set |
Very useful, @nilsbore! Quick summary of today's hangout:
|
I have forked navigation into strands-project and added my debug output changes to the "strands-testing" branch. These changes are not permanent as they make the code less efficient, so this branch should only be used for debugging etc. Therefore I don't think a "release" would be a good idea. To use it, download and build the repo in your workspace, then restart navigation and run the (ugly) python script dwa_planner/scripts/log.py. This will open a window with six images, each showing the score map for one of the critics used in the code. The axes for each graph are: y=angular velocity, x=linear velocity; the centre is zero and the values are scaled by 10, so (x=10,y=30) represents (lv=0, wv=0) and (x=15,y=20) represents (lv=0.5, wv=-0.1). The graphs are updated in real time as the robot drives, but some timesteps will be missed due to the speed of the redraw. If this is a problem then you can log the debug topic and replay it slower... |
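For reading values off those debug images, the mapping implied by the two worked examples above can be captured in a helper (my own sketch, not part of log.py). Note the two examples imply different scales for the two axes: 0.1 m/s per pixel linearly, but 0.01 rad/s per pixel angularly, so the angular scale here is an assumption reconciling the examples rather than a confirmed value.

```python
def pixel_to_velocity(x, y, centre=(10, 30), lin_scale=0.1, ang_scale=0.01):
    """Map a pixel (x, y) in the debug image to (linear, angular) velocity.
    Centre and scales are inferred from the examples in the comment above;
    the angular scale in particular is an assumption."""
    cx, cy = centre
    return ((x - cx) * lin_scale, (y - cy) * ang_scale)

centre_vel = pixel_to_velocity(10, 30)  # the image centre, zero velocity
off_vel = pixel_to_velocity(15, 20)     # the second worked example
```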
@cburbridge agreed, not to be released. I just thought we'll eventually fix code in navigation and then we'd have our own. But for now, let's leave it as a debug tool. |
People will (probably) like to hear that Jenkins can now run simulation-enabled tests (more precisely, it can run tests in a virtual OpenGL/X environment), hence allowing us to write unit tests that use the simulator and/or other GUIs. strands-project/strands_navigation#270 is a first example (though a stupid one, as it only starts the simulator but doesn't do anything with it, but it's a test). One outcome of any tests we now run on Jenkins (devel or PR) is a highly compressed (1 fps) video (see https://lcas.lincoln.ac.uk/jenkins/view/Pull%20Requests/job/pr-indigo-strands_navigation/98/artifact/Xvnc.mp4 for the corresponding example) showing the virtual desktop (jump to minute 6 in said video to see the simulator appearing). This video should help to identify causes of failure in unit tests. Having written all this, I should probably write the same to the mailing list... Anyway, the infrastructure is there to actually write more elaborate tests... now somebody needs to do it ;-) |
I talked to my people and this is the (probably not as satisfying as hoped for) gist of it: 1. The SCITOSDriver specifies a whole lot of maximum/minimum velocities and maximum positive/negative accelerations. These values are passed down to the MCU (Main Control Unit) and will be used as soft limits in the motor controller. Most of the time these constraints are met, but apparently it cannot be guaranteed (only in extreme situations). This means that the "drive parameters" of all STRANDS robots are currently specified here.
Rotations are given in degrees / {second, second^2} and translations in meters / {second, second^2}. The changes @bfalacerda made with respect to the DWA parameters should reflect those parameters. The parameters can also be tweaked to achieve softer acceleration, more abrupt braking, etc. 2. (paraphrasing Stefan, as I don't know a lot about electronics): The force basically imposes a threshold for the motor current, which is encoded as an 8-bit PWM (hence the values 0-255 cover the complete analogue spectrum). From a motor-controlling perspective it would be ideal to always drive at full force, but due to wheel slippage and things like relying on the robot being stopped by the charging station's threshold, we compromise and use a value of typically around 80-120. Therefore, lowering the force too much will increase the stopping distance. Also, with a force of around 100, the set acceleration/deceleration limits (see above) aren't necessarily reached, so increasing the force will enable the robot to get closer to the set limits if they are "quite high". There is no real formula or table I can give you regarding the force and the accelerations, but setting lower values for the accelerations should mitigate the influence of changing the force on the acceleration/deceleration behaviour. |
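As a toy reading of the force value described above (my own sketch): since the 0-255 setting is an 8-bit PWM threshold on motor current, it maps linearly onto a duty-cycle fraction, so the typical 80-120 range corresponds to roughly 31-47% of full force.

```python
def force_to_duty(force):
    """Convert the 0-255 force setting (8-bit PWM threshold on motor
    current) to a duty-cycle fraction of full force."""
    if not 0 <= force <= 255:
        raise ValueError("force must be in 0..255")
    return force / 255.0

low, high = force_to_duty(80), force_to_duty(120)   # the typical range
```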
Given that the values are defined in the MIRA config file, would it make sense to expose them as rosparams and have DWA configured from those params? Obviously we would expose them as rad/s(^2) where needed. |
Certainly possible, they can be queried in MIRA and therefore exposed as rosparams. Though note that they are not dynamically reconfigurable, they are written to the MCU at startup and then can't be changed during runtime. |
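A sketch of the conversion that would be needed for this (the MIRA-side key names and values here are illustrative placeholders, not actual MIRA config keys; the target names are the standard DWAPlannerROS parameters):

```python
import math

# Hypothetical drive limits as queried from MIRA at startup:
# rotations in deg/s(^2), translations in m/s(^2), per the comment above.
mira_limits = {
    "MaxTransVel": 0.55,   # m/s
    "MaxRotVel": 60.0,     # deg/s
    "MaxTransAcc": 0.3,    # m/s^2
    "MaxRotAcc": 90.0,     # deg/s^2
}

def to_dwa_params(limits):
    """Convert rotational limits to rad/s(^2) so they can be pushed onto
    the parameter server for DWAPlannerROS to read at startup."""
    return {
        "max_vel_x": limits["MaxTransVel"],
        "max_rot_vel": math.radians(limits["MaxRotVel"]),
        "acc_lim_x": limits["MaxTransAcc"],
        "acc_lim_theta": math.radians(limits["MaxRotAcc"]),
    }

dwa_params = to_dwa_params(mira_limits)
```

Since the MCU values are written once at startup and not reconfigurable, doing this conversion once at launch time would be consistent with how MIRA handles them.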
I have done some test in the |
I made some improvements to how the clouds are processed by movebase, which should gain us some more CPU on the main computer: #64 . |
In response to #62 (comment): @creuther can you comment? A delay of almost half a second can indeed explain a lot of stuttering, I guess. Is there a way to reduce this, or is there a way to make DWA aware of such a delay (asking very silly questions)... |
agreed in meeting today:
|
Sorry I wasn't there, had forgotten to put it in my calendar. Will follow up on this. |
It seems that the moving of the laser and chest camera frames, together with the changing of the footprint, has improved navigation quite a bit in some aspects. What I've seen here is that the robot is traversing doors noticeably better than before. Overall, when I ran navigation for a few hours the other day it seemed more robust, but that is harder to pin down to something specific. Time to run the navigation tests =P. |
Currently failing tests in simulation due to parameters:
|
Maybe it's time to switch |
If navigation is being launched using strands_navigation.launch, then MORSE is also using strands_movebase. See strands-project/strands_morse#144 |
Regarding the chest cam, we could add it, yes. I guess it'll be important for the wheelchairs. |
Yes, that would be brilliant. However, I think Jenkins can only run simulations in the fast mode without cameras (right, @marc-hanheide?), but we could still run the tests independently using a flag, like for the real robot. |
Ok, I was just looking at this launch file: https://github.com/strands-project/strands_morse/blob/indigo-devel/mba/launch/move_base_arena_nav.launch . |
I'll have a look at adding the chest cam sometime during this week. |
Can we please leave the default without the chestcam in simulation? Many students and other people (including Jenkins) don't have a suitable GPU and can't run the full MORSE. Of course it would be great to add the chestcam behind a suitable flag. |
Of course, it will be an option. |
Once we have a navigation set-up that runs in simulation and on the robot, I can add that to the tests and have a flag as well, yes. |
We did the first real robot tests based on the test scenarios in simulation. You can find a few videos here: https://lcas.lincoln.ac.uk/owncloud/index.php/s/GppNjlMSMyaHTVf# Short summary of what isn't working:
|
Many thanks, @Jailander and @cdondrup, for getting going on this. Question to @cdondrup: I presume this was based on the latest set of parameters etc. Just for the record, can you comment and link the exact commits/released versions you have been using for your tests? Also, given your previous experience, would you say this was already better than what we had in the past, as we had some improvements (accelerations, laser position, etc.) implemented already? In any case, I put a doodle for the next meeting out: https://doodle.com/poll/mtynvcxw23fnsmdp Please pick your slots quickly, so we can have the meeting soon. |
I'd expect the robot to at least try to get closer to the wp in the second situation, even if he doesn't manage to get through it... I'll also try that and see what happens. Is Regarding the last one, that's the expected behaviour and I also think it's acceptable; we can't really hope to do anything better than that. I have a suggestion regarding this intermediate waypoints issue, and also all the weird trajectories the robot takes when following a policy between nodes: the current version changes goal once we get into the influence area of a node, when the next action is also move_base/human_aware. I suggest a two-step lookahead instead, i.e., if we have a waypoint sequence A->B->C->D, all with move_base, then once we get to the influence area of B, we send a move_base goal to D. This will surely make the robot's movement much smoother (e.g., it'll probably avoid the weird nav in the AAF corridors), and will also make it more robust to occupied intermediate nodes. The only drawback I see is when the robot is doing B->C->D and ends up not getting into C's influence area, for example because of an obstacle. We'd have to be careful with that during execution, and with the nav stats. @Jailander, does this sound sensible? Another option would be to change the move_base goal immediately after the current target becomes the closest_node. That would have more or less the same advantages and issues. |
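The suggested two-step lookahead could be sketched like this (a hypothetical helper, not the actual topological navigation code): on entering a node's influence area, the move_base goal is set two waypoints ahead, clamped at the end of the route.

```python
def next_goal(route, closest, lookahead=2):
    """Pick the move_base goal `lookahead` waypoints past the node whose
    influence area we just entered. `route` is the waypoint sequence,
    e.g. ['A', 'B', 'C', 'D']; `closest` is the node just reached."""
    i = route.index(closest)
    return route[min(i + lookahead, len(route) - 1)]

# Entering B's influence area on A->B->C->D sends the goal straight to D;
goal_at_b = next_goal(['A', 'B', 'C', 'D'], 'B')
# near the end of the route, the goal clamps to the final waypoint.
goal_at_c = next_goal(['A', 'B', 'C', 'D'], 'C')
```

With lookahead=1 this reduces to the current one-step behaviour, so the two policies could share one implementation and be compared with a single parameter.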
@bfalacerda that could be possible. I'm unsure of the consequences in terms of making the robot follow the actual topological route and not take another route; I'll put it in as soon as I have time, on a separate branch, and test it in simulation. About the XY tolerance, you are right that this should have worked; I'll rerun the tests today. EDIT: we already change the move_base goal immediately after the current target becomes the closest_node (forgot to write this) |
This needs fixing in the simulation as it might have an effect on the testing results strands-project/strands_morse#143 |
@Jailander when was that changed? Is nav smoother now? |
Oops, no, my mistake. Just checked: I did that in the ICRA branch but never pushed it upstream, as that branch is substantially different from our main system. The behaviour it had was worse; the reason is that when the robot changed goal sooner, it was going faster, so it stopped harder (but in that case there was a bigger time gap between move_base goals because we switched maps too). I am not sure if we should really try it in the main system too, do you think it's necessary? |
I think he shouldn't stop at all; if you send a new goal to move_base, it usually changes quite smoothly. We have deceleration now because he was getting close to the goal before receiving the new one, but if we send new goals from further away it should be smoother. Why does he stop when he gets a new goal? |
Hmm, in that case it could have been because there was no (metric) map for some time. I'll test it under normal conditions and tell you. |
Minutes of Meeting:
|
All releases are out. Time to update and run the tests. |
Unfortunately, it looks like Rosie's PCB power board has fried again, meaning we have no main computer atm. We'll see what happens but it likely won't be possible to run tests this week. |
Ah, bugger... OK, we'll have to see what we can do here and hopefully Birmingham @kunzel @bfalacerda will be able to run on their site? |
Yes, we have a student who's going to help; I'm meeting him today to set up the testing. Just to confirm, the idea now is to run the static tests one by one and report the results, right? |
So we have the robot ready to run tests, we just need to make some arenas resembling the scenarios.
|
team members @cburbridge @bfalacerda @cdondrup @Pandoro @Jailander @denisehe @TobKoer @PDuckworth
Identified issues
Test cases
static tests
a. all chairs on one side
b. chairs on both sides
dynamic tests
Things to do
To prevent regressions:
Further ideas: