Execution of trajectory with same start and end point #295
Comments
Apologies for the late response. Yes, this is an issue. I believe it is the same as in ros-industrial/industrial_core#114. See the ros-industrial/industrial_core#114 (comment) by @shaun-edwards at the end of that thread for the current status of that instance of this problem.
It seems this issue is happening in both drivers. I found the same race condition in the motoman-specific implementation that @shaun-edwards describes in the issue linked above. As said, I can create a PR, but I am not sure in which context to submit the code changes, as I am only able to test it on our hardware together with the modifications made in #259, due to our multi-group configuration.
I don't believe we could merge this approach, as that would unconditionally introduce a delay in all code paths that submit new trajectories. There have been various PRs in the past that worked to reduce delays (such as ros-industrial/industrial_core#214), so deliberately introducing this 0.5 second delay would not be a good idea. I haven't looked at the issue recently, so I can't suggest an alternative approach right now, other than perhaps introducing some additional 'intelligence' that takes the trajectory into account (i.e., intermediate points) instead of just the last point and the current state.
That would make sense. Having said that, I forgot to mention that the delay is actually conditional: it would only be enforced when the goal trajectory is actually closed (start and end point are equal). In all other cases, the driver's behavior would be unchanged. This is the commit containing the changes: Adamsua-lab@f07425f. But still, the delay is there in some cases, which might not be desired. It looks like a similar fix has been proposed by @Jmeyer1292.
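The "conditional" part of that commit boils down to detecting whether a trajectory is closed. A minimal sketch of such a test, assuming a flat joint-space point representation (the function name `is_closed_trajectory` and the tolerance are illustrative, not names from the driver):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: a trajectory is "closed" when its first and last
// points coincide within a small joint-space tolerance. Only in that case
// would the extra handling (e.g. the proposed delay) be enabled.
bool is_closed_trajectory(const std::vector<std::vector<double>>& points,
                          double tol = 1e-4)
{
  if (points.size() < 2)
    return false;
  const std::vector<double>& first = points.front();
  const std::vector<double>& last = points.back();
  if (first.size() != last.size())
    return false;
  for (std::size_t i = 0; i < first.size(); ++i)
  {
    if (std::fabs(first[i] - last[i]) > tol)
      return false;  // joints differ: start and end are not the same
  }
  return true;
}
```

With a guard like this, trajectories whose start and end differ would take the unchanged code path, which is the behavior-preservation property described above.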
So a proper approach would seem to be not time-based, but state-based. Taking the currently active trajectory point into account would seem to address the issue while also not introducing additional artificial delays. I've not verified whether that information would be available in the streamer, but I believe it is.
I'm not sure if I am understanding correctly. By state, do you mean the motion state (moving/not moving)?
Would this not be the same as the initial trajectory point of the goal trajectory? Here is what I think this entire issue really boils down to: the check that marks the goal as succeeded compares only the current joint state against the final trajectory point. This means we would need to "wait" with that check until the trajectory has actually started executing. All these changes can be made conditional so that the behavior of the driver remains the same as before when a goal trajectory is submitted where the trajectory start and end point are not the same. Let me know if this alternative could be considered.
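The "wait with the check" idea could be sketched as gating the success shortcut on execution having started, and only for closed trajectories. This is illustrative only, under the assumption of a flag set once motion begins; `GoalMonitor` and its members are hypothetical names, not driver code:

```cpp
// Illustrative sketch: gate the "already within goal constraints ->
// succeeded" shortcut on execution having started, but only for closed
// trajectories, so the driver's behavior is unchanged in all other cases.
struct GoalMonitor
{
  bool trajectory_started = false;  // would be set once motion begins

  bool may_declare_success(bool within_goal_constraints,
                           bool start_equals_end) const
  {
    if (!start_equals_end)
      return within_goal_constraints;  // original behavior preserved
    // Closed trajectory: require that execution actually started first.
    return within_goal_constraints && trajectory_started;
  }
};
```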
I'd have to check the code, but with "state-based" I meant: in addition to robot state, take trajectory execution state into account (i.e., which trajectory points we have streamed to the robot). So the full condition could be:

bool within_constraints = robot_jointstate_at_last_pt(current_state) && have_streamed_all_pts(traj);

or something like that. For very short trajectories we might still want to include the existing check. But for longer trajectories, keeping track of which points we've actually executed seems like it would address this.
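Expanding that pseudocode into a self-contained sketch (the `TrajProgress` bookkeeping and helper names are assumptions for illustration; the real streamer would track progress differently):

```cpp
#include <cstddef>

// Hypothetical progress record for the currently active trajectory.
struct TrajProgress
{
  std::size_t streamed_pts;  // points already sent to the controller
  std::size_t total_pts;     // points in the goal trajectory
};

bool have_streamed_all_pts(const TrajProgress& traj)
{
  return traj.total_pts > 0 && traj.streamed_pts >= traj.total_pts;
}

// State-based goal test: the robot must be at the last point AND every
// trajectory point must have been streamed, instead of relying on a delay.
bool goal_reached(bool robot_at_last_pt, const TrajProgress& traj)
{
  return robot_at_last_pt && have_streamed_all_pts(traj);
}
```

For a closed trajectory this condition cannot fire at submission time, because no points have been streamed yet, which is exactly the premature-success case being discussed.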
Just to clarify: I'm not being difficult just for the sake of being difficult. I'd just like to arrive at a solution that is robust against timing and potential race conditions.
For sure, I understand that. We don't want a half-baked solution that could potentially introduce more issues. I think I'll have to do some investigating to fully understand how the ROS side and the controller side communicate trajectories and the robot state. I'll then try to implement something, push it to our lab's fork of the motoman repo, and share a link to the commit for further discussion. It'll have to wait until the new year though; I'm swamped with end-of-year deadlines at the moment. In case anyone else following this would like to contribute the fix, feel free to do so.
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
I've recently noticed that when trying to execute a trajectory on hardware (not simulation) with the same start and end point, the driver's behavior is to detect that the last trajectory point is the same as the current joint state and to then set the goal status to succeeded ("Already within goal constraints, setting goal succeeded").
This check is done in line 423ff of joint_trajectory_action.cpp. I'm assuming that this is not desired behavior for the case where the start and end point of a trajectory are the same and the trajectory still needs to be executed?
I have implemented something to address this special case without impacting the original purpose of this verification and can submit the change as a PR.
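The check being described can be approximated like this (a paraphrase for context, not the actual code from joint_trajectory_action.cpp; the function name and tolerance are illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the comparison that triggers "Already within goal constraints,
// setting goal succeeded": the current joint state is compared against the
// final trajectory point only. For a trajectory whose start and end points
// are equal, this matches immediately, before any motion has happened.
bool within_goal_constraints(const std::vector<double>& current_state,
                             const std::vector<double>& last_pt,
                             double tol = 1e-3)
{
  if (current_state.size() != last_pt.size())
    return false;
  for (std::size_t i = 0; i < current_state.size(); ++i)
  {
    if (std::fabs(current_state[i] - last_pt[i]) > tol)
      return false;
  }
  return true;
}
```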