
Execution of trajectory with same start and end point #295

Closed

tdl-ua opened this issue Sep 26, 2019 · 9 comments

@tdl-ua (Contributor) commented Sep 26, 2019

I've recently noticed that when trying to execute a trajectory on hardware (not simulation) with the same start and end point, the driver detects that the last trajectory point is the same as the current joint state and then sets the goal status to succeeded ("Already within goal constraints, setting goal succeeded").

This check is done in line 423ff of joint_trajectory_action.cpp.
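
For reference, the logic in question looks roughly like this (a simplified paraphrase based on the names quoted later in this thread, not the verbatim source):

```cpp
// Simplified paraphrase of the check around line 423 of
// joint_trajectory_action.cpp; names are taken from snippets quoted
// later in this thread, not the verbatim source.
if (withinGoalConstraints(last_trajectory_state_, current_traj_) &&
    last_robot_status_->in_motion.val == industrial_msgs::TriState::FALSE)
{
  ROS_INFO("Already within goal constraints, setting goal succeeded");
  // For a trajectory whose start and end points coincide, this fires
  // before the robot has even started moving:
  active_goal_map_[robot_id].setSucceeded();
}
```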

I'm assuming this is not the desired behavior for the case where the start and end points of a trajectory are the same but the trajectory still needs to be executed?

I have implemented something to address this special case without impacting the original purpose of this verification and can submit the change as a PR.

@gavanderhoorn (Member)

Apologies for the late response.

Yes, this is an issue. I believe it is the same as in ros-industrial/industrial_core#114.

See ros-industrial/industrial_core#114 (comment) by @shaun-edwards at the end of that thread for the current status of that instance of this problem.

@tdl-ua (Contributor, Author) commented Oct 22, 2019

It seems this issue occurs in both the industrial_core implementation used for single-group systems and the motoman-specific implementation for multi-group systems found in this repo.

I found the same race condition in the motoman-specific implementation that @shaun-edwards describes for the industrial_core version. To mitigate it, I implemented a 0.5 second delay after the trajectory goal is published before withinGoalConstraints() is evaluated in controllerStateCB(). So far, this delay seems to avoid the premature withinGoalConstraints() detection, at least on our hardware.
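
Roughly, the workaround looks like this (a sketch: goal_start_time_ is an assumed member recorded when the goal is published, not the exact code from our fork):

```cpp
// Sketch of the delay-based workaround inside controllerStateCB().
// goal_start_time_ is an assumed member set when the trajectory goal
// is published; this is not the exact code from our fork.
if ((ros::Time::now() - goal_start_time_) < ros::Duration(0.5))
{
  // Too soon after the goal was published: skip the goal-constraint
  // check so a closed trajectory is not declared succeeded instantly.
  return;
}
// ... withinGoalConstraints() / setSucceeded() logic follows ...
```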

As mentioned, I can create a PR, but I'm not sure in which context to submit the code changes, as I am only able to test them on our hardware together with the modifications made in #259 due to our multi-group configuration.

@gavanderhoorn (Member)

> To mitigate it, I implemented a 0.5 second delay after the trajectory goal is published before withinGoalConstraints() in controllerStateCB().

I don't believe we could merge this approach, as that would unconditionally introduce a delay in all code-paths that submit new trajectories.

There have been various PRs in the past that worked to reduce delays (such as ros-industrial/industrial_core#214), so deliberately introducing this 0.5 second delay would not be a good idea.

I haven't looked at the issue recently, so I can't suggest an alternative approach right now, other than perhaps introducing some additional 'intelligence' that takes the trajectory into account (ie: intermediate points) instead of just the last one and the current state.

@tdl-ua (Contributor, Author) commented Oct 24, 2019

That would make sense. Having said that, I forgot to mention that the delay is actually conditional: it would only be enforced when the goal trajectory is actually closed (start and end points are equal). In all other cases, the driver's behavior would be unchanged.
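
For illustration, the closed-trajectory condition could be checked along these lines (a hypothetical helper, not the code from the commit linked below):

```cpp
#include <cmath>
#include <vector>
#include <trajectory_msgs/JointTrajectory.h>

// Hypothetical helper: a trajectory is "closed" if its first and last
// points coincide within a small joint-space tolerance.
bool isClosedTrajectory(const trajectory_msgs::JointTrajectory& traj,
                        double tol = 1e-4)
{
  if (traj.points.size() < 2)
    return false;
  const std::vector<double>& first = traj.points.front().positions;
  const std::vector<double>& last = traj.points.back().positions;
  if (first.size() != last.size())
    return false;
  for (size_t i = 0; i < first.size(); ++i)
  {
    if (std::fabs(first[i] - last[i]) > tol)
      return false;
  }
  return true;
}
```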

This is the commit containing the changes: Adamsua-lab@f07425f

But still, the delay is there in some cases, which might not be desired.

It looks like a similar fix was proposed by @Jmeyer1292 for industrial_core (ros-industrial/industrial_core@cbdf113), but without making the delay conditional on whether the goal trajectory is closed. Judging by the comments there, that approach was also not very popular.

@gavanderhoorn (Member)

So a proper approach would seem to not be time, but state-based.

Taking the currently active trajectory point into account would seem to address the issue while also not introducing additional artificial delays.

I've not verified whether that information would be available in the streamer, but I believe it is.

@tdl-ua (Contributor, Author) commented Nov 13, 2019

> So a proper approach would seem to not be time, but state-based.

I'm not sure if I am understanding correctly. By state, do you mean the motion state (moving/not moving)?

> Taking the currently active trajectory point into account

Would this not be the same as the initial trajectory point of the goal trajectory?


Here is what I think this entire issue really boils down to: the withinGoalConstraints() evaluation is called within the controllerStateCB() function before the manipulator has even started moving. Because the start and end points of the trajectory are the same, this function will obviously return true. The last_robot_status_->in_motion.val == industrial_msgs::TriState::FALSE condition is also true, meaning that the line active_goal_map_[robot_id].setSucceeded(); is executed prematurely.

This means we would need to "wait" with the withinGoalConstraints() evaluation until last_robot_status_->in_motion.val switches from false to true. This "waiting" would not actually block anything: the callback controllerStateCB() would simply return before withinGoalConstraints() is called as long as the manipulator has not started moving, so that active_goal_map_[robot_id].setSucceeded(); is not called prematurely.

All these changes can be made conditional so that the behavior of the driver remains the same as before when a goal trajectory is submitted whose start and end points are not the same.
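
Sketched out, that gating might look like this (is_closed_trajectory_ and motion_started_ are assumed per-goal flags, reset whenever a new goal is accepted; they are not existing driver members):

```cpp
// Sketch of the proposed gating inside controllerStateCB().
if (is_closed_trajectory_ && !motion_started_)
{
  if (last_robot_status_->in_motion.val == industrial_msgs::TriState::TRUE)
    motion_started_ = true;  // the manipulator has started executing
  else
    return;  // not moving yet: defer the withinGoalConstraints() check
}
// ... withinGoalConstraints() / setSucceeded() logic runs as before ...
```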

Let me know if this alternative could be considered.

@gavanderhoorn (Member)

I'd have to check the code, but with "state-based" I meant: in addition to robot state, take trajectory execution state into account (ie: which traj pts have we streamed to the robot).

Looking at in_motion is almost a subset of that.

So the full condition could be:

bool within_constraints = robot_jointstate_at_last_pt(current_state) && have_streamed_all_pts(traj);

or something like that.

For very short trajectories we might still want to include have_seen_in_motion_go_up() (or something like that), as with the buffer of 5 pts on the MotoROS side those would get streamed immediately and we'd still run into the problem.

But for longer trajectories keeping track of which pts we've actually executed seems like it would address this.
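
Put together with the in_motion guard, the success condition might read (all three helpers are assumptions, not existing driver API):

```cpp
// Hypothetical combined success condition; none of these helpers exist
// in the driver today:
//   atLastTrajectoryPoint() - current joint state matches the final point
//   allPointsStreamed()     - the streamer has sent every trajectory point
//   seenInMotionGoUp()      - in_motion was observed TRUE since the goal
//                             started (guards short, fully-buffered trajs)
bool within_constraints = atLastTrajectoryPoint(current_state, traj)
                          && allPointsStreamed(traj)
                          && seenInMotionGoUp();
```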

@gavanderhoorn (Member)

Just to clarify: I'm not being difficult just for the sake of being difficult.

I'd just like to arrive at a solution that is robust against timing and potential race-conditions.

@tdl-ua (Contributor, Author) commented Nov 14, 2019

For sure, I understand that. We don't want a half-baked solution that could potentially introduce more issues.

I think I'll have to do some investigating to fully understand how the ROS and controller sides communicate trajectories and the robot state. I'll then try to implement something, push it to our lab's fork of the motoman repo, and share a link to the commit for further discussion.

It'll have to wait until the new year though; I'm currently swamped with end-of-year deadlines. If anyone else following this would like to contribute the fix, feel free to do so.

@ros-industrial ros-industrial locked and limited conversation to collaborators Feb 10, 2022
@gavanderhoorn gavanderhoorn converted this issue into discussion #467 Feb 10, 2022
