
Should controllers be allowed to use U_GOAL or only U_EQ? #102

Closed
adamhall opened this issue Sep 29, 2022 · 6 comments
Labels
bug (Something isn't working), question (Further information is requested)

Comments

@adamhall
Contributor

Carrying on from #93.

Many controllers have been using env.U_GOAL, either as a linearization point or in the cost. U_GOAL is computed from the true system mass rather than the prior mass, which is what symbolic.U_EQ is for. Should controllers have access to U_GOAL, or should they exclusively use U_EQ? @Justin-Yuan, what are your thoughts on this for the RL cost and normalization?
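
To illustrate the distinction being asked about (a conceptual sketch only, not safe-control-gym code; the masses and the hover-style equilibrium `mass * g` are assumptions for illustration):

```python
G = 9.81  # gravitational acceleration [m/s^2]

true_mass = 0.027   # mass of the real (evaluation) system, made-up value
prior_mass = 0.030  # possibly mismatched mass assumed by the prior model

u_goal = true_mass * G   # what env.U_GOAL encodes: equilibrium input of the true system
u_eq = prior_mass * G    # what symbolic.U_EQ encodes: equilibrium input of the prior model

# A controller that linearizes about u_goal implicitly uses information
# (the true mass) that the prior model is not supposed to have.
```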

@adamhall added the bug (Something isn't working) and question (Further information is requested) labels on Sep 29, 2022
@Justin-Yuan
Contributor

  • For the RL cost, I think it should use the true parameters, since the cost is the only source of information for learning; if it doesn't reflect the true parameters, there's no way an RL agent can learn well when tested in the true environment.
  • For RL normalization, I think either would be fine, but the true parameters are probably better, because the action-space normalization is treated as part of the environment in our current design (if we change the normalization, the task itself also changes). So if we treat priors as part of the algorithmic design, they shouldn't affect the task (normalization), right? See the sketch after this list.
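
A minimal sketch of that normalization argument, assuming actions are rescaled symmetrically around an equilibrium input; `denormalize_action`, `u_eq_true`, and `act_scale` are hypothetical placeholders rather than safe-control-gym attributes:

```python
import numpy as np

def denormalize_action(a_norm, u_eq_true, act_scale):
    """Map a normalized RL action in [-1, 1] back to a physical input,
    centred on the true system's equilibrium input."""
    return u_eq_true + act_scale * np.clip(a_norm, -1.0, 1.0)

# Because this mapping lives inside the environment wrapper, swapping
# u_eq_true for the prior's U_EQ would change the task itself, which is
# the reason given above for using the true parameters here.
```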

@Justin-Yuan
Contributor

But for the control methods this can be tricky, since the cost function is part of both the control algorithm and the environment. The ideal case is to have a clear boundary between what is given as the environment/task (which will be used in evaluation) and what is part of the control algorithm. I'd say the cost itself (nonlinear quadratic) is still on the task side (since we need it in evaluation anyway), but anything that uses linearization (needed in the algorithm's optimization) can use the prior.
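
A sketch of that boundary, with hypothetical stand-ins (`prior_model.linearize`, `Q`, `R`); the point is only which input reference each side uses:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def design_lqr_gain(prior_model, x_eq, u_eq_prior, Q, R):
    # Algorithm side: linearize the *prior* dynamics about the prior's own
    # equilibrium input (symbolic.U_EQ) and compute a feedback gain from it.
    A, B = prior_model.linearize(x_eq, u_eq_prior)
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.inv(R) @ B.T @ P

def evaluation_cost(x, u, x_goal, u_goal_true, Q, R):
    # Task side: the quadratic cost used in evaluation is defined with the
    # true goal input (env.U_GOAL), independent of the prior model.
    dx, du = x - x_goal, u - u_goal_true
    return float(dx @ Q @ dx + du @ R @ du)
```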

@Justin-Yuan
Contributor

@adamhall Do we currently have anywhere that needs to be fixed regarding this issue?

@JacopoPan
Member

@adamhall @Justin-Yuan status?

@Justin-Yuan
Contributor

I am leaning towards using symbolic.U_EQ for linearization and env.U_EQ for the cost function or reward. The current/updated symbolic model should already be able to expose U_EQ, but I'm not sure whether the MPC controllers have been updated to use it as well? @adamhall

@Federico-PizarroBejarano
Collaborator

Closing issue due to staleness
