RL Fly To A Point #158
I am new to this repo, but have used the gym_pybullet_drones repo in the past. I am switching to this one because the discussions suggested this repo is set up better for potentially going from sim to reality. My question is: what would be the best way to start training an RL algorithm with this repo to fly to a point, using the existing PPO algorithm?

It also seems like a lot of people used different branches of this repo, so I'm not sure whether the main branch is best or another branch is, which scripts are needed, or whether I need to create my own.

Comments
Hey @zcase, there is an example of this in the repo. In [...] I thought that the agent could stabilize to multiple points, so I tried changing [...]
@adamhall So off the main branch, when I do what you mentioned, I get a KeyError for stab and it doesn't work. I'm currently trying to chase down why, though. It looks like it's not under the registry or is missing from it. I tried it with the track task and that doesn't work either.
@adamhall so it looks like [...]
That's very strange, because it works perfectly for me off the main branch. Can you send me: [...]
@adamhall So the error was user error: I was putting stab for the task rather than quadrotor. However, just like in my other issue, when I try to view the run with the GUI, the PyBullet window starts up and immediately closes as if the program has finished. It is too fast to even see whether the drone reached its goal. Since these are the same issue, and you pointed out that to train an RL agent to fly to a point we can use the stab config, I think we can close this one and figure out in the other issue why the GUI isn't staying open long enough to visually watch the drone track or stabilize at a point. Thoughts? Also, thank you very much for your help!
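(A note for anyone hitting the same GUI behavior: one common cause is simply that the script exits as soon as the run finishes, and the PyBullet window dies with the process. Below is a standalone PyBullet sketch, an illustration rather than this repo's code, showing how pacing the loop and pausing before exit keeps the window visible.)

```python
# Standalone PyBullet illustration (not safe-control-gym code): if the
# script returns right after the episode, the GUI window closes with the
# process, so the run looks like it "immediately finished".
import time
import pybullet as p

p.connect(p.GUI)              # the window lives only as long as this process
p.setGravity(0, 0, -9.8)

for _ in range(240):          # ~1 simulated second at the default 240 Hz
    p.stepSimulation()
    time.sleep(1.0 / 240.0)   # pace the loop to real time so motion is visible

time.sleep(5.0)               # hold the window open before exiting
p.disconnect()
```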
@adamhall or @Federico-PizarroBejarano: Is there a way to specify, through the config, the randomization region for the stabilization task? I.e., the initial [...] Thoughts?
@zcase, sorry I was away for a bit! Some responses below:
- If you send me the items referenced here, I can better understand your issue.
- We don't currently have the gym configured this way. However, if you know the points you wish to go to, you could generate a trajectory (drawing lines between them or fitting a spline) and then use that trajectory in a trajectory-tracking setup. I am realizing now, however, that the current main branch doesn't have the ability to follow custom trajectories; I'll look into putting this in. In the meantime, you can play around with `_generate_trajectory` to try to get something like this working (see the sketch after this list).
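To make the spline idea concrete, here is a rough sketch with plain NumPy/SciPy. It is not the repo's `_generate_trajectory` API, and the waypoints, horizon, and sample count are invented for illustration:

```python
# Illustration only: fit a spline through known waypoints to build a dense
# reference trajectory for a tracking task.
import numpy as np
from scipy.interpolate import CubicSpline

# Points the drone should fly through, one (x, y, z) row each.
waypoints = np.array([
    [0.0,  0.0, 1.0],
    [0.5,  0.5, 1.2],
    [1.0,  0.0, 1.5],
    [1.5, -0.5, 1.0],
])

# Parameterize by cumulative distance so samples are spaced evenly in space.
seg_lengths = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
s = np.concatenate([[0.0], np.cumsum(seg_lengths)])

spline = CubicSpline(s, waypoints, axis=0)

# Sample a dense reference, e.g. 500 points for a 10 s episode at 50 Hz.
s_dense = np.linspace(0.0, s[-1], 500)
ref_pos = spline(s_dense)      # (500, 3) positions to track
ref_vel = spline(s_dense, 1)   # first derivative, if velocities are needed
```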
Hi @zcase, sorry, I have been traveling and super busy, but I will try to catch up on the issues. I will note that there is a way to add a custom trajectory in the gym; the example is here: [...]
Also, in terms of training an RL model to stabilize to any point: what you are describing, changing the stabilization goal, is not currently in the gym. However, it should be reasonably easy to implement by changing the reset function of the quadrotor so that it changes the goal as well as the initial position. That way, during training, whenever the agent successfully stabilizes to one point, it starts a new episode from a new position with a new goal. You would, of course, have to extend the observation space of the PPO agent to include the new point it is being stabilized to, using [...]. A rough sketch of this idea follows.
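As a sketch of that idea (an illustration with made-up class names, bounds, and reward shaping, not safe-control-gym's actual reset or PPO code):

```python
# Illustration only: a Gym wrapper that samples a new stabilization goal on
# every reset (relying on the wrapped env to randomize the initial state)
# and appends the goal to the observation so PPO can condition on it.
# Assumes the wrapped env's observation space is a Box whose first three
# entries are the x, y, z position.
import numpy as np
import gym

class RandomGoalWrapper(gym.Wrapper):
    def __init__(self, env, goal_low=(-1.0, -1.0, 0.5), goal_high=(1.0, 1.0, 1.5)):
        super().__init__(env)
        self.goal_low = np.asarray(goal_low, dtype=np.float32)
        self.goal_high = np.asarray(goal_high, dtype=np.float32)
        # Extend the observation space so the policy sees the current goal.
        low = np.concatenate([env.observation_space.low, self.goal_low])
        high = np.concatenate([env.observation_space.high, self.goal_high])
        self.observation_space = gym.spaces.Box(low, high, dtype=np.float32)
        self.goal = None

    def reset(self, **kwargs):
        # New episode, new goal: the policy must learn to reach arbitrary points.
        self.goal = np.random.uniform(self.goal_low, self.goal_high).astype(np.float32)
        obs = self.env.reset(**kwargs)
        return np.concatenate([obs, self.goal])

    def step(self, action):
        obs, _, done, info = self.env.step(action)
        # Simple distance-based reward toward the sampled goal.
        reward = -float(np.linalg.norm(obs[:3] - self.goal))
        return np.concatenate([obs, self.goal]), reward, done, info
```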
Closing this issue due to inactivity. Can reopen if necessary.