A3C is usually based on having multiple workers playing in parallel. This implementation instead expects a GUI to run the game in and records the state via screenshots. Getting that to work in parallel would probably require multiple machines running the model, including all the communication code. The implementation is currently not geared towards such a scenario, so getting these pieces, and thus A3C, to work would require a significant rewrite of large parts of the code.
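For illustration, the usual A3C layout looks roughly like the sketch below: a model held in shared memory, updated asynchronously by worker processes that each own a private environment instance. The linear model is a toy stand-in for the real policy network, and the rollout logic is elided:

```python
import torch.nn as nn
import torch.multiprocessing as mp

def worker(rank, shared_model):
    # Each A3C worker would create its own environment instance here,
    # which is exactly what a single shared GUI cannot provide.
    local_model = nn.Linear(4, 2)
    local_model.load_state_dict(shared_model.state_dict())
    # ... roll out the local policy, compute gradients, and push
    # them asynchronously into shared_model ...

if __name__ == '__main__':
    shared_model = nn.Linear(4, 2)  # toy stand-in for the policy network
    shared_model.share_memory()     # place parameters in shared memory
    workers = [mp.Process(target=worker, args=(rank, shared_model))
               for rank in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```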
Thank you.
If the environment were Gym-like, then it would be easy to make it work with any RL algorithm: for example, create the env, render the state, take an action as input, and so on.
```python
import gym

env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())  # take a random action
env.close()
```
This is quite nice!
There are several A3C PyTorch implementations for Atari.
Is it possible to do the same with this Truck environment?
Thank you.
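A rough sketch of what a Gym-style wrapper around a screenshot-driven game could look like; `grab_screenshot`, `send_key`, and the action/observation shapes below are purely hypothetical placeholders, not this repo's actual API:

```python
import gym
from gym import spaces
import numpy as np

# Hypothetical stubs for however the game's GUI would actually be
# read and controlled; a real wrapper would replace these.
def grab_screenshot():
    return np.zeros((84, 84, 3), dtype=np.uint8)

def send_key(action):
    pass

class TruckEnv(gym.Env):
    def __init__(self):
        # Assumed discrete steering/throttle actions and 84x84 RGB frames.
        self.action_space = spaces.Discrete(9)
        self.observation_space = spaces.Box(
            low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)

    def reset(self):
        # Restart the game here, then return the first frame.
        return grab_screenshot()

    def step(self, action):
        send_key(action)
        obs = grab_screenshot()
        reward, done = 0.0, False  # would be derived from the game state
        return obs, reward, done, {}

    def render(self, mode='human'):
        pass  # the game already renders itself in its own window
```

With such a wrapper, the CartPole loop above would run unchanged on `TruckEnv`, and existing A3C implementations could in principle target it, subject to the parallelism caveat raised earlier.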