Car behavioral cloning based on NVIDIA's end-to-end deep learning approach [1]. NVIDIA proposes a deep architecture that works well for real cars in real-world scenarios, given enough computing power. Later studies suggest shallower architectures suitable for deployment on slower hardware [2], or add a second LSTM network to also capture temporal dynamics [3]. Reinforcement learning [4] is another alternative approach, but it is beyond the scope of this repo.
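As a rough illustration of the end-to-end approach in [1], the sketch below implements the PilotNet-style CNN in PyTorch: five convolutional layers followed by fully connected layers that regress a single steering command from a 66x200 RGB frame. Layer sizes follow the NVIDIA paper; the choice of ELU activations, and all training details (loss, normalization, augmentation), are assumptions here, not taken from this repo.

```python
import torch
import torch.nn as nn

class PilotNet(nn.Module):
    """Sketch of NVIDIA's end-to-end CNN: a 66x200 RGB image in,
    a single steering command out. Layer sizes follow the paper;
    ELU activations are an assumption."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ELU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ELU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            # 64 feature maps of size 1x18 after the conv stack
            nn.Linear(64 * 1 * 18, 100), nn.ELU(),
            nn.Linear(100, 50), nn.ELU(),
            nn.Linear(50, 10), nn.ELU(),
            nn.Linear(10, 1),  # steering angle
        )

    def forward(self, x):
        # x: (batch, 3, 66, 200), pixels pre-normalized (e.g. to [-1, 1])
        return self.regressor(self.features(x))

model = PilotNet()
steering = model(torch.zeros(1, 3, 66, 200))  # shape: (1, 1)
```

In the original setup the network is trained by minimizing the mean squared error between its output and the steering angle recorded from a human driver.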
To test these models, we can use one of the various simulated environments available, such as Udacity's self-driving car simulator [5], CARLA [6], or AirSim [7]. However, we use an MIT RACECAR [8] based platform running on a Jetson TX2. This repo is inspired by several other works [9].
TODO
TODO
[1]: End-to-End Deep Learning for Self-Driving Cars | Blog post, Paper
[2]: An End-to-End Deep Neural Network for Autonomous Driving Designed for Embedded Automotive Platforms
[3]: Autonomous Vehicle Control: End-to-end Learning in Simulated Urban Environments
[4]: Reinforcement Learning for Autonomous Driving | Source 1, Source 2, Source 3, Source 4
[5]: Udacity Self-Driving Car Simulator
[6]: CARLA: An Open Urban Driving Simulator | GitHub repo, Paper
[7]: AirSim | GitHub repo, Autonomous Driving using End-to-End Deep Learning: an AirSim tutorial