This repository hosts the code related to the paper:
Marco Rosano, Antonino Furnari, Luigi Gulino, and Giovanni Maria Farinella, "On Embodied Visual Navigation in Real Environments Through Habitat". International Conference on Pattern Recognition (ICPR). 2020. Download the paper
For more details please see the project web page at https://iplab.dmi.unict.it/EmbodiedVN.
This code is built on top of the Habitat-api/Habitat-lab project. Please see the Habitat project page for more details.
This repository provides the following components:
- The official PyTorch implementation of the proposed domain adaptation approach, including the generalized noise models used to simulate the inaccuracy of real sensors and actuators;
- the virtual 3D model of the proposed environment, acquired with the Matterport 3D scanner and used to carry out all the experiments;
- the real images of the proposed environment, labeled with their pose. The sparse 3D reconstruction was performed using the COLMAP Structure from Motion tool and then aligned with the Matterport virtual 3D map;
- an integration with CycleGAN to train and evaluate navigation models on Habitat with domain-translated images;
- the checkpoints of the best performing navigation model and of the CycleGAN sim2real domain adaptation model.
To run the code you need:

- Python >= 3.7 (use version 3.7 to avoid possible issues);
- the other requirements, which will be installed via `pip` in the following steps.
To install:

- (Optional) Create an Anaconda environment and install everything inside it:

  ```bash
  conda create -n DA-habitat python=3.7
  ```

- Install the customized Habitat-lab (this repo):

  ```bash
  git clone https://github.com/rosanom/habitat-domain-adaptation.git
  cd habitat-domain-adaptation/
  pip install -r requirements.txt
  python setup.py develop --all # install habitat and habitat_baselines
  ```
- (Optional) Download the test scenes data, as suggested in the Habitat-lab repository, and extract the `data` folder from the zip to `habitat_domain_adaptation/data/`, where `habitat_domain_adaptation/` is the GitHub repository folder. To verify that the tool was installed successfully, run:

  ```bash
  python examples/benchmark.py
  ```

- Download our dataset from here, and extract it to `habitat_domain_adaptation/`. Inside the `data` folder you should see this structure:

  ```
  datasets/pointnav/orangedev/v1/...
  real_images/orangedev/...
  scene_datasets/orangedev/...
  orangedev_checkpoints/...
  ```
- Move to the parent directory of `habitat_domain_adaptation/` and clone the `habitat-sim` repository, following its install-from-source instructions. (Development and testing were done on commit `bfbe9fc30a4e0751082824257d7200ad543e4c0e`; if the latest version of the simulator does not work properly, consider checking out this commit.)
- Copy the custom noise model files into the simulator directory. Specifically, copy the `habitat_domain_adaptation/habitat_sim_noise_model/__init__.py` file to `habitat-sim-path/habitat-sim/`, overwriting the existing file. Then copy the files in `habitat_domain_adaptation/habitat_sim_noise_model/agent/controls/` to `habitat-sim-path/habitat-sim/agent/controls/`, overwriting the `__init__.py` file there as well. (These clone and copy steps are sketched in the shell snippet right after this list.)
- Continue the `habitat-sim` installation procedure. Skip the conda environment creation (point 2 of those instructions) if the conda environment was already created at the beginning of this guide, or if conda is not used.
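For reference, the clone, checkout, and file-copy steps above might look like the following shell sketch. It is only a sketch under the stated assumptions: the commands are run from the parent directory of `habitat_domain_adaptation/`, `HABITAT_SIM_PATH` is a placeholder for the location referred to as `habitat-sim-path` above, and the habitat-sim build itself still follows the official install-from-source instructions.

```bash
# Clone habitat-sim next to habitat_domain_adaptation/ and pin the tested commit
git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim/
git checkout bfbe9fc30a4e0751082824257d7200ad543e4c0e  # commit used for development and testing
cd ..

# Placeholder for the directory referred to as "habitat-sim-path" above; adjust to your setup
HABITAT_SIM_PATH=/path/to/habitat-sim-path

# Overwrite the simulator's __init__.py with the custom one
cp habitat_domain_adaptation/habitat_sim_noise_model/__init__.py \
   "$HABITAT_SIM_PATH/habitat-sim/"

# Copy the custom control files, overwriting the existing __init__.py in controls/
cp habitat_domain_adaptation/habitat_sim_noise_model/agent/controls/* \
   "$HABITAT_SIM_PATH/habitat-sim/agent/controls/"
```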
To verify that `habitat_domain_adaptation` and `habitat-sim` with the custom noise model are working correctly, take a look at `habitat_domain_adaptation/test/test_noisy_sensors.py`. You can just run it as is, or play with the simulator parameters in the script.
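For example (a minimal invocation, assuming the script takes no arguments), from the repository root:

```bash
cd habitat_domain_adaptation/
python test/test_noisy_sensors.py
```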
All data can be found inside the `habitat_domain_adaptation/data/` folder:

- the `datasets/pointnav/orangedev/v1/...` folder contains the generated train and validation navigation episode files;
- the `real_images/orangedev/...` folder contains the real-world images of the proposed environment and the `csv` file with their pose information (obtained with COLMAP);
- the `scene_datasets/orangedev/...` folder contains the 3D mesh of the proposed environment;
- `orangedev_checkpoints/` is the folder where the checkpoints are saved during training. Place a checkpoint file here if you want to resume training or evaluate the model; the system will load the most recent checkpoint file.
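As a quick sanity check of the layout described above, you can list the expected sub-folders (a minimal sketch that only verifies the directories exist):

```bash
cd habitat_domain_adaptation/
ls data/datasets/pointnav/orangedev/v1/ \
   data/real_images/orangedev/ \
   data/scene_datasets/orangedev/ \
   data/orangedev_checkpoints/
```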
There are two configuration files: `habitat_domain_adaptation/configs/tasks/pointnav_orangedev.yaml` and `habitat_domain_adaptation/habitat_baselines/config/pointnav/ddppo_pointnav_orangedev.yaml`.
In the first file you can change the robot's properties, the sensors used by the agent, and the amount of noise to be introduced in the sensors and in the actuators:
```yaml
...
# noisy actions
ACTION_SPACE_CONFIG: customrobotnoisy
NOISE_MODEL:
  ROBOT: Universal
  CONTROLLER: Proportional
  NOISE_STD: 0.05 # in meters
  ROT_NOISE_STD: 5.0 # in degrees
...
```

```yaml
...
# noisy loc. sensor
POINTGOAL_WITH_GPS_COMPASS_SENSOR:
  GOAL_FORMAT: "POLAR"
  DIMENSIONALITY: 2
  GPS_NOISE_AMOUNT: 0.2 # meters
  ROT_NOISE_AMOUNT: 7.0 # degrees
GOAL_SENSOR_UUID: pointgoal_with_gps_compass
...
```
In the second file you can change the learning parameters, choose whether to train or evaluate using real images, and choose whether to use the CycleGAN sim2real model:
```yaml
...
TRAIN_W_REAL_IMAGES: True
EVAL_W_REAL_IMAGES: True
SIM_2_REAL: False # use CycleGAN
...
```
In order to use CycleGAN on Habitat for the sim2real domain adaptation, follow these steps (the full sequence is sketched in the shell snippet after this list):

- clone the CycleGAN repository into the repository root (`habitat_domain_adaptation/`);
- rename the `pytorch-CycleGAN-and-pix2pix` folder to `cyclegan`;
- download our CycleGAN checkpoint file from here and extract it to `cyclegan/checkpoints/orangedev/`;
- add the CycleGAN repo path to the `~/.bashrc` file. Open it with your text editor and add this line at the end:

  ```bash
  export PYTHONPATH=$PYTHONPATH:/absolute/path/to/cyclegan/
  ```

  Then, source the `~/.bashrc` file.
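Putting the steps above together, the setup might look like the following shell sketch. It assumes the CycleGAN code is the `pytorch-CycleGAN-and-pix2pix` repository by junyanz (the URL below is an assumption based on the folder name) and that the checkpoint archive has already been downloaded from the link above and extracted into the checkpoint folder.

```bash
cd habitat_domain_adaptation/

# Clone CycleGAN into the repository root and rename the folder
# (repository URL assumed from the folder name mentioned above)
git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.git
mv pytorch-CycleGAN-and-pix2pix cyclegan

# Create the checkpoint folder and extract the downloaded CycleGAN checkpoint into it
mkdir -p cyclegan/checkpoints/orangedev/

# Make the CycleGAN code importable, then reload the shell configuration
echo "export PYTHONPATH=\$PYTHONPATH:$(pwd)/cyclegan/" >> ~/.bashrc
source ~/.bashrc
```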
To train the navigation model using the DD-PPO RL algorithm, run:

```bash
sh habitat_baselines/rl/ddppo/single_node_orangedev.sh
```

To evaluate the navigation model using the DD-PPO RL algorithm, run:

```bash
sh habitat_baselines/rl/ddppo/single_node_orangedev_eval.sh
```
For more information about the DD-PPO RL algorithm, please check out the habitat-lab DD-PPO repository page.
The code in this repository, the 3D models and the images of the proposed environment are MIT licensed. See the LICENSE file for details.
The trained models and the task datasets are considered data derived from the corresponding scene datasets.
- Matterport3D based task datasets and trained models are distributed with Matterport3D Terms of Use and under CC BY-NC-SA 3.0 US license.
- Gibson based task datasets, the code for generating such datasets, and trained models are distributed with Gibson Terms of Use and under CC BY-NC-SA 3.0 US license.
If you use the code/data of this repository in your research, please cite the paper:
```
@inproceedings{rosano2020navigation,
  title={On Embodied Visual Navigation in Real Environments Through Habitat},
  author={Rosano, Marco and Furnari, Antonino and Gulino, Luigi and Farinella, Giovanni Maria},
  booktitle={International Conference on Pattern Recognition (ICPR)},
  year={2020}
}
```