
NARUTO

Neural Active Reconstruction from Uncertain Target Observations (CVPR-2024)


This is an official implementation of the paper "NARUTO: Neural Active Reconstruction from Uncertain Target Observations".

Ziyue Feng*†1,2, Huangying Zhan*†‑1, Zheng Chen†1,3, Qingan Yan1,
Xiangyu Xu1, Changjiang Cai1, Bing Li2, Qilun Zhu2, Yi Xu1

1 OPPO US Research Center, 2 Clemson University, 3 Indiana University

* Co-first authors with equal contribution
† Work done as an intern at OPPO US Research Center
‑ Corresponding author

Table of Contents

  • 📁 Repo Structure
  • 🛠️ Installation
  • 💿 Dataset Preparation
  • Running NARUTO
  • 🔎 Evaluation
  • 🎨 Run on Customized Scenes
  • 📜 License
  • 🤝 Acknowledgement
  • 📖 Citation

πŸ“ Repo Structure

# Main directory
├── NARUTO (ROOT)
│   ├── assets                                      # README assets
│   ├── configs                                     # experiment configs
│   ├── data                                        # dataset
│   │   └── MP3D                                    # Matterport3D for Habitat data
│   │   └── mp3d_data                               # Matterport3D raw Dataset
│   │   └── Replica                                 # Replica SLAM Dataset
│   │   └── replica_v1                              # Replica Dataset v1
│   ├── envs                                        # environment installation
│   ├── results                                     # experiment results
│   ├── scripts                                     # scripts
│   │   └── data                                    # data related scripts
│   │   └── evaluation                              # evaluation related scripts
│   │   └── installation                            # installation related scripts
│   │   └── naruto                                  # running NARUTO scripts
│   ├── src                                         # source code
│   │   └── data                                    # data code
│   │   └── evaluation                              # evaluation code
│   │   └── layers                                  # pytorch layers
│   │   └── naruto                                  # NARUTO framework code
│   │   └── planning                                # planning code
│   │   └── simulator                               # simulator code
│   │   └── slam                                    # SLAM code
│   │   └── utils                                   # utility code
│   │   └── visualization                           # visualization code
│   ├── third_parties                               # third parties
│   │   └── coslam                                  # Co-SLAM
│   │   └── habitat_sim                             # habitat-sim
│   │   └── neural_slam_eval                        # evaluation tool


# Data structure
├── data                                            # dataset dir
│   ├── MP3D                                        # Matterport3D data
│   │   └── v1
│   │       └── tasks
│   │           └── mp3d_habitat
│   │               ├── 1LXtFkjw3qL
│   │               └── ...
│   ├── replica_v1                                  # Replica-Dataset
│   │   └── apartment_0
│   │       └── habitat
│   │           └── replicaSDK_stage.stage_config.json
│   │   └── ...
│   ├── Replica                                     # Replica SLAM Dataset
│   │   └── office_0
│   │   └── ...

# Configuration structure
├── configs                                         # configuration dir
│   ├── default.py                                  # default overall configuration
│   ├── MP3D                                        # Matterport3D
│   │   └── mp3d_coslam.yaml                        # CoSLAM default configuration for MP3D
│   │   └── {SCENE}
│   │       └── {EXP}.py                            # experiment-specific overall configuration, inherits from default.py
│   │       └── coslam.yaml                         # scene-specific CoSLAM configuration, inherits from mp3d_coslam.yaml
│   │       └── habitat.py                          # scene-specific HabitatSim configuration
│   │   └── ...
│   ├── Replica                                     # Replica
│   │   └── ...

NOTE: default.py/{EXP}.py has the highest priority and can override configurations in the other config files (e.g. mp3d_coslam.yaml/coslam.yaml, habitat.py).
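For illustration, here is a minimal sketch of how such layered overrides could be resolved; the function name and config keys below are hypothetical, and the actual loading code in src/ may differ.

# Hypothetical sketch of the override order described in the NOTE above;
# later sources win, and the experiment .py config is applied last.
import yaml

def load_layered_config(yaml_paths, exp_overrides):
    cfg = {}
    for path in yaml_paths:
        with open(path) as f:
            cfg.update(yaml.safe_load(f) or {})  # scene yaml overrides dataset yaml
    cfg.update(exp_overrides)                    # default.py / {EXP}.py values win
    return cfg

# e.g. (hypothetical keys):
# cfg = load_layered_config(
#     ["configs/MP3D/mp3d_coslam.yaml", "configs/MP3D/GdvgFV5R1Z5/coslam.yaml"],
#     exp_overrides={"enable_vis": False},
# )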

# Result structure
├── results                                         # result dir
│   ├── MP3D                                        # Matterport3D result
│   │   └── GdvgFV5R1Z5
│   │       └── {EXPERIMENT_SETUP}
│   │           └── run_{COUNT}
│   │               └── eval_result.txt
│   │               └── coslam
│   │                   └── checkpoint
│   │                   └── mesh
│   │   └── ...
│   ├── Replica                                     # Replica result
│   │   └── office_0
│   │       └── {EXPERIMENT_SETUP}
│   │           └── run_{COUNT}
│   │               └── eval_result.txt
│   │               └── coslam
│   │                   └── checkpoint
│   │                   └── mesh
│   │   └── ...

🛠️ Installation

Install NARUTO

# Clone the repo with the required third parties.
git clone --recursive https://github.com/oppo-us-research/NARUTO.git

# We assume ROOT as the project directory.
cd NARUTO
ROOT=${PWD}

In this repo, we provide two types of environment installation: Docker and Anaconda.

Users can install either one. The installation process includes:

  • installation of our updated Habitat-Sim, in which the geometry compilation is updated.

  • installation of Co-SLAM, which is used as our mapping backbone.

  • installation of other packages required to run NARUTO.

[Option 1] Docker Environment

Follow the steps to install the Docker environment:

# Build Docker image
bash scripts/installation/docker_env/build.sh

# Run Docker container
bash scripts/installation/docker_env/run.sh

# Activate conda env in Docker Env
source activate naruto

# Install tinycudann as required in Co-SLAM
# We attempted to include this installation in the Docker build, but it fails there for unknown reasons.
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

[Option 2] Conda Environment

Follow the steps to install the conda environment:

# Build conda environment
bash scripts/installation/conda_env/build.sh

# Activate conda env
source activate naruto

💿 Dataset Preparation

Replica Data

Follow the steps to download the Replica Dataset.

# Download Replica data and save as data/replica_v1.
# This process can take a while.
bash scripts/data/replica_download.sh data/replica_v1

# Once the download is complete, create modified Habitat simulator configs that adjust the coordinate system direction.
# We adjust the configs so the simulator coordinates match the mesh coordinates.
bash scripts/data/replica_update.sh data/replica_v1
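
For reference, here is a minimal sketch of the kind of edit this step performs, assuming Habitat's standard stage-config orientation attributes ("up" and "front"); see scripts/data/replica_update.sh for the exact values it writes.

# Hypothetical sketch: patch a Replica stage config so the simulator frame
# matches the mesh frame. The vectors below are assumptions; the released
# script is the source of truth.
import json

path = "data/replica_v1/apartment_0/habitat/replicaSDK_stage.stage_config.json"
with open(path) as f:
    cfg = json.load(f)

cfg["up"] = [0.0, 0.0, 1.0]     # assumed: +Z up, matching the mesh coordinates
cfg["front"] = [0.0, 1.0, 0.0]  # assumed front direction

with open(path, "w") as f:
    json.dump(cfg, f, indent=4)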

[Optional] We also use the Replica (SLAM) data for some tasks, e.g. passive mapping and initializing the starting position.

# Download Replica (SLAM) Data and save as data/Replica
bash scripts/data/replica_slam_download.sh

Matterport3D

To download the Matterport3D dataset, please refer to the instructions in Matterport3D.

The download script is not included here because using the Matterport3D data requires agreeing to its Terms of Use.

However, our method does not require the full Matterport3D dataset; users can download only the data related to the habitat task.

# Example use of the Matterport3D download script:
python download_mp.py -o data/MP3D --task_data habitat

# Unzip data
cd data/MP3D/v1/
unzip mp3d_habitat.zip
rm mp3d_habitat.zip
cd ${ROOT}

Running NARUTO

Here we provide the script to run the full NARUTO system described in the paper; it also runs the evaluation described in the Evaluation section below. We additionally provide a flowchart to help users understand the logic flow of the NARUTO planner.

# Run Replica 
bash scripts/naruto/run_replica.sh {SceneName/all} {NUM_TRIAL} {EXP_NAME} {ENABLE_VIS}

# Run MP3D 
bash scripts/naruto/run_mp3d.sh {SceneName/all} {NUM_TRIAL} {EXP_NAME} {ENABLE_VIS}

# examples
bash scripts/naruto/run_replica.sh office0 1 NARUTO 1
bash scripts/naruto/run_mp3d.sh gZ6f7yhEvPG 1 NARUTO 0
bash scripts/naruto/run_replica.sh all 5 NARUTO 0
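
To make the planner's core idea concrete, below is a toy, self-contained sketch of uncertainty-guided goal selection; it is purely illustrative and does not reflect the actual implementation in src/planning and src/naruto.

# Toy sketch: pick the most uncertain map region as the next exploration goal.
# The uncertainty grid here is random; in NARUTO it comes from the learned map.
import numpy as np

rng = np.random.default_rng(0)
uncertainty = rng.random((32, 32, 32))  # stand-in per-voxel uncertainty
goal = np.unravel_index(uncertainty.argmax(), uncertainty.shape)
print("next goal voxel:", goal)         # a path planner then drives the agent there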

🔎 Evaluation

We evaluate the reconstruction using the following metrics (the completion ratio uses a threshold of 5 cm):

  • Accuracy (cm)
  • Completion (cm)
  • Completion ratio (%)

We also compute the mean absolute distance, MAD (cm): the mean absolute value of the estimated SDF, evaluated at all vertices of the ground-truth mesh.

In line with prior work [65, 66], we refine the predicted mesh using a mesh-culling technique that removes unobserved regions as well as noisy points that fall within the camera frustum but outside the target scene. Refer to [65] for a detailed explanation of the mesh-culling process.
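
For intuition, here is a minimal sketch of how these metrics can be computed, assuming pred and gt are (N, 3) point clouds (in meters) sampled from the culled predicted mesh and the ground-truth mesh; the reference implementation is in third_parties/neural_slam_eval.

# Minimal sketch of the standard reconstruction metrics; illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_metrics(pred, gt, threshold=0.05):
    """pred, gt: (N, 3) point clouds in meters; threshold: 5 cm."""
    pred_to_gt = cKDTree(gt).query(pred)[0]  # distance from each predicted point to GT
    gt_to_pred = cKDTree(pred).query(gt)[0]  # distance from each GT point to prediction
    accuracy = pred_to_gt.mean() * 100                        # cm
    completion = gt_to_pred.mean() * 100                      # cm
    completion_ratio = (gt_to_pred < threshold).mean() * 100  # %
    return accuracy, completion, completion_ratio

# MAD (cm) would additionally average |SDF(v)| over all ground-truth mesh
# vertices v, querying the learned SDF network (not shown here).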

Quantitative Evaluation

# Evaluate Replica / MP3D results
bash scripts/evaluation/eval_replica.sh {SceneName/all} {TrialIndex} {IterationToBeEval}
bash scripts/evaluation/eval_mp3d.sh {SceneName/all} {TrialIndex} {IterationToBeEval}

# Examples
bash scripts/evaluation/eval_replica.sh office0 0 2000
bash scripts/evaluation/eval_mp3d.sh gZ6f7yhEvPG 0 5000

Qualitative Evaluation

# Draw trajectory in the scene mesh
bash scripts/evaluation/visualize_traj.sh {MESH_FILE} {TRAJ_DIR} {CAM_VIEW} {OUT_DIR}

# examples
bash scripts/evaluation/visualize_traj.sh \
results/Replica/office0/NARUTO/run_0/coslam/mesh/mesh_2000_final_cull_occlusion.ply \
results/Replica/office0/NARUTO/run_0/visualization/pose \
src/visualization/default_camera_view.json \
results/Replica/office0/NARUTO/run_0/visualization/traj_vis_view_1

🎨 Run on Customized Scenes

1. Prepare a 3D model (e.g. ply/glb files).
2. Use Blender to adjust the coordinate system (we currently use RDF in Blender; when it is properly adjusted, you should be looking at the front of the model and the model should be standing upright). See the sketch after this list.
3. Delete all other unnecessary objects in the `Scene Collection`.
4. Scale and translate the object to a proper location and size.
5. Add a cube (Shift+A) as the bounding box. Note: the cube should be scaled with negative scale values!
6. Export the model as glb.
7. Create configuration files as listed in `configs/NARUTO/hokage_room`.
8. Run NARUTO!
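
As a starting point for steps 2-6, here is a hypothetical Blender Python (bpy) sketch, to be run in Blender's Scripting tab; the exact rotation and scale depend on your model's original orientation.

# Hypothetical bpy sketch for steps 2-6; adjust the values to your model.
import math
import bpy

obj = bpy.context.active_object                     # the imported model
obj.rotation_euler = (math.radians(-90), 0.0, 0.0)  # assumed: make the model stand upright
obj.scale = (1.0, 1.0, 1.0)                         # set a proper metric scale

# Step 6: export the whole scene as glb.
bpy.ops.export_scene.gltf(filepath="my_scene.glb", export_format="GLB")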

📜 License

NARUTO is licensed under the MIT license. For the third-party code, please refer to the corresponding licenses.

🤝 Acknowledgement

We sincerely thank the owners of the following open-source projects, which our released code builds on: CoSLAM, HabitatSim.

📖 Citation

@article{feng2024naruto,
  title={NARUTO: Neural Active Reconstruction from Uncertain Target Observations},
  author={Feng, Ziyue and Zhan, Huangying and Chen, Zheng and Yan, Qingan and Xu, Xiangyu and Cai, Changjiang and Li, Bing and Zhu, Qilun and Xu, Yi},
  journal={arXiv preprint arXiv:2402.18771},
  year={2024}
}