SMMPDA

This repository contains the code of my master's thesis, "Semantic Landmark-based Localization in HD-Maps using Object-level Detections." It implements Semantic Max-Mixture Probabilistic Data Association (SMMPDA) localization using the miniSAM library; simulations are conducted with CARLA. The project was developed on Ubuntu 18.04 LTS. A publication at ITSC 2021 based on this work, together with the RST Lab at TU Dortmund, is in progress.

SMMPDA is a localization algorithm based on sliding-window factor graphs combined with a PDA-like data association scheme. The following animation gives a sense of the localization results in CARLA simulations. The elements have the following meanings:

  • Triangles with ellipses: The estimated poses in the sliding window. Both examples use a window size of 10.
  • Blue curve: The ground truth trajectory.
  • Cross: The ground truth position at the current time step.
  • Blue and red dots: Pole objects in the map (blue: general pole; red: traffic sign pole).
  • Jumping dot: The GNSS reading. Biases are added to degrade its reliability in both cases.
  • Orange & green curves: The detected lane boundaries.
  • Lines connecting to hollow dots: The detected pole objects. The hollow dot ahead of the pose triangle is the camera (blue: detected as a general pole; red: detected as a traffic sign pole).

Note: The implementation also uses stop line detections (emulated by road surface stop signs in CARLA), which are not visualized.

In the first (highway) case, SMMPDA uses semantic lane boundary measurements to recover from an initial belief in the wrong lane, caused by laterally biased GNSS measurements, and achieves lane-level accuracy. SMMPDA is capable of reinitialization when it detects that previous beliefs were potentially wrong, which takes effect after the lane change in this case. (Poles are not used in this case.)

In the second (urban) case, SMMPDA delivers fairly accurate localization even with GNSS biases in both the longitudinal and lateral directions.
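
To give a rough sense of the max-mixture idea: where classical PDA sums over all association hypotheses, a max-mixture factor keeps only the most likely one. The following Python sketch is purely illustrative and is not the factor implementation used in this repo:

import numpy as np

def max_mixture_error(predicted, candidates, sigma):
    """Return the whitened residual of the most likely landmark association."""
    best_err, best_cost = None, np.inf
    for landmark in candidates:
        err = (predicted - landmark) / sigma  # whitened residual
        cost = float(err @ err)               # squared Mahalanobis distance
        if cost < best_cost:
            best_cost, best_err = cost, err
    return best_err                           # component with maximum likelihood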

Environment Setup

Using miniconda is recommended because this repo was developed with it. The repo comes with an environment.yml file that facilitates setting up the environment.

To create the same conda environment with the default name "smmpda":

conda env create -f environment.yml

Refer to Creating an environment from an environment.yml file for more info.
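
After creating it, activate the environment before running any of the commands below:

conda activate smmpda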

Dependencies

This project depends on CARLA and miniSAM, which need to be installed manually in addition to the conda environment. Follow their official guides for proper installation.

CARLA
Version 0.9.10 is used throughout this repo. The corresponding .egg file is included in the root directory of this repo. To upgrade, replace the .egg file with a newer version. CARLA suggests two ways to import its Python API, as described here. The first, admittedly inelegant, method of adding the .egg file to the system path is used because easy_install seems to be deprecated (ref1, ref2).
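
In practice the import then looks roughly like the sketch below; the exact glob pattern used in this repo may differ:

import glob
import sys

# Put the .egg shipped in the repo root on sys.path before importing carla.
# The file name depends on the Python version and platform.
try:
    sys.path.append(glob.glob('carla-*.egg')[0])
except IndexError:
    raise RuntimeError('CARLA .egg not found in the repo root')

import carla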

miniSAM
Follow the instructions in C++ Compile and Installation and Python Installation to install the miniSAM library. Some additional dependencies are required by miniSAM. Make sure the correct conda environment is activated when installing the miniSAM Python package so that it is installed into that environment. Note that when miniSAM is built with MINISAM_WITH_MULTI_THREADS set to ON, the Python debugger does not work inside a factor class (in my case, VS Code). I also did not experience much of a speed improvement with multi-threading turned on.
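
For orientation, a typical CMake build could look like the sketch below; treat the official miniSAM documentation as authoritative for the exact steps and options:

mkdir build && cd build
cmake .. -DMINISAM_WITH_MULTI_THREADS=OFF
make
sudo make install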

Install SMMPDA Package

This repo comes with a setup.py file that wraps this project into an installable Python package. Once installed, the library can be imported easily from any Python module, which makes testing more convenient. (ref)

To install the smmpda package in develop mode:

cd [PATH_TO_REPO]
pip install -e .
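
As a quick sanity check after installation, assuming the packages keep the folder names listed under "Packages" below:

python -c "import carlasim, detection, localization, model"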

Quick Start

The workflow consists of three tasks that are run separately. Remember to activate the correct conda environment.

1. Collect data in CARLA with a predefined path.

Launch CARLA, then run:

python -O raw_collector.py settings/carlasim.yaml -r

This spawns a Mustang (yes, I'm old-school) with sensors in the CARLA world, which wanders around and collects data from the sensors. The -O flag turns on the optimization mode of the Python interpreter and turns off debug features. When running without this flag, some shapes will be drawn in the CARLA world, e.g. lane boundary points, for debugging purposes.

The first argument is the configuration YAML file that defines all details of the CARLA simulation, such as the map to use, the weather, how the car is controlled, and the sensor configurations. The folder settings contains a carlasim.yaml that can be used for quick tests. See the comments in the files for more information on how to set the parameters. If a sensor noise parameter is set to 0, CARLA simply gives you the ground truth value for the corresponding measurement. The folder settings/routes contains several pre-defined config files that differ only in their waypoints.
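
For orientation only: such a config nests simulation and sensor blocks. The keys below are hypothetical placeholders, not the actual carlasim.yaml schema:

# Hypothetical sketch, NOT the real carlasim.yaml schema
world:
  map: Town03
sensors:
  gnss:
    noise_stddev: 0.0  # 0 means CARLA returns the ground truth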

The flag -r enables saving of the recorded data specified in the configuration file mentioned above. A recordings folder is created in the root of this project the first time, under which the recorded data of a simulation are saved into a folder named after the time the simulation was run. It is recommended to rename the folder to something more meaningful right after saving. Data are stored in 3 files:

  1. sensor_data.pkl: Simulated raw sensor data.
  2. gt_data.pkl: Ground truth data.
  3. carla_recording.log: Data for CARLA replay; see here.

Besides the recordings, a copy of the used CARLA simulation configuration file, named config.yaml, is saved in a settings folder under the same recording folder for future reference.
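
The .pkl files can be inspected directly with Python's pickle module. A minimal sketch, using the example recording name from step 2 below; the internal structure of the loaded objects depends on the configured sensors:

import pickle

# Load the raw sensor data and ground truth of a recording for inspection.
with open('recordings/test/sensor_data.pkl', 'rb') as f:
    sensor_data = pickle.load(f)
with open('recordings/test/gt_data.pkl', 'rb') as f:
    gt_data = pickle.load(f)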

2. Generate simulated object-level detections

This step doesn't require CARLA. Say you have saved the collected data in recordings/test in the first step; generate simulated detections and the pole map for it with:

python -O detection_generator.py recordings/test settings/vision.yaml settings/sim_detection.yaml settings/pole_map.yaml

Running it without the -O flag shows a visualization of the generated pole map at the end. The first argument is the folder of the recording. The file vision.yaml defines parameters for the vision-based detection algorithms. sim_detection.yaml defines parameters for data generated directly from ground truth, which are more artificial than the vision-based part. pole_map.yaml defines parameters for pole map generation. See the comments in these files for more information. It took me some time to tune the parameters, but feel free to fine-tune them.

The results are saved in 2 files under the same folder as the recorded data:

  1. detections.pkl: Simulated detections.
  2. pole_map.pkl: Generated pole map.

The 3 above-mentioned configuration files are also copied into the settings folder under the same recording folder for future reference.

After generating the simulated detections, you can visualize the results by running:

python detection_viewer.py recordings/test

3. Run SMMPDA localization on simulated data

Say you have generated simulated detection data in recordings/test in the second step. Launch CARLA (preferably in no rendering mode), then run the following if you added measurement noise in steps 1 and 2:

python sliding_window_localization.py recordings/test settings/localization.yaml -s ANY_NAME_YOU_LIKE -ve

If you ran steps 1 and 2 with simulated noise configured to 0, there is still a way to add post-simulation noise:

python sliding_window_localization.py recordings/test settings/localization.yaml -n settings/post_noise.yaml -s ANY_NAME_YOU_LIKE -ve

The first argument is the recording folder. localization.yaml defines all parameters of SMMPDA localization. The flag -n turns on post-simulation noise and uses the parameters defined in post_noise.yaml to simulate it. This way, the same recording can be reused to simulate situations with different noise configurations, which is useful because recordings can take up a lot of space. The flag -s saves the localization results in a folder with the specified name under the folder results, which is created the first time localization results are saved. The flag -ve turns on the visualization of the resulting colored error plots.

In the save folder, 4 files are stored:

  1. localization.gif: Animation of the localization process.
  2. results.pkl: Localization results.
  3. localization.yaml: A copy of the SMMPDA localization configuration file for future reference.
  4. post_noise.yaml (optional): A copy of the post-simulation noise configuration file, if used.

Note that the first time a CARLA map is used for localization, a map image is created using pygame for visualization. It is cached in the folder cache/map_images, so it doesn't have to be created again afterwards.

Reproduce Localization Tests

To reproduce the tests performed in the thesis, follow these 3 steps:

1. Prepare recordings:

Prepare the 5 recordings with the following 5 scenario configurations:

  1. urban
  2. right_turn
  3. highway
  4. s2
  5. s3

e.g. python -O raw_collector.py settings/routes/town03/urban.yaml -r

Note: s2 means the ego car starts in the 2nd lane of a straight highway and makes a lane change to the right; similarly for s3.

Rename the recording folders so you have the following structure in the recordings folder:

recordings
├─highway
├─right_turn
├─s2
├─s3
└─urban

2. Prepare detection data

Use the command as in the section "Generate simulated object-level detections" above to generate detections for each scenario.

e.g. python -O detection_generator.py recordings/highway settings/vision.yaml settings/sim_detection.yaml settings/pole_map.yaml

Use the default sim_detection.yaml so that no simulated errors are introduced at this step, since post-simulation errors will be added during localization later.

3. Run localization tests

First, launch the CARLA server (preferably in no rendering mode). A Python script run_predef_tests.py has been prepared that reads scenarios.yaml and automatically runs through all configurations in parallel. Simply run:

python run_predef_tests.py
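
As an aside, one common way to switch an already-running CARLA 0.9.10 server to no rendering mode is the config utility shipped with CARLA (run from the CARLA installation folder; check the docs of your CARLA version if the flag differs):

python PythonAPI/util/config.py --no-rendering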

The configurations corresponding to the parameters used in the thesis are pre-defined and stored in the folder settings/tests. The script uses a process pool of 2 by default, since using more tends to crash CARLA on my laptop, but try it yourself; maybe you have better luck ;). Running the entire test set takes roughly 2 hours on my laptop with an i7-10750H. Afterwards, you should find for each recording a results folder containing subfolders named after the configs in scenarios.yaml, where the localization results are stored.

To visualize the results, check the folder helper_scripts.

Camera calibration and IPM parameters

The two Jupyter notebooks front_camera_calibration.ipynb and ipm_using_deal_vanish_point.ipynb are provided to generate the calibration and IPM parameters of the front-facing camera in an interactive way. The front bumper frame is the reference frame when computing the calibration matrix, so the calibration matrix relates coordinates in the front bumper frame to image pixels. Both notebooks use the images in the folder calib_images, either for computation or visualization. The results are stored in calib_data.pkl and ipm_data.pkl respectively, which are already part of the repo.
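
To illustrate what such a calibration matrix does, here is a minimal sketch. It assumes a 3x4 projection matrix P that maps homogeneous points (in camera-style axes, z pointing forward) to homogeneous pixels; the numbers are made up, and the actual layout of calib_data.pkl may differ:

import numpy as np

# Hypothetical projection matrix, NOT the values stored in calib_data.pkl.
P = np.array([[1000.,    0., 640., 0.],
              [   0., 1000., 360., 0.],
              [   0.,    0.,   1., 0.]])

X = np.array([0.5, -0.2, 10.0, 1.0])  # homogeneous point 10 m in front
u, v, w = P @ X
print(u / w, v / w)                   # pixel coordinates (690.0, 340.0)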

These parameters are essential for the detection simulation. If you need to change the configuration of the camera, remember to update these parameters as well, which can be done by reusing the two Jupyter notebooks with a few adjustments.

Packages

This repo currently contains 4 major packages:

  • carlasim: Contains modules related to CARLA simulation and data collection.
  • detection: Contains modules related to detection simulation.
  • localization: Contains modules implementing SMMPDA localization.
  • model: Contains the CTRV motion model implementation (see the sketch below).
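
For reference, CTRV (constant turn rate and velocity) propagates a pose assuming constant speed and yaw rate. A minimal sketch of the propagation step, not the repo's actual model module:

import numpy as np

def ctrv_step(x, y, yaw, v, omega, dt):
    """Propagate a CTRV state (position, yaw, speed v, yaw rate omega) by dt."""
    if abs(omega) < 1e-6:  # near-zero yaw rate: fall back to straight-line motion
        return x + v * np.cos(yaw) * dt, y + v * np.sin(yaw) * dt, yaw, v, omega
    x_new = x + v / omega * (np.sin(yaw + omega * dt) - np.sin(yaw))
    y_new = y + v / omega * (np.cos(yaw) - np.cos(yaw + omega * dt))
    return x_new, y_new, yaw + omega * dt, v, omega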
