
Udacity - Sensor Fusion Nanodegree

This Nanodegree program consists of four core courses:

  • Lidar
  • Camera
  • Radar
  • Kalman filters

Nanodegree certificate

In this project, the main goal is to process point clouds in order to detect cars and trucks on a narrow street using lidar. The detection pipeline takes the raw point cloud as input and outputs bounding boxes around the detected objects. The project is built on the PCL library, except for a few parts (RANSAC plane segmentation, KD-Tree, and Euclidean clustering) that I implemented from scratch to gain a deeper understanding of the subject.
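As a minimal sketch of the RANSAC plane-segmentation step described above (a hypothetical standalone `Point` type and `ransacPlane` function, independent of PCL): repeatedly sample three points, fit a plane, and keep the inlier set of the best candidate.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Point { double x, y, z; };

// RANSAC plane fit: over maxIter random 3-point samples, return the indices
// of the largest set of points within distTol of the fitted plane.
std::vector<int> ransacPlane(const std::vector<Point>& cloud,
                             int maxIter, double distTol) {
    std::mt19937 gen(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<int> pick(0, (int)cloud.size() - 1);
    std::vector<int> best;

    for (int it = 0; it < maxIter; ++it) {
        const Point& p1 = cloud[pick(gen)];
        const Point& p2 = cloud[pick(gen)];
        const Point& p3 = cloud[pick(gen)];
        // Plane normal (a, b, c) = (p2 - p1) x (p3 - p1)
        double a = (p2.y - p1.y) * (p3.z - p1.z) - (p2.z - p1.z) * (p3.y - p1.y);
        double b = (p2.z - p1.z) * (p3.x - p1.x) - (p2.x - p1.x) * (p3.z - p1.z);
        double c = (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x);
        double d = -(a * p1.x + b * p1.y + c * p1.z);
        double norm = std::sqrt(a * a + b * b + c * c);
        if (norm < 1e-9) continue;  // degenerate (collinear) sample

        std::vector<int> inliers;
        for (int i = 0; i < (int)cloud.size(); ++i) {
            double dist = std::fabs(a * cloud[i].x + b * cloud[i].y +
                                    c * cloud[i].z + d) / norm;
            if (dist <= distTol) inliers.push_back(i);
        }
        if (inliers.size() > best.size()) best = inliers;
    }
    return best;
}
```

In the lidar pipeline the best inlier set is the road plane, which is subtracted from the cloud before clustering the remaining obstacle points.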

This project gives you the opportunity to become familiar with different feature detectors and descriptors, exploiting their OpenCV implementations:

  • Feature detectors: Shi-Tomasi, Harris, FAST, BRISK, ORB, AKAZE, SIFT
  • Feature descriptors: BRISK, BRIEF, ORB, FREAK, AKAZE, SIFT

This is the final project of the camera core course. The main idea is to combine both lidar and camera data for 3D object tracking. A real-time object detection algorithm (YOLOv3) first identifies regions of interest (ROIs) in the colour image. The keypoint correspondences associated with each ROI are then used to match 3D objects over time. The time-to-collision (TTC) is estimated from both the lidar 3D points and the keypoint match pairs within the ROIs.

The radar sensor gives us the ability to estimate the position of a target as well as its velocity, making it complementary to lidar. In the final project, we implemented the simulation of an artificial moving target, assuming that its motion is fully described by a constant velocity model. Then, we simulated the signal that is transmitted toward the object and reflected back, as a real radar would receive it. This signal is used to compute the Range-Doppler Map (RDM), from which the position and velocity of the target can be read off. Because the RDM is inherently noisy due to radar clutter, 2D CFAR thresholding is applied to the map to finally detect where the object is and how fast it moves.
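The CFAR (Constant False Alarm Rate) idea is easiest to see in 1D before extending it to the 2D RDM: for each cell under test, average the surrounding training cells (skipping a few guard cells around the peak), scale that noise estimate by an offset, and declare a detection only if the cell exceeds the adaptive threshold. A minimal cell-averaging sketch (hypothetical `caCfar1D` helper; the project applies the same sliding window in both the range and Doppler dimensions):

```cpp
#include <vector>

// 1D cell-averaging CFAR over a power signal in linear units.
// train: training cells per side, guard: guard cells per side,
// offset: multiplicative threshold scaling on the noise average.
std::vector<int> caCfar1D(const std::vector<double>& signal,
                          int train, int guard, double offset) {
    std::vector<int> detections(signal.size(), 0);
    int margin = train + guard;
    for (int i = margin; i < (int)signal.size() - margin; ++i) {
        double noise = 0.0;
        int count = 0;
        // Sum training cells on both sides, excluding guard cells and the CUT.
        for (int j = i - margin; j <= i + margin; ++j) {
            if (j < i - guard || j > i + guard) { noise += signal[j]; ++count; }
        }
        double threshold = (noise / count) * offset;
        if (signal[i] > threshold) detections[i] = 1;
    }
    return detections;
}
```

Because the threshold adapts to the local noise floor, the false-alarm rate stays roughly constant even as the clutter level varies across the map, which a single global threshold cannot achieve.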

The main goal of this project is to implement an Unscented Kalman Filter to estimate the state of multiple cars on a highway using noisy lidar and radar measurements. We assume that each object is entirely described by the state vector $\mathbf{x}=[p_x, p_y, v, \psi, \dot{\psi}]$ (position, speed, yaw angle, and yaw rate) and behaves according to a Constant Turn Rate and Velocity (CTRV) model, described in the pdf CTRV Model.
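The deterministic part of the CTRV process model integrates the state along a circular arc when the yaw rate is nonzero, and falls back to straight-line motion when it is (near) zero to avoid dividing by the yaw rate. A minimal sketch of this prediction step (hypothetical `ctrvPredict` helper; process noise and the sigma-point machinery of the UKF are omitted):

```cpp
#include <array>
#include <cmath>

// CTRV process model: propagate the state [px, py, v, yaw, yawd]
// forward by dt seconds (deterministic part only, no process noise).
std::array<double, 5> ctrvPredict(const std::array<double, 5>& x, double dt) {
    double px = x[0], py = x[1], v = x[2], yaw = x[3], yawd = x[4];
    if (std::fabs(yawd) > 1e-6) {
        // Turning: integrate velocity along a circular arc.
        px += v / yawd * (std::sin(yaw + yawd * dt) - std::sin(yaw));
        py += v / yawd * (-std::cos(yaw + yawd * dt) + std::cos(yaw));
    } else {
        // Near-zero yaw rate: straight-line motion.
        px += v * std::cos(yaw) * dt;
        py += v * std::sin(yaw) * dt;
    }
    yaw += yawd * dt;
    return {px, py, v, yaw, yawd};
}
```

In the UKF this nonlinear map is applied to each sigma point rather than linearized, which is what lets the filter handle the CTRV model better than an EKF would.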

To evaluate the solution, the RMSE of the estimated state must stay below a well-defined tolerance.
