

RoadSense

Real-time Detection of Road Damage
Explore the docs »

About The Project

Potholes and poor road conditions pose significant risks to road safety, causing accidents, vehicle damage, and traffic delays. According to a report by the Canadian Automobile Association (CAA), poor-quality roads cost Canadian drivers approximately $3 billion annually, with an average of $126 per vehicle per year in additional operating costs. This translates to over $1,250 in extra costs over the 10-year lifespan of a vehicle.

The goal of this project is to develop and implement a real-time road damage detection and severity assessment system using computer vision technology accessible through smartphones, dash cams, and traffic cameras. Additionally, the system will predict maintenance needs based on historical data to optimize road repair schedules.

(back to top)

Built With

Python · TensorFlow · PyTorch · Azure · Visual Studio Code

(back to top)

Getting Started

git clone https://github.com/hyoon1/RoadSense.git
cd RoadSense

Prerequisites

  • Python 3.9
  • tensorflow 2.10
  • pytorch 2.3
  • wandb

Installation

Install the required packages

pip install -r requirements.txt

Install the YOLOv8 package from Ultralytics

pip install ultralytics
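To quickly confirm the installation, the short Python check below can be run; it only assumes the packages listed under Prerequisites are installed.

    # sanity check: confirm the core dependencies import and report their versions
    import tensorflow as tf
    import torch
    import ultralytics

    print("TensorFlow:", tf.__version__)       # expected ~2.10
    print("PyTorch:", torch.__version__)       # expected ~2.3
    print("Ultralytics:", ultralytics.__version__)
    print("CUDA available:", torch.cuda.is_available())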

Usage

To use this project, follow the steps below:

  1. Prepare the dataset by filtering and converting it to the YOLO format.
  2. Train the YOLOv8 model using the prepared dataset.
  3. Run inference using the trained model.

(back to top)

ML Use Case 1: Road Damage Detection

Dataset

The model is trained on the RDD2022 dataset. You can download the dataset from the official RDD2022 repository.


Data Preparation

  1. Filter the dataset to include only specific labels (D00, D10, D20, D40) using the provided 'filter_dataset.py' script.
    python filter_dataset.py
  2. Convert the filtered RDD2022 dataset to the YOLO format using the provided 'yolo_data_converter.py' script. This script also splits the dataset into training and validation sets (a rough sketch of the conversion logic follows this list).
    python yolo_data_converter.py
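The conversion logic itself lives in 'yolo_data_converter.py' in the repository; the sketch below only illustrates how one Pascal VOC-style RDD2022 annotation is typically mapped to YOLO label lines. The class mapping and function name here are illustrative assumptions, not the script's actual code.

    import xml.etree.ElementTree as ET

    # Assumed mapping of the four retained damage labels to YOLO class indices
    CLASS_MAP = {"D00": 0, "D10": 1, "D20": 2, "D40": 3}

    def voc_to_yolo(xml_path):
        """Convert one Pascal VOC annotation file to YOLO label lines."""
        root = ET.parse(xml_path).getroot()
        w = float(root.find("size/width").text)
        h = float(root.find("size/height").text)
        lines = []
        for obj in root.findall("object"):
            name = obj.find("name").text
            if name not in CLASS_MAP:   # skip labels removed by the filtering step
                continue
            box = obj.find("bndbox")
            xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
            xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
            # YOLO format: class x_center y_center width height, all normalized to [0, 1]
            xc, yc = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
            bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
            lines.append(f"{CLASS_MAP[name]} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}")
        return lines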

Model Training and Inference

To train the YOLOv8 model on the RDD2022 dataset, use the 'train.py' script. For inference, use the 'predict.py' script with the best saved model.

You can download the pretrained models from the links below:

  1. Training the model: The results will be saved in the './runs/detect/train/' directory.
    python train.py
  2. Running Inference:
    python predict.py
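The repository's 'train.py' and 'predict.py' are not reproduced here; the minimal sketch below shows how this workflow usually looks with the Ultralytics API. The dataset YAML, base weights, and hyperparameters are placeholder assumptions rather than the project's actual settings.

    from ultralytics import YOLO

    # Training: fine-tune a pretrained YOLOv8 checkpoint on the converted dataset.
    # "rdd2022.yaml" is a placeholder name for the dataset config from the data prep step.
    model = YOLO("yolov8n.pt")
    model.train(data="rdd2022.yaml", epochs=100, imgsz=640)

    # Inference: load the best checkpoint saved under ./runs/detect/train/weights/
    best = YOLO("runs/detect/train/weights/best.pt")
    results = best("sample_road_image.jpg")   # placeholder image path
    print(results[0].boxes.xyxy)              # detected boxes in pixel coordinates
    annotated = results[0].plot()             # numpy image with boxes and labels drawn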

Docker

The Dockerfile is located in the root of the repository. Its main purpose is to run the Streamlit app in a container. The Python file for the Streamlit app is located at ./streamlit/app.py, and the Docker configuration files, including the Python requirements file (requirements.txt), are located in the ./docker/ directory.

  1. Building a Docker image:
    docker build -t roadsense:tag_id .

  2. Checking Docker images:
    docker images

  3. Running the Docker container:
    docker run -p 8501:8501 roadsense:tag_id
    
    

Docker Hub repository: king138786/roadsense
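The actual app lives at ./streamlit/app.py and is not reproduced here; the sketch below only illustrates the general shape of a Streamlit front end for the trained detector. The checkpoint path and UI text are assumptions.

    import streamlit as st
    from PIL import Image
    from ultralytics import YOLO

    st.title("RoadSense: Real-time Road Damage Detection")

    # Placeholder checkpoint path; the real app may load a different model file.
    model = YOLO("runs/detect/train/weights/best.pt")

    uploaded = st.file_uploader("Upload a road image", type=["jpg", "jpeg", "png"])
    if uploaded is not None:
        image = Image.open(uploaded)
        results = model(image)
        annotated = results[0].plot()              # BGR numpy array with boxes drawn
        st.image(annotated[..., ::-1], caption="Detected road damage")  # BGR -> RGB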

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

Authors

Project Link: RoadSense (https://github.com/hyoon1/RoadSense)

(back to top)

Acknowledgments

