FaceSeg

Note

(Currently in progress)

The next version of the CLIP-DINO-SAM combination will come out soon! 📆

Tip

📄 Paper with a detailed explanation of how the CLIP, DINO and SAM models are combined: PDF

:octocat: GitHub repository with a detailed workflow for labelling data with CLIP-DINO-SAM for YOLO: Github

👀 Example Output

Here are example predictions from a YOLO model segmenting parts of the face after being trained on a dataset auto-labeled with CLIP-DINO-SAM.

📚 Basic Concepts

The CLIP-DINO-SAM combination is a heavyweight pipeline that runs fairly slowly and needs a large amount of GPU memory. To save you time waiting for results (and me time writing this tutorial), the walkthrough below covers only two images. For the most curious, a complete pipeline for training on a custom face dataset is also included. Enjoy 🎉
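The rough flow of the auto-labelling pipeline is sketched below. This is a minimal, self-contained illustration of the data flow only: detect_boxes and segment_box are dummy stand-ins for the real DINO and SAM stages, FACE_PARTS is just an example prompt list, and the CLIP re-ranking step is omitted.

import numpy as np

# Illustration only: detect_boxes and segment_box are dummy stand-ins for the
# real Grounding-DINO and SAM stages; FACE_PARTS is an example prompt list.
FACE_PARTS = ["eyes", "nose", "mouth", "eyebrows", "hair"]

def detect_boxes(image, prompts):
    # DINO stage (stub): one fake box (x1, y1, x2, y2) per text prompt
    h, w = image.shape[:2]
    return {p: (0, 0, w // 2, h // 2) for p in prompts}

def segment_box(image, box):
    # SAM stage (stub): a binary mask covering the box
    mask = np.zeros(image.shape[:2], dtype=bool)
    x1, y1, x2, y2 = box
    mask[y1:y2, x1:x2] = True
    return mask

def auto_label(image):
    boxes = detect_boxes(image, FACE_PARTS)   # text prompt -> box (DINO)
    return {part: segment_box(image, box)     # box -> pixel mask (SAM)
            for part, box in boxes.items()}   # CLIP label verification omitted

masks = auto_label(np.zeros((256, 256, 3), dtype=np.uint8))
print({part: int(mask.sum()) for part, mask in masks.items()})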

💿 Installation

Clone the repo

git clone https://github.com/Mikzarjr/Face-Segmentation

Install requirements

pip install -r FaceSeg/requirements.txt

or

pip install -e .
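To quickly check that the package installed correctly, you can try importing the segmentation class used later in this README (this assumes the package is importable as FaceSegmentation):

python -c "from FaceSegmentation.Pipeline.Segmentation import FaceSeg; print('FaceSeg import OK')"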


📑 Walkthrough

Segmentation with CLIP-DINO-SAM only 🎨

Import dependencies

from FaceSegmentation.Pipeline.Config import *
from FaceSegmentation.Pipeline.Segmentation import FaceSeg

Choose an image to test the framework

Sample images are located in FaceSeg/TestImages

image_path = f"{IMGS_DIR}/img1.jpeg"

Run the following cell to get the segmentation masks

The main segmentation mask is saved to /segmentation/combined_masks

All separate per-part masks are saved to /segmentation/split_masks

S = FaceSeg(image_path)
S.Segment()
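Optionally, take a quick look at the result. The directory layout in the glob pattern below (a segmentation/<image name>/combined_masks folder under the current working directory) is an assumption, so adjust it to whatever the run actually produces:

import glob
from PIL import Image

# Grab the first file found in the combined_masks output folder (the path
# layout is an assumption; adjust the pattern if your output lands elsewhere).
candidates = glob.glob("segmentation/*/combined_masks/*")
if candidates:
    Image.open(candidates[0]).show()
else:
    print("No combined mask found - check the segmentation output directory")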

Annotations for training YOLO 📝

Create COCO.json annotations

from FaceSegmentation.Pipeline.Annotator import CreateJson
image_path = "/content/segmentation/img1/img1.jpg"
A = CreateJson(image_path)
A.CreateJsonAnnotation()
A.CheckJson()

The output, named COCO.json, will be saved in COCO_DIR
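A quick sanity check on the generated file, assuming it follows the standard COCO layout with "images", "annotations" and "categories" keys:

import json
from FaceSegmentation.Pipeline.Config import *  # provides COCO_DIR

# Load the generated annotation file and report its contents
with open(f"{COCO_DIR}/COCO.json") as f:
    coco = json.load(f)

print("images:     ", len(coco["images"]))
print("annotations:", len(coco["annotations"]))
print("categories: ", [c["name"] for c in coco["categories"]])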

Convert COCO.json annotations to YOLOv8 txt annotations

from FaceSegmentation.Pipeline.Converter import ConvertCtY
json_path = f"{COCO_DIR}/COCO.json"
C = ConvertCtY(json_path)
C.Convert()

The converted YOLO txt annotations will be saved in YOLO_DIR
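From here, training a YOLOv8 segmentation model is standard Ultralytics usage. The dataset config face_parts.yaml below is a hypothetical name: you still need to arrange the images and the converted labels into the usual YOLO images/ and labels/ train/val splits and point the yaml at them.

from ultralytics import YOLO

# Train a small YOLOv8 segmentation model on the auto-labeled dataset.
# "face_parts.yaml" is a placeholder; write your own dataset yaml pointing at
# the images and the YOLO-format labels produced above.
model = YOLO("yolov8n-seg.pt")
model.train(data="face_parts.yaml", epochs=100, imgsz=640)
metrics = model.val()  # evaluate on the validation split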