Face detection now uses the BlazeFace model instead of Haar Cascade, reducing inference time by roughly 10x and detecting frontal and profile faces more accurately.
Fixed a minor import issue.
Please upgrade to the latest version if you already have Face Library installed.
pip install face-library
pip install face-library -U
from face_lib import face_lib
FL = face_lib()
The model is built on OpenCV, so it expects cv2 input (i.e. a BGR image); PIL (RGB) input will be supported in the next version. A snippet that converts a PIL image into a cv2-style image is provided at the end.
import cv2
img = cv2.imread(path_to_image)
faces = FL.get_faces(img) #returns a list of RGB face images
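For example, since the returned crops are RGB while cv2.imwrite expects BGR, a minimal sketch for saving each detected face could look like this (it assumes each crop is a NumPy array, which is not stated explicitly above):
for i, face in enumerate(faces): #faces comes from FL.get_faces(img) above
    face_bgr = cv2.cvtColor(face, cv2.COLOR_RGB2BGR) #convert the RGB crop back to BGR before saving with OpenCV
    cv2.imwrite("face_" + str(i) + ".jpg", face_bgr)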
If you want the face locations (coordinates) instead of the cropped faces, you can use
no_of_faces, faces_coors = FL.faces_locations(face_img)
You can change the maximum number of faces that can be detected as follows
no_of_faces, faces_coors = FL.faces_locations(face_img, max_no_faces = 10) #the default max_no_faces is 2
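As an illustration, the returned coordinates can be drawn onto the original image. The sketch below assumes each entry in faces_coors is an (x, y, width, height) box; verify the exact format against your installed version:
import cv2
img = cv2.imread(path_to_image)
no_of_faces, faces_coors = FL.faces_locations(img)
for (x, y, w, h) in faces_coors: #assumed (x, y, width, height) box format
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2) #draw a green rectangle around each detected face
cv2.imwrite("detected_faces.jpg", img)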
If needed, you can change the face detection thresholds (score threshold, IoU threshold) using the following function
FL.set_detection_params(scoreThreshold=0.82, iouThreshold=0.24) # default parameters are scoreThreshold=0.7, iouThreshold=0.3
The verification process is composed of two models: a face detection model that detects faces in the image, and a verification model that verifies those faces.
img_to_verify = cv2.imread(path_to_image_to_verify) #image containing the face you want to verify
gt_img = cv2.imread(path_to_image_to_compare) #image of the face to compare with
face_exist, no_faces_detected = FL.recognition_pipeline(img_to_verify, gt_img)
You can change the verification threshold to whatever works best for your use case or dataset like this:
face_exist, no_faces_detected = FL.recognition_pipeline(img_to_verify, gt_img, threshold = 1.1) #the default threshold is 0.92
Also, if you know that gt_img contains only one face and the image is zoomed in on that face (at least 65%-75% of the image is the face), you can save computing time and make the pipeline faster by using
face_exist, no_faces_detected = FL.recognition_pipeline(img_to_verify, gt_img, only_face_gt = True)
Note: if you need to change the detection parameters before running the recognition pipeline, you can call the set_detection_params function as described in the Face detection section.
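Putting these pieces together, here is a minimal sketch of a full verification call. The file names are placeholders, and treating face_exist as a boolean and no_faces_detected as a count is an assumption based on their names:
import cv2
from face_lib import face_lib

FL = face_lib()
FL.set_detection_params(scoreThreshold=0.8, iouThreshold=0.3) #optional: tune detection before verification

img_to_verify = cv2.imread("query.jpg")     #placeholder path: image that may contain the target face
gt_img = cv2.imread("reference.jpg")        #placeholder path: known face to compare against

face_exist, no_faces_detected = FL.recognition_pipeline(img_to_verify, gt_img)
if face_exist:
    print("Face verified (" + str(no_faces_detected) + " face(s) detected in the query image)")
else:
    print("No matching face found")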
If you want to represent a face as an embedding vector from a face-only image, you can use
face_embeddings = FL.face_embeddings(face_only_image)
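As an illustration only, two such embeddings could be compared with a Euclidean distance; this is a hypothetical comparison and not necessarily the metric recognition_pipeline uses internally, and face_only_image_a / face_only_image_b are placeholder face-only cv2 crops:
import numpy as np

emb_a = FL.face_embeddings(face_only_image_a) #placeholder: face-only crop of the first person
emb_b = FL.face_embeddings(face_only_image_b) #placeholder: face-only crop of the second person

distance = np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b)) #smaller distance = more similar faces
print("Embedding distance:", distance)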
import cv2
import numpy
from PIL import Image
PIL_img = Image.open(path_to_image)
cv2_img = cv2.cvtColor(numpy.array(PIL_img), cv2.COLOR_RGB2BGR) #now this can be used as input for face_lib functions
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
There are many ways to support a project - starring⭐️ the GitHub repo is just one.
Face Library is licensed under the MIT License.