Hi, thanks for your PyTorch implementation of the center face model.
I played with your code for a while, and I may have found some problems.

1. LFW dataset: it seems that you train and test on the whole pictures instead of using face-detection bounding boxes or center-cropped images. If you take a look at some pictures in the dataset, you'll see that this way too much background is involved.
2. There is overlap between the training set and the test set:
```python
for klass, name in enumerate(names):
    def add_class(image):
        image_path = os.path.join(images_root, name, image)
        return (image_path, klass, name)

    images_of_person = os.listdir(os.path.join(images_root, name))
    total = len(images_of_person)
    # Each identity's images are split across BOTH sets,
    # so every person in the validation set was also seen in training.
    training_set += map(
        add_class,
        images_of_person[:ceil(total * train_val_split)])
    validation_set += map(
        add_class,
        images_of_person[floor(total * train_val_split):])
```
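If you want to keep a random split rather than the official protocol files, one minimal fix is to split by identity instead of within each identity, so no person appears in both sets. A sketch (the helper name `split_by_identity` and the `train_val_split` ratio are illustrative, not from the repo):

```python
from math import ceil

def split_by_identity(names, train_val_split=0.9):
    """Assign whole identities to one set or the other, so that
    no person contributes images to both training and validation."""
    cut = ceil(len(names) * train_val_split)
    return names[:cut], names[cut:]
```

The per-image loop can then run over each list separately, adding every image of a training identity to the training set and every image of a validation identity to the validation set.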
I think you should train on the pairs listed in pairsDevTrain.txt and test on those in pairsDevTest.txt, as described in the LFW paper.