
Synthetic-Voice-Detection-Vocoder-Artifacts

LibriSeVoc Dataset

  1. We are the first to identify neural vocoders as a source of features for exposing synthetic human voices. The figure in the repository shows the differences between each of the six vocoders' outputs and the original audio.

  2. We provide LibriSeVoc, a dataset of self-vocoding samples created with six state-of-the-art vocoders, to highlight and exploit vocoder artifacts. The composition of the dataset is summarized in a table in the repository. The ground truth of our dataset comes from LibriTTS, so we follow the LibriTTS naming convention: in 27_123349_000006_000000.wav, 27 is the reader's ID and 123349 is the chapter's ID (a small parsing sketch follows this list).
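To make the naming convention concrete, here is a small helper that splits a LibriSeVoc file name into its reader and chapter IDs. It is not part of the repository; the function name is ours, and only the file-name structure is taken from the example above.

```python
# Hypothetical helper illustrating the LibriTTS-style naming used by LibriSeVoc.
from pathlib import Path

def parse_libritts_name(path: str) -> dict:
    # "27_123349_000006_000000.wav" -> reader ID "27", chapter ID "123349"
    reader_id, chapter_id, *_ = Path(path).stem.split("_")
    return {"reader_id": reader_id, "chapter_id": chapter_id}

print(parse_libritts_name("27_123349_000006_000000.wav"))
# {'reader_id': '27', 'chapter_id': '123349'}
```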

Deepfake Detection

We propose a new approach to detecting synthetic human voices by exposing the signal artifacts left by neural vocoders. We modify and improve the RawNet2 baseline by adding a multi-loss objective, lowering the error rate from 6.10% to 4.54% on the ASVspoof dataset. The framework of the proposed synthesized-voice detection method is shown in the repository figure; a hedged sketch of the multi-loss idea follows.
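As a rough illustration of what a multi-loss objective on top of a RawNet2-style encoder could look like, here is a minimal PyTorch sketch. It assumes the multi-loss combines a binary real/fake cross-entropy with an auxiliary vocoder-identification cross-entropy; the layer sizes, class counts, loss weight, and all names are our own assumptions, not the repository's implementation.

```python
# Minimal sketch of a multi-loss head; see the paper for the actual method.
import torch
import torch.nn as nn

class MultiLossHead(nn.Module):
    def __init__(self, embed_dim: int = 1024, num_vocoders: int = 6, alpha: float = 0.5):
        super().__init__()
        self.binary_head = nn.Linear(embed_dim, 2)                  # real vs. synthetic
        self.vocoder_head = nn.Linear(embed_dim, num_vocoders + 1)  # 6 vocoders + real
        self.alpha = alpha                                          # auxiliary-loss weight
        self.ce = nn.CrossEntropyLoss()

    def forward(self, embedding, binary_label, vocoder_label):
        binary_logits = self.binary_head(embedding)
        vocoder_logits = self.vocoder_head(embedding)
        loss = (self.ce(binary_logits, binary_label)
                + self.alpha * self.ce(vocoder_logits, vocoder_label))
        return loss, binary_logits

# Example with random tensors:
head = MultiLossHead()
emb = torch.randn(4, 1024)
loss, logits = head(emb, torch.randint(0, 2, (4,)), torch.randint(0, 7, (4,)))
```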

Paper & Dataset

For more details, please read our paper: https://openaccess.thecvf.com/content/CVPR2023W/WMF/html/Sun_AI-Synthesized_Voice_Detection_Using_Neural_Vocoder_Artifacts_CVPRW_2023_paper.html

To download our dataset: https://drive.google.com/file/d/1NXF9w0YxzVjIAwGm_9Ku7wfLHVbsT7aG/view

To train the model, run:

python main.py --data_path /your/path/to/LibriSeVoc/ --model_save_path /your/path/to/models/

To test on your own sample, run:

python eval.py --input_path /your/path/to/sample.wav --model_path /your/path/to/your_model.pth
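If you want to score a whole directory of samples rather than a single file, a small wrapper can invoke eval.py once per .wav using the exact command-line flags shown above. This wrapper is not part of the repository, and the sample-directory path is a placeholder:

```python
# Convenience wrapper (not part of the repository): runs eval.py per .wav file.
import subprocess
from pathlib import Path

model_path = "/your/path/to/your_model.pth"
for wav in sorted(Path("/your/path/to/samples/").glob("*.wav")):
    subprocess.run(
        ["python", "eval.py", "--input_path", str(wav), "--model_path", model_path],
        check=True,
    )
```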

Weights of the trained model:

https://drive.google.com/file/d/1TWdsCFKP2luAfhpB91N9X4z1gsJMvvhI/view?usp=drive_link

In-the-wild testing:

Test on our lab's DeepFake-o-meter: https://zinc.cse.buffalo.edu/ubmdfl/deep-o-meter/landing_page