DeepSafe is a Streamlit-based web application for DeepFake detection, offering an easy-to-use interface for analyzing images and videos. Users can add their own deepfake detection models and compare them with existing models out of the box.
Live demo here (limited access; for full access, please contact me).
- Features
- Usage
- Installation
- Additional Sections
- WebApp
- Future Work
- Contributing
- Acknowledgments
- License
- Contact Information
✨ Multi-model Support: Users can select from multiple DeepFake detection models for both images and videos.
📁 File Upload: Supports uploading images (jpg, png, jpeg) and videos (mp4, mov).
🌐 URL Input: Allows users to input URLs for image or video analysis.
⚙️ Processing Unit Selection: Option to use GPU for supported models (default is CPU).
📊 Result Visualization (a minimal Streamlit sketch follows this list):
- Displays DeepFake detection stats in a table format.
- Provides downloadable CSV of detection results.
- Visualizes results with bar charts for DeepFake probability and inference time.
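For illustration, here is a minimal, hypothetical sketch of how such results could be rendered in Streamlit. The DataFrame columns (`model`, `fake_probability`, `inference_time`) and the sample values are assumptions made for the example, not DeepSafe's exact output schema.

```python
# Hypothetical sketch: displaying detection results in Streamlit.
# Column names and values are illustrative assumptions, not the app's real schema.
import pandas as pd
import streamlit as st

results = pd.DataFrame({
    "model": ["MesoNet", "CNNDetection"],
    "fake_probability": [0.82, 0.35],   # per-model DeepFake probability
    "inference_time": [0.41, 1.27],     # seconds per input
})

st.table(results)                                               # stats in table format
st.download_button("Download CSV",
                   results.to_csv(index=False),
                   file_name="detection_results.csv")           # downloadable CSV
st.bar_chart(results.set_index("model")["fake_probability"])    # DeepFake probability chart
st.bar_chart(results.set_index("model")["inference_time"])      # inference time chart
```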
- Select the "Detector" option from the sidebar.
- Upload an image/video or provide a URL.
- Choose the DeepFake detection model(s) you want to use.
- Optionally select GPU processing if available.
- Click "Real or Fake? 🤔" to start the analysis.
- View the results in the displayed charts and tables.
- Examples: View sample DeepFakes by selecting "Examples" from the sidebar.
- About: Learn about the detectors used in the app and their original authors.
- Learn: Access educational resources about DeepFakes.
- Clone the repository:

  ```bash
  git clone https://github.com/siddharthksah/DeepSafe
  cd DeepSafe
  ```
- Create a conda environment:

  ```bash
  conda create -n deepsafe python==3.8 -y
  conda activate deepsafe
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Download the model weights from Google Drive:

  ```python
  from pydrive.auth import GoogleAuth
  from pydrive.drive import GoogleDrive
  import os

  # Authenticate and create the PyDrive client.
  gauth = GoogleAuth()
  gauth.LocalWebserverAuth()
  drive = GoogleDrive(gauth)

  # Specify the folder ID (the part after 'folders/' in the URL).
  folder_id = '1UmMTuXPmu-eYfskbrGgZ1uNXceznPQ6o'

  # Create the 'models' directory if it doesn't exist.
  if not os.path.exists('models'):
      os.makedirs('models')

  # List all files in the folder.
  file_list = drive.ListFile({'q': f"'{folder_id}' in parents and trashed=false"}).GetList()

  # Download each file to the 'models' directory.
  for file in file_list:
      file.GetContentFile(os.path.join('models', file['title']))

  print("Download complete.")
  ```
- Start the application:

  ```bash
  streamlit run main.py
  ```
DeepSafe includes a powerful benchmarking feature that allows users to benchmark their datasets against selected deepfake detection models. The results include accuracy, precision, and recall metrics, along with detailed visualizations.
- Select Dataset Type: Choose between Image or Video datasets.
- Choose Dataset: Select an available dataset for benchmarking.
- Model Selection: Pick the deepfake detection models you want to benchmark your dataset against.
- Start Benchmarking: Click the "Benchmark Dataset" button to initiate the benchmarking process.
The benchmarking results are displayed in detailed bar charts for DeepFake probability and inference time, and a downloadable CSV of the detection results is provided.
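For reference, the sketch below shows one way the reported metrics could be reproduced offline from the downloadable results CSV. It is a minimal example: the column names (`label`, `fake_probability`) and the 0.5 decision threshold are assumptions for illustration, not DeepSafe's actual export format.

```python
# Hypothetical sketch: recomputing accuracy, precision, and recall from a results CSV.
# Column names and the 0.5 threshold are assumptions, not the app's real schema.
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score

results = pd.read_csv("benchmark_results.csv")                 # hypothetical CSV exported from the app
y_true = (results["label"] == "fake").astype(int)              # ground truth from the real/fake folders
y_pred = (results["fake_probability"] >= 0.5).astype(int)      # threshold the predicted probability

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
```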
- Dataset Structure: Ensure your dataset follows this folder structure:

  ```
  datasets/
  └── image/ (or video/)
      └── your_dataset_name/
          ├── real/
          │   ├── image1.jpg
          │   └── ...
          └── fake/
              ├── image1.jpg
              └── ...
  ```
- Config File: Create a `.config` file within your dataset folder to provide metadata about your dataset. Here's an example of a `.config` file:

  ```ini
  [Dataset]
  name = Your Dataset Name
  description = A brief description of your dataset.
  source = URL or source information.
  ```
- Upload Dataset: Place your dataset in the appropriate folder (`datasets/image/` or `datasets/video/`).
- Run Benchmark: Follow the steps in the benchmarking section to benchmark your custom dataset (a minimal layout-validation sketch follows this list).
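The sketch below is a rough aid for checking that a custom dataset matches the layout and `.config` format described above before running a benchmark. The dataset path is a placeholder, and DeepSafe itself may perform its own checks differently.

```python
# Hypothetical sketch: sanity-check a custom dataset's layout and .config metadata.
# "your_dataset_name" is a placeholder; adjust the path to your dataset.
import configparser
import os

dataset_dir = os.path.join("datasets", "image", "your_dataset_name")

# The benchmark expects ground-truth classes as 'real' and 'fake' subfolders.
for split in ("real", "fake"):
    split_dir = os.path.join(dataset_dir, split)
    if not os.path.isdir(split_dir):
        raise FileNotFoundError(f"Missing folder: {split_dir}")
    print(f"{split}: {len(os.listdir(split_dir))} files")

# Read the [Dataset] metadata from the .config file.
config = configparser.ConfigParser()
config.read(os.path.join(dataset_dir, ".config"))
print("Dataset name:", config.get("Dataset", "name"))
print("Description :", config.get("Dataset", "description"))
print("Source      :", config.get("Dataset", "source"))
```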
DeepSafe acts as a platform into which newer detection models can be incorporated.
Any kind of enhancement or contribution is welcome; you can contribute comments, questions, resources, and apps as issues or pull requests against the source code.
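The exact integration convention for new models is not described here, but conceptually a detector is a small wrapper that takes a media file and returns a DeepFake probability. The sketch below only illustrates that idea; the function signature, folder layout, and placeholder score are assumptions, not DeepSafe's actual interface.

```python
# Hypothetical wrapper for a new detector (illustrative only; not DeepSafe's real API).
# Assumed convention: each model exposes a predict() entry point that returns the
# probability of the input being a DeepFake.
import random


def predict(media_path: str, use_gpu: bool = False) -> float:
    """Return a DeepFake probability in [0, 1] for the given image or video.

    Replace the body with real preprocessing and model inference; the random
    score here is just a stand-in so the sketch runs end to end.
    """
    _ = (media_path, use_gpu)  # a real detector would load the file and run the model
    return random.random()


if __name__ == "__main__":
    print(f"DeepFake probability: {predict('example.jpg'):.2f}")
```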
| Methods | Repositories | Release Date |
| --- | --- | --- |
| MesoNet | https://github.com/DariusAf/MesoNet | 2018.09 |
| FWA | https://github.com/danmohaha/CVPRW2019_Face_Artifacts | 2018.11 |
| VA | https://github.com/FalkoMatern/Exploiting-Visual-Artifacts | 2019.01 |
| Xception | https://github.com/ondyari/FaceForensics | 2019.01 |
| ClassNSeg | https://github.com/nii-yamagishilab/ClassNSeg | 2019.06 |
| Capsule | https://github.com/nii-yamagishilab/Capsule-Forensics-v2 | 2019.1 |
| CNNDetection | https://github.com/peterwang512/CNNDetection | 2019.12 |
| DSP-FWA | https://github.com/danmohaha/DSP-FWA | 2019.11 |
| Upconv | https://github.com/cc-hpc-itwm/UpConv | 2020.03 |
| WM | https://github.com/cuihaoleo/kaggle-dfdc | 2020.07 |
| Selim | https://github.com/selimsef/dfdc_deepfake_challenge | 2020.07 |
| Photoshop FAL | https://peterwang512.github.io/FALdetector/ | 2019 |
| FaceForensics | https://github.com/ondyari/FaceForensics | 2018.03 |
| CViT | https://github.com/erprogs/CViT | 2021 |
| Boken | https://github.com/beibuwandeluori/DeeperForensicsChallengeSolution | 2020 |
| GANimageDetection | GANimageDetection | |
This project is licensed under a dual license:
- Open Source License (MIT License): For personal and non-commercial use.
- Commercial License: For any commercial use, please contact Siddharth to obtain a commercial license.
For questions, please contact me at siddharth123sk[@]gmail.com
Made with ❤️ by Siddharth