This is a web application that uses an Anime VAE-GAN model to reconstruct images. The VAE-GAN encodes an anime image into a latent representation and decodes it back to pixels, so you can compare each uploaded image with its reconstruction.
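Conceptually, reconstruction means pushing an image through the encoder to get a latent code and through the decoder to map that code back to pixel space. The toy sketch below illustrates that flow only; the class name, layer sizes, and 64x64 resolution are assumptions for illustration, not the actual model used by this app (the GAN discriminator, which is only needed for training, is omitted):

```python
import torch
from torch import nn

class TinyVAE(nn.Module):
    """Illustrative only: a minimal VAE-style encoder/decoder, not this project's model."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Sigmoid())

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)           # encode image -> latent distribution
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        recon = self.decoder(z)                                # decode latent -> reconstructed image
        return recon.view(-1, 3, 64, 64)

x = torch.rand(1, 3, 64, 64)   # dummy 64x64 RGB image
print(TinyVAE()(x).shape)      # torch.Size([1, 3, 64, 64])
```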
- Clone this repository to your local machine:
```bash
git clone https://github.com/utkarsh-iitbhu/anime-vae-gan-webapp.git
cd anime-vae-gan-webapp
```
- Install the required Python packages. It is recommended to create a virtual environment before installing the dependencies:
```bash
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
pip install -r requirements.txt
```
- Download the pre-trained VAE-GAN model weights: before running the application, download the weights and save them as `vae.pth` in the root directory of the project. You can get the pre-trained model from here.
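As a quick sanity check that the weights file is in place, you can try loading it with PyTorch. This only verifies that the file deserializes; the checkpoint's exact contents depend on how the repository saved it:

```python
import torch

# Assumes vae.pth is a PyTorch checkpoint saved in the project root.
checkpoint = torch.load("vae.pth", map_location="cpu")

# A typical checkpoint is a state_dict mapping parameter names to tensors.
if isinstance(checkpoint, dict):
    print(f"{len(checkpoint)} entries, e.g. {list(checkpoint)[:3]}")
else:
    print(type(checkpoint))
```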
To run the web application, use the following command:
```bash
uvicorn main:app --host 0.0.0.0 --port 8000
```
This will start the FastAPI server, and you can access the web application at http://localhost:8000 in your web browser.
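For orientation, here is a minimal, self-contained sketch of the general shape such a FastAPI app can take. It is not the repository's actual `main.py`: the route paths, form field name, and the echo-back response are assumptions, and the real app runs the upload through the VAE-GAN rather than returning it unchanged:

```python
# Hypothetical sketch only; the repository's main.py may differ.
import io

from fastapi import FastAPI, File, UploadFile
from fastapi.responses import HTMLResponse, Response
from PIL import Image

app = FastAPI()

@app.get("/", response_class=HTMLResponse)
def home():
    # Home page with the "Choose File" / "Upload" form described below.
    return """
    <form action="/upload" method="post" enctype="multipart/form-data">
      <input type="file" name="file"><button type="submit">Upload</button>
    </form>
    """

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    # In the real app the image would be encoded and decoded by the VAE-GAN;
    # here we simply echo the upload back as PNG to keep the sketch self-contained.
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    return Response(content=buf.getvalue(), media_type="image/png")
```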
- Access the home page of the web application at http://localhost:8000.
- Click on the "Choose File" button to upload an anime image for reconstruction.
- Click the "Upload" button to submit the image.
- Wait for the image to be processed and view the original and reconstructed images side by side.
- Please note that the model is trained on anime images and may not work as expected with other images.
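If you prefer to script the upload rather than use the browser form described above, a client along these lines should work against a locally running server. The `/upload` path, the `file` field name, and the assumption that the response body is the reconstructed image are guesses; adjust them to match the app's actual endpoint:

```python
import requests

# Hypothetical client: endpoint path and field name may differ in the real app.
url = "http://localhost:8000/upload"
with open("my_anime_image.png", "rb") as f:  # placeholder filename
    resp = requests.post(url, files={"file": f})
resp.raise_for_status()

# Assuming the server returns the reconstructed image bytes directly.
with open("reconstructed.png", "wb") as out:
    out.write(resp.content)
print("Saved reconstructed.png")
```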