Neural-Style-Transfer (NST)

Neural Style Transfer is a technique for creating a new image (known as a pastiche) from two input images: one representing the content and the other representing the artistic style.

This repository contains a lightweight PyTorch implementation of artistic style transfer as described in the seminal paper by Gatys et al. To keep the implementation fast and lightweight, it uses a pre-trained VGG19 model as a fixed feature extractor rather than training a network from scratch.

🔗 Check out this article I wrote on the same topic.

Table of Contents

  1. Overview
  2. How does it work?
  3. Getting Started
    1. File Description
    2. Dependencies
    3. Usage
  4. Output
  5. Acknowledgements
  6. License
  7. Star History

Overview

Neural style transfer is a technique that takes two images, a content image and a style reference image, and blends them together so that the output image looks like the content image “painted” in the style of the style reference image.

How does it work?

  1. We take content and style images as input and pre-process them.

  2. Next, we load VGG19, a pre-trained convolutional neural network (CNN).

    1. Starting from the network's input layer, the first few layer activations represent low-level features like colors and textures. As we step through the network, the final few layers represent higher-level features, like eyes.
    2. In this case, we use conv1_1, conv2_1, conv3_1, conv4_1, and conv5_1 for style representation, and conv4_2 for content representation.
  3. We begin by cloning the content image and then iteratively restyling it. We frame this as an optimization problem in which we minimize the weighted sum of three losses (see the sketch after this list):

    1. content loss, which is the L2 distance between the feature representations of the content image and the generated image,
    2. style loss, which is the sum of L2 distances between the Gram matrices of the feature representations of the generated image and the style image, extracted from different layers of VGG19,
    3. total variation loss, which encourages spatial continuity between neighboring pixels of the generated image, thereby denoising it and giving it visual coherence.
  4. Finally, we compute the gradients and optimize with the L-BFGS algorithm to get the desired output.
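
The following is a minimal PyTorch sketch of that pipeline. It is illustrative rather than the repository's exact code: the layer indices match the conv1_1–conv5_1 and conv4_2 choices above, but the image paths, loss weights, and iteration count are placeholder assumptions.

    import torch
    import torch.nn.functional as F
    from PIL import Image
    from torchvision import models, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Pre-trained VGG19, frozen and used only as a feature extractor.
    # (Newer torchvision versions use weights=models.VGG19_Weights.IMAGENET1K_V1.)
    vgg = models.vgg19(pretrained=True).features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1, conv2_1, conv3_1, conv4_1, conv5_1
    CONTENT_LAYER = 21                 # conv4_2

    def load_image(path, size=512):
        """Pre-process an image into a normalized tensor (ImageNet statistics)."""
        tf = transforms.Compose([
            transforms.Resize(size),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

    def extract_features(x):
        """Collect activations at the chosen style and content layers."""
        feats = {}
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in STYLE_LAYERS or i == CONTENT_LAYER:
                feats[i] = x
        return feats

    def gram_matrix(f):
        """Normalized channel-correlation matrix of a (1, C, H, W) feature map."""
        _, c, h, w = f.shape
        f = f.view(c, h * w)
        return (f @ f.t()) / (c * h * w)

    def total_variation(x):
        """Penalize differences between neighboring pixels (spatial smoothness)."""
        return ((x[..., :, 1:] - x[..., :, :-1]).abs().mean()
                + (x[..., 1:, :] - x[..., :-1, :]).abs().mean())

    # Placeholder file names -- substitute your own images.
    content_img = load_image("data/content-images/content.jpg")
    style_img = load_image("data/style-images/style.jpg")

    content_feats = extract_features(content_img)
    style_grams = {i: gram_matrix(f) for i, f in extract_features(style_img).items()
                   if i in STYLE_LAYERS}

    # Start from a clone of the content image and optimize its pixels directly.
    generated = content_img.clone().requires_grad_(True)
    optimizer = torch.optim.LBFGS([generated])

    def closure():
        optimizer.zero_grad()
        feats = extract_features(generated)
        content_loss = F.mse_loss(feats[CONTENT_LAYER], content_feats[CONTENT_LAYER])
        style_loss = sum(F.mse_loss(gram_matrix(feats[i]), style_grams[i])
                         for i in STYLE_LAYERS)
        # The loss weights below are illustrative; tune them per image pair.
        loss = content_loss + 1e4 * style_loss + 1e0 * total_variation(generated)
        loss.backward()
        return loss

    for _ in range(50):  # each L-BFGS step runs several inner iterations
        optimizer.step(closure)

Saving the result would additionally require de-normalizing and clamping the tensor to [0, 1]; that bookkeeping is omitted here.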

Getting Started

File Description

Neural-Style-Transfer
    ├── data
    │   ├── content-images
    │   ├── style-images
    │   └── output-images      <-- generated images go here
    ├── models/definitions
    │   └── vgg19.py           <-- VGG19 model definition
    ├── NST.py                 <-- the main Python file
    ├── LICENSE
    └── README.md

Dependencies

  • Python 3.9+
  • Framework: PyTorch
  • Libraries: numpy, opencv-python (cv2), matplotlib, torchvision (plus os from the standard library)

Usage

    $ pip install -r requirements.txt

To run Neural Style Transfer on images of your own:

  1. Clone the repository and move into the downloaded folder:

    $ git clone https://github.com/nazianafis/Neural-Style-Transfer
    $ cd Neural-Style-Transfer

  2. Move your content/style image(s) to their respective folders inside the data folder.

  3. Go to NST.py, and in it, set the PATH variable to your downloaded folder. Also set the CONTENT_IMAGE and STYLE_IMAGE variables to your desired images (see the example after this list):

    PATH = <your_path>
    CONTENT_IMAGE = <your_content_image_name>
    STYLE_IMAGE = <your_style_image_name>

  4. Run NST.py:

    $ python NST.py

  5. Find your generated image in the output-images folder inside data.
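
For instance, with placeholder names (substitute your own path and file names), the edited variables in NST.py might look like:

    PATH = '/home/user/Neural-Style-Transfer'   # hypothetical download location
    CONTENT_IMAGE = 'my-photo.jpg'              # a file in data/content-images
    STYLE_IMAGE = 'starry-night.jpg'            # a file in data/style-images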

Output

The following images were generated using no image-manipulation programs other than the code described in this article.

(sample content, style, and generated images)

Acknowledgements

These are some of the resources I referred to while working on this project. You might want to check them out.

License

License: MIT

Star History

(Star History chart)
