KGallyamov/NIC-project

Using genetic algorithms for tuning autoencoder architecture

Links

Problem statement / Introduction

Project description

Autoencoders are attracting growing interest as their applications expand. However, building these models is not as simple as it may seem: a solid understanding of the model's workflow, clean training data, and a good architecture are the minimum requirements for acceptable performance. We address one of these challenges by using genetic algorithms for architecture tuning. As a result, we built a pipeline that improves the architecture of an autoencoder for cat face generation on the Cats faces dataset.

Article link

If you are interested in our research, you can read our findings here.

Installation

Python requirements

First of all, you should have Python installed (version 3.9 or above). Please make sure it is installed before moving to the next step.

Directory structure

Second, you need to configure the directories. Create the following directories:

  • data - all datasets (we worked with these)
  • checkpoints - all model checkpoints
  • models - all final models
  • samples - image generation results of the models

If you are working on Linux, here is a shell script that creates all the directories:

```shell
mkdir checkpoints data models samples
```

Important: all the directories above should be at the same level as src and test!
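If you are not on Linux, the same setup can be done with a short cross-platform Python snippet (this helper is not shipped with the project; run it from the repository root, next to src and test):

```python
import os

# Create the four directories the project expects; exist_ok avoids
# errors if some of them are already present.
for d in ("checkpoints", "data", "models", "samples"):
    os.makedirs(d, exist_ok=True)
```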

Libraries

Finally, install the required libraries:

```shell
pip install -r requirements.txt
```

Run project

Once everything above is done, you can run main.py from the console or any IDE you use:

```shell
python main.py
```

Methodology

The goal of optimizing neural network architecture is to determine the most effective parameters, layers, and internal structure of the network to maximize its performance.

Representation

To achieve this, we first need to select a suitable representation. We can focus on defining only the encoder part of the network, since the decoder is typically the same structure reversed. We define a representation as an activation function plus a sequence of layers in the format `layerType_featuresIn_featuresOut_kernelSize`. However, we must apply certain restrictions, such as ensuring that the number of features decreases towards the end of the encoder and that convolutions come before fully connected layers. Additionally, the activation function should be consistent throughout the network.
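The encoding and its restrictions can be sketched as follows (a minimal illustration; function names and the exact validation rules are assumptions, not taken from the repository):

```python
def parse_layer(spec):
    """Parse a layer string like 'conv_64_32_3' into its components."""
    layer_type, feats_in, feats_out, kernel = spec.split("_")
    return layer_type, int(feats_in), int(feats_out), int(kernel)

def is_valid_encoder(genome):
    """Check the restrictions: feature counts do not grow toward the end
    of the encoder, layers chain together, and no convolution appears
    after a fully connected layer."""
    activation, layers = genome
    seen_linear = False
    prev_out = None
    for spec in layers:
        layer_type, feats_in, feats_out, _ = parse_layer(spec)
        if layer_type == "linear":
            seen_linear = True
        elif layer_type == "conv" and seen_linear:
            return False  # convolution after a fully connected layer
        if prev_out is not None and feats_in != prev_out:
            return False  # consecutive layers must chain together
        if feats_out > feats_in:
            return False  # features must not grow in the encoder
        prev_out = feats_out
    return True

genome = ("relu", ["conv_64_32_3", "conv_32_16_3", "linear_16_8_0"])
print(is_valid_encoder(genome))  # True
```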

Mutation and crossover

Next, we need to define mutation and crossover operations. Mutation can be achieved by slightly altering the number of features in some layers and adding or deleting layers. Crossover involves exchanging portions of the networks while maintaining the mentioned restrictions.
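A sketch of what these two operators could look like on the layer-string representation (the function names, step sizes, and repair strategy are illustrative assumptions, not the repository's implementation):

```python
import random

def mutate(layers, rng):
    """Slightly alter one layer's output features, updating the next
    layer's input features so the chain stays consistent."""
    parts = [spec.split("_") for spec in layers]
    i = rng.randrange(len(parts) - 1)  # never change the final output size
    new_out = max(1, int(parts[i][2]) + rng.choice([-4, -2, 2, 4]))
    parts[i][2] = str(new_out)
    parts[i + 1][1] = str(new_out)
    return ["_".join(p) for p in parts]

def crossover(parent_a, parent_b, rng):
    """Exchange layer-sequence tails at random cut points. Offspring that
    break the encoder restrictions would be repaired or discarded."""
    cut_a = rng.randrange(1, len(parent_a))
    cut_b = rng.randrange(1, len(parent_b))
    return parent_a[:cut_a] + parent_b[cut_b:]

rng = random.Random(0)
parent = ["conv_64_32_3", "conv_32_16_3", "linear_16_8_0"]
print(mutate(parent, rng))
```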

Fitness function

Finally, we must define a fitness function. For our problem, we decided that the fitness function is simply the validation loss, so we aim to minimize it.
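In other words, selection just picks the candidate with the lowest validation loss. A toy sketch (here `evaluate` stands in for actually training the candidate and measuring its validation loss):

```python
def select_best(population, evaluate):
    """Return the individual whose validation loss (fitness) is lowest."""
    return min(population, key=evaluate)

# Toy usage with precomputed losses instead of real training runs:
losses = {"arch_a": 0.42, "arch_b": 0.31, "arch_c": 0.57}
best = select_best(losses, losses.get)
print(best)  # arch_b
```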

Credits
