
PEFT technique comparison throughout the training process. It implements Parallel Adapters, Prefix-Tuning, and LoRA.


Comparison of learning efficiency using PEFT techniques


This project compares the effectiveness of Parameter-Efficient Fine-Tuning (PEFT) techniques across various NLP tasks. PEFT methods can democratize access to high-performance LLMs by making their fine-tuning more resource-efficient and accessible for diverse applications. The project implements three PEFT methods: Prefix-Tuning, Parallel Adapters, and Low-Rank Adaptation (LoRA). Experiments cover emotion classification, topic classification, multiple-choice question answering, and distractor generation; the research aims to identify the most effective PEFT technique for a given number of operations.

Requirements

Tested Python version: 3.9

Tested CUDA version: 12.3

Installation

It's recommended to first create a virtual environment for the project:

python -m venv peft-env

Install the dependencies:

pip install -r requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html

Create a .env file and set your Weights & Biases key. A fictional example is provided below:

WANDB_API_KEY=5B5FCFA35882C50972C4CF6AA89DBFCA19608A28
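The key is read from the environment at run time. As a minimal sketch, assuming the .env file is loaded with python-dotenv before W&B is initialized (the project's actual loading code may differ):

import os

from dotenv import load_dotenv  # requires the python-dotenv package
import wandb

load_dotenv()  # copies WANDB_API_KEY from .env into the process environment
wandb.login(key=os.environ["WANDB_API_KEY"])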

Usage

The main.py file is the entry point of the project; obtain help with:

python main.py -h

Some examples of usage are the following:

python main.py -d=commonsense_qa -e=4 -ee=100 FT

It executes a normal fine-tuning experiment on the CommonsenseQA dataset for 4 epochs with evaluation every 100 steps.

python main.py -d=race -e=1 -ee=100 LoRA -r=8 -a=8

It executes a LoRA experiment on the RACE dataset with a rank of 8 and alpha of 8. It runs for 1 epoch with evaluation every 100 steps.
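Conceptually, -r is the rank of the low-rank update and -a is the scaling factor alpha, so the adapted layer computes W x + (alpha / r) * B A x on top of a frozen weight W. A minimal illustrative sketch of such a layer (a hypothetical LoRALinear, not the project's actual implementation):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank factors are trained
        # A projects down to rank r, B projects back up; B starts at zero
        # so training begins from the unmodified base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)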

python main.py -d=ag_news -e=1 -ee=100 prefix -nt=100

It executes a Prefix-Tuning experiment on the AG News dataset with a prefix length of 100. It runs for 1 epoch with evaluation every 100 steps.
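Conceptually, -nt is the number of trainable prefix vectors prepended to the input. Full Prefix-Tuning injects trainable key/value prefixes into every attention layer; the simplified sketch below (hypothetical names) just prepends them to the input embeddings to illustrate the idea:

import torch
import torch.nn as nn

class Prefix(nn.Module):
    """Trainable prefix vectors prepended to the input embeddings (sketch)."""

    def __init__(self, num_tokens: int = 100, hidden_size: int = 512):
        super().__init__()
        # num_tokens corresponds to the -nt flag (the prefix length).
        self.prefix = nn.Parameter(torch.randn(num_tokens, hidden_size) * 0.01)

    def forward(self, input_embeds):  # input_embeds: (batch, seq, hidden)
        batch_size = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)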

python main.py -m=large -d=commonsense_qa --debug=DEBUG -e=4 -ee=100 adapters -rf=8

It executes a Parallel Adapters experiment on the CommonsenseQA dataset with a reduction factor of 8. It runs for 4 epochs with evaluation every 100 steps. It uses the large version of the T5 model and sets the logging level to DEBUG.
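Conceptually, -rf is the reduction factor: the adapter's bottleneck width is the model's hidden size divided by this value, and the adapter runs in parallel with a frozen sublayer. A minimal illustrative sketch (hypothetical names, not the project's actual implementation):

import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Bottleneck adapter added in parallel to a frozen sublayer (sketch)."""

    def __init__(self, hidden_size: int = 512, reduction_factor: int = 8):
        super().__init__()
        bottleneck = hidden_size // reduction_factor  # -rf shrinks this width
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x, sublayer_output):
        # The adapter reads the sublayer's input, and its output is summed
        # with the sublayer's output, leaving the base weights untouched.
        return sublayer_output + self.up(self.act(self.down(x)))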

Linting

pylint ./

Disable reporting to W&B

You may disable Weights & Biases reporting (the run will be treated as a dummy run) by creating a .env file and setting:

WANDB_MODE=disabled

Architecture

The project is structured as follows:

  • assets: Contains images and other assets used for explaining the project.
  • downloads: Created when the program is executed; contains the downloaded models and datasets.
  • plotting: Contains scripts to plot the results of the experiments.
  • results: Experiment results are stored here; ours are kept in the repository for reference.
  • src: Contains the source code of the project.

The source code is structured as follows:

  • dataset: Contains the dataset base class and the implementations of the datasets used in the experiments.
  • models: Contains the code for downloading the model and applying the PEFT techniques.
  • utils: Contains the training functions and other auxiliary functions.

A diagram of the program flow is shown below:

[Diagram of the program flow]
