dl4nlp

Project for the Deep Learning for Natural Language Processing course (second-year AI master's @ UvA).

This repository contains research on the ability of Natural Language Generation (NLG) models to capture dialogue reasoning in multi-turn conversations, using the MuTual dataset. The goal is to examine how well state-of-the-art NLG models perform on MuTual.

Content

This repository consists of the following key scripts and folders:

  • evaluate_nlg.py: the core script of the project; it evaluates the NLG models on the MuTual dataset.
  • utils_nlg.py: utilities script copied from the original MuTual GitHub repository, with some functionality removed or altered for the NLG models.
  • cluster_data.py: script for further analysis of the models; it breaks down results by dialogue length and by TF-IDF scores (a minimal sketch of this kind of bucketing is shown after this list).
  • data: folder containing all the data; a direct copy from the original MuTual GitHub repository.
  • experiment_outputs: folder containing all the results from our experiment runs.
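
As a rough illustration of the cluster_data.py analysis mentioned above, the sketch below buckets dialogues by length and by mean TF-IDF score. It is a minimal, hypothetical example (scikit-learn vectorizer, invented thresholds and function name) and does not mirror the actual implementation:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical sketch: bucket dialogues by length (token count) and by mean TF-IDF score.
# 'dialogues' is a list of multi-turn dialogue strings, e.g. the MuTual dialogue contexts.
def bucket_dialogues(dialogues, length_threshold=120, tfidf_threshold=0.1):
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(dialogues)            # sparse matrix: n_dialogues x vocabulary
    mean_scores = np.asarray(tfidf.mean(axis=1)).ravel()   # mean TF-IDF score per dialogue
    buckets = []
    for text, score in zip(dialogues, mean_scores):
        length_bucket = "long" if len(text.split()) > length_threshold else "short"
        tfidf_bucket = "high_tfidf" if score > tfidf_threshold else "low_tfidf"
        buckets.append((length_bucket, tfidf_bucket))
    return buckets

Per-bucket results can then show whether a model degrades on longer or lexically denser dialogues.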

The accompanying short report for this project can be found in this repository as Research_Report.pdf.

Prerequisites

  • Anaconda or Miniconda, used to create the conda environment in the steps below.

Getting Started

  1. Open the Anaconda prompt and clone this repository (or download and unpack the zip):
git clone https://github.com/AndrewHarrison/dl4nlp.git
  2. Create the environment:
conda env create -f environments/environment.yml

Or use the Lisa environment when running on the SURFsara Lisa cluster:

conda env create -f environments/environment_lisa.yml
  3. Activate the environment:
conda activate dl4nlp
  4. Run the NLG evaluation script for GPT2:
python evaluate_nlg.py

Or provide the name of another model:

python evaluate_nlg.py --model MODEL

Dataset

All data is available in the data folder of this repository. It is a direct copy from the original MuTual GitHub repository.
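
For quick inspection outside of evaluate_nlg.py, the snippet below is a minimal loading sketch. It assumes the original MuTual layout (one JSON-formatted .txt file per example, with the dialogue context, candidate options and answer label as fields); check the data folder for the exact structure:

import json
from pathlib import Path

# Minimal sketch, assuming each example is a JSON-formatted .txt file under data/mutual/<split>/.
def load_mutual_split(data_dir="data/mutual", split="dev"):
    examples = []
    for path in sorted(Path(data_dir, split).glob("*.txt")):
        with open(path, encoding="utf-8") as f:
            examples.append(json.load(f))
    return examples

dev_examples = load_mutual_split()
print(len(dev_examples), "dev examples loaded")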

Arguments

The NLG models can be evaluated with the following command line arguments:

usage: evaluate_nlg.py [-h] [--model MODEL] [--batch_size BATCH_SIZE] [--data_dir DATA_DIR] [--output_dir OUTPUT_DIR]
                       [--learning_method LEARNING_METHOD]

optional arguments:
  -h, --help                           Show help message and exit.
  --model MODEL                        Which model to use. Options: ['gpt2', 'bart', 'gpt_neo', 'dialog_gpt', 'xlnet', 'blenderbot']. Default is 'gpt2'.
  --batch_size BATCH_SIZE              Batch size to use during training. Default is 8.
  --data_dir DATA_DIR                  Directory where the data is stored. Default is 'data/mutual'.
  --output_dir OUTPUT_DIR              Directory where the evaluation results are stored as csv files. Default is 'experiment_outputs/'.
  --learning_method LEARNING_METHOD    Learning method to use. Options: ['zero_shot', '1_shot', '10_shot', '1_epoch', '5_epoch', '10_epoch']. Default is 'zero_shot'.
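
For example, to evaluate BART with a smaller batch size and the 1_epoch learning method (an illustrative combination of the flags documented above):

python evaluate_nlg.py --model bart --batch_size 4 --learning_method 1_epoch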

Authors

Acknowledgements

  • Data and experimental code have been copied and adapted from the original MuTual GitHub repository.
