Prepare dataset

The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine sentence- or sentence-pair language understanding tasks for evaluating and analyzing natural language understanding systems.

Before running any of these GLUE tasks, you should download the GLUE data by running this script and unpack it to some directory data_dir.
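As a rough sketch, the download step typically looks like the command below. The script name download_glue_data.py, the --data_dir flag, and the --tasks flag are assumptions based on the commonly used GLUE download script and may differ from the script linked above:

python download_glue_data.py --data_dir path/to/data_dir --tasks all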

Prepare environment

pip install -r requirements.txt

Pruning

Pruning currently supports basic magnitude pruning for DistilBERT and gradient sensitivity pruning for BERT-base:

  • Enable magnitude pruning example:
bash run_pruning.sh --topology=distilbert_SST-2 --data_dir=path/to/dataset --output_model=path/to/output_model --config=path/to/conf.yaml
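  • Enable gradient sensitivity pruning example (a sketch following the same pattern; the topology name bert_base_SST-2 is an assumption and must match a topology accepted by run_pruning.sh):
bash run_pruning.sh --topology=bert_base_SST-2 --data_dir=path/to/dataset --output_model=path/to/output_model --config=path/to/conf.yaml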