🤗 Hugging Face Models | 📄 Paper | 📜 Blog | 💬 Demo
Minitron is a family of small language models (SLMs) obtained via pruning and knowledge distillation. We prune the model embedding size, attention heads, and MLP intermediate dimension, and then perform continued training with distillation to arrive at the final models.
- 🔥🔥🔥 SOTA 8B model via pruning and distillation with only 400B tokens! See our technical report and blog post: Mistral-NeMo-Minitron 8B Foundation Model Delivers Unparalleled Accuracy.
- The best Llama-3.1 4B model is out! New blog post on Llama-3.1-Minitron-4B models: How to Prune and Distill Llama-3.1 8B to an NVIDIA Llama-3.1-Minitron 4B Model.
Minitron accuracy (MMLU) vs. other baseline models. Compression reduces the cost of training additional models by up to 40x while producing better results. Please refer to our paper for the full set of results.
Deriving the Minitron 8B and 4B models from the base Nemotron-4 15B model using our approach requires up to 40x fewer training tokens per model compared to training from scratch; this results in compute cost savings of 1.8x for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our arXiv paper for more details.
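To make the width-pruning step described above concrete, here is a toy PyTorch sketch that ranks the intermediate neurons of one MLP block by mean activation magnitude on calibration data and slices the weight matrices accordingly. This is only an illustration of the general technique with hypothetical helper names, not the actual Minitron implementation, which also prunes attention heads and embedding channels.

```python
# Toy sketch of width pruning for a single transformer MLP block (illustrative only,
# not the Minitron code). Assumes a standard two-layer MLP: up_proj -> act -> down_proj.
import torch
import torch.nn as nn

def prune_mlp_intermediate(up_proj: nn.Linear, down_proj: nn.Linear,
                           calib_acts: torch.Tensor, keep: int):
    """Keep the `keep` intermediate neurons with the largest mean activation
    magnitude over calibration activations of shape [tokens, d_ff]."""
    importance = calib_acts.abs().mean(dim=0)            # one importance score per neuron
    keep_idx = importance.topk(keep).indices.sort().values

    new_up = nn.Linear(up_proj.in_features, keep, bias=up_proj.bias is not None)
    new_down = nn.Linear(keep, down_proj.out_features, bias=down_proj.bias is not None)

    with torch.no_grad():
        new_up.weight.copy_(up_proj.weight[keep_idx])        # keep rows of up_proj
        if up_proj.bias is not None:
            new_up.bias.copy_(up_proj.bias[keep_idx])
        new_down.weight.copy_(down_proj.weight[:, keep_idx]) # keep columns of down_proj
        if down_proj.bias is not None:
            new_down.bias.copy_(down_proj.bias)
    return new_up, new_down
```

In the actual pipeline, the pruned model is then retrained with distillation from the original model as the teacher, which is what recovers most of the accuracy.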
Please see:
- Mistral-NeMo-Minitron-8B-Base / Instruct.
- Llama-3.1-Minitron-4B-Width-Base.
- Llama-3.1-Minitron-4B-Depth-Base.
- Minitron-8B-Base.
- Minitron-4B-Base / Instruct.
Please refer to the instructions in the respective model cards above.
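For a quick local test, here is a minimal sketch of loading one of the Hugging Face checkpoints with the `transformers` library. The model name, dtype, prompt, and generation settings below are illustrative, and depending on your `transformers` version you may need `trust_remote_code=True`; please follow the respective model card for the recommended usage.

```python
# Minimal sketch: load a Minitron checkpoint from Hugging Face and generate text.
# Settings are illustrative; see the model card for recommended usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The benefits of model pruning are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```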
Quantized Versions: The 🤗 Hugging Face community has already created FP8 quantized versions of Minitron models. Give them a try here: Minitron-8B-Base-FP8 and Minitron-4B-Base-FP8.
The following steps provide an example of how to load the Minitron-8B model in the `.nemo` checkpoint format. You can download the corresponding `.nemo` checkpoints here: Minitron-8B-Base and Minitron-4B-Base.
- Export TensorRT-LLM checkpoint.

First launch the NeMo container `nvcr.io/nvidia/nemo:24.05` with the `.nemo` model checkpoint and the TensorRT-Model-Optimizer folder mounted:

```bash
git clone https://github.com/NVIDIA/TensorRT-Model-Optimizer.git

docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --init -it \
  -v <TensorRT-Model-Optimizer_directory>:/workspace/TensorRT-Model-Optimizer \
  -v <minitron_model_directory>:/workspace/minitron \
  --rm nvcr.io/nvidia/nemo:24.05 bash
```
Inside the container, run the following commands to export the TensorRT-LLM checkpoint:
```bash
export GPT_MODEL_FILE=<minitron_nemo_file_directory>
pip install "nvidia-modelopt[torch]" -U
cd TensorRT-Model-Optimizer/llm_ptq/
scripts/nemo_example.sh --type gptnext --model $GPT_MODEL_FILE --quant bf16 --tp 1 --task "build"
```
You will see something like the following, which means the TensorRT-LLM checkpoint has been exported successfully:

```
Model config exported to: <TensorRT-LLM_checkpoint_directory>. Total time used ** s.
```
- Export TensorRT engine.
Use docker to build and run TensorRT-LLM following these instructions:
```bash
# TensorRT-LLM uses git-lfs, which needs to be installed in advance.
apt-get update && apt-get -y install git git-lfs
git lfs install

git clone https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
git submodule update --init --recursive
git lfs pull

make -C docker release_build
```
Now copy the exported TensorRT-LLM checkpoint to the TensorRT-LLM directory and launch the Docker container:
```bash
cp -r <TensorRT-LLM_checkpoint_directory> <TensorRT-LLM_directory>
cd <TensorRT-LLM_directory>
make -C docker release_run
```
Inside the Docker container, build the TensorRT engine:
```bash
trtllm-build --checkpoint_dir /code/tensorrt_llm/<TensorRT-LLM_directory> \
  --gpt_attention_plugin bfloat16 \
  --gemm_plugin bfloat16 \
  --output_dir <trt_engine_directory>
```
Run inference with the built TensorRT engine to summarize articles from the cnn_dailymail dataset:
```bash
python3 examples/summarize.py --test_trt_llm --no_add_special_tokens \
  --engine_dir <trt_engine_directory> \
  --vocab_file <TensorRT-LLM_checkpoint_directory>/tokenizer.model
```
LMFlow is a complete pipeline for fine-tuning large language models. The following steps provide an example of how to fine-tune the `Minitron-8B-Base` model using LMFlow with the `alpaca` dataset.
- Install LMFlow

```bash
git clone https://github.com/OptimalScale/LMFlow.git
cd LMFlow
bash install.sh
```
- Prepare the dataset

Download the alpaca dataset and preprocess it using the following command:

```bash
cd data && ./download.sh alpaca && cd -
```
- Fine-tune the model

Fine-tune the Minitron-8B model on the alpaca dataset using the following command:

```bash
bash ./scripts/run_finetune.sh \
  --model_name_or_path nvidia/Minitron-8B-Base \
  --dataset_path data/alpaca/train_conversation \
  --output_model_path output_models/finetuned_minitron
```
With LMFlow, you can also fine-tune the model on your own custom dataset. The only thing you need to do is transform your dataset into the LMFlow data format. In addition to full fine-tuning, you can also fine-tune Minitron efficiently with LoRA, LISA, Flash Attention, and other acceleration techniques.
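As a rough illustration, here is a minimal sketch of preparing a custom dataset file. The `text_only` schema and the file path below are assumptions based on LMFlow's documented dataset format; please verify the exact field names against the LMFlow documentation before use.

```python
# Hypothetical example: write a tiny custom dataset in LMFlow's JSON dataset format.
# The "text_only" type, field names, and path are assumptions; check the LMFlow docs.
import json
import os

dataset = {
    "type": "text_only",
    "instances": [
        {"text": "Instruction: Summarize the article below.\nArticle: ...\nSummary: ..."},
        {"text": "Instruction: Translate to French.\nInput: Hello!\nOutput: Bonjour !"},
    ],
}

os.makedirs("data/my_custom_dataset", exist_ok=True)
with open("data/my_custom_dataset/train.json", "w") as f:
    json.dump(dataset, f, ensure_ascii=False, indent=2)
```

You would then point `--dataset_path` at the directory containing this file, analogous to the alpaca example above.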
Minitron models are released under the NVIDIA Open Model License Agreement.
We would like to thank Ameya Sunil Mahabaleshwarkar, Hayley Ross, Brandon Rowlett, Oluwatobi Olabiyi, Ao Tang, and Yoshi Suhara for help with producing the instruction-tuned versions of MINITRON; additionally, James Shen for TRT-LLM support, and Sanjeev Satheesh, Oleksii Kuchaiev, Shengyang Sun, Jiaqi Zeng, Zhilin Wang, Yi Dong, Zihan Liu, Rajarshi Roy, Wei Ping, and Makesh Narsimhan Sreedhar for help with datasets; Ao Tang for HF support. We’d also like to gratefully acknowledge the insightful discussion and feedback from Chenhan Yu and Daniel Korzekwa.
If you find our work helpful, please consider citing our paper:
```bibtex
@article{minitron2024,
  title={Compact Language Models via Pruning and Knowledge Distillation},
  author={Saurav Muralidharan and Sharath Turuvekere Sreenivas and Raviraj Joshi and Marcin Chochowski and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro and Jan Kautz and Pavlo Molchanov},
  journal={arXiv preprint arXiv:2407.14679},
  year={2024},
  url={https://arxiv.org/abs/2407.14679},
}
```