Instruction Tuning

This module will guide you through instruction tuning language models. Instruction tuning adapts pre-trained models to specific tasks by further training them on task-specific datasets, improving their performance on those targeted tasks.

In this module, we will explore two topics: 1) Chat Templates and 2) Supervised Fine-Tuning.

1️⃣ Chat Templates

Chat templates structure interactions between users and AI models, ensuring consistent and contextually appropriate responses. They include components like system prompts and role-based messages. For more detailed information, refer to the Chat Templates section.
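For a concrete sense of what a chat template produces, here is a minimal sketch using the `apply_chat_template` method from `transformers`; the `HuggingFaceTB/SmolLM2-135M-Instruct` checkpoint is an assumed example model, and the messages are illustrative.

```python
# A minimal sketch, assuming the HuggingFaceTB/SmolLM2-135M-Instruct
# checkpoint is available on the Hub or cached locally.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

# Role-based messages: a system prompt followed by a user turn.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is instruction tuning?"},
]

# apply_chat_template renders the messages with the model's own template
# (chatml-style special tokens for SmolLM2) and appends the generation prompt.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

Rendering the template with `tokenize=False` is a handy way to inspect exactly what string the model will see before you start training or generating.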

2️⃣ Supervised Fine-Tuning

Supervised Fine-Tuning (SFT) is a critical process for adapting pre-trained language models to specific tasks. It involves training the model on a task-specific dataset with labeled examples. For a detailed guide on SFT, including key steps and best practices, see the Supervised Fine-Tuning page.
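As a rough orientation before the exercises, here is a minimal SFT sketch using `SFTTrainer` from `trl`; the `HuggingFaceTB/SmolLM2-135M` base model, the `everyday-conversations` config of `HuggingFaceTB/smoltalk`, and all hyperparameters are illustrative assumptions, not tuned recommendations.

```python
# A minimal SFT sketch with trl's SFTTrainer. Model choice, dataset config,
# and hyperparameters here are assumptions for demonstration only.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# smoltalk stores conversations as role-based "messages", which recent
# versions of SFTTrainer can format via the tokenizer's chat template.
dataset = load_dataset(
    "HuggingFaceTB/smoltalk", "everyday-conversations", split="train"
)

training_args = SFTConfig(
    output_dir="./smollm2-sft",
    max_steps=100,                    # short run for demonstration
    per_device_train_batch_size=4,
    learning_rate=5e-5,
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # SFTTrainer accepts a model id string
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```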

Exercise Notebooks

| Title | Description | Exercise | Link | Colab |
|-------|-------------|----------|------|-------|
| Chat Templates | Learn how to use chat templates with SmolLM2 and process datasets into chatml format | 🐢 Convert the `HuggingFaceTB/smoltalk` dataset into chatml format <br> 🐕 Convert the `openai/gsm8k` dataset into chatml format | Notebook | Open In Colab |
| Supervised Fine-Tuning | Learn how to fine-tune SmolLM2 using the SFTTrainer | 🐢 Use the `HuggingFaceTB/smoltalk` dataset <br> 🐕 Try out the `bigcode/the-stack-smol` dataset <br> 🦁 Select a dataset for a real world use case | Notebook | Open In Colab |
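As a starting point for the chat-template exercises above, here is a hedged sketch of converting `openai/gsm8k` rows into chatml-style messages; the `question` and `answer` column names match gsm8k's `main` config, and the `to_chatml` helper is a hypothetical name introduced for illustration.

```python
# A sketch of converting a question/answer dataset into chatml-style
# role-based messages; "to_chatml" is an illustrative helper name.
from datasets import load_dataset

ds = load_dataset("openai/gsm8k", "main", split="train")

def to_chatml(example):
    # Map each question/answer pair onto a user turn and an assistant turn.
    return {
        "messages": [
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["answer"]},
        ]
    }

ds = ds.map(to_chatml, remove_columns=ds.column_names)
print(ds[0]["messages"][0]["role"])  # -> "user"
```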
