
# GPTeacher

A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer.

The Code-Instruct dataset is still being cleaned and will be uploaded once it is ready.

Each dataset is split into five separate versions based on similarity-scored cleaning: one with simple deduplication only, plus sets cleaned at similarity thresholds ranging from <60% to <90%.

All datasets are compliant with Alpaca's dataset format, i.e. each entry has an instruction, an input, and an output field, which should make it easy to reuse the same fine-tuning scripts and process as Alpaca.
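As a minimal sketch of what an Alpaca-format entry looks like (the example text below is illustrative, not taken from the actual data):

```python
import json

# One record in the Alpaca dataset format: an instruction, an optional
# input providing context, and the expected output.
# The strings here are made up for illustration.
record = {
    "instruction": "Summarize the following passage in one sentence.",
    "input": "GPTeacher is a collection of modular instruct datasets generated by GPT-4.",
    "output": "GPTeacher bundles several GPT-4-generated instruction-tuning datasets.",
}

# Alpaca-style fine-tuning scripts typically expect exactly these keys;
# "input" may be an empty string for tasks that need no extra context.
assert set(record) == {"instruction", "input", "output"}
print(json.dumps(record, indent=2))
```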