
Alpaca:
LLaMA 7B finetuned on an instruction dataset of 52K examples. The dataset is built with the self-instruct method: starting from the 175 human-written seed tasks of the self-instruct seed set, new instruction-following examples are generated with OpenAI's text-davinci-003. The demo's outputs are also watermarked so that text produced by the model can be detected.
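A minimal sketch of this style of data generation, assuming the legacy openai Python SDK (<1.0, which exposed text-davinci-003 via the Completion endpoint); the seed tasks and prompt wording here are illustrative, not the actual Alpaca template:

```python
# Self-instruct-style generation: show the model a few seed tasks and ask it
# to produce a new instruction/output pair. Assumes openai<1.0 and an API key
# already configured; seed tasks and prompt text are made up for illustration.
import openai

seed_tasks = [
    "Instruction: Give three tips for staying healthy.\nOutput: ...",
    "Instruction: Rewrite the sentence in passive voice.\nInput: The cat ate the fish.\nOutput: ...",
]

prompt = (
    "Come up with a new, diverse task instruction.\n"
    "Here are some examples:\n\n"
    + "\n\n".join(seed_tasks)
    + "\n\nNow write a new instruction with its output:\n"
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=256,
    temperature=1.0,
)
print(response["choices"][0]["text"])
```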

GPT4All:
Trained LLaMA 7B on 437,605 prompt-generation pairs for four epochs using low-rank adaptation (LoRA); details are in the accompanying technical report.
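A minimal sketch of what low-rank adaptation does to a single linear layer: the pretrained weight is frozen and only a low-rank update B·A is trained. The rank r, scaling alpha, and layer size below are illustrative, not GPT4All's actual settings:

```python
# LoRA sketch: output = W x + (alpha/r) * B A x, with W frozen and only
# A, B trainable. B is zero-initialized so training starts from the base model.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096))
y = layer(torch.randn(1, 4096))
```

Only r · (in_features + out_features) parameters are trained per adapted layer, which is why a 7B model can be finetuned on modest hardware.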

RLHF:
Reinforcement learning from human feedback: a reward model is trained on human preference comparisons between model outputs, and the language model is then optimized against that reward (typically with PPO, plus a KL penalty toward a frozen reference model).
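A sketch of the per-sample reward signal that PPO maximizes in this setup (as in InstructGPT): the reward model's score minus a KL penalty keeping the policy near the reference model. The tensors would come from real models; beta and the shapes are illustrative:

```python
# RLHF reward sketch: r = RM(response) - beta * KL(policy || reference),
# with KL approximated from per-token log-probs of the sampled response.
import torch

def rlhf_reward(rm_score: torch.Tensor,      # reward model score per sequence
                logp_policy: torch.Tensor,   # per-token log-probs under the policy
                logp_ref: torch.Tensor,      # per-token log-probs under frozen reference
                beta: float = 0.1) -> torch.Tensor:
    kl = (logp_policy - logp_ref).sum(dim=-1)  # sequence-level KL estimate
    return rm_score - beta * kl

r = rlhf_reward(torch.tensor([1.2]), torch.randn(1, 10), torch.randn(1, 10))
```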

Ideas:
Prompts and memorising or retrieval transformers.
LSTM/Transformer hybrid variants, and storing the pre-prompt in the hidden state (see the sketch below).
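For a standard transformer, the closest analogue of "storing the pre-prompt in the hidden state" is caching its key/value activations once and reusing them for every query. A sketch with Hugging Face transformers; the model name and prompts are illustrative:

```python
# Encode a fixed pre-prompt once, keep its KV cache (past_key_values),
# and prepend it to each query without re-encoding the pre-prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

pre_prompt = "You are a helpful assistant.\n"
with torch.no_grad():
    pre_ids = tok(pre_prompt, return_tensors="pt").input_ids
    cache = model(pre_ids, use_cache=True).past_key_values  # encode once

    query_ids = tok("Q: What is LoRA?\nA:", return_tensors="pt").input_ids
    out = model(query_ids, past_key_values=cache, use_cache=True)  # reuse

next_token = out.logits[0, -1].argmax()
print(tok.decode(next_token))
```

An LSTM-style variant would go further, compressing the pre-prompt into a fixed-size recurrent state instead of a cache that grows with prompt length.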