Consider watching this video explaining how to prompt tune Open LLMs: https://www.youtube.com/watch?v=f32dc5M2Mn0
Run Llama 3 on a few example questions:

```shell
./scripts/prompt_tune.sh
```

You can view the results in `data/results/spot_check_results.jsonl`.
Quickly iterate on different prompts by editing the prompt code and re-running the spot check: see `lamini-examples/03_prompt_tuning/spot_check.py`, lines 93 to 109 (commit d01af0b).
For example, try changing "You are an expert analyst from Goldman Sachs with 15 years of experience." to "You are an influencer who loves emojis." and see what happens!
Try out many prompts quickly instead of thinking hard about the perfect prompt. Good prompt engineers can try about 100 different prompts in an hour. If you are spending more than 1 hour on prompt tuning, you should move on.
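The iteration loop above can be sketched as follows. The persona list and the `build_prompts` helper are illustrative, not part of the repo; in practice you would feed each prompt through the spot-check script and skim the outputs.

```python
# Sketch of rapid prompt iteration: pair each candidate persona with the
# same question so runs are directly comparable. Sending the prompts to
# the model is left out; plug in whatever call your pipeline uses.
CANDIDATE_PERSONAS = [
    "You are an expert analyst from Goldman Sachs with 15 years of experience.",
    "You are an influencer who loves emojis.",
    "You are a terse fact-checker who answers in one sentence.",
]

def build_prompts(question):
    # One prompt per persona, identical question appended to each.
    return [f"{persona}\n\n{question}" for persona in CANDIDATE_PERSONAS]

prompts = build_prompts("What were the key drivers of Q3 revenue?")
for p in prompts:
    print(p.splitlines()[0])  # show which persona each run uses
```

Keeping the question fixed while only the persona varies makes it easy to attribute differences in the outputs to the prompt change itself.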
This code adds the prompt template for Llama 3. Don't forget it! The model will perform much worse without the correct template, and every model has a different one. Look it up on the model card, e.g. the Llama 3 model card.
```python
async def add_template(self, prompts):
    async for prompt in prompts:
        # Wrap the raw prompt in the Llama 3 chat template special tokens.
        new_prompt = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"
        new_prompt += prompt.data.get_prompt() + "<|eot_id|>"
        new_prompt += "<|start_header_id|>assistant<|end_header_id|>"
        prompt.prompt = new_prompt  # assumed: the pipeline reads the final prompt here
        yield prompt  # pass the templated prompt on to the next pipeline stage
```
Plug relevant information from your relational database, knowledge graph, recommendation system, etc. into your prompt.
For example, if you are building a Q&A bot that answers questions about the document the user is viewing, pull the document title and summary from a database and insert them into the prompt.
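A minimal sketch of that pattern, using an in-memory SQLite database; the `documents` schema, document id, and `make_prompt` helper are made up for illustration:

```python
import sqlite3

def make_prompt(conn, doc_id, question):
    # Pull the title and summary of the document the user is viewing,
    # then splice them into the prompt ahead of the question.
    title, summary = conn.execute(
        "SELECT title, summary FROM documents WHERE id = ?", (doc_id,)
    ).fetchone()
    return (
        f"You are answering questions about the document '{title}'.\n"
        f"Summary: {summary}\n\n"
        f"Question: {question}"
    )

# Hypothetical data standing in for your real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, title TEXT, summary TEXT)")
conn.execute("INSERT INTO documents VALUES (1, 'Q3 Report', 'Revenue grew 12%.')")

prompt = make_prompt(conn, 1, "What drove revenue growth?")
print(prompt)
```

The same shape works for any context source: fetch, format, and prepend, so the model sees the retrieved facts before the question.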
Want to learn more? See our Prompt Engineering guide for even more details on prompt tuning.