diff --git a/README.md b/README.md
index 7b35e62..34d6068 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@
 Model • Data Release • Web Demo •
-Tool Eval •
+Tool Eval
 Paper • Citation
@@ -26,7 +26,7 @@
-
+
 🔨 This project (ToolLLM) aims to construct **open-source, large-scale, high-quality** instruction-tuning (SFT) data to facilitate building powerful LLMs with general **tool-use** capability. We aim to empower open-source LLMs to master thousands of diverse real-world APIs. To this end, we collect a high-quality instruction-tuning dataset, constructed automatically with the latest ChatGPT (gpt-3.5-turbo-16k), which has been upgraded with enhanced [function call](https://openai.com/blog/function-calling-and-other-api-updates) capabilities. We provide the dataset, the corresponding training and evaluation scripts, and ToolLLaMA, a capable model fine-tuned on ToolBench.
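
For readers unfamiliar with the function-call interface mentioned above, the sketch below shows roughly what such a request looks like with the OpenAI Python client (pre-1.0 interface). The `get_weather` API and its schema are made up for illustration; this is not ToolBench's actual data-construction code.

```python
# Minimal sketch of a ChatGPT function-call request (assumes openai<1.0,
# where ChatCompletion.create accepts a `functions` parameter).
# The API definition below is hypothetical, not one of ToolBench's real APIs.
import openai

# openai.api_key must be set via environment or assignment before calling.
functions = [
    {
        "name": "get_weather",  # hypothetical tool exposed to the model
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call the tool
)

# If the model chooses to call the tool, the arguments come back as JSON in
# message.function_call; a data-construction pipeline can execute the call
# and feed the result back as a follow-up message.
print(response["choices"][0]["message"])
```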