From 2624905bcf984910c4d6a42dc714c9c00daea8fa Mon Sep 17 00:00:00 2001
From: Yonatan Shelach <92271540+yonishelach@users.noreply.github.com>
Date: Wed, 9 Aug 2023 10:01:02 +0300
Subject: [PATCH] Update README.md
Replaced the former model `gpt2` with the one actually used in the demo, `falcon-7b`.
---
README.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 79698ef..89ea4fb 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
-This demo demonstrates how to fine tune a LLM and build an ML application: the **MLOps master bot**! We'll train [`gpt2-medium`](https://huggingface.co/gpt2) on [**Iguazio**'s MLOps blogs](https://www.iguazio.com/blog/) and cover how easy it is to take a model and code from development to production. Even if its a big scary LLM model, MLRun will take care of the dirty work!
+This demo demonstrates how to fine-tune an LLM and build an ML application: the **MLOps master bot**! We'll train [`falcon-7b`](https://huggingface.co/tiiuae/falcon-7b) on [**Iguazio**'s MLOps blogs](https://www.iguazio.com/blog/) and cover how easy it is to take a model and code from development to production. Even if it's a big, scary LLM, MLRun will take care of the dirty work!
We will use:
* [**HuggingFace**](https://huggingface.co/) - as the main machine learning framework to get the model and tokenizer.
@@ -11,7 +11,7 @@ We will use:
The demo contains a single [notebook](./tutorial.ipynb) that covers the two main stages in every MLOps project:
-* **Training Pipeline Automation** - Demonstrating how to get an existing model (`GPT2-Medium`) from HuggingFace's Transformers package and operationalize it through all of its life cycle phases: data collection, data ppreparation, training and evaluation, as a fully automated pipeline.
+* **Training Pipeline Automation** - Demonstrating how to get an existing model (`falcon-7b`) from HuggingFace's Transformers package and operationalize it through all of its life cycle phases: data collection, data preparation, training and evaluation, as a fully automated pipeline.
* **Application Serving Pipeline** - Showing how to productize the newly trained LLM as a serverless function.
You can find all the python source code under [/src](./src)
@@ -64,4 +64,4 @@ Your environment should include `MLRUN_ENV_FILE=
> Note: You can also use a remote MLRun service (over Kubernetes), instead of starting a local mlrun,
-> edit the [mlrun.env](./mlrun.env) and specify its address and credentials
\ No newline at end of file
+> edit the [mlrun.env](./mlrun.env) and specify its address and credentials
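The README text above points readers at HuggingFace's Transformers package for the model and tokenizer. As a minimal sketch of that step, assuming the `transformers` package is installed (note this helper is illustrative and not part of the demo's source, and that the `tiiuae/falcon-7b` weights are a multi-gigabyte download on first use):

```python
def load_falcon(model_id: str = "tiiuae/falcon-7b"):
    """Load the demo's model and tokenizer from the HuggingFace Hub.

    Illustrative helper only; the demo's own notebook may wire this up
    differently (e.g. through MLRun's serving functions).
    """
    # Imported lazily so merely defining the helper is cheap.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Both calls download and cache the artifacts on first invocation.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model
```

Fine-tuning on the Iguazio blog corpus would then proceed from the returned `model`/`tokenizer` pair, e.g. via the Transformers `Trainer` API, as orchestrated by the MLRun training pipeline described above.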