View the UI of the live app at VaultApp
View the code in Google Colab at Google Colab
System requirements:
- GPU: 8 GB VRAM
- RAM: 16 GB
Setup instructions:
- Make sure you have Python and Git installed on your system
- Clone the repository onto your local machine using Git
  git clone https://github.com/jonathanjthomas/GDG-RAG-Demo.git
- Navigate to the repository directory
- Set up a virtual environment using the command below
  python -m venv venv
- Activate your virtual environment using the command for your operating system
  - Windows
    venv\Scripts\Activate
  - Linux and macOS
    source venv/bin/activate
- Install all the required libraries and dependencies
  pip install -r requirements.txt
- Download and install Ollama
- Pull the required Ollama models: gemma2:2b (generation) and nomic-embed-text (embeddings); a sketch of how they fit together follows the setup steps
  ollama pull gemma2:2b
  ollama pull nomic-embed-text
- Run vault.py with Streamlit
  streamlit run code\vault.py
  or
  python -m streamlit run code\vault.py
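To see how these pieces fit together, here is a minimal sketch of a Streamlit RAG app that wires the two pulled models together through LangChain. It is only an illustration, not the repository's vault.py: the sample documents, prompt, Chroma vector store, and exact import paths are assumptions (depending on your LangChain version, these classes may live in the langchain_ollama package instead).

```python
# rag_sketch.py -- a minimal, illustrative sketch (NOT the repository's vault.py).
# Assumes langchain-community, chromadb, and streamlit are installed and that
# Ollama is running locally with gemma2:2b and nomic-embed-text pulled.
import streamlit as st
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma

# nomic-embed-text converts text into vectors; gemma2:2b generates the answer.
embeddings = OllamaEmbeddings(model="nomic-embed-text")
llm = ChatOllama(model="gemma2:2b")

# A tiny, hard-coded knowledge base kept in an in-memory Chroma vector store.
documents = [
    "Ollama runs large language models locally on your machine.",
    "Streamlit turns Python scripts into simple web apps.",
]
vector_store = Chroma.from_texts(documents, embedding=embeddings)

st.title("Local RAG sketch")
question = st.text_input("Ask a question about the knowledge base")

if question:
    # Retrieve the most relevant snippets, then let the model answer from them.
    hits = vector_store.similarity_search(question, k=2)
    context = "\n".join(doc.page_content for doc in hits)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    answer = llm.invoke(prompt)
    st.write(answer.content)
```

The retrieve-then-generate flow above is the core idea: the embedding model finds the relevant text, and the chat model answers using only that text.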
Troubleshooting:
- If you face any conflicts with existing dependencies, make sure you have activated your virtual environment
- If you run into an error showing
  httpx.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it
  the Ollama server is most likely not running; start it with the command below (a quick connectivity check is also sketched after these notes)
  ollama serve
- If you run into any other issues which have not been listed above, please feel free to reach out to us
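A quick way to confirm whether the Ollama server is reachable at all is a small probe like the one below. This helper is not part of the repository; it assumes Ollama's default local endpoint, http://localhost:11434 (adjust it if you have set OLLAMA_HOST).

```python
# check_ollama.py -- a small, hypothetical helper to verify the local Ollama
# server is up before launching the app.
import httpx

try:
    # A running Ollama instance answers its root endpoint with a short status message.
    response = httpx.get("http://localhost:11434", timeout=5)
    print(f"Ollama responded with HTTP {response.status_code}: {response.text.strip()}")
except httpx.ConnectError:
    print("Could not reach Ollama -- start it with `ollama serve` and try again.")
```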
Resources:
- Source code on Google Colab
- Gemma 2 - Local RAG with Ollama and LangChain
- How to Build a Local RAG Knowledge Base with Google Gemma 2 2B
Have any doubts? Feel free to reach out to us at:
- Aditya S ([email protected])
- Jonathan John Thomas ([email protected])