LLM Python Program

This project is a Python program for loading and querying LLMs. Queries can be issued either through a CLI or through an HTTP server.

Components

  • LocalLLM: A class, also runnable as a standalone script, that loads the selected LLM, processes a query, and returns the response. It keeps a conversation history per session; the history is cleared when the application is closed.
  • HTTP Server: The main application: a simple HTTP server with a custom handler. The handler takes the query and model name from each request, passes them to LocalLLM for processing, and returns the response to the client (see the sketch after this list).
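
To make the flow concrete, here is a minimal sketch of how the handler could hand a request off to LocalLLM, using Python's standard http.server. The handler class name, route, port, payload shape, and the LocalLLM method names are illustrative assumptions, not the project's actual API:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    from localLLM import LocalLLM  # module name taken from step 7 below

    llm = LocalLLM()  # constructor arguments, if any, are not shown here

    class QueryHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the JSON body carrying the model name and the query text.
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            # Hypothetical method name; check localLLM.py for the real API.
            answer = llm.query(payload["model"], payload["query"])
            body = json.dumps({"response": answer}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), QueryHandler).serve_forever()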

Prerequisites

  • Python: 3.9 or later
  • LLMs: LLaMA 2 (Llama-2-7b-chat-hf) and Mistral (Mistral-7B-Instruct-v0.1)

Installation

  1. Clone the repository:

    git clone https://github.com/bankai254/llm-python-program
    cd llm-python-program
    
  2. Set up a virtual environment:

    python -m venv llms

    Activate it on Windows: llms\Scripts\activate
    Activate it on Linux/macOS: source llms/bin/activate
    
  3. Get access to the LLMs via Hugging Face:

    Request access to the listed models on Hugging Face. This step is optional if you already have access to them.

  4. Set environment variables:

    • Hugging Face access token: used to download the LLMs.
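
    A minimal sketch of supplying the token in Python, assuming the standard huggingface_hub library; the exact environment-variable name this project reads is an assumption, so check its code for the real one:

    import os
    from huggingface_hub import login

    # "HF_TOKEN" is the conventional huggingface_hub variable name; the
    # name this project actually reads may differ.
    login(token=os.environ["HF_TOKEN"])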
  5. Install Packages:

    pip install -r requirements.txt
    
  6. Start the server:

    docker-compose up --build
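
    Once the server is up, you can exercise it with a short Python client. The port, route, and payload fields below are illustrative assumptions; adjust them to the server's actual configuration (see docker-compose.yml and the handler code):

    import requests  # pip install requests

    # Hypothetical endpoint and JSON shape; adjust to the real handler.
    resp = requests.post(
        "http://localhost:8000/query",
        json={"model": "Llama-2-7b-chat-hf", "query": "Hello!"},
    )
    print(resp.json())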
    
  7. Use the CLI (optional, useful for testing the LLM directly):

    python localLLM.py
    
