
Containerization Setup with Ollama and Ollama WebUI

This project provides a containerized setup for running a local LLM with Ollama and Ollama WebUI.

Setup

NOTE: Ensure that you have Podman and Podman Compose (or Docker Compose) installed.

To install Podman (on Fedora/RHEL-based systems), run:

sudo dnf install podman

To install Podman Compose, use:

pip3 install podman-compose
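Before continuing, you can verify that both tools are on your PATH (a quick sanity check; the exact version output will vary):

podman --version
podman-compose --version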

Running myllm

To run myllm, execute the following command:

./myllm
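If the script is not executable on your machine (execute permissions are sometimes lost when copying files), mark it executable first:

chmod +x myllm
./myllm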

OR

If you prefer to use myllm as a command, follow these steps:

  1. Copy the script and its compose file to /opt:
     sudo cp myllm myllm_docker-compose.yml /opt
  2. Add /opt to your PATH:
     export PATH=$PATH:/opt
  3. Now you can run:
     myllm
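Note that export PATH=$PATH:/opt only applies to the current shell session. A minimal sketch to make it persistent, assuming a Bash shell (adjust the rc file for your shell):

echo 'export PATH=$PATH:/opt' >> ~/.bashrc
source ~/.bashrc
myllm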

How to Use

  • Run myllm (or ./myllm) and enter 1 to start the local LLM.
  • Open your browser and navigate to http://localhost:8080 to access the prompt UI.
  • If this is your first time running the application on this machine, create a user account. All user data is stored locally.
  • By default, this setup does not include any pre-installed models, so you will need to download one from the Ollama repository. Go to Settings > Models, enter the model name (e.g., qwen2:0.5b) under "Pull a model from ollama.com," and click the download button. A list of available models is at https://ollama.com/library. Models can also be pulled from the command line, as shown in the sketch after this list.
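As an alternative to the UI, you can pull models with the Ollama CLI inside the running container. A sketch, assuming the Ollama container is named ollama (the actual name comes from myllm_docker-compose.yml; check with podman ps):

podman ps
podman exec -it ollama ollama pull qwen2:0.5b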

Credits

Happy Prompting! 😊
