An interactive interface for GPT-X, built with Streamlit.
On the Stanford NLP cluster, grab a GPU machine, clone the repository, and set up a conda environment:

```bash
# Get a GPU machine: Titan-X is known to work while K40 does not
nlprun -a janus -g 1 -s bash

# Clone the repository
git clone https://github.com/stanford-mercury/janus.git
cd janus

# Use the existing conda environment available on the cluster
conda activate janus

# Alternatively, create and activate a fresh conda environment
conda create -n [env_name]
conda activate [env_name]
conda install pip
pip install -r requirements.txt
```
On macOS, clone the repository and create a conda environment from `environment-osx.yml`:

```bash
git clone https://github.com/stanford-mercury/janus.git
cd janus
conda env create -f environment-osx.yml
```
Run the Streamlit application and you're good to begin!

```bash
# Auto-selects the device
streamlit run main.py

# Use CPU only
streamlit run main.py -- --device 'cpu'
```
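Everything after the bare `--` is passed to the script itself rather than to Streamlit, so `--device` is presumably an argument that `main.py` parses before loading the model. Below is a minimal sketch of how such a flag could be resolved, assuming an argparse-based setup; the function and flag handling are illustrative, not the actual Janus code.

```python
# Illustrative sketch only; the real main.py may resolve the device differently.
import argparse

import torch


def resolve_device() -> torch.device:
    parser = argparse.ArgumentParser()
    parser.add_argument("--device", default=None,
                        help="'cpu' or 'cuda'; leave unset to auto-select")
    # parse_known_args tolerates any extra arguments passed to the script
    args, _ = parser.parse_known_args()
    if args.device is not None:
        return torch.device(args.device)
    # Auto-select: prefer the GPU when torch can actually see one
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")
```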
Navigate to the URL printed in the terminal to open the app.
Note (Stanford NLP Cluster): GPU usage has been tested on `titanx` machines. The `k40` GPUs don't work due to torch/CUDA version issues, so launch the app in CPU-only mode on those machines.
You'll have to register the first time you use the app. The master password is `mercury-bagel`.
By default, Janus collects and stores all your interactions and generations in the app (whether you save them explicitly or not). When running the app locally, this data is stored under `data/<username>` and is not transmitted at this time.
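As a rough illustration of what that per-user storage could look like, here is a hypothetical sketch that appends one interaction record under `data/<username>`; the file name, record fields, and helper function are assumptions, not the actual Janus implementation.

```python
# Hypothetical sketch; the file layout and field names are assumptions.
import json
import time
from pathlib import Path


def log_interaction(username: str, prompt: str, generation: str) -> None:
    # Everything stays on the local filesystem under data/<username>.
    out_dir = Path("data") / username
    out_dir.mkdir(parents=True, exist_ok=True)
    record = {"timestamp": time.time(), "prompt": prompt, "generation": generation}
    with (out_dir / "interactions.jsonl").open("a") as f:
        f.write(json.dumps(record) + "\n")
```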
If you have any problems using Janus, please file a GitHub issue. Feedback can be given directly to the Project Mercury team.
If contributing to this repository, please make sure to do the following:

- On installing new dependencies (via `pip` or `conda`), please make sure to update the `environment.yml` files via the following command (note that you need to separately create the `environment-osx.yml` file by exporting from macOS!):

  ```bash
  rm environment.yml; conda env export --no-builds | grep -v "^prefix: " > environment.yml
  ```