The purpose of this service is to convert given arguments into a prompt, send it to an LLM, and return the result.

The service receives queries from api.serlo.org, which in turn receives them via a GraphQL operation from the frontend. Routing the requests through api.serlo.org ensures that the user is logged in and has the role required for the feature.
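The flow described above can be sketched as follows. This is a minimal, illustrative sketch only: the argument names (`topic`, `grade`) and the helper functions are hypothetical, not the actual identifiers used in `src.main`.

```python
# Hypothetical sketch of the request flow; the real endpoint code and
# field names in this repository may differ.
from dataclasses import dataclass
from typing import Callable


@dataclass
class PromptRequest:
    """Arguments received from api.serlo.org (illustrative fields)."""

    topic: str
    grade: int


def build_prompt(request: PromptRequest) -> str:
    """Convert the given arguments into a prompt for the LLM."""
    return (
        f"Create an exercise about '{request.topic}' "
        f"suitable for grade {request.grade}."
    )


def handle_request(request: PromptRequest, llm_complete: Callable[[str], str]) -> str:
    """Build the prompt, call the LLM, and send the result back."""
    prompt = build_prompt(request)
    return llm_complete(prompt)
```

The LLM call is passed in as a plain callable here so the prompt-building step stays testable without network access.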
- Install the Python version specified in `.tool-versions`. You may use asdf for the installation.
- Install the requirements using pipenv:
- Run `pipenv shell` to activate the project's virtual environment.
- Run `pipenv install --dev` to install the development dependencies.
- Run `pipenv run lint` to run the linting.
- Run `pipenv run format` to format the code.
- Run `pipenv run type_check` to run the static type checker (mypy).
Copy the `.env.sample` file into a file named `.env`, and change the `OPENAI_API_KEY` value to a valid one:

```sh
cp .env.sample .env
```
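The service presumably reads `OPENAI_API_KEY` from the environment at startup. A minimal sketch of such a lookup, with a helpful error when the variable is missing (the function name is hypothetical, not the one used in this repository):

```python
# Illustrative only: how a service can read OPENAI_API_KEY from the
# environment. The actual loading code in src.main may differ.
import os


def get_openai_api_key() -> str:
    """Return the OpenAI API key, failing loudly if it is not configured."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set - did you create the .env file?"
        )
    return key
```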
To run the service, run

```sh
uvicorn 'src.main:app' --reload --port=8082
```

where `--reload` is optional and reloads the app whenever you change the code.

Alternatively, using Docker, simply run `docker compose up -d`.
Your first step will likely be to look at
http://localhost:8082/docs#/,
where you can use the "Try it out" button for an endpoint to test it or to generate a request URL.
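For scripting against the service, a request URL like the one the "Try it out" button generates can also be composed by hand. A small sketch, where the endpoint path and query parameter are hypothetical placeholders:

```python
# Sketch of composing a GET request URL for a service endpoint.
# "/example" and "topic" are illustrative, not real endpoint names.
from urllib.parse import urlencode

BASE_URL = "http://localhost:8082"


def build_request_url(path: str, **params: str) -> str:
    """Compose a request URL for the locally running service."""
    query = urlencode(params)
    return f"{BASE_URL}{path}?{query}" if query else f"{BASE_URL}{path}"
```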
If you would like to see the debug-level logs (for example, the prompt sent to the LLM), change the root log level in `logging.conf` from `INFO` to `DEBUG`.
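`logging.conf` follows the format of Python's `logging.config.fileConfig`; the relevant section typically looks like this (the handler name is illustrative and may differ in this repository):

```ini
[logger_root]
; change INFO to DEBUG to see e.g. the prompt sent to the LLM
level = DEBUG
handlers = consoleHandler
```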
Happy coding!