feat: Azure OpenAI (#101)
* feat: Azure OpenAI

* fix: black

* refactor: changed davinci3 to 2

* fix: tests

* refactor: added missing newline

* refactor: code formatting

* fix: use .env_secret_azure for additional env vars for azure

* fix: use .env_azure for public services

* feat: azure api variables

* fix: use .env_azure for public services

* feat: created .env_secret_azure

* fix: use .env_azure for management assistants

* Feat/doc skills turnon logic to common (#94)

* move doc skills logic to common; introduce it to desc based skill selector

* turn on doc-based skills if we have doc in use for desc based skill selector; complex checks for llm based skill selector

* remove dff_meeting_analysis_skill from automatically added skills

* add comment about turning on doc based skills

* add doc-skill turn on logic to universal llm-based skill selector; also fix the issue with activating all skills from pipeline if there is an exception

* codestyle

* remove extra list(set())

* fixes acc to Dilya

* fix: skill selection logic with docs also

* fix: codestyle

* codestyle

* remove N_TURNS_TO_KEEP_DOC from skill selector

---------

Co-authored-by: dilyararimovna <[email protected]>

* Feat/weekly with separate files (#99)

* feat: management distribution

* fix: prompt selector for roles

* first commit for meeting analysis

* working distribution, but no meeting analysis yet

* prototype files

* prompts

* dff_meeting_analysis_skill instead of prompted; llm-based everything

* working version of meeting analysis skill

* dff_meeting_analysis_skill with 4 nodes

* doc-processor annotator

* added saving previous meeting analysis results; links to them are written to bot attrs

* update roles

* fix for meeting analysis skill, now working

* document only for now, then will be deleted

* prompt for unabridged response selection

* refactor doc_processor, remove unnecessary funcs

* better prompts

* better skill description in components

* add llm-based-skill-selector to dist

* enable finding previously generated meeting analyses; better fallback

* 512 max_tokens for chatgpt in some cases

* enhance response selector prompt

* add dff_meeting_analysis_skill_formatter

* some fixes to cards and configs

* update readmes

* correct ports for doc processor; remove extra prompt

* codestyle

* codestyle

* fixes for Dilya

* enhanced checks

* typo

* codestyle + small fix for checks

* file moved to google drive

* remove extra print

* checking each file if processed; concatenating multiple files; two containers for doc-processor

* typo fix

* unique ids for files in data/, ids to paths in config

* delete transcript files

* codestyle

* fix: UIDs for files in data now working

* fixes in working with files

* codestyle

* fix error in getting related_files

* Revert "fix error in getting related_files"

This reverts commit 705e23897e9317e1ba24702b14e7c097da093dcd.

* working fix for bot_attrs_files

* remove document file

* numerous fixes for review

* codestyle

* bring some things to common

* even better funcs in common

* codestyle

* saving all processed docs in atts; saving candidate texts in adds of utt; link or path possible for processing from atts

* fixes for accidentally broken stuff

* some more fixes

* candidate texts to hyp attributes

* codestyle

* FILE_SERVER_TIMEOUT as arg

* GENERATIVE_SERVICE_URL as arg

* fix: formatters in pipeline_conf

* component card for vectorize_documents

* openai-chatgpt-long.json for document-qa-llm-skill

* openai-chatgpt-long.json for meeting-analysis-skill

* fix: timeouts and component card paths

* add regex for http check

* doc processor names in service_config files

* update getting envvars

* codestyle

* fix: remove envvars from everywhere

* fix: remove envvars from everywhere

* fixes: details in cards and pipeline

* fixes: details in cards and pipeline

* feat: special message if failed to process file from atts

* get token limit from service endpoint

* fix: better upload_document, try except inside func & enable both text and file upload in one func

* docstrings; also fix: detecting extension for links

* codestyle

* again codestyle

* update READMEs with dialog state info

* fix: add diff endpoints to doc-retriever readme

* fix: solve inconsistencies in cards and readmes

* fix: incorrect formatters in cards

* update ports to non-allocated ones

* fixes: everything acc to comments

* codestyle

* generalize file service url in another comment

* codestyle

* refactor attributes structure

* update readmes to include info about new attributes format

* fix: clean config; comment about format

* add comments; {FILE_SERVER_URL} instead of actual url

* comments and readmes

* implement storing doc for N_TURNS_FOR_DISCUSSION turns

* codestyle

* improve N_TURNS_FOR_DISCUSSION, implement only for doc-processor-from-atts

* better logging in doc-retriever

* codestyle

* more comments

* codestyle

* delete extra logs

* some more comments

* count n_steps_discussed in any case; put that to readme

* fix: n_steps_discussed in correct place

* fix: if file was processed earlier, take processed text from processed_documents

* if we get doc from somewhere, consider it good as new -> reset n_steps_discussed to 0

* codestyle

* update comments; fix logic of n_steps_discussed

* better comments

* fix: small fixes

* N_TURNS_FOR_DISCUSSION -> N_TURNS_TO_KEEP_DOC

* N_TURNS_TO_KEEP_DOC in distribution files

* N_TURNS_TO_KEEP_DOC: 10 ->; also updates in readmes and comments

* codestyle

* comment about N_TURNS_TO_KEEP_DOC

* comment about N_TURNS_TO_KEEP_DOC

* fix: remove sentseg from management dist

* better descriptions for skills

* fix hyp format for dff_meeting_analysis_skill

* fixes: remove logs, improve skill description

* ensure unique ids everywhere; add dialog_id to file_id

* update skill selector: turn off doc-based skills when we don't have doc

* codestyle

* codestyle again

* remove one extra log

* now we can also process files from file server

* codestyle

* fix: is_container_running to response.py

* fix to prompts; also longer context for many services

* always turn on document-based wa skill

* codestyle

* add file exists check

* start adding question_answering default node

* node for question answering in meeting analysis skill; small change in llm-based-skill-selector

* codestyle

* condition file

* Dilya's fixes for skill-selector

* codestyle

* slightly improve prompt for response selector

* fix: chunks only split by newlines

* fix: no extra info in prompts; better response selector

* small fixes

* codestyle

* added list title

* codestyle

* codestyle

* moved is_container_running up

* fix: tags: selector

* add check if skill to add is in pipeline

* shorter prompt for response selector

* copy older dist with tf-idf qa as management_assistant_extended

* remove tf-idf qa skill from management assistant

* update description for meeting analysis skill

* remove doc-retriever from main distribution

* better guidance for qa

* feat: turn on dff_meeting_analysis_skill when it was used with the same doc before

* codestyle

* codestyle

* fix: only perform doc-related checks in skill selector if we actually have a doc in use

* fix: include situation when we don't have prev_skills or prev_docs in skill selector

* use gpt4 for meeting analysis skill

* feat: add progress by areas

* improve prompts

* gpt-4 response selector

* feat: weekly reports, draft

* improved prompts for showing titles

* huge timeout

* add re.DOTALL flag

* fix: regex for conditions

* now working with separate files in use

* update attributes format (for docs_in_use)

* update test files for new attribute format

* codestyle

* update annotator readmes

* update skill for new attributes format

* improve comments

* switch to chatgpt

* fixing conflicts from merge

* fix things lost during merge

* codestyle

* add some more accidentally lost info

* return accidentally lost change

* changes for Dilya

* filetype exception - remove logging

* remove sentry from utils.py

* flake8 improve work with exception; update info about meeting skill in extended dist

* update envvars

* remove unnecessary const

---------

Co-authored-by: dilyararimovna <[email protected]>
Co-authored-by: Ubuntu <[email protected]>

* fix: remove envvars to send from attributes (#102)

* Feat/check before question answering node in meeting analysis (#104)

* first commit for check before call LLM

* condition for calling gpt4: WIP

* condition for calling gpt4: WIP-2

* working check before qa node

* docker container arg SHORT_GENERATIVE_SERVICE everywhere; fix README

* codestyle

* update docs_in_use; add comment

* move prompts to common

* fix typo

* Feat/summary length options (#105)

* feat: length of summary now controllable

* codestyle

* flag re.IGNORECASE

* gpt4 for response generation and selection in management assistant dists (#106)

* replace chatgpt with gpt4 for response generation and selection in management assistant dists

* add gpt4 container to management_assistant

* also add to dev.yml

* llm-based-response-selector-gpt4

* fixes acc to Dilya

* feat: show up google api skill (#52)

* feat: show up google api skill

* fix: do not use envvars to send in google api skill

* fix: timeout for google api skill

* fix: do not wait for google api

* fix: short_generative_service in correct Dockerfile (#107)

* Feat/nice formatting (#110)

* formatting: first commit

* unify summary descriptions

* formatting for titles completed

* fix compose_variables; fix getting parts of report; fix summary length prompts; fix formatting

* fix: verify=False for getting files

* improve some prompts

* working formatting

* codestyle

* add comments

* formatting fixes

* sent most of logic to utils

* codestyle

* fix: use .env_azure

---------

Co-authored-by: dilyararimovna <[email protected]>
Co-authored-by: Nika Smilga <[email protected]>
Co-authored-by: Ubuntu <[email protected]>
4 people authored Nov 9, 2023
1 parent 6b0f6dc commit 3bfdb1c
Showing 15 changed files with 132 additions and 53 deletions.
40 changes: 40 additions & 0 deletions .env_azure
@@ -0,0 +1,40 @@
EXTERNAL_FOLDER=~/.deeppavlov/agent/
SENTRY_DSN=
DP_AGENT_SENTRY_DSN=
DB_NAME=dream-dev
DB_HOST=mongo
DB_PORT=27017
TFIDF_BAD_FILTER=1
USE_TFIDF_COBOT=
USE_ASSESSMENT=1
KEEP_ALIVE=True
KEEP_ALIVE_TIMEOUT=35
REQUEST_TIMEOUT=2
RESPONSE_TIMEOUT=2
CUSTOM_DIALOG_FILE=data/test_data.json
DB_CONFIG=db_conf.json
PIPELINE_CONFIG=pipeline_conf.json
TIME_LIMIT=30
KBQA_URL=http://kbqa:8072/model
TEXT_QA_URL=http://text-qa:8078/model
BADLIST_ANNOTATOR_URL=http://badlisted-words:8018/badlisted_words_batch
COMET_ATOMIC_SERVICE_URL=http://comet-atomic:8053/comet
COMET_CONCEPTNET_SERVICE_URL=http://comet-conceptnet:8065/comet
MASKED_LM_SERVICE_URL=http://masked-lm:8102/respond
DP_WIKIDATA_URL=http://wiki-parser:8077/model
DP_ENTITY_LINKING_URL=http://entity-linking:8075/model
KNOWLEDGE_GROUNDING_SERVICE_URL=http://knowledge-grounding:8083/respond
WIKIDATA_DIALOGUE_SERVICE_URL=http://wikidata-dial-service:8092/model
NEWS_API_ANNOTATOR_URL=http://news-api-annotator:8112/respond
WIKI_FACTS_URL=http://wiki-facts:8116/respond
FACT_RANDOM_SERVICE_URL=http://fact-random:8119/respond
INFILLING_SERVICE_URL=http://infilling:8106/respond
DIALOGPT_CONTINUE_SERVICE_URL=http://dialogpt:8125/continue
PROMPT_STORYGPT_SERVICE_URL=http://prompt-storygpt:8127/respond
STORYGPT_SERVICE_URL=http://storygpt:8126/respond
FILE_SERVER_URL=http://files:3000
SUMMARIZATION_SERVICE_URL=http://dialog-summarizer:8059/respond_batch
# Comment following three variables if you want to use api.openai.com instead of azure
OPENAI_API_BASE=https://sentius-swe.openai.azure.com/
OPENAI_API_TYPE=azure
OPENAI_API_VERSION=2023-05-15
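
These Azure-specific settings hold everything except the key itself; AZURE_API_KEY lives in the git-ignored .env_secret_azure. As a minimal sketch (an assumption for illustration, not taken from this diff), a service built on the pre-1.0 openai Python SDK, such as the ones under services/openai_api_lm, could consume the variables roughly like this:

import os
import openai

# Assumption: the service switches between Azure and api.openai.com based on OPENAI_API_TYPE.
if os.getenv("OPENAI_API_TYPE") == "azure":
    openai.api_type = "azure"
    openai.api_base = os.environ["OPENAI_API_BASE"]        # e.g. https://<resource>.openai.azure.com/
    openai.api_version = os.environ["OPENAI_API_VERSION"]  # e.g. 2023-05-15
    openai.api_key = os.environ["AZURE_API_KEY"]           # secret, supplied via .env_secret_azure
else:
    openai.api_key = os.environ["OPENAI_API_KEY"]          # plain api.openai.com usage

With wiring like this, commenting out the three OPENAI_API_* lines above (as the comment in .env_azure suggests) falls back to the public OpenAI endpoint.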
4 changes: 4 additions & 0 deletions .github/workflows/alpha.yml
@@ -44,3 +44,7 @@ jobs:
REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
PROXY_HOST: ${{ secrets.PROXY_HOST }}
SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
OPENAI_API_TYPE: ${{ secrets.OPENAI_API_TYPE }}
OPENAI_API_VERSION: ${{ secrets.OPENAI_API_VERSION }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
4 changes: 4 additions & 0 deletions .github/workflows/dev.yml
@@ -47,3 +47,7 @@ jobs:
REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
PROXY_HOST: ${{ secrets.PROXY_HOST }}
SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
OPENAI_API_TYPE: ${{ secrets.OPENAI_API_TYPE }}
OPENAI_API_VERSION: ${{ secrets.OPENAI_API_VERSION }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
4 changes: 4 additions & 0 deletions .github/workflows/staging.yml
@@ -47,3 +47,7 @@ jobs:
REGISTRY_URL: ${{ secrets.REGISTRY_URL }}
PROXY_HOST: ${{ secrets.PROXY_HOST }}
SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
OPENAI_API_TYPE: ${{ secrets.OPENAI_API_TYPE }}
OPENAI_API_VERSION: ${{ secrets.OPENAI_API_VERSION }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
6 changes: 5 additions & 1 deletion .github/workflows/test.yml
@@ -11,7 +11,7 @@ jobs:
- uses: actions/checkout@v3

- name: Add .env_secret file
run: touch .env_secret
run: touch .env_secret .env_secret_azure

- name: Add google api keys
run: ./tests/test.sh MODE=add-google
@@ -63,3 +63,7 @@ jobs:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
OPENAI_API_TYPE: ${{ secrets.OPENAI_API_TYPE }}
OPENAI_API_VERSION: ${{ secrets.OPENAI_API_VERSION }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
1 change: 1 addition & 0 deletions .gitignore
@@ -146,3 +146,4 @@ docker-compose-one-replica.yml
# personal env keys and tokens
*.env_secret
*.env_secret_ru
*.env_secret_azure
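
This mirrors the existing *.env_secret and *.env_secret_ru entries: non-secret Azure endpoint settings are tracked in .env_azure, while the key stays in a local, untracked .env_secret_azure, which in the simplest case would contain a single line such as AZURE_API_KEY=<your Azure OpenAI key> (a hypothetical example; the real file is created per deployment and never committed).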
14 changes: 7 additions & 7 deletions assistant_dists/management_assistant/docker-compose.override.yml
@@ -16,7 +16,7 @@ services:
image: julienmeerschart/simple-file-upload-download-server

llm-based-skill-selector:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8182
@@ -77,7 +77,7 @@ services:
memory: 2G

llm-based-response-selector-gpt4:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8003
@@ -141,7 +141,7 @@ services:
memory: 3G

openai-api-chatgpt:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8145
@@ -161,7 +161,7 @@ services:
memory: 100M

openai-api-gpt4:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8159
@@ -178,9 +178,9 @@ services:
memory: 500M
reservations:
memory: 100M

dff-general-pm-prompted-skill:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8189
@@ -200,7 +200,7 @@ services:
memory: 128M

dff-meeting-analysis-skill:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8186
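
Two kinds of services are switched in this file: the bare LLM proxy containers (openai-api-chatgpt, openai-api-gpt4) now read .env_azure instead of .env, since they only need the endpoint, API type, and API version, while the skills and selectors that previously loaded .env_secret now load .env plus .env_secret_azure so that they also receive the Azure key. Compose evaluates the env_file list in order, with later files overriding earlier ones, so values from .env_secret_azure take precedence over any identically named variable in .env.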
(another changed docker-compose file; file name not captured)
@@ -39,7 +39,7 @@ services:
memory: 10G

openai-api-gpt4:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8159
@@ -58,7 +58,7 @@ services:
memory: 100M

llm-based-skill-selector:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8182
@@ -119,7 +119,7 @@ services:
memory: 2G

llm-based-response-selector-gpt4:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8003
@@ -183,7 +183,7 @@ services:
memory: 3G

openai-api-chatgpt:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8145
@@ -203,7 +203,7 @@ services:
memory: 100M

dff-roles-prompted-skill:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8185
@@ -223,7 +223,7 @@ services:
memory: 128M

dff-meeting-analysis-skill:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8186
@@ -246,7 +246,7 @@ services:
memory: 500M

dff-document-qa-llm-skill:
env_file: [ .env, .env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8166
(another changed docker-compose file; file name not captured)
@@ -34,7 +34,7 @@ services:
memory: 2G

llm-based-skill-selector:
env_file: [ .env,.env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8182
@@ -57,7 +57,7 @@ services:
memory: 100M

llm-based-response-selector:
env_file: [ .env,.env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8003
@@ -121,7 +121,7 @@ services:
memory: 3G

openai-api-chatgpt:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8145
@@ -141,7 +141,7 @@ services:
memory: 100M

dff-dream-persona-chatgpt-prompted-skill:
env_file: [ .env,.env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8137
@@ -161,7 +161,7 @@ services:
memory: 128M

dff-casual-email-prompted-skill:
env_file: [ .env,.env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8154
@@ -181,7 +181,7 @@ services:
memory: 128M

dff-meeting-notes-prompted-skill:
env_file: [ .env,.env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8155
@@ -201,7 +201,7 @@ services:
memory: 128M

dff-official-email-prompted-skill:
env_file: [ .env,.env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8156
@@ -221,7 +221,7 @@ services:
memory: 128M

dff-plan-for-article-prompted-skill:
env_file: [ .env,.env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8157
@@ -241,7 +241,7 @@ services:
memory: 128M

dff-google-api-skill:
env_file: [ .env,.env_secret ]
env_file: [ .env, .env_secret_azure ]
build:
args:
SERVICE_PORT: 8162
(another changed docker-compose file; file name not captured)
@@ -70,7 +70,7 @@ services:
memory: 3G

openai-api-chatgpt:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8145
@@ -89,12 +89,12 @@
memory: 100M

openai-api-davinci3:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8131
SERVICE_NAME: openai_api_davinci3
PRETRAINED_MODEL_NAME_OR_PATH: text-davinci-003
PRETRAINED_MODEL_NAME_OR_PATH: davinci-002
context: .
dockerfile: ./services/openai_api_lm/Dockerfile
command: flask run -h 0.0.0.0 -p 8131
@@ -108,7 +108,7 @@
memory: 100M

openai-api-gpt4:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8159
@@ -127,7 +127,7 @@
memory: 100M

openai-api-gpt4-32k:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8160
@@ -146,7 +146,7 @@
memory: 100M

openai-api-chatgpt-16k:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8167
(another changed docker-compose file; file name not captured)
@@ -95,7 +95,7 @@ services:
memory: 3G

openai-api-chatgpt:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8145
@@ -114,12 +114,12 @@
memory: 100M

openai-api-davinci3:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8131
SERVICE_NAME: openai_api_davinci3
PRETRAINED_MODEL_NAME_OR_PATH: text-davinci-003
PRETRAINED_MODEL_NAME_OR_PATH: davinci-002
context: .
dockerfile: ./services/openai_api_lm/Dockerfile
command: flask run -h 0.0.0.0 -p 8131
@@ -133,7 +133,7 @@
memory: 100M

openai-api-gpt4:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8159
@@ -152,7 +152,7 @@
memory: 100M

openai-api-gpt4-32k:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8160
@@ -171,7 +171,7 @@
memory: 100M

openai-api-chatgpt-16k:
env_file: [ .env ]
env_file: [ .env_azure ]
build:
args:
SERVICE_PORT: 8167
7 changes: 7 additions & 0 deletions common/generative_configs/davinci-002.json
@@ -0,0 +1,7 @@
{
"max_tokens": 64,
"temperature": 0.4,
"top_p": 1.0,
"frequency_penalty": 0,
"presence_penalty": 0
}
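
Like the other files in common/generative_configs, this is a plain dictionary of sampling parameters that gets forwarded to the completion endpoint. A hedged sketch of how it might be used against an Azure deployment with the pre-1.0 openai SDK (the deployment name davinci-002 and the call site are assumptions for illustration, not taken from this commit):

import json
import openai  # assumed already configured for Azure, as in the earlier sketch

# Load the generation parameters added in this commit.
with open("common/generative_configs/davinci-002.json") as f:
    config = json.load(f)

# On Azure, `engine` refers to the deployment name rather than a model id.
response = openai.Completion.create(
    engine="davinci-002",
    prompt="Summarize the meeting notes in two sentences:",
    **config,  # max_tokens, temperature, top_p, frequency/presence penalties
)
print(response["choices"][0]["text"])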