Refactor lvm related examples (#1333)
Spycsh authored Jan 13, 2025
1 parent f48bd8e commit ca15fe9
Showing 25 changed files with 161 additions and 154 deletions.
6 changes: 3 additions & 3 deletions MultimodalQnA/docker_compose/amd/gpu/rocm/README.md
@@ -39,7 +39,7 @@ docker build --no-cache -t opea/embedding:latest --build-arg https_proxy=$https_
Build lvm-llava image

```bash
docker build --no-cache -t opea/lvm-llava:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/llava/dependency/Dockerfile .
docker build --no-cache -t opea/lvm-llava:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/integrations/dependency/llava/Dockerfile .
```
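After the build, a quick smoke test can confirm the dependency server comes up; a minimal sketch, assuming the container listens on the LLAVA server port 8399 that this README curls later:

```bash
# Smoke test for the freshly built dependency image; mapping port 8399 is an
# assumption based on the LLAVA_SERVER_PORT used elsewhere in this README
docker run -d --name lvm-llava-test -p 8399:8399 opea/lvm-llava:latest
docker logs -f lvm-llava-test
```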

### 3. Build retriever-multimodal-redis Image
@@ -85,7 +85,7 @@ Then run the command `docker images`, you will have the following 8 Docker Image

1. `opea/dataprep-multimodal-redis:latest`
2. `ghcr.io/huggingface/text-generation-inference:2.4.1-rocm`
3. `opea/lvm-tgi:latest`
3. `opea/lvm:latest`
4. `opea/retriever-multimodal-redis:latest`
5. `opea/embedding:latest`
6. `opea/embedding-multimodal-bridgetower:latest`
@@ -193,7 +193,7 @@ curl http://${host_ip}:${LLAVA_SERVER_PORT}/generate \
-d '{"prompt":"Describe the image please.", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
```

5. lvm-llava-svc
5. lvm

```bash
curl http://${host_ip}:9399/v1/lvm \
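The rest of this request is collapsed in the diff view; a complete call, abridged from the payload this commit's test scripts send (some metadata fields dropped), would look like:

```bash
# Sketch of a full /v1/lvm request; the tiny b64 string is the same yellow
# placeholder image the test scripts use
curl http://${host_ip}:9399/v1/lvm \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image"}]}'
```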
11 changes: 6 additions & 5 deletions MultimodalQnA/docker_compose/amd/gpu/rocm/compose.yaml
@@ -24,7 +24,7 @@ services:
container_name: dataprep-multimodal-redis
depends_on:
- redis-vector-db
- lvm-tgi
- lvm
ports:
- "6007:6007"
environment:
@@ -116,9 +116,9 @@ services:
ipc: host
command: --model-id ${LVM_MODEL_ID} --max-input-tokens 3048 --max-total-tokens 4096
restart: unless-stopped
lvm-tgi:
image: ${REGISTRY:-opea}/lvm-tgi:${TAG:-latest}
container_name: lvm-tgi
lvm:
image: ${REGISTRY:-opea}/lvm:${TAG:-latest}
container_name: lvm
depends_on:
- tgi-rocm
ports:
@@ -128,6 +128,7 @@
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LVM_COMPONENT_NAME: "OPEA_TGI_LLAVA_LVM"
LVM_ENDPOINT: ${LVM_ENDPOINT}
HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0
@@ -140,7 +141,7 @@
- dataprep-multimodal-redis
- embedding
- retriever-redis
- lvm-tgi
- lvm
ports:
- "8888:8888"
environment:
12 changes: 6 additions & 6 deletions MultimodalQnA/docker_compose/intel/cpu/xeon/README.md
@@ -36,7 +36,7 @@ lvm-llava
================
Port 8399 - Open to 0.0.0.0/0
lvm-llava-svc
lvm
===
Port 9399 - Open to 0.0.0.0/0
@@ -132,13 +132,13 @@ docker build --no-cache -t opea/retriever-redis:latest --build-arg https_proxy=$
Build lvm-llava image

```bash
docker build --no-cache -t opea/lvm-llava:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/llava/dependency/Dockerfile .
docker build --no-cache -t opea/lvm-llava:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/integrations/dependency/llava/Dockerfile .
```

Build lvm-llava-svc microservice image
Build lvm microservice image

```bash
docker build --no-cache -t opea/lvm-llava-svc:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/llava/Dockerfile .
docker build --no-cache -t opea/lvm:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/Dockerfile .
```
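The unified `opea/lvm` image is a thin wrapper around whichever backend `LVM_COMPONENT_NAME` selects. A hypothetical standalone run, with env names taken from this commit's compose.yaml and the endpoint value as an assumption:

```bash
# Standalone sketch of the lvm wrapper; LVM_COMPONENT_NAME and LVM_ENDPOINT
# come from compose.yaml, the port values are assumptions
docker run -d --name lvm -p 9399:9399 \
  -e LVM_COMPONENT_NAME="OPEA_LLAVA_LVM" \
  -e LVM_ENDPOINT="http://${host_ip}:8399" \
  opea/lvm:latest
```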

### 4. Build dataprep-multimodal-redis Image
@@ -179,7 +179,7 @@ cd ../../../
Then run the command `docker images`, and you will have the following 11 Docker Images:

1. `opea/dataprep-multimodal-redis:latest`
2. `opea/lvm-llava-svc:latest`
2. `opea/lvm:latest`
3. `opea/lvm-llava:latest`
4. `opea/retriever-multimodal-redis:latest`
5. `opea/whisper:latest`
@@ -271,7 +271,7 @@ curl http://${host_ip}:${LLAVA_SERVER_PORT}/generate \
-d '{"prompt":"Describe the image please.", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
```

6. lvm-llava-svc
6. lvm

```bash
curl http://${host_ip}:9399/v1/lvm \
9 changes: 5 additions & 4 deletions MultimodalQnA/docker_compose/intel/cpu/xeon/compose.yaml
@@ -100,9 +100,9 @@ services:
https_proxy: ${https_proxy}
entrypoint: ["python", "llava_server.py", "--device", "cpu", "--model_name_or_path", $LVM_MODEL_ID]
restart: unless-stopped
lvm-llava-svc:
image: ${REGISTRY:-opea}/lvm-llava-svc:${TAG:-latest}
container_name: lvm-llava-svc
lvm:
image: ${REGISTRY:-opea}/lvm:${TAG:-latest}
container_name: lvm
depends_on:
- lvm-llava
ports:
@@ -112,6 +112,7 @@
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LVM_COMPONENT_NAME: "OPEA_LLAVA_LVM"
LVM_ENDPOINT: ${LVM_ENDPOINT}
restart: unless-stopped
multimodalqna:
@@ -122,7 +123,7 @@
- dataprep-multimodal-redis
- embedding
- retriever-redis
- lvm-llava-svc
- lvm
ports:
- "8888:8888"
environment:
10 changes: 5 additions & 5 deletions MultimodalQnA/docker_compose/intel/hpu/gaudi/README.md
@@ -86,10 +86,10 @@ Build TGI Gaudi image
docker pull ghcr.io/huggingface/tgi-gaudi:2.0.6
```
Build lvm-tgi microservice image
Build lvm microservice image
```bash
docker build --no-cache -t opea/lvm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/tgi-llava/Dockerfile .
docker build --no-cache -t opea/lvm:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/Dockerfile .
```
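Note that after this refactor the same `opea/lvm:latest` image serves both the native LLaVA and the TGI-hosted LLaVA backends; the compose files select the integration via `LVM_COMPONENT_NAME`. A sketch of the two configurations, using the values this commit adds:

```bash
# Backend selection for the unified lvm image (both values appear in the
# compose.yaml changes in this commit)
export LVM_COMPONENT_NAME="OPEA_TGI_LLAVA_LVM"   # Gaudi: TGI-hosted LLaVA
# export LVM_COMPONENT_NAME="OPEA_LLAVA_LVM"     # Xeon: native llava server
```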
### 4. Build dataprep-multimodal-redis Image
@@ -128,7 +128,7 @@ docker build --no-cache -t opea/multimodalqna-ui:latest --build-arg https_proxy=
Then run the command `docker images`, and you will have the following 11 Docker Images:
1. `opea/dataprep-multimodal-redis:latest`
2. `opea/lvm-tgi:latest`
2. `opea/lvm:latest`
3. `ghcr.io/huggingface/tgi-gaudi:2.0.6`
4. `opea/retriever-multimodal-redis:latest`
5. `opea/whisper:latest`
@@ -220,7 +220,7 @@ curl http://${host_ip}:${LLAVA_SERVER_PORT}/generate \
-H 'Content-Type: application/json'
```
6. lvm-tgi
6. lvm
```bash
curl http://${host_ip}:9399/v1/lvm \
@@ -274,7 +274,7 @@ curl --silent --write-out "HTTPSTATUS:%{http_code}" \
-F "files=@./${audio_fn}"
```

Also, test the dataprep microservice by generating an image caption using lvm-tgi
Also, test the dataprep microservice by generating an image caption using lvm

```bash
curl --silent --write-out "HTTPSTATUS:%{http_code}" \
11 changes: 6 additions & 5 deletions MultimodalQnA/docker_compose/intel/hpu/gaudi/compose.yaml
@@ -24,7 +24,7 @@ services:
container_name: dataprep-multimodal-redis
depends_on:
- redis-vector-db
- lvm-tgi
- lvm
ports:
- "6007:6007"
environment:
@@ -115,9 +115,9 @@ services:
ipc: host
command: --model-id ${LVM_MODEL_ID} --max-input-tokens 3048 --max-total-tokens 4096
restart: unless-stopped
lvm-tgi:
image: ${REGISTRY:-opea}/lvm-tgi:${TAG:-latest}
container_name: lvm-tgi
lvm:
image: ${REGISTRY:-opea}/lvm:${TAG:-latest}
container_name: lvm
depends_on:
- tgi-gaudi
ports:
@@ -127,6 +127,7 @@
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LVM_COMPONENT_NAME: "OPEA_TGI_LLAVA_LVM"
LVM_ENDPOINT: ${LVM_ENDPOINT}
HF_HUB_DISABLE_PROGRESS_BARS: 1
HF_HUB_ENABLE_HF_TRANSFER: 0
@@ -139,7 +140,7 @@
- dataprep-multimodal-redis
- embedding
- retriever-redis
- lvm-tgi
- lvm
ports:
- "8888:8888"
environment:
14 changes: 4 additions & 10 deletions MultimodalQnA/docker_image_build/build.yaml
@@ -38,21 +38,15 @@ services:
lvm-llava:
build:
context: GenAIComps
dockerfile: comps/lvms/llava/dependency/Dockerfile
dockerfile: comps/lvms/src/integrations/dependency/llava/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/lvm-llava:${TAG:-latest}
lvm-llava-svc:
lvm:
build:
context: GenAIComps
dockerfile: comps/lvms/llava/Dockerfile
dockerfile: comps/lvms/src/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/lvm-llava-svc:${TAG:-latest}
lvm-tgi:
build:
context: GenAIComps
dockerfile: comps/lvms/tgi-llava/Dockerfile
extends: multimodalqna
image: ${REGISTRY:-opea}/lvm-tgi:${TAG:-latest}
image: ${REGISTRY:-opea}/lvm:${TAG:-latest}
dataprep-multimodal-redis:
build:
context: GenAIComps
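With this consolidation, the former `lvm-tgi` and `lvm-llava-svc` targets collapse into a single `lvm` target; the test scripts below invoke the same names in their service lists. For example:

```bash
# Build only the refactored targets from the updated build.yaml
docker compose -f build.yaml build lvm lvm-llava --no-cache
```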
8 changes: 4 additions & 4 deletions MultimodalQnA/tests/test_compose_on_gaudi.sh
@@ -22,7 +22,7 @@ function build_docker_images() {
cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="multimodalqna multimodalqna-ui embedding-multimodal-bridgetower embedding retriever-redis lvm-tgi dataprep-multimodal-redis whisper"
service_list="multimodalqna multimodalqna-ui embedding-multimodal-bridgetower embedding retriever-redis lvm dataprep-multimodal-redis whisper"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

docker pull ghcr.io/huggingface/tgi-gaudi:2.0.6
@@ -214,12 +214,12 @@ function validate_microservices() {
'{"inputs":"![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)What is this a picture of?\n\n","parameters":{"max_new_tokens":16, "seed": 42}}'

# lvm
echo "Evaluating lvm-tgi"
echo "Evaluating lvm"
validate_service \
"http://${host_ip}:9399/v1/lvm" \
'"text":"' \
"lvm-tgi" \
"lvm-tgi" \
"lvm" \
"lvm" \
'{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image", "video_id": "8c7461df-b373-4a00-8696-9a2234359fe0", "time_of_frame_ms":"37000000", "source_video":"WeAreGoingOnBullrun_8c7461df-b373-4a00-8696-9a2234359fe0.mp4"}], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'

# data prep requiring lvm
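The `validate_service` helper invoked above is defined elsewhere in these test scripts and is not part of this diff; a plausible minimal shape, offered purely as an assumption, would be:

```bash
# Hypothetical sketch of validate_service: POST the payload and check the
# response for the expected substring, dumping container logs on failure
function validate_service() {
    local url="$1" expected="$2" name="$3" container="$4" payload="$5"
    local response
    response=$(curl -s -X POST -H 'Content-Type: application/json' -d "$payload" "$url")
    if [[ "$response" == *"$expected"* ]]; then
        echo "[ $name ] passed"
    else
        echo "[ $name ] failed, response: $response"
        docker logs "$container"
        exit 1
    fi
}
```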
8 changes: 4 additions & 4 deletions MultimodalQnA/tests/test_compose_on_rocm.sh
@@ -23,7 +23,7 @@ function build_docker_images() {
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../

echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="multimodalqna multimodalqna-ui embedding-multimodal-bridgetower embedding retriever-redis lvm-tgi lvm-llava-svc dataprep-multimodal-redis whisper"
service_list="multimodalqna multimodalqna-ui embedding-multimodal-bridgetower embedding retriever-redis lvm dataprep-multimodal-redis whisper"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

docker images && sleep 1m
@@ -220,12 +220,12 @@ function validate_microservices() {
'{"inputs":"![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)What is this a picture of?\n\n","parameters":{"max_new_tokens":16, "seed": 42}}'

# lvm
echo "Evaluating lvm-llava-svc"
echo "Evaluating lvm"
validate_service \
"http://${host_ip}:9399/v1/lvm" \
'"text":"' \
"lvm-tgi" \
"lvm-tgi" \
"lvm" \
"lvm" \
'{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image", "video_id": "8c7461df-b373-4a00-8696-9a2234359fe0", "time_of_frame_ms":"37000000", "source_video":"WeAreGoingOnBullrun_8c7461df-b373-4a00-8696-9a2234359fe0.mp4"}], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'

# data prep requiring lvm
8 changes: 4 additions & 4 deletions MultimodalQnA/tests/test_compose_on_xeon.sh
@@ -22,7 +22,7 @@ function build_docker_images() {
cd $WORKPATH/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../
echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="multimodalqna multimodalqna-ui embedding-multimodal-bridgetower embedding retriever-redis lvm-llava lvm-llava-svc dataprep-multimodal-redis whisper"
service_list="multimodalqna multimodalqna-ui embedding-multimodal-bridgetower embedding retriever-redis lvm-llava lvm dataprep-multimodal-redis whisper"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

docker images && sleep 1m
@@ -212,12 +212,12 @@ function validate_microservices() {
'{"prompt":"Describe the image please.", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'

# lvm
echo "Evaluating lvm-llava-svc"
echo "Evaluating lvm"
validate_service \
"http://${host_ip}:9399/v1/lvm" \
'"text":"' \
"lvm-llava-svc" \
"lvm-llava-svc" \
"lvm" \
"lvm" \
'{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image", "video_id": "8c7461df-b373-4a00-8696-9a2234359fe0", "time_of_frame_ms":"37000000", "source_video":"WeAreGoingOnBullrun_8c7461df-b373-4a00-8696-9a2234359fe0.mp4"}], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'

# data prep requiring lvm
17 changes: 11 additions & 6 deletions VideoQnA/docker_compose/intel/cpu/xeon/README.md
@@ -71,10 +71,10 @@ docker build -t opea/reranking:latest --build-arg https_proxy=$https_proxy --bui
### 4. Build LVM Image (Xeon)

```bash
docker build -t opea/video-llama-lvm-server:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/video-llama/dependency/Dockerfile .
docker build -t opea/lvm-video-llama:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/integrations/dependency/video-llama/Dockerfile .

# LVM Service Image
docker build -t opea/lvm-video-llama:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/video-llama/Dockerfile .
docker build -t opea/lvm:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/Dockerfile .
```
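The first build produces the video-llama dependency server and the second the thin `opea/lvm` wrapper; tagging both locally keeps the compose defaults working. A quick check:

```bash
# Confirm both images exist under the names compose.yaml expects
# (REGISTRY defaults to "opea" and TAG to "latest" in the compose file)
docker images | grep -E 'opea/(lvm|lvm-video-llama)'
```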

### 5. Build Dataprep Image
@@ -109,11 +109,16 @@ Then run the command `docker images`, you will have the following 8 Docker Image
1. `opea/dataprep-multimodal-vdms:latest`
2. `opea/embedding-multimodal-clip:latest`
3. `opea/retriever-vdms:latest`
4. `opea/reranking:latest`
5. `opea/lvm-video-llama:latest`
6. `opea/lvm:latest`
7. `opea/videoqna:latest`
8. `opea/videoqna-ui:latest`

## 🚀 Start Microservices

@@ -275,7 +280,7 @@ docker compose up -d

On first startup, this service will take some time to download the LLM file. Once the download finishes, the service will be ready.

Use `docker logs video-llama-lvm-server` to check if the download is finished.
Use `docker logs lvm-video-llama` to check if the download is finished.

```bash
curl -X POST \
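A simple way to block until the download completes is to poll the container logs; a sketch, assuming the server prints a readiness line (the exact message may differ):

```bash
# Wait for the video-llama server to report readiness; the grep pattern is
# an assumption -- adjust it to whatever the server actually logs
until docker logs lvm-video-llama 2>&1 | grep -q "Uvicorn running"; do
    echo "model still downloading..."
    sleep 30
done
```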
9 changes: 5 additions & 4 deletions VideoQnA/docker_compose/intel/cpu/xeon/compose.yaml
@@ -75,8 +75,8 @@ services:
DATAPREP_GET_VIDEO_LIST_ENDPOINT: ${DATAPREP_GET_VIDEO_LIST_ENDPOINT}
restart: unless-stopped
lvm-video-llama:
image: ${REGISTRY:-opea}/video-llama-lvm-server:${TAG:-latest}
container_name: video-llama-lvm-server
image: ${REGISTRY:-opea}/lvm-video-llama:${TAG:-latest}
container_name: lvm-video-llama
ports:
- "9009:9009"
ipc: host
@@ -90,15 +90,16 @@
- video-llama-model:/home/user/model
restart: unless-stopped
lvm:
image: ${REGISTRY:-opea}/lvm-video-llama:${TAG:-latest}
container_name: lvm-video-llama
image: ${REGISTRY:-opea}/lvm:${TAG:-latest}
container_name: lvm
ports:
- "9000:9000"
ipc: host
environment:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
no_proxy: ${no_proxy}
LVM_COMPONENT_NAME: "OPEA_VIDEO_LLAMA_LVM"
LVM_ENDPOINT: ${LVM_ENDPOINT}
restart: unless-stopped
depends_on:
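After this rename, the dependency server (`lvm-video-llama`, port 9009) and the wrapper (`lvm`, port 9000) are distinct services. A wiring sketch, where pointing `LVM_ENDPOINT` at the dependency server is an assumption based on the ports above:

```bash
# Bring up the renamed pair; the wrapper on :9000 proxies to the
# video-llama server on :9009
export LVM_ENDPOINT="http://${host_ip}:9009"
docker compose up -d lvm-video-llama lvm
```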