Added basic documentation for all new benchmarks.
Signed-off-by: L Lakshmanan <[email protected]>
Lakshman authored and dhschall committed Jul 29, 2024
Parent: dce2050 · Commit: b1b9aa7
Showing 5 changed files with 324 additions and 0 deletions.

benchmarks/compression/README.md (63 additions, 0 deletions)

# Compression Benchmark

The compression benchmark measures the performance of a serverless platform on the task of file compression. The benchmark uses the zlib library to compress and decompress input files. A specific file can be specified with the `--def_file` flag; if none is specified, the benchmark uses a default file.
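
The core operation is a zlib round trip. Below is a minimal Python sketch of that operation, assuming a hypothetical input file name; it is an illustration, not the benchmark's exact implementation.

```python
import zlib

# "input.txt" is a placeholder; the benchmark uses its default file or the one
# passed via --def_file.
with open("input.txt", "rb") as f:
    data = f.read()

compressed = zlib.compress(data, 9)      # level 9: highest compression
restored = zlib.decompress(compressed)

assert restored == data
print(f"{len(data)} bytes -> {len(compressed)} bytes compressed")
```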

The functionality is implemented in Python. The function is invoked using gRPC.
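
For reference, a programmatic invocation follows the standard gRPC pattern. The sketch below assumes Python stubs generated from the benchmark's `helloworld` proto; the module names `helloworld_pb2` and `helloworld_pb2_grpc` are assumptions based on the usual protoc output, not files shipped in this directory.

```python
import grpc

# Assumed module names from standard protoc output for helloworld.proto.
import helloworld_pb2
import helloworld_pb2_grpc

# Port 50000 matches the docker-compose setup described below.
with grpc.insecure_channel("localhost:50000") as channel:
    stub = helloworld_pb2_grpc.GreeterStub(channel)
    reply = stub.SayHello(helloworld_pb2.HelloRequest(name="Example text for Compression"))
    print(reply.message)
```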

## Running this benchmark locally (using docker)

A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show it for the compression-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-compression-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using Knative)

A detailed, general description of how to run benchmarks on Knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show it for the compression-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Start the function with Knative
```bash
kubectl apply -f ./yamls/knative/kn-compression-python.yaml
```
3. **Note the URL provided in the output. We will refer to the part without the `http://` prefix as `$URL`. Replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with test-client.
```bash
./test-client --addr $URL:80 --name "Example text for Compression"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.

benchmarks/image-rotate/README.md (66 additions, 0 deletions)

# Image Rotate Benchmark

The image rotate benchmark rotates an input image by 90 degrees. An input image can be specified; if none is given, a default image is used. The benchmark also depends on a MongoDB database that stores the images available for use.
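
As an illustration of the core operation, the minimal Python sketch below rotates an image with Pillow. The library choice and file names are assumptions for illustration, not necessarily what the benchmark uses internally.

```python
from PIL import Image

# Placeholder file names; the benchmark normally loads images from its database.
with Image.open("input.jpg") as img:
    rotated = img.rotate(90, expand=True)  # expand=True keeps the whole rotated frame
    rotated.save("rotated.jpg")
```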

The `init-database.go` script runs when starting the function and populates the database with the images from the `images` folder.
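
The population step amounts to uploading every file in `images` to MongoDB. A rough Python equivalent of that Go helper is sketched below; the connection variable `DB_ADDR`, the database name, and the use of GridFS are assumptions for illustration.

```python
import glob
import os

import gridfs
from pymongo import MongoClient

# Connection string and database name are placeholders.
client = MongoClient(os.environ.get("DB_ADDR", "mongodb://localhost:27017"))
fs = gridfs.GridFS(client["images"])

for path in sorted(glob.glob("images/*")):
    with open(path, "rb") as f:
        fs.put(f.read(), filename=os.path.basename(path))
        print(f"uploaded {path}")
```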

The functionality is implemented in two runtimes, namely Go and Python. The function is invoked using gRPC.

## Running this benchmark locally (using docker)

A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show it for the image-rotate-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-image-rotate-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using Knative)

A detailed, general description of how to run benchmarks on Knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show it for the image-rotate-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Initialise the database and start the function with Knative
```bash
kubectl apply -f ./yamls/knative/image-rotate-database.yaml
kubectl apply -f ./yamls/knative/kn-image-rotate-python.yaml
```
3. **Note the URL provided in the output. We will refer to the part without the `http://` prefix as `$URL`. Replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with test-client.
```bash
./test-client --addr $URL:80 --name "Example text for Image-rotate"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.

benchmarks/rnn-serving/README.md (63 additions, 0 deletions)

# RNN Serving Benchmark

The RNN serving benchmark generates a string in a given language using an RNN model. A language can be specified as an input; if none is given, a default language is chosen, either at random or uniquely by the input generator.
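
To make the generation step concrete, the sketch below shows a character-level sampling loop in PyTorch. It is only an illustration: the network is untrained, and the alphabet, model shape, and end-of-string handling are assumptions rather than the benchmark's actual model.

```python
import torch
import torch.nn as nn

alphabet = "abcdefghijklmnopqrstuvwxyz"   # placeholder character set
n_chars = len(alphabet) + 1               # last index serves as an end-of-string marker

class CharRNN(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, n_chars)

    def forward(self, idx, hidden=None):
        x = self.embed(idx)
        y, hidden = self.gru(x, hidden)
        return self.out(y), hidden

def sample(model, start="a", max_len=12):
    """Feed characters back into the RNN until it emits the end marker."""
    model.eval()
    idx = torch.tensor([[alphabet.index(start)]])
    hidden, result = None, start
    with torch.no_grad():
        for _ in range(max_len):
            logits, hidden = model(idx, hidden)
            next_id = torch.distributions.Categorical(logits=logits[0, -1]).sample().item()
            if next_id == n_chars - 1:
                break
            result += alphabet[next_id]
            idx = torch.tensor([[next_id]])
    return result

print(sample(CharRNN()))  # untrained weights, so the output is random characters
```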

The functionality is implemented in Python. The function is invoked using gRPC.

## Running this benchmark locally (using docker)

A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show it for the rnn-serving-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-rnn-serving-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using Knative)

A detailed, general description of how to run benchmarks on Knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show it for the rnn-serving-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Start the function with Knative
```bash
kubectl apply -f ./yamls/knative/kn-rnn-serving-python.yaml
```
3. **Note the URL provided in the output. We will refer to the part without the `http://` prefix as `$URL`. Replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with test-client.
```bash
./test-client --addr $URL:80 --name "Example text for rnn-serving"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.

benchmarks/video-analytics-standalone/README.md (66 additions, 0 deletions)

# Video Analytics Standalone Benchmark

The video analytics standalone benchmark preprocesses an input video and runs an object detection model (squeezenet) on it. An input video can be specified; if none is given, a default video is used. The benchmark also depends on a MongoDB database that stores the videos available for use.
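
As a rough illustration of the per-frame work, the Python sketch below decodes a video with OpenCV and runs each frame through a torchvision SqueezeNet. The file name, preprocessing constants, and `weights=None` (to avoid a download) are assumptions; the benchmark ships its own model and preprocessing.

```python
import cv2
import torch
from torchvision import models, transforms

model = models.squeezenet1_1(weights=None).eval()  # untrained weights, illustration only

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture("input.mp4")  # placeholder file name
predictions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)
    with torch.no_grad():
        predictions.append(model(batch).argmax(dim=1).item())
cap.release()
print(predictions)
```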

The `init-database.go` script runs when starting the function and populates the database with the videos from the `videos` folder.

The functionality is implemented in Python. The function is invoked using gRPC.

## Running this benchmark locally (using docker)

A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show it for the video-analytics-standalone-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-video-analytics-standalone-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using Knative)

A detailed, general description of how to run benchmarks on Knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show it for the video-analytics-standalone-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Initialise the database and start the function with Knative
```bash
kubectl apply -f ./yamls/knative/video-analytics-standalone-database.yaml
kubectl apply -f ./yamls/knative/kn-video-analytics-standalone-python.yaml
```
3. **Note the URL provided in the output. We will refer to the part without the `http://` prefix as `$URL`. Replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with test-client.
```bash
./test-client --addr $URL:80 --name "Example text for video analytics standalone"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.

benchmarks/video-processing/README.md (66 additions, 0 deletions)

# Video Processing Benchmark

The video processing benchmark converts an input video to grayscale. An input video can be specified; if none is given, a default video is used. The benchmark also depends on a MongoDB database that stores the videos available for use.
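
The conversion itself is a few lines of OpenCV. The sketch below is a minimal stand-alone version with placeholder file names; the actual function reads its input from the database instead.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")                # placeholder input
fps = cap.get(cv2.CAP_PROP_FPS) or 30
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("grayscale.mp4", fourcc, fps, (width, height), isColor=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

cap.release()
out.release()
```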

The `init-database.go` script runs when starting the function and populates the database with the videos from the `videos` folder.

The functionality is implemented in Python. The function is invoked using gRPC.

## Running this benchmark locally (using docker)

A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show it for the video-processing-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-video-processing-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using Knative)

A detailed, general description of how to run benchmarks on Knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show it for the video-processing-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Initialise the database and start the function with Knative
```bash
kubectl apply -f ./yamls/knative/video-processing-database.yaml
kubectl apply -f ./yamls/knative/kn-video-processing-python.yaml
```
3. **Note the URL provided in the output. We will refer to the part without the `http://` prefix as `$URL`. Replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with test-client.
```bash
./test-client --addr $URL:80 --name "Example text for Video-processing"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker
# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json
# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.
