A toy project to explore concurrency in Go by simulating heavy load on a web server. The server's main job is to upload JSON sent in HTTP POST requests to blob storage (emulated using the Azurite Docker image).
The test I ran hit the web server with 100,000 requests in 7.919 seconds, approximately 12,628 requests per second. The server handled the load and successfully uploaded all data to blob storage. We verify this by sending a GET request to the server after the upload; it logs the size of each blob and returns the total size of all blobs in the container. Each uploaded JSON object is 59 bytes, and the reported total is 5,900,000 bytes, which means all 100,000 requests were processed without any data loss.
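The verification step can be sketched as follows. This is a minimal, self-contained illustration that uses an in-memory map in place of the Azurite container (the real server sums sizes of actual blobs); the names `store`, `totalSizeHandler`, and `runDemo` are assumptions for this sketch, not the project's actual code.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"sync"
)

// store stands in for the Azurite container: blob name -> payload bytes.
var (
	mu    sync.Mutex
	store = map[string][]byte{}
)

// totalSizeHandler mimics the verification endpoint: it logs each blob's
// size and returns the total size of all blobs in the container.
func totalSizeHandler(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	defer mu.Unlock()
	total := 0
	for name, data := range store {
		fmt.Printf("size of one entry (%s): %d bytes\n", name, len(data))
		total += len(data)
	}
	fmt.Fprintf(w, "total size: %d bytes", total)
}

// runDemo seeds 100 fake 59-byte uploads and queries the endpoint.
func runDemo() string {
	payload := make([]byte, 59)
	mu.Lock()
	for i := 0; i < 100; i++ {
		store[fmt.Sprintf("blob-%d", i)] = payload
	}
	mu.Unlock()

	srv := httptest.NewServer(http.HandlerFunc(totalSizeHandler))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(runDemo()) // prints: total size: 5900 bytes
}
```

With 100 entries of 59 bytes each, the reported total is 5,900 bytes; the real benchmark scales the same check to 100,000 entries.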
logs_go_routines_demo-2024-10-09_10.02.08.mp4
The log line "size of one entry" indicates that the consumer goroutine is running and preparing the data for upload.
1 request = 59 bytes, therefore 100,000 requests = 5,900,000 bytes.
This Docker Compose setup defines a multi-service architecture with two primary components:
Azurite is a Docker container that emulates Azure Blob Storage. It allows the application to store data as if it's interacting with real Azure storage. The container is exposed to other services via an internal Docker network (logster-network) for communication.
Logster is the primary application: a Go-based server that handles incoming HTTP requests on port 8080, processes JSON payloads, and stores them in Azurite Blob Storage. The application is built from source using a Dockerfile and starts after Azurite (depends_on controls the startup order). Environment variables such as AZURE_BLOB_ENDPOINT let Logster reach Azurite over the internal Docker network using the service name azurite.
Both services are connected via a custom bridge network (logster-network), which lets Logster reach Azurite directly by its service name (azurite) instead of an IP address.
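A docker-compose.yml matching this description might look like the following. This is a hedged sketch, not the repository's actual file; the Azurite image tag, port mappings, and the exact endpoint value are assumptions.

```yaml
services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite   # emulates Azure Blob Storage
    networks:
      - logster-network

  logster:
    build: .                      # built from the Dockerfile in this repo
    ports:
      - "8080:8080"               # server listens on 8080
    environment:
      # reach Azurite via its service name on the internal network
      - AZURE_BLOB_ENDPOINT=http://azurite:10000
    depends_on:
      - azurite                   # start Azurite first
    networks:
      - logster-network

networks:
  logster-network:
    driver: bridge
```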
The application is fully dockerized, making it simple to set up and run.
- Clone the repository:

  git clone https://github.com/tren03/logster.git
  cd logster

- Build the Docker images:

  sudo docker-compose build

- Run the containers:

  sudo docker-compose up
Make sure you have ab (ApacheBench) installed:
sudo apt-get install apache2-utils
To simulate sending 100,000 requests (500 concurrent) to the server:
ab -p data.json -T 'application/json' -c 500 -n 100000 http://localhost:8080/log
To log the size of each blob in the Azurite container after the benchmark:
curl localhost:8080