The current load test setup runs Locust directly on the host machine, and additional workers need to be spawned manually.
It would be nice to move the cluster (main node and workers) into Docker, but on a first attempt we saw resource utilisation issues. For example, the Docker VM had access to multiple CPUs but only used one, which limited Locust's ability to send higher request rates.
Locust ships an example docker-compose file that configures a main node and 4 workers.
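For reference, a sketch along the lines of Locust's documented compose example. Assumptions: a `locustfile.py` sits in the directory next to the compose file, and the number of workers is chosen at `up` time rather than hard-coded as 4 services.

```yaml
# Sketch of a Locust master + workers compose file (based on Locust's docs example).
# Assumes ./locustfile.py exists alongside this file.
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"          # Locust web UI
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master

  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    # Workers connect back to the master service by its compose DNS name.
    command: -f /mnt/locust/locustfile.py --worker --master-host master
```

Scaling to 4 workers would then be `docker compose up --scale worker=4`, which may also help the CPU issue above since each worker is its own process and can land on a different core.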
Another option: we could deploy Locust into k8s so that its client traffic to other k8s services appears as TCP streams between pods. The OTel microservices demo does this with their own load-generator image; ours might be simpler than theirs.
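If we go the k8s route, the master side might look something like the hypothetical manifest below: a one-replica Deployment plus a Service so workers (and the web UI) can reach it by name. The names (`locust-master`), the locustfile mount, and the port choices are all assumptions, not a finished implementation; ports 8089 (web) and 5557 (worker comms) are Locust's defaults.

```yaml
# Hypothetical sketch: Locust master as an in-cluster Deployment + Service.
# Worker pods would run the same image with --worker --master-host locust-master.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: locust-master
  template:
    metadata:
      labels:
        app: locust-master
    spec:
      containers:
        - name: locust
          image: locustio/locust
          args: ["-f", "/mnt/locust/locustfile.py", "--master"]
          # Assumes the locustfile is provided e.g. via a ConfigMap volume.
---
apiVersion: v1
kind: Service
metadata:
  name: locust-master
spec:
  selector:
    app: locust-master
  ports:
    - name: web
      port: 8089     # Locust web UI
    - name: comms
      port: 5557     # master/worker communication
```

Worker count could then be scaled with a separate worker Deployment's `replicas`, and traffic to the services under test would naturally show up as pod-to-pod TCP streams.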