Network traffic shaping is not accurate #8

Open

gapan opened this issue May 11, 2021 · 0 comments
gapan commented May 11, 2021

While trying to enforce bandwidth/latency limits, I'm not sure how the settings are actually applied. It certainly seems that latency plays a big role with respect to bandwidth as well. Latency by itself is mostly OK; the problem is the effect it has on bandwidth. I'm running the following setups with 4 containers, deployed as two server/client pairs. These are only some examples that show the problem; I've run many more similar setups. I'm using iperf3 to measure the actual bandwidth between containers.

  • Setting bandwidth: 100Mbps (bidirectional) and delay: 3ms for all 4 containers. Ping results between containers are indeed ~3ms, but bandwidth is actually ~750 Mbits/sec.
  • Setting bandwidth: 100Mbps (bidirectional) and delay: 30ms for all 4 containers. Ping results between containers are indeed ~30ms, but bandwidth is actually ~710 Mbits/sec.
  • Setting bandwidth: 100Mbps (bidirectional) and delay: 60ms for all 4 containers. Ping results between containers are indeed ~60ms, but bandwidth is actually ~380 Mbits/sec.
  • Setting bandwidth: 1000Mbps (bidirectional) and delay: 3ms for all 4 containers. Ping results between containers are indeed ~3ms, but bandwidth is actually ~7.10 Gbits/sec.
  • Setting bandwidth: 1000Mbps (bidirectional) and delay: 30ms for all 4 containers. Ping results between containers are indeed ~30ms, but bandwidth is actually ~790 Mbits/sec.
  • Setting bandwidth: 10000Mbps (bidirectional) and delay: 3ms for all 4 containers. Ping results between containers are indeed ~3ms, but bandwidth is actually ~7.10 Gbits/sec.
  • Setting bandwidth: 10000Mbps (bidirectional) and delay: 30ms for all 4 containers. Ping results between containers are indeed ~30ms, but bandwidth is actually ~780 Mbits/sec.
  • Setting bandwidth: 10000Mbps (bidirectional) and delay: 60ms for all 4 containers. Ping results between containers are indeed ~60ms, but bandwidth is actually ~380 Mbits/sec.

If you want to try similar setups, you can look at the start-network-test.sh script in this repo: https://github.com/Datalab-AUTH/fogify-db-benchmarks
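For context, each measurement boils down to commands along these lines (a minimal sketch; the container names here are placeholders, and the exact invocations live in the script above):

```sh
# One measurement round between a server/client container pair.
# "server" and "client" are hypothetical names, not the ones in the script.
docker exec server iperf3 -s -D             # start an iperf3 server in the background
docker exec client ping -c 10 server        # check the effective round-trip latency
docker exec client iperf3 -c server -t 30   # measure achievable TCP bandwidth over 30s
```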

I should note that on my system, connecting two Docker containers with no traffic shaping whatsoever (and without fogify) results in actual bandwidth measurements of ~27 Gbits/sec, so there is no bottleneck on the host system.
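A plain setup along these lines reproduces that baseline (the image and network names are just examples; any image that ships iperf3 should behave the same):

```sh
# Baseline: two containers on a user-defined bridge network, no fogify, no tc rules.
docker network create testnet
docker run -d --name srv --network testnet networkstatic/iperf3 -s
docker run --rm --network testnet networkstatic/iperf3 -c srv
```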
