
Obtaining Less Throughput in Competition with TCP during iperf Tests #504

d0649528 opened this issue Feb 21, 2023


[Image: topology diagram with two bottleneck links and six hosts]

I have two bottleneck links and six hosts, and I am using an HP switch to emulate the bottleneck links shown in the diagram. I want to create an environment where an MPTCP flow competes with a separate plain TCP flow on each of the two bottleneck links. I have tested the coupled congestion control algorithms OLIA, LIA, BALIA, and wVegas. Each experiment runs for 100 seconds; a rough sketch of how the flows were started with iperf follows the table. The results are as follows:

| CC | Path | Transfer | Bandwidth |
| --- | --- | --- | --- |
| olia | mptcp | 64.8 MBytes | 5.42 Mbits/sec |
| | tcp1 | 79.3 MBytes | 6.64 Mbits/sec |
| | tcp2 | 83.0 MBytes | 6.96 Mbits/sec |
| lia | mptcp | 41.2 MBytes | 3.44 Mbits/sec |
| | tcp1 | 91.2 MBytes | 7.64 Mbits/sec |
| | tcp2 | 94.8 MBytes | 7.94 Mbits/sec |
| balia | mptcp | 57.0 MBytes | 4.76 Mbits/sec |
| | tcp1 | 87.4 MBytes | 7.31 Mbits/sec |
| | tcp2 | 82.8 MBytes | 6.94 Mbits/sec |
| wVegas | mptcp | 56.8 MBytes | 4.74 Mbits/sec |
| | tcp1 | 79.0 MBytes | 6.63 Mbits/sec |
| | tcp2 | 91.1 MBytes | 7.64 Mbits/sec |
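Roughly, the competing flows were started at the same time with iperf, along the lines of the sketch below (the server addresses are placeholders for illustration and iperf2 client syntax is assumed; in the real topology each client runs on its own host):

```python
# Rough sketch of how the competing flows were started (placeholder server
# addresses; iperf2 client syntax). Each server host runs "iperf -s"
# beforehand, and the chosen coupled CC (olia/lia/balia/wvegas) is selected
# on the MPTCP sender before the run. In the real setup each client runs on
# its own host; this only illustrates the invocation and timing.
import subprocess

DURATION = 100  # seconds, as in the experiment

clients = [
    ["iperf", "-c", "10.0.1.10", "-t", str(DURATION)],  # MPTCP flow (placeholder IP)
    ["iperf", "-c", "10.0.2.10", "-t", str(DURATION)],  # tcp1 on bottleneck 1 (placeholder IP)
    ["iperf", "-c", "10.0.3.10", "-t", str(DURATION)],  # tcp2 on bottleneck 2 (placeholder IP)
]

# Start all three flows concurrently and wait for them to finish.
procs = [subprocess.Popen(cmd) for cmd in clients]
for p in procs:
    p.wait()
```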

I have seen in some literature that the MPTCP connection's aggregate throughput should account for about 0.4 of the total throughput, so in my environment it should be around 8 Mbits/sec. I am not sure whether there is a problem somewhere or whether this is actually the expected result.
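For reference, here is a quick back-of-the-envelope check of that 0.4 figure against the numbers above, assuming the two bottlenecks are 10 Mbits/sec each (that aggregate capacity is my assumption based on the topology):

```python
# Back-of-the-envelope check: what share of the total throughput did the
# MPTCP connection get under each coupled CC, compared with the ~0.4 share
# mentioned in the literature? Assumes 2 x 10 Mbit/s bottlenecks.

ASSUMED_CAPACITY_MBPS = 2 * 10  # assumed aggregate capacity of the two bottlenecks

# Measured bandwidths in Mbit/s, copied from the table above.
results = {
    "olia":   {"mptcp": 5.42, "tcp1": 6.64, "tcp2": 6.96},
    "lia":    {"mptcp": 3.44, "tcp1": 7.64, "tcp2": 7.94},
    "balia":  {"mptcp": 4.76, "tcp1": 7.31, "tcp2": 6.94},
    "wVegas": {"mptcp": 4.74, "tcp1": 6.63, "tcp2": 7.64},
}

expected_mptcp = 0.4 * ASSUMED_CAPACITY_MBPS  # ~8 Mbit/s
for cc, flows in results.items():
    total = sum(flows.values())
    share = flows["mptcp"] / total
    print(f"{cc:7s} total={total:5.2f} Mbit/s  "
          f"mptcp={flows['mptcp']:.2f} Mbit/s ({share:.0%} of total, "
          f"vs ~{expected_mptcp:.0f} Mbit/s expected)")
```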

[Image]

If there is only one competing TCP flow, the results are approximately 12 Mbits/sec of aggregate MPTCP throughput and 7 Mbits/sec of TCP throughput, which seems correct.

Can someone please help me with an answer? Thank you very much!!

matttbe commented Feb 21, 2023

Hi (not sure what your name is),

> I have seen in some literature that the MPTCP connection's aggregate throughput should account for about 0.4 of the total throughput, so in my environment it should be around 8 Mbits/sec. I am not sure whether there is a problem somewhere or whether this is actually the expected result.

The idea behind these coupled congestion control algorithms is that the multiple subflows of the same MPTCP connection remain fair to a single TCP flow sharing the same bottleneck. They are mainly meant to cover cases like the one below, where TCP and MPTCP should get approximately the same bandwidth:

           10M
          _____
TCP   ---|-----|---
MPTCP ===|=====|===
          _____

To be honest, I don't know whether these CCs cover the case you shared. Even if this use case is supposed to be covered, these CCs have not been updated for a while (they are used more for scientific papers than in "real" deployments), so maybe on more recent kernels you might be hitting bugs.

It might also depend on the TCP CC used by the "plain" TCP flows: it could be more "aggressive" than Reno or Vegas.
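If it helps, here is a minimal sketch (not something from this repo) to check which congestion control the "plain" TCP senders are actually using; it only reads the standard Linux procfs entries, so run it on each TCP sender host:

```python
# Minimal sketch: show which congestion control new TCP connections will use
# on this host, plus the CC modules currently available. These are the
# standard Linux procfs paths; run on each plain-TCP sender.

def read_proc(path):
    with open(path) as f:
        return f.read().strip()

current = read_proc("/proc/sys/net/ipv4/tcp_congestion_control")
available = read_proc("/proc/sys/net/ipv4/tcp_available_congestion_control")

print("current TCP congestion control:", current)
print("available congestion controls :", available)
```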

@d0649528

@matttbe Thank you for your response. For plain TCP I have only tested CUBIC and Reno so far. Yesterday I found out that, because my two network cards are on different VLANs, only two subflows were established even though I used fullmesh with 2 IP addresses on each end in MPTCP. Will this significantly affect the throughput of MPTCP?
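As a rough way to double-check how many subflows actually get established during a run, something like the sketch below can be used on the MPTCP sender. It assumes the subflows show up as ordinary TCP sockets in /proc/net/tcp (which I believe is the case for the out-of-tree MPTCP kernel) and that the iperf server listens on its default port 5001; both are assumptions, not details from this setup:

```python
# Rough sketch: count established TCP connections from this host towards the
# iperf server port while a test is running. Assumptions: subflows appear as
# ordinary TCP sockets in /proc/net/tcp, and iperf listens on port 5001.

IPERF_PORT = 5001       # change if iperf runs on another port
TCP_ESTABLISHED = "01"  # connection state code used in /proc/net/tcp

def count_connections(proc_file):
    count = 0
    try:
        with open(proc_file) as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                remote_port = int(fields[2].split(":")[1], 16)
                if remote_port == IPERF_PORT and fields[3] == TCP_ESTABLISHED:
                    count += 1
    except FileNotFoundError:
        pass
    return count

total = count_connections("/proc/net/tcp") + count_connections("/proc/net/tcp6")
print(f"established connections to port {IPERF_PORT}: {total}")
```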

matttbe commented Feb 22, 2023

> Yesterday I found out that, because my two network cards are on different VLANs, only two subflows were established even though I used fullmesh with 2 IP addresses on each end in MPTCP. Will this significantly affect the throughput of MPTCP?

Sorry, I'm not sure I understand: in your diagrams I can see 2 subflows, which is what you expect to have, no?
