
Very slow in the LoadBalancer type service #11271

Closed
danielvincenzi opened this issue Nov 8, 2024 · 1 comment

Comments

@danielvincenzi

Hi everyone, I'd like your help identifying a slowness problem I'm having with a LoadBalancer-type service. When I use port-forward or a NodePort, everything works normally, but loading the apps through the LoadBalancer service is extremely slow. What I have already tried:

  • Changed the CNI from Flannel to Cilium
  • Ran the Cilium connectivity tests
  • Switched kube-proxy to Cilium's native kube-proxy replacement
  • Changed the service load balancer to MetalLB and then to Cilium

All of the tests above gave the same result: accessing the apps via LoadBalancer is slow, while access via NodePort works correctly (a quick way I'm quantifying the difference is sketched below).
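For illustration, this is roughly how I'm comparing the two paths with curl timings; the service name, node IP, NodePort, and LoadBalancer IP below are placeholders, not my exact values:

kubectl get svc my-app -o wide

# NodePort path (responds quickly)
curl -s -o /dev/null -w 'NodePort total: %{time_total}s\n' http://<NODE_IP>:<NODE_PORT>/

# LoadBalancer path (same backend pods, but very slow to respond)
curl -s -o /dev/null -w 'LoadBalancer total: %{time_total}s\n' http://<LB_IP>/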

I'm running three VMware virtual machines with etcd in HA and Cilium enabled (although I also tested Flannel). OS: Debian GNU/Linux 11 (bullseye), Kernel: 5.10.0-33-amd64, K3s: v1.30.6+k3s1, Cilium: 1.16.3.

Installation:

Master:

curl -sfL https://get.k3s.io | sh -s - server \
 --flannel-backend=none \
 --disable-network-policy \
 --disable-kube-proxy \
 --disable traefik \
 --disable servicelb \
 --cluster-init
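
Since kube-proxy is disabled, Cilium has to take over service handling completely. This is the kind of check I've been running on the agents to confirm that (in 1.16 the binary inside the pod may be called cilium-dbg rather than cilium):

kubectl -n kube-system exec ds/cilium -- cilium status --verbose | grep -A3 KubeProxyReplacement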

Workers:

curl -sfL https://get.k3s.io | K3S_TOKEN="$K3S_TOKEN" sh -s - server \
    --server https://<SERVER_IP>:6443 \
    --flannel-backend=none \
    --disable-network-policy \
    --disable-kube-proxy \
    --disable traefik \
    --disable servicelb
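
Cilium itself was installed with kube-proxy replacement enabled, roughly along these lines (this is a sketch of the values I understand are needed with kube-proxy disabled, not my literal command; <SERVER_IP> is a placeholder):

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.16.3 \
    --namespace kube-system \
    --set kubeProxyReplacement=true \
    --set k8sServiceHost=<SERVER_IP> \
    --set k8sServicePort=6443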

Do you have any ideas? I don't know what to do anymore.

Thank you very much!

@danielvincenzi
Author

➜ helm git:(main) ✗ cilium connectivity perf
ℹ️ Monitor aggregation detected, will skip some flow validation steps
🔥 [k3s] Deleting connectivity check deployments...
⌛ [k3s] Waiting for namespace cilium-test-1 to disappear
✨ [k3s] Creating namespace cilium-test-1 for connectivity check...
ℹ️ Nodes used for performance testing:
ℹ️ Node name: dataplatform1br26-pro, zone:
ℹ️ Node name: dataplatform2br27-pro, zone:
✨ [k3s] Deploying perf-client deployment...
✨ [k3s] Deploying perf-client-other-node deployment...
✨ [k3s] Deploying perf-server deployment...
✨ [k3s] Deploying perf-client-host-net deployment...
✨ [k3s] Deploying perf-client-other-node-host-net deployment...
✨ [k3s] Deploying perf-server-host-net deployment...
⌛ [k3s] Waiting for deployment cilium-test-1/perf-client to become ready...
⌛ [k3s] Waiting for deployment cilium-test-1/perf-client-other-node to become ready...
⌛ [k3s] Waiting for deployment cilium-test-1/perf-server to become ready...
⌛ [k3s] Waiting for deployment cilium-test-1/perf-client-host-net to become ready...
⌛ [k3s] Waiting for deployment cilium-test-1/perf-client-other-node-host-net to become ready...
⌛ [k3s] Waiting for deployment cilium-test-1/perf-server-host-net to become ready...
🔭 Enabling Hubble telescope...
ℹ️ Hubble is OK, flows: 12285/12285, connected nodes: 3, unavailable nodes 0
ℹ️ Cilium version: 1.16.2
🏃[cilium-test-1] Running 1 tests ...
[=] [cilium-test-1] Test [network-perf] [1/1]
........
🔥 Network Performance Test Summary [cilium-test-1]:

📋 Scenario | Node | Test | Duration | Min | Mean | Max | P50 | P90 | P99 | Transaction rate OP/s

📋 pod-to-pod | same-node | TCP_RR | 10s | 85µs | 160.69µs | 9.465ms | 130µs | 223µs | 521µs | 6152.03
📋 host-to-host | same-node | TCP_RR | 10s | 119µs | 246.22µs | 12.929ms | 213µs | 312µs | 844µs | 4016.88
📋 pod-to-pod | other-node | TCP_RR | 10s | 409µs | 650.67µs | 11.351ms | 578µs | 815µs | 1.926ms | 1528.26
📋 host-to-host | other-node | TCP_RR | 10s | 323µs | 552.91µs | 8.322ms | 484µs | 691µs | 1.783ms | 1797.84


📋 Scenario | Node | Test | Duration | Throughput Mb/s

📋 pod-to-pod | same-node | TCP_STREAM | 10s | 3116.94
📋 host-to-host | same-node | TCP_STREAM | 10s | 5422.39
📋 pod-to-pod | other-node | TCP_STREAM | 10s | 11.51
📋 host-to-host | other-node | TCP_STREAM | 10s | 3231.13

✅ [cilium-test-1] All 1 tests (8 actions) successful, 0 tests skipped, 0 scenarios skipped.
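
The number that stands out to me is the pod-to-pod other-node TCP_STREAM throughput (11.51 Mb/s, versus 3231.13 Mb/s host-to-host between the same nodes). I'm wondering whether it could be an MTU/encapsulation issue on the VMware network, so I also checked interface MTUs on the nodes; just a quick sketch, interface names will differ per host:

# Host interfaces (e.g. ens192 on VMware) and their MTUs
ip -o link show | awk '{print $2, $4, $5}'

# Cilium devices; cilium_vxlan only exists when running in tunnel/vxlan mode
ip link show cilium_host
ip link show cilium_vxlan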

@k3s-io k3s-io locked and limited conversation to collaborators Nov 8, 2024
@brandond brandond converted this issue into discussion #11272 Nov 8, 2024

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
