router-perf-v2 fails due to oc cp failing #553

Open
jtaleric opened this issue Apr 3, 2023 · 2 comments
@jtaleric (Member) commented Apr 3, 2023

Describe the bug
When copying the mb configuration file to the client pod, we have seen the oc cp command fail, and there is no retry logic to recover from the failure.

[2023-04-03, 18:27:08 UTC] {subprocess.py:92} INFO - Mon Apr  3 18:27:08 UTC 2023 Sleeping for 60s before next test
[2023-04-03, 18:28:08 UTC] {subprocess.py:92} INFO - Mon Apr  3 18:28:08 UTC 2023 Generating config for termination reencrypt with 1 clients 50 keep alive requests and path /1024.html
[2023-04-03, 18:28:09 UTC] {subprocess.py:92} INFO - Mon Apr  3 18:28:09 UTC 2023 Copying mb config http-scale-reencrypt.json to pod http-scale-client-656b4ccb67-nspv8
[2023-04-03, 18:28:09 UTC] {subprocess.py:92} INFO - Error from server: error dialing backend: write tcp 10.0.194.34:52724->10.0.98.224:10250: use of closed network connection
[2023-04-03, 18:28:09 UTC] {subprocess.py:92} INFO - bdd8fe86-router-20230403
[2023-04-03, 18:28:09 UTC] {subprocess.py:96} INFO - Command exited with return code 1
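Since the failure is transient, one possible mitigation is to wrap the copy in a retry loop. Below is a minimal bash sketch; the retry_oc_cp helper name, variable names, and retry/backoff values are illustrative assumptions, not code from the router-perf-v2 scripts.

```bash
#!/usr/bin/env bash
# Sketch only: retry oc cp on transient failures.
# The function name and the attempt/delay defaults are assumptions,
# not taken from the router-perf-v2 workload scripts.

retry_oc_cp() {
  local src=$1 dst=$2 attempts=${3:-5} delay=${4:-5}
  local i
  for ((i = 1; i <= attempts; i++)); do
    if oc cp "${src}" "${dst}"; then
      return 0
    fi
    echo "oc cp failed (attempt ${i}/${attempts}); retrying in ${delay}s" >&2
    sleep "${delay}"
  done
  echo "oc cp still failing after ${attempts} attempts" >&2
  return 1
}

# Example usage with hypothetical namespace/path values:
# retry_oc_cp "http-scale-reencrypt.json" "${NAMESPACE}/http-scale-client-656b4ccb67-nspv8:/tmp/http-scale-reencrypt.json"
```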

To Reproduce
n/a

Expected behavior
The oc cp command should succeed, or be retried when it hits a transient error.


jtaleric added the bug (Something isn't working) label on Apr 3, 2023
jtaleric changed the title from "router-perf-v2" to "router-perf-v2 fails due to bad oc cp" on Apr 3, 2023
jtaleric changed the title from "router-perf-v2 fails due to bad oc cp" to "router-perf-v2 fails due to oc cp failing" on Apr 3, 2023
@rsevilla87 (Member) commented Apr 4, 2023

Looks like a connectivity issue between kube-apiserver and kubelet. This is the first time I've seen something like this, but it seems very unlikely that the benchmark caused these connectivity issues.

@jtaleric (Member, Author) commented Apr 6, 2023

Looks like a connectivity issue between kube-apiserver and kubelet. This is the first time I've seen something like this, but it seems very unlikely that the benchmark caused these connectivity issues.

Correct, this is less about the benchmark causing it and more about the fragility of the automation.
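If the goal is to harden the automation rather than to diagnose the cluster, the copy step could also verify that the file actually landed before the test proceeds. A hedged sketch with hypothetical function and variable names follows; note that oc exec travels the same apiserver-to-kubelet path as oc cp, so it can hit the same transient errors and still wants to sit inside a retry loop.

```bash
# Sketch only: copy the config and confirm it exists in the pod.
# Namespace, pod name, and paths below are illustrative.
copy_and_verify() {
  local local_file=$1 namespace=$2 pod=$3 remote_path=$4
  # Push the file, then confirm it is really there before the benchmark continues.
  oc cp "${local_file}" "${namespace}/${pod}:${remote_path}" || return 1
  oc exec -n "${namespace}" "${pod}" -- test -f "${remote_path}"
}

# Example with hypothetical names:
# until copy_and_verify http-scale-reencrypt.json http-scale-client http-scale-client-656b4ccb67-nspv8 /tmp/http-scale-reencrypt.json; do
#   sleep 5
# done
```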
