Kubefwd is not properly reconnecting when new pod is created #243
@imdbere kubefwd checks the pod list every 5 minutes and will fix this automatically. I don't think it's necessary for kubefwd to monitor pod changes and react in real time, since that would consume more resources. kubefwd is recommended for local development environments; if stable access to pods inside the cluster is required, I recommend exposing the service through a private load balancer.
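For reference, reacting to pod changes in real time does not have to mean aggressive polling: client-go's shared informers are event-driven and fairly cheap. Below is a minimal sketch of watching a single namespace for pod add/delete events; this is not kubefwd's actual code, and the kubeconfig path and the "default" namespace are assumptions.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (kubefwd itself takes -x/--kubeconfig).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Shared informer scoped to one namespace. The resync period can stay long
	// because Add/Update/Delete events arrive as they happen, not on the resync tick.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("default"))
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Println("pod added:", pod.Name) // a forwarder could re-establish the forward here
		},
		DeleteFunc: func(obj interface{}) {
			fmt.Println("pod deleted") // a stale forward could be torn down here
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever and keep receiving events
}
```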
A 5-minute delay in reconnecting effectively means the user has to stop kubefwd and restart it; 5 minutes is a long time to wait when trying to do development work locally.
@ndj888 this is a tool for local development, isn't it? When a service gets redeployed in the cluster you're connected to, you want your local app to pick that up. A 5-minute delay makes the reconnect virtually non-functional; for me, 5 seconds would be more appropriate, as I want to focus on the app I'm working on. Idea: why not just make the interval a CLI option and let users decide what the best resource/delay trade-off is?
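On the CLI-option idea: kubefwd appears to be a cobra-based CLI, so a configurable interval would be a small change. A hedged sketch of what a hypothetical --sync-interval flag might look like follows; the flag name and its default are made up here, not an existing kubefwd option.

```go
package main

import (
	"fmt"
	"time"

	"github.com/spf13/cobra"
)

// Sketch only: --sync-interval is a hypothetical flag, not part of kubefwd today.
var syncInterval time.Duration

func main() {
	cmd := &cobra.Command{
		Use:   "kubefwd svc",
		Short: "forward services (sketch)",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("re-checking the pod list every", syncInterval)
			// ...the interval would be passed down to the resync/repair loop...
		},
	}
	cmd.Flags().DurationVar(&syncInterval, "sync-interval", 5*time.Minute,
		"how often to re-check pods and repair broken forwards")
	if err := cmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```

With something like that, a user could run e.g. `kubefwd svc -n default --sync-interval 15s` (again, hypothetical) and pick their own resource/latency trade-off.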
Btw, I'm experiencing a very similar problem. The moment a pod gets reshuffled or a new deployment happens, the port forwarding dies and does not recover, even after 5 minutes.
I've been having the same problem since I first started using kubefwd years ago; I have to restart it a lot. I even contributed a fix that was merged (e.g. #153), but the issue resurfaced again later.
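Until this is fixed upstream, the restarts can at least be automated with a small watchdog that relaunches kubefwd whenever the "failed to find sandbox" error (quoted in the issue text below) shows up in its output. This is a rough sketch only; the kubefwd arguments are examples, and the wrapper has to run with sudo just like kubefwd itself.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Example arguments only; substitute your own kubefwd invocation.
	args := []string{"svc", "-n", "default"}

	for {
		cmd := exec.Command("kubefwd", args...)

		// Merge stdout and stderr into one pipe so all log lines can be inspected.
		pr, pw := io.Pipe()
		cmd.Stdout = pw
		cmd.Stderr = pw

		if err := cmd.Start(); err != nil {
			fmt.Fprintln(os.Stderr, "failed to start kubefwd:", err)
			os.Exit(1)
		}

		// Close the write end when kubefwd exits so the scanner below sees EOF.
		done := make(chan struct{})
		go func() {
			cmd.Wait()
			pw.Close()
			close(done)
		}()

		scanner := bufio.NewScanner(pr)
		for scanner.Scan() {
			line := scanner.Text()
			fmt.Println(line) // pass kubefwd's output through
			if strings.Contains(line, "failed to find sandbox") {
				fmt.Println("stale forward detected, restarting kubefwd ...")
				cmd.Process.Kill()
			}
		}

		<-done // kubefwd is gone (killed above or exited on its own); back off briefly and relaunch
		time.Sleep(2 * time.Second)
	}
}
```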
Hi, first of all thanks for making this project available :)
One thing that makes kubefwd harder to use is that it doesn't properly reconnect when, for example, a deployment is updated and therefore an old pod is terminated and a new one is created. The resulting error looks like this:
ERRO[17:36:11] Runtime: an error occurred forwarding 80 -> 80: error forwarding port 80 to pod 1b4e23d5467af287d3685ede5f693f04b5a909d475cb31d470381f6c3c956ffe, uid : failed to find sandbox "1b4e23d5467af287d3685ede5f693f04b5a909d475cb31d470381f6c3c956ffe" in store: not found
After that, only restarting the command makes the port-forwarding work again.
Is this intended behaviour? It would be ideal if kubefwd shifted traffic to the new pod the way Kubernetes Services do, or at least attempted some reconnecting.
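The "at least attempted some reconnecting" part can be expressed with the same client-go machinery kubectl uses: when the forward drops, re-resolve a running pod and dial a new port-forward to it. The sketch below makes several assumptions that are not from this issue (the "default" namespace, the app=some-name label selector, and the 8080:80 port pair are placeholders) and is not how kubefwd is actually structured.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/portforward"
	"k8s.io/client-go/transport/spdy"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	for {
		// Re-resolve a running pod every time the forward dies, instead of
		// staying attached to a pod name that no longer exists.
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=some-name"})
		if err != nil || len(pods.Items) == 0 {
			time.Sleep(2 * time.Second)
			continue
		}
		var target *corev1.Pod
		for i := range pods.Items {
			if pods.Items[i].Status.Phase == corev1.PodRunning {
				target = &pods.Items[i]
				break
			}
		}
		if target == nil {
			time.Sleep(2 * time.Second)
			continue
		}

		// Build the same port-forward dialer kubectl uses.
		req := client.CoreV1().RESTClient().Post().
			Resource("pods").Namespace(target.Namespace).
			Name(target.Name).SubResource("portforward")
		transport, upgrader, err := spdy.RoundTripperFor(config)
		if err != nil {
			panic(err)
		}
		dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport}, "POST", req.URL())

		stopCh := make(chan struct{})
		readyCh := make(chan struct{})
		fw, err := portforward.New(dialer, []string{"8080:80"}, stopCh, readyCh, os.Stdout, os.Stderr)
		if err != nil {
			panic(err)
		}

		fmt.Println("forwarding to pod", target.Name)
		// ForwardPorts blocks until the connection breaks (e.g. the pod is
		// terminated); the outer loop then attaches to whichever pod replaced it.
		if err := fw.ForwardPorts(); err != nil {
			fmt.Println("forward broke:", err)
		}
		time.Sleep(time.Second)
	}
}
```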
This is the command used:
sudo kubefwd svc -n default -d some-name -x some-kubeconfig
Thanks!