Some ipv6 fixes #1249
Conversation
Decides to use ip6tables and ipset based on whether the NodeIP is an IPv6 address, analogous to how the NetworkRoutingController does it. No support for dual stack yet.
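For context, a minimal sketch of the kind of family check this describes (the `isIPv6` helper is illustrative, not kube-router's actual field or function names). In Go, `To4()` returns nil for addresses that can't be represented in 4 bytes, which is the idiomatic way to tell the two families apart:

```go
package main

import (
	"fmt"
	"net"
)

// isIPv6 reports whether ip is an IPv6 address. To4 returns nil for
// anything that is not representable as IPv4.
func isIPv6(ip net.IP) bool {
	return ip != nil && ip.To4() == nil
}

func main() {
	for _, s := range []string{"10.0.0.1", "2001:db8::1"} {
		if isIPv6(net.ParseIP(s)) {
			fmt.Println(s, "-> use ip6tables / ipset family inet6")
		} else {
			fmt.Println(s, "-> use iptables / ipset family inet")
		}
	}
}
```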
```diff
-	gobgp.GrpcListenAddress(nrc.nodeIP.String() + ":50051" + "," + "127.0.0.1:50051"))
+	var addr string
+	if nrc.isIpv6 {
+		addr = "[" + nrc.nodeIP.String() + "]:50051" + "," + "[::1]:50051"
```
Random question, by the way: why does this need to listen on `nodeIP` in the first place, and not just on loopback? `nodeIP` is definitely publicly routable in IPv6, and I'm curious why we're having to spawn a publicly accessible gRPC server. Do the different kube-router daemonsets connect to each other over gRPC or something?
That's a good question. TBH I'm not 100% sure, since this code predates my time on the project. Off the top of my head, I don't know whether gobgp exposes metrics on that port, but if it did, that might be a reason for it to be bound. kube-router uses BGP and the kube-apiserver for communication between instances; it does not use gobgp's gRPC port remotely.
I'll try to dig around and see if I can find a better explanation.
Sorry I haven't had a chance to pay this much attention; the last bit has been extremely busy. I have to say I'm a bit torn on this one, because while it does mimic what the routes controller already does with IPv6, I'd much rather begin moving the pieces towards a full dual-stack solution rather than toggling between the two.
I have a branch related to #1245 that I'm still drafting that begins adding support for both IPv4 and IPv6 blocks on NetworkPolicy, and I'd like to do more work like that to move us slowly towards supporting dual stack.
That being said, maybe this isn't a bad half-step if it solves a specific need in the meantime. But there would be some gotchas for sure that people might not understand. For instance, if their node IP is an IPv6 address, any IPv4-based network policy (which I would imagine is the most common kind) would bomb and stop the NPC from syncing policy on the node.
Have you tested this in a cluster?
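To make that gotcha concrete, a hypothetical sketch of the failure mode under a single-family toggle (`checkPolicyCIDR` is illustrative, not kube-router's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// checkPolicyCIDR mimics the failure mode described above: on a node
// whose NodeIP is IPv6, an IPv4 ipBlock in a NetworkPolicy cannot be
// programmed into an inet6 ipset, so the policy sync would error out.
func checkPolicyCIDR(nodeIsIPv6 bool, cidr string) error {
	ip, _, err := net.ParseCIDR(cidr)
	if err != nil {
		return err
	}
	cidrIsIPv6 := ip.To4() == nil
	if nodeIsIPv6 != cidrIsIPv6 {
		return fmt.Errorf("ipBlock %s does not match node address family", cidr)
	}
	return nil
}

func main() {
	// IPv6-only node, IPv4 policy block: this is the case that "bombs".
	fmt.Println(checkPolicyCIDR(true, "10.0.0.0/8"))
	// Matching families are fine.
	fmt.Println(checkPolicyCIDR(true, "2001:db8::/64"))
}
```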
Haven't tested it on my cluster yet, but am in the process of doing so. I understand the concern that simply breaking as soon as an IPv4 address is added is not ideal. It would work for my use case, as I have an IPv6-only internal network and am planning on doing address translation at the edge (https://www.jool.mx/en/intro-xlat.html#siit-dc). I'm wondering if it would make sense to translate the IPv4 addresses in NetworkPolicy rules to their NAT-mapped counterparts. In my opinion, dual stack is not the way to go for IPv6 deployments, and I'm sometimes a bit surprised how much Kubernetes focuses on making it work; one is far better off making it a concern at the gateway of the network. But I understand it's something that needs to be supported, since a compliant network plugin is supposed to handle it according to Kubernetes.
Let me get some tests done (and maybe implement the address mapping from IPv4 to IPv6) and see if it works well, and then we can decide whether this is a patch I should just carry myself or something worth adopting as a stopgap solution. The address mapping support would still be neat even if dual stack works in the future, as it's the way to go for IPv6-only deployments.
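The kind of IPv4-to-IPv6 mapping described here is standardized in RFC 6052; a minimal sketch of embedding an IPv4 address into a /96 translation prefix. Using the well-known NAT64 prefix 64:ff9b::/96 is an assumption here; an SIIT-DC deployment would typically use a site-specific prefix:

```go
package main

import (
	"fmt"
	"net"
)

// mapToSIIT embeds an IPv4 address in the last 4 bytes of an IPv6 /96
// prefix, per RFC 6052.
func mapToSIIT(prefix net.IP, v4 net.IP) net.IP {
	out := make(net.IP, net.IPv6len)
	copy(out, prefix.To16())
	copy(out[12:], v4.To4())
	return out
}

func main() {
	prefix := net.ParseIP("64:ff9b::")
	fmt.Println(mapToSIIT(prefix, net.ParseIP("192.0.2.10"))) // 64:ff9b::c000:20a
}
```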
That seems good. Comment back when you think you have it how you want it.
Replaced by #1386