
Some ipv6 fixes #1249

Closed
wants to merge 2 commits into from

Conversation

@arianvp commented Feb 13, 2022

Decides to use ip6tables and ipset based on whether the NodeIP is an IPv6 address, analogous to how NetworkRoutingController does it.

No support for dual stack yet
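
(A minimal sketch of the family check the description refers to, assuming only Go's standard library; NetworkRoutingController's actual check may differ. net.IP.To4 returns nil for addresses that have no IPv4 representation, which is a common way to branch between iptables/ipset and ip6tables/ipset family inet6.)

package main

import (
    "fmt"
    "net"
)

// isIPv6 reports whether ip has no IPv4 representation.
func isIPv6(ip net.IP) bool {
    return ip.To4() == nil
}

func main() {
    for _, s := range []string{"10.0.0.1", "2001:db8::1"} {
        // Pick the matching tooling for the detected family.
        if isIPv6(net.ParseIP(s)) {
            fmt.Println(s, "-> ip6tables, ipset family inet6")
        } else {
            fmt.Println(s, "-> iptables, ipset family inet")
        }
    }
}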

Arian van Putten added 2 commits February 13, 2022 13:37
@arianvp changed the title from "netpol: Add IPv6 support" to "Some ipv6 fixes" on Feb 13, 2022
@arianvp mentioned this pull request on Feb 13, 2022
-    gobgp.GrpcListenAddress(nrc.nodeIP.String() + ":50051" + "," + "127.0.0.1:50051"))
+    var addr string
+    if nrc.isIpv6 {
+        addr = "[" + nrc.nodeIP.String() + "]:50051" + "," + "[::1]:50051"
@arianvp (Author) commented on this diff:

Random question, by the way. Why does this need to listen on nodeIP in the first place, and not just on loopback? nodeIP is definitely publicly routable in IPv6, and I'm curious why we're having to spawn a publicly accessible gRPC server. Do the different kube-router daemonsets connect to each other over gRPC or something?

@aauren (Collaborator) replied:

That's a good question. TBH I'm not 100% sure, since this code was around before my time on the project. Off the top of my head, I'm not sure whether gobgp exposes metrics on that port, but if it did, that might be a reason for it to be bound. kube-router uses BGP and the kube-apiserver for communication between instances; it does not use gobgp's gRPC port remotely.

I'll try to dig around and see if I can find a better explanation.
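
(An aside on the diff above, hedged: the bracketing the patch builds by hand is also what Go's net.JoinHostPort produces, so a variant of this change could lean on the standard library instead of string concatenation. A minimal sketch:)

package main

import (
    "fmt"
    "net"
)

func main() {
    // JoinHostPort brackets the host automatically when it contains a colon.
    fmt.Println(net.JoinHostPort("2001:db8::1", "50051")) // [2001:db8::1]:50051
    fmt.Println(net.JoinHostPort("127.0.0.1", "50051"))   // 127.0.0.1:50051
}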

@arianvp marked this pull request as ready for review on February 17, 2022 13:58
@aauren (Collaborator) left a comment:

Sorry I haven't had a chance to pay this much attention; the last bit has been extremely busy. I have to say I'm a bit torn on this one: while it does mimic what the routes controller already does with regard to IPv6, I'd much rather begin moving the pieces towards a full dual-stack solution than toggle between the two.

I have a branch related to #1245, still in draft, that begins adding support for both IPv4 and IPv6 blocks on NetworkPolicy, and I'd like to do more work like that which moves us slowly towards supporting dual stack.

That being said, maybe this isn't a bad half-step if it solves a specific need in the meantime. But there would be some gotchas that people might not understand. For instance, if their node IP were an IPv6 address, any IPv4-based network policy (which I would imagine is the most common kind) would bomb and stop the NPC from syncing policy on the node.

Have you tested this in a cluster?
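
(A minimal sketch of a guard against the gotcha described above: skipping CIDRs whose family does not match the node's instead of failing the whole sync. skipMismatchedCIDRs is a hypothetical helper for illustration, not kube-router's actual NPC code.)

package main

import (
    "fmt"
    "net"
)

// skipMismatchedCIDRs drops policy CIDRs whose address family does not
// match the node's family, rather than letting one entry break the sync.
func skipMismatchedCIDRs(cidrs []string, nodeIsIPv6 bool) []string {
    var kept []string
    for _, c := range cidrs {
        _, ipNet, err := net.ParseCIDR(c)
        if err != nil {
            continue // malformed entry; real code would log this
        }
        // To4() == nil means the CIDR is IPv6; keep it only on a matching node.
        if (ipNet.IP.To4() == nil) == nodeIsIPv6 {
            kept = append(kept, c)
        }
    }
    return kept
}

func main() {
    cidrs := []string{"10.0.0.0/8", "2001:db8::/32"}
    fmt.Println(skipMismatchedCIDRs(cidrs, true)) // [2001:db8::/32]
}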


@arianvp (Author) commented Feb 19, 2022

Haven't tested it on my cluster yet, but I'm in the process of doing so.

I understand the concern that simply breaking as soon as an IPv4 address is added is not ideal.

It would work for my use case, as I have an IPv6-only internal network and am planning on doing address translation at the edge (https://www.jool.mx/en/intro-xlat.html#siit-dc). I'm wondering if it would make sense to translate the IPv4 addresses in NetworkPolicy rules to their NAT-mapped counterparts (e.g. 1.2.3.4 => 64:ff9b:1:1::1.2.3.4/96) when in IPv6 mode. That would at least make IPv4 rules work on an IPv6-only cluster if some form of 4-to-6 translation is in use.

In my opinion, dual-stack is not the way to go for IPv6 deployments, and I'm sometimes a bit surprised by how much Kubernetes focuses on making it work; one is far better off making it a concern at the gateway of the network. But I understand it's something that needs to be supported, as it is something a compliant network plugin is expected to handle according to Kubernetes.

Let me get some tests done (and maybe implement the address mapping from IPv4 to IPv6) and see if it works well, and then we can decide whether this is a patch I should just carry myself or something worth adopting as a stopgap solution. The address-mapping support would still be neat even if dual-stack works in the future, as it's the way to go for IPv6-only deployments.
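
(A minimal sketch of the 4-to-6 mapping suggested above, assuming the well-known RFC 6052 NAT64 prefix 64:ff9b::/96 rather than the custom prefix in the example; mapV4ToNAT64 is a hypothetical helper, not part of kube-router or Jool.)

package main

import (
    "fmt"
    "net/netip"
)

// mapV4ToNAT64 embeds an IPv4 address in the last 32 bits of a /96
// NAT64 prefix, per the RFC 6052 address format. Hypothetical helper.
func mapV4ToNAT64(prefix netip.Addr, v4 netip.Addr) netip.Addr {
    p := prefix.As16()
    b := v4.As4()
    copy(p[12:], b[:]) // low 32 bits carry the IPv4 address
    return netip.AddrFrom16(p)
}

func main() {
    prefix := netip.MustParseAddr("64:ff9b::") // well-known NAT64 /96 prefix
    v4 := netip.MustParseAddr("1.2.3.4")
    fmt.Println(mapV4ToNAT64(prefix, v4)) // 64:ff9b::102:304
}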

@aauren (Collaborator) commented Feb 19, 2022

That seems good. Comment back when you think you have it the way you want it.

@aauren commented Oct 31, 2022

Replaced by #1386

@aauren closed this on Oct 31, 2022