Keepalived Operator with Ingress breaks ip whitelist annotation #77
I did not understand the issue. Can you try to explain it in a different way?
Yes, OKD Routes support an IP whitelist annotation. In this annotation you can specify which CIDRs can access the route. This creates a simple ACL in the HAProxy configuration that denies anyone whose IP is not in the listed CIDRs. The mechanism relies on HAProxy seeing the source IP of the end user in order to decide whether that IP is allowed. When using a Keepalived VIP to load-balance external traffic to the Ingress (HAProxy), the source IP always appears as a cluster-internal IP, because the VIP is an external IP on a LoadBalancer service. I'm not familiar enough with the internal workings of the Kubernetes service load balancer, but it seems we see the service load balancer IP instead of the end user's IP. This renders IP whitelisting unusable.
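For reference, the annotation in question looks roughly like this on an OKD/OpenShift Route (a minimal sketch; the hostname, service name, and CIDR values are placeholders):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  annotations:
    # Space-separated list of CIDRs/IPs allowed to reach this route.
    # HAProxy checks this against the source IP it sees on the connection,
    # which is why SNAT'd traffic (e.g. via a Service VIP) breaks the check.
    haproxy.router.openshift.io/ip_whitelist: "203.0.113.0/24 198.51.100.17"
spec:
  host: my-app.apps.example.com
  to:
    kind: Service
    name: my-app
```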
Understood. There is no way to fix this issue at the moment.
I suggest you look at the MetalLB operator; I believe that with that operator you will see the real IP of the client.
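For background on why the client IP disappears: with the default `externalTrafficPolicy: Cluster`, kube-proxy SNATs traffic arriving at a Service's external IP before forwarding it to a pod, possibly on another node. Setting the policy to `Local` (the mechanism load-balancer implementations such as MetalLB commonly rely on to preserve the source IP) delivers traffic only to nodes running a local endpoint and skips the SNAT. A hedged sketch of such a Service, with illustrative names, labels, and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: router-external    # illustrative name
spec:
  type: LoadBalancer
  # "Local" avoids the SNAT hop: traffic is only handed to endpoint pods on
  # the receiving node, so the backend (HAProxy) sees the real client IP.
  externalTrafficPolicy: Local
  selector:
    app: router             # illustrative selector
  ports:
    - name: https
      port: 443
      targetPort: 443
```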
--
ciao/bye
Raffaele
Hi,
I tried setting up the keepalived operator together with the default IngressController. Unfortunately, I noticed that since the VIPs are set as an externalIP on a Service, the source IP received by the default IngressController (HAProxy in OKD 4.7) is only the load balancer IP instead of the real external source, which breaks the IP whitelist annotation.
For now I had to drop the keepalived operator completely, and I use a NodePort configuration to bypass the issue.
Is there something I missed in my configuration? Otherwise, I think we could add a disclaimer in how-to-ingress.md to state this point.
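For completeness, the NodePort workaround mentioned above amounts to exposing the ingress on a fixed port of every node and pointing external routing directly at the nodes; combined with `externalTrafficPolicy: Local`, the client IP is not SNAT'd away. A minimal sketch, with illustrative names, labels, and port numbers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nodeport    # illustrative name
spec:
  type: NodePort
  # With NodePort + "Local", traffic arriving at a node is handed straight
  # to a local endpoint without SNAT, so HAProxy still sees the client IP.
  externalTrafficPolicy: Local
  selector:
    app: router             # illustrative selector
  ports:
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443       # illustrative node port
```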