I find it is not easy to understand the intended configuration for an internal EPG in a separate tenant to communicate with a K8s service IP (LoadBalancer IP) defined in an L3Out.
For example: K8s is running an app deployment of 3 containers, configured with a LoadBalancer service. The acc-provision tool has set up ACI and the acc-provision output file has been applied via kubectl apply. The L3Out/EPGs etc. are all in the common tenant. When the deployment is created for the app, the correct service graphs/contracts are all set up, and as expected the service (container app) can be successfully accessed from outside the fabric via the service host address. All good.
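For reference, the service in question is a plain LoadBalancer Service, along these lines (a minimal sketch: the target port is an assumption, the name, namespace, selector and service port match the kubectl output shown below):

# Minimal sketch of the LoadBalancer service (targetPort assumed)
apiVersion: v1
kind: Service
metadata:
  name: mcast
  namespace: multicast
spec:
  type: LoadBalancer          # ACI CNI allocates the external IP from extern_dynamic
  selector:
    app: mcast-app
  ports:
    - protocol: TCP
      port: 5621              # service port as shown in the kubectl output below
      targetPort: 5621        # assumed; the container port is not shown in the output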
What is not clear is the intent in relation to how an internal EPG (e.g. tn-ATEN/ap-AAP/epg-AEPG) that is not in the same tenant or VRF as the deployed L3Out/VRF (common, as above) should communicate with the application via the service IP. Most of the objects created by the ACI containers on the APIC are 'managed', and should the configuration of any of these objects change, the ACI containers revert it to the original state.
For example, if I change the subnet settings for the host route in the L3Out external EPG to advertise the service host IP to another VRF (enable Shared Security Import Subnet), the change is immediately reversed by the ACI containers, which prevents the host IP route from being leaked. In the same way, if I change the scope of the contract for the K8s app service (the contract created by the ACI containers when the K8s app was deployed) from VRF to Global so that it can be consumed in another tenant, the modification is also reversed immediately.
kubectl get services -o wide -A
NAMESPACE   NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
...
multicast   mcast   LoadBalancer   172.22.73.15   172.25.0.2    5621:32362/TCP   43h   app=mcast-app
The IP 172.25.0.2 is assigned from the acc-provision extern_dynamic subnet, which is configured on the L3Out by the ACI containers when the mcast app is deployed with a LoadBalancer service as discussed above.
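For context, the relevant part of the acc-provision input file looks roughly like this (a sketch: the other subnets are placeholders, and even the /24 mask on extern_dynamic is an assumption since only the allocated address 172.25.0.2 is visible):

net_config:
  node_subnet: 10.1.0.1/16          # placeholder
  pod_subnet: 10.2.0.1/16           # placeholder
  extern_dynamic: 172.25.0.1/24     # external/LoadBalancer IPs (e.g. 172.25.0.2) are allocated from here; mask assumed
  node_svc_subnet: 10.5.0.1/24      # placeholder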
I did look at the configuration option kube_config\snat_operator\contract_scope: global, which does not seem to apply here; I tried it anyway and reapplied the configuration, but it does not change the scope of the app contract.
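For clarity, this is where that option sits in the acc-provision input file as I applied it (indentation per the standard layout); as far as I can tell it only affects the SNAT contract, not the per-service contract created for the LoadBalancer service:

kube_config:
  snat_operator:
    contract_scope: global   # tried this; the app/service contract scope stayed at VRF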
There is little documentation around the Cisco (noironetworks) ACI Containers, hence the question.
Does anybody understand what the intended way is to provide this EPG-to-K8s-service-IP communication? I would have expected that simply allowing the contract to be Global rather than the default VRF scope would be the obvious approach, but as described above I cannot do this, because the change is reversed soon after submission.
I would expect that the same contract/service graph could be used by EPGs other than the L3Out external EPG, keeping the configuration consistent across all accesses (internal or external to the fabric).
ACI 4.2(6h)
acc-provision 5.1.3.1
K8s 1.2
Thanks.