Service graph contract created with scope vrf #485

Open
simonbirtles opened this issue Mar 30, 2021 · 0 comments

I find it is not easy to understand the intended configuration for allowing an internal EPG in a separate tenant to communicate with a K8s service IP (LoadBalancer IP) defined in an L3Out.

For example: Kubernetes is running an app deployment of 3 containers exposed via a LoadBalancer service. The acc-provision tool has set up ACI, and the acc-provision output file has been applied with kubectl apply. The L3Out, EPGs, etc. are all in the common tenant. When the deployment is created, the service graphs and contracts are all set up correctly and, as expected, the service (container app) can be successfully accessed from outside the fabric via the service host address. All good.
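For concreteness, a minimal sketch of the kind of Service involved. The name, namespace, port and selector below match the mcast service shown in the kubectl output further down; the targetPort is an assumption, since it is not visible in that output.

# Sketch only: the Service whose creation triggers the ACI containers to build
# the service graph and contract in the common tenant.
apiVersion: v1
kind: Service
metadata:
  name: mcast
  namespace: multicast
spec:
  type: LoadBalancer          # EXTERNAL-IP is allocated from the extern_dynamic pool
  selector:
    app: mcast-app
  ports:
    - port: 5621
      targetPort: 5621        # assumption: not shown in the kubectl output below
      protocol: TCP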

What is not clear is how an internal EPG (e.g. tn-ATEN/ap-AAP/epg-AEPG) that is not in the same tenant or VRF as the deployed L3Out/VRF (common, as above) is intended to communicate with the application via the service IP. Most of the objects created by the ACI containers on the APIC are 'managed': if the configuration of these objects is changed, the ACI containers revert it to the original state. For example, if I change the subnet settings for the host route in the L3Out EPG to advertise the service host IP to another VRF (enable Shared Security Import Subnet), the change is immediately reversed by the ACI containers, which prevents route leaking of the host IP. In the same way, if I change the contract scope for the K8s app service (the contract created by the ACI containers when the K8s app was deployed) from VRF to Global so that the contract can be consumed in another tenant, the modification is reversed immediately.

kubectl get services -o wide -A
NAMESPACE   NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
...
multicast   mcast   LoadBalancer   172.22.73.15   172.25.0.2    5621:32362/TCP   43h   app=mcast-app

The IP 172.25.0.2 is assigned from the acc-provision extern_dynamic subnet, which the ACI containers configure on the L3Out when the mcast app is deployed with a LoadBalancer service as discussed above.
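For reference, a minimal sketch of where extern_dynamic sits in the acc-provision input file. The subnet value is an assumption inferred from the 172.25.0.2 external IP above, and the other net_config fields are omitted.

net_config:
  extern_dynamic: 172.25.0.0/24   # assumed pool; the mcast EXTERNAL-IP 172.25.0.2 is drawn from it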

I did look at the configuration option kube_config\snat_operator\contract_scope: global, which does not seem to apply here. I tried this option and reapplied the configuration anyway, but it does not change the app contract scope.
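For clarity, this is the option as it would appear in the acc-provision input file. Per the observation above, setting it and reapplying does not change the scope of the per-service (app) contract:

kube_config:
  snat_operator:
    contract_scope: global   # tried and reapplied; did not change the app contract scope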

There is little documentation around the Cisco (noironetworks) ACI containers, hence the question.

Does anybody understand how this EPG-to-K8s-service-IP communication is intended to be provided? I would have expected that simply making the contract scope Global instead of the default VRF would be the obvious approach, but as noted above I can't do this because the change is reversed soon after submission.

I would expect the same contract/service graph to be usable by EPGs other than the L3Out external EPG, keeping the configuration consistent across all access (internal or external to the fabric).

ACI 4.2(6h)
acc-provision 5.1.3.1
K8s 1.2

Thanks.
