Add tetra policyfilter listpolicies command #3122
base: main
Conversation
cmd/tetra/debug/dump.go
Outdated
	ids := make([]string, 0, len(cgIDs))
	for id := range cgIDs {
		ids = append(ids, strconv.FormatUint(uint64(id), 10))
	}
	fmt.Printf("%d: %s\n", polId, strings.Join(ids, ","))
}

fmt.Println("--- Reverse Map ---")
Even if we do not rename the map, I find that "Direct" and "Reverse" make things harder to understand. Let's describe what the key and the value are here to make the output easier to understand.
I have renamed those to more descriptive headers.
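For illustration, a minimal sketch of what such descriptive headers could look like in the dump code; the header strings, the `dumpPolicies` helper, and the map layout below are assumptions modeled on the snippet above, not the exact code from the PR:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// dumpPolicies prints the policy-to-cgroup map with headers that name
// the key and the value explicitly, instead of "Direct"/"Reverse".
// The map layout here is an assumption based on the snippet above.
func dumpPolicies(policies map[uint32]map[uint64]struct{}) {
	fmt.Println("--- Policy IDs to cgroup IDs ---")
	for polId, cgIDs := range policies {
		ids := make([]string, 0, len(cgIDs))
		for id := range cgIDs {
			ids = append(ids, strconv.FormatUint(id, 10))
		}
		fmt.Printf("%d: %s\n", polId, strings.Join(ids, ","))
	}
}

func main() {
	dumpPolicies(map[uint32]map[uint64]struct{}{
		1: {42: {}, 43: {}},
	})
}
```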
A few comments :)
Looks good to me! I'll let Kornilios review.
Thanks!
Please find some comments below.
bpf/process/policy_filter.h
Outdated
@@ -9,6 +9,7 @@

#define POLICY_FILTER_MAX_POLICIES 128
#define POLICY_FILTER_MAX_NAMESPACES 1024
#define POLICY_FILTER_MAX_CGROUP_IDS 32768 /* same as polMapSize in policyfilter/state.go */
This seems high to me. According to https://kubernetes.io/docs/setup/best-practices/cluster-large/, a good limit for the number of pods per node is ~100. How about something like 512 or 1024 entries?
Yes, this makes sense. I just tried to keep it consistent with what we had in policyfilter/state.go. Changed it to 1024.
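For context, a rough sketch of how such a limit typically flows into the user-space map creation with cilium/ebpf; the map name, key/value layout, and constant name below are illustrative assumptions, not the PR's exact code:

```go
package main

import (
	"log"

	"github.com/cilium/ebpf"
)

// Mirrors POLICY_FILTER_MAX_CGROUP_IDS in the BPF header; 1024 is the
// value settled on in the review above. The two must stay in sync.
const polMaxCgroupIDs = 1024

func newCgroupMap() (*ebpf.Map, error) {
	// Illustrative layout: an 8-byte cgroup ID key and a 4-byte policy
	// ID value; the real map in the PR may use a different layout.
	return ebpf.NewMap(&ebpf.MapSpec{
		Name:       "cgroup_policies", // hypothetical name
		Type:       ebpf.Hash,
		KeySize:    8,
		ValueSize:  4,
		MaxEntries: polMaxCgroupIDs,
	})
}

func main() {
	m, err := newCgroupMap()
	if err != nil {
		log.Fatal(err)
	}
	defer m.Close()
}
```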
pkg/policyfilter/map.go
Outdated
	return PfMap{}, fmt.Errorf("opening map %s failed: %w", MapName, err)
}
if ret.cgroupMap, err = openMap(spec, CgroupMapName, polMaxPolicies); err != nil {
	return PfMap{}, fmt.Errorf("opening cgroup map %s failed: %w", MapName, err)
Do we need to release policyMap here?
Correct, fixed that.
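A minimal sketch of that fix, reusing the names from the snippet above; closing the map exactly like this, and using CgroupMapName in the error message, are assumptions about the final code:

```go
if ret.cgroupMap, err = openMap(spec, CgroupMapName, polMaxPolicies); err != nil {
	// Release the already-opened policy map so a partial failure does
	// not leak its file descriptor.
	ret.policyMap.Close()
	return PfMap{}, fmt.Errorf("opening cgroup map %s failed: %w", CgroupMapName, err)
}
```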
This patch introduces an eBPF map that maps cgroupIds to policyIds. This is handled from user space in a similar way to policy_filter_maps. This can be used in later PRs to quickly identify policies that match a specific container or to optimize tracing policies.

Signed-off-by: Anastasios Papagiannis <[email protected]>
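As a hedged illustration of what "handled from user space" can look like with cilium/ebpf; the helper names and the plain cgroup-ID to policy-ID layout are assumptions for this sketch, not the PR's actual API:

```go
package policyfilter

import "github.com/cilium/ebpf"

// addCgroupPolicy records from user space that a cgroup is covered by
// a policy. A simple cgroup-ID to policy-ID hash map is assumed here;
// the PR's real map layout may differ.
func addCgroupPolicy(m *ebpf.Map, cgroupID uint64, policyID uint32) error {
	return m.Put(cgroupID, policyID)
}

// delCgroupPolicy removes the entry when the container goes away.
func delCgroupPolicy(m *ebpf.Map, cgroupID uint64) error {
	return m.Delete(cgroupID)
}
```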
Signed-off-by: Anastasios Papagiannis <[email protected]>
It is useful to have a debug command to identify which Kubernetes Identity Aware policies should be applied on a specific container. An example can be found here:

Create a pod with "app: ubuntu" and "usage: dev" labels.

$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
    usage: dev
spec:
  containers:
  - name: ubuntu
    image: ubuntu:24.10
    command: ["/bin/sleep", "3650d"]
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

And apply several policies where some of them match while others don't.

$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "lseek-podfilter-app"
spec:
  podSelector:
    matchLabels:
      app: "ubuntu"
  kprobes: [...]
EOF

$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "lseek-podfilter-usage"
spec:
  podSelector:
    matchLabels:
      usage: "dev"
  kprobes: [...]
EOF

$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "lseek-podfilter-prod"
spec:
  podSelector:
    matchLabels:
      prod: "true"
  kprobes: [...]
EOF

$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "lseek-podfilter-info"
spec:
  podSelector:
    matchLabels:
      info: "broken"
  kprobes: [...]
EOF

$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "lseek-podfilter-global"
spec:
  kprobes: [...]
EOF

Based on the labels we expect policies lseek-podfilter-app and lseek-podfilter-usage to match on that pod. lseek-podfilter-global is not a Kubernetes Identity Aware policy, so it is applied in all cases and we do not report it.

The first step is to find the container ID that we care about.

$ kubectl describe pod/ubuntu | grep containerd
    Container ID:  containerd://ff433e9e16467787a60ac853d9b313150091968731f620776d6d7c514b1e8d6c

And then use it to report all Kubernetes Identity Aware policies that match.

$ kubectl exec -it ds/tetragon -n kube-system -c tetragon -- tetra policyfilter -r "unix:///procRoot/1/root/run/containerd/containerd.sock" listpolicies ff433e9e16467787a60ac853d9b313150091968731f620776d6d7c514b1e8d6c
ID   NAME                    STATE     FILTERID   NAMESPACE   SENSORS          KERNELMEMORY
5    lseek-podfilter-usage   enabled   5          (global)    generic_kprobe   1.72 MB
1    lseek-podfilter-app     enabled   1          (global)    generic_kprobe   1.72 MB

We also provide a --debug flag for more details, e.g.:

$ kubectl exec -it ds/tetragon -n kube-system -c tetragon -- tetra policyfilter -r "unix:///procRoot/1/root/run/containerd/containerd.sock" listpolicies ff433e9e16467787a60ac853d9b313150091968731f620776d6d7c514b1e8d6c --debug
time="2024-12-13T09:47:38Z" level=info msg=cgroup path=/run/tetragon/cgroup2/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod189a8053_9f36_4250_bcae_9ed167172920.slice/cri-containerd-ff433e9e16467787a60ac853d9b313150091968731f620776d6d7c514b1e8d6c.scope
time="2024-12-13T09:47:38Z" level=info msg=cgroup id=5695
time="2024-12-13T09:47:39Z" level=debug msg="resolved server address using info file" InitInfoFile=/var/run/tetragon/tetragon-info.json ServerAddress="localhost:54321"
ID   NAME                    STATE     FILTERID   NAMESPACE   SENSORS          KERNELMEMORY
1    lseek-podfilter-app     enabled   1          (global)    generic_kprobe   1.72 MB
5    lseek-podfilter-usage   enabled   5          (global)    generic_kprobe   1.72 MB

This uses the cgroup-based policy filter map introduced in a previous commit, which maps cgroupIds to policyIds.

Signed-off-by: Anastasios Papagiannis <[email protected]>
This is done by adding a command-line argument (and the appropriate ConfigMap option).

Signed-off-by: Anastasios Papagiannis <[email protected]>
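A sketch of how such an option is commonly wired up with pflag and viper, which Tetragon uses for configuration; the flag name and package below are hypothetical, not the ones this commit adds:

```go
package option

import (
	"github.com/spf13/pflag"
	"github.com/spf13/viper"
)

// KeyEnableCgroupMap is a hypothetical flag name, for illustration only.
const KeyEnableCgroupMap = "enable-policyfilter-cgroup-map"

// addFlags registers the flag and binds it to viper so it can also be
// set through the ConfigMap-backed configuration files.
func addFlags(flags *pflag.FlagSet) error {
	flags.Bool(KeyEnableCgroupMap, false,
		"Enable the cgroup-ID to policy-ID policyfilter map")
	return viper.BindPFlags(flags)
}
```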
Add tetra policyfilter listpolicies to determine which Kubernetes Identity Aware policies should be applied on a specific container. An example and details on how this works can be found in the individual commits.