diff --git a/CHANGELOG.md b/CHANGELOG.md index d77abc6..82d83fa 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,16 @@ All notable changes to this project will be documented in this file. The format --- ###
+
+# [0.9.0] - 2024-09-24
+- Added several new arguments to zfw to allow for direct system call integration with
+ ziti-edge-tunnel: ```-A, --add-user-rules```, ```-H, --init-tc```, ```-Z, --init-xdp```, ```-B, --bind-saddr-add```,
+ ```-J, --bind-saddr-delete```, ```-F -j, --bind-flush```
+ - Added a new interface setting, ```-q, --pass-non-tuple``` (off by default), to pass all non-tuple (non tcp/udp) traffic to the OS for
+ applications requiring only redirection (not recommended for standalone fw use)
+ - Updated README.md
+
+
 # [0.8.19] - 2024-09-08
- Add masquerade/reverse_masquerade map garbage collection to ```zfw.c -L -G, --list-gc-sessions``` which is now added to /etc/cron.d/zfw_refresh as well so it will run once every 60 seconds unless modified.
diff --git a/README.md b/README.md index 6b1e92f..9be3927 100644
--- a/README.md
+++ b/README.md
@@ -6,227 +6,106 @@ for an [OpenZiti](https://docs.openziti.io/) ziti-edge-tunnel installation and i filtering. It can also be used in conjunction with OpenZiti edge-routers or as a standalone fw. It now has built in EBPF based masquerade capability for both IPv4/IPv6.
-## New features in 0.8.x
-
-
-### Native EBPF based IPv4 and IPv6 Masquerade support
-
-zfw can now provide native IPv4/IPv6 masquerade operation for outbound pass through connections which can be enabled on a WAN facing interface:
-
-```sudo zfw -k, --masquerade ```
-
-This function requires that both ingress and egress TC filters are enabled on outbound interface. For IPv4 this is now using Dynamic PAT and IPv6 is using
-static PAT. Note: When running on later kernels i.e. 6+ some older network hardware may not work with ebpf Dynamic PAT. We have also seen some incompatibility with 2.5Gb interfaces on 5.x+ kernels.
+
+## Build
-In release v0.8.19 masquerade session gc was added to /etc/cron.d/zfw_refresh via ```/opt/openziti/bin/zfw -L -G > /dev/null``` and runs once per minute. Stale udp sessions will be
-removed if over 30s and stale tcp sessions will be removed if over 3600 seconds(1hr).
+[To build / install zfw from source. Click here!](./BUILD.md)
-### Explicit Deny Rules
-This feature adds the ability to enter explicit deny rules by appending ```-d, --disable``` to the ```-I, --insert rule``` to either ingress or egress rules. Rule precedence is based on longest match prefix. If the prefix is the same then the precedence follows the order entry of the rules, which when listed will go from top to bottom for ports with in the same prefix e.g. 
+## Standalone FW Deployment -If you wanted to allow all tcp 443 traffic outbound except to 10.1.0.0/16 you would enter the following egress rules: + Install + binary deb or rpm package (refer to workflow for detail pkg contents) +Debian 12/ Ubuntu 22.04+ ``` -sudo zfw -I -c 10.1.0.0 -m 16 -l 443 -h 443 -t 0 -p tcp -z egress -d -sudo zfw -I -c 0.0.0.0 -m 0 -l 443 -h 443 -t 0 -p tcp -z egress -``` -Listing the above with ```sudo zfw -L -z egress``` you would see: -``` -EGRESS FILTERS: -type service id proto origin destination mapping: interface list ------- ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- -accept 0000000000000000000000 tcp 0.0.0.0/0 0.0.0.0/0 dpts=443:443 PASSTHRU to 0.0.0.0/0 [] -deny 0000000000000000000000 tcp 0.0.0.0/0 10.1.0.0/16 dpts=443:443 PASSTHRU to 10.1.0.0/16 [] -Rule Count: 2 / 250000 -prefix_tuple_count: 2 / 100000 +sudo dpkg -i zfw-router__.deb ``` -The following illustrates the precedence with rules matching the same prefix: - -Assume you want to block port 22 to address 172.16.240.137 and enter rules the following rules: -``` -sudo zfw -I -c 172.16.240.139 -m 32 -l 1 -h 65535 -t 0 -p tcp -z egress -sudo zfw -I -c 172.16.240.139 -m 32 -l 22 -h 22 -t 0 -p tcp -z egress -d +RedHat 9.4 ``` +sudo yum install zfw-router-..rpm ``` -sudo zfw -L -z egress -EGRESS FILTERS: -type service id proto origin destination mapping: interface list ------- ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- -accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=1:65535 PASSTHRU to 172.16.240.139/32 [] -deny 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=22:22 PASSTHRU to 172.16.240.139/32 [] -Rule Count: 2 / 250000 -``` -The rule listing shows the accept port range 1-65535 listed first and then the deny port 22 after. This would result in port 22 being allowed outbound because traffic would match the accept rule and never reach the deny rule. +Note if running firewalld you will need to at a minimum set each interface you enable tc ebpf on to the trusted zone or equivalent. e.g. ```firewall-cmd --permanent --zone=trusted --add-interface=ens33``` or firewalld will drop traffic before it reaches the zfw filters. -The correct rule order entry would be: +Install from source ubuntu 22.04+ / Debian 12+ / Redhat 9.4 +[build / install zfw from source](./BUILD.md) + +After installing: ``` -sudo zfw -I -c 172.16.240.139 -m 32 -l 22 -h 22 -t 0 -p tcp -z egress -d -sudo zfw -I -c 172.16.240.139 -m 32 -l 1 -h 65535 -t 0 -p tcp -z egress +sudo su - +cd /opt/openziti/etc +cp ebpf_config.json.sample ebpf_config.json ``` +Follow the README.md section ```Two Interface config with ens33 facing internet and ens37 facing local lan``` on how to edit ebpf_config.json based on your interface configuration. +if you only have one interface then set it as InternalInterfaces and leave ExternalInterfaces as an empty list []. With a single interface if you want to block outbound traffic you will need to add egress rules as described in the section ```Outbound Filtering```. 
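For the single-interface case just described, a minimal sketch of that config (assuming the interface is named eth0; substitute your own interface name) would be:
```
{"InternalInterfaces":[{"Name":"eth0"}],
 "ExternalInterfaces":[]}
```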
+ +example ebpf_config.json: ``` -sudo zfw -L -z egress -EGRESS FILTERS: -type service id proto origin destination mapping: interface list ------- ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- -deny 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=22:22 PASSTHRU to 172.16.240.139/32 [] -accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=1:65535 PASSTHRU to 172.16.240.139/32 [] -Rule Count: 2 / 250000 -prefix_tuple_count: 1 / 100000 +{"InternalInterfaces":[{"Name":"eth0"}], + "ExternalInterfaces":[{"Name":"eth1"}]} ``` -This will result in traffic to port 22 matching the first rule and correctly being dropped as intended. +To start the firewall just run -### Outbound filtering -- This new feature is currently meant to be used in stand alone FW mode (No OpenZiti). It can be run with OpenZiti - on intercepted inbound connections but locally hosted services will require manually entered egress rules. - See note in section ```User space manual configuration``` which briefly describes installing - zfw without OpenZiti. + sudo /opt/openziti/bin/start_ebpf_router.py - The feature allows for both IPv4 and IPv6 ingress/egress filters on a single external interface. i.e. - This mode maintains state for outbound traffic associated with traffic allowed by ingress filters so - there is no need to statically configure high port ranges for return traffic. The assumption is - If you enable inbound ports you want to allow the stateful reply packets for udp and tcp. - -An egress filter must be attached to the interface , ```-b, --outbound-filter ``` needs to be set ,and at least one interface needs to have had an ingress filter applied. - -From cli: +you will see output like ``` -sudo zfw -X ens33 -O /opt/openziti/bin/zfw_tc_ingress.o -z ingress -sudo zfw -X ens33 -O /opt/openziti/bin/zfw_tc_outbound_track.o -z egress -sudo /opt/openziti/bin/zfw --outbound-filter ens33 +Unable to retrieve LanIf! +ziti-router not installed, skipping ebpf router configuration! +Attempting to add ebpf ingress to: eth0 +Attempting to add ebpf egress to: eth1 +Ebpf not running no maps to clear +tc parent add : eth0 +Set tc filter enable to 1 for ingress on eth0 +Attached /opt/openziti/bin/zfw_tc_ingress.o to eth0 +Skipping adding existing rule +Skipping adding existing rule (v6) +tc parent add : eth1 +Set tc filter enable to 1 for ingress on eth1 +Attached /opt/openziti/bin/zfw_tc_ingress.o to eth1 +Rules updated +Rules updated (v6) +Set per_interface rule aware to 1 for eth1 +Error: Exclusivity flag on, cannot modify. +tc parent already exists : eth1 +Set tc filter enable to 1 for egress on eth1 +Attached /opt/openziti/bin/zfw_tc_outbound_track.o to eth1 ``` -The above should result in all outbound traffic except for arp and icmp to be dropped on ens33 (icmp echo-reply -will also be dropped unless ```sudo zfw -e ens33 is set```). ssh return traffic will also be allowed outbound -unless ```ssh -x ens33 is set```. - -If per interface rules is not false then the egress rules would -need explicit -N for each rule in the same manner as ingress rules. - -i.e. 
set ```/opt/openziti/etc/ebpf_config.json``` as below changing interface name only - - ```{"InternalInterfaces":[], "ExternalInterfaces":[{"Name":"ens33", "PerInterfaceRules": false}]}``` - - or equivalent InternalInterfaces config: - -```{"InternalInterfaces":[{"Name":"ens33"}],"ExternalInterfaces":[]}``` - -Then in executable script file ```/opt/openziti/bin/user/user_rules.sh``` +the important lines from above to verify its worked: ``` -#!/bin/bash +Set tc filter enable to 1 for ingress on eth0 +Attached /opt/openziti/bin/zfw_tc_ingress.o to eth0 -# enable outbound filtering (Can be set before or after egress rule entry) -# If set before DNS rules some systems command response might be slow till -# a DNS egress rule is entered - -sudo /opt/openziti/bin/zfw --outbound-filter ens33 - -#example outbound rules set by adding -z, --direction egress -#ipv4 -sudo /opt/openziti/bin/zfw -I -c 0.0.0.0 -m 0 -l 53 -h 53 -t 0 -p udp --direction egress -sudo /opt/openziti/bin/zfw -I -c 172.16.240.139 -m 32 -l 5201 -h 5201 -t 0 -p tcp -z egress -sudo /opt/openziti/bin/zfw -I -c 172.16.240.139 -m 32 -l 5201 -h 5201 -t 0 -p udp --direction egress - -#ipv6 -sudo /opt/openziti/bin/zfw -6 ens33 #enables ipv6 -sudo /opt/openziti/bin/zfw -I -c 2001:db8::2 -m 32 -l 5201 -h 5201 -t 0 -p tcp -z egress -sudo /opt/openziti/bin/zfw -I -c 2001:db8::2 -m 32 -l 5201 -h 5201 -t 0 -p udp --direction egress +Set tc filter enable to 1 for ingress on eth1 +Attached /opt/openziti/bin/zfw_tc_ingress.o to eth1 -#inbound rules -sudo /opt/openziti/bin/zfw -I -c 172.16.240.0 -m 24 -l 22 -h 22 -t 0 -p tcp``` +Set tc filter enable to 1 for egress on eth1 +Attached /opt/openziti/bin/zfw_tc_outbound_track.o to eth1 ``` -- To view all IPv4 egress rules: ```sudo zfw -L -z egress``` +In order to ensure the FW starts on boot you need to enable the fw-init.service. ``` -EGRESS FILTERS: -type service id proto origin destination mapping: interface list ------- ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- -accept 0000000000000000000000 udp 0.0.0.0/0 172.16.240.139/32 dpts=5201:5201 PASSTHRU to 172.16.240.139/32 [] -accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=5201:5201 PASSTHRU to 172.16.240.139/32 [] - +sudo systemctl enable fw-init.service ``` -- To view all IPv6 egress rules: ```sudo zfw -L -6 all -z egress``` -``` -EGRESS FILTERS: -type service id proto origin destination mapping: interface list ------- ---------------------- ----- ------------------------------------------ ------------------------------------------ ------------------------- -------------- -accept 0000000000000000000000|tcp |::/0 |2001:db8::2/32 | dpts=5201:5201 PASSTHRU | [] -accept 0000000000000000000000|udp |::/0 |2001:db8::2/32 | dpts=5201:5201 PASSTHRU | [] -``` -- to view egress rules for a single IPv4 CIDR ```sudo zfw -L -c 172.16.240.139 -m 32 -z egress``` +Since you will not be using ziti to populate rules all your rules would be with respect to the local OS then any rules will need to be set + +to drop to the host system as mentioned in the README this is done by setting the ```tproxy-port to 0``` in your rules. i.e. 
+ ``` -EGRESS FILTERS: -type service id proto origin destination mapping: interface list ------- ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- -accept 0000000000000000000000 udp 0.0.0.0/0 172.16.240.139/32 dpts=5201:5201 PASSTHRU to 172.16.240.139/32 [] -Rule Count: 1 -type service id proto origin destination mapping: interface list ------- ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- -accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=5201:5201 PASSTHRU to 172.16.240.139/32 [] -accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=22:22 PASSTHRU to 172.16.240.139/32 [] -Rule Count: 2 +sudo /usr/sbin/zfw -I -c 192.168.1.108 -m 32 -l 8000 -h 8000 -t 0 -p tcp ``` - -- to view tcp egress rules for a single IPv6 CIDR ```$sudo zfw -L -c 2001:db8:: -m 64 -z egress -p tcp``` +Note: +The ExternalInterface is set to what is called ```per interface rules``` which means it only follows +rules where its name is set in the rule, whereas the InternalInterface follows all rules by default. i.e. to allow the above rule in on the External interface you need +```-N, --interface ``` in the rule i.e. ``` -EGRESS FILTERS: -type service id proto origin destination mapping: interface list ------- ---------------------- ----- ------------------------------------------ ------------------------------------------ ------------------------- -------------- -accept|0000000000000000000000|tcp |::/0 |2001:db8::/64 | dpts=5201:5201 PASSTHRU | [] -Rule Count: 1 +sudo /usr/sbin/zfw -I -c 192.168.1.108 -m 32 -l 8000 -h 8000 -t 0 -p tcp -N eth1 ``` - -### Initial support for ipv6 -- *Enabled via ```sudo zfw -6 ``` - Note: Router discovery / DHCPv6 are always enabled even if ipv6 is disabled in order to ensure the ifindex_ip6_map gets populated. -- Supports ipv6 neighbor discovery (redirects not supported) -- *Supports inbound ipv6 echo (disabled by default can be enabled via zfw -e)/ echo reply -- *Supports inbound ssh (Can be disabled via ```sudo zfw -x ```) (Care should be taken as this affects IPv4 as well) -- Supports outbound stateful host connections (Inbound only if outbound initiated) -- Supports outbound passthrough tracking. Sessions initiated from non-ebpf enabled and ebpf enabled internal interfaces out - through interface(s) defined as ExternalInterface (requires -N with -I unless "PerInterfaceRules": false) or InternalInterface in /opt/openziti/etc/ebpf_config.json - or manually applied with sudo ```zfw -X -O /opt/openziti/zfw_outbound_track.o -z egress``` - will allow stateful udp and tcp session traffic back in. -- Support for inbound IPv6 filter destination rules. Currently only destination filtering is allowed. - e.g. - ``` - sudo zfw -I -c 2001:db9:: -m 64 -l 443 -h 443 -t 0 -p tcp - ``` -- All IPv6 ingress Rules can be listed with the following command: - ``` - sudo zfw -L -6 all - ``` -- individual IPv6 ingress rules can be listed with - ``` - sudo zfw -L -c -m - ``` -- IPv6 rules can be individually deleted or flushed - e.g. -``` -sudo zfw -F -sudo zfw -D -c 2001:db9:: -m 64 -l 443 -h 443 -p tcp -``` -- Monitor connection state via ```sudo zfw -M, --monitor ``` optionally ```sudo zfw -v verbose ``` - alternatively you can use the dedicated monitor binary ```sudo zfw_monitor -i ``` -*These setting need to be in /opt/openziti/bin/user_rules.sh to be persistent across reboots. 
- -Note: Some of the above IPv6 features are not fully supported with OpenZiti yet. Features like -tproxy and ziti0 forwarding will not work completely till updates are released in OpenZiti. -OpenZiti routers do support IPv6 fabric connections using DNS names in the config with corresponding -AAAA records defined. ziti-edge-tunnel supports ipv6 interception but the IPC events channel does -not include the intercept IPv6 addresses, so currently IPv6 services would require manual zfw rule -entry. Similarly to IPv4, IPv6 rules can be used to forward packets to the host OS by setting -```-t, --tproxy-port 0``` in the insert command. - - - -## Build - -[To build / install zfw from source. Click here!](./BUILD.md) - ## Ziti-Edge-Tunnel Deployment The program is designed to be deployed as systemd services if deployed via .deb package with @@ -255,7 +134,7 @@ Install from source ubuntu 22.04+ / Debian 12+ / Redhat 9.4 ## Ziti-Router Deployment The program is designed to integrated into an existing Openziti ziti-router installation if ziti router has been deployed via ziti_auto_enroll - [instructions](https://docs.openziti.io/docs/guides/Local_Gateway/EdgeRouter). Running with ziti router is optional and zfw router can be run as a standalone FW as mentioned in the ```User space manual configuration``` section below. + [instructions](https://docs.openziti.io/docs/guides/Local_Gateway/EdgeRouter). - Install binary deb package (refer to workflow for detail pkg contents) @@ -286,7 +165,7 @@ sudo vi /opt/openziti/etc/ebpf_config.json - Adding interfaces Replace ens33 in line with:{"InternalInterfaces":[{"Name":"ens33"}], "ExternalInterfaces":[]} Replace with interface that you want to enable for ingress firewalling / openziti interception and - optionally ExternalInterfaces if you want per interface rules -N with -I. + optionally ExternalInterfaces if you want per interface rules -N with -I. ``` i.e. ens33 {"InternalInterfaces":[{"Name":"ens33"}], "ExternalInterfaces":[]} @@ -352,22 +231,46 @@ Verify running on the configured interface i.e. 
``` sudo tc filter show dev ens33 ingress ``` -If running on interface: +If running ingress filters on interface: ``` filter protocol all pref 1 bpf chain 0 -filter protocol all pref 1 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action] direct-action not_in_hw id 26 tag e8986d00fc5c5f5a +filter protocol all pref 1 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action] direct-action not_in_hw id 18287 tag 7924b3b7066e6c20 jited filter protocol all pref 2 bpf chain 0 -filter protocol all pref 2 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/1] direct-action not_in_hw id 31 tag ae5f218d80f4f200 +filter protocol all pref 2 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/1] direct-action not_in_hw id 18293 tag aa2d601900a4bb11 jited filter protocol all pref 3 bpf chain 0 -filter protocol all pref 3 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/2] direct-action not_in_hw id 36 tag 751abd4726b3131a +filter protocol all pref 3 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/2] direct-action not_in_hw id 18299 tag b2a4d46c249aec22 jited filter protocol all pref 4 bpf chain 0 -filter protocol all pref 4 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/3] direct-action not_in_hw id 41 tag 63aad9fa64a9e4d2 +filter protocol all pref 4 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/3] direct-action not_in_hw id 18305 tag ed0a156d6e90d4ab jited filter protocol all pref 5 bpf chain 0 -filter protocol all pref 5 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/4] direct-action not_in_hw id 46 tag 6c63760ceaa339b7 +filter protocol all pref 5 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/4] direct-action not_in_hw id 18311 tag 7b65254c0f4ce589 jited filter protocol all pref 6 bpf chain 0 -filter protocol all pref 6 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/5] direct-action not_in_hw id 51 tag b7573c4cb901a5da +filter protocol all pref 6 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/5] direct-action not_in_hw id 18317 tag f4d6609cc4eb2da3 jited +filter protocol all pref 7 bpf chain 0 +filter protocol all pref 7 bpf chain 0 handle 0x1 zfw_tc_ingress.o:[action/6] direct-action not_in_hw id 18323 tag a3c047d2327de858 jited ``` +Verify running egress filters on the configured interface i.e. 
+``` +sudo tc filter show dev ens33 egress +``` +If running egress on interface: +``` +filter protocol all pref 1 bpf chain 0 +filter protocol all pref 1 bpf chain 0 handle 0x1 zfw_tc_outbound_track.o:[action] direct-action not_in_hw id 18329 tag 4d66fa6f69670aad jited +filter protocol all pref 2 bpf chain 0 +filter protocol all pref 2 bpf chain 0 handle 0x1 zfw_tc_outbound_track.o:[action/1] direct-action not_in_hw id 18335 tag e55132e45dc4a711 jited +filter protocol all pref 3 bpf chain 0 +filter protocol all pref 3 bpf chain 0 handle 0x1 zfw_tc_outbound_track.o:[action/2] direct-action not_in_hw id 18341 tag 9ec5f3c00f9ef356 jited +filter protocol all pref 4 bpf chain 0 +filter protocol all pref 4 bpf chain 0 handle 0x1 zfw_tc_outbound_track.o:[action/3] direct-action not_in_hw id 18347 tag 9af99a7218e0be3d jited +filter protocol all pref 5 bpf chain 0 +filter protocol all pref 5 bpf chain 0 handle 0x1 zfw_tc_outbound_track.o:[action/4] direct-action not_in_hw id 18353 tag d1a536ae48efe657 jited +filter protocol all pref 6 bpf chain 0 +filter protocol all pref 6 bpf chain 0 handle 0x1 zfw_tc_outbound_track.o:[action/5] direct-action not_in_hw id 18359 tag 7da52c707c308700 jited +filter protocol all pref 7 bpf chain 0 +filter protocol all pref 7 bpf chain 0 handle 0x1 zfw_tc_outbound_track.o:[action/6] direct-action not_in_hw id 18365 tag bd21505cf7e27536 jited +``` + Services configured via the openziti controller for ingress on the running ziti-edge-tunnel/ziti-router identity will auto populate into the firewall's inbound rule list. @@ -532,11 +435,9 @@ sudo reboot ### User space manual configuration ziti-edge-tunnel/ziti-router will automatically populate rules for configured ziti services so the following is if you want to configure additional rules outside of the automated ones. zfw-tunnel will also auto-populate /opt/openziti/bin/user/user_rules.sh -with listening ports in the config.yml. - -**Note the ```zfw-router__.deb / zfw-tunnel-..rpm``` will install an un-enabled service ```fw-init.service```. If you install the zfw-router package without an OpenZiti ziti-router installation and enable this service it will start the ebpf fw after reboot and load the commands from /opt/openziti/bin/user/user_rules.sh. You will also need to manually copy the /opt/openziti/etc/ebpf_config.json.sample to ebpf_config.json and edit interface name**. If you later decide to install ziti-router this service should be disabled and you should evaluate whether the existing ebpf_config.json include the ziti lanIf ifname. ```/opt/openziti/bin/start_ebpf_router.py``` +with listening ports in the config.yml. -**(All commands listed in this section need to be put in /opt/openziti/bin/user/user_rules.shin order to survive reboot)** +**(All commands listed in this section need to be put in /opt/openziti/bin/user/user_rules.sh in order to survive reboot)** ### ssh default operation By default ssh is enabled to pass through to the ip address of the attached interface from any source. @@ -575,6 +476,19 @@ sudo zfw --vrrp-enable sudo zfw --vrrp-enable -d ``` +### Non tuple passthrough +**Caution:** +This allows all non udp/tcp traffic to passthrough to the OS and should only be enabled if you are using zfw for tcp/udp redirection and are +using **another firewall** to filter traffic. This setting will also disable icmp masquerade if enabled. **THIS SETTING IS DISABLED BY DEFAULT**. 
+- Enable +``` +sudo zfw -q, --pass-non-tuple +``` + +- Disable +``` +sudo zfw -q, --pass-non-tuple -d +``` ### Inserting / Deleting Ingress rules @@ -621,6 +535,226 @@ Example: Monitor ebpf trace messages sudo zfw -M |all ``` + +### Load rules from /opt/openziti/bin/user/user_rules.sh + +```sudo zfw -A, --add-user-rules``` + +### Enable both TC ingress and Egress filters on an interface + +```sudo zfw -H, --init-tc ``` + +### Native EBPF based IPv4 and IPv6 Masquerade support + +zfw can now provide native IPv4/IPv6 masquerade operation for outbound pass through connections which can be enabled on a WAN facing interface: + +```sudo zfw -k, --masquerade ``` + +This function requires that both ingress and egress TC filters are enabled on outbound interface. For IPv4 this is now using Dynamic PAT and IPv6 is using +static PAT. Note: When running on later kernels i.e. 6+ some older network hardware may not work with ebpf Dynamic PAT. We have also seen some incompatibility with 2.5Gb interfaces on 5.x+ kernels. + +In release v0.8.19 masquerade session gc was added to /etc/cron.d/zfw_refresh via ```/opt/openziti/bin/zfw -L -G > /dev/null``` and runs once per minute. Stale udp sessions will be +removed if over 30s and stale tcp sessions will be removed if over 3600 seconds(1hr). + +### Explicit Deny Rules +This feature adds the ability to enter explicit deny rules by appending ```-d, --disable``` to the ```-I, --insert rule``` to either ingress or egress rules. Rule precedence is based on longest match prefix. If the prefix is the same then the precedence follows the order entry of the rules, which when listed will go from top to bottom for ports with in the same prefix e.g. + +If you wanted to allow all tcp 443 traffic outbound except to 10.1.0.0/16 you would enter the following egress rules: + +``` +sudo zfw -I -c 10.1.0.0 -m 16 -l 443 -h 443 -t 0 -p tcp -z egress -d +sudo zfw -I -c 0.0.0.0 -m 0 -l 443 -h 443 -t 0 -p tcp -z egress +``` +Listing the above with ```sudo zfw -L -z egress``` you would see: +``` +EGRESS FILTERS: +type service id proto origin destination mapping: interface list +------ ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- +accept 0000000000000000000000 tcp 0.0.0.0/0 0.0.0.0/0 dpts=443:443 PASSTHRU to 0.0.0.0/0 [] +deny 0000000000000000000000 tcp 0.0.0.0/0 10.1.0.0/16 dpts=443:443 PASSTHRU to 10.1.0.0/16 [] +Rule Count: 2 / 250000 +prefix_tuple_count: 2 / 100000 +``` + +The following illustrates the precedence with rules matching the same prefix: + +Assume you want to block port 22 to address 172.16.240.137 and enter rules the following rules: +``` +sudo zfw -I -c 172.16.240.139 -m 32 -l 1 -h 65535 -t 0 -p tcp -z egress +sudo zfw -I -c 172.16.240.139 -m 32 -l 22 -h 22 -t 0 -p tcp -z egress -d +``` +``` +sudo zfw -L -z egress +EGRESS FILTERS: +type service id proto origin destination mapping: interface list +------ ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- +accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=1:65535 PASSTHRU to 172.16.240.139/32 [] +deny 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=22:22 PASSTHRU to 172.16.240.139/32 [] +Rule Count: 2 / 250000 +``` +The rule listing shows the accept port range 1-65535 listed first and then the deny port 22 after. 
This would result in port 22 being allowed outbound because traffic would match the accept rule and never reach the deny rule. + +The correct rule order entry would be: +``` +sudo zfw -I -c 172.16.240.139 -m 32 -l 22 -h 22 -t 0 -p tcp -z egress -d +sudo zfw -I -c 172.16.240.139 -m 32 -l 1 -h 65535 -t 0 -p tcp -z egress +``` +``` +sudo zfw -L -z egress +EGRESS FILTERS: +type service id proto origin destination mapping: interface list +------ ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- +deny 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=22:22 PASSTHRU to 172.16.240.139/32 [] +accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=1:65535 PASSTHRU to 172.16.240.139/32 [] +Rule Count: 2 / 250000 +prefix_tuple_count: 1 / 100000 +``` +This will result in traffic to port 22 matching the first rule and correctly being dropped as intended. + + +### Outbound filtering +- This new feature is currently meant to be used in stand alone FW mode (No OpenZiti). It can be run with OpenZiti + on intercepted inbound connections but locally hosted services will require manually entered egress rules. + See note in section ```User space manual configuration``` which briefly describes installing + zfw without OpenZiti. + + The feature allows for both IPv4 and IPv6 ingress/egress filters on a single external interface. i.e. + This mode maintains state for outbound traffic associated with traffic allowed by ingress filters so + there is no need to statically configure high port ranges for return traffic. The assumption is + If you enable inbound ports you want to allow the stateful reply packets for udp and tcp. + +An egress filter must be attached to the interface , ```-b, --outbound-filter ``` needs to be set ,and at least one interface needs to have had an ingress filter applied. + +From cli: + +``` +sudo zfw --init-tc ens33 +sudo /opt/openziti/bin/zfw --outbound-filter ens33 +``` + +The above should result in all outbound traffic except for arp and icmp to be dropped on ens33 (icmp echo-reply +will also be dropped unless ```sudo zfw -e ens33 is set```). ssh return traffic will also be allowed outbound +unless ```ssh -x ens33 is set```. + +If per interface rules is not false then the egress rules would +need explicit -N for each rule in the same manner as ingress rules. + +i.e. 
set ```/opt/openziti/etc/ebpf_config.json``` as below changing interface name only + + ```{"InternalInterfaces":[], "ExternalInterfaces":[{"Name":"ens33", "PerInterfaceRules": false}]}``` + + or equivalent InternalInterfaces config: + +```{"InternalInterfaces":[{"Name":"ens33"}],"ExternalInterfaces":[]}``` + +Then in executable script file ```/opt/openziti/bin/user/user_rules.sh``` +``` +#!/bin/bash + +# enable outbound filtering (Can be set before or after egress rule entry) +# If set before DNS rules some systems command response might be slow till +# a DNS egress rule is entered + +sudo /opt/openziti/bin/zfw --outbound-filter ens33 + +#example outbound rules set by adding -z, --direction egress +#ipv4 +sudo /opt/openziti/bin/zfw -I -c 0.0.0.0 -m 0 -l 53 -h 53 -t 0 -p udp --direction egress +sudo /opt/openziti/bin/zfw -I -c 172.16.240.139 -m 32 -l 5201 -h 5201 -t 0 -p tcp -z egress +sudo /opt/openziti/bin/zfw -I -c 172.16.240.139 -m 32 -l 5201 -h 5201 -t 0 -p udp --direction egress + +#ipv6 +sudo /opt/openziti/bin/zfw -6 ens33 #enables ipv6 +sudo /opt/openziti/bin/zfw -I -c 2001:db8::2 -m 32 -l 5201 -h 5201 -t 0 -p tcp -z egress +sudo /opt/openziti/bin/zfw -I -c 2001:db8::2 -m 32 -l 5201 -h 5201 -t 0 -p udp --direction egress + +#inbound rules +sudo /opt/openziti/bin/zfw -I -c 172.16.240.0 -m 24 -l 22 -h 22 -t 0 -p tcp``` +``` +- To view all IPv4 egress rules: ```sudo zfw -L -z egress``` + +``` +EGRESS FILTERS: +type service id proto origin destination mapping: interface list +------ ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- +accept 0000000000000000000000 udp 0.0.0.0/0 172.16.240.139/32 dpts=5201:5201 PASSTHRU to 172.16.240.139/32 [] +accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=5201:5201 PASSTHRU to 172.16.240.139/32 [] + +``` +- To view all IPv6 egress rules: ```sudo zfw -L -6 all -z egress``` + +``` +EGRESS FILTERS: +type service id proto origin destination mapping: interface list +------ ---------------------- ----- ------------------------------------------ ------------------------------------------ ------------------------- -------------- +accept 0000000000000000000000|tcp |::/0 |2001:db8::2/32 | dpts=5201:5201 PASSTHRU | [] +accept 0000000000000000000000|udp |::/0 |2001:db8::2/32 | dpts=5201:5201 PASSTHRU | [] +``` +- to view egress rules for a single IPv4 CIDR ```sudo zfw -L -c 172.16.240.139 -m 32 -z egress``` +``` +EGRESS FILTERS: +type service id proto origin destination mapping: interface list +------ ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- +accept 0000000000000000000000 udp 0.0.0.0/0 172.16.240.139/32 dpts=5201:5201 PASSTHRU to 172.16.240.139/32 [] +Rule Count: 1 +type service id proto origin destination mapping: interface list +------ ---------------------- ----- ----------------- ------------------ ------------------------------------------------------- ----------------- +accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=5201:5201 PASSTHRU to 172.16.240.139/32 [] +accept 0000000000000000000000 tcp 0.0.0.0/0 172.16.240.139/32 dpts=22:22 PASSTHRU to 172.16.240.139/32 [] +Rule Count: 2 +``` + +- to view tcp egress rules for a single IPv6 CIDR ```$sudo zfw -L -c 2001:db8:: -m 64 -z egress -p tcp``` + +``` +EGRESS FILTERS: +type service id proto origin destination mapping: interface list +------ ---------------------- ----- 
------------------------------------------ ------------------------------------------ ------------------------- -------------- +accept|0000000000000000000000|tcp |::/0 |2001:db8::/64 | dpts=5201:5201 PASSTHRU | [] +Rule Count: 1 +``` + +### Support for ipv6 +- *Enabled via ```sudo zfw -6 ``` + Note: Router discovery / DHCPv6 are always enabled even if ipv6 is disabled in order to ensure the ifindex_ip6_map gets populated. +- Supports ipv6 neighbor discovery (redirects not supported) +- *Supports inbound ipv6 echo (disabled by default can be enabled via zfw -e)/ echo reply +- *Supports inbound ssh (Can be disabled via ```sudo zfw -x ```) (Care should be taken as this affects IPv4 as well) +- Supports outbound stateful host connections (Inbound only if outbound initiated) +- Supports outbound passthrough tracking. Sessions initiated from non-ebpf enabled and ebpf enabled internal interfaces out + through interface(s) defined as ExternalInterface (requires -N with -I unless "PerInterfaceRules": false) or InternalInterface in /opt/openziti/etc/ebpf_config.json + or manually applied with sudo ```zfw -X -O /opt/openziti/zfw_outbound_track.o -z egress``` + will allow stateful udp and tcp session traffic back in. +- Support for inbound IPv6 filter destination rules. Currently only destination filtering is allowed. + e.g. + ``` + sudo zfw -I -c 2001:db9:: -m 64 -l 443 -h 443 -t 0 -p tcp + ``` +- All IPv6 ingress Rules can be listed with the following command: + ``` + sudo zfw -L -6 all + ``` +- individual IPv6 ingress rules can be listed with + ``` + sudo zfw -L -c -m + ``` +- IPv6 rules can be individually deleted or flushed + e.g. +``` +sudo zfw -F +sudo zfw -D -c 2001:db9:: -m 64 -l 443 -h 443 -p tcp +``` +- Monitor connection state via ```sudo zfw -M, --monitor ``` optionally ```sudo zfw -v verbose ``` + alternatively you can use the dedicated monitor binary ```sudo zfw_monitor -i ``` +*These setting need to be in /opt/openziti/bin/user_rules.sh to be persistent across reboots. + +Note: Some of the above IPv6 features are not fully supported with OpenZiti yet. Features like +tproxy and ziti0 forwarding will not work completely till updates are released in OpenZiti. +OpenZiti routers do support IPv6 fabric connections using DNS names in the config with corresponding +AAAA records defined. ziti-edge-tunnel supports ipv6 interception but the IPC events channel does +not include the intercept IPv6 addresses, so currently IPv6 services would require manual zfw rule +entry. Similarly to IPv4, IPv6 rules can be used to forward packets to the host OS by setting +```-t, --tproxy-port 0``` in the insert command. 
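As an illustrative sketch of that host-forwarding form (the prefix and port are placeholders mirroring the IPv6 ingress example above), an inbound IPv6 rule that hands tcp/443 for 2001:db8::/64 to the local stack would be entered as:
```
sudo zfw -I -c 2001:db8:: -m 64 -l 443 -h 443 -t 0 -p tcp
```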
``` Jul 26 2023 01:42:24.108913490 : ens33 : TCP :172.16.240.139:51166[0:c:29:6a:d1:61] > 192.168.1.1:5201[0:c:29:bb:24:a1] redirect ---> ziti0 @@ -686,6 +820,8 @@ sudo zfw -L -E lo: 1 -------------------------- icmp echo :1 +pass non tuple :1 +ipv6 enable :1 verbose :0 ssh disable :0 outbound_filter :0 @@ -697,29 +833,31 @@ vrrp enable :0 eapol enable :0 ddos filtering :0 masquerade :0 -ipv6 enable :1 -------------------------- ens33: 2 -------------------------- -icmp echo :1 +icmp echo :0 +pass non tuple :0 +ipv6 enable :0 verbose :0 ssh disable :0 -outbound_filter :1 +outbound_filter :0 per interface :0 tc ingress filter :1 tc egress filter :1 -tun mode intercept :1 +tun mode intercept :0 vrrp enable :0 eapol enable :0 ddos filtering :0 masquerade :0 -ipv6 enable :1 -------------------------- ens37: 3 -------------------------- icmp echo :0 +pass non tuple :0 +ipv6 enable :0 verbose :0 ssh disable :0 outbound_filter :0 @@ -731,7 +869,6 @@ vrrp enable :0 eapol enable :0 ddos filtering :0 masquerade :0 -ipv6 enable :0 -------------------------- ``` @@ -785,6 +922,7 @@ removing /sys/fs/bpf/tc/globals/masquerade_map removing /sys/fs/bpf/tc/globals/icmp_masquerade_map removing /sys/fs/bpf/tc/globals/icmp_echo_map removing /sys/fs/bpf/tc/globals/masquerade_reverse_map +removing /sys/fs/bpf/tc/globals/bind_saddr_map ``` diff --git a/src/zfw.c b/src/zfw.c index 64465dc..292bd86 100644 --- a/src/zfw.c +++ b/src/zfw.c @@ -100,6 +100,9 @@ bool ddos = false; bool add = false; +bool bind_saddr = false; +bool unbind_saddr = false; +bool user_rules = false; bool delete = false; bool list = false; bool list_gc = false; @@ -114,9 +117,11 @@ bool cd6 = false; bool cs = false; bool cs6 = false; bool prot = false; +bool non_tuple = false; bool route = false; bool passthru = false; bool intercept = false; +bool bind_flush = false; bool masquerade = false; bool echo = false; bool eapol = false; @@ -131,6 +136,8 @@ bool all_interface = false; bool ssh_disable = false; bool tc = false; bool tcfilter = false; +bool init_tc = false; +bool init_xdp = false; bool direction = false; bool object; bool ebpf_disable = false; @@ -168,6 +175,8 @@ union bpf_attr if6_map; int if6_fd = -1; union bpf_attr ddos_saddr_map; int ddos_saddr_fd = -1; +union bpf_attr bind_saddr_map; +int bind_saddr_fd = -1; union bpf_attr ddos_dport_map; int ddos_dport_fd = -1; union bpf_attr diag_map; @@ -225,6 +234,7 @@ const char *egress_count6_map_path = "/sys/fs/bpf/tc/globals/egress6_count_map"; const char *masquerade_map_path = "/sys/fs/bpf/tc/globals/masquerade_map"; const char *masquerade_reverse_map_path = "/sys/fs/bpf/tc/globals/masquerade_reverse_map"; const char *icmp_masquerade_map_path = "/sys/fs/bpf/tc/globals/icmp_masquerade_map"; +const char *bind_saddr_map_path = "/sys/fs/bpf/tc/globals/bind_saddr_map"; const char *icmp_echo_map_path = "/sys/fs/bpf/tc/globals/icmp_echo_map"; char doc[] = "zfw -- ebpf firewall configuration tool"; const char *if_map_path; @@ -234,6 +244,7 @@ char *eapol_interface; char *verbose_interface; char *ssh_interface; char *prefix_interface; +char *nt_interface; char *tun_interface; char *vrrp_interface; char *ddos_interface; @@ -241,13 +252,14 @@ char *outbound_interface; char *monitor_interface; char *ipv6_interface; char *tc_interface; +char *xdp_interface; char *log_file_name; char *object_file; char *direction_string; char *masq_interface; char check_alt[IF_NAMESIZE]; -const char *argp_program_version = "0.8.19"; +const char *argp_program_version = "0.9.0"; struct ring_buffer *ring_buffer; __u32 
if_list[MAX_IF_LIST_ENTRIES]; @@ -259,6 +271,16 @@ struct interface uint32_t addresses[MAX_ADDRESSES]; }; +/*Key to bind_map*/ +struct bind_key { + union { + __u32 ip; + __u32 ip6[4]; + }__in46_u_dest; + __u8 mask; + __u8 type; +}; + /*Key to masquerade_map*/ struct masq_key { uint32_t ifindex; @@ -381,6 +403,7 @@ void open_tproxy_ext_map(); void open_egress_ext_map(); void open_if_list_ext_map(); void open_egress_if_list_ext_map(); +void open_bind_saddr_map(); void map_insert6(); void map_delete6(); void map_insert(); @@ -469,6 +492,7 @@ struct diag_ip4 { bool ipv6_enable; bool outbound_filter; bool masquerade; + bool pass_non_tuple; }; struct tproxy_tuple @@ -570,7 +594,7 @@ int check_filter(uint32_t idx, char *direction){ return 0; } -/*function to add loopback binding for intercept IP prefixes that do not +/*function to add loopback binding for intercept IPv4 prefixes that do not * currently exist as a subset of an external interface * */ void bind_prefix(struct in_addr *address, unsigned short mask) @@ -578,7 +602,7 @@ void bind_prefix(struct in_addr *address, unsigned short mask) char *prefix = inet_ntoa(*address); char *cidr_block = malloc(19); sprintf(cidr_block, "%s/%u", prefix, mask); - printf("binding intercept %s to loopback\n", cidr_block); + printf("binding route %s to loopback\n", cidr_block); pid_t pid; char *const parmList[] = {"/usr/sbin/ip", "addr", "add", cidr_block, "dev", "lo", "scope", "host", NULL}; if ((pid = fork()) == -1) @@ -593,12 +617,57 @@ void bind_prefix(struct in_addr *address, unsigned short mask) free(cidr_block); } +/*function to add loopback binding for intercept IPv6 prefixes that*/ +void bind6_prefix(struct in6_addr *address, unsigned short mask) +{ + char prefix[INET6_ADDRSTRLEN]; + struct in6_addr addr_6 = {0}; + inet_ntop(AF_INET6, address, prefix, INET6_ADDRSTRLEN); + char cidr_block[44]; + sprintf(cidr_block, "%s/%u", prefix, mask); + printf("binding route %s to loopback\n", cidr_block); + pid_t pid; + char *const parmList[] = {"/usr/sbin/ip", "addr", "add", cidr_block, "dev", "lo", "scope", "host", NULL}; + if ((pid = fork()) == -1) + { + perror("fork error: can't spawn bind"); + } + else if (pid == 0) + { + execv("/usr/sbin/ip", parmList); + printf("execv error: unknown error binding"); + } +} + +/*function to add loopback binding for intercept IPv6 prefixes that*/ +void unbind6_prefix(struct in6_addr *address, unsigned short mask) +{ + char prefix[INET6_ADDRSTRLEN]; + struct in6_addr addr_6 = {0}; + inet_ntop(AF_INET6, address, prefix, INET6_ADDRSTRLEN); + char cidr_block[44]; + sprintf(cidr_block, "%s/%u", prefix, mask); + printf("unbinding route %s from loopback\n", cidr_block); + pid_t pid; + char *const parmList[] = {"/usr/sbin/ip", "addr", "delete", cidr_block, "dev", "lo", "scope", "host", NULL}; + if ((pid = fork()) == -1) + { + perror("fork error: can't spawn unbind"); + } + else if (pid == 0) + { + execv("/usr/sbin/ip", parmList); + printf("execv error: unknown error unbinding"); + } +} + +/*Unbind IPv4 prefixes from lo*/ void unbind_prefix(struct in_addr *address, unsigned short mask) { char *prefix = inet_ntoa(*address); char *cidr_block = malloc(19); sprintf(cidr_block, "%s/%u", prefix, mask); - printf("unbinding intercept %s from loopback\n", cidr_block); + printf("unbinding route %s from loopback\n", cidr_block); pid_t pid; char *const parmList[] = {"/usr/sbin/ip", "addr", "delete", cidr_block, "dev", "lo", "scope", "host", NULL}; if ((pid = fork()) == -1) @@ -629,7 +698,7 @@ void set_tc(char *action) else if (pid == 0) { 
execv("/usr/sbin/tc", parmList); - printf("execv error: unknown error binding"); + printf("execv error: unknown error binding\n"); } else { @@ -638,7 +707,7 @@ void set_tc(char *action) { if(!(WIFEXITED(status) && !WEXITSTATUS(status))) { - printf("could not set tc parent %s : %s\n", action, tc_interface); + printf("waitpid error: could not set tc parent %s : %s\n", action, tc_interface); } } } @@ -684,7 +753,7 @@ void set_tc_filter(char *action) else if (pid == 0) { execv("/usr/sbin/tc", parmList); - printf("execv error: unknown error attaching filter"); + printf("execv error: unknown error attaching filter\n"); } else { @@ -693,7 +762,7 @@ void set_tc_filter(char *action) { if(!(WIFEXITED(status) && !WEXITSTATUS(status))) { - printf("tc %s filter not set : %s\n", direction_string, tc_interface); + printf("waitpid error: tc %s filter not set : %s\n", direction_string, tc_interface); } } } @@ -720,15 +789,15 @@ void disable_ebpf() disable = true; tc = true; interface_tc(); - const char *maps[37] = {tproxy_map_path, diag_map_path, if_map_path, count_map_path, + const char *maps[38] = {tproxy_map_path, diag_map_path, if_map_path, count_map_path, udp_map_path, matched_map_path, tcp_map_path, tun_map_path, if_tun_map_path, transp_map_path, rb_map_path, ddos_saddr_map_path, ddos_dport_map_path, syn_count_map_path, tp_ext_map_path, if_list_ext_map_path, range_map_path, wildcard_port_map_path, tproxy6_map_path, if6_map_path, count6_map_path, matched6_map_path, egress_range_map_path, egress_if_list_ext_map_path, egress_ext_map_path, egress_map_path, egress6_map_path, egress_count_map_path, egress_count6_map_path, egress_matched6_map_path, egress_matched_map_path, udp_ingress_map_path, tcp_ingress_map_path, - masquerade_map_path, icmp_masquerade_map_path, icmp_echo_map_path, masquerade_reverse_map_path}; - for (int map_count = 0; map_count < 37; map_count++) + masquerade_map_path, icmp_masquerade_map_path, icmp_echo_map_path, masquerade_reverse_map_path, bind_saddr_map_path}; + for (int map_count = 0; map_count < 38; map_count++) { int stat = remove(maps[map_count]); @@ -1515,6 +1584,124 @@ bool set_tun_diag() return true; } +void update_bind_saddr_map(struct bind_key *key) +{ + if (bind_saddr_fd == -1) + { + open_bind_saddr_map(); + } + struct in_addr cidr; + __u32 count = 0; + bind_saddr_map.key = (uint64_t)key; + bind_saddr_map.value = (uint64_t)&count; + bind_saddr_map.map_fd = bind_saddr_fd; + bind_saddr_map.flags = BPF_ANY; + int lookup = syscall(__NR_bpf, BPF_MAP_LOOKUP_ELEM, &bind_saddr_map, sizeof(bind_saddr_map)); + if (lookup) + { + count = 1; + int result = syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM, &bind_saddr_map, sizeof(bind_saddr_map)); + if (result) + { + printf("MAP_UPDATE_BIND_ELEM: %s \n", strerror(errno)); + } + if(key->type == 4){ + struct in_addr addr = {0}; + addr.s_addr = key->__in46_u_dest.ip; + bind_prefix(&addr, key->mask); + char *source = inet_ntoa(addr); + if(source){ + printf("Prefix: %s/%u Added to loopback\n", source, key->mask); + } + }else{ + char saddr6[INET6_ADDRSTRLEN]; + struct in6_addr saddr_6 = {0}; + memcpy(saddr_6.__in6_u.__u6_addr32, key->__in46_u_dest.ip6, sizeof(key->__in46_u_dest.ip6)); + inet_ntop(AF_INET6, &saddr_6, saddr6, INET6_ADDRSTRLEN); + bind6_prefix(&saddr_6, key->mask); + printf("Prefix: %s/%u Added to loopback\n", saddr6, key->mask); + } + } + else + { + count += 1; + int result = syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM, &bind_saddr_map, sizeof(bind_saddr_map)); + if (result) + { + printf("MAP_UPDATE_BIND_ELEM: %s \n", strerror(errno)); + } + 
printf("Key already exists: total add count=%u\n", count); + } +} + +void delete_bind_saddr_map(struct bind_key *key) +{ + if (bind_saddr_fd == -1) + { + open_bind_saddr_map(); + } + struct in_addr cidr; + __u32 count = 0; + bind_saddr_map.key = (uint64_t)key; + bind_saddr_map.value = (uint64_t)&count; + bind_saddr_map.map_fd = bind_saddr_fd; + bind_saddr_map.flags = BPF_ANY; + int lookup = syscall(__NR_bpf, BPF_MAP_LOOKUP_ELEM, &bind_saddr_map, sizeof(bind_saddr_map)); + if (!lookup) + { + if(count <= 1 || flush){ + union bpf_attr map; + memset(&map, 0, sizeof(map)); + map.pathname = (uint64_t)bind_saddr_map_path; + map.bpf_fd = 0; + int fd = syscall(__NR_bpf, BPF_OBJ_GET, &map, sizeof(map)); + if (fd == -1) + { + printf("BPF_OBJ_GET: %s\n", strerror(errno)); + close_maps(1); + } + // delete element with specified key + map.map_fd = fd; + map.key = (uint64_t)key; + int result = syscall(__NR_bpf, BPF_MAP_DELETE_ELEM, &map, sizeof(map)); + if (result) + { + printf("MAP_DELETE_ELEM: %s\n", strerror(errno)); + } + else + { + if(key->type == 4){ + struct in_addr addr = {0}; + addr.s_addr = key->__in46_u_dest.ip; + unbind_prefix(&addr, key->mask); + char *source = inet_ntoa(addr); + if(source){ + printf("Prefix: %s/%u removed from loopback\n", source, key->mask); + } + }else{ + char saddr6[INET6_ADDRSTRLEN]; + struct in6_addr saddr_6 = {0}; + memcpy(saddr_6.__in6_u.__u6_addr32, key->__in46_u_dest.ip6, sizeof(key->__in46_u_dest.ip6)); + inet_ntop(AF_INET6, &saddr_6, saddr6, INET6_ADDRSTRLEN); + unbind6_prefix(&saddr_6, key->mask); + printf("Prefix: %s/%u removed from loopback\n", saddr6, key->mask); + } + } + close(fd); + }else{ + count -= 1; + printf("add count decremented to: %u\n", count); + int result = syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM, &bind_saddr_map, sizeof(bind_saddr_map)); + if (result) + { + printf("MAP_UPDATE_BIND_ELEM: %s \n", strerror(errno)); + } + } + }else{ + printf("bind prefix does not exist\n"); + } +} + void update_ddos_saddr_map(char *source) { if (ddos_saddr_fd == -1) @@ -1678,6 +1865,26 @@ bool set_diag(uint32_t *idx) printf("icmp echo is always set to 1 for lo\n"); } } + + if (non_tuple) + { + if (!disable || *idx == 1) + { + o_diag.pass_non_tuple = true; + } + else + { + o_diag.pass_non_tuple = false; + } + if (*idx != 1) + { + printf("Set pass-non-tuple to %d for %s\n", !disable, nt_interface); + } + else + { + printf("pass-non-tuple is always set to 1 for lo\n"); + } + } if (v6) { @@ -1878,10 +2085,14 @@ bool set_diag(uint32_t *idx) if (*idx != 1) { printf("%-24s:%d\n", "icmp echo", o_diag.echo); + printf("%-24s:%d\n", "pass non tuple", o_diag.pass_non_tuple); + printf("%-24s:%d\n", "ipv6 enable", o_diag.ipv6_enable); } else { printf("%-24s:%d\n", "icmp echo", 1); + printf("%-24s:%d\n", "pass non tuple", 1); + printf("%-24s:%d\n", "ipv6 enable", 1); } printf("%-24s:%d\n", "verbose", o_diag.verbose); printf("%-24s:%d\n", "ssh disable", o_diag.ssh_disable); @@ -1894,14 +2105,6 @@ bool set_diag(uint32_t *idx) printf("%-24s:%d\n", "eapol enable", o_diag.eapol); printf("%-24s:%d\n", "ddos filtering", o_diag.ddos_filtering); printf("%-24s:%d\n", "masquerade", o_diag.masquerade); - if (*idx != 1) - { - printf("%-24s:%d\n", "ipv6 enable", o_diag.ipv6_enable); - } - else - { - printf("%-24s:%d\n", "ipv6 enable", 1); - } printf("--------------------------\n\n"); } return true; @@ -2058,6 +2261,7 @@ void interface_diag() } if (all_interface) { + nt_interface = address->ifa_name; echo_interface = address->ifa_name; verbose_interface = address->ifa_name; prefix_interface = 
address->ifa_name; @@ -2077,6 +2281,10 @@ void interface_diag() { printf("%s:zfw does not allow setting on ziti tun interfaces!\n", address->ifa_name); } + if (non_tuple && !strncmp(nt_interface, "ziti", 4)) + { + printf("%s:zfw does not allow setting on ziti tun interfaces!\n", address->ifa_name); + } if (tun && !strncmp(tun_interface, "ziti", 4)) { printf("%s:zfw does not allow setting on ziti tun interfaces!\n", address->ifa_name); @@ -2123,6 +2331,13 @@ void interface_diag() set_diag(&idx); } } + if (non_tuple) + { + if (!strcmp(nt_interface, address->ifa_name)) + { + set_diag(&idx); + } + } if (masquerade) { if (!strcmp(masq_interface, address->ifa_name)) @@ -5577,6 +5792,46 @@ int flush_tcp_egress() return 0; } +int flush_bind() +{ + union bpf_attr map; + struct bind_key init_key = {0}; + struct bind_key *key = &init_key; + struct bind_key current_key = {0}; + bool bstate; + // Open BPF tcp_map + memset(&map, 0, sizeof(map)); + map.pathname = (uint64_t)bind_saddr_map_path; + map.bpf_fd = 0; + map.file_flags = 0; + int fd = syscall(__NR_bpf, BPF_OBJ_GET, &map, sizeof(map)); + if (fd == -1) + { + printf("BPF_OBJ_GET: %s \n", strerror(errno)); + return 1; + } + map.map_fd = fd; + map.key = (uint64_t)key; + map.value = (uint64_t)&bstate; + int ret = 0; + while (true) + { + ret = syscall(__NR_bpf, BPF_MAP_GET_NEXT_KEY, &map, sizeof(map)); + if (ret == -1) + { + break; + } + map.key = map.next_key; + current_key = *(struct bind_key *)map.key; + struct bind_key *pass_key = malloc(sizeof(struct bind_key)); + memcpy(pass_key,¤t_key, sizeof(struct bind_key)); + delete_bind_saddr_map(pass_key); + free(pass_key); + } + close(fd); + return 0; +} + void map_list() { union bpf_attr map; @@ -6038,11 +6293,15 @@ void map_list_all() // commandline parser options static struct argp_option options[] = { + {"add-user-rules", 'A', NULL, 0, "Add user rules from /opt/openziti/bin/user/user_rules.sh", 0}, + {"bind-saddr-add", 'B', "", 0, "Bind loopback route with scope host", 0}, {"delete", 'D', NULL, 0, "Delete map rule", 0}, {"list-diag", 'E', NULL, 0, "", 0}, - {"list-gc-sessions", 'G', NULL, 0, "", 0}, {"flush", 'F', NULL, 0, "Flush all map rules", 0}, + {"list-gc-sessions", 'G', NULL, 0, "", 0}, {"insert", 'I', NULL, 0, "Insert map rule", 0}, + {"init-tc", 'H', "", 0, "sets ingress and egress tc filters for ", 0}, + {"bind-saddr-delete", 'J', "", 0, "Unbind loopback route with scope host", 0}, {"list", 'L', NULL, 0, "List map rules", 0}, {"monitor", 'M', "", 0, "Monitor ebpf events for interface", 0}, {"interface", 'N', "", 0, "Interface ", 0}, @@ -6055,6 +6314,7 @@ static struct argp_option options[] = { {"write-log", 'W', "", 0, "Write to monitor output to /var/log/ ", 0}, {"set-tc-filter", 'X', "", 0, "Add/remove TC filter to/from interface", 0}, {"list-ddos-saddr", 'Y', NULL, 0, "List source IP Addresses currently in DDOS IP whitelist", 0}, + {"init-xdp", 'Z', "", 0, "sets ingress xdp for (used for setting xdp on zet tun interface) ", 0}, {"ddos-filtering", 'a', "", 0, "Manually enable/disable ddos filtering on interface", 0}, {"outbound-filtering", 'b', "", 0, "Manually enable/disable ddos filtering on interface", 0}, {"ipv6-enable", '6', "", 0, "Enable/disable IPv6 packet processing on interface", 0}, @@ -6064,22 +6324,25 @@ static struct argp_option options[] = { {"passthrough", 'f', NULL, 0, "List passthrough rules ", 0}, {"high-port", 'h', "", 0, "Set high-port value (1-65535)> ", 0}, {"intercepts", 'i', NULL, 0, "List intercept rules ", 0}, + {"bind-flush", 'j', NULL, 0, "flush all bind routes 
", 0}, {"masquerade", 'k', "", 0, "enable outbound masquerade", 0}, {"low-port", 'l', "", 0, "Set low-port value (1-65535)> ", 0}, {"dprefix-len", 'm', "", 0, "Set dest prefix length (1-32) ", 0}, {"oprefix-len", 'n', "", 0, "Set origin prefix length (1-32) ", 0}, {"ocidr-block", 'o', "", 0, "Set origin ip prefix i.e. 192.168.1.0 ", 0}, {"protocol", 'p', "", 0, "Set protocol (tcp or udp) ", 0}, + {"pass-non-tuple", 'q', "", 0, "Pass all non-tuple to ", 0}, {"route", 'r', NULL, 0, "Add or Delete static ip/prefix for intercept dest to lo interface ", 0}, {"service-id", 's', "", 0, "set ziti service id", 0}, {"tproxy-port", 't', "", 0, "Set high-port value (0-65535)> ", 0}, - {"verbose", 'v', "", 0, "Enable verbose tracing on interface", 0}, {"ddos-dport-add", 'u', "", 0, "Add destination port to DDOS port list i.e. (1-65535)", 0}, + {"verbose", 'v', "", 0, "Enable verbose tracing on interface", 0}, {"enable-eapol", 'w', "", 0, "enable 802.1X eapol packets inbound on interface", 0}, {"disable-ssh", 'x', "", 0, "Disable inbound ssh to interface (default enabled)", 0}, {"ddos-saddr-add", 'y', "", 0, "Add source IP Address to DDOS IP whitelist i.e. 192.168.1.1", 0}, {"direction", 'z', "", 0, "Set direction", 0}, - {0}}; + {0} +}; static error_t parse_opt(int key, char *arg, struct argp_state *state) { @@ -6087,6 +6350,26 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) uint32_t idx = 0; switch (key) { + case 'A': + user_rules = true; + break; + case 'B': + bind_saddr = true; + if (inet_aton(arg, &dcidr)) + { + cd = true; + } + else if (inet_pton(AF_INET6, arg, &dcidr6)) + { + cd6 = true; + } + else + { + fprintf(stderr, "Invalid IP Address for arg -B, --bind-saddr-add: %s\n", arg); + fprintf(stderr, "%s --help for more info\n", program_name); + exit(1); + } + break; case 'D': delete = true; break; @@ -6100,9 +6383,53 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) case 'G': list_gc = true; break; + case 'H': + if (!strlen(arg) || (strchr(arg, '-') != NULL)) + { + fprintf(stderr, "Interface name or all required as arg to -H, --init-tc: %s\n", arg); + fprintf(stderr, "%s --help for more info\n", program_name); + exit(1); + } + idx = if_nametoindex(arg); + if (strcmp("all", arg) && idx == 0) + { + printf("Interface not found: %s\n", arg); + exit(1); + } + init_tc = true; + if (!strcmp("all", arg)) + { + all_interface = true; + } + else + { + if(if_indextoname(idx, check_alt)){ + tc_interface = check_alt; + }else{ + tc_interface = arg; + } + } + break; case 'I': add = true; break; + case 'J': + unbind_saddr = true; + if (inet_aton(arg, &dcidr)) + { + cd = true; + } + else if (inet_pton(AF_INET6, arg, &dcidr6)) + { + cd6 = true; + } + else + { + fprintf(stderr, "Invalid IP Address for arg -J, --bind-saddr-delete: %s\n", arg); + fprintf(stderr, "%s --help for more info\n", program_name); + exit(1); + } + break; case 'L': list = true; break; @@ -6302,6 +6629,26 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) case 'Y': ddos_saddr_list = true; break; + case 'Z': + if (!strlen(arg) || (strchr(arg, '-') != NULL)) + { + fprintf(stderr, "Interface name or all required as arg to -Z, --init-xdp: %s\n", arg); + fprintf(stderr, "%s --help for more info\n", program_name); + exit(1); + } + idx = if_nametoindex(arg); + if (strcmp("all", arg) && idx == 0) + { + printf("Interface not found: %s\n", arg); + exit(1); + } + init_xdp = true; + if(if_indextoname(idx, check_alt)){ + xdp_interface = check_alt; + }else{ + xdp_interface = arg; + } + 
break; case 'a': if (!strlen(arg) || (strchr(arg, '-') != NULL)) { @@ -6439,6 +6786,9 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) case 'i': intercept = true; break; + case 'j': + bind_flush = true; + break; case 'k': if (!strlen(arg) || (strchr(arg, '-') != NULL)) { @@ -6512,6 +6862,33 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) protocol_name = arg; prot = true; break; + case 'q': + if (!strlen(arg) || (strchr(arg, '-') != NULL)) + { + fprintf(stderr, "Interface name or all required as arg to -q, --pass-non-tuple: %s\n", arg); + fprintf(stderr, "%s --help for more info\n", program_name); + exit(1); + } + idx = if_nametoindex(arg); + if (strcmp("all", arg) && idx == 0) + { + printf("Interface not found: %s\n", arg); + exit(1); + } + non_tuple = true; + if (!strcmp("all", arg)) + { + all_interface = true; + } + else + { + if(if_indextoname(idx, check_alt)){ + nt_interface = check_alt; + }else{ + nt_interface = arg; + } + } + break; case 'r': route = true; break; @@ -6650,6 +7027,41 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) return 0; } +void zfw_init_tc(){ + tcfilter = true; + object_file = "/opt/openziti/bin/zfw_tc_ingress.o"; + ingress = true; + direction_string = "ingress"; + interface_tc(); + ingress = false; + object_file = "/opt/openziti/bin/zfw_tc_outbound_track.o"; + egress = true; + direction_string = "egress"; + interface_tc(); + close_maps(0); +} + +void zfw_init_xdp(){ + pid_t pid; + char *const parmList[] = {"/usr/sbin/ip", "link", "set", xdp_interface, "xdpgeneric", "obj", "/opt/openziti/bin/zfw_xdp_tun_ingress.o", "sec", "xdp_redirect", NULL}; + if ((pid = fork()) == -1) + { + perror("fork error: can't spawn bind"); + } + else if (pid == 0) + { + execv("/usr/sbin/ip", parmList); + printf("execv error: unknown error binding xdp to %s\n", xdp_interface); + }else{ + int status =0; + if(!(waitpid(pid, &status, 0) < 0)){ + if(!(WIFEXITED(status) && !WEXITSTATUS(status))){ + printf("waitpid error: xdp not set on dev %s\n", xdp_interface); + } + } + } +} + struct argp argp = {options, parse_opt, 0, doc, 0, 0, 0}; void close_maps(int code) @@ -6658,6 +7070,9 @@ void close_maps(int code) { close(diag_fd); } + if(bind_saddr_fd != -1){ + close(bind_saddr_fd); + } if (if_fd != -1) { close(if_fd); @@ -6715,9 +7130,7 @@ char *get_ts(unsigned long long tstamp) time_t s; struct timespec spec; const char *format = "%b %d %Y %H:%M:%S"; - ; clock_gettime(CLOCK_REALTIME, &spec); - s = spec.tv_sec; ns = spec.tv_nsec; time_t now = s + (ns / 1000000000); @@ -6758,6 +7171,20 @@ void open_ddos_saddr_map() } } +void open_bind_saddr_map() +{ + memset(&bind_saddr_map, 0, sizeof(bind_saddr_map)); + bind_saddr_map.pathname = (uint64_t)bind_saddr_map_path; + bind_saddr_map.bpf_fd = 0; + bind_saddr_map.file_flags = 0; + /* make system call to get fd for map */ + bind_saddr_fd = syscall(__NR_bpf, BPF_OBJ_GET, &bind_saddr_map, sizeof(bind_saddr_map)); + if (bind_saddr_fd == -1) + { + ebpf_usage(); + } +} + void open_range_map() { memset(&range_map, 0, sizeof(range_map)); @@ -6943,12 +7370,152 @@ void egress_usage(){ close_maps(1); } +void add_user_rules(){ + pid_t pid; + char *const parmList[] = {"/opt/openziti/bin/user/user_rules.sh", NULL}; + if ((pid = fork()) == -1) + { + perror("fork error: can't spawn bind"); + close_maps(1); + } + else if (pid == 0) + { + execv("/opt/openziti/bin/user/user_rules.sh", parmList); + printf("execv error: unknown error adding user defined rules\n"); + close_maps(1); + }else{ + int 
status =0;
+        if(!(waitpid(pid, &status, 0) < 0)){
+            if(!(WIFEXITED(status) && !WEXITSTATUS(status))){
+                printf("waitpid error: adding user defined rules\n");
+                close_maps(1);
+            }
+        }
+    }
+}
+
 int main(int argc, char **argv)
 {
     signal(SIGINT, INThandler);
     signal(SIGTERM, INThandler);
     argp_parse(&argp, argc, argv, 0, 0, 0);
 
+    if(user_rules){
+        if ((access(tproxy_map_path, F_OK) != 0) || (access(tproxy6_map_path, F_OK) != 0))
+        {
+            ebpf_usage();
+        }
+        if(non_tuple ||dsip || echo || ssh_disable || verbose || per_interface || add || delete || eapol || ddos || vrrp ||
+            monitor || logging || ddport || masquerade || list || outbound || bind_saddr || unbind_saddr || ddport || init_xdp || tcfilter){
+            usage("-A, --add-user-rules cannot be used in non related combination calls");
+        }
+        add_user_rules();
+        close_maps(0);
+    }
+
+    if(init_tc){
+        if(non_tuple ||dsip || echo || ssh_disable || verbose || per_interface || add || delete || eapol || ddos || vrrp ||
+            monitor || logging || ddport || masquerade || list || outbound || bind_saddr || unbind_saddr || ddport || init_xdp || tcfilter){
+            usage("-H, --init-tc cannot be used in non related combination calls");
+        }else{
+            zfw_init_tc();
+            close_maps(0);
+        }
+    }
+
+    if(init_xdp){
+        if(init_tc || non_tuple ||dsip || echo || ssh_disable || verbose || per_interface || add || delete || eapol || ddos || vrrp ||
+            monitor || logging || ddport || masquerade || list || outbound || bind_saddr || unbind_saddr || ddport || tcfilter){
+            usage("-Z, --init-xdp cannot be used in non related combination calls");
+        }else{
+            zfw_init_xdp();
+            close_maps(0);
+        }
+    }
+
+    if(non_tuple && (dsip || tcfilter || echo || ssh_disable || verbose || per_interface || add || delete || eapol || ddos || vrrp ||
+        monitor || logging || ddport || masquerade || list || outbound || bind_saddr || unbind_saddr || ddport)){
+        usage("-q, --pass-non-tuple cannot be used in non related combination calls");
+    }
+
+    if(bind_flush){
+        if(dsip || tcfilter || echo || ssh_disable || verbose || per_interface || add || delete || eapol || ddos || vrrp ||
+            monitor || logging || ddport || masquerade || list || outbound || bind_saddr || unbind_saddr|| ddport){
+            usage("-j, --bind-flush cannot be used in non related combination calls");
+        }else{
+            if(flush){
+                flush_bind();
+            }else{
+                usage("-j, --bind-flush requires -F, --flush\n");
+            }
+        }
+        close_maps(0);
+    }
+
+    if(bind_saddr){
+        if ((dsip || tcfilter || echo || ssh_disable || verbose || per_interface || add || delete
+            || flush || eapol) || ddos || vrrp || monitor || logging || ddport || masquerade || list || outbound ||
+            unbind_saddr || ddport)
+        {
+            usage("-B, --bind-saddr-add can not be used in combination call\n");
+
+        }else if(cd && dl){
+            if((dplen >= 0) && (dplen <= 32)){
+                struct bind_key key = {0};
+                key.__in46_u_dest.ip = dcidr.s_addr;
+                key.mask = dplen;
+                key.type = 4;
+                update_bind_saddr_map(&key);
+            }else{
+                usage("Invalid IPv4 cidr len\n");
+            }
+        }else if(cd6 && dl){
+            if((dplen >= 0) && (dplen <= 128)){
+                struct bind_key key = {0};
+                memcpy(key.__in46_u_dest.ip6, dcidr6.__in6_u.__u6_addr32, sizeof(key.__in46_u_dest.ip6));
+                key.mask = dplen;
+                key.type = 6;
+                update_bind_saddr_map(&key);
+            }else{
+                usage("Invalid IPv6 cidr len\n");
+            }
+        }
+        close_maps(0);
+    }
+
+    if(unbind_saddr){
+        if ((dsip || tcfilter || echo || ssh_disable || verbose || per_interface || add || delete
+            || flush || eapol) || ddos || vrrp || monitor || logging || ddport || masquerade || list || outbound ||
+            bind_saddr || ddport)
+        {
+            usage("-J, --bind-saddr-delete can 
not be used in combination call\n"); + + }else if(cd && dl){ + if((dplen >= 0) && (dplen <= 32)){ + struct bind_key key = {0}; + key.__in46_u_dest.ip = dcidr.s_addr; + key.mask = dplen; + key.type = 4; + delete_bind_saddr_map(&key); + }else{ + printf("Invalid IPv4 cidr len\n"); + close_maps(1); + } + }else if(cd6 && dl){ + if((dplen >= 0) && (dplen <= 128)){ + struct bind_key key = {0}; + memcpy(key.__in46_u_dest.ip6, dcidr6.__in6_u.__u6_addr32, sizeof(key.__in46_u_dest.ip6)); + key.mask = dplen; + key.type =6; + delete_bind_saddr_map(&key); + }else{ + printf("Invalid IPv6 cidr len\n"); + close_maps(1); + } + } + close_maps(0); + } + if (service && (!add && !delete)) { usage("-s, --service-id requires -I, --insert or -D, --delete"); @@ -7112,7 +7679,7 @@ int main(int argc, char **argv) } if (disable && (!ssh_disable && !echo && !verbose && !per_interface && !tcfilter && !tun && !vrrp - && !eapol && !ddos && !dsip && !ddport && !v6 && !outbound && !add && !delete && !masquerade)) + && !eapol && !ddos && !dsip && !ddport && !v6 && !outbound && !add && !delete && !masquerade && !non_tuple)) { usage("Missing argument at least one of -a,-b,-6,-e, -k, -u, -v, -w, -x, -y, or -E, -P, -R, -T, -X"); } @@ -7390,7 +7957,7 @@ int main(int argc, char **argv) } } } - else if (vrrp || verbose || ssh_disable || echo || per_interface || tun || eapol || ddos || v6 || outbound || masquerade) + else if (vrrp || verbose || ssh_disable || echo || per_interface || tun || eapol || ddos || v6 || outbound || masquerade || non_tuple ) { interface_diag(); } diff --git a/src/zfw_monitor.c b/src/zfw_monitor.c index 40cbe07..3ecb83f 100644 --- a/src/zfw_monitor.c +++ b/src/zfw_monitor.c @@ -85,7 +85,7 @@ char check_alt[IF_NAMESIZE]; char doc[] = "zfw_monitor -- ebpf firewall monitor tool"; const char *rb_map_path = "/sys/fs/bpf/tc/globals/rb_map"; const char *tproxy_map_path = "/sys/fs/bpf/tc/globals/zt_tproxy_map"; -const char *argp_program_version = "0.8.19"; +const char *argp_program_version = "0.9.0"; union bpf_attr rb_map; int rb_fd = -1; diff --git a/src/zfw_tc_ingress.c b/src/zfw_tc_ingress.c index 8571243..9a519d9 100644 --- a/src/zfw_tc_ingress.c +++ b/src/zfw_tc_ingress.c @@ -215,6 +215,16 @@ struct icmp_masq_key { __u32 ifindex; }; +/*Key to bind_map*/ +struct bind_key { + union { + __u32 ip; + __u32 ip6[4]; + }__in46_u_dest; + __u8 mask; + __u8 type; +}; + /*Key to masquerade_map*/ struct masq_key { uint32_t ifindex; @@ -355,6 +365,7 @@ struct diag_ip4 { bool ipv6_enable; bool outbound_filter; bool masquerade; + bool pass_non_tuple; }; /*Value to tun_map*/ @@ -407,6 +418,15 @@ struct { __uint(pinning, LIBBPF_PIN_BY_NAME); } ddos_saddr_map SEC(".maps"); +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(key_size, sizeof(struct bind_key)); + __uint(value_size,sizeof(uint32_t)); + __uint(max_entries, 65535); + __uint(pinning, LIBBPF_PIN_BY_NAME); + __uint(map_flags, BPF_F_NO_PREALLOC); +} bind_saddr_map SEC(".maps"); + struct { __uint(type, BPF_MAP_TYPE_LRU_HASH); __uint(key_size, sizeof(uint16_t)); @@ -1248,7 +1268,7 @@ int bpf_sk_splice(struct __sk_buff *skb){ /* if not tuple forward ARP and drop all other traffic */ if (!tuple){ - if(skb->ifindex == 1){ + if(skb->ifindex == 1 || local_diag->pass_non_tuple){ return TC_ACT_OK; } else if(icmp){ diff --git a/src/zfw_tc_outbound_track.c b/src/zfw_tc_outbound_track.c index 78a3456..03fb070 100644 --- a/src/zfw_tc_outbound_track.c +++ b/src/zfw_tc_outbound_track.c @@ -206,6 +206,7 @@ struct diag_ip4 { bool ipv6_enable; bool outbound_filter; bool masquerade; + 
bool pass_non_tuple; }; /*value to ifindex_tun_map*/ @@ -924,6 +925,9 @@ int bpf_sk_splice(struct __sk_buff *skb){ /* if not tuple forward */ if (!tuple){ + if(local_diag->pass_non_tuple || skb->ifindex == 1){ + return TC_ACT_OK; + } if(ipv4){ struct iphdr *iph = (struct iphdr *)(skb->data + sizeof(*eth)); if ((unsigned long)(iph + 1) > (unsigned long)skb->data_end){