21. Components » Packet Filter

The functionality of Packet Filter is described in depth in the Choosing a Method of DDoS Mitigation chapter.

To add a Packet Filter, click the [+] button found in the title bar of the Configuration » Components panel. To configure an existing Packet Filter, go to Configuration » Components and click its name.

[Screenshot: Packet Filter Configuration]

Packet Filter Configuration parameters:

Filter Name – Assign a short, descriptive name to easily identify the Packet Filter
Filter Color – By default, a random color is used in graphs. You can change it via the dropdown menu
Filter Visibility – Toggles the listing inside the Reports » Devices panel
Device Group – Enter a description if you wish to organize components (e.g. by location, characteristics) or to permit fine-grained access for roles
Filter Server – Choose a server that meets the minimum system requirements for running Packet Filter. If this is not the Console server, follow the NFS configuration steps to make the raw packet data visible in the web interface
Capture Engine – Select the preferred packet capturing engine:
Embedded Libpcap – Uses the built-in libpcap library. Requires no extra setup, but it cannot run on multiple threads and may be too slow for multi-gigabit environments
[Screenshot: Packet Filter Options – Embedded Libpcap]
Packet Sampling (1/N) – The packet sampling rate. On most systems, the default (1) is correct
System Libpcap – Uses the libpcap library shipped with the Linux distribution
[Screenshot: Packet Filter Options – System Libpcap]
Environment Variables – Some NIC drivers ship their own libpcap and require a specific environment variable
Packet Sampling (1/N) – If using a round-robin packet scheduler with multiple filtering servers, set this to the number of servers. Otherwise, 1
PF_RING – Uses the PF_RING framework to accelerate packet processing. PF_RING typically outperforms Libpcap but needs extra kernel modules and only supports certain NICs. PF_RING ZC is not supported by Packet Filter because ZC isolates the interface from the kernel, preventing packet switching, routing, or filtering. Click the button on the right for PF_RING-specific options:
[Screenshot: Packet Filter Options – PF_RING]
CPU Threads – Packet Filter can run across multiple CPU cores. Each additional thread increases RAM usage, and more than 6 threads typically degrades performance. The usable number of threads also depends on the number of RSS queues
PF_RING Vanilla RX/TX – PF_RING can limit itself to packets matching a specific direction (RX or TX)
PF_RING Sampling (1/N) – PF_RING can sample packets directly in the kernel, more efficient than user-space sampling
Packet Sampling (1/N) – If using a round-robin packet scheduler with multiple filtering servers, set this to the number of servers. Otherwise, 1
DPDK – Choose the DPDK framework for high-performance packet capture. Click the button beside Capture Engine to configure DPDK-specific settings (see Appendix 1)
DPDK [Packet Sensor] – Uses a DPDK-enabled Packet Sensor for packet capture. Because DPDK offers exclusive NIC access to one process, this option merges Packet Filter logic into the Packet Sensor, so both components listen on the same interface
Sniffing Interface – Specify which network interface(s) the Packet Filter should monitor. Libpcap supports only one interface, while PF_RING supports multiple interfaces if they are comma-separated (“,”). Using the Inbound Interface raises CPU usage (all traffic is inspected), whereas using the Outbound Interface lowers CPU usage (only forwarded traffic is seen). In the outbound scenario, Packet Filter can still collect dropped-traffic stats from NIC counters
[Screenshot: Packet Filter Options – Sniffing Interface]
Interface Type – Choose TUN/TAP or GRE if appropriate for the Sniffing Interface
GRE Decapsulation – When enabled, Packet Filter inspects the inner packet rather than the GRE wrapper
Filtering Interface – Select on which interface to apply the filtering rules:
None – Detects/reports filtering rules but does not apply them
Inbound interface – Applies filtering on the inbound interface (defined below)
Outbound interface – Applies filtering on the outbound interface (defined below)
Inbound Interface – Enter the interface receiving incoming (ingress) traffic. This parameter can be omitted if Filtering Interface is the same as Outbound Interface. For bridged interfaces, prepend “physdev:” to the interface name
Outbound Interface – Cleaned traffic is sent to the downstream router/switch via this interface, which should have a route to the default gateway. Omit if Filtering Interface is the same as Inbound Interface. For bridged interfaces, prepend “physdev:” to the interface name
BGP Flowspec – Choose the policy to apply when sending BGP Flowspec announcements via a Response. The rate-limit policy applies only to bits/s anomalies; for pkts/s anomalies, any matched traffic is fully discarded
Netfilter Firewall – Packet Filter utilizes the Netfilter framework in the Linux kernel for software-based packet filtering and rate limiting. Because Packet Filter avoids the connection tracking of stateful firewalls, it operates very quickly and remains highly flexible
Disabled – Packet Filter detects/reports rules but does not invoke the Netfilter firewall
Filtering rules drop matched traffic. Valid traffic is accepted – Packet Filter detects/reports/applies filtering rules via Netfilter. If a rule is not whitelisted, matched traffic is blocked; everything else passes
Filtering rules drop matched traffic. Valid traffic is rate-limited – Packet Filter applies filtering rules, dropping non-whitelisted traffic and rate-limiting the rest. Netfilter only supports pkts/s thresholds, and some kernels fail above 10000 pkts/s
Filtering rules rate-limit matched traffic. Valid traffic is accepted – Packet Filter rate-limits matched traffic to the threshold. Netfilter does not support bits/s thresholds; some kernels fail above 10000 pkts/s
Apply the default Netfilter chain policy – For testing only. Packet Filter detects/reports rules, but all rules have the RETURN target
When using the Netfilter Firewall, the following options become available:
[Screenshot: Packet Filter Options – Netfilter Firewall]
Execution – Filtering rules can be applied automatically or manually (by clicking the Netfilter icon in Reports » Tools » Anomalies)
Netfilter Table – The raw option typically offers better performance but needs both Inbound and Outbound interfaces to be set. It may not work on virtual interfaces. The filter option is usually slower
Netfilter Chain – Use FORWARD if the server forwards traffic, or INPUT if it doesn’t
Operating Layer – Choose OSI Layer 2 if the server is configured as a bridge, OSI Layer 3 otherwise
Dataplane Firewall – Controls the built-in, DPDK-based firewall, which outperforms Netfilter but is less flexible and more complex to configure
[Screenshot: Packet Filter Options – Dataplane Firewall]
Execution – Filtering rules can be applied automatically or manually (by clicking the Dataplane icon in Reports » Tools » Anomalies)
Hardware Offload – Choose a NIC hardware-filtering option if available. Because hardware filters don’t use CPU cycles, they can complement Netfilter or Dataplane Firewall for better performance
Disabled – No hardware filters are applied
DPDK Flow API – Leverages the Generic Flow API (rte_flow) in DPDK to offload packet filtering tasks directly to the NIC hardware
Chelsio T5+ 10/40/100 Gigabit adapter with LE-TCAM filters – Uses the cxgbtool utility to apply up to 487 filtering rules for source/destination IPv4/IPv6 addresses, source/destination TCP/UDP ports, and IP protocols. The utility can be installed by the Chelsio Unified Wire driver. Drop counters are available for packets, not for bytes
Mellanox ConnectX NIC with OFED driver – Uses the ethtool utility to apply up to 924 rules for source/destination IPv4/IPv6 addresses, source/destination TCP/UDP ports, and IP protocols. The utility must be installed by the OFED driver in /opt/mellanox/ethtool/sbin/. No drop counters are available
Intel x520+ 1/10/40 Gigabit adapter configured to block IPv4 sources – Programs the Intel chipset to drop IPv4 source IPs. Up to 4086 hardware filters; no drop counters
Intel x520+ 1/10/40 Gigabit adapter configured to block IPv4 destinations – Programs the Intel chipset to drop IPv4 destination IPs. Up to 4086 hardware filters; no drop counters
When using Hardware Offload, the following option becomes available:
[Screenshot: Packet Filter Options – Hardware Offload]
Execution – Filtering rules can be applied automatically or manually (by clicking the NIC chipset icon in Reports » Tools » Anomalies)
Whitelist – Contains a set of rules preventing critical traffic from being blocked. Refer to the Whitelist Template chapter for more information
Comments – (Optional) Store internal notes about this Packet Filter. These notes are not displayed elsewhere
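When the Filter Server is not the Console server, the raw packet dumps it records must be shared over NFS so the Console can display them, as noted under the Filter Server parameter above. A minimal sketch of such a share; the directory path, hostname, and Console IP address are assumptions for illustration and must be replaced with your own values:

```shell
# On the Filter Server – export the dump directory to the Console
# (append to /etc/exports; path and Console IP are assumptions)
#   /opt/andrisoft/dumps  192.0.2.10(ro,no_subtree_check)

# On the Console server – mount the export at the same path
# (append to /etc/fstab; the Filter Server hostname is an assumption)
#   filter1.example.com:/opt/andrisoft/dumps  /opt/andrisoft/dumps  nfs  ro,soft  0 0
```

After editing /etc/exports, run exportfs -ra on the Filter Server and mount the share on the Console.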

Enable the Packet Filter by clicking the on/off button next to its name in Configuration » Components. If a traffic anomaly triggers the Response action “Detect filtering rules and mitigate the attack with Wanguard Filter”, a Packet Filter instance is launched automatically. If there are no anomalies requiring a Filter instance, Reports » Devices » Overview displays “No active instance”.

Note

You can test any firewall supported by Packet Filter in Reports » Tools » Firewall by clicking [Add Firewall Rule].

21.1. Packet Filter Troubleshooting

✔ If the server is not switching packets although it should function as a network bridge, follow the steps required by your Linux distribution and make sure the following commands are executed during startup:
[root@localhost ~]# sysctl -w net.bridge.bridge-nf-call-ip6tables=1
[root@localhost ~]# sysctl -w net.bridge.bridge-nf-call-iptables=1
[root@localhost ~]# sysctl -w net.bridge.bridge-nf-filter-vlan-tagged=1
✔ If the server is not routing packets although it should function as a router, follow the steps required by your Linux distribution and make sure the following commands are executed during startup. To inject the packets back into the network, set a core router as default gateway, reachable through the Outbound Interface, either directly (recommended) or through a GRE or IP-in-IP tunnel
[root@localhost ~]# sysctl -w net.ipv4.ip_forward=1
[root@localhost ~]# sysctl -w net.ipv4.conf.all.forwarding=1
[root@localhost ~]# sysctl -w net.ipv4.conf.default.rp_filter=0
[root@localhost ~]# sysctl -w net.ipv4.conf.all.rp_filter=0
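Instead of re-running the sysctl commands at every startup, the settings above (and the bridge settings from the previous item, in the bridged scenario) can be made persistent through a sysctl configuration file. A sketch; the filename is an assumption, any name under /etc/sysctl.d/ works:

```shell
# /etc/sysctl.d/99-packet-filter.conf (filename is an assumption)
# Enable IPv4 forwarding and disable reverse-path filtering,
# matching the sysctl -w commands listed above
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0

# Apply without rebooting:
#   sysctl --system
```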
✔ Go to Help » Software Updates to make sure that you are running the latest version of the software
✔ To view the traffic counters for each Netfilter or Chelsio filtering rule, go to Reports » Tools » Firewall
✔ To see the filtering rules applied by the Netfilter firewall, execute on CLI as root:
[root@localhost ~]# iptables -L -n -v && iptables -L -n -v -t raw
To delete all chains:
[root@localhost ~]# for chain in `iptables -L -t raw | grep wanguard | awk '{ print $2 }'`; do iptables -t raw -F $chain && iptables -t raw -X $chain; done
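The loop above takes the chain name from the second column of the iptables listing. A self-contained sketch of just that extraction step, run against a hypothetical listing (the chain names are invented for illustration; real Wanguard chain names will differ):

```shell
# Hypothetical output of `iptables -L -t raw` (format assumed for illustration)
sample='Chain PREROUTING (policy ACCEPT)
Chain wanguard-1234 (1 references)
Chain wanguard-5678 (1 references)'

# Keep only the Wanguard chains and print the chain name (column 2),
# exactly as the grep | awk pipeline in the one-liner does
chains=$(printf '%s\n' "$sample" | grep wanguard | awk '{ print $2 }')
printf '%s\n' "$chains"
```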
✔ To view the filtering rules applied on the Intel 82599 chipset:
[root@localhost ~]# ethtool --show-ntuple <filtering_interface>
or, for kernels >3.1:
[root@localhost ~]# ethtool --show-nfc <filtering_interface>
✔ To ensure that filtering rules can be applied on the Intel 82599 chipset, load the ixgbe driver with the parameter FdirPballoc=3. To prevent “Location out of range” errors from the ixgbe driver, load it with the right parameters to activate the maximum of 8192 filtering rules
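One common way to pass such module parameters persistently is through a modprobe configuration file. A sketch, assuming an ixgbe driver build that accepts the FdirPballoc parameter (check `modinfo ixgbe` for the parameters your build supports); the filename is an assumption:

```shell
# /etc/modprobe.d/ixgbe.conf (filename is an assumption)
# FdirPballoc=3 selects the largest Flow Director buffer allocation,
# per the troubleshooting note above
#   options ixgbe FdirPballoc=3

# Reload the driver for the option to take effect:
#   modprobe -r ixgbe && modprobe ixgbe
```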
✔ To view filtering rules applied on the Chelsio T4+ chipset:
[root@localhost ~]# cxgbtool <filtering_interface> filter show
✔ If the CPU usage of the Packet Filter instance is too high, install PF_RING (no ZC/DNA/LibZero needed), or use a network adapter that allows distributing Packet Filters over multiple CPU cores
✔ If Netfilter’s performance is too low, check whether nf_conntrack is loaded and disable it if possible. Packet Filter uses only stateless firewalls
✔ For PF_RING installation issues, contact ntop.org. To raise the maximum number of PF_RING programs from 64 to 256, increase MAX_NUM_RING_SOCKETS in kernel/linux/pf_ring.h and recompile the pf_ring kernel module
✔ The event log error License key not compatible with the existing server indicates that the server is unregistered and you need to send the string from Configuration » Servers » [Server] » Hardware Key to sales@andrisoft.com