Snort 3 + NFQ with IPS mode

This configuration is for OpenWrt x64 22.03.4.
Install ethtool, snort3 and kmod-nft-queue:

opkg install kmod-nft-queue ethtool snort3

Check offloading on the WAN interface and disable it:

root@OpenWrt:~# ethtool -k eth1 | grep receive-offload
generic-receive-offload: on
large-receive-offload: off [fixed]
root@OpenWrt:~# ethtool -K eth1 gro off lro off
Cannot change large-receive-offload
root@OpenWrt:~# ethtool -k eth1 | grep receive-offload
generic-receive-offload: off
large-receive-offload: off [fixed]

Add an nft rule for nfqueue:

nft 'add chain inet fw4 IPS { type filter hook forward priority filter ; }'
nft insert rule inet fw4 IPS counter queue num 4 bypass

Note: Older OpenWrt versions use iptables. The rule will not persist when you reboot the device. Create a firewall.user file and add it as an include in /etc/config/firewall.
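For reference, a minimal sketch of such a firewall.user (an assumption of mine, mirroring the nft commands in this guide; the chain and queue number must match your snort.lua):

```shell
# /etc/firewall.user -- runs again on every firewall reload once it is
# registered as an include in /etc/config/firewall.
# Sketch only: adjust the queue number to match your snort.lua inputs.
nft 'add chain inet fw4 IPS { type filter hook forward priority filter ; }'
nft insert rule inet fw4 IPS counter queue num 4 bypass
```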

Download the Snort rules and extract them into /etc/snort/, then replace the alert action with block in the rules files:

for i in /etc/snort/rules/*.rules; do sed -i 's/^alert/block/' "$i"; done
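If you want to sanity-check that sed expression first, you can run it against a throwaway file (the sample rule below is made up for illustration):

```shell
#!/bin/sh
# Try the alert -> block substitution on a temporary file before
# letting the loop above loose on /etc/snort/rules/*.rules.
tmp="$(mktemp)"
printf '%s\n' 'alert icmp any any -> any any ( msg:"PING test"; sid:1000001; )' > "$tmp"
sed -i 's/^alert/block/' "$tmp"   # same expression as the loop above
converted="$(cat "$tmp")"
rm -f "$tmp"
echo "$converted"                 # the action is now "block"
```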

Update RULE_PATH in /etc/snort/snort_default.lua

-- Path to your rules files (this can be a relative path)
RULE_PATH = '/etc/snort/rules'
BUILTIN_RULE_PATH = '/etc/snort/builtins'
PLUGIN_RULE_PATH = '/etc/snort/so_rules'

Update /etc/snort/snort.lua:
Check that ips is in inline mode and add the rules for IPS.

ips =
{
    mode = 'inline',
    variables = default_variables,
    rules = [[
    -- Update the rules here
    include $RULE_PATH/snort3-server-web.rules
    include $RULE_PATH/snort3-protocol-icmp.rules
    ]]
}

Add a daq section for nfq in snort.lua:

daq =
{
    module_dirs = { '/usr/lib/daq' },
    inputs = { '4' },
    modules =
    {
        {
            name = 'nfq',
            mode = 'inline',
            variables = { 'device=eth1' } -- eth1 is the wan interface
        }
    }
}

Run the test:

> snort -c /etc/snort/snort.lua -Q -T

Update /etc/init.d/snort:

> procd_set_param command $PROG -c "$config_dir/snort.lua" -A "$alert_module" -Q -M

Depending on your hardware or configuration, you can use -q instead of -M so that the Snort startup log is not shown in logread or syslog-ng.

Restart snort
> /etc/init.d/snort restart

In the log:

> May 25 08:53:16 OpenWrt snort[31733]: Finished /etc/snort//snort.lua:
> May 25 08:53:16 OpenWrt snort: Loading ips.rules:
> May 25 08:53:16 OpenWrt snort: Loading /etc/snort/rules/snort3-protocol-icmp.rules:
> May 25 08:53:16 OpenWrt snort: Finished /etc/snort/rules/snort3-protocol-icmp.rules:
> May 25 08:53:16 OpenWrt snort: Finished ips.rules:
> May 25 08:53:16 OpenWrt snort: --------------------------------------------------
> May 25 08:53:16 OpenWrt snort: rule counts
> May 25 08:53:16 OpenWrt snort:        total rules loaded: 149
> May 25 08:53:16 OpenWrt snort:                text rules: 149
> May 25 08:53:16 OpenWrt snort:             option chains: 149
> May 25 08:53:16 OpenWrt snort:             chain headers: 4
> May 25 08:53:16 OpenWrt snort: --------------------------------------------------
> May 25 08:53:16 OpenWrt snort: port rule counts
> May 25 08:53:16 OpenWrt snort:              tcp     udp    icmp      ip
> May 25 08:53:16 OpenWrt snort:      any       1       1     149       1
> May 25 08:53:16 OpenWrt snort:    total       1       1     149       1
> May 25 08:53:16 OpenWrt snort: --------------------------------------------------
> May 25 08:53:16 OpenWrt snort: ips policies rule stats
> May 25 08:53:16 OpenWrt snort:               id  loaded  shared enabled    file
> May 25 08:53:16 OpenWrt snort:                0     149       0     149    /etc/snort//snort.lua
> May 25 08:53:16 OpenWrt snort: --------------------------------------------------
> May 25 08:53:16 OpenWrt snort: fast pattern port groups        src     dst     any
> May 25 08:53:16 OpenWrt snort:                    packet:        0       0       4
> May 25 08:53:16 OpenWrt snort: --------------------------------------------------
> May 25 08:53:16 OpenWrt snort: search engine
> May 25 08:53:16 OpenWrt snort:                 instances: 4
> May 25 08:53:16 OpenWrt snort:                  patterns: 55
> May 25 08:53:16 OpenWrt snort:             pattern chars: 721
> May 25 08:53:16 OpenWrt snort:                num states: 544
> May 25 08:53:16 OpenWrt snort:          num match states: 85
> May 25 08:53:16 OpenWrt snort:              memory scale: KB
> May 25 08:53:16 OpenWrt snort:              total memory: 21.8525
> May 25 08:53:16 OpenWrt snort:            pattern memory: 2.84863
> May 25 08:53:16 OpenWrt snort:         match list memory: 8.28125
> May 25 08:53:16 OpenWrt snort:         transition memory: 10.2227
> May 25 08:53:16 OpenWrt snort: --------------------------------------------------
> May 25 08:53:16 OpenWrt snort: nfq DAQ configured to inline.
> May 25 08:53:16 OpenWrt snort: Commencing packet processing

Test with the ICMP rules.
Try to ping any server from a client...

> May 25 08:56:15 OpenWrt snort: [1:366:11] "PROTOCOL-ICMP PING Unix" [Classification: Misc activity] [Priority: 3] {ICMP} 192.168.1.110 -> 8.8.8.8
> May 25 08:56:15 OpenWrt snort: [1:29456:3] "PROTOCOL-ICMP Unusual PING detected" [Classification: Information Leak] [Priority: 2] {ICMP} 192.168.1.110 -> 8.8.8.8
> May 25 08:56:15 OpenWrt snort: [1:384:8] "PROTOCOL-ICMP PING" [Classification: Misc activity] [Priority: 3] {ICMP} 192.168.1.110 -> 8.8.8.8

Thanks, looks good!

A couple of questions, but first I have to say I'm currently only running in IDS mode (detection, not prevention).

  1. What does disabling gro accomplish? My snort3 install works fine with it enabled:
$ ethtool -k eth0 | grep receive-offload
generic-receive-offload: on
large-receive-offload: off
  2. What does adding the queue accomplish? Again, I'm getting good results without the kmod, no filter chain, no rule, no inputs in my daq section and so on... Is this just to get a counter on the packets sent through snort?

Do you know if either or both of these impact performance/throughput in any way?

For 1, you can read [Snort Blog: Running Snort on Commodity Hardware - The Pitfalls of Mass Offloading](https://blog.snort.org/2016/08/running-snort-on-commodity-hardware.html)

The IPS configuration is more complicated than IDS. We need an nftables rule to send packets to userspace so that Snort can analyze them and drop them if they match a Snort rule. IPS affects performance/throughput more than IDS, but that depends on your hardware and the number of rules Snort loads.


Aha, so my "works fine" is actually "works fine, until it doesn't..."

I've condensed the offload issue into a function in my rules-fetching script:

disable_offload()
{
    # From https://forum.openwrt.org/t/snort-3-nfq-with-ips-mode/161172
    # https://blog.snort.org/2016/08/running-snort-on-commodity-hardware.html
    local wan="$(uci get network.wan.device)"
    if ethtool -k "$wan" | grep -q 'receive-offload: on' ; then
        ethtool -K "$wan"   gro off   lro off   2> /dev/null
    fi
}
disable_offload

Digging in somewhat, it seems like the above should be in /etc/hotplug.d/iface/ somewhere early (10-snort?), with a check for $ACTION = ifup and $device = $wan...
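Something like the following might work as /etc/hotplug.d/iface/10-snort (an untested sketch; $ACTION and $DEVICE come from the hotplug environment, and the decision is split into a helper so the logic can be checked in isolation):

```shell
#!/bin/sh
# Hypothetical /etc/hotplug.d/iface/10-snort -- re-disable offloads
# whenever the wan device comes up.

# Pure decision helper: act only on ifup of the wan device.
should_disable() {
    # $1 = action, $2 = event device, $3 = wan device
    [ "$1" = "ifup" ] && [ "$2" = "$3" ]
}

# Empty if uci is unavailable, in which case we do nothing.
wan="$(uci -q get network.wan.device 2>/dev/null || true)"
if [ -n "$wan" ] && should_disable "${ACTION:-}" "${DEVICE:-}" "$wan"; then
    ethtool -K "$wan" gro off lro off 2>/dev/null
fi
```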

Does anyone know another way to persist the NIC settings? Or alternatively, a way to set it via the usual device file mantra echo 0 > /sys/devices/??? (Either would be nice, as it would eliminate the need to install ethtool.)

Edit: I'm guessing those settings are a bit field in /sys/class/net/eth0/device/config and hence the need for ethtool to decode/set these values.

Thank you, it works. But I have one more improvement: you don't need to rewrite the rules from alert to block (drop); it's enough to add action_override = 'block' in the ips section.

1 Like

I found another problem: the queue entry is killed by some script when you add it during the startup process, for example via rc.local. I could observe it during startup: first the rule was there in the firewall rules, and after OpenWrt finished booting it was gone again. I solved the problem with a small script containing a 300-second sleep before the rule is added. I will keep watching to see whether it also happens during runtime; maybe it would be better to put everything in a separate table, like banip does for example.

Correct, @cuongdao mentions that in a note above. Whenever the firewall is reloaded, anything like that is transient. To make it survive firewall reloads, put it in /etc/firewall.user and add that file to the firewall includes in /etc/config/firewall like this:

config include
        option enabled '1'
        option type 'script'
        option path '/etc/firewall.user'
        option fw4_compatible '1'

Test it with fw4 reload and then nft list chain inet fw4 IPS to see if the commands were executed.

Thank you for the instructions. But I'm not sure this is the ideal configuration, because the performance is much worse than with afpacket, which is a bit strange because I don't see any CPU bottleneck. I think there is a reason why banip uses its own table, maybe for performance?

Are you using htop to find bottlenecks? Assuming so, do you have "Hide kernel threads" turned off? The default is to hide them, so they might not be showing.

I'm just guessing, but creating a separate table would probably only give a tiny improvement if anything. The packet pipeline through nftables is pretty streamlined already.

Snort is going to be a hog no matter how you deal with it. If I'm recalling right, it's still single-threaded internally (or was that the reason for the Snort 2 -> 3 rewrite? I'm probably mixing things up). I think we need to get Suricata up and running on OpenWrt and see if it, with its much more modern take on things, does as good a job at IDS/IPS while hopefully reducing resource utilization (I noted on first startup that Snort gobbled 255 MB of RAM, gah!).

I see you've already been to @darksky's thread on performance issues, but in case anyone else wants to jump in...

No, CPU performance problems are not the cause, because I had the afpacket DAQ running before and got full speed with VPN there. It seems like the queue is slowing things down, because creating a separate table and adding the rules to it already brought a visible improvement. Maybe we should make one queue for TCP and one for UDP, or one for inbound and one for outbound; Snort can handle multiple streams at the same time, which should alleviate the bottleneck. The problem is that I have no idea about nftables. In any case this configuration is more performant, and there is no problem with firewall reload.

nft add table inet snort
nft 'add chain inet snort IPS { type filter hook forward priority filter ; }'
nft insert rule inet snort IPS counter queue num 4 bypass


You can detect the family with a meta l4proto match on the rule. The beauty of nft becomes apparent, as this grabs both IPv4 and IPv6 packets.

$ nft add rule inet snort IPS   meta l4proto tcp  counter queue num 4 bypass
$ nft add rule inet snort IPS   meta l4proto udp  counter queue num 5 bypass

$ nft list table inet snort
table inet snort {
        chain IPS {
                type filter hook forward priority filter; policy accept;
                meta l4proto tcp counter packets 0 bytes 0 queue flags bypass to 4
                meta l4proto udp counter packets 0 bytes 0 queue flags bypass to 5
        }
}

Then, of course, add queue '5' to the inputs in your snort.lua...

Thank you, that should work better. Apparently the reverse path filter is also a problem: when I disabled it, the performance was better too. But I think it is turned off by default.

Oops, those two rules alone ignore the non-TCP/UDP packets like ICMPv6, so I think we also need another one to grab all of those:

nft add rule inet snort IPS  'meta l4proto != {tcp, udp}  counter queue num 7 bypass '

Thanks, I will add it to my script, but I will test the performance first. I have now created 2 streams with the -z parameter in the start line, but I still have to check that there is no error, because I added the parameter directly in the service file, which is easier.

I found the problem: "nfnetlink_queue: nf_queue: full at 1024 entries, dropping packets(s)". It seems the queue is too small.

So, the full nf_queue is strangely not the reason for the lack of performance; that can be handled quite well with --daq-var queue_maxlen=8192, or with some delay. The problem seems to be a kernel limit/bug that caps the speed of a single queue: tests with a larger queue and only a third of the Snort rules brought no improvement. What did bring success was splitting the traffic across several queues and handing them to Snort individually. I solved it like this:

nft add table inet snort
nft 'add chain inet snort IPS { type filter hook forward priority filter ; }'
nft insert rule inet snort IPS counter queue num 4-6 bypass

Then pass queues 4, 5 and 6 individually to Snort (e.g. -i '5' -i '6' ... -i 'x', or adding them to snort.lua) and start Snort with the -z 3 parameter on the command line. You can use more queues to increase performance, but not more than one per core. For this, queue_maxlen should be set high (I use 8192, but this could be too much) and the snaplen should be set to ~64000. There is still a disadvantage: servers that use only one connection are limited to the throughput of one queue; for me the limit was max 75 Mbit. Oh, and tcp-segmentation-offload should also be disabled on the network cards with ethtool (ethtool -K eth(x) tso off).
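Putting those pieces together, the whole multi-queue setup might look like this (a sketch based on the values in this post; double-check the Snort flag spellings against snort --help on your build):

```shell
# Fan forwarded traffic out over queues 4-6; the kernel balances flows
# across the range, so each flow stays on one queue.
nft add table inet snort
nft 'add chain inet snort IPS { type filter hook forward priority filter ; }'
nft insert rule inet snort IPS counter queue num 4-6 bypass

# Hand each queue to Snort and run one packet thread per queue (-z 3).
# queue_maxlen and snaplen values are the ones from the post above.
snort -c /etc/snort/snort.lua -Q \
    --daq nfq --daq-var queue_maxlen=8192 -s 64000 \
    -i 4 -i 5 -i 6 -z 3
```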


I'm still trying to get data to pass through the queue on my N5105 box. Everything seems to work fine in a VM, but once I put it on the box with Intel I-226 NICs, it either locks up or nothing passes through Snort...

In any case, here are some pieces for you to make life easier. Put this in /etc/snort/snort-table.sh:

#!/bin/sh

verbose=false

nft list tables | grep -q 'snort' && nft flush table inet snort

nft -f - <<TABLE
    table inet snort {
        chain IPS {
            type filter hook forward priority filter; policy accept;

            counter  queue flags bypass to 4-6

#           meta l4proto tcp               counter  queue flags bypass to 4
#           meta l4proto udp               counter  queue flags bypass to 5
#           meta l4proto != { tcp, udp }   counter  queue flags bypass to 6
        }
    }
TABLE

$verbose && nft list table inet snort

exit 0

Point to it in /etc/config/firewall:

config include
        option enabled '1'
        option type 'script'
        option path '/etc/snort/snort-table.sh'
        option fw4_compatible '1'

Now fw4 reload and reboots will re-initialize your snort table (or create it from scratch). Whenever you change the script, just do another reload.


Thanks for your script, I will work it in when I get the chance; for now I'm glad that it runs. I once had the problem of packets being let through, but I don't know exactly what caused it; I think it was a network card option that was enabled, probably the software flow offloading option, which automatically enables large-receive-offload. In general, generic-receive-offload, large-receive-offload and tcp-segmentation-offload must all be disabled; all three had a visible influence on performance and function in my tests.
The snaplen is also important: according to https://github.com/snort3/libdaq/blob/master/modules/nfq/README.nfq.md the packets will come up from the kernel defragmented.

@efahl have you enabled IPv6? If so, have you added the IPv6 traffic to the queue? I ask because I saw an example with iptables where the IPv6 traffic had to be added to the same queue number with a second command.

You don't have to modify the rules; simply use action_override in the ips section of /etc/snort/local.lua.

Example:

ips = {
  -- mode = 'tap',
  mode = 'inline',
  variables = default_variables,
  action_override = 'drop',
  include = RULE_PATH .. '/snort.rules',
}

EDIT: AH! I see xxxx already suggested this.