Oh, what you're doing there isn't great; better to pass the parameters on the command line, because they're changeable. For example, with your bandwidth it would be better to use 4 queues or more, but since you only have 4 threads, unfortunately only 4 are possible. That means you'd have to change this line in Efahl's script:
counter queue flags bypass to 4-6
to:
counter queue flags bypass to 4-7
Then you start snort with the parameters:
snort -q -c "/etc/snort/snort.lua" -i "4" -i "5" -i "6" -i "7" --daq-dir /usr/lib/daq --daq nfq -Q -z 4 -s 64000 --daq-var queue_maxlen=8192
As you can see, with each additional queue the -z parameter has to be changed as well, and that is easier to handle on the command line.
Yeah, I'm a coder from waaaay back, so I put everything into the config files and minimize the command line.
(Aside: I'm working toward being able to specify all this stuff in UCI /etc/config/snort as settings, then generating the appropriate config when /etc/init.d/snort is launched.)
snaplen can be put into the config as a parameter of the daq section:
-z/--max-packet-threads (and many other CLI options) may be specified in the snort values:
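To illustrate both points, here is a minimal sketch of the relevant snort.lua fragments, with queue numbers, snaplen, and thread count taken from the command line earlier in the thread. The daq table parameters (module_dirs, inputs, snaplen, modules) are from the Snort 3 daq module; the exact key syntax in the snort table is an assumption, so check `snort --help-module snort` before relying on it:

```lua
-- daq section: config-file equivalent of
--   --daq nfq --daq-dir /usr/lib/daq -i 4 -i 5 -i 6 -i 7
--   -s 64000 --daq-var queue_maxlen=8192
daq = {
    module_dirs = { '/usr/lib/daq' },
    inputs = { '4', '5', '6', '7' },
    snaplen = 64000,
    modules = {
        {
            name = 'nfq',
            mode = 'inline',
            variables = { 'queue_maxlen=8192' },
        },
    },
}

-- snort section: CLI options as config values, e.g. -z 4
-- (key style here is an assumption, verify against your build)
snort = {
    ['--max-packet-threads'] = 4,
}
```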
Putting it in the config isn't so useful in this case: the command line overrides the values in snort.lua, and on the command line you can see at a glance which important parameters Snort is running with. For a non-coder it's also not so nice, because one missing or incorrect character quickly leads to an abort with a syntax error; you always have to remember that not every user is a programmer. For that reason I find the use of Lua as a config file quite off-putting; the old snort.conf files were better in that regard. What would make sense is a script where you enter the desired number of queues, and which then automatically adjusts the number of queues in the queue start script and the -i and -z parameters in the service file.
Oh yes, the variables = { 'device=eth1' } entry can be omitted for nfq; I haven't noticed any difference with or without it.
Yes, exactly, and setting up the nft tables correspondingly. Here's a very rough draft of my current thinking.
# cat /etc/config/snort
config snort 'snort'
	option enabled '1'
	option config_dir '/etc/snort/'
	option mode 'ips'           # or 'ids'; maybe better names 'detectonly' and 'prevent'?
	option mode_action 'block'  # or 'alert', 'reject'; don't know what makes sense yet
	option method 'nfq'         # or 'afpacket' or ???
	option nfq_queue_count '4'
	option ...                  # maybe put max queue length and snaplen in here, too
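As a sketch of what the init script could generate from those settings: derive snort's -i/-z arguments and the nft queue range from a single queue count, so the two can never drift apart. The starting queue number 4 matches the nft rules in this thread; the NFQ_FIRST/NFQ_COUNT variables are assumptions standing in for values a real init script would read from /etc/config/snort via uci:

```shell
#!/bin/sh
# Sketch: build the nft queue spec and snort's -z/-i arguments from
# one count. NFQ_FIRST/NFQ_COUNT would come from UCI in real use.
NFQ_FIRST=4
NFQ_COUNT=4

NFQ_LAST=$((NFQ_FIRST + NFQ_COUNT - 1))

# nft rule fragment, e.g. "counter queue flags bypass to 4-7"
queue_spec="$NFQ_FIRST-$NFQ_LAST"

# snort arguments: one thread and one -i per queue
args="-z $NFQ_COUNT"
q=$NFQ_FIRST
while [ "$q" -le "$NFQ_LAST" ]; do
    args="$args -i $q"
    q=$((q + 1))
done

echo "queue spec: $queue_spec"
echo "snort args: $args"
```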
Once I get it (a lot) more mature, I'll get with @darksky, as I believe John is the current maintainer of the OpenWrt snort package, and see if we can make this whole thing a lot easier to deploy. It's pretty wild right now; I've still got a lot of questions about how various things behave.
My current experiments have gotten to the point where I can block the following paths. The test router is 10.1.1.20 on the WAN and 192.168.1.1 on the LAN.
LAN -> router-eth0
Sun May 28 14:56:28 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.121 -> 10.1.1.20
WAN -> router-eth0
Sun May 28 16:06:29 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 10.1.1.200 -> 10.1.1.20
LAN -> WAN
Sun May 28 16:06:52 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.121 -> 10.1.1.200
Sun May 28 16:07:46 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.121 -> 8.8.8.8
LAN -> router-br-lan
Sun May 28 16:08:27 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.121 -> 192.168.1.1
router -> router
Sun May 28 18:14:55 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 10.1.1.20 -> 10.1.1.20
Sun May 28 18:15:36 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.1 -> 192.168.1.1
If I ping from the router to anything else, it gets through, e.g., ping -c4 8.8.8.8 (real WAN) or 10.1.1.200 (testing WAN) or 192.168.1.121 (testing LAN) all respond and no log entries are generated.
I'm using three queues, each in their own chain, along with three threads in snort:
inet snort table with three chains
# nft list table inet snort
table inet snort {
chain input_ips {
type filter hook input priority mangle; policy accept;
counter queue flags bypass to 4
}
chain forward_ips {
type filter hook forward priority mangle; policy accept;
counter queue flags bypass to 5
}
chain prerouting_ips {
type filter hook prerouting priority mangle; policy accept;
counter queue flags bypass to 6
}
}
Has anyone been able to block pings originating from the router itself? (This seems like a major item, as if your router is compromised, lateral movement through the network is really trivial.)
Has anyone found a good reference for rule syntax? My attempts to create an ICMPv6 equivalent test rule have all failed.
As long as afpacket doesn't work properly, it drops out as an IPS option; that leaves only nfq, and with nfq reject doesn't work, so only alert, drop, and block remain. I thought I had read somewhere that block kills the connection right away, so drop would be the better choice. Pcap makes a good IDS because it can also be bound to virtual network devices. The names are good; everyone understands them.
The reason you can still ping from the router could be the queue: as far as I know, nftables distinguishes between local and external packets. In my nftables table the queue sits on the forward hook, but the rules for local traffic go under the input/output hooks. You'll probably need to create an extra queue for local traffic first and bind it to Snort.
//edit
nft 'add chain inet snort local { type filter hook output priority filter ; }'
nft insert rule inet snort local counter queue num 7 bypass
With these rules, Snort can also block output traffic originating from the router itself.
Snort is ignoring the logging to /mnt/mmcblk0p3/alert_fast.txt, which is defined in my ok.lua.
If I do not hide kernel threads, I see CPU saturation on several cores during a speed test, which limits bandwidth from over 1000 Mbps without snort running to around 100-200 Mbps with it.
Yes, that was to be expected; bandwidth drops because Snort is very demanding, and always has been. Logging does work for me, though; note that Snort creates multiple log files, one per queue, which are then named 0_alert_fast.txt, 1_alert_fast.txt, and so on. The problem of Snort not blocking local pings I already solved earlier in this thread: it comes from the nature of the queue. nftables distinguishes between local traffic to and from the device and traffic passing through it from other devices, so you need to create an extra queue with the input or output hook.
Well, since afpacket doesn't work, that's probably not it. The problem is that the queue is limited, but I don't know by what; I suspect an internal kernel limit or a bug. Can you post the performance with only one connection or only one queue?
It's clear that with the flush command you probably deleted the queue rules. I reach up to 75 Mbit at most, usually around 60; since the difference between us is so large, that points to a CPU limit, which doesn't surprise me. See whether performance goes up if you reduce the rule set; many rules are unnecessary because they're intended for servers.
Yes, that two-chain ruleset is correct (you can delete those "meta l4proto" comment lines; they were just an experiment that was resolved with the "4-6" syntax). See also the hidden section "inet snort table with three chains" in post 16, above.
I'm not looking at speed yet, as I'm still trying to fully understand where I can intercept all packets that originate from, are destined for, or pass through the router. From this excellent set of articles I read last fall, it looks like the only reliable hook is postrouting, as prerouting misses local processes (e.g., ping from the router) and the other hooks depend on the routing decision. I'm still trying to figure out whether it makes more sense to check before or after the nat priority.
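For reference, a postrouting-hook chain in the same command style as the output chain posted earlier in this thread might look like this. This is a sketch only: the chain name and queue number 7 are assumptions, and whether `priority mangle` (which runs before the srcnat priority) is the right spot is exactly the open question above:

```
nft 'add chain inet snort postrouting_ips { type filter hook postrouting priority mangle ; }'
nft 'add rule inet snort postrouting_ips counter queue num 7 bypass'
```

Queue 7 would then also need its own -i argument (and a bumped -z) on the snort command line.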
Well, there is another way to create a queue, with conntrackd: https://conntrack-tools.netfilter.org/manual.html, under user-space helpers. But I have no idea how, or whether, this works; my attempts failed with syntax errors.
The local queue is also not recommended for performance, but the chain with the output hook performed a bit better than the one with the input hook.
Hanging the queue on the postrouting hook works, but performance drops compared to the forward hook. The --daq-var fanout_type=hash parameter also influences performance.
I haven't tried the conntrack queues yet, but I did try an nft netdev table with an ingress hook, but it caused a kernel crash when snort started. Apparently don't do that!