IPS mode of snort3 is not dropping traffic

You have not set the snaplen — look in the other thread where I analyzed your config. You need to set a snaplen of ~64000.


I added -s 64000 to the snort start line. I am confused though... for IPS mode, shouldn't I be using pipeline = 'nfq' not pipeline = 'afpacket'?

I have modified my local.lua based on @efahl suggestion:

config   = 'IPS'      -- 'IDS' or 'IPS'
pipeline = 'afpacket' -- 'afpacket' or 'nfq'

if config == 'IDS' then
  mode   = 'tap'
  action = 'alert'
else
  mode   = 'inline'
  snort  = { ['-Q'] = true }
  action = 'drop'  -- 'block' or 'drop' or 'reject' or ???
end

if pipeline == 'afpacket' then
  inputs = { 'eth1' }
  vars   = {}
else
  inputs = { '4', '5', '6' } -- to match queue numbers in 'inet snort' table
  vars   = { 'device=eth1', 'queue_maxlen=8192', }
end


ips = {
  mode            = mode,
  variables       = default_variables,
  action_override = action,
--  include         = RULE_PATH .. '/snort.rules',
  include         = RULE_PATH .. '/test.rules',
}

daq = {
  inputs      = inputs,
  module_dirs = { '/usr/lib/daq', },
  modules     = {
    {
      name = pipeline,
      mode = mode,
      variables = vars,
    },
  },
}

output.logdir = '/mnt/mmcblk0p3'
alert_fast = {
	file = true,
	packet = false,
}

--search_engine = { search_method = "hyperscan" }
--detection = { hyperscan_literals = true, pcre_to_regex = true }

normalizer = {
  tcp = {
    ips = true,
  },
}

file_policy = {
  enable_type = true,
  enable_signature = true,
  rules = {
    use = {
      verdict = 'log', enable_file_type = true, enable_file_signature = true
    },
  },
}

And to be clear, I am starting snort like this:

# snort -c /etc/snort/snort.lua -s 64000 --tweaks local

Oh, what you are doing there is not great; better to pass the parameters on the start line, because they are changeable. For example, for your bandwidth it would be better to use 4 queues or more. You have 4 threads (unfortunately only 4 are possible), which means you would have to change this line in Efahl's script:

counter queue flags bypass to 4-6

to:

counter queue flags bypass to 4-7
Then you start snort with the parameters:
snort -q -c "/etc/snort/snort.lua" -i "4" -i "5" -i "6" -i "7" --daq-dir /usr/lib/daq --daq nfq -Q -z 4 -s 64000 --daq-var queue_maxlen=8192

As you can see, with another queue the -z parameter also has to be changed, and that is easier to handle on the command line.

Yeah, I'm a coder from waaaay back, so I put everything into the config files and minimize the command line. :grin:

(Aside: I'm working toward being able to specify all this stuff in UCI /etc/config/snort as settings, then generating the appropriate config when /etc/init.d/snort is launched.)

snaplen can be put into the config as a parameter of the daq section:
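For example, something along these lines (a sketch, not verified against this exact setup — Snort 3's `daq` table accepts a `snaplen` parameter, shown here merged with the directory setting already used in this thread):

```lua
daq = {
  snaplen     = 64000,               -- replaces '-s 64000' on the command line
  module_dirs = { '/usr/lib/daq', },
}
```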

-z/--max-packet-threads (and many other CLI options) may be specified in the snort values:

  snort  = {
    ['-Q'] = true,
    ['--max-packet-threads'] = 3,
  }

Putting it in the config is not so useful in this case, because the command line overwrites the values in snort.lua, and it also lets you see clearly which important parameters Snort runs with. For a non-coder it is not so nice, because a single missing or incorrect character quickly leads to an abort due to a syntax error. You always have to remember that not every user is a programmer, so I find the use of Lua as a config format quite off-putting; the old snort.conf files were better in that respect. What would make sense is a script where you enter the desired number of queues, and which then automatically adjusts the number of queues in the queue start script and the -i and -z parameters in the service file.
Oh yes, the variables = { 'device=eth1' } entry can be omitted for nfq; I have not noticed any difference between it being present and absent.

Yes, exactly, and setting up the nft tables correspondingly. Here's a very rough draft of my current thinking.

# cat /etc/config/snort
config snort 'snort'
        option enabled '1'
        option config_dir '/etc/snort/'
        option mode 'ips' # or 'ids', maybe better names 'detectonly' and 'prevent'?
        option mode_action 'block' # 'alert', 'reject', don't know what makes sense yet
        option method 'nfq'  # or 'afpacket' or ???
        option nfq_queue_count '4'
        option ... maybe put max queue length and snaplen in here, too.
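A generator going in that direction could map those UCI options onto a snort command line. This is a rough illustration only: the option names come from the draft config above, the `uci get` calls are replaced by hard-coded values, and the queue numbering (starting at 4) follows the nft table used in this thread.

```shell
#!/bin/sh
# Hypothetical values; the real init script would read these with
# e.g. `uci -q get snort.snort.mode` and friends.
mode='ips'      # 'ips' or 'ids'
method='nfq'    # 'nfq' or 'afpacket'
queues=4        # nfq_queue_count

args="-c /etc/snort/snort.lua -s 64000 --daq-dir /usr/lib/daq --daq $method"
[ "$mode" = 'ips' ] && args="$args -Q"

# NFQ queue numbers start at 4 to match the 'inet snort' nft table.
i=4
last=$((4 + queues - 1))
while [ "$i" -le "$last" ]; do
    args="$args -i $i"
    i=$((i + 1))
done
args="$args -z $queues --daq-var queue_maxlen=8192"

echo "snort $args"
```

With `queues=4` this emits one `-i` per queue (4 through 7) and sets `-z 4` to match, which is exactly the coupling between queue count and thread count discussed above.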

Once I get it (a lot) more mature, I'll get with @darksky as I believe John is the current maintainer of the OpenWrt snort package, and see if we can make this whole thing a lot easier to deploy. It's pretty wild right now, I've got a lot of questions yet about how various things behave.

My current experiments have gotten to the point where I can block pings in the cases shown below.

Test router is on the WAN and on the LAN.

LAN -> router-eth0
Sun May 28 14:56:28 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} ->
WAN -> router-eth0
Sun May 28 16:06:29 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} ->
Sun May 28 16:06:52 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} ->
Sun May 28 16:07:46 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} ->
LAN -> router-br-lan
Sun May 28 16:08:27 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} ->
router -> router
Sun May 28 18:14:55 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} ->
Sun May 28 18:15:36 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} ->

If I ping from the router to anything else, it gets through, e.g., ping -c4 (real WAN) or (testing WAN) or (testing LAN) all respond and no log entries are generated.

I'm using three queues, each in their own chain, along with three threads in snort:

inet snort table with three chains
# nft list table inet snort
table inet snort {
        chain input_ips {
                type filter hook input priority mangle; policy accept;
                counter   queue flags bypass to 4
        }

        chain forward_ips {
                type filter hook forward priority mangle; policy accept;
                counter   queue flags bypass to 5
        }

        chain prerouting_ips {
                type filter hook prerouting priority mangle; policy accept;
                counter   queue flags bypass to 6
        }
}

  1. Has anyone been able to block pings originating from the router itself? (This seems like a major item, as if your router is compromised, lateral movement through the network is really trivial.)

  2. Has anyone found a good reference for rule syntax? My attempts to create an ICMPv6 equivalent test rule have all failed.

As long as afpacket doesn't work properly, it drops out as an IPS option; that leaves only nfq, and with nfq no reject works, leaving only alert, drop and block. I thought I had read somewhere that block kills the connection right away, so drop would be the better choice. Pcap is a good IDS because it can also be bound to virtual network devices. The names are good; everyone will understand them.

The problem that you can still ping from the router could be due to the queue. As far as I know, nftables distinguishes between local and external packets: in my nftables table the queue sits in a chain with hook forward, while the rules for local traffic go under hook input/output. You'll probably need to create an extra queue for local traffic first and bind it to Snort.

nft 'add chain inet snort local { type filter hook output priority filter ; }'
nft insert rule inet snort local counter queue num 7 bypass

With these rules Snort can block output traffic from the router itself.

  • This works to stop the ping test on a PC.
  • Logging to /mnt/mmcblk0p3/alert_fast.txt, which is defined in my ok.lua, is being ignored.
  • If I do not hide kernel threads, I see CPU saturation on several cores during a speed test, which limits bandwidth from over 1000 Mbps (without snort) to around 100-200 Mbps.
Running snort CLI
# snort -c "/etc/snort/snort.lua" -i "4" -i "5" -i "6" -i "7" --daq-dir /usr/lib/daq --daq nfq -Q -z 4 -s 64000 --daq-var queue_maxlen=8192 --tweaks ok
o")~   Snort++
Loading /etc/snort/snort.lua:
Loading homenet.lua:
Finished homenet.lua:
Loading snort_defaults.lua:
Finished snort_defaults.lua:
Loading ok.lua:
Finished ok.lua:
Finished /etc/snort/snort.lua:
Loading file_id.rules_file:
Loading file_magic.rules:
Finished file_magic.rules:
Finished file_id.rules_file:
Loading rules/test.rules:
Finished rules/test.rules:
ips policies rule stats
              id  loaded  shared enabled    file
               0     209       0     209    /etc/snort/snort.lua
rule counts
       total rules loaded: 209
               text rules: 209
            option chains: 209
            chain headers: 2
port rule counts
             tcp     udp    icmp      ip
     any       0       0       1       0
   total       0       0       1       0
service rule counts          to-srv  to-cli
                   dcerpc:      208     208
                 ftp-data:      208     208
                     http:      208     208
                    http2:      208     208
                    http3:      208     208
                     imap:      208     208
              netbios-ssn:      208     208
                     pop3:      208     208
                     smtp:      208     208
                    total:     1872    1872
fast pattern groups
                to_server: 9
                to_client: 9
search engine (ac_bnfa)
                instances: 18
                 patterns: 3744
            pattern chars: 22572
               num states: 16002
         num match states: 3330
             memory scale: KB
             total memory: 617.291
           pattern memory: 168.275
        match list memory: 245.953
        transition memory: 200.812
appid: MaxRss diff: 2540
appid: patterns loaded: 300
nfq DAQ configured to inline.
Commencing packet processing
++ [0] 4
++ [1] 5
++ [2] 6
++ [3] 7
^C** caught int signal
== stopping
-- [0] 4
-- [2] 6
-- [1] 5
-- [3] 7
Packet Statistics
                 received: 905754
                 analyzed: 905754
                    allow: 905285
                  replace: 4
                whitelist: 463
                blacklist: 2
                 rx_bytes: 954194408
                    total: 905754      	(100.000%)
                 discards: 2948        	(  0.325%)
                    icmp4: 4           	(  0.000%)
                 icmp4_ip: 2           	(  0.000%)
                     ipv4: 905754      	(100.000%)
                      raw: 905754      	(100.000%)
                      tcp: 695482      	( 76.785%)
                      udp: 210268      	( 23.215%)
Module Statistics
                  packets: 902805
        processed_packets: 902548
          ignored_packets: 257
           total_sessions: 1196
       service_cache_adds: 214
             bytes_in_use: 32528
             items_in_use: 214
                  packets: 209041
              raw_packets: 257
                new_flows: 1192
          service_changes: 147
                 inspects: 1449
                 analyzed: 905754
               hard_evals: 3
            file_searches: 2
                   alerts: 1
             total_alerts: 1
                   logged: 1
                  packets: 743
                 requests: 721
                responses: 22
              total_files: 2
          total_file_data: 943
     max_concurrent_files: 1
                    flows: 43
                    scans: 334
              reassembles: 326
              inspections: 326
                 requests: 156
                responses: 2
             get_requests: 156
       uri_normalizations: 2
  max_concurrent_sessions: 35
          pipelined_flows: 23
       pipelined_requests: 121
              total_bytes: 73202
        test_tcp_trim_syn: 2
        test_tcp_trim_win: 48
             tcp_trim_win: 91025
          test_tcp_ts_nop: 89
             tcp_ips_data: 8
           test_tcp_block: 88939
                  packets: 905754
                 trackers: 235
     non_qualified_events: 2
         qualified_events: 1
           searched_bytes: 943
                  packets: 29039
                  decoded: 29039
             client_hello: 104
             server_hello: 104
              certificate: 39
              server_done: 119
      client_key_exchange: 34
      server_key_exchange: 39
            change_cipher: 198
       client_application: 720
       server_application: 27546
     unrecognized_records: 540
     handshakes_completed: 37
         sessions_ignored: 37
  max_concurrent_sessions: 24
                    flows: 1192
             total_prunes: 96
              idle_prunes: 96
                 sessions: 3
                      max: 1
                  created: 3
                 released: 3
                 sessions: 264
                      max: 128
                  created: 264
                 released: 260
             instantiated: 264
                   setups: 264
                 restarts: 147
         discards_skipped: 88939
          invalid_seq_num: 42
              invalid_ack: 88869
                   events: 72
             syn_trackers: 141
            data_trackers: 119
              segs_queued: 307221
            segs_released: 307221
                segs_used: 301107
          rebuilt_packets: 29541
            rebuilt_bytes: 433646203
                 overlaps: 8
                     gaps: 3
        exceeded_max_segs: 91025
    payload_fully_trimmed: 4
          client_cleanups: 120
          server_cleanups: 36
              established: 3
                     syns: 141
                 syn_acks: 109
                   resets: 230
                     fins: 145
      inspector_fallbacks: 5
        partial_fallbacks: 22
                 max_segs: 3072
                max_bytes: 3555999
                 sessions: 925
                      max: 364
                  created: 929
                 released: 929
                 timeouts: 4
              total_bytes: 280454396
        bad_tcp4_checksum: 2640
        bad_udp4_checksum: 308
                tcp_scans: 651
                 tcp_hits: 147
               tcp_misses: 12
                udp_scans: 230
               udp_misses: 230
Appid Statistics
detected apps and services
              Application: Services   Clients    Users      Payloads   Misc       Referred  
                  unknown: 513        731        0          133        0          0         
Summary Statistics
                  signals: 1
                  runtime: 00:03:26
                  seconds: 206.231494
                 pkts/sec: 4392
                Mbits/sec: 35
o")~   Snort exiting


ips = {
  mode = 'inline',
  variables = default_variables,
  action_override = 'block',
--  include = RULE_PATH .. '/snort.rules',
  include = RULE_PATH .. '/test.rules',
}

-- To log to a file, uncomment the below and manually create the dir defined in output.logdir
output.logdir = '/mnt/mmcblk0p3'
alert_fast = {
	file = true,
	packet = false,
}

normalizer = {
  tcp = {
    ips = true,
  },
}

file_policy = {
  enable_type = true,
  enable_signature = true,
  rules = {
    use = {
      verdict = 'log', enable_file_type = true, enable_file_signature = true
    },
  },
}


nft list tables | grep -q 'snort' && nft flush table inet snort

nft -f - <<TABLE
    table inet snort {
        chain IPS {
            type filter hook forward priority filter; policy accept;

            counter  queue flags bypass to 4-7

#           meta l4proto tcp               counter  queue flags bypass to 4
#           meta l4proto udp               counter  queue flags bypass to 5
#           meta l4proto != { tcp, udp }   counter  queue flags bypass to 6
        }
    }
TABLE

$verbose && nft list table inet snort

exit 0

No, using the setup described in the post right before this one, pings on a box behind the router are blocked but on the router itself, they are not.

From PC:

% ping www.google.com
PING www.google.com ( 56(84) bytes of data.

From router:

# ping www.google.com
PING www.google.com ( 56 data bytes
64 bytes from seq=0 ttl=56 time=18.867 ms
64 bytes from seq=1 ttl=56 time=20.191 ms
--- www.google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss

Yes, it was to be expected that the bandwidth goes down; Snort is very demanding and always has been. As for the logging not working: note that Snort creates multiple log files, one per queue, named 0_alert_fast.txt, 1_alert_fast.txt, etc. The problem that Snort does not block local pings I have already addressed in this thread; it is due to the nature of the queue. nftables distinguishes between local traffic to and from the device and traffic passing through the device from other devices, so you need to create an extra queue with the hook input or output.


Thanks @xxxx, you were right... I was tail -f'ing the wrong file.

-rw-------    1 root     root           0 May 29 04:39 0_alert_fast.txt
-rw-------    1 root     root           0 May 29 04:39 1_alert_fast.txt
-rw-------    1 root     root         119 May 29 04:44 2_alert_fast.txt
-rw-------    1 root     root        2.5K May 29 04:52 3_alert_fast.txt
-rw-------    1 root     root        1.6M May 29 04:32 alert_fast.txt

A pity that the speed loss occurs. Is it due to running in nfq mode? Is there a more efficient IPS mode?

Well, since afpacket doesn't work, probably not. The problem is that the queue is limited, but I don't know by what; I suspect an internal kernel limit or a bug. Can you post the performance with only one connection or queue?

I set up one queue by modifying @efahl's script as shown below. Got download speeds of 19-20 Mbps.


nft list tables | grep -q 'snort' && nft flush table inet snort

nft -f - <<TABLE
    table inet snort {
        chain IPS {
            type filter hook forward priority filter; policy accept;

            counter  queue flags bypass to 4
        }
    }
TABLE

$verbose && nft list table inet snort

exit 0

Then ran snort like this:

# snort -c "/etc/snort/snort.lua" -i "4" --daq-dir /usr/lib/daq --daq nfq -Q -s 64000 --daq-var queue_maxlen=8192 --tweaks ok

If I drop the nft table for snort the download speed is fast again but of course, no blocking occurs.

# nft flush table inet snort

That is clear: with the flush command you have deleted the queue. I reach up to 75 Mbit max, usually around 60. Since the difference between us is so large, that points to a CPU limit, which doesn't surprise me. See if the performance goes up if you reduce the rule set; many rules are unnecessary, since they are intended for servers.



nft list tables | grep -q 'snort' && nft flush table inet snort

nft -f - <<TABLE
    table inet snort {
        chain IPS {
            type filter hook forward priority filter; policy accept;

            counter  queue flags bypass to 4-6
        }

        chain localinput {
            type filter hook input priority filter; policy accept;

            counter  queue flags bypass to 99
        }

#           meta l4proto tcp               counter  queue flags bypass to 4
#           meta l4proto udp               counter  queue flags bypass to 5
#           meta l4proto != { tcp, udp }   counter  queue flags bypass to 6
    }
TABLE

$verbose && nft list table inet snort

exit 0

I have adapted the script from efahl and extended it with a local queue. Please have a look at whether it works; I don't know if it is correct.

The rule set size doesn't affect the speed in my testing.

Yes, I have seen the same thing, but I had hoped it would help in your case. That probably leaves only a device with a stronger CPU.

Yes, that two-chain ruleset is correct (you can delete those "meta l4proto" comment lines, they were just an experiment that was resolved with the "4-6" syntax). See also the hidden section "inet snort table with three chains" in 16, above.

I'm not looking at speed yet, as I'm still trying to fully understand where I can intercept all packets that originate from, are destined for, or pass through the router. From this excellent set of articles I read last fall, it looks like the only reliable hook is postrouting, as prerouting misses local processes (e.g., ping from the router) and the other hooks depend on the routing decision. I'm still trying to figure out whether it makes more sense to check before or after the nat priority.
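If postrouting does turn out to be the right interception point, a chain along these lines might be worth testing (an untested sketch in the style of the earlier local-chain commands; the queue number 8 and the priority offset are my own guesses, and hooking after srcnat means Snort would see post-NAT addresses):

```
nft 'add chain inet snort post_ips { type filter hook postrouting priority srcnat + 5 ; policy accept ; }'
nft 'add rule inet snort post_ips counter queue flags bypass to 8'
```

Snort would then also need an extra `-i 8` input (and a matching thread count) to service that queue.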

Well, there is another way to create a queue, with conntrackd (https://conntrack-tools.netfilter.org/manual.html, under "user-space helpers"). But I have no idea how or whether this works; my attempts failed because of syntax errors.

The local queue is also not recommended, but the chain with the hook output performed a bit better than the one with the hook input.