IPS mode of snort3 is not dropping traffic

That's not a great approach; better to pass the parameters on the start line, because they tend to change. For example, for your bandwidth it would be better to use 4 queues or more, but you only have 4 threads, so unfortunately only 4 are possible. That means you would have to change this line in efahl's script:

counter queue flags bypass to 4-6

to:

counter queue flags bypass to 4-7
Then you start snort with the parameters:
snort -q -c "/etc/snort/snort.lua" -i "4" -i "5" -i "6" -i "7" --daq-dir /usr/lib/daq --daq nfq -Q -z 4 -s 64000 --daq-var queue_maxlen=8192

As you can see, with a different number of queues the -z parameter also has to change, and that is easier to handle on the command line.

Yeah, I'm a coder from waaaay back, so I put everything into the config files and minimize the command line. :grin:

(Aside: I'm working toward being able to specify all this stuff in UCI /etc/config/snort as settings, then generating the appropriate config when /etc/init.d/snort is launched.)

snaplen can be put into the config as a parameter of the daq section:
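For example, something like this (a sketch only; the queue numbers and paths are taken from the command line above, the rest follows the stock snort3 daq module schema):

  daq = {
    module_dirs = { '/usr/lib/daq' },
    inputs = { '4', '5', '6', '7' },   -- NFQ queue numbers (the -i values)
    snaplen = 64000,                   -- equivalent of -s 64000
    modules = {
      { name = 'nfq', mode = 'inline', variables = { 'queue_maxlen=8192' } },
    },
  }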

-z/--max-packet-threads (and many other CLI options) may be specified in the snort values:

  snort  = {
    ['-Q'] = true,
    ['--max-packet-threads'] = 3,
  }

Putting it in the config is not so useful in this case, because the command line overrides the values in snort.lua, and on the command line you can also see at a glance which important parameters Snort is running with. For a non-coder it's also not so nice: one missing or incorrect character quickly aborts the start with a syntax error. You always have to remember that not every user is a programmer, which is why I find the use of Lua as a config format quite off-putting; the old snort.conf files were better in that respect. What would make sense is a script where you enter the desired number of queues and which then automatically adjusts the number of queues in the queue start script and the -i and -z parameters in the service file.
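A rough sketch of what such a generator could look like (hypothetical; it assumes efahl's table/chain names and queues starting at 4, and only prints the snort command instead of editing the service file):

#!/bin/sh
# Hypothetical helper: rebuild the queue rule and print the matching
# snort command for N queues starting at queue 4.
N=${1:-4}
FIRST=4
LAST=$((FIRST + N - 1))

nft flush chain inet snort IPS 2>/dev/null
nft add rule inet snort IPS counter queue flags bypass to $FIRST-$LAST

ARGS=''
q=$FIRST
while [ "$q" -le "$LAST" ]; do
    ARGS="$ARGS -i $q"
    q=$((q + 1))
done
echo snort -q -c /etc/snort/snort.lua$ARGS --daq-dir /usr/lib/daq \
     --daq nfq -Q -z "$N" -s 64000 --daq-var queue_maxlen=8192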
Oh yes, the variables = { 'device=eth1' } entry can be omitted for nfq; I have not noticed any difference between it being present and absent.

Yes, exactly, and setting up the nft tables correspondingly. Here's a very rough draft of my current thinking.

# cat /etc/config/snort
config snort 'snort'
        option enabled '1'
        option config_dir '/etc/snort/'
        option mode 'ips' # or 'ids', maybe better names 'detectonly' and 'prevent'?
        option mode_action 'block' # 'alert', 'reject', don't know what makes sense yet
        option method 'nfq'  # or 'afpacket' or ???
        option nfq_queue_count '4'
        option ... maybe put max queue length and snaplen in here, too.
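The init script can then read these with the standard OpenWrt shell helpers; a minimal sketch (option names as in the draft above, defaults are placeholders):

# hypothetical fragment for /etc/init.d/snort
. /lib/functions.sh
config_load snort
config_get enabled snort enabled '0'
config_get mode    snort mode 'ids'
config_get method  snort method 'nfq'
config_get queues  snort nfq_queue_count '4'
[ "$enabled" = '1' ] || exit 0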

Once I get it (a lot) more mature, I'll get with @darksky, as I believe John is the current maintainer of the OpenWrt snort package, and see if we can make this whole thing a lot easier to deploy. It's pretty wild right now; I've still got a lot of questions about how various things behave.

My current experiments have gotten to the point where I can block in all of the following cases:

Test router is 10.1.1.20 on the WAN and 192.168.1.1 on the LAN.
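The test rule in rules/test.rules presumably looks like this (reconstructed from the alert output below, so the exact options are an assumption):

alert icmp any any -> any any ( msg:"TEST ALERT"; sid:10000010; rev:1; )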

LAN -> router-eth0
Sun May 28 14:56:28 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.121 -> 10.1.1.20
WAN -> router-eth0
Sun May 28 16:06:29 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 10.1.1.200 -> 10.1.1.20
LAN -> WAN
Sun May 28 16:06:52 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.121 -> 10.1.1.200
Sun May 28 16:07:46 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.121 -> 8.8.8.8
LAN -> router-br-lan
Sun May 28 16:08:27 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.121 -> 192.168.1.1
router -> router
Sun May 28 18:14:55 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 10.1.1.20 -> 10.1.1.20
Sun May 28 18:15:36 2023 auth.info snort: [1:10000010:1] "TEST ALERT" {ICMP} 192.168.1.1 -> 192.168.1.1

If I ping from the router to anything else, it gets through, e.g., ping -c4 8.8.8.8 (real WAN) or 10.1.1.200 (testing WAN) or 192.168.1.121 (testing LAN) all respond and no log entries are generated.

I'm using three queues, each in their own chain, along with three threads in snort:

inet snort table with three chains
# nft list table inet snort
table inet snort {
        chain input_ips {
                type filter hook input priority mangle; policy accept;
                counter   queue flags bypass to 4
        }

        chain forward_ips {
                type filter hook forward priority mangle; policy accept;
                counter   queue flags bypass to 5
        }

        chain prerouting_ips {
                type filter hook prerouting priority mangle; policy accept;
                counter   queue flags bypass to 6
        }
}
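In my config-file style, the matching snort side is roughly this (a sketch; daq.inputs and the CLI-option table follow the same schema as the earlier snippets, queue numbers match the chains above):

daq = { inputs = { '4', '5', '6' } }  -- one NFQ queue per chain
snort = {
  ['-Q'] = true,
  ['--max-packet-threads'] = 3,       -- one packet thread per queue
}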
  1. Has anyone been able to block pings originating from the router itself? (This seems like a major item, as if your router is compromised, lateral movement through the network is really trivial.)

  2. Has anyone found a good reference for rule syntax? My attempts to create an ICMPv6 equivalent test rule have all failed.

As long as afpacket doesn't work properly, it drops out as an IPS option; that leaves only nfq, and with nfq no reject works, so only alert, drop, and block remain. I thought I had read somewhere that block kills the connection right away, so drop would be the better choice. Pcap is a good IDS because it can also be bound to virtual network devices. The names are good; everyone will understand them.

The problem that you can still ping from the router could be due to the queue: as far as I know, nftables distinguishes between local and external packets. In my nftables table the queue sits on the forward hook, but rules for local traffic belong under the input/output hooks. You'll probably need to create an extra queue for local traffic first and bind it to Snort.

//edit
nft 'add chain inet snort local { type filter hook output priority filter ; }'
nft insert rule inet snort local counter queue num 7 bypass

With these rules Snort can block outgoing traffic from the router itself.
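(Snort then also has to listen on the new queue: an extra -i 7 on the command line, with -z increased to match, as in the command in the next post.)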

  • This works to stop the ping test on a PC.
  • It ignores the logging to /mnt/mmcblk0p3/alert_fast.txt that is defined in my ok.lua.
  • If I do not hide kernel threads, I see CPU saturation on several cores during a speed test, which limits bandwidth from over 1000 Mbps without snort running to around 100-200 Mbps.
Running snort CLI
# snort -c "/etc/snort/snort.lua" -i "4" -i "5" -i "6" -i "7" --daq-dir /usr/lib/daq --daq nfq -Q -z 4 -s 64000 --daq-var queue_maxlen=8192 --tweaks ok
--------------------------------------------------
o")~   Snort++ 3.1.62.0
--------------------------------------------------
Loading /etc/snort/snort.lua:
Loading homenet.lua:
Finished homenet.lua:
Loading snort_defaults.lua:
Finished snort_defaults.lua:
Loading ok.lua:
Finished ok.lua:
	ssh
	host_cache
	pop
	so_proxy
	stream_tcp
	mms
	smtp
	gtp_inspect
	packets
	dce_http_proxy
	alert_fast
	ips
	stream_icmp
	hosts
	normalizer
	binder
	wizard
	appid
	js_norm
	file_id
	http2_inspect
	http_inspect
	stream_udp
	ftp_data
	ftp_server
	search_engine
	port_scan
	dce_http_server
	dce_tcp
	dce_smb
	iec104
	cip
	telnet
	ssl
	sip
	rpc_decode
	netflow
	modbus
	host_tracker
	stream_user
	stream_ip
	process
	back_orifice
	classifications
	dnp3
	active
	trace
	ftp_client
	decode
	alerts
	stream
	references
	daq
	arp_spoof
	output
	network
	dns
	dce_udp
	imap
	file_policy
	s7commplus
	stream_file
Finished /etc/snort/snort.lua:
Loading file_id.rules_file:
Loading file_magic.rules:
Finished file_magic.rules:
Finished file_id.rules_file:
Loading rules/test.rules:
Finished rules/test.rules:
--------------------------------------------------
ips policies rule stats
              id  loaded  shared enabled    file
               0     209       0     209    /etc/snort/snort.lua
--------------------------------------------------
rule counts
       total rules loaded: 209
               text rules: 209
            option chains: 209
            chain headers: 2
--------------------------------------------------
port rule counts
             tcp     udp    icmp      ip
     any       0       0       1       0
   total       0       0       1       0
--------------------------------------------------
service rule counts          to-srv  to-cli
                   dcerpc:      208     208
                 ftp-data:      208     208
                     http:      208     208
                    http2:      208     208
                    http3:      208     208
                     imap:      208     208
              netbios-ssn:      208     208
                     pop3:      208     208
                     smtp:      208     208
                    total:     1872    1872
--------------------------------------------------
fast pattern groups
                to_server: 9
                to_client: 9
--------------------------------------------------
search engine (ac_bnfa)
                instances: 18
                 patterns: 3744
            pattern chars: 22572
               num states: 16002
         num match states: 3330
             memory scale: KB
             total memory: 617.291
           pattern memory: 168.275
        match list memory: 245.953
        transition memory: 200.812
appid: MaxRss diff: 2540
appid: patterns loaded: 300
--------------------------------------------------
nfq DAQ configured to inline.
Commencing packet processing
++ [0] 4
++ [1] 5
++ [2] 6
++ [3] 7
^C** caught int signal
== stopping
-- [0] 4
-- [2] 6
-- [1] 5
-- [3] 7
--------------------------------------------------
Packet Statistics
--------------------------------------------------
daq
                 received: 905754
                 analyzed: 905754
                    allow: 905285
                  replace: 4
                whitelist: 463
                blacklist: 2
                 rx_bytes: 954194408
--------------------------------------------------
codec
                    total: 905754      	(100.000%)
                 discards: 2948        	(  0.325%)
                    icmp4: 4           	(  0.000%)
                 icmp4_ip: 2           	(  0.000%)
                     ipv4: 905754      	(100.000%)
                      raw: 905754      	(100.000%)
                      tcp: 695482      	( 76.785%)
                      udp: 210268      	( 23.215%)
--------------------------------------------------
Module Statistics
--------------------------------------------------
appid
                  packets: 902805
        processed_packets: 902548
          ignored_packets: 257
           total_sessions: 1196
       service_cache_adds: 214
             bytes_in_use: 32528
             items_in_use: 214
--------------------------------------------------
back_orifice
                  packets: 209041
--------------------------------------------------
binder
              raw_packets: 257
                new_flows: 1192
          service_changes: 147
                 inspects: 1449
--------------------------------------------------
detection
                 analyzed: 905754
               hard_evals: 3
            file_searches: 2
                   alerts: 1
             total_alerts: 1
                   logged: 1
--------------------------------------------------
dns
                  packets: 743
                 requests: 721
                responses: 22
--------------------------------------------------
file_id
              total_files: 2
          total_file_data: 943
     max_concurrent_files: 1
--------------------------------------------------
http_inspect
                    flows: 43
                    scans: 334
              reassembles: 326
              inspections: 326
                 requests: 156
                responses: 2
             get_requests: 156
       uri_normalizations: 2
  max_concurrent_sessions: 35
          pipelined_flows: 23
       pipelined_requests: 121
              total_bytes: 73202
--------------------------------------------------
normalizer
        test_tcp_trim_syn: 2
        test_tcp_trim_win: 48
             tcp_trim_win: 91025
          test_tcp_ts_nop: 89
             tcp_ips_data: 8
           test_tcp_block: 88939
--------------------------------------------------
port_scan
                  packets: 905754
                 trackers: 235
--------------------------------------------------
search_engine
     non_qualified_events: 2
         qualified_events: 1
           searched_bytes: 943
--------------------------------------------------
ssl
                  packets: 29039
                  decoded: 29039
             client_hello: 104
             server_hello: 104
              certificate: 39
              server_done: 119
      client_key_exchange: 34
      server_key_exchange: 39
            change_cipher: 198
       client_application: 720
       server_application: 27546
     unrecognized_records: 540
     handshakes_completed: 37
         sessions_ignored: 37
  max_concurrent_sessions: 24
--------------------------------------------------
stream
                    flows: 1192
             total_prunes: 96
              idle_prunes: 96
--------------------------------------------------
stream_icmp
                 sessions: 3
                      max: 1
                  created: 3
                 released: 3
--------------------------------------------------
stream_tcp
                 sessions: 264
                      max: 128
                  created: 264
                 released: 260
             instantiated: 264
                   setups: 264
                 restarts: 147
         discards_skipped: 88939
          invalid_seq_num: 42
              invalid_ack: 88869
                   events: 72
             syn_trackers: 141
            data_trackers: 119
              segs_queued: 307221
            segs_released: 307221
                segs_used: 301107
          rebuilt_packets: 29541
            rebuilt_bytes: 433646203
                 overlaps: 8
                     gaps: 3
        exceeded_max_segs: 91025
    payload_fully_trimmed: 4
          client_cleanups: 120
          server_cleanups: 36
              established: 3
                     syns: 141
                 syn_acks: 109
                   resets: 230
                     fins: 145
      inspector_fallbacks: 5
        partial_fallbacks: 22
                 max_segs: 3072
                max_bytes: 3555999
--------------------------------------------------
stream_udp
                 sessions: 925
                      max: 364
                  created: 929
                 released: 929
                 timeouts: 4
              total_bytes: 280454396
--------------------------------------------------
tcp
        bad_tcp4_checksum: 2640
--------------------------------------------------
udp
        bad_udp4_checksum: 308
--------------------------------------------------
wizard
                tcp_scans: 651
                 tcp_hits: 147
               tcp_misses: 12
                udp_scans: 230
               udp_misses: 230
--------------------------------------------------
Appid Statistics
--------------------------------------------------
detected apps and services
              Application: Services   Clients    Users      Payloads   Misc       Referred  
                  unknown: 513        731        0          133        0          0         
--------------------------------------------------
Summary Statistics
--------------------------------------------------
process
                  signals: 1
--------------------------------------------------
timing
                  runtime: 00:03:26
                  seconds: 206.231494
                 pkts/sec: 4392
                Mbits/sec: 35
o")~   Snort exiting
/etc/snort/ok.lua

ips = {
  mode = 'inline',
  variables = default_variables,
  action_override = 'block',
--  include = RULE_PATH .. '/snort.rules',
  include = RULE_PATH .. '/test.rules',
}

-- To log to a file, uncomment the below and manually create the dir defined in output.logdir
output.logdir = '/mnt/mmcblk0p3'
alert_fast = {
	file = true,
	packet = false,
}

normalizer = {
  tcp = {
    ips = true,
  }
}

file_policy = {
  enable_type = true,
  enable_signature = true,
  rules = {
    use = {
      verdict = 'log', enable_file_type = true, enable_file_signature = true
    }
  }
}
/etc/snort/snort-table.sh
#!/bin/sh

verbose=false

nft list tables | grep -q 'snort' && nft flush table inet snort

nft -f - <<TABLE
    table inet snort {
        chain IPS {
            type filter hook forward priority filter; policy accept;

            counter  queue flags bypass to 4-7

#           meta l4proto tcp               counter  queue flags bypass to 4
#           meta l4proto udp               counter  queue flags bypass to 5
#           meta l4proto != { tcp, udp }   counter  queue flags bypass to 6
        }
    }
TABLE

$verbose && nft list table inet snort

exit 0

No, using the setup described in the post right before this one, pings on a box behind the router are blocked but on the router itself, they are not.

From PC:

% ping www.google.com
PING www.google.com (172.217.1.100) 56(84) bytes of data.

From router:

# ping www.google.com
PING www.google.com (142.250.191.228): 56 data bytes
64 bytes from 142.250.191.228: seq=0 ttl=56 time=18.867 ms
64 bytes from 142.250.191.228: seq=1 ttl=56 time=20.191 ms
^C
--- www.google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss

Yes, that the bandwidth goes down was to be expected; Snort is very demanding and always has been. The logging does work for me, by the way, but note that Snort creates multiple log files, one per queue, which are then called 0_alert_fast.txt, 1_alert_fast.txt, etc. The problem that Snort does not block local pings I already solved earlier in this thread: it is due to the nature of the queue. nftables distinguishes between local traffic to and from the device and traffic passing through from other devices, so you need to create an extra queue with the input or output hook.

Thanks @xxxx, you were right... I was tail -f'ing the wrong file.

-rw-------    1 root     root           0 May 29 04:39 0_alert_fast.txt
-rw-------    1 root     root           0 May 29 04:39 1_alert_fast.txt
-rw-------    1 root     root         119 May 29 04:44 2_alert_fast.txt
-rw-------    1 root     root        2.5K May 29 04:52 3_alert_fast.txt
-rw-------    1 root     root        1.6M May 29 04:32 alert_fast.txt

A pity that the speed loss occurs. Is it due to running in nfq mode? Is there a more efficient IPS mode?

Well, since afpacket doesn't work, probably not. The problem is that the queue is limited, but by what I don't know; I suspect an internal kernel limit or a bug. Can you post the performance with only one connection or queue?

I set up one queue by modifying @efahl's script, shown below. Got download speeds of 19-20 Mbps.

#!/bin/sh
verbose=true

nft list tables | grep -q 'snort' && nft flush table inet snort

nft -f - <<TABLE
    table inet snort {
        chain IPS {
            type filter hook forward priority filter; policy accept;

            counter  queue flags bypass to 4
        }
    }
TABLE

$verbose && nft list table inet snort

exit 0

Then ran snort like this:

# snort -c "/etc/snort/snort.lua" -i "4" --daq-dir /usr/lib/daq --daq nfq -Q -s 64000 --daq-var queue_maxlen=8192 --tweaks ok

If I drop the nft table for snort, the download speed is fast again, but of course no blocking occurs.

# nft flush table inet snort

That's clear: with the flush command you have deleted the queue rule. I reach a max of 75 Mbit, usually around 60. Since the difference between us is so large, that points to a CPU limit; it doesn't surprise me. See if the performance goes up if you reduce the rule set; many rules are unnecessary since they are intended for servers.

#!/bin/sh

verbose=false

nft list tables | grep -q 'snort' && nft flush table inet snort

nft -f - <<TABLE
    table inet snort {
        chain IPS {
            type filter hook forward priority filter; policy accept;

            counter  queue flags bypass to 4-6
        }

        chain localinput {
            type filter hook input priority filter; policy accept;

            counter  queue flags bypass to 99

#           meta l4proto tcp               counter  queue flags bypass to 4
#           meta l4proto udp               counter  queue flags bypass to 5
#           meta l4proto != { tcp, udp }   counter  queue flags bypass to 6
        }
    }
TABLE

$verbose && nft list table inet snort

exit 0

I have adapted the script from efahl and extended it with a local queue. Please have a look at whether it works; I don't know if it is correct.

The rule set size doesn't affect the speed in my testing.

Yes, I have had the same experience, but I had hoped it would help in your case. That probably leaves only a device with a stronger CPU.

Yes, that two-chain ruleset is correct (you can delete those "meta l4proto" comment lines; they were just an experiment that was resolved with the "4-6" syntax). See also the hidden section "inet snort table with three chains" in post 16, above.

I'm not looking at speed yet, as I'm still trying to fully understand where I can intercept all packets that originate from, are destined for, or pass through the router. From this excellent set of articles I read last fall, it looks like the only reliable hook is postrouting, as prerouting misses local processes (e.g., ping from the router) and the other hooks depend on the routing decision. I'm still trying to figure out whether it makes more sense to check before or after the nat priority.
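Concretely, it comes down to where the chain priority sits relative to srcnat (100) on the postrouting hook; nft can express both variants directly (a sketch, untested):

chain postrouting_ips {
    # 'srcnat - 1' inspects before masquerading (original source addresses);
    # 'srcnat + 1' would inspect after NAT.
    type filter hook postrouting priority srcnat - 1; policy accept;
    counter queue flags bypass to 4-7
}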

Well, there is another way to create a queue, with conntrackd (https://conntrack-tools.netfilter.org/manual.html, under "user space helpers"). But I have no idea how or whether this works; my attempts failed because of syntax errors.

Using a local queue is also not recommended, but the chain with the output hook performed a bit better than the one with the input hook.

Hanging the queue on the postrouting hook works, but the performance drops compared to the forward hook. The --daq-var fanout_type=hash parameter also influences performance.

#!/bin/sh

verbose=false

nft list tables | grep -q 'snort' && nft flush table inet snort

nft -f - <<TABLE
    table inet snort {
        chain IPS {
            type filter hook postrouting priority filter; policy accept;

            counter  queue flags bypass to 4-7

        }
    }
TABLE

$verbose && nft list table inet snort

exit 0

I haven't tried the conntrack queues yet, but I did try an nft netdev table with an ingress hook, and it caused a kernel crash when snort started. Apparently, don't do that!

table netdev snort {
    chain ingress {
        type filter hook ingress devices = { eth0, br-lan } priority -500; policy accept;
        counter  queue flags bypass to 4-6
    }
}

Run snort:

Mon May 29 08:37:17 2023 kern.warn kernel: [138216.974169] ------------[ cut here ]------------
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.974680] WARNING: CPU: 0 PID: 5781 at nf_reinject+0x3f/0x1e0
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.975211] Modules linked in: pppoe ppp_async nft_fib_inet nf_flow_table_ipv6 nf_flow_table_ipv4 nf_flow_table_inet pppox ppp_generic nft_reject_ipv6 nft_reject_ipv4 nft_reject_inet nft_reject nft_redir nft_quota nft_queue nft_objref nft_numgen nft_nat nft_masq nft_log nft_limit nft_hash nft_flow_offload nft_fib_ipv6 nft_fib_ipv4 nft_fib nft_ct nft_counter nft_chain_nat nf_tables nf_nat nf_flow_table nf_conntrack_netlink nf_conntrack lzo iptable_mangle iptable_filter ipt_REJECT ipt_ECN ip_tables xt_time xt_tcpudp xt_tcpmss xt_statistic xt_multiport xt_mark xt_mac xt_limit xt_length xt_hl xt_ecn xt_dscp xt_comment xt_TCPMSS xt_LOG xt_HL xt_DSCP xt_CLASSIFY x_tables slhc sch_cake r8169 nfnetlink_queue nfnetlink nf_reject_ipv6 nf_reject_ipv4 nf_log_syslog nf_defrag_ipv6 nf_defrag_ipv4 lzo_rle lzo_decompress lzo_compress libcrc32c igc forcedeth e1000e crc_ccitt bnx2 sch_tbf sch_ingress sch_htb sch_hfsc em_u32 cls_u32 cls_route cls_matchall cls_fw cls_flow cls_basic act_skbedit act_mirred
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.975244]  act_gact i2c_dev ixgbe e1000 amd_xgbe ifb mdio nls_utf8 ena crypto_acompress nls_iso8859_1 nls_cp437 igb vfat fat button_hotplug tg3 ptp realtek pps_core mii
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.981496] CPU: 0 PID: 5781 Comm: snort Tainted: G        W         5.15.112 #0
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.982124] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.0 11/01/2019
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.982829] RIP: 0010:nf_reinject+0x3f/0x1e0
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.983430] Code: 10 0f b6 57 31 4c 8b 77 10 48 8b 4f 50 0f b6 47 30 80 fa 07 74 43 80 fa 0a 0f 84 f7 00 00 00 80 fa 02 0f 84 de 00 00 00 0f 0b <0f> 0b 31 f6 4c 89 f7 e8 25 60 f6 ff 4c 89 e7 e8 dd f9 ff ff 4c 89
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.984832] RSP: 0018:ffffc90005243a08 EFLAGS: 00010206
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.985470] RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffffffff8273d880
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.986153] RDX: 0000000000000005 RSI: 0000000000000001 RDI: ffff88800856e680
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.986825] RBP: ffffc90005243a40 R08: 0000000000000000 R09: ffffc90005243b48
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.987499] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88800856e680
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.988169] R13: ffffffffa03110c0 R14: ffff888007f3b700 R15: ffffc90005243b48
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.988850] FS:  00007f64d4e89b30(0000) GS:ffff88801f000000(0000) knlGS:0000000000000000
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.989551] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.990206] CR2: 00007f64cc13c000 CR3: 000000001f1ea000 CR4: 00000000003506f0
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.990898] Call Trace:
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.991516]  <TASK>
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.992116]  0xffffffffa023a060
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.992747]  0xffffffffa023b96c
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.993371]  nfnetlink_unicast+0x2ae/0xdee [nfnetlink]
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.994020]  ? __wake_up+0xe/0x20
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.994656]  ? nfnetlink_unicast+0xf0/0xdee [nfnetlink]
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.995325]  netlink_rcv_skb+0x52/0x100
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.995973]  nfnetlink_unicast+0xd1a/0xdee [nfnetlink]
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.996652]  ? __kmalloc_track_caller+0x48/0x440
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.997304]  ? _copy_from_iter+0x90/0x5f0
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.997954]  netlink_unicast+0x1ff/0x2e0
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.998597]  netlink_sendmsg+0x21d/0x460
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.999226]  __sys_sendto+0x17f/0x190
Mon May 29 08:37:17 2023 kern.warn kernel: [138216.999869]  ? fput+0xe/0x20
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.000462]  ? __sys_recvmsg+0x62/0x90
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.001054]  ? irqentry_exit+0x1d/0x30
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.001655]  __x64_sys_sendto+0x1f/0x30
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.002246]  do_syscall_64+0x42/0x90
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.002853]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.003475] RIP: 0033:0x7f64d71dc399
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.004076] Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 c8 cf ff ff 41 54 b8 02 00 00 00 55 48 89 f5 be 00 88 08 00
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.005563] RSP: 002b:00007f64d4e628f8 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.006225] RAX: ffffffffffffffda RBX: 000000000000002c RCX: 00007f64d71dc399
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.006882] RDX: 0000000000000020 RSI: 00007f64d5d6d080 RDI: 0000000000000003
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.007530] RBP: 00007f64d4e89b30 R08: 00007f64d5400000 R09: 000000000000000c
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.008158] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.008772] R13: 00007f64d53eb7e8 R14: 0000000000000001 R15: 0000000000000000
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.009378]  </TASK>
Mon May 29 08:37:17 2023 kern.warn kernel: [138217.009884] ---[ end trace 57b675713d9ff314 ]---