Opening Taxi App - Oom_reaper kills dnsmasq

The real question here is what is spawning multiple instances of dnsmasq. That's wrong...

Ok, so dnsmasq forks itself to handle tcp connections...

If that's the case, check if it's possible to set a maximum number of child processes.

This is defined to be #define MAX_PROCS 20 in dnsmasq's config.h

That's kind of hard to redefine once compiled; I was looking for a conf file or command-line parameter.

Yep, agreed - there is no conf file option for this

Definitely no command line parameter. Just looked at the code. dnsmasq would need to be patched to allow for a configurable MAX_PROCS set on the command line.

If you roll your own OpenWrt or can build dnsmasq from source, here's a patch that implements a configurable maximum number of child processes. It's against dnsmasq-2.84, i.e. the current trunk version of dnsmasq. It's a relatively trivial patch...

With this patch, dnsmasq will accept a new configuration-file parameter or command-line long-form argument max-procs=<number>. Just put the parameter in /etc/dnsmasq.conf.

The configured maximum number of child processes cannot go above the MAX_PROCS value defined to be 20, so this argument really only serves to let you reduce the maximum number of processes.
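For example, on the router itself, enabling it is just something like this (a minimal sketch, assuming the patched binary is already installed and that your dnsmasq setup reads /etc/dnsmasq.conf):

# cap dnsmasq to a single forked TCP helper process
echo "max-procs=1" >> /etc/dnsmasq.conf
/etc/init.d/dnsmasq restart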

I've tested it as far as making sure that the set value is used correctly. It'd be interesting to see your mileage and whether it actually fixes your problem or not...

If you don't have a full buildroot, you can just use the OpenWrt SDK to compile a patched version of dnsmasq. Put the file below into package/network/services/dnsmasq/patches/200-max-procs.patch
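With the SDK the steps are roughly as follows (a sketch from memory - the exact feed/package paths inside the SDK may differ a bit from the buildroot path above). Save the patch below as 200-max-procs.patch first:

./scripts/feeds update base
./scripts/feeds install dnsmasq
cp 200-max-procs.patch package/feeds/base/dnsmasq/patches/
make defconfig
make package/dnsmasq/compile V=s
# the patched .ipk should end up under bin/packages/<your arch>/base/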

Do back up the original dnsmasq binary before you test this one.
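Something along these lines should do for the backup and swap (a sketch; the logread at the end is just to confirm the new binary started cleanly):

cp /usr/sbin/dnsmasq /usr/sbin/dnsmasq.orig
# install/copy the patched binary over /usr/sbin/dnsmasq, then:
/etc/init.d/dnsmasq restart
logread | grep dnsmasq | tail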

Index: dnsmasq-2.84/src/dnsmasq.c
===================================================================
--- dnsmasq-2.84.orig/src/dnsmasq.c
+++ dnsmasq-2.84/src/dnsmasq.c
@@ -1011,7 +1011,7 @@ int main (int argc, char **argv)
   pid = getpid();
 
   daemon->pipe_to_parent = -1;
-  for (i = 0; i < MAX_PROCS; i++)
+  for (i = 0; i < daemon->max_procs; i++)
     daemon->tcp_pipes[i] = -1;
   
 #ifdef HAVE_INOTIFY
@@ -1459,7 +1459,7 @@ static void async_event(int pipe, time_t
 		break;
 	    }      
 	  else 
-	    for (i = 0 ; i < MAX_PROCS; i++)
+	    for (i = 0 ; i < daemon->max_procs; i++)
 	      if (daemon->tcp_pids[i] == p)
 		daemon->tcp_pids[i] = 0;
 	break;
@@ -1523,7 +1523,7 @@ static void async_event(int pipe, time_t
 	
       case EVENT_TERM:
 	/* Knock all our children on the head. */
-	for (i = 0; i < MAX_PROCS; i++)
+	for (i = 0; i < daemon->max_procs; i++)
 	  if (daemon->tcp_pids[i] != 0)
 	    kill(daemon->tcp_pids[i], SIGALRM);
 	
@@ -1702,7 +1702,7 @@ static int set_dns_listeners(time_t now)
       /* death of a child goes through the select loop, so
 	 we don't need to explicitly arrange to wake up here */
       if  (listener->tcpfd != -1)
-	for (i = 0; i < MAX_PROCS; i++)
+	for (i = 0; i < daemon->max_procs; i++)
 	  if (daemon->tcp_pids[i] == 0 && daemon->tcp_pipes[i] == -1)
 	    {
 	      poll_listen(listener->tcpfd, POLLIN);
@@ -1718,7 +1718,7 @@ static int set_dns_listeners(time_t now)
     }
   
   if (!option_bool(OPT_DEBUG))
-    for (i = 0; i < MAX_PROCS; i++)
+    for (i = 0; i < daemon->max_procs; i++)
       if (daemon->tcp_pipes[i] != -1)
 	poll_listen(daemon->tcp_pipes[i], POLLIN);
   
@@ -1750,7 +1750,7 @@ static void check_dns_listeners(time_t n
      to free the process slot. Once the child process has gone, poll()
      returns POLLHUP, not POLLIN, so have to check for both here. */
   if (!option_bool(OPT_DEBUG))
-    for (i = 0; i < MAX_PROCS; i++)
+    for (i = 0; i < daemon->max_procs; i++)
       if (daemon->tcp_pipes[i] != -1 &&
 	  poll_check(daemon->tcp_pipes[i], POLLIN | POLLHUP) &&
 	  !cache_recv_insert(now, daemon->tcp_pipes[i]))
@@ -1879,7 +1879,7 @@ static void check_dns_listeners(time_t n
 		  read_write(pipefd[0], &a, 1, 1);
 #endif
 
-		  for (i = 0; i < MAX_PROCS; i++)
+		  for (i = 0; i < daemon->max_procs; i++)
 		    if (daemon->tcp_pids[i] == 0 && daemon->tcp_pipes[i] == -1)
 		      {
 			daemon->tcp_pids[i] = p;
Index: dnsmasq-2.84/src/option.c
===================================================================
--- dnsmasq-2.84.orig/src/option.c
+++ dnsmasq-2.84/src/option.c
@@ -168,6 +168,7 @@ struct myoption {
 #define LOPT_SINGLE_PORT   359
 #define LOPT_SCRIPT_TIME   360
 #define LOPT_PXE_VENDOR    361
+#define LOPT_MAX_PROCS     362
  
 #ifdef HAVE_GETOPT_LONG
 static const struct option opts[] =  
@@ -341,6 +342,7 @@ static const struct myoption opts[] =
     { "dumpfile", 1, 0, LOPT_DUMPFILE },
     { "dumpmask", 1, 0, LOPT_DUMPMASK },
     { "dhcp-ignore-clid", 0, 0,  LOPT_IGNORE_CLID },
+    { "max-procs", 1, 0, LOPT_MAX_PROCS },
     { NULL, 0, 0, 0 }
   };
 
@@ -521,6 +523,7 @@ static struct {
   { LOPT_DUMPFILE, ARG_ONE, "<path>", gettext_noop("Path to debug packet dump file"), NULL },
   { LOPT_DUMPMASK, ARG_ONE, "<hex>", gettext_noop("Mask which packets to dump"), NULL },
   { LOPT_SCRIPT_TIME, OPT_LEASE_RENEW, NULL, gettext_noop("Call dhcp-script when lease expiry changes."), NULL },
+  { LOPT_MAX_PROCS, ARG_ONE, "<number>", gettext_noop("Specify maximum number of child process to fork."), NULL },
   { 0, 0, NULL, NULL, NULL }
 }; 
 
@@ -4546,6 +4549,12 @@ err:
       }
 #endif
 		
+    case LOPT_MAX_PROCS:  /* --max-procs */
+      if (!atoi_check16(arg, &daemon->max_procs))
+	ret_err(gen_err);
+      if (daemon->max_procs > MAX_PROCS) daemon->max_procs = MAX_PROCS;
+      break;
+
     default:
       ret_err(_("unsupported option (check that dnsmasq was compiled with DHCP/TFTP/DNSSEC/DBus support)"));
       
@@ -5036,6 +5045,7 @@ void read_opts(int argc, char **argv, ch
   daemon->soa_expiry = SOA_EXPIRY;
   daemon->max_port = MAX_PORT;
   daemon->min_port = MIN_PORT;
+  daemon->max_procs = MAX_PROCS;
 
 #ifndef NO_ID
   add_txt("version.bind", "dnsmasq-" VERSION, 0 );
Index: dnsmasq-2.84/src/dnsmasq.h
===================================================================
--- dnsmasq-2.84.orig/src/dnsmasq.h
+++ dnsmasq-2.84/src/dnsmasq.h
@@ -1169,6 +1169,9 @@ extern struct daemon {
   /* file for packet dumps. */
   int dumpfd;
 #endif
+
+  /* maximum number of child processes to fork */
+  unsigned int max_procs;
 } *daemon;
 
 /* cache.c */

EDIT:

So here's a simple test to demonstrate: use a script to launch 20 netcats that connect to TCP port 53 (for example, a loop like the one below). When running with the command-line argument to cap the number of processes, dnsmasq has one process for the main dnsmasq instance and then at most the specified number of additional processes to handle incoming TCP connections.

If the argument is not specified, dnsmasq will spawn up to the default maximum of 20 processes.
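For reference, the launcher can be as trivial as this (plain POSIX sh; adjust the connection count to taste):

#!/bin/sh
# open 20 TCP connections to dnsmasq on port 53 and leave them hanging
n=0
while [ "$n" -lt 20 ]; do
       netcat -t 127.0.0.1 53 &
       n=$((n + 1))
done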

root@openwrt:~# ps ax | grep netcat
26074 pts/0    S      0:00 netcat -t 127.0.0.1 53
26075 pts/0    S      0:00 netcat -t 127.0.0.1 53
26076 pts/0    S      0:00 netcat -t 127.0.0.1 53
26077 pts/0    S      0:00 netcat -t 127.0.0.1 53
26078 pts/0    S      0:00 netcat -t 127.0.0.1 53
26079 pts/0    S      0:00 netcat -t 127.0.0.1 53
26080 pts/0    S      0:00 netcat -t 127.0.0.1 53
26081 pts/0    S      0:00 netcat -t 127.0.0.1 53
26082 pts/0    S      0:00 netcat -t 127.0.0.1 53
26083 pts/0    S      0:00 netcat -t 127.0.0.1 53
26084 pts/0    S      0:00 netcat -t 127.0.0.1 53
26085 pts/0    S      0:00 netcat -t 127.0.0.1 53
26086 pts/0    S      0:00 netcat -t 127.0.0.1 53
26087 pts/0    S      0:00 netcat -t 127.0.0.1 53
26088 pts/0    S      0:00 netcat -t 127.0.0.1 53
26089 pts/0    S      0:00 netcat -t 127.0.0.1 53
26090 pts/0    S      0:00 netcat -t 127.0.0.1 53
26091 pts/0    S      0:00 netcat -t 127.0.0.1 53
26092 pts/0    S      0:00 netcat -t 127.0.0.1 53
26093 pts/0    S      0:00 netcat -t 127.0.0.1 53
26094 pts/0    S      0:00 netcat -t 127.0.0.1 53
26178 pts/0    S+     0:00 grep netcat
root@openwrt:~# ps ax | grep dnsmasq
26026 pts/2    S+     0:00 ./dnsmasq -C /var/etc/dnsmasq.conf.cfg01411c -k --max-procs=1
26095 pts/2    S+     0:00 ./dnsmasq -C /var/etc/dnsmasq.conf.cfg01411c -k --max-procs=1
26212 pts/0    S+     0:00 grep dnsmasq
root@openwrt:~# 

First of all thanks for the netcat hint :wink:

Before patching, I was able to OOM dnsmasq just by opening new TCP connections to port 53 with netcat.

After your patch + max-procs=1 in /etc/dnsmasq.conf, I was not able to OOM dnsmasq by opening TCP connections to port 53.

root@R7800:~# ps |grep netcat
 4203 root       740 T    netcat -t 127.0.0.1 53
 4219 root       740 T    netcat -t 127.0.0.1 53
 4228 root       740 T    netcat -t 127.0.0.1 53
 4238 root       740 T    netcat -t 127.0.0.1 53
 4260 root       740 T    netcat -t 127.0.0.1 53
 4280 root       740 T    netcat -t 127.0.0.1 53
 4283 root       740 T    netcat -t 127.0.0.1 53
 4350 root       740 T    netcat -t 127.0.0.1 53
 4358 root       740 T    netcat -t 127.0.0.1 53
 4370 root       740 T    netcat -t 127.0.0.1 53
 4371 root       740 T    netcat -t 127.0.0.1 53
 4372 root       740 T    netcat -t 127.0.0.1 53
 4373 root       740 T    netcat -t 127.0.0.1 53
 4402 root       740 T    netcat -t 127.0.0.1 53
 4403 root       740 T    netcat -t 127.0.0.1 53
 4404 root       740 T    netcat -t 127.0.0.1 53
 4405 root       740 T    netcat -t 127.0.0.1 53
 4406 root       740 T    netcat -t 127.0.0.1 53
 4407 root       740 T    netcat -t 127.0.0.1 53
 4408 root       740 T    netcat -t 127.0.0.1 53
 4423 root       740 T    netcat -t 127.0.0.1 53
 4601 root      1080 R    grep netcat
root@R7800:~# ps |grep dnsmasq
 3988 dnsmasq  52448 S    /usr/sbin/dnsmasq -C /var/etc/dnsmasq.conf.cfg01411c -k -x /var/run/dnsmasq/dnsmasq.cfg01411c.pid
 4568 dnsmasq  52516 S    /usr/sbin/dnsmasq -C /var/etc/dnsmasq.conf.cfg01411c -k -x /var/run/dnsmasq/dnsmasq.cfg01411c.pid
 4621 root      1080 R    grep dnsmasq

Then, when using the Taxi App with max-procs=1, I was not able to get dnsmasq to OOM.

Once I removed the max-procs=1 conf line and restarted dnsmasq, I was pretty easily able to reproduce the dnsmasq OOM again.

It seems that by default, opening the Taxi App opens several TCP connections to port 53. It does not OOM every time, but the percentage of times it does is high.

Toggling max-procs=1 on and off, the problem is repeatable.

So yes, at least in my Taxi App case (and when using https-dns-proxy) your patch fixed the dnsmasq OOM issue.

Yeah, the default MAX_PROCS of 20 is really not suitable for a resource-constrained embedded platform.

This patch should ideally be put into OpenWrt, although there's understandable reluctance to patch upstream source unless really necessary.

Each TCP client can have up to 100 TCP requests, so I'm not sure that spawning up to 20 separate processes is really necessary except on a very heavily loaded system.

Thanks. Will mark your patch as a solution.

Unless the patch gets included in OpenWrt, it may have a limited lifetime though, as I understand it's only relevant for the current dnsmasq-2.84.

Well, that's the problem with a patch to upstream source. Every time the version gets bumped, there's a risk the patch won't apply or will need to be redone.

Kind of depends on the device.

If we assume we're moving towards devices with more memory, the current maximum won't be an issue in the long run.

It's still a nice solution/hack you've created, and it should be implemented, here or upstream.

Hats off!

At least I know the problem and am able to work around it (changing DoH to DoT and removing the TCP:53 firewall forward) once the patch no longer works.

:+1:

I sent the patch to the dnsmasq author. Maybe he'll consider adding it...

When I think about it, this is actually an exploitable denial-of-service attack on OpenWrt for a non-trivial number of platforms, although one with limited risk, since it would require an internal user with query access to dnsmasq to exploit it.

But just making sure there are 20 netcats spawned all the time could effectively keep a targeted router out of memory...

It'd be interesting to know what happens to your router if you do something similar - keep having netcat connect so that dnsmasq is always OOM - and whether it causes other things to break or not...

Theoretically, it's also exploitable by a malicious app that the user downloads...

#!/bin/sh
# keep ~20 netcat connections to dnsmasq's TCP port 53 open at all times
# note: "ps | grep netcat" also matches the grep itself, hence the threshold of 21
while true; do
       instances=$(ps | grep netcat | wc -l)
       if [ "$instances" -ge 21 ]; then
               sleep 1   # all slots in use - avoid busy-looping
               continue
       fi
       netcat -t 127.0.0.1 53 &
done

Thanks. Appreciate your effort trying to get this patch included in OpenWrt.

Curious to know a couple of things about this if you could maybe dig in a bit...

  • Exactly how many times does dnsmasq have to fork before the router is OOM?
  • Could you post your /etc/dnsmasq.conf?
  • With only one dnsmasq process running, can you post a screenshot of your memory usage (used, buffers, cache)?
  • Then the same thing when dnsmasq causes OOM errors...
  • What are your exact router specs? Precise make and model...

I just noticed you changed your dnsmasq cache size in /etc/dnsmasq.conf. You should try removing this option cachesize '1000' line (and removing the --max-procs= value) and see if the problem still occurs.
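If that cachesize is actually set via UCI in /etc/config/dhcp, something like this would drop it (a sketch, assuming the usual anonymous dnsmasq section):

uci delete dhcp.@dnsmasq[0].cachesize
uci commit dhcp
/etc/init.d/dnsmasq restart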

Router is Netgear R7800:

root@R7800:~# ubus call system board
{
	"kernel": "5.4.106",
	"hostname": "R7800",
	"system": "ARMv7 Processor rev 0 (v7l)",
	"model": "Netgear Nighthawk X4S R7800",
	"board_name": "netgear,r7800",
	"release": {
		"distribution": "OpenWrt",
		"version": "SNAPSHOT",
		"revision": "r16345-d71424a085",
		"target": "ipq806x/generic",
		"description": "OpenWrt SNAPSHOT r16345-d71424a085"
	}
}

/etc/dnsmasq.conf (max-procs commented out for the testing below):
root@R7800:~# cat /etc/dnsmasq.conf
# Change the following lines if you want dnsmasq to serve SRV
# records.
# You may add multiple srv-host lines.
# The fields are <name>,<target>,<port>,<priority>,<weight>

# A SRV record sending LDAP for the example.com domain to
# ldapserver.example.com port 289
#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389

# Two SRV records for LDAP, each with different priorities
#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,1
#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,2

# A SRV record indicating that there is no LDAP server for the domain
# example.com
#srv-host=_ldap._tcp.example.com

# The following line shows how to make dnsmasq serve an arbitrary PTR
# record. This is useful for DNS-SD.
# The fields are <name>,<target>
#ptr-record=_http._tcp.dns-sd-services,"New Employee Page._http._tcp.dns-sd-services"

# Change the following lines to enable dnsmasq to serve TXT records.
# These are used for things like SPF and zeroconf.
# The fields are <name>,<text>,<text>...

#Example SPF.
#txt-record=example.com,"v=spf1 a -all"

#Example zeroconf
#txt-record=_http._tcp.example.com,name=value,paper=A4

# Provide an alias for a "local" DNS name. Note that this _only_ works
# for targets which are names from DHCP or /etc/hosts. Give host
# "bert" another name, bertrand
# The fields are <cname>,<target>
#cname=bertand,bert

#max-procs=1
root@R7800:~#

Below is the memory usage after commenting out option cachesize '1000' and rebooting:

root@R7800:~# free -m
              total        used        free      shared  buff/cache   available
Mem:         444176      133436      257800       12472       52940      253460
Swap:             0           0           0

Notes:

  • I use adblock and have +300k blocked domains
Tue Mar 30 20:22:53 2021 user.info adblock-4.1.0[1278]: adblock instance started ::: action: start, priority: 0, pid: 1278
Tue Mar 30 20:24:15 2021 user.info adblock-4.1.0[1278]: blocklist with overall 309334 blocked domains loaded successfully (Netgear Nighthawk X4S R7800, OpenWrt SNAPSHOT r16345-d71424a085)
  • Slightly cut the memory to get pstore/ramoops activated
[    0.000000] Kernel command line: mem=0x1C000000 ramoops.mem_address=0x5F000000 ramoops.mem_size=0x2C000 ramoops.record_size=0x4000
[    0.018995] ramoops: using module parameters
[    0.019360] pstore: Registered ramoops as persistent store backend
[    0.019378] ramoops: using 0x2c000@0x5f000000, ecc: 0

For the dnsmasq OOM testing below I used netcat, since when using the Taxi App the oom_reaper clears all the extra dnsmasq processes and memory consumption returns to normal.

Mem after 1st netcat -t 127.0.0.1 53 &

root@R7800:~# free -m
              total        used        free      shared  buff/cache   available
Mem:         444176      185020      206048       12472       53108      201792
Swap:             0           0           0

Mem after 2nd netcat -t 127.0.0.1 53 &

root@R7800:~# free -m
              total        used        free      shared  buff/cache   available
Mem:         444176      232116      158928       12472       53132      154684
Swap:             0           0           0

Mem after 3rd netcat -t 127.0.0.1 53 &

root@R7800:~# free -m
              total        used        free      shared  buff/cache   available
Mem:         444176      283340      107692       12472       53144      103456
Swap:             0           0           0

Mem after 4th netcat -t 127.0.0.1 53 &

root@R7800:~# free -m
              total        used        free      shared  buff/cache   available
Mem:         444176      338756       52232       12472       53188       48016
Swap:             0           0           0

Mem after 5th netcat -t 127.0.0.1 53 &

root@R7800:~# free -m
              total        used        free      shared  buff/cache   available
Mem:         444176      386996       26696       12472       30484       11088
Swap:             0           0           0

Mem after 6th netcat -t 127.0.0.1 53 & --> dnsmasq oom

root@R7800:~# free -m
              total        used        free      shared  buff/cache   available
Mem:         444176      389360       29004       12472       25812       11100
Swap:             0           0           0

syslog - dnsmasq OOM:
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.820556] dnsmasq invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.820590] CPU: 0 PID: 9121 Comm: dnsmasq Not tainted 5.4.106 #0
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.829094] Hardware name: Generic DT based system
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.835182] [<c0311068>] (unwind_backtrace) from [<c030c4e0>] (show_stack+0x14/0x20)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.839867] [<c030c4e0>] (show_stack) from [<c0923098>] (dump_stack+0x94/0xa8)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.847768] [<c0923098>] (dump_stack) from [<c0422338>] (dump_header+0x54/0x1a8)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.854794] [<c0422338>] (dump_header) from [<c0422bb4>] (oom_kill_process+0x190/0x194)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.862347] [<c0422bb4>] (oom_kill_process) from [<c04235c8>] (out_of_memory+0x268/0x324)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.870076] [<c04235c8>] (out_of_memory) from [<c0467dd8>] (__alloc_pages_nodemask+0xb9c/0x1260)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.878410] [<c0467dd8>] (__alloc_pages_nodemask) from [<c044e8f4>] (wp_page_copy+0x64/0x77c)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.887262] [<c044e8f4>] (wp_page_copy) from [<c04513a8>] (do_wp_page+0xb0/0x5b0)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.895678] [<c04513a8>] (do_wp_page) from [<c04529a0>] (handle_mm_fault+0x64c/0xb6c)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.903147] [<c04529a0>] (handle_mm_fault) from [<c0311fd4>] (do_page_fault+0x124/0x2c4)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.910960] [<c0311fd4>] (do_page_fault) from [<c0312328>] (do_DataAbort+0x48/0xd4)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.919117] [<c0312328>] (do_DataAbort) from [<c0301ddc>] (__dabt_usr+0x3c/0x40)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.926492] Exception stack(0xda22dfb0 to 0xda22dff8)
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.934132] dfa0:                                     00000000 00000000 04145f10 ffffffff
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.939089] dfc0: 00000000 004b2f58 00000026 00000000 004b2f58 004b2fa0 004b2f3c 05425990
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.947245] dfe0: bef34b20 bef34b60 00482f84 0048311c 20000010 ffffffff
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.955529] Mem-Info:
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.961836] active_anon:83316 inactive_anon:770 isolated_anon:0
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.961836]  active_file:14 inactive_file:50 isolated_file:1
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.961836]  unevictable:0 dirty:1 writeback:3 unstable:0
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.961836]  slab_reclaimable:1994 slab_unreclaimable:3315
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.961836]  mapped:3 shmem:3118 pagetables:336 bounce:0
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.961836]  free:4015 free_pcp:33 free_cma:0
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.974227] Node 0 active_anon:333264kB inactive_anon:3080kB active_file:92kB inactive_file:200kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:12kB dirty:4kB writeback:8kB shmem:12472kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1760.996533] Normal free:16060kB min:16384kB low:20480kB high:24576kB active_anon:332796kB inactive_anon:3080kB active_file:40kB inactive_file:284kB unevictable:0kB writepending:12kB present:458752kB managed:444176kB mlocked:0kB kernel_stack:856kB pagetables:1344kB bounce:0kB free_pcp:132kB local_pcp:0kB free_cma:0kB
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.024640] lowmem_reserve[]: 0 0 0
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.046894] Normal: 165*4kB (UE) 119*8kB (UME) 43*16kB (UE) 12*32kB (UE) 17*64kB (UE) 19*128kB (UME) 5*256kB (UE) 1*512kB (M) 0*1024kB 2*2048kB (UE) 1*4096kB (M) = 16188kB
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.050128] 3199 total pagecache pages
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.065440] 0 pages in swap cache
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.069211] Swap cache stats: add 0, delete 0, find 0/0
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.072579] Free swap  = 0kB
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.077674] Total swap = 0kB
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.080738] 114688 pages RAM
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.083671] 0 pages HighMem/MovableOnly
Tue Mar 30 20:50:39 2021 kern.warn kernel: [ 1761.086466] 3644 pages reserved
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.090041] Tasks state (memory values in pages):
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.093168] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.098087] [    190]    81   190      260       23    10240        0             0 ubusd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.106606] [    191]     0   191      176       10     6144        0             0 askfirst
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.114853] [    235]     0   235      204       12     6144        0             0 urngd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.123283] [    614]   514   614      267       42    10240        0             0 logd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.131412] [    615]     0   615      293       23     8192        0             0 logread
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.139564] [    667]     0   667      500       64    12288        0             0 rpcd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.147655] [    736]   323   736      252       33     8192        0             0 chronyd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.155996] [    954]     0   954      215       11     6144        0             0 dropbear
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.164056] [   1052]     0  1052     1027       68    12288        0             0 hostapd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.172675] [   1053]     0  1053     1003       51    12288        0             0 wpa_supplicant
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.180811] [   1115]     0  1115      393       37     8192        0             0 netifd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.189829] [   1311]     0  1311      211       14     6144        0             0 odhcp6c
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.198076] [   1336]     0  1336      270       10     8192        0             0 udhcpc
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.206324] [   1462]     0  1462      315       31     6144        0             0 odhcpd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.214665] [   1626]     0  1626      271       10     8192        0             0 crond
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.222848] [   1813]     0  1813     1016      152    12288        0             0 uhttpd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.231161] [   2000]     0  2000     1045      160    10240        0             0 collectd
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.239313] [   2089] 65534  2089     1043      317    12288        0             0 https-dns-proxy
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.247904] [   2090] 65534  2090     1080      353    14336        0             0 https-dns-proxy
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.256683] [   4161]   453  4161    13047    12443    61440        0             0 dnsmasq
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.265711] [   4536]     0  4536      229       22     8192        0             0 dropbear
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.274052] [   4555]     0  4555      272       13     8192        0             0 ash
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.282659] [   8868]     0  8868      185       15     8192        0             0 netcat
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.290454] [   8869]   453  8869    13064    12444    61440        0             0 dnsmasq
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.298764] [   8883]     0  8883      185       15     8192        0             0 netcat
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.307011] [   8884]   453  8884    13064    12444    61440        0             0 dnsmasq
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.315352] [   8890]     0  8890      185       15     8192        0             0 netcat
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.323600] [   8891]   453  8891    13064    12444    61440        0             0 dnsmasq
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.331878] [   8900]     0  8900      185       15     8192        0             0 netcat
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.340187] [   8901]   453  8901    13064    12444    61440        0             0 dnsmasq
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.348528] [   9046]     0  9046      185       15    10240        0             0 netcat
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.356776] [   9047]   453  9047    13064    12444    61440        0             0 dnsmasq
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.365087] [   9120]     0  9120      185       15    10240        0             0 netcat
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.373282] [   9121]   453  9121    13047    12442    61440        0             0 dnsmasq
Tue Mar 30 20:50:39 2021 kern.info kernel: [ 1761.381671] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/,task=dnsmasq,pid=9047,uid=453
Tue Mar 30 20:50:39 2021 kern.err kernel: [ 1761.389935] Out of memory: Killed process 9047 (dnsmasq) total-vm:52256kB, anon-rss:49776kB, file-rss:0kB, shmem-rss:0kB, UID:453 pgtables:60kB oom_score_adj:0