Load average when my app starts

The chip is an MT7688AN (MIPS).

Mem: 16940K used, 44360K free, 0K shrd, 1264K buff, 5236K cached
CPU:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
Load average: 0.98 1.03 0.66 1/40 955
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
  955   384 root     R     1484   2%   0% top
  883     1 root     S     1480   2%   0% /usr/sbin/ntpd -n -p 0.openwrt.pool.n
  827     1 root     S    21432  35%   0% /usr/bin/gateway
  675     1 root     S     1508   2%   0% /sbin/netifd
  384     1 root     S     1484   2%   0% /bin/ash --login
  773   675 root     S     1480   2%   0% udhcpc -p /var/run/udhcpc-eth0.1.pid
  809     1 root     S     1476   2%   0% /usr/sbin/telnetd -F -l /bin/login.sh
    1     0 root     S     1420   2%   0% /sbin/procd
  698     1 root     S     1184   2%   0% /usr/sbin/odhcpd
  774     1 root     S     1148   2%   0% /usr/sbin/dropbear -F -P /var/run/dro
  646     1 root     S     1048   2%   0% /sbin/logd -S 16
  898     1 nobody   S      980   2%   0% /usr/sbin/dnsmasq -C /var/etc/dnsmasq
  383     1 root     S      888   1%   0% /sbin/ubusd
    4     2 root     SW       0   0%   0% [kworker/0:0]
  250     2 root     SWN      0   0%   0% [jffs2_gcd_mtd6]
    3     2 root     SW       0   0%   0% [ksoftirqd/0]
    6     2 root     SW       0   0%   0% [kworker/u2:0]
  101     2 root     SW       0   0%   0% [kswapd0]
    5     2 root     SW<      0   0%   0% [kworker/0:0H]
    7     2 root     SW<      0   0%   0% [khelper]

Is the load OK? The 7688 seems to have only one core, so why is the load bigger than 1?

The load can be higher than the number of CPUs when there are more tasks demanding CPU time than there are CPUs available.

We have created 6 pthreads in our ipk (app). Is there a way to reduce the load?

Well... you can create fewer pthreads, or code the app differently, or... I do not know, it's up to the app.

"load" is not direct CPU utilization.
Basically it tells you how many threads are waiting for execution, on average. If it is over 1 (in single-core), CPU is overloaded and some process are waiting.

https://en.wikipedia.org/wiki/Load_(computing)#Unix-style_load_calculation

https://www.tecmint.com/understand-linux-load-averages-and-monitor-performance/
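For reference, those three numbers come straight from /proc/loadavg, which also carries the runnable/total task counts (the "1/40" in the top output above). A minimal sketch in C that reads and prints the fields, with the layout as documented in proc(5):

/* Minimal sketch: read /proc/loadavg and print its five fields. */
#include <stdio.h>

int main(void)
{
    double avg1, avg5, avg15;      /* 1-, 5- and 15-minute load averages */
    int running, total, last_pid;  /* runnable/total tasks, most recent PID */

    FILE *f = fopen("/proc/loadavg", "r");
    if (!f) {
        perror("fopen /proc/loadavg");
        return 1;
    }
    if (fscanf(f, "%lf %lf %lf %d/%d %d",
               &avg1, &avg5, &avg15, &running, &total, &last_pid) != 6) {
        fprintf(stderr, "unexpected /proc/loadavg format\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("load: %.2f %.2f %.2f, %d of %d tasks runnable, last PID %d\n",
           avg1, avg5, avg15, running, total, last_pid);
    return 0;
}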

By decreasing the actual computational work of your app...
Or by decreasing the number of threads and using nice etc. to slow down the startup of your app (while prolonging the real time needed for the startup); see the sketch below.
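For example, a minimal sketch of the "fewer simultaneous threads" idea: instead of creating all 6 worker threads at once, stagger their creation so the startup demand is spread over time. worker() and NUM_WORKERS are hypothetical placeholders for whatever your app does:

/* Minimal sketch: stagger thread creation to spread startup load. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_WORKERS 6   /* hypothetical, matching the 6 pthreads above */

static void *worker(void *arg)
{
    long id = (long)arg;
    /* ... hypothetical per-thread initialisation work ... */
    printf("worker %ld started\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_WORKERS];

    for (long i = 0; i < NUM_WORKERS; i++) {
        if (pthread_create(&tid[i], NULL, worker, (void *)i) != 0) {
            perror("pthread_create");
            return 1;
        }
        sleep(1);  /* stagger: one new thread per second, not all at once */
    }
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

This trades a longer wall-clock startup for a lower momentary load, which is exactly the trade-off discussed below.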

But having a maxed-out CPU during a short startup is not that unusual.


I don't understand it, will it reduce the startup load?
Thanks a lot for your detailed answer!!! :+1:

You can use the "nice" value of a process to decrease its execution priority in Linux, so that way you can tell your app to be more polite.

Just a mathematical example:
If you have a total workload of 1200 CPU units needed for your startup, and

  • instead of doing 6 seconds at a load of 200 units/s, which would max out your CPU,
  • you do 15 seconds at a load of 80 units/s, which leaves lots of CPU free for other tasks during those 15 seconds.

So, your startup will take 15 seconds instead of 6. And the max momentary CPU utilisation would be 80 instead of 200, meaning that 120 units would be free for other tasks during the startup.

Note that the nice value in procd applies for the whole process lifetime, not just for startup.
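If you only want the lower priority during startup, the app can change it from within instead of relying on procd. A minimal sketch using getpriority(2)/setpriority(2), where heavy_startup_work() is a hypothetical placeholder; note that restoring the old (lower) niceness afterwards requires root or CAP_SYS_NICE:

/* Minimal sketch: lower priority only while initialising. */
#include <errno.h>
#include <stdio.h>
#include <sys/resource.h>

static void heavy_startup_work(void) { /* hypothetical init work */ }

int main(void)
{
    errno = 0;
    int old = getpriority(PRIO_PROCESS, 0);  /* 0 = calling process */
    if (old == -1 && errno != 0) {           /* -1 is a valid niceness */
        perror("getpriority");
        return 1;
    }

    if (setpriority(PRIO_PROCESS, 0, 19) == -1) {  /* lowest priority */
        perror("setpriority");
        return 1;
    }

    heavy_startup_work();  /* runs politely at niceness 19 */

    /* Restore the original niceness; an unprivileged process cannot
     * decrease its nice value, so this step needs root/CAP_SYS_NICE. */
    if (setpriority(PRIO_PROCESS, 0, old) == -1)
        perror("setpriority restore");

    return 0;
}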

And as an answer to your question in How to Change Process Priority? - #3 by davidgates , the default nice value is 0.
"Niceness" ranges from -20 (highest priority) to 19 (lowest priority).

Note in my example below (taken with the "htop" app), in the "NI" column, that

  • most apps have the default 0,
  • the NTP daemon is priority-hungry with a niceness of -15,
  • the statistics collection apps are more polite, with nlbwmon at 19 and collectd at 5.
root@router1:~# htop

    0[||                                             1.3%]   Tasks: 23, 10 thr; 1 running
    1[|                                              0.7%]   Load average: 0.00 0.00 0.00
  Mem[||||||||||||||||                         98.8M/465M]   Uptime: 3 days, 18:29:03
  Swp[                                              0K/0K]

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
19076 root       20   0  1756  1548   872 R  1.3  0.3  0:00.13 htop
    1 root       20   0  1400   860   680 S  0.0  0.2  0:15.16 /sbin/procd
  301 ubus       20   0  1052   692   596 S  0.0  0.1  0:03.00 /sbin/ubusd
  302 root       20   0   704   512   476 S  0.0  0.1  0:00.03 /sbin/askfirst /usr/libexec/login.sh
  336 root       20   0   816   568   524 S  0.0  0.1  2:15.61 /sbin/urngd
 1013 logd       20   0  1152   788   540 S  0.0  0.2  0:00.39 /sbin/logd -S 128
 1065 root       20   0  2004  1140   836 S  0.0  0.2  0:01.24 /sbin/rpcd -s /var/run/ubus/ubus.sock -t 30
 1330 root       20   0   888   700   660 S  0.0  0.1  0:00.01 /usr/sbin/dropbear -F -P /var/run/dropbear.1.pid -p 22 -K 3
 1449 root       20   0  4484  2796  2500 S  0.0  0.6 22:04.29 /usr/sbin/hostapd -s -g /var/run/hostapd/global
 1450 root       20   0  4388  1884  1684 S  0.0  0.4  0:05.63 /usr/sbin/wpa_supplicant -n -s -g /var/run/wpa_supplicant/g
 1555 root       20   0  1572   932   788 S  0.0  0.2  1:24.78 /sbin/netifd
 1832 root       20   0  1108   764   728 S  0.0  0.2  0:00.22 udhcpc -p /var/run/udhcpc-eth0.2.pid -s /lib/netifd/dhcp.sc
 1838 root       20   0   844   604   552 S  0.0  0.1  0:01.06 odhcp6c -s /lib/netifd/dhcpv6.script -P0 -t120 eth0.2
 1975 root       20   0  1260   788   660 S  0.0  0.2  1:18.43 /usr/sbin/odhcpd
 2490 root       20   0  4272  3516  2704 S  0.0  0.7  0:01.07 /usr/sbin/uhttpd -f -h /www -r router1 -x /cgi-bin -u /ubus
 4248 root        5 -15  1112   708   668 S  0.0  0.1  0:00.45 /usr/sbin/ntpd -n -N -S /usr/sbin/ntpd-hotplug -p 0.openwrt
 4629 root       39  19  1408   768   652 S  0.0  0.2  7:02.45 /usr/sbin/nlbwmon -o /var/lib/nlbwmon -b 524288 -i 24h -r 3
 5756 dnsmasq    20   0  7840  7476   740 S  0.0  1.6 30:20.17 /usr/sbin/dnsmasq -C /var/etc/dnsmasq.conf.cfg01411c -k -x
 8231 root       25   5  5996  2376  1500 S  0.0  0.5  1:10.48 /usr/sbin/collectd -C /tmp/collectd.conf -f
 8244 root       25   5  5996  2376  1500 S  0.0  0.5  0:00.76 /usr/sbin/collectd -C /tmp/collectd.conf -f
 8245 root       25   5  5996  2376  1500 S  0.0  0.5  0:22.07 /usr/sbin/collectd -C /tmp/collectd.conf -f
 8246 root       25   5  5996  2376  1500 S  0.0  0.5  0:01.99 /usr/sbin/collectd -C /tmp/collectd.conf -f
 8247 root       25   5  5996  2376  1500 S  0.0  0.5  0:02.01 /usr/sbin/collectd -C /tmp/collectd.conf -f
19063 root       20   0   912   664   608 S  0.0  0.1  0:00.11 /usr/sbin/dropbear -F -P /var/run/dropbear.1.pid -p 22 -K 3
19064 root       20   0  1116   904   856 S  0.0  0.2  0:00.01 -ash

May we take a step back and explain what the issue is, exactly? Just a number in a report? Does the app take too long to start? Does it affect other services?

I feel we are just trying to fix a non-issue here...
