#!/bin/sh /etc/rc.common

START=97
STOP=4
USE_PROCD=1

start_service() {
	procd_open_instance
	procd_set_param command /root/program.sh
	# uncomment if you want procd to restart your script if it terminated for whatever reason
	#procd_set_param respawn
	procd_close_instance
}
If 'program.sh' terminates, e.g. owing to an error, 'service X status' still shows 'running'; I would like 'service X status' to show 'inactive' instead once the program has terminated.
I expect so. /etc/rc.common checks for a predefined custom function, status_service(), and you can code whatever you like in that function to determine your status.
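As a minimal sketch of that idea (the function body is illustrative, and '/root/program.sh' is just the example command from the script above), a custom status could simply check whether the command's process is still alive:

```shell
# Custom status hook: /etc/rc.common picks up status_service() in the
# init script in place of the default procd status logic.
status_service() {
	# Illustrative check: is our command still running at all?
	if pgrep -f '/root/program.sh' >/dev/null 2>&1; then
		echo "running"
		return 0
	else
		echo "inactive"
		return 3
	fi
}
```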
All my advice is theoretical but based on reviewing the procd base scripts.
I see from the documentation that there is a respawn concept:
The comments seem to me a little cryptic:
# respawn automatically if something died, be careful if you have an alternative process supervisor
# if process exits sooner than respawn_threshold, it is considered crashed and after 5 retries the service is stopped
# if process finishes later than respawn_threshold, it is restarted unconditionally, regardless of error code
# notice that this is literal respawning of the process, no in a respawn-on-failure sense
So it respawns if something died, but it is a literal respawning of the process, not respawning in a respawn-on-failure sense?
In any case, I'm looking for something a bit different.
I don't want procd to respawn service on termination. I just want the status reported by procd to reflect that the command exited.
If respawn can detect the termination (in order to respawn the process on termination and eventually stop the service after e.g. 5 failed attempts), then it strikes me as a deficiency in OpenWrt's procd service implementation that 'service X status' does not likewise pick up on the command having terminated.
I think 'service X status' should not show 'running' if the command terminated and respawn is not set, right? Surely a service is not running if respawn would have been needed to bring it back to life? That service crashed to a halt and should be stopped, just like after the five failed respawn attempts:
if process exits sooner than respawn_threshold, it is considered crashed and after 5 retries the service is stopped
But perhaps I am misunderstanding something or missing something obvious?
Note that ubus correctly reports "running": false but the shell script wrappers report any existing, non-unknown instance as "running". It should probably gain a further state "present, not running".
Then /etc/init.d/test status will report "running" with code 0, "running (2/3)" with code 0, "running (1/3)" with code 0, and "not running" with code 5 at 1s, 11s, 21s and 31s after service start, respectively.
If the intent is to programmatically query service state, then maybe consider bypassing all the shell fluff entirely and directly checking:
ubus call service list '{ "name": "yourservicename" }' | \
jsonfilter -l 1 -e '@[*].instances[*].running'
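For instance, a small wrapper (a hedged sketch; the script name and default service name here are placeholders) could turn that query into an exit code suitable for scripting:

```shell
#!/bin/sh
# Hypothetical helper: exit 0 if every instance of the named service
# reports running:true over ubus, non-zero otherwise.
svc="${1:-yourservicename}"

state=$(ubus call service list "{ \"name\": \"$svc\" }" | \
	jsonfilter -e '@[*].instances[*].running')

# jsonfilter prints one "true"/"false" per instance; fail if any is
# false, or if there is no output at all (service/instances missing).
[ -n "$state" ] || exit 1
for s in $state; do
	[ "$s" = "true" ] || exit 1
done
exit 0
```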
The context for me is for cake-autorate, which is launched (thanks to your help some time ago) with this procd service wrapper:
which launches this launcher:
Namely I would like 'service cake-autorate status' not to show 'running' when cake-autorate.sh has errored out or crashed (e.g. owing to a configuration error or bug). cake-autorate.sh cleans up after itself and that is caught by the cake-autorate_launcher.sh (which may run multiple instances for multiple interfaces).
Given your example showing multiple instances, looks to me like it might be a good idea to fold the cake-autorate_launcher.sh into the procd service script? Any chance you could help with that?
@moeller0 does it make sense to you to keep alive cake-autorate instances separately so that if one goes down it doesn't tear the others down? I think this makes sense, but I wonder what you think?
@jow can I test your code above? In my /lib/functions/procd.sh (RT3200 with 22.03.05) I see:
_procd_status() {
	local service="$1"
	local instance="$2"
	local data

	json_init
	[ -n "$service" ] && json_add_string name "$service"

	data=$(_procd_ubus_call list | jsonfilter -e '@["'"$service"'"]')
	[ -z "$data" ] && { echo "inactive"; return 3; }

	data=$(echo "$data" | jsonfilter -e '$.instances')
	if [ -z "$data" ]; then
		[ -z "$instance" ] && { echo "active with no instances"; return 0; }
		data="[]"
	fi

	[ -n "$instance" ] && instance="\"$instance\"" || instance='*'
	if [ -z "$(echo "$data" | jsonfilter -e '$['"$instance"']')" ]; then
		echo "unknown instance $instance"; return 4
	else
		echo "running"; return 0
	fi
}
Given that the default service status procedure implementation is buggy, or at least behaves unintuitively, you would need to override the default and provide a custom status_service() procedure in your init script. It would be a copy of _procd_status() from procd.sh with something similar to my suggested fix applied, and with the service name hardcoded.
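A sketch of what such an override might look like (this is an assumption-laden illustration, not the actual fix: it hardcodes 'cake-autorate' as the service name and consults each instance's "running" flag, which the stock _procd_status() ignores):

```shell
status_service() {
	local data running

	# Ask procd for this service's state; "cake-autorate" is hardcoded
	# here, per the surrounding discussion.
	data=$(ubus call service list '{ "name": "cake-autorate" }' | \
		jsonfilter -e '@["cake-autorate"].instances')
	[ -z "$data" ] && { echo "inactive"; return 3; }

	# Unlike the stock _procd_status(), check the per-instance
	# "running" flag rather than treating mere presence as running.
	running=$(echo "$data" | jsonfilter -e '@[*].running')
	case "$running" in
		*false*|"")
			echo "not running"
			return 5
			;;
		*)
			echo "running"
			return 0
			;;
	esac
}
```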
BTW for the folding the multi-instance launcher into single procd script, does this look about right? It's OK to use /bin/bash (since cake-autorate requires bash anyway)?
#!/bin/bash /etc/rc.common

START=97
STOP=4
USE_PROCD=1

start_service() {
	cake_instances=(/root/cake-autorate/cake-autorate_config*sh)
	for cake_instance in "${!cake_instances[@]}"
	do
		procd_open_instance "${cake_instance}"
		procd_set_param command /root/cake-autorate/cake-autorate.sh "${cake_instances[cake_instance]}"
		# uncomment if you want procd to restart your script if it terminated for whatever reason
		#procd_set_param respawn
		procd_close_instance
	done
}
At the moment I presume that if one instance errors out the others will be kept running (and status will show 'running' regardless of whether any of the instances are actually running)?
Whether it shows running depends on how you implement status_service() in your init script but yes, procd will keep other instances running if one crashes.
I would guess these are better handled as individual entities with separate "fates".
Are you trying to abandon the launcher script completely, or just for procd? For testing on the command line the script is quite convenient; you know my crude way of using screen to background running instances without ever installing/running them as a service?
Keeping and maintaining the launcher script for that exact reason. This is just modifying the procd script to act in place of the launcher rather than call the launcher, which was a bit cumbersome. I didn't realise procd scripts could call multiple things until now.