Hi,
I have made some additions to the LuCI backup/restore script, mainly for databases; these dramatically increase execution time and cause page time-outs during backup/restore.
Because of OpenWrt updates, this issue has resurfaced. It was previously solved by following @Ansuel's recommendation (poll a result file):
<script type="text/javascript">//<![CDATA[
function refresh_status(x, data)
{
    var file = "/backup.txt?rand=" + Math.random();
    $( "#contain" ).load( file, function (data)
    {
        var status = document.getElementById('status');
        status.innerHTML = data;
    } );
}

XHR.poll(5, '<%=url('admin/system/flashops')%>', null,
    function(x, data)
    {
        refresh_status(x, data);
    } );
//]]></script>
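Since XHR.poll comes from the old LuCI client library, here is a framework-free sketch of the same poll-result-file idea using only standard browser APIs (fetch, setInterval). It assumes the backup script still writes its progress to /backup.txt under the web root; the #status element ID and the 5-second interval mirror the snippet above, and cacheBust() is a hypothetical helper name:

```javascript
// Append a random query string so the browser never serves a cached copy
// (same trick as the "?rand=" + Math.random() in the original snippet).
function cacheBust(path) {
  return path + '?rand=' + Math.random();
}

// Fetch the progress file and copy its text into the #status element.
function refreshStatus() {
  fetch(cacheBust('/backup.txt'))
    .then(function (resp) { return resp.text(); })
    .then(function (text) {
      var status = document.getElementById('status');
      if (status) status.innerHTML = text;
    })
    .catch(function () { /* ignore transient errors while polling */ });
}

// Poll every 5 seconds, like XHR.poll(5, ...) in the original.
if (typeof document !== 'undefined') {
  setInterval(refreshStatus, 5000);
}
```

The upside of polling a file is that no single HTTP request has to outlive the backup job, so none of the server-side timeouts below are in play for the status display.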
Since then, I have updated to a more recent trunk:
from:
commit 1b937cb14184b5ff9a7a75fbc5d226032f931c35
Author: Rafał Miłecki rafal@milecki.pl
Date: Wed Jul 3 11:16:22 2019 +0200
to:
commit 7cb721c03fdc163818f8114692229d0097d2f26b
Author: Adrian Schmutzler freifunk@adrianschmutzler.de
Date: Tue Jul 7 11:49:36 2020 +0200
where luci, the CGI handling, etc. have changed dramatically.
nginx.conf:

# Note: external servers (vhosts) should not redirect http to https, else redirect loop (eg: sme-server)
user root;
worker_processes auto;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    #access_log off;
    log_format openwrt
        '$request_method $scheme://$host$request_uri => $status'
        ' (${body_bytes_sent}B in ${request_time}s) <- $http_referer';
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 0;
    # client_body_buffer_size 10K;
    # client_header_buffer_size 1k;
    # client_max_body_size 1G;
    #large_client_header_buffers 2 1k;
    client_max_body_size 128M;
    gzip on;
    #gzip_http_version 1.1;
    gzip_vary on;
    #gzip_comp_level 1;
    gzip_proxied any;
    root /www;

    server {
        listen [::]:80 default_server;
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;
        server_name localhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:DHE+AESGCM:DHE:!RSA:!aNULL:!eNULL:!LOW:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!CAMELLIA:!SEED";
        ssl_session_tickets off;
        ssl_certificate /etc/ssl/domains/domain.crt;
        ssl_certificate_key /etc/ssl/domains/domain.key;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
        fastcgi_buffer_size 32k;
        fastcgi_buffers 4 32k;
        fastcgi_busy_buffers_size 32k;
        fastcgi_temp_file_write_size 32k;
        client_body_timeout 10;
        client_header_timeout 10;
        send_timeout 120; # 60 sec should be enough; if experiencing a lot of timeouts, increase this.
        output_buffers 1 32k;
        postpone_output 1460;

        location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
            expires 365d;
        }

        location /cgi-bin/luci {
            index index.html;
            include uwsgi_params;
            uwsgi_param SERVER_ADDR $server_addr;
            uwsgi_modifier1 9;
            uwsgi_pass unix:/var/run/luci-webui.socket;
        }

        location ~ /cgi-bin/cgi-(backup|download|upload|exec) {
            include uwsgi_params;
            uwsgi_param SERVER_ADDR $server_addr;
            uwsgi_modifier1 9;
            uwsgi_pass unix:/var/run/luci-cgi_io.socket;
        }

        location /luci-static {
            error_log stderr crit;
        }

        location /ubus {
            ubus_interpreter;
            ubus_socket_path /var/run/ubus.sock;
            ubus_parallel_req 2;
        }

        location ~ \.php$ {
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            if (-f $request_filename) {
                # Only throw it at PHP-FPM if the file exists (prevents some PHP exploits)
                #fastcgi_pass 127.0.0.1:1026; # The upstream determined above
                fastcgi_pass unix:/var/run/php7-fpm.sock;
            }
        }

        include locations/*.conf;
    }

    include vhosts/*.conf;
}
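One thing I noticed while reviewing this: the fastcgi_* timeouts above only apply to the PHP location. The LuCI locations go through ngx_http_uwsgi_module, which has its own timeouts (uwsgi_connect_timeout, uwsgi_send_timeout, uwsgi_read_timeout, all defaulting to 60 s). A sketch of raising them just for the long-running cgi-io location (the values are illustrative, not recommendations):

```nginx
location ~ /cgi-bin/cgi-(backup|download|upload|exec) {
    include uwsgi_params;
    uwsgi_param SERVER_ADDR $server_addr;
    uwsgi_modifier1 9;
    # uwsgi_* timeouts default to 60 s; raise them only where the
    # backup/restore can legitimately run long.
    uwsgi_connect_timeout 60s;
    uwsgi_send_timeout 600s;
    uwsgi_read_timeout 600s;
    uwsgi_pass unix:/var/run/luci-cgi_io.socket;
}
```

Since my failure appears at roughly 20 s rather than at any of nginx's 60 s defaults, the limit may instead be enforced on the uwsgi side (uWSGI has its own hard execution limit, harakiri, in its ini configuration), which nginx cannot override.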
Given the OpenWrt updates, I am unsure of the correct way to approach/solve this.
Questions:
- How do I identify what is timing out (at approx. 20 seconds)?
- Is there a timeout I can increase?
- If polling a result file is still the correct approach, how do I alter flash.js to achieve this?
- General suggestion: if there is no way to control script_timeout on a per-page basis, can it be added?
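For the first question, here is roughly how I have been trying to measure where the ~20 s goes, from a client machine (the hostname, path, and sysauth cookie value are placeholders):

```shell
# Time the backup request end-to-end. A hard stop near a round number
# (10 s, 20 s, 60 s...) usually points at whichever timeout fired.
curl -k -o /dev/null -s \
  -w 'http=%{http_code} connect=%{time_connect}s total=%{time_total}s\n' \
  --cookie 'sysauth=YOUR_SESSION_ID' \
  'https://router.example/cgi-bin/cgi-backup'

# Meanwhile, watch the router's log for upstream timeout or
# "premature close" messages (assuming nginx logs via syslog;
# otherwise tail whatever error_log points at):
logread -f | grep -i nginx
```

An HTTP 504 at ~60 s would implicate nginx's uwsgi/fastcgi read timeout; a dropped connection at ~20 s with nothing in nginx's log would suggest the limit is further back, in uwsgi or the script itself.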
Thanks,
Bill