Hi @aparcar, the current slow page load is indeed a big problem. But before thinking about "migrating" the whole wiki, which will most likely be a pain in the *ss, I encourage you to do the following first:
Start looking into caching, production-ready configs and other performance improvements. I'll try to break your setup down into several key components below.
Nginx
Let's start with your Nginx server. Try to let the browser cache all static images by default, using:
add_header Access-Control-Allow-Origin "*";
add_header Cache-Control "public, no-transform";
More info (snippet): https://gitlab.melroy.org/-/snippets/87#LC13
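For illustration, a minimal sketch of where those headers could live; the file-extension regex and the 30-day expiry are my own assumptions, tune them to your assets:
location ~* \.(?:png|jpe?g|gif|svg|ico|css|js|woff2?)$ {
    # Let browsers cache static assets for 30 days
    expires 30d;
    add_header Access-Control-Allow-Origin "*";
    add_header Cache-Control "public, no-transform";
}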
I'm only scratching the surface of Nginx caching options here, so please dive deeper.
Also, Nginx by default is not using a production-ready config. Disable access logging for static files, or other files you don't care about, to reduce the log I/O load (access_log off;).
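For example (the favicon location is purely illustrative):
location = /favicon.ico {
    # Don't write these requests to the access log, and don't log missing files
    access_log off;
    log_not_found off;
}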
Increase the Nginx worker threads + worker connections, depending on your hardware:
thread_pool default threads=16 max_queue=65536;
events {
# Determines how many clients will be served per worker
# max clients = worker_connections * worker_processes
worker_connections 65535;
# Optimized to serve many clients with each thread, for linux only
use epoll;
# Accept as many connections as possible
multi_accept on;
}
Please increase the proxy buffer sizes as well, for example:
# Buffer the response from the backend server, which contains the headers.
proxy_buffer_size 32k;
# Size limit for buffers that can be busy sending the response to the client
# while the response is not yet fully read from the backend
proxy_busy_buffers_size 32k;
# 128 buffers of 16k each (up to 2 MB) for buffering the response body from the backend
proxy_buffers 128 16k;
If you have a lot of bots... try to introduce limit_req_zone and rate limits. Reduce the load by rate limiting to e.g. 4 requests/second, with a burst of for example 400 requests; see the sketch below.
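A minimal sketch, assuming per-IP limiting (the zone name "perip" and the sizes are my own choices):
# In the http block: track clients by IP in a 10 MB shared zone, 4 requests/second
limit_req_zone $binary_remote_addr zone=perip:10m rate=4r/s;
# In a server or location block: queue bursts of up to 400 requests
# (excess requests are delayed to 4 r/s; anything beyond the burst gets a 503)
limit_req zone=perip burst=400;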
Gzip can help reduce the transfer size over the network, but it increases CPU load. Again, I refer back to my Nginx snippet above.
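A minimal gzip sketch (compression level, minimum length and MIME types are my own defaults):
# Compress text-based responses; level 5 is a common CPU/size trade-off
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/css application/javascript application/json image/svg+xml;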
You can actually also monitor your Nginx access log and block bots on your server using Fail2Ban. Snippet about fail2ban: https://gitlab.melroy.org/-/snippets/612
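For instance, a jail.local sketch using the nginx-limit-req filter that ships with Fail2Ban (the log path and timings are assumptions, adjust for your distro):
# Ban IPs that repeatedly trip Nginx's limit_req
[nginx-limit-req]
enabled  = true
port     = http,https
filter   = nginx-limit-req
logpath  = /var/log/nginx/error.log
findtime = 600
maxretry = 10
bantime  = 3600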
PHP
Then we have PHP... Be sure to update your PHP version to at least PHP 8.2, in order to get the best performance improvements! PHP 5 and 7 should be avoided.
Next, enable OPcache, which is crucial for PHP performance:
[opcache]
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=512
opcache.interned_strings_buffer=128
opcache.max_accelerated_files=100000
opcache.validate_timestamps=1
opcache.revalidate_freq=60
opcache.save_comments=1
opcache.jit_buffer_size=500M
See this snippet for more info: https://gitlab.melroy.org/-/snippets/91
Important: after an update, a PHP-FPM service reload is needed to clear the OPcache.
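For example, on a systemd-based distro (the service name varies per distro and PHP version):
# Reloads PHP-FPM gracefully, which resets the OPcache
sudo systemctl reload php8.2-fpm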
DokuWiki
Be sure to follow all the good practices of the wiki itself regarding caching:
https://www.dokuwiki.org/devel:caching
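As an illustration, a few cache-related settings in conf/local.php (a sketch; verify the option names and semantics against the docs for your DokuWiki version):
<?php
// Maximum age of cached pages, in seconds (here: 24 hours)
$conf['cachetime'] = 60*60*24;
// Compact CSS/JS output and gzip the XHTML output
$conf['compress'] = 1;
$conf['gzip_output'] = 1;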
Database
DokuWiki by default uses plain text files on disk instead of a database. Be sure your server is running on fast, enterprise-grade SSDs!
I don't know whether you are using a data plugin; let me know. If so, you might want to tweak the SQL server as well.
In case of MySQL/MariaDB I also have a snippet for that: https://gitlab.melroy.org/-/snippets/92
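A minimal MariaDB tuning sketch (all values are assumptions that depend on your RAM and workload; see the snippet above for details):
[mysqld]
# Size the InnoDB buffer pool so the working set stays in memory
innodb_buffer_pool_size = 1G
innodb_log_file_size = 256M
max_connections = 200
tmp_table_size = 64M
max_heap_table_size = 64M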
Monitoring
Monitoring is key to understanding your bottlenecks, backend errors and overall server performance.
Try using Telegraf and InfluxDB with Grafana, or something similar, to monitor your server/VM. Also monitor your Nginx logs, etc.
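A minimal sketch of feeding Nginx metrics into Telegraf, assuming you expose stub_status on localhost (the port and path are my own choices):
# Nginx: expose basic connection/request counters, locally only
server {
    listen 127.0.0.1:8080;
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
# Telegraf (telegraf.conf): scrape that endpoint
[[inputs.nginx]]
  urls = ["http://127.0.0.1:8080/nginx_status"]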
EDIT: You can also look into Varnish. And what about Fastly, which helps you with exactly this: https://www.fastly.com/fast-forward#apply-for-the-fast-forward-program