Thank you @slh for the ping, although I had already seen this thread and was "sleeping on it" before responding.
First of all, I am sure most if not all developers will agree with me that documentation, although very important, is often a difficult and onerous task. It consumes a great deal of time and effort, and time is always in short supply.
The wiki approach is very useful because it allows a dev to very quickly add at least a documentation template, so that dev and experienced users can then contribute to improving the wiki-based documentation. It usually works very well and is "self-moderated": changes can and should include a reason in the edit summary (these are hidden from the actual document), and this generates a notification, if configured, to the original author and subsequent contributors.
A change can then be checked: it is approved by doing nothing, accepted with changes by further editing, or rejected by reverting to a previous version. This is in some respects similar to a "git" workflow, I guess.
Using "AI" as a research tool is something quite new, and here we need to be very careful.
Without going into any tech details, we have "AI - Artificial Intelligence", a generic term, and we have "LLM - Large Language Model", a model trained by absorbing written language sources (e.g. the Internet) for use by an AI service.
So an AI service based on an LLM will have lots of data, but no means of knowing which data is real or relevant.
Basically a new type of search engine that outputs in nice sentences, hopefully with references.
Very useful, and it can make writing wiki pages a little easier by giving the author a list of references on the subject as a starting point, but nothing more.
The problem is, as we can see, that someone could, for their own reasons, completely replace an existing wiki page with an LLM-generated one, with little or no verification.
Reasons for doing this are inevitably selfish and dishonest.
An example could be abusing a wiki to publish something to put on your CV.
The perpetrator could be lucky and pick a wiki page that does not have an active author or a list of editors with notifications set up.
The situation will then be amplified by LLMs lapping up the abused wiki page and repeating it over and over.
This abuse happened with the page in question. Rejecting/reverting the obvious LLM-generated edits only resulted in verbal abuse and a re-edit.
Eventually I added the current RED warning on that page:
WARNING! This document contains many errors and misconceptions.
Many paragraphs are produced using an online LLM and added without any verification
This page should really be removed, in my opinion.
I had to resort to creating a new wiki page containing my original contents.
Can we ban AI for creating Wiki pages?
AI does not (yet) create Wiki pages, at least not on the OpenWrt Wiki as far as I can see.
We should use AI as a research/translation/formatting tool, and it is only going to get better, probably very quickly.
The real issue here is that the mechanism for "self-moderation" depends on contributors working together.
Clearly some "contributors" have their own agenda.
A solution would be that when an edit is made by someone other than the original author, it is not immediately made live. The author would get a notification and be able to review the change. If the author does not respond within a set time (maybe 7 days?), the changes would be automatically published.
If an editor gets a set number of edits rejected, they should be blocked automatically from further edits.
Someone who is trusted and contributes a lot could be flagged as an "author" by the original author.
In this way, there would be no need for an admin to get involved, unless the original author fails totally to respond for whatever reason.
I do not know if this solution, or something like it, is supported by the Wiki software. Someone else with a detailed knowledge of it would have to chip in.
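To make the idea concrete, here is a rough, hypothetical sketch of the review logic I have in mind. Everything in it is made up for illustration (the 7-day window, the rejection limit, the function names); it is not how the actual wiki software works, just the behaviour I am proposing.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

AUTO_PUBLISH_AFTER = timedelta(days=7)   # assumed review window ("maybe 7 days?")
MAX_REJECTIONS = 3                       # assumed number of rejections before an automatic block

@dataclass
class PendingEdit:
    editor: str
    content: str
    submitted: datetime
    published: bool = False
    rejected: bool = False

@dataclass
class WikiPage:
    authors: set                          # original author plus anyone later flagged as an "author"
    content: str
    pending: list = field(default_factory=list)

rejection_count: dict = {}                # per-editor count of rejected edits
blocked: set = set()                      # editors blocked from further edits

def notify(recipients, message):
    # Placeholder for whatever notification the wiki would send (email, watchlist, etc.)
    print(f"notify {sorted(recipients)}: {message}")

def submit_edit(page: WikiPage, editor: str, new_content: str, now: datetime):
    if editor in blocked:
        raise PermissionError(f"{editor} is blocked from editing")
    if editor in page.authors:
        page.content = new_content        # authors publish immediately, as today
        return
    # Anyone else: the edit is held for review and the authors are notified.
    page.pending.append(PendingEdit(editor, new_content, now))
    notify(page.authors, f"Pending edit from {editor}, please review")

def review(page: WikiPage, edit: PendingEdit, approve: bool):
    if approve:
        page.content = edit.content
        edit.published = True
    else:
        edit.rejected = True
        rejection_count[edit.editor] = rejection_count.get(edit.editor, 0) + 1
        if rejection_count[edit.editor] >= MAX_REJECTIONS:
            blocked.add(edit.editor)      # automatic block after repeated rejections

def auto_publish(page: WikiPage, now: datetime):
    # Run periodically: anything the authors have ignored for the whole window goes live.
    for edit in page.pending:
        if not edit.published and not edit.rejected and now - edit.submitted > AUTO_PUBLISH_AFTER:
            page.content = edit.content
            edit.published = True

def flag_as_author(page: WikiPage, original_author: str, trusted_editor: str):
    # A trusted, frequent contributor is promoted by the original author and skips review from then on.
    if original_author in page.authors:
        page.authors.add(trusted_editor)
```

The point of the sketch is simply that no admin needs to be involved anywhere in that flow; everything is handled between the authors, the editors, and a timer.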