A note about using ChatGPT for OpenWrt Answers

I want to make it clear: ChatGPT is not an authoritative source for information about OpenWrt or networking in general (among many other topics).

Current AI systems generate human-sounding answers, pulling from various sources on the internet. The problem is that they have no mechanism to ensure the accuracy of their responses, nor do they provide context for how they arrived at an answer.

Please do not post ChatGPT ‘conversations’ or ‘solutions’ here, as they are more likely to cause confusion than to help, and they are certainly not reliable. Further, if you choose to use an AI-based solution, you do so at your own risk.

The contributors to the OpenWrt forums are human and generally highly skilled, and will provide far better support than you’ll ever get with an AI system (at least for the foreseeable future). What makes this forum so great is the exchange of information that only humans can do reliably… let’s keep that going!

36 Likes

The main problem is that ChatGPT, at its current level, cannot reliably verify the information it provides or validate it against best practices, so its answers can be outdated, incomplete, nonsensical, or outright harmful. However, it can probably serve as a good source of inspiration by showing possible research directions.

5 Likes

In none of the instances that came up recently have I seen any of that. Yes, it sounded convincing, it sounded authoritative, it even went as far as suggesting 'detailed' configuration files, but technically it was far from the truth and from the capabilities of the hardware that actually exists today.

2 Likes

This likely depends on the difficulty of the specific problem, but it can provide answers that are sub-optimal yet more or less working.
I think it needs a good, up-to-date machine-readable knowledge base, as well as some sort of integration with test-automation frameworks, to improve the quality of its answers.

1 Like

Out of interest, how do we know the answers were provided by ChatGPT? Were responders openly admitting to that?

In the linked reply, the user says that they used ChatGPT to write a script for them.

3 Likes

So the kind of AI that ChatGPT represents (at least up to version 3; I have not looked deeply enough into what changes version 4 brought) is really only able to pick up statistics from its input corpus and synthesize output with similar statistics when prompted (see the toy sketch after the list below). There is a bit more to it, but that is what these systems do at their core. If such a system is highly specialized and trained on a carefully curated set of inputs of known truth (and that truth value is part of the training data), it can return surprisingly useful "answers".

However, the required curation is tedious and voluminous work that the folks who can handle the math/programming on the algorithmic side typically do not want (or are not able) to perform, so the "solution" often is to use sub-sections of the internet as training data... That can work pretty well for a mostly innocent topic like cat pictures (I fear, however, to do a Google image search on that term), but in general, information from the internet needs to be taken with a shipload of salt... So essentially ChatGPT has three big problems:
a) GIGO, garbage in, garbage out: using a bad training set
b) extrapolation/interpolation: not all problems can be solved by extrapolating from the training-set statistics...
c) being essentially built to tell convincing, not true, "facts"... turns out these are two orthogonal categories, who'd have thunk that?
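
To make the "pick up statistics and synthesize" point concrete, here is a deliberately tiny bigram Markov-chain sketch in Python. Everything in it is a made-up toy of my own, vastly simpler than what ChatGPT actually does internally, but it shows the same core mechanic of reproducing the statistics of whatever corpus it was trained on (and why GIGO applies):

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """For each word, record every word that follows it in the corpus."""
    words = corpus.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Produce text by repeatedly sampling a statistically plausible next word."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: this word never had a successor in training
        # random.choice over the raw follower list samples proportionally to
        # how often each continuation appeared in the training corpus
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# The model can only ever echo the statistics of its training text:
# feed it wrong statements and it will fluently emit wrong statements.
corpus = "the router drops packets and the router reboots when the switch drops frames"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The output is grammatical-looking word salad: the model never "understands" routers or switches, it only knows which words tended to follow which. Scale that idea up by many orders of magnitude and you get the flavour of the problem.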

7 Likes

The other day (after thinking for a long time about my first question to the AI) I asked, and it told me something creepy about the Nashville incident... from the 1990s. The information it had up to 2021 is definitely relevant - if not just pure coincidence.

So creepy (or rather, info very different from what has already been published)... I asked it how it knew the information... but I can't verify its sources.

My point: you should be able to ask the AI how it knows the information it provided - although that response could be as flawed as the information itself.

1 Like

The AskHistorians subreddit had some posts on that topic, and it is pretty evident that ChatGPT isn't suitable for delivering accurate/factual information on quite a few matters. You would need to spoon-feed it so much information that its actual helpfulness is diminished. See here: https://www.reddit.com/r/AskHistorians/comments/11u21ie/the_consensus_from_a_brief_search_of_previous/ and here: https://www.reddit.com/r/AskHistorians/comments/10yeh01/meta_can_we_get_two_new_regulations_regarding_bad/

1 Like

I've been using ChatGPT 3.5 and 4 for other science-based research/writing, and it's usually not up to my standard or doesn't produce what I'm asking for. Even when I spoon-feed it a lot of data and try to get it to refer to existing research, version 4 often just takes my wording as-is, and both versions write a lot of "fluff", which might be good for a certain level of communication, but not great for specifics.

I have a hard time believing that the ChatGPT model could "make sense" of some of the massive forum threads for any given model of router, since so many aspects of OpenWrt and different router capabilities are discussed at once, yet are often interrelated. Also, following the way people respond to each other in these forums can take a human many hours of going back through other threads and fundamentals to understand the context, and therefore the conversation. I doubt ChatGPT 3.5 or 4 can give truthful specifics about OpenWrt and a piece of hardware.

3 Likes

Well, that seems a quite tedious way to get a notorious "bull-shitter"/liar to stick to the facts... Once the dust settles I am sure we will find some sensible use-cases for this generation of machine-learning tools (or we already have; DeepL's translations are often a decent starting point), I just predict these will fall far below the current hype... It is not as if human writers able to spin a convincing yarn are in short supply already (both negatively, when making facts up (aka an economical handling of the truth), and positively, when making whole stories up (aka literature))...

Well, ChatGPT will not make "sense" out of anything anyway; it will pick up the underlying statistics, so if enough folks write untested and wrong hypotheses, ChatGPT is likely to preferentially regurgitate them... To repeat, these tools do not "understand" anything, they just extract commonality from their training corpus... (as before, this is painting with a broad brush, the details do differ, but at the core this IMHO still holds).

Yes, and not the least because ChatGPT has no concept of "truth", and without one "truthful" is a tricky qualifier :wink:

Side-note: I would not be amazed if some comment in this thread had been written by ChatGPT "itself"; after all, this is a somewhat fuzzy topic where a convincing-sounding position does not need supporting facts :wink: