Well, let's put it this way: it's perfectly usable as-is for people who understand that there's always going to be some amount of aliasing with these things.
With the above in mind, I do actually understand your point, and yes, it would seem to make more sense for a wider audience if the value were averaged over a couple of samples. If you wish, you could e.g. open a ticket on OpenWrt's GitHub repo suggesting such an adjustment.
I really don't understand the joke you two are trying to make here. I even marked the value in red in the screenshots, and you're trying to tell me something about aliasing of a graph that has been irrelevant since post 3.
I'm not making a joke. I literally just said that I do understand your point and I agree that averaging it a little would make the value make more sense to a wider audience. I do think a small adjustment to the way the value is shown would be a beneficial tweak. I still do not agree that it's wrong as it is, however.
Thank you. I think everything is clear now. Part of the problem was that I wrote my answer while you guys were still editing, so you understood what I was writing about a little bit later, but I had answered without having seen the edit.
Anyway, my initial question was whether somebody can reproduce it, so I can report it and confirm it's not just me having a bad snapshot.
Wait...you haven't verified this behavior on a stable version?
Across 18 production devices, zero exhibit what you describe when I count the average over the 3-second interval.
BUT... if I ignore the graph/values/time interval/whatever (and all other SNMP recordings, values, graphs, NetFlow, etc.), then yes, the input and output values never reach the peak, or whatever (because it's still unclear what you believe they really should be).
...but the values/graphs/whatever on my clients/SNMP/etc. match...so again:
(I recall someone wanting to make the interval less than 3 seconds, and it caused the browser/device issues.)
Since you asked me if I tested stable, and you wrote that you don't have problems, I assume you run stable, which uses iptables with firewall3.
I run a snapshot with nftables and firewall4, and I see the problem.
Since my graphs/values/etc. appear OK to me, and I (and at least one other poster) believe yours are OK too, you should test to verify the same behavior.
(Please recall, I do not see a problem: I can count the 3-second interval, average, and peak, and it makes complete sense to me in OpenWrt and any other software where sampling is done on a timed interval. If you insist the Input/Output should be higher, feel free to report this.)
We tried to explain it to you, but you don't seem to understand. Also, you feel that number is too low.
Feel free to make a bug report.
Also, using only values:
(131.54 + 193.32 + 303.10) / 3 == 209.32
(131.54 + 303.10) / 2 == 217.32 (this is quite close to the 214 Mbps value you quoted from Windows)
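To make the arithmetic above concrete: a small moving average over the sampled rates smooths out exactly the kind of aliasing being discussed. The sketch below is purely illustrative and is not LuCI's actual implementation; the 3-sample window and the Mbps values are taken from this thread.

```python
# Minimal sketch: smoothing instantaneous throughput samples with a
# small moving average. Window size of 3 is an assumption for
# illustration, not what LuCI does.
from collections import deque

def moving_average(samples, window=3):
    """Yield the running average of up to the last `window` samples."""
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        yield sum(buf) / len(buf)

samples = [131.54, 193.32, 303.10]  # Mbps, from the thread
print([round(v, 2) for v in moving_average(samples)])
# final value is (131.54 + 193.32 + 303.10) / 3 == 209.32
```

With averaging, a single momentary dip (like the 131.54 sample) pulls the displayed value down far less than showing each raw sample directly.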
(At first I didn't realize you wanted to ignore the graphs, despite them showing the same thing. And I should add, I do believe your connection may hit 303.10 Mbps at times, and also dip lower at times, so that would mean the Windows machine is less accurate in displaying it. I asked about the graph, and you didn't show Windows values until later; but I think you believe I was confused, since you focused on values, and so you never responded.)
The graphs are part of LuCI already, without needing to install any extra packages. It's also not a bug, as has been explained: just unfortunate timing of the sampling, combined with apparently no averaging.
Besides the above, I don't see why you chose to reply to me, since I wasn't complaining about any of this to begin with; I didn't make this thread.
After looking a little deeper at this, I can only reproduce it on an ODROID-H2+ running OpenWrt as a VM.
On Windows 10 with a 9900K and VMware, the value is correct with the same image as used on the H2+.
What device did you see this on?