When changing the timespan to anything other than 1 hour, the ICMP Drop Rate Statistics (collectd-mod-ping) graph changes its vertical scale range to something I don't understand.
When the timespan is 1 hour, the vertical scale shows 0...1% (which makes sense and I understand).
However, when I change the timespan to 1 week, for example, the vertical scale (which should be in %) changes to a range of 0 m to 1000 m, which does not make any sense to me. How should I read such data? Or is it a bug? (22.03.2)
I did not check the source code or the config files, but I think the "m" stands for milli, so "1000 m" = 1, and the vertical scale would still be 0 to 1%. Does that make sense, considering the change in averaging periods?
Thank you @spence. The percentage should indeed be very low, but I don't think that's the case here (or at least I'm not grasping how these graphs work). For example, see below the data for this Tuesday.
In the weekly graph it shows a spike just below "1000 m".
This same spike in the monthly graph shows just above "200 m".
It is milli: 1/1000.
Rrdtool & rrdgraph set the scaling automatically, and these really small percentages get shown a bit strangely.
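
Roughly, what happens is that the axis labels get an SI prefix picked automatically, so values below 1 end up labelled in milli. A small sketch of the idea (not rrdtool's actual code, just an illustration):

```python
# Rough illustration of the SI auto-scaling on the y-axis labels.
# This is not rrdtool's own code, just the idea: values below 1
# get labelled with the "m" (milli = 1/1000) prefix.

def si_label(value):
    """Format a number roughly the way rrdgraph labels axis ticks."""
    prefixes = [(1e6, "M"), (1e3, "k"), (1.0, ""), (1e-3, "m"), (1e-6, "u")]
    for factor, prefix in prefixes:
        if abs(value) >= factor:
            return f"{value / factor:g} {prefix}"
    return "0 "

print(si_label(1.0))    # "1 "
print(si_label(0.9))    # "900 m"
print(si_label(0.208))  # "208 m"
```

So "1000 m" on the axis is just the value 1, and "200 m" is 0.2.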
So?
There is a narrow spike of 1000 in a short period, and when it is shown at the next level, over a longer period, that narrow spike gets averaged down by the surrounding low values.
(10+10+1000+10+10)/5 = 208
The rrd database always averages (or sums) short-term data into the next longer-period data series. There are actually five data series with different periods, and detailed data is kept only for the shortest, 2-hour period; all longer periods (day, week, month, year) are stored as averages of the shorter-period data points (sometimes as sums, depending on the data style: a counter or a rate). That keeps the database size constant.
(We use a really ancient but small version of rrdtool)
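
To make the averaging concrete, here is a toy model of what the consolidation does to a narrow spike (the numbers are made up; this is not the exact RRA layout that luci-app-statistics defines):

```python
# Toy model of RRD consolidation: fine-grained points get averaged into one
# point of the next, longer-period data series. The numbers are made up for
# illustration; this is not the exact RRA layout used by luci-app-statistics.

def consolidate(samples, points_per_bucket):
    """Average consecutive groups of samples into one point per group."""
    out = []
    for i in range(0, len(samples), points_per_bucket):
        chunk = samples[i:i + points_per_bucket]
        out.append(sum(chunk) / len(chunk))
    return out

# A mostly-low drop rate with one narrow spike (values in "milli"):
weekly_points = [10, 10, 1000, 10, 10]

# The same data after consolidation into the longer-period series:
monthly_points = consolidate(weekly_points, 5)
print(monthly_points)   # [208.0] -> the 1000 spike now shows as ~200 m
```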
Edit:
Maybe I should edit that chart definition in LuCI statistics to avoid the SI scaling units and show plain numbers instead.
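
For example, rrdgraph has a --units-exponent option, where 0 disables the SI prefix scaling of the y-axis. A rough sketch of the idea only, not the actual LuCI definition change; the RRD path and DS name below are assumptions for a typical collectd ping setup, and the ancient rrdtool we bundle may not even support the option:

```python
# Sketch only: render the drop-rate graph with plain y-axis numbers by passing
# --units-exponent 0 to rrdtool graph. The real fix would go into the LuCI
# statistics chart definition; the RRD path, DS name and host below are
# assumptions, not taken from an actual installation.
import subprocess

subprocess.run([
    "rrdtool", "graph", "/tmp/droprate.png",
    "--start", "-1w",                 # last week
    "--units-exponent", "0",          # 0 = no k/m/u prefix on the y-axis
    "--vertical-label", "drop rate",
    "DEF:drop=/tmp/rrd/router/ping/ping_droprate-8.8.8.8.rrd:value:AVERAGE",
    "LINE1:drop#FF0000:drop rate",
], check=True)
```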