It's been super fun (at least for me, although as a patent attorney by profession I can hardly bear disclosing and not patenting along the way, but I guess that's just the way open source works! At least all public disclosure with demonstrable publication dates blocks any third party from patenting. I mostly invalidate patents anyway).
Yes, you can just download the SDK and then build as above.
@moeller0 also, since we now have a baseline per reflector rather than just one baseline, how does bufferbloat detection work? When all baselines are exceeded, or just one?
It depends. IMHO we should start from the following assumption:
our main goal is to adjust our shaper to counter a close-by variable congestion link (either because of a truly variable rate or because of oversubscription and hence a variable share available for our traffic)
Congestion further down the line, like specific ISP peering points, is not our main concern, and we ignore it
-> then the consequence is that we should use a small but network-topologically diverse set of reflectors, so that they ideally all use different paths after the access link (not fully possible, but preferably they should not all sit behind the exact same peering link), and then check whether the minimal deltaOWD (we need to calculate these per reflector and direction) crosses our severity threshold, at which point we know all reflectors experienced a delay increase >= threshold.
If we are less concerned about the access link but want to control bufferbloat to as much of the internet as possible, we should just check the maximum deltaOWDs.
I believe, however, the first goal, tracking a close-by shared link, to be more relevant, and hence argue our consensus should be to take the minimum (as long as we make sure that failures in getting timestamps do not create arbitrarily small estimates).
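A minimal sketch of that consensus in awk, assuming a hypothetical $DELTAS file with one line per reflector (reflector, uplink delta, downlink delta); only if even the smallest delta crosses the severity threshold do we call it bufferbloat:

# $DELTAS (hypothetical layout): reflector  delta_uplink_OWD  delta_downlink_OWD
awk 'NR == 1 { min_up = $2; min_down = $3 }
     { if ($2 < min_up) min_up = $2; if ($3 < min_down) min_down = $3 }
     END { print min_up, min_down }' "$DELTAS"
# the calling shell code then compares these two minima against the threshold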
I think @tievolu has prior art he already partly shared; I am not sure, though, whether first-to-invent or first-to-file is the current basis for granting patents.
In the US it is now first inventor to file, but in Europe any prior art, even from an inventor who later files, kills the application.
So if you want to block patenting you just have to disclose left, right and centre. And ideally the disclosures have publication dates. Journal publications or publications from patent offices have clear publication dates. Internet disclosures like this thread are harder to work with, but I guess they would probably be OK in most jurisdictions.
Yes, that is basically equivalent to @tievolu's approach of requiring all reflectors to agree that the OWD is truly above threshold. In theory we could relax that by only requiring 2 out of 3 or similar, but IMHO conceptually requiring all of them sounds fine and logical.
As long as this is below threshold we can increase the rate in that direction (contingent on sufficient load to justify that); if it is above threshold we want to throttle a bit...
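A minimal sketch of that control step in shell, with placeholder names (min_uplink_delta from the consensus above, delta_thr for the severity threshold, cur_rate/rate_step/min_rate/max_rate in kbit/s; the load check is only hinted at in a comment):

# all variable names here are hypothetical placeholders
if awk "BEGIN {exit !($min_uplink_delta < $delta_thr)}"; then
    # no bufferbloat signal: nudge the shaper rate up (only do this under sufficient load)
    cur_rate=$(( cur_rate + rate_step ))
    [ "$cur_rate" -gt "$max_rate" ] && cur_rate=$max_rate
else
    # all reflectors saw a delay increase: throttle a bit
    cur_rate=$(( cur_rate - rate_step ))
    [ "$cur_rate" -lt "$min_rate" ] && cur_rate=$min_rate
fi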
Turns out duktape does have a CLI you can use to just run scripts (if you compile it with the right Makefile):
# duk -h
Usage: duk [options] [<filenames>]
-i enter interactive mode after executing argument file(s) / eval code
-e CODE evaluate code
-c FILE compile into bytecode and write to FILE (use with only one file argument)
-b allow bytecode input files (memory unsafe for invalid bytecode)
--run-stdin treat stdin like a file, i.e. compile full input (not line by line)
--verbose verbose messages to stderr
--restrict-memory use lower memory limit (used by test runner)
--alloc-default use Duktape default allocator
--recreate-heap recreate heap after every file
--no-heap-destroy force GC, but don't destroy heap at end (leak testing)
--no-auto-complete disable linenoise auto completion [ignored, not supported]
If <filename> is omitted, interactive mode is started automatically.
Input files can be either ECMAScript source files or bytecode files
(if -b is given). Bytecode files are not validated prior to loading,
so that incompatible or crafted files can cause memory unsafe behavior.
See discussion in
https://github.com/svaarala/duktape/blob/master/doc/bytecode.rst#memory-safety-and-bytecode-validation.
Whether it's actually going to be useful I don't know, but hey, it does work on OpenWrt. Might be of interest to someone, if not us.
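For example, a couple of quick sanity checks on the router (assuming the duk binary built above is on the PATH):

duk -e "print('Duktape version ' + Duktape.version)"       # evaluate a one-liner
echo "print(1 + 2)" > /tmp/test.js && duk /tmp/test.js      # run a script file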
get_min_OWD_deltas() {
local reflector
local prev_uplink_baseline
local prev_downlink_baseline
local cur_uplink_baseline
local cur_downlink_baseline
local reflector_OWDs
local min_uplink_delta
local min_downlink_delta
min_uplink_delta=10000
min_downlink_delta=10000
> $BASELINES_new
while IFD= read -r reflector_line; do
reflector = echo $reflector_line | awk '{print $1}'
uplink_baseline = echo $reflector_line | awk '{print $2}'
downlink_baseline = echo $reflector_line | awk '{print $3}'
reflector_OWDs = awk '/$reflector/' $OWDs
uplink_OWD= echo $reflector_OWDs | awk '{print $1}'
downlink_OWD = echo $reflector_OWDs | awk '{print $2}'
delta_uplink_OWD=$( call_awk "${uplink_OWD} - ${uplink_baseline}" )
delta_downlink_OWD=$( call_awk "${downlink_OWD} - ${downlink_baseline}" )
if awk "BEGIN {exit !($delta_uplink_OWD >= 0)}"; then
cur_uplink_baseline=$( call_awk "( 1 - ${alpha_OWD_increase} ) * ${prev_uplink_baseline} + ${alpha_OWD_increase} * ${uplink_OWD} " )
else
cur_uplink_baseline=$( call_awk "( 1 - ${alpha_OWD_decrease} ) * ${prev_uplink_baseline} + ${lpha_OWD_decrease} * ${uplink_OWD} " )
fi
if awk "BEGIN {exit !($delta_downlink_OWD >= 0)}"; then
cur_downlink_baseline=$( call_awk "( 1 - ${alpha_OWD_increase} ) * ${prev_downlink_baseline} + ${alpha_OWD_increase} * ${downlink_OWD} " )
else
cur_downlink_baseline=$( call_awk "( 1 - ${alpha_OWD_decrease} ) * ${prev_downlink_baseline} + ${alpha_OWD_decrease} * ${downlink_OWD} " )
fi
echo $reflector $cur_uplink_baseline $cur_downlink_base >> $BASELINES_cur
if awk "BEGIN {exit !($delta_uplink_OWD < $min_uplink_delta)}"; then
min_uplink_delta = $delta_uplink_OWD
fi
if awk "BEGIN {exit !($delta_downlink_OWD < $min_downlink_delta)}"; then
min_downlink_delta = $delta_downlink_OWD
fi
done < $BASELINES_prev
mv $BASELINES_cur $BASELINES_prev
echo "${min_uplink_delta} ${min_downlink_delta}"
}
@moeller0 before I give up, I am presently planning on spitting out OWD values to an OWDs file, then maintaining BASELINES_prev and BASELINES_cur. To update the baselines I go through each line of BASELINES_prev, find the relevant OWDs for that reflector in the OWDs file, then work out the updated baselines and put those into BASELINES_new.
Does that sound not too insane? I mean, obviously doing this in shell is totally nuts. But I've not given up yet.
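For concreteness, the file layouts assumed in the sketches below are hypothetical (one line per reflector; the real format may differ):

# $OWDs           : reflector_ip  uplink_OWD  downlink_OWD
# $BASELINES_prev : reflector_ip  uplink_baseline  downlink_baseline
# $BASELINES_new  : same layout, written each pass and then rotated into $BASELINES_prev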
@Lynx && @moeller0 I just pushed an update to my awk file. I switched to having records split based on a regex for len=[0-9]+. Due to maintaining POSIX regex compatibility, that's the most 'advanced' regex I could get to work.
Anyway, it eliminates the need for tail -n+2 in the pipe, so it shaves a couple of clock cycles off each execution. I also added a couple of commented lines in the awk file that you can uncomment for additional output verbosity.
Please test and let me know if you see any regression. Thanks!
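Not the actual parser, but a minimal sketch of the record-splitting idea, assuming hping3 --icmp-ts replies start with len=NN and carry Originate=/Receive= timestamp fields:

#!/usr/bin/awk -f
# sketch only: one record per hping3 reply thanks to the regex record separator
BEGIN { RS = "len=[0-9]+" }              # each reply starts a new record, so no tail -n+2 is needed
NR > 1 {                                 # record 1 is just the HPING header line, skip it
    orig = ""; recv = ""
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^Originate=/) { sub(/^Originate=/, "", $i); orig = $i }
        if ($i ~ /^Receive=/)   { sub(/^Receive=/, "", $i); recv = $i }
    }
    if (orig != "" && recv != "") print recv - orig   # crude uplink OWD per reply
}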
root@OpenWrt:~# time hping3 9.9.9.9 --icmp --icmp-ts -i u1000 -c 20 2> /dev/null | ./hping_parser.awk
11 10.5
real 0m0.194s
user 0m0.005s
sys 0m0.000s
I fixed a few things around the variable assignment syntax and moved the assignment of $BASELINES_new to the bottom of the while loop. Still not working fully, but I got some errors knocked out...
get_min_OWD_deltas() {
    local reflector
    local prev_uplink_baseline
    local prev_downlink_baseline
    local cur_uplink_baseline
    local cur_downlink_baseline
    local uplink_OWD
    local downlink_OWD
    local delta_uplink_OWD
    local delta_downlink_OWD
    local min_uplink_delta
    local min_downlink_delta

    min_uplink_delta=10000
    min_downlink_delta=10000

    # each line of $BASELINES_prev: reflector uplink_baseline downlink_baseline
    while IFS= read -r reflector_line; do
        reflector=$(echo "$reflector_line" | awk '{print $1}')
        prev_uplink_baseline=$(echo "$reflector_line" | awk '{print $2}')
        prev_downlink_baseline=$(echo "$reflector_line" | awk '{print $3}')

        # latest OWD sample for this reflector
        # (assumes $OWDs lines are: reflector uplink_OWD downlink_OWD)
        uplink_OWD=$(awk -v r="$reflector" '$1 == r {print $2}' "$OWDs")
        downlink_OWD=$(awk -v r="$reflector" '$1 == r {print $3}' "$OWDs")

        delta_uplink_OWD=$( call_awk "${uplink_OWD} - ${prev_uplink_baseline}" )
        delta_downlink_OWD=$( call_awk "${downlink_OWD} - ${prev_downlink_baseline}" )

        # EWMA baseline update, with separate smoothing factors for increases and decreases
        if awk "BEGIN {exit !($delta_uplink_OWD >= 0)}"; then
            cur_uplink_baseline=$( call_awk "( 1 - ${alpha_OWD_increase} ) * ${prev_uplink_baseline} + ${alpha_OWD_increase} * ${uplink_OWD}" )
        else
            cur_uplink_baseline=$( call_awk "( 1 - ${alpha_OWD_decrease} ) * ${prev_uplink_baseline} + ${alpha_OWD_decrease} * ${uplink_OWD}" )
        fi
        if awk "BEGIN {exit !($delta_downlink_OWD >= 0)}"; then
            cur_downlink_baseline=$( call_awk "( 1 - ${alpha_OWD_increase} ) * ${prev_downlink_baseline} + ${alpha_OWD_increase} * ${downlink_OWD}" )
        else
            cur_downlink_baseline=$( call_awk "( 1 - ${alpha_OWD_decrease} ) * ${prev_downlink_baseline} + ${alpha_OWD_decrease} * ${downlink_OWD}" )
        fi

        # updated baseline line goes to the loop's stdout, redirected to $BASELINES_new below
        echo "$reflector $cur_uplink_baseline $cur_downlink_baseline"

        # keep the smallest delta seen across reflectors for each direction
        if awk "BEGIN {exit !($delta_uplink_OWD < $min_uplink_delta)}"; then
            min_uplink_delta=$delta_uplink_OWD
        fi
        if awk "BEGIN {exit !($delta_downlink_OWD < $min_downlink_delta)}"; then
            min_downlink_delta=$delta_downlink_OWD
        fi
    done < "$BASELINES_prev" > "$BASELINES_new"

    # rotate the freshly written baselines into place for the next pass
    mv "$BASELINES_new" "$BASELINES_prev"

    echo "${min_uplink_delta} ${min_downlink_delta}"
}
In that case, if this is prolonged, the script should fall back to the old deltaRTT detection, but for that to work seamlessly, RTT samples should also be continuously sampled and EWMAd.
The idea would be to recognize that special value and ignore it instead of feeding it into the baseOWD history? As far as I understand @tievolu, we can expect reflectors to be on-and-off things that cannot be fully relied on (hence the need to sample more than one).
The EWMA means the baseline only moves very slowly though. So the odd sample may not make such a difference? Could certainly detect it and just not update, though, as you say. Hmm.
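A minimal sketch of the 'detect and just not update' option, with a hypothetical sentinel value standing in for a failed timestamp:

# hypothetical: uplink_OWD is set to the sentinel 99999 when no timestamp came back
if [ "$uplink_OWD" = "99999" ]; then
    cur_uplink_baseline=$prev_uplink_baseline   # skip the EWMA update entirely for this tick
else
    cur_uplink_baseline=$( call_awk "( 1 - ${alpha_OWD_increase} ) * ${prev_uplink_baseline} + ${alpha_OWD_increase} * ${uplink_OWD}" )
fi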
I wonder if you could test this on your connection and let me know what happens?
For some reason, on my side the script slows down when there is a heavy load. I wonder if hping3 is taking a long time to execute in that situation? This is very problematic because we don't want the tick duration to slow down during bufferbloat!
Certainly it seems that during bufferbloat I tend to see the odd lost reflector packet. I wonder if that could be because my uplink is going through SQM and CAKE is dropping the packet? Or would that be nonsense?
In general it seems to be working. Sort of. But it is far from satisfactory at the moment.
@_FailSafe I am no git expert. At the moment I have just uploaded your parser onto my repository. I presume this is not the right way to do it? Is there a way to do it such that when you update yours, mine will ask me to update?
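One common way to do that (a sketch only; the URL and branch name are placeholders, not the real repo): keep your copy as a clone or fork and add @_FailSafe's repository as an extra remote, then pull from it whenever you want his latest changes. Git won't prompt you automatically, but git fetch will show whether there is anything new to merge.

git remote add failsafe https://github.com/<failsafe-user>/<parser-repo>.git   # placeholder URL
git fetch failsafe                   # see what changed upstream
git merge failsafe/main              # fold those changes into your branch (or rebase instead)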