Yes, theoretically that is not nice, but typically only one direction is loaded, so moving the other, unloaded direction back to the configured minimum rate is the desired outcome anyway; it just happens a bit faster, no?
I would hypothesize this comes from CUBIC: larger shaper increase steps will cause larger overshoots of the true bottleneck capacity, which in turn leads to more data in flight, which in turn causes larger/longer bufferbloat episodes. That can be an acceptable trade-off, but it certainly appears to be a consequence of faster rate "learning".
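To put back-of-the-envelope numbers on it (all made up for illustration): with a 100 Mbit/s true bottleneck and a controller that probes once per second, 5 Mbit/s increase steps overshoot by at most 5 Mbit/s, so roughly 0.6 MB (about 50 ms of extra queue at 100 Mbit/s) accumulates before the next adjustment; 25 Mbit/s steps allow up to about 3 MB (roughly 250 ms) over the same interval, and CUBIC will happily keep that buffer filled until the shaper steps back down.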
I am not sure that this is going to work without a proper reflector, as they need packet-spacing information at the receiver. Not impossible, but that means a dedicated program at the other end... at which point one needs a VPS and might be able to simply set up sprout as a tunnel for all traffic...
In theory that might work; in practice, however, I am not sure how hard this would be, given that WireGuard is multi-threaded and will decrypt in parallel on multiple CPUs. I would assume piping WireGuard through a sprout tunnel might be easier to achieve.
The challenge with "auto-back-off" protocols is always that they are open to being crowded out when competing with less reactive flows.
So the problem is that end-to-end does not solve the issue reliably, otherwise we would not be in the mess that we are in. IMHO consistent low latency requires cooperation of end-points and network nodes.
Yes.
One would hope so, but the problem here is that unless end-users are willing to pay a premium for low latency, there is very little incentive for ISPs to invest in the additional hardware capabilities that enable reasonably low latency over their links to the end users. In 5G, low latency has been turned into a marketing term, but looking at what is actually on offer right now, it seems a bit optimistic...
Sure, we all agree that egress shaping is more immediate and more precise than ingress shaping, but "much better", really?
(I am still impressed how well ingress shaping does work, especially with cake and its ingress keyword).
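For reference, this is roughly what such an ingress setup looks like by hand (a minimal sketch of what sqm-scripts automates; the interface name and bandwidth are placeholders):

```
# redirect ingress traffic from the WAN interface to an IFB device
ip link add name ifb4eth0 type ifb
ip link set dev ifb4eth0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb4eth0
# shape on the IFB with cake; the "ingress" keyword makes cake account
# for dropped packets as well, since they already crossed the bottleneck
tc qdisc add dev ifb4eth0 root cake bandwidth 95mbit ingress
```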
Did your ISP use SRA, and did you really see big sync-rate fluctuations on your DSL link during "showtime"?
You could try to remedy this with an additional ingress cake instance in your home, which could be run simply in dsthost mode, relying on the VPS shaper to deliver per-flow fairness. Then again, I might be overlooking something here and that might gloriously fail.
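Concretely, something along these lines (just a sketch; wg0 and the rate are assumptions, and depending on where the NAT happens you might need cake's nat keyword as well):

```
# second cake instance on the tunnel's ingress side, per-internal-host fair
ip link add name ifb4wg0 type ifb
ip link set dev ifb4wg0 up
tc qdisc add dev wg0 handle ffff: ingress
tc filter add dev wg0 parent ffff: matchall \
    action mirred egress redirect dev ifb4wg0
# dsthost: fairness per destination (i.e. internal) host, leaving
# per-flow fairness to the shaper on the VPS end
tc qdisc add dev ifb4wg0 root cake bandwidth 90mbit dsthost ingress
```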