Yep, that's why I was mentioning it, as a recommended replacement.
Are you from the US or Germany?
In my country you can't replace the ISP-provided modem with your own device; you have to use the one provided by your ISP. I'm on UPC, and they refuse to let their users use their own devices.
In my case I'm forced to use Compal CH7465-LG.
And thanks again, moeller0... So, no real need to go off the defaults, and the other one just exposes things rather than setting a mode I didn't know about. That clears it up.
Last little thing I've been wondering: my current belief is that you don't need (or maybe even shouldn't have?) sqm-scripts installed when you've installed luci-app-sqm. I think I read in a setup guide somewhere that you should remove sqm-scripts before installing luci-app-sqm.
Is this correct, at least for a basic SQM environment? Or would you run both when trying some more exotic SQM measures? Or does it not matter?
In the US. Cox Cable, my provider, had a few options you could get from them. You could also use your own rather than rent or buy from them, if you wanted to. I guess some other providers keep things more locked down, HW-wise.
The Puma guys got a lot of bad press on that; hopefully a FW fix is possible and makes it out to users. You should also keep the pressure on by complaining to your provider; maybe they'll let you use a different modem to fix their "problem".
Fsclavo... you probably want to experiment with the settings and look for some of the tuning guides that are available, if you haven't already... sometimes a bit of adjusting can make a big difference.
I agree but here's the thing. The Compal CH7465-LG is the best of them. There is also Ubee but I would be getting a used device, not new. I think I will just wait for the firmware update and see then.
So, no "unapproved" modems, at all? Ah well...
I'm curious, does SQM reduce the ping spikes much, or at all? On my bursty service, if I scale things back a lot, speed-wise, till the C7 can fully handle the DL, it flattens most of them out.
Unfortunately, that's putting a DL cap of 140-145mbit on my 300mbit service, yielding me about 125-130mbit DL speed. A lot to give up, but it will smooth out most of my latency bursts. Until someone figures out how to utilize the Hardware NAT acceleration, that's about it for full SQM control on a C7.
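For reference, a cap like that ends up in /etc/config/sqm looking roughly like this (just a sketch; the interface name and exact rates below are placeholders for illustration):

config queue
        option enabled '1'
        option interface 'eth0'            # WAN-facing interface, adjust to your setup
        option download '140000'           # ingress cap in kbit/s (the ~140mbit above)
        option upload '20000'              # egress cap in kbit/s, placeholder value
        option qdisc 'cake'
        option script 'piece_of_cake.qos'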
Well, this seems slightly off. luci-app-sqm is just a front-end to create and edit /etc/config/sqm, and to call /etc/init.d/sqm to make the new settings active. So you will certainly need the sqm-scripts package, while luci-app-sqm is sort of optional. BUT if you install luci-app-sqm (via the GUI or via "opkg update ; opkg install luci-app-sqm"), it will automatically install sqm-scripts unless that package is already installed, while installing sqm-scripts will not "drag in" luci-app-sqm.
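If in doubt, it is easy to check what is actually installed:

opkg list-installed | grep sqm

That should list sqm-scripts, plus luci-app-sqm if the GUI front-end is present.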
Hope this clarifies things.
No, I can only choose from like 3 or 4 approved modems provided by the ISP. Only two of them have a bridge mode: one is the Compal (affected by the Puma6 bug) and the other is an older unit (Ubee) which is being replaced by Compal modems. And that's only if I manage to convince my ISP to replace the modem at all. They are very, very unwilling to replace modems as long as they are "operational", and according to my ISP the Puma6 bug does not make the device faulty. The truth is that all customers on a 24-month contract with UPC can forget about replacing their modems unless they are, like, dead. I'm basically screwed here.
Yes, SQM helps a lot, but my brother is still seeing bursts of latency while gaming, and it's not SQM's or LEDE's fault. It's broken firmware in my modem, and there's nothing I can really do about it. The worst thing is that even if I managed to get my hands on an updated firmware, I couldn't flash it, since only ISPs are allowed to update these modems.
OK, thanks again Sebastian... Checked, and yes, both are there. It's been a while since I upgraded; I think I remember 2 if not 3 things being pulled in when luci-app-sqm is added. I also see an sqm-scripts-extra package when looking in the software repository, but it has a 2016 version number... old stuff, I'd guess. So I guess I shouldn't be missing anything.
R43k3n... that sucks... I guess you're left with experimenting with lower limit settings for cake; maybe some more latency improvement can be had. It might be worth sacrificing some speed to keep the gamer happier.
Well, those were intended to distribute testing scripts to users more quickly, but it turned out there was no feedback on them at all, so this is currently on hold. (Also, https://lede-project.org/docs/howto/sqm now contains the most relevant information for configuring cake's more advanced features, making the extra scripts even less relevant.)
FYI: I have noticed with my ISP (Charter, DOCSIS 3.0) that with SQM my goodput is -8% ingress and -6% egress relative to any total bandwidth entered. This is whether or not 18 bytes of overhead is specified.
Well, the expected goodput would be (100 * (1500 - 20 - 20) / (1500 + 18)) = 96.18%, so around 4% of the "loss" you observe is purely caused by the fact that the shaper's bandwidth is specified as gross bandwidth, while goodput is typically defined as the TCP/IP payload. So we are really talking about 2 to 4 percent of under-performance. But how do you test that, and especially, how many concurrent streams do you use?
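As an aside, that number is just the TCP payload per packet divided by the gross bytes per packet on the wire; you can redo the arithmetic in any shell that has bc available:

echo 'scale=4; 100 * (1500 - 20 - 20) / (1500 + 18)' | bc    # -> 96.1791, i.e. the 96.18% above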
Dslreports with 16/4 streams; betterspeedtest.sh with 5/5.
Ah, seems reasonable. The reason I asked is that a single TCP stream will not be able to saturate a link, but once you add a few TCP streams, the loss from TCP's ramping up and down (as it probes the available bandwidth) gets smaller; at 16 streams I would certainly expect less than 4%, though.
One other thing to think about would be TCP options, or an MTU smaller than 1500. For example, I use PPPoE with TCP timestamps and 26 bytes of overhead, so I expect my goodput to top out at (100 * (1500 - 8 - 20 - 20 - 12) / (1500 + 26)) = 94.36% of the (de-PTMd) sync rate. PPPoE is not going to be your problem, but TCP options and smaller MTUs on the path to the speedtest server might be... For a quick and dirty test I recommend http://www.speedguide.net/analyzer.php, which should show the path MTU as well as whether TCP timestamps are in use...
Here it is:
SG TCP/IP Analyzer
IP Address: 68.185.237.4 ()
Client OS: Mac OS
Browser: Chrome 57.0.2987.133
User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36
Please Read the Analyzer FAQ if the above is not your IP address.
TCP options string = 020405b4010303050101080a35d9693c0000000004020000
MTU = 1500
MTU is fully optimized for broadband.
MSS = 1460
Maximum useful data in each packet = 1448, which is less than MSS because of Timestamps, or other TCP/IP options used.
Default TCP Receive Window (RWIN) = 131744
RWIN Scaling (RFC1323) = 5 bits (scale factor: 2^5=32)
Unscaled TCP Receive Window = 4117
Your TCP Window limits you to: 5270 kbps (659 KBytes/s) @ 200ms
Your TCP Window limits you to: 2108 kbps (263 KBytes/s) @ 500ms
MTU Discovery (RFC1191) = ON
Time to live left = 54 hops
TTL value is ok.
Timestamps (RFC1323) = ON
Note: Timestamps add 12 bytes to the TCP header of each packet, reducing the space available for useful data.
Selective Acknowledgements (RFC2018) = ON
IP type of service field (RFC1349) = 00000000 (0)
Ah, so RFC1323 timestamps are enabled, so the expected maximal goodput will be:
(100 * (1500 - 20 - 20 - 12) / (1500 + 18)) = 95.39% of the shaped rate. That still leaves 1.5 to 3.5% missing; not too bad, actually. How long do you run the speedtests? I typically recommend running them for at least 30 seconds; see https://forum.openwrt.org/t/sqm-qos-recommended-settings-for-the-dslreports-speedtest-bufferbloat-testing/2803 for how to get the most out of the dslreports speedtest...
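Put differently, the expected goodput is just (MTU - IP/TCP headers and options) / (MTU + per-packet overhead); as a small helper to play with the cases from this thread (a sketch, assuming bc is available on your machine):

goodput() { echo "scale=4; 100 * ($1 - $2) / ($1 + $3)" | bc; }    # args: MTU, header bytes, overhead
goodput 1500 40 18    # plain TCP/IPv4 over ethernet: ~96.18%
goodput 1500 52 18    # with RFC1323 timestamps (40 + 12): ~95.39%
goodput 1500 60 26    # the PPPoE + timestamps case from earlier (8 + 40 + 12): ~94.36%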
The only other thing to try would be to repeat the speedtest at different times of day (including in the dead of night) to see whether it might be related to overall traffic in Charter's network...
Best Regards
ECN must be enabled on the client you're using for speed testing (it's not enabled by default on most operating systems).
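On a Linux client, for example, checking and enabling it looks roughly like this (a sketch; note that 2, the usual default, only accepts ECN on incoming connections):

sysctl net.ipv4.tcp_ecn        # 0 = off, 1 = also request ECN on outgoing connections, 2 = incoming only
sysctl -w net.ipv4.tcp_ecn=1   # request ECN on outgoing connections as well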
I've tested many, many times between 8-10am, 12am-4am, and after 6pm. Same results no matter the bandwidth entered, overhead, what have you; the throughput remains the same. As for ECN, I enabled that long ago on my MacBook Pro running Yosemite. See below:
Jasons-MacBook-Pro:~ Jason$ sysctl -a | grep ecn
net.inet.tcp.ecn_initiate_out: 1
net.inet.tcp.ecn_negotiate_in: 1
net.inet.ipsec.ecn: 1
net.inet6.ipsec6.ecn: 1
root@LEDE:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 384438196 bytes 1026218 pkt (dropped 0, overlimits 0 requeues 1)
backlog 0b 0p requeues 1
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc cake 8013: dev eth1 root refcnt 2 bandwidth 5Mbit besteffort dual-srchost nat rtt 40.0ms noatm overhead 18 via-ethernet mpu 64
Sent 23611531 bytes 92966 pkt (dropped 7, overlimits 141652 requeues 0)
backlog 0b 0p requeues 0
memory used: 294272b of 4Mb
capacity estimate: 5Mbit
Tin 0
thresh 5Mbit
target 3.6ms
interval 41.6ms
pk_delay 7.8ms
av_delay 3.4ms
sp_delay 14us
pkts 92973
bytes 23616188
way_inds 12
way_miss 315
way_cols 0
drops 7
marks 2537
sp_flows 1
bk_flows 1
un_flows 0
max_len 1514
qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
Sent 221169550 bytes 157285 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan1 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan0 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 8014: dev ifb4eth1 root refcnt 2 bandwidth 66Mbit besteffort dual-dsthost nat wash rtt 40.0ms noatm overhead 18 via-ethernet mpu 64
Sent 223371540 bytes 157285 pkt (dropped 0, overlimits 202856 requeues 0)
backlog 0b 0p requeues 0
memory used: 238080b of 4Mb
capacity estimate: 66Mbit
Tin 0
thresh 66Mbit
target 2.0ms
interval 40.0ms
pk_delay 54us
av_delay 14us
sp_delay 5us
pkts 157285
bytes 223371540
way_inds 5
way_miss 315
way_cols 0
drops 0
marks 123
sp_flows 1
bk_flows 1
un_flows 0
max_len 1514
The rtt 40ms on ingress & egress is an experiment for gaming against a dedicated Call of Duty server located in San Antonio (per MSN Network). The sole reason I even cared about latency was latency-sensitive games and VoIP. What a journey it's been, haha. I also noticed that latency/bufferbloat doesn't oscillate much from full throughput down to even 1/2 of it with SQM; it's going below roughly 30% of goodput that seems to cause damage, which is something I'm not seeing here.
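In case anyone wants to replicate that: one way to get the rtt 40ms into cake is via sqm-scripts' advanced option strings in /etc/config/sqm, roughly like this (the two advanced flags must be set for the option strings to take effect):

option qdisc_advanced '1'
option qdisc_really_really_advanced '1'
option iqdisc_opts 'rtt 40ms'    # appended to the ingress (download) cake invocation
option eqdisc_opts 'rtt 40ms'    # appended to the egress (upload) cake invocation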
OK, so disabling GRO on smaller links should be stickied! It didn't change my bandwidth mystery, but it did improve latency a lot.
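For anyone else trying this, a sketch of the commands involved, assuming ethtool is installed and eth0 is your WAN interface:

opkg update; opkg install ethtool                  # ethtool is not part of the default image
ethtool -k eth0 | grep generic-receive-offload     # check the current state
ethtool -K eth0 gro off                            # disable GRO (does not survive a reboot)

To make it stick across reboots, the last line can go into /etc/rc.local or a hotplug script.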