WRT3200ACM - Question and instructions regarding VLAN and separating one WAN connection into two LAN (or VLANs)

Uh, I've changed the settings at least 50 times by now; all I can manage is to post the latest output of those commands, along with the 60-second downstream / 55-second upstream speedtest output

if that's helpful:
root@Malik_wrt3200acm:~# cat /etc/config/sqm

config queue
	option debug_logging '0'
	option verbosity '5'
	option enabled '1'
	option interface 'eth1'
	option qdisc_advanced '0'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
	option linklayer 'ethernet'
	option overhead '18'
	option download '53600'
	option upload '4700'
root@Malik_wrt3200acm:~# tc -d qdisc
qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 0: dev eth0 root
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc cake 8120: dev eth1 root refcnt 9 bandwidth 4700Kbit besteffort triple-isolate rtt 100.0ms raw
linklayer ethernet overhead 18
qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
qdisc noqueue 0: dev br-br_wifi root refcnt 2
qdisc cake 8121: dev ifb4eth1 root refcnt 2 bandwidth 53600Kbit besteffort triple-isolate wash rtt 100.0ms raw
linklayer ethernet overhead 18
root@Malik_wrt3200acm:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
Sent 17064602008 bytes 12421281 pkt (dropped 0, overlimits 0 requeues 34)
backlog 0b 0p requeues 34
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 17064602008 bytes 12421281 pkt (dropped 0, overlimits 0 requeues 34)
backlog 0b 0p requeues 34
maxpacket 6056 drop_overlimit 0 new_flow_count 9584 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc cake 8120: dev eth1 root refcnt 9 bandwidth 4700Kbit besteffort triple-isolate rtt 100.0ms raw
Sent 43416578 bytes 183793 pkt (dropped 2150, overlimits 38801 requeues 0)
backlog 0b 0p requeues 0
memory used: 120228b of 4Mb
capacity estimate: 4700Kbit
Tin 0
thresh 4700Kbit
target 5.0ms
interval 100.0ms
pk_delay 3.5ms
av_delay 312us
sp_delay 25us
pkts 185943
bytes 46683488
way_inds 0
way_miss 181
way_cols 0
drops 2150
marks 0
sp_flows 0
bk_flows 1
un_flows 0
max_len 13212

qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
Sent 402783072 bytes 294536 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-br_wifi root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 8121: dev ifb4eth1 root refcnt 2 bandwidth 53600Kbit besteffort triple-isolate wash rtt 100.0ms raw
Sent 408410214 bytes 291789 pkt (dropped 2747, overlimits 477572 requeues 0)
backlog 0b 0p requeues 0
memory used: 193048b of 4Mb
capacity estimate: 53600Kbit
Tin 0
thresh 53600Kbit
target 5.0ms
interval 100.0ms
pk_delay 140us
av_delay 5us
sp_delay 1us
pkts 294536
bytes 412601086
way_inds 0
way_miss 190
way_cols 0
drops 2747
marks 0
sp_flows 1
bk_flows 1
un_flows 0
max_len 8832


edit: I just realized that even though I have disabled the Wi-Fi radios, the DHCP-allotted IPs for those devices still exist. Could this have an impact on SQM's performance with my current setup?
edit2: sir, the br-br_wifi was just my earlier attempt at isolating the wifi and the ethernet into two separate interfaces; this br-br_wifi is the same thing as the original br-lan. Please don't assume anything funky here :blush:

I would love to switch to another game, by the way, but alas... it is the only game I actually enjoy and/or am any good at. I've played this same game, throughout its versions, for over 13 years now, so giving up on it is simply not an option so far :frowning: . I think the game mostly uses UDP packets, judging by the port forwarding:
UDP 27000 to 27015 inclusive (Game client traffic)
UDP 27015 to 27030 inclusive (Typically Matchmaking and HLTV)
TCP 27014 to 27050 inclusive (Steam downloads)
UDP 4380.
And the "Steam downloads" only occur during updates, never during the actual gameplay itself.

edit:
Also, no: just having a higher ping does not necessarily guarantee you an advantage. Yes, when you peek first you see the enemy slightly before they see you, but provided you have no jitter issues, you will have just as good reflexes and spraying ability (holding down the trigger, following the recoil pattern of each weapon), and your shots will register. As it is, with jitter, sometimes when I shoot at a person's head the server simply ignores the fact that I fired a bullet at all, and I can even see a bullet hole behind the person's head. Many, many, MANY CS:GO players currently face horrible to terrible jitter issues and simply think they suck at the game (CS:GO stands for Counter-Strike: Global Offensive). But I know when I hit my shots, because I've played the previous versions and my aim is among the best in the world. I don't mean to brag, but it is simply the truth, as I've proven it time and time again on numerous occasions, with recorded demos and the whole ordeal. That's why I bought this expensive router: to try to counter the jitter issue and play this game, so I can enjoy being at the top of the reddit.com page, since CS:GO is the #1 online FPS at the moment, and it is also the least forgiving of mistakes of any FPS played online. With the biggest egos... and the biggest stakes... this game is it as far as FPS goes. I don't brag though; I deleted all the demos I had of me destroying professional name-known players. I just like the little joy I get from breaking people's egos when they think they're on top of the world, and putting them back in their place. I don't mean to monopolize anything. I'm a fair player; I even let people kill me if they seem like nice people :wink:

edit:
I'm not sure if I can even test this any other way than playing the actual game for hours and hours after each settings change. I don't think the cmd ping replies are a good indication, nor are these 60-second DS and 55-second US results useful at all.
Both of these results were with the basic SQM settings: down 56600 and up 4500.

I suppose nothing good ever comes easy.

Thanks for the data, it looks pretty sane. You might want to set the MPU to 64 though (but this will only help to account for very small packets correctly; in most cases such packets are rare).

The triple-isolate, while not a bad default, is not as easy to understand as the explicit dual-srchost/dual-dsthost combination for upstream/downstream: triple-isolate attempts to keep any single host, in either direction, from hogging the connection, but in your case that might not be the right thing...

No, sqm/cake only cares about actual data packets (in your case, only packets going to/from eth1); it does not care about the IPs existing on your network, and it also does not care about traffic not crossing eth1 (so if eth1 is your WAN interface, sqm will not care about your internal traffic).

If you had not mentioned that I would never have noticed; I was really just interested in the individual cake shaper instances :wink:

Oh, I was only joking...

This makes a lot of sense. What you could do while playing is take a packet capture (say from the router's command line: "tcpdump -s 1600 -i eth1 -w eth1.cap") and then look at what kind of data is transmitted. You will need to transfer the capture file from the router to your computer (where you should be able to open and view it using wireshark). Please note that this will take up space on your router, so only run it if you have sufficient space (or better, if possible, connect a fast USB stick to the router, mount it, and store the capture file there). That way you can confirm and analyse your game's traffic patterns (it might actually be easier to run wireshark on your computer and capture the data in the background, assuming you use a PC and not a console for gaming).
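If you go the USB-stick route, a minimal sketch of the steps (the device node /dev/sda1 and the mount point are assumptions; adjust them for your stick):

```shell
# mount the stick so the capture lands on USB instead of RAM/flash
mkdir -p /mnt/usb
mount /dev/sda1 /mnt/usb

# capture full game packets on the WAN interface; stop with Ctrl-C
tcpdump -s 1600 -i eth1 -w /mnt/usb/eth1.cap

# afterwards, copy the file to your PC for wireshark, e.g. with scp:
# scp root@192.168.1.1:/mnt/usb/eth1.cap .
umount /mnt/usb
```

These are router-side commands, so run them over the same SSH session you already use with PuTTY.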

BTW, I am so remote from FPS games that I fear I have no real idea about such a game's network quality requirements (the last FPSs I played were the original Doom and Quake, both probably last century).

Both of these speedtests look quite decent to me. If you look into the detailed bufferbloat reports, by clicking on the links below the bufferbloat plots on dslreports' results page: where are the large delays occurring, all over the place or just at the beginning? If you paste the result link from the results page here, people can see your detailed results, which might be good for more detailed discussion.

http://www.dslreports.com/speedtest?table=1&focus=65097a33d775b1d6bc531084ca8d8211 this is the link for the long duration tests, although most of them were capped under a download of 70.5 Mb/s, some of the more recent ones were uncapped and only capped by the SQM scripts from the router itself.

I have wireshark installed and can run it on the side while gaming, and I can even make it so the game doesn't alt-tab out (meaning that while gaming I can simultaneously view the packets and save/record whatever I need to). I use a Windows 7 PC. It's a pretty powerful machine.

I'm not sure what you mean by the triple-isolate, although I do understand the dual-srchost/dual-dsthost that you explained in the last two lines of the "extra extra dangerous configs" part, which is what I will be trying to game under today. And I can go ahead and set the MPU to 64, but the page said "only use this part if your MTU > 1500", and I've tried tests to find whether my ISP allows MTUs greater than 1500 and found that they all fail; 1500-28 was the biggest I could manage with the custom command for the size in cmd, sorry, I forget what it's called at the moment.

I can get on with the gaming within 2 hours from now, and game for probably 2 hours. Then after about 8:30 pm I can do the rest (I am on GMT-6, so it's 12:51 pm right now for me).

@moeller0 Once again, I very very much appreciate your precise responses and elaborate explanations. You are a very kind person, may you be rewarded.

Unfortunately this link does not work for me. But have a look at the first post of https://forum.openwrt.org/t/sqm-qos-recommended-settings-for-the-dslreports-speedtest-bufferbloat-testing/2803 for recommendations on how to link dslreports results for this forum (it also includes recommendations for configuring that speedtest).

Cake defaults to triple-isolate mode, where it tries to control connection hogging for both internal and external hosts, but for your use case I believe strict per internal IP fairness should be easier to predict.

For your testing I would recommend you add the following to your /etc/config/sqm (or, if option lines with the same names already exist, just change the values as shown here):
	option linklayer 'ethernet'
	option overhead '18'
	option qdisc 'cake'
	option script 'layer_cake.qos'
	option iqdisc_opts 'nat dual-dsthost mpu 64'
	option eqdisc_opts 'nat dual-srchost mpu 64'
	option linklayer_adaptation_mechanism 'cake'
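After editing /etc/config/sqm, the shaper has to be restarted before the new keywords take effect, and the active cake options can be verified with tc (standard OpenWrt procedure, not specific to this thread):

```shell
/etc/init.d/sqm restart
# the cake lines should now show dual-srchost/dual-dsthost, nat and mpu 64
tc -s qdisc show dev eth1
tc -s qdisc show dev ifb4eth1
```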

Yes, that help text is a bit obsolete...

Best Regards


For the links for the last two images I posted (from previous posts):


I have now changed the settings as you've instructed in your latest post (just above this one) and will do a few tests now, before I have to leave (20 mins).
56600 DS / 4500 US


76600 DS / 4700 US


46600 DS / 4700 US
http://www.dslreports.com/speedtest/24387441 - OMG THIS RESULT IS AWESOME!!
http://www.dslreports.com/speedtest/24387551 - omg i never thought i'd see these!! lol

50600 DS / 5000 US
http://www.dslreports.com/speedtest/24387712 - result is silky smooth!!
http://www.dslreports.com/speedtest/24387842 - looks a bit rougher, but there is my fam streaming atm...

40600 DS / 4000 US
http://www.dslreports.com/speedtest/24387989 - the DS only spiked once, the US spiked a lot though
http://www.dslreports.com/speedtest/24388121 - i don't know what to make of that

Ok gotta run, will be back in a few to do more tests.

edit:
55600 DS / 4650 US

60600 DS / 5800 US

52600 DS / 5600 US

I don't understand why ONE spike would push the bufferbloat grade up. That doesn't even make sense to compute that way; it should be the average. But I suppose I'm nowhere near knowledgeable enough to have an opinion on this, hah.

82600 DS / 5600 US

It seems each time I raise the DS, the US graph looks way better, and each time I increase the US, the DS graph looks way better. But obviously the DS is essentially capped at 55 Mbps unless I want the DS graph to look like crap. Hmm. I suppose my solution currently is to just keep the 47 Mbps DS / 4.7 Mbps US, for a solid overall standby ping relative to the while-downloading/uploading ping. And when I mentioned that I can notice the "jitter" in gameplay, I meant when my ping fluctuates. If it's steady at 130 it's fine (albeit the game does compensate for jitter/bufferbloat just as you said, because I remember one time I was lagging like crazy and couldn't hit anything, much less move around without very noticeable choppiness, yet my ping was only up to 138 from 130; at 130 I was a smooth and efficient player, but at 138 I was a disabled robot on its last 5% of battery). I will await your further instructions or input (if you have any final words, or even a way to climb higher than this).

I should also mention that I need the ability to upload about 3 Mbps while playing my game, because streaming on twitch.tv is the thing I really want to do; unless you're streaming, people just think you're cheating, and if they can't see your reactions it's not really any fun to play anyway. I'm not going to get any real followers, but I'd like to be able to publicize my during-gameplay facial expressions. Thanks a ton as well, man; these new test specifications I'm running now are amazing and very accurate, and the graphs have a LOT of data compared to the 5 big fat lines I was getting before, even though I'd done everything I could to get elaborate output from the speedtests.

Nah, everyone is entitled to their own opinion; and it is always worth discussing as the details and rationale are not that obvious.
If you hover over the bars in the bufferbloat bar representation (the one with just 3 bars) it will reveal the average latency for the three categories (and if you hover at the upper end of the red bar it will even reveal the maximum). Now, if you think about it, a single delayed packet can already lead to a noticeable glitch in, say, a VoIP application (about on-line games I am not sure). Most users will really not care that much about a single delayed packet (after all, even gamers have some tolerance :wink: ), but the average simply hides too much. So one could argue for looking at the XXth percentile (like 95 or 99) to get an idea of the suitability of a link, but I like that dslreports reports the maximum and the individual probes, as that should allow each user to see all the details for themselves. You might note, though, that the bufferbloat grading as reported on the small result images does not look at the maximum.

That really should not be; the two measurements are taken in sequence and are separated by a few seconds of idle testing, and if you do not saturate a direction of your link the shaper will not come into action, so this looks like spurious correlation and certainly should not be causal. (Under link saturation things might be somewhat different, but I believe your tests were from a rather quiet period, otherwise your downstream measurements would show much less bandwidth.)

So one issue with that is that you most likely measured the ICMP echo request/echo response RTT against a different server than the one used for your game; in that case it is not clear whether the packets used the same network path, and if the delay was not caused by your direct uplink it is not clear whether your game packets did not encounter larger delays (or was the ping statistic taken from the game itself; do on-line games actually offer something like that?).

Okay, that is going to be a bit tricky: the proposed configuration defaults to fair sharing between concurrently active hosts, so if your family uses more than around 5 - 3 = 2 Mbps upstream combined, your upload will suffer...
Try installing iftop on the router (opkg update; opkg install iftop) and run "iftop -i eth1" during a typical usage day for your family members; you should see their combined traffic in real time. While this does not fix anything, it should allow you to assess whether outgoing streaming is a viable option with per-internal-IP fairness or not.
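Spelled out as commands (the -n flag is an optional extra of mine to skip DNS lookups and keep the display responsive; everything else is as suggested above):

```shell
opkg update
opkg install iftop
# live per-host traffic on the WAN interface; press q to quit
iftop -n -i eth1
```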

Memory
Total Available: 328344 kB / 513616 kB (63%)
Free: 325668 kB / 513616 kB (63%)
Buffered: 2676 kB / 513616 kB (0%)

Why is my free memory only 63% rather than ~90% or whatever? I know I installed some packages while trying to find some sort of traffic-control package myself, but I don't know exactly what I did, and now I do have live realtime graphs and such under the Status tab. I'm not sure if that comes with LuCI automatically, or if some package I installed is storing data and those data file(s) are what is hogging the memory. Also, how do I go about deleting them if they are hogging, and where would I look for them? (I only know how to SSH into the router with PuTTY; I don't know of any other way to give my router commands.)

Also, I was reading up a bit on QoS and SQM and how they work, and they essentially rely on dropping packets... yet all the tests I've done only exercise TCP traffic. Would it be exactly the same for UDP as well? (Not really a question I care about the answer to; rather, I'm hinting at whether I can somehow allow the packets on UDP ports 27000 to 27030 to not be dropped as much as possible.) How would I go about doing that?
Also, throughout the night, as I was awake, testing and gaming, I was streaming on twitch and uploading even up to 4.2 Mbps at times, and I didn't notice any problems. But right now, if I even try to do a bandwidth test, I only get half of what I have set the egress to (one of my parents is happily streaming content right now :smile: ).

I need to somehow manually limit their egress. I don't need to do it per IP or anything, as I only have one wireless "radio" enabled right now, and it's handling all the traffic quite well; both my parents can stream and it doesn't bog down at all for them.

So, kind sir, could you pretty please guide me, if and when you get the time and energy, as to how I can limit the one wireless interface's egress to about 1 Mbps and its ingress to 50 Mbps? (I'm using 97600 DS and 5200 US now, after realizing the bufferbloat speedtests weren't really what I thought they were, and I'm getting awesome results even with these values.) And also, how can I prioritize/save/cushion all the UDP traffic on ports 27000 to 27030, and have it not be affected by SQM/QoS or something? I know I'm asking a lot. Take your time. I don't expect anything, so this is absolutely your call; you've already helped me so much I can't thank you enough. May God bless you.

Well, your router has some applications running and those consume memory; also the tmpfs mounted under /tmp consumes some memory. Router flash memory has only a smallish number of erase cycles and cannot be replaced easily (short of desoldering the old flash and soldering in a new chip), so it seems wise to only write to flash occasionally. OpenWrt/LEDE will try to keep all ephemeral information in a RAM disk, as RAM has virtually unlimited erase cycles, but that memory is taken from your router's main memory.

I believe it does.

Yes it would.

Not really, as that would defeat the purpose and functioning of the traffic shaper. If you request at maximum X Mbps, the router needs to drop packets that exceed that rate, no matter whether they are TCP or UDP. SQM-scripts tries to drop from each flow independently (that is, if a flow exceeds its fair share of the available bandwidth, it will start dropping from that flow), so as long as your UDP flows are small they should be mostly spared.

This is great, as it indicates that the per-internal-IP fairness does work; but how are the bufferbloat rating and the bufferbloat plots? If things work as intended, they should still be as good as in your earlier tests in the night, as sqm tries to "fairly" share the bandwidth while keeping the latency-under-load increase sane for all users.

Everything is possible, but easy it is not. Instantiating another shaper on the wifi interface and setting its INGRESS/DOWNLOADING bandwidth to 1000 and its UPLOADING/EGRESS to 50000 should work (for internally facing interfaces the directionality of the Uploading/Downloading fields is flipped in relation to internet upload and download, as the direction is always specified in relation to the router, and router and internet are only aligned for the WAN interface, but I digress). BUT this would be quite drastic and unfriendly, as all wifi users would only have that 1 Mbps all of the time; even when you are not using your computer they would be throttled. (You could create a second configuration in luci-app-sqm for the wifi interface and manually enable and disable it around your game playing, so the rest of the family would have the full 5 Mbps as long as you are not playing and did not forget to disable that shaper again.) While I believe this to be unfriendly, it might be a decent idea for testing whether that really helps. But I would not recommend it for continuous use.
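If you try that test, one way to flip the second shaper instance on and off from SSH would be uci; this sketch assumes the wifi entry is the second 'queue' section in /etc/config/sqm, i.e. @queue[1] (check with "uci show sqm" first):

```shell
# before a gaming session: enable the wifi throttle
uci set sqm.@queue[1].enabled='1'
uci commit sqm
/etc/init.d/sqm restart

# afterwards: hand the full bandwidth back to the family
uci set sqm.@queue[1].enabled='0'
uci commit sqm
/etc/init.d/sqm restart
```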

In theory, another option would be to use layer_cake instead of piece_of_cake and replace the two dual-xxhost keywords with "flows"; that way there is no fair sharing between internal IPs, but packets with the proper diffserv markings will get some reserved bandwidth. The challenge will be to make sure your UDP packets get the correct markings. I fear that while I can sketch out this generic description, I will not be able to help you with the implementation.
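As a sketch of what "correct markings" could mean in practice (assumptions on my part, not tested here: the iptables DSCP target may require the iptables-mod-ipopt package on OpenWrt, and you should verify which cake tin the chosen class lands in on your build):

```shell
# mark outgoing game packets (UDP ports 27000-27030) with DSCP CS6 so that
# layer_cake's diffserv logic sorts them into a higher-priority tin
iptables -t mangle -A POSTROUTING -o eth1 -p udp --dport 27000:27030 \
  -j DSCP --set-dscp-class CS6
```

Note that this only covers egress; remarking incoming packets before the ingress shaper sees them is a harder problem.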

Best Regards

Wouldn't it be as easy as installing luci-app-qos and adding those UDP port ranges into it, like a port-forward kind of scenario? I could also just limit the bandwidth from there as well... It's not like they'll notice the sqm awesomeness of web pages refusing to open while they're watching a video; if it buffers a bit they won't mind! But I'm curious whether this will also cause me to buffer, even with the separate bandwidth I will have saved for myself (around 3-4 Mbps upstream and 20 Mbps downstream)... The rest they can share. And I don't understand why they would ever even notice 1-1.5 Mbps upstream being an issue. I mean, you said you need about 3% of the total download as the upload, right? Meaning 1250/0.03 = at least 41 Mbps... That's PLENTY for them to watch anything they want! And they do one thing at a time; it's not like they watch videos while downloading something in the background, or have a video playing on one monitor while browsing on another. They literally stare at one screen at a given time. Any videos they watch are essentially 480p to a maximum of 720p quality, and they would not even notice the buffer time even if they switched videos around very fast (say between 3-5 different youtube video links in under 10 seconds)... Even a 720p video only takes about a maximum of 20 Mbps to load the next minute of playability every few seconds. I'm not sure why you think it's so unfriendly.

Also I don't quite understand how I would word the "dual-xxhost with flows";
Would that translate one of them from "nat dual-dsthost mpu 64" to "nat dual-flowshost mpu 64", or to "nat flows mpu 64", or to "flows 64", or to "flows", or to "flows dual-dsthost mpu 64"? I apologize for not understanding this right away.

And also, regarding luci-app-qos: if I were to ONLY enable luci-app-qos (and turn off sqm), would I still face issues if they were capped at 50 Mbps DS / 1.5 Mbps US? Also, would having both of them installed at the same time give me any issues, since https://wiki.openwrt.org/doc/uci/qos says to make sure to uninstall any other QoS-related package before installing another one?

I feel guilty for harvesting your brainpower so much. Man, you must be worn out!! Take some time between responses, please; this is almost stressing me out, because I'm like a kid with a new toy with all these options and packages.

edit:
I also don't understand why the luci-app-qos settings page has a "Number of bytes" field in the section where the ports/TCP-or-UDP and source/destination are specified along with their priorities. I'll play around with all of those, but "Number of bytes" makes no sense so far.

Once again, thanks a lot for your help sir.

Not really. While it should be possible to use qos-scripts and sqm-scripts concurrently (I have not tested that myself), it will not work on the same interface. But more to the point, all ports > 1024 (or so) can be used by any application, so there is no guarantee that this will only apply priority to your gaming packets.

Simple: the very moment they upload something or send an email with a largish attachment, they will notice that it takes longer than it used to, and that might not be appreciated (well, if you pay for the link you obviously can set the rules).

Because I still remember the time of low uplinks (and honestly, I consider your full 5 Mbps to be on the low side already), and I believe that static allocation schemes are overly pessimistic; but that is just my personal opinion: your network, your rules...

Just "flows mpu 64", sorry for being unclear.

As far as I can see, that would probably work if you set a rule prioritizing your relevant port range, but IIRC qos-scripts uses the HFSC shaper, which currently is buggy; I certainly would not use qos-scripts if I could avoid it.

As stated above, I believe the biggest issue would be trying to instantiate both on the same interface at the same time, but I have not tested that myself.

I fear I can not really help you with qos-scripts due to lack of relevant first hand experience with it.

Best Regards


After a long time...

Great success! I did have to go with sqm-extra-scripts, for the triple WAN/LAN queue setup scripts, with cake as the qdisc.

http://www.dslreports.com/speedtest/25491271 lookie! And this is while i'm streaming a 4k video on wifi

But what's really, really weird is that even though I'm using "test_LAN_triple-isolate__piece_of_cake.qos" on LAN (eth0, the ethernet ports, separated from the Wi-Fi/WLAN interfaces)... the download/ingress IS my download. It's not flipped as it should be according to its directions and practically everything else I've tried. I suppose that since it's advertised as a LAN-issued script, two negatives make a positive somehow? Haha, either way I'm very pleased.

I'm using "test_WAN_triple-isolate__piece_of_cake.qos" on the actual WAN port (eth1), and that's working just how it should, with ingress as the download... I spent a lot of nights troubleshooting this, and finally got the reward :heart_eyes::sunglasses:

Looking inside the mentioned script shows:

#sm: flip upload and download bandwidth so that the GUI values reflect directionality in regard to the ISP
#sm: NOTE this is ugly and should be performed in defaults.sh or functions.sh if at all
#sm: but for quick and dirty testing this should do
local ORIG_UPLINK=${UPLINK}
local ORIG_DOWNLINK=${DOWNLINK}
UPLINK=${ORIG_DOWNLINK}
DOWNLINK=${ORIG_UPLINK}

The LAN scripts really just differ from the normal scripts in this hidden flip, so from the end user's perspective the GUI fields' direction matches their naming (I was just getting tired of explaining the flipping over and over again). For the dual-isolation options the script also adjusts the dual-srchost/dual-dsthost keywords for the effective directionality, but for triple-isolate that should not matter. But please note that it is still recommended to add the "nat" keyword to both the ingress and egress advanced option strings...

Best Regards

1 Like

I don't want to split the bandwidth up evenly between the interfaces though... can't I just keep the "flows mpu 64" options in there instead?

You can, but flows is mutually exclusive with triple-isolate:

user@computer:~# tc qdisc add root cake help
Usage: ... cake [ bandwidth RATE | unlimited* | autorate_ingress ]
                [ rtt TIME | datacentre | lan | metro | regional |
                  internet* | oceanic | satellite | interplanetary ]
                [ besteffort | diffserv8 | diffserv4 | diffserv-llt |
                  diffserv3* ]
                [ flowblind | srchost | dsthost | hosts | flows |
                  dual-srchost | dual-dsthost | triple-isolate* ]
                [ nat | nonat* ]
                [ wash | nowash * ]
                [ memlimit LIMIT ]
                [ ptm | atm | noatm* ] [ overhead N | conservative | raw* ]
                [ mpu N ]
                (* marks defaults)
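Since cake only accepts one of the flow-isolation keywords at a time, one way to sanity-check an option string before handing it to SQM is a quick shell helper like this (a hedged sketch; `check_cake_opts` is a hypothetical name, not part of sqm-scripts or tc):

```shell
# Hypothetical helper: count how many of cake's mutually exclusive
# flow-isolation keywords appear in an option string.
check_cake_opts() {
    count=0
    for tok in $1; do
        case "$tok" in
            flowblind|srchost|dsthost|hosts|flows|dual-srchost|dual-dsthost|triple-isolate)
                count=$((count + 1))
                ;;
        esac
    done
    if [ "$count" -gt 1 ]; then
        echo "conflict"
    else
        echo "ok"
    fi
}

check_cake_opts "flows mpu 64"          # prints: ok
check_cake_opts "flows triple-isolate"  # prints: conflict
```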

Maybe it would help if you could post both the output of:

  1. cat /etc/config/sqm
  2. tc -s qdisc

And the following keywords should work better in combination with the nat keyword:
srchost | dsthost | hosts | dual-srchost | dual-dsthost | triple-isolate
only the dual and triple options will also attempt per-flow fairness for the different IPs (okay, triple is a bit more complicated, but it will still maintain per-flow fairness)
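To put rough numbers on the host-fairness part (illustrative arithmetic only, using a hypothetical ingress rate and host count):

```shell
# Illustrative arithmetic only: with nat + dual-dsthost on ingress, cake
# first splits capacity fairly between the active destination hosts, and
# then fairly between each host's flows. Assuming three hosts saturate a
# hypothetical 108600 Kbit shaped ingress:
rate_kbit=108600
active_hosts=3
per_host=$((rate_kbit / active_hosts))
echo "${per_host} Kbit per host"   # prints: 36200 Kbit per host
```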

Best Regards

cat /etc/config/sqm

config queue
option debug_logging '0'
option verbosity '5'
option linklayer 'none'
option enabled '1'
option interface 'wlan1'
option download '800'
option upload '20000'
option qdisc_advanced '1'
option qdisc 'cake'
option script 'piece_of_cake.qos'
option squash_dscp '1'
option squash_ingress '1'
option ingress_ecn 'ECN'
option egress_ecn 'NOECN'
option qdisc_really_really_advanced '0'

config queue
option debug_logging '0'
option verbosity '5'
option interface 'eth1'
option qdisc_advanced '1'
option qdisc 'cake'
option download '108600'
option upload '5800'
option qdisc_really_really_advanced '1'
option squash_ingress '1'
option ingress_ecn 'ECN'
option egress_ecn 'NOECN'
option squash_dscp '1'
option linklayer 'ethernet'
option linklayer_advanced '1'
option tcMTU '2047'
option tcTSIZE '128'
option tcMPU '64'
option overhead '18'
option linklayer_adaptation_mechanism 'tc_stab'
option script 'test_WAN_triple-isolate__piece_of_cake.qos'
option enabled '1'
option iqdisc_opts 'mpu 64 nat dual-dsthost'
option eqdisc_opts 'mpu 64 nat dual-srchost'
option ilimit '18'
option elimit '18'

config queue
option debug_logging '0'
option verbosity '5'
option interface 'eth0'
option qdisc 'cake'
option qdisc_advanced '1'
option linklayer 'ethernet'
option linklayer_advanced '1'
option tcMTU '2047'
option tcTSIZE '128'
option egress_ecn 'NOECN'
option squash_dscp '1'
option squash_ingress '1'
option ingress_ecn 'ECN'
option tcMPU '64'
option overhead '18'
option linklayer_adaptation_mechanism 'tc_stab'
option script 'test_LAN_triple-isolate__piece_of_cake.qos'
option download '58120'
option upload '0'
option qdisc_really_really_advanced '1'
option enabled '1'
option iqdisc_opts 'mpu 64 nat dual-dsthost'
option eqdisc_opts 'mpu 64 nat dual-srchost'
option ilimit '18'
option elimit '18'

tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 8092: dev eth0 root refcnt 9 bandwidth 58120Kbit besteffort dual-srchost nat rtt 100.0ms raw mpu 64
Sent 7336203013 bytes 5095143 pkt (dropped 5493, overlimits 8643501 requeues 0)
backlog 0b 0p requeues 0
memory used: 1219912b of 4Mb
capacity estimate: 58120Kbit
Tin 0
thresh 58120Kbit
target 5.0ms
interval 100.0ms
pk_delay 5.7ms
av_delay 3.9ms
sp_delay 51us
pkts 5100636
bytes 7344567013
way_inds 25501
way_miss 9378
way_cols 0
drops 5493
marks 0
sp_flows 1
bk_flows 1
un_flows 0
max_len 5920

qdisc cake 808f: dev eth1 root refcnt 9 bandwidth 5800Kbit besteffort dual-srchost nat rtt 100.0ms raw mpu 64
Sent 900892165 bytes 4433000 pkt (dropped 6709, overlimits 247732 requeues 0)
backlog 0b 0p requeues 0
memory used: 134Kb of 4Mb
capacity estimate: 5800Kbit
Tin 0
thresh 5800Kbit
target 5.0ms
interval 100.0ms
pk_delay 760us
av_delay 18us
sp_delay 0us
pkts 4439709
bytes 911008269
way_inds 31746
way_miss 46433
way_cols 0
drops 6709
marks 0
sp_flows 1
bk_flows 1
un_flows 0
max_len 13216

qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
Sent 8956752918 bytes 7154584 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 808c: dev wlan1 root refcnt 5 bandwidth 20Mbit besteffort triple-isolate rtt 100.0ms raw
Sent 1904541270 bytes 2058612 pkt (dropped 357, overlimits 1463600 requeues 0)
backlog 0b 0p requeues 0
memory used: 574606b of 4Mb
capacity estimate: 20Mbit
Tin 0
thresh 20Mbit
target 5.0ms
interval 100.0ms
pk_delay 943us
av_delay 75us
sp_delay 0us
pkts 2058969
bytes 1905062418
way_inds 21225
way_miss 26289
way_cols 0
drops 357
marks 134
sp_flows 1
bk_flows 1
un_flows 0
max_len 1514

qdisc ingress ffff: dev wlan1 parent ffff:fff1 ----------------
Sent 552518013 bytes 1576637 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 808d: dev ifb4wlan1 root refcnt 2 bandwidth 800Kbit besteffort triple-isolate wash rtt 100.0ms raw
Sent 551291349 bytes 1553057 pkt (dropped 23580, overlimits 1657153 requeues 0)
backlog 0b 0p requeues 0
memory used: 368768b of 4Mb
capacity estimate: 800Kbit
Tin 0
thresh 800Kbit
target 22.7ms
interval 117.7ms
pk_delay 1.2ms
av_delay 101us
sp_delay 1us
pkts 1576637
bytes 574590931
way_inds 15457
way_miss 56688
way_cols 0
drops 23580
marks 398
sp_flows 5
bk_flows 2
un_flows 0
max_len 1514

qdisc cake 8090: dev ifb4eth1 root refcnt 2 bandwidth 108600Kbit besteffort dual-dsthost nat rtt 100.0ms raw mpu 64
Sent 9220774285 bytes 7154494 pkt (dropped 90, overlimits 4260588 requeues 0)
backlog 0b 0p requeues 0
memory used: 170204b of 5430000b
capacity estimate: 108600Kbit
Tin 0
thresh 108600Kbit
target 5.0ms
interval 100.0ms
pk_delay 130us
av_delay 39us
sp_delay 1us
pkts 7154584
bytes 9220906131
way_inds 20722
way_miss 46593
way_cols 0
drops 90
marks 0
sp_flows 2
bk_flows 1
un_flows 0
max_len 12960

Thanks for the data, now I see what you have, but what behavior exactly would you like to see and what do you see instead?

1 Like

I honestly see the behavior that I would like to see, given my internet connection (which is sub-par at best) and all my trials and errors. The commands you have given me, the dangerous/advanced options included, have ended up producing the BEST POSSIBLE lowest bufferbloat, whether others are using the internet or not. And it is thanks to YOUR HELP, moeller0, AND eduperez as well. I was very hesitant until eduperez told me to just un-bridge and not mess with VLANs and the rest of the options; a little bit of forum browsing along with your instructions (moeller0) sealed it. I thank you both for your support and helpful efforts; they have led to confirmed results.

Thank you both. May you both be guided by God.

1 Like

If I separate the wired and the wireless interfaces, which protocol should I use for the wireless?