SQM overhead MPLS

@moeller0

root@OpenWrt 16:04 #mtr -ezb4w -c 10 8.8.8.8
Start: 2020-04-03T16:07:00+0000
HOST: OpenWrt                                                              Loss%   Snt   Last   Avg  Best  Wrst StDev
@Not a TXT record
  1. AS???    192.168.0.254 (192.168.0.254)                                 0.0%    10    0.7   0.7   0.5   1.0   0.1
  2. AS7018   99-113-132-1.lightspeed.nworla.sbcglobal.net (99.113.132.1)   0.0%    10   27.5  28.1  22.4  40.1   5.6
  3. AS7018   99.55.24.120 (99.55.24.120)                                   0.0%    10   28.5  27.9  27.3  28.5   0.4
  4. AS7018   12.123.239.98 (12.123.239.98)                                 0.0%    10   43.4  41.8  38.0  45.1   2.6
       [MPLS: Lbl **25042** TC 0 S u TTL 1]
  5. AS7018   12.122.28.29 (12.122.28.29)                                   0.0%    10   45.8  46.0  41.8  48.9   2.4
       [MPLS: Lbl 0 TC 0 S u TTL 1]
       [MPLS: Lbl **27325** TC 0 S u TTL 1]
  6. AS7018   12.122.140.237 (12.122.140.237)                               0.0%    10   40.3  41.1  40.3  41.4   0.3
  7. AS7018   12.255.10.8 (12.255.10.8)                                     0.0%    10   38.0  38.0  37.4  38.5   0.3
  8. AS15169  172.253.71.63 (172.253.71.63)                                10.0%    10   45.8  77.3  38.7 130.9  31.2
  9. AS15169  108.170.225.107 (108.170.225.107)                             0.0%    10   38.0  38.5  37.5  39.9   0.8
 10. AS15169  dns.google (8.8.8.8)                                          0.0%    10   38.7  38.1  37.2  39.1   0.6

Are the MPLS labels a concern, at 4 bytes of overhead per label, or nil?
ASN AS7018 = AT&T

Only if the MPLS link is your bottleneck.
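
If it were, each label in the stack costs another 4 bytes, so you would just fold them into cake's overhead. A hypothetical sketch (interface name and shaped rate are placeholders):

# Only worth modelling if the MPLS segment is the bottleneck:
# 22 B (Ethernet + PTM framing) + 2 MPLS labels * 4 B = 30 B
tc qdisc replace dev eth0 root cake bandwidth 50Mbit overhead 30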

One last question... Have we settled on 22 or 26 bytes for the total overhead?
VDSL2 sans PPPoE

We didn't settle on any non-zero overhead! What's the speed test result, and what does the HH5A report as the VDSL2 line rate?

I think cake's man-page is spot on:
"ETHERNET: 6B dest MAC + 6B src MAC + 2B ethertype + 4B Frame Check Sequence +
PTM: 1B Start of Frame (S) + 1B End of Frame (Ck) + 2B TC-CRC (PTM-FCS)"
for a sum of 22 bytes, unless your ISP also throws in a VLAN tag...
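
In tc terms that comes out as something like the following (a sketch; device name and shaped rate are placeholders, and the ptm keyword additionally compensates for VDSL2's 64/65 encoding):

# 6+6+2+4 (Ethernet incl. FCS) + 1+1+2 (PTM framing) = 22 bytes
tc qdisc replace dev eth0 root cake bandwidth 50Mbit overhead 22 ptm
# cake's bridged-ptm preset expands to the same "overhead 22 ptm";
# with an 802.1Q VLAN tag on top it would be overhead 26 instead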

I still think this is reasonable. Why do you think 26?

A VLAN tag is what makes me curious. I work in a datacenter where every single switch port is tagged, and I don't see why a large ISP, in my case AT&T, wouldn't do the same. Most CPE use VLAN ID 0, which would still technically be a carried tag, and an 802.1Q tag adds 4 bytes, which is exactly the difference between 22 and 26. I could be wrong, as it may be stripped along the way or ignored.

Well, after a discussion with @patrakov I think the best advice* is to simply set overhead >= true overhead. Technically it is interesting to figure out what overhead really applies to a link, but for reducing bufferbloat it is sufficient to configure an overhead that is not smaller than the true overhead, so 26 rather than 22. The throughput lost by configuring too large a per-packet overhead typically is rather small...

*) Also the advice @dlakelan gave in https://openwrt.org/docs/guide-user/network/traffic-shaping/sqm-details
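
To put a number on "rather small": with full-size packets, configuring 26 instead of a true 22 bytes makes the shaper account for 1526 instead of 1522 bytes per 1500-byte IP packet, so you give away roughly 4/1522 ≈ 0.3% of throughput. The relative cost grows for small packets, which is why staying reasonably close to the true value still has some merit on links carrying lots of VoIP/gaming traffic.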

I think I can speak freely on this in the USA. The quote from @patrakov below is accurate, so we follow your advice, @moeller0, and apply the true overhead or slightly above.

Example 2: the ADSL modem connects at 18 Mbit/s, and the user pays for "as fast as the modem can get" connection. Then, the "adsl" keyword is relevant, and the bandwidth needs to be set to 18 Mbit/s.
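
As a sketch, that example would translate to a cake invocation along these lines (the device name is a placeholder, "atm" is cake's equivalent of the "adsl" link-layer keyword, and the 40-byte overhead assumes PPPoE over LLC/SNAP; match it to your actual encapsulation):

# shape at the 18 Mbit/s sync rate; "atm" enables the 48-in-53
# ATM cell framing compensation behind the "adsl" keyword
tc qdisc replace dev pppoe-wan root cake bandwidth 18Mbit atm overhead 40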

I see there is a 2-byte MVNETA_MH_SIZE in the mvneta driver source code.

And? If your true bottleneck is inside the mvneta NIC, then you might/should consider these two bytes in your per-packet overhead, but I seriously doubt that this actually affects the on-the-wire speed of mvneta Ethernet interfaces and hence assume they can safely be ignored.
