Speeding up PPPoE

Can you try a test with this?
echo reno > /proc/sys/net/ipv4/tcp_congestion_control
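
(Same thing via sysctl, plus a check of which algorithms the kernel actually offers:)

cat /proc/sys/net/ipv4/tcp_available_congestion_control
sysctl -w net.ipv4.tcp_congestion_control=reno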

Wow, 1 Gbit/s down and 0.5 Gbit/s up must be nice!
Is SQM QoS still usable with fastpath?

I was thinking: is there a way to find out whether PPPoE is using encryption and/or compression? That could explain the rather large CPU overhead I am seeing with PPPoE enabled. I've supplied pppd with the debug option, and this is what I am seeing in my logs. Is there anything here that tells me encryption/compression is being used?

Fri Feb 23 11:08:58 2018 daemon.info pppd[1015]: Plugin rp-pppoe.so loaded.
Fri Feb 23 11:08:58 2018 daemon.info pppd[1015]: RP-PPPoE plugin version 3.8p compiled against pppd 2.4.7
Fri Feb 23 11:08:58 2018 daemon.notice pppd[1015]: pppd 2.4.7 started by root, uid 0
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]: Send PPPOE Discovery V1T1 PADI session 0x0 length 4
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]:  dst ff:ff:ff:ff:ff:ff  src 56:02:c8:64:3f:99
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]:  [service-name]
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]: Recv PPPOE Discovery V1T1 PADO session 0x0 length 27
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]:  dst 56:02:c8:64:3f:99  src 80:38:bc:0b:cd:2e
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]:  [service-name] [AC-name 195.190.228.161] [end-of-list]
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]: Send PPPOE Discovery V1T1 PADR session 0x0 length 4
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]:  dst 80:38:bc:0b:cd:2e  src 56:02:c8:64:3f:99
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]:  [service-name]
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]: Recv PPPOE Discovery V1T1 PADS session 0x20ef length 8
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]:  dst 56:02:c8:64:3f:99  src 80:38:bc:0b:cd:2e
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]:  [service-name] [end-of-list]
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]: PADS: Service-Name: ''
Fri Feb 23 11:08:58 2018 daemon.info pppd[1015]: PPP session is 8431
Fri Feb 23 11:08:58 2018 daemon.warn pppd[1015]: Connected to 80:38:bc:0b:cd:2e via interface eth0.6
Fri Feb 23 11:08:58 2018 daemon.debug pppd[1015]: using channel 1
Fri Feb 23 11:08:59 2018 daemon.info pppd[1015]: Using interface pppoe-wan
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: Connect: pppoe-wan <--> eth0.6
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [LCP ConfReq id=0x1 <mru 1492> <magic 0xb4f9c21>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [LCP ConfReq id=0x2 <mru 1500> <auth pap> <magic 0xc4a12eb0>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [LCP ConfAck id=0x2 <mru 1500> <auth pap> <magic 0xc4a12eb0>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [LCP ConfAck id=0x1 <mru 1492> <magic 0xb4f9c21>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [LCP EchoReq id=0x0 magic=0xb4f9c21]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [PAP AuthReq id=0x1 user=<hidden> password=<hidden>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [LCP EchoRep id=0x0 magic=0xc4a12eb0]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [PAP AuthAck id=0x1 "Authentication success,Welcome!"]
Fri Feb 23 11:08:59 2018 daemon.info pppd[1015]: Remote message: Authentication success,Welcome!
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: PAP authentication succeeded
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: peer from calling number 80:38:BC:0B:CD:2E authorized
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [IPCP ConfReq id=0x1 <addr 0.0.0.0> <ms-dns1 0.0.0.0> <ms-dns2 0.0.0.0>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [IPV6CP ConfReq id=0x1 <addr fe80::348c:7a28:7c2e:58ea>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [IPCP ConfReq id=0x1 <addr 195.190.228.161>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [IPCP ConfAck id=0x1 <addr 195.190.228.161>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [IPV6CP ConfReq id=0x1 <addr fe80::8238:bcff:fe0b:cd2e>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [IPV6CP ConfAck id=0x1 <addr fe80::8238:bcff:fe0b:cd2e>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [IPCP ConfNak id=0x1 <addr 86.88.184.57> <ms-dns1 195.121.1.34> <ms-dns2 195.121.1.66>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: sent [IPCP ConfReq id=0x2 <addr 86.88.184.57> <ms-dns1 195.121.1.34> <ms-dns2 195.121.1.66>]
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [IPV6CP ConfAck id=0x1 <addr fe80::348c:7a28:7c2e:58ea>]
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: local  LL address fe80::348c:7a28:7c2e:58ea
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: remote LL address fe80::8238:bcff:fe0b:cd2e
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: Script /lib/netifd/ppp6-up started (pid 1104)
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: rcvd [IPCP ConfAck id=0x2 <addr 86.88.184.57> <ms-dns1 195.121.1.34> <ms-dns2 195.121.1.66>]
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: local  IP address 86.88.184.57
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: remote IP address 195.190.228.161
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: primary   DNS address 195.121.1.34
Fri Feb 23 11:08:59 2018 daemon.notice pppd[1015]: secondary DNS address 195.121.1.66
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: Script /lib/netifd/ppp-up started (pid 1105)
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: Script /lib/netifd/ppp-up finished (pid 1105), status = 0x1
Fri Feb 23 11:08:59 2018 daemon.debug pppd[1015]: Script /lib/netifd/ppp6-up finished (pid 1104), status = 0x9
Fri Feb 23 11:09:33 2018 daemon.info pppd[1015]: System time change detected.
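
(One quick check on a log like this, assuming a standard pppd build: compression would show up as CCP negotiation and encryption as MPPE/ECP options, so getting no hits from the grep below suggests neither was negotiated.)

logread | grep -E 'CCP|ECP|MPPE'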

What is your MTU? If it is 1492, then the CPU will need to fragment the 1500-MTU packets, so it will be a hit on the speed.
You can try baby jumbo frames (see the forum: set a 1508 MTU on the PPPoE parent device, and if your ISP supports it the link will come up with a 1500 MTU, per RFC 4638), or push an MTU of 1492 to your internal devices via DHCP (not working for Windows clients, which have to be set manually). Either way you spare the CPU from doing packet fragmentation. A sketch of both options follows.
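
A minimal sketch of both options in UCI (eth0.6 and the section layout are assumptions, and whether the NIC driver accepts an MTU above 1500 is hardware-dependent):

# /etc/config/network -- baby jumbo frames (RFC 4638)
config device
	option name 'eth0.6'
	option mtu '1508'

config interface 'wan'
	option ifname 'eth0.6'
	option proto 'pppoe'
	option mtu '1500'

# /etc/config/dhcp -- push MTU 1492 to LAN clients (DHCP option 26)
config dhcp 'lan'
	list dhcp_option '26,1492'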

I have tried baby jumbo frames, but they result in identical speeds. That is to be expected, since MSS clamping already makes sure frames do not have to be fragmented. And even if the throughput were limited by a packets-per-second ceiling, the extra 8 bytes of payload would at most give a 8 / 1492 * 100 ≈ 0.54% speed-up.
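
(For reference, the MSS clamping mentioned here is the mtu_fix switch on the WAN zone of the LEDE firewall; a sketch, with the rest of the zone options omitted:)

# /etc/config/firewall
config zone
	option name 'wan'
	option mtu_fix '1'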

Why not simply take a packet capture on the ethernet interface that transports the PPPoE packets? If you can read the packet contents easily (and find expected patterns), that would indicate no compression/encryption is in use. (I believe ping can be used to send specific patterns that you can then look for.)
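
A sketch of that approach (-p is the iputils ping pad option, which fills the echo payload with the given hex bytes; BusyBox ping lacks it. tcpdump -X prints hex in 16-bit groups, hence the space in the grep pattern):

# from a LAN client: pings padded with a recognizable pattern
ping -c 3 -p deadbeef 8.8.8.8
# on the router: watch the PPPoE-carrying VLAN for that pattern
tcpdump -i eth0.6 -X -n | grep -i 'dead beef'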

Very good idea. PPPoE is used on the WAN interface for connecting to my ISP. Running a cable from the router's WAN port to a free ethernet port on my computer, then running a second cable from a second ethernet port on my computer to the NTU (it is a fibre connection), and finally bridging the two ethernet ports on my computer should do the trick, right? Then I can simply use Wireshark to listen to the passing traffic.

If the router runs LEDE it is even easier: simply "opkg update; opkg install tcpdump" and then you can use tcpdump directly on the router. You might want to use a USB stick to store the capture.
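
Something like this, writing the capture to USB storage (/mnt/usb is an assumed mount point):

opkg update; opkg install tcpdump
tcpdump -i eth0.6 -s 0 -w /mnt/usb/pppoe.pcap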

Routing the traffic through another machine to snoop on it is possible, but certainly not that easy, as you need to forward these packets properly.

Should I dump the packets on the pppoe-wan interface, or on the eth0.6 interface? My gut feeling says eth0.6: packets captured on pppoe-wan will appear unencrypted/uncompressed even if the tunnel itself encrypts or compresses them, and since the tunnel runs over eth0.6, encryption/compression would only be visible there. Am I right in my assumptions?

edit: I have tried both interfaces, piping the output to grep and searching for specific content from an HTTP site. Both interfaces show the actual content of the packets, so no compression/encryption seems to be in use. It is definitely the encapsulation/decapsulation that is hitting the CPU hard, and unfortunately it is not very well threaded.
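
(For anyone repeating this, a sketch of the kind of commands involved; 'known-page-string' stands for content you fetch from a plain-HTTP site while capturing:)

tcpdump -i pppoe-wan -A -n | grep 'known-page-string'
tcpdump -i eth0.6 -A -n | grep 'known-page-string'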

I wonder how difficult it would be to get the accel-ppp software working on LEDE. It is written from the ground up with performance in mind and is properly multithreaded. Roaring Penguin PPPoE is showing its age :frowning:

Yep, eth0.6 would have been the one to show compressed/encrypted PPPoE payloads as unintelligible data; comparing (concurrent) captures of eth0.6 and pppoe-wan was probably the best idea.
As an alternative to getting another PPPoE client into LEDE, you might want to talk to your ISP and try to convince them to use DHCP (with option 82, see https://slaptijack.com/networking/what-is-dhcp-option-82/) instead of PPPoE :wink:

IPoE is definitely the easiest option, but I doubt my ISP would want to change their infrastructure just because I insist on using my own router instead of the ISP-provided one. :wink: Officially this isn't even supported. I guess I could switch to an ISP that does use IPoE instead... Or maybe buy something ARM-based with more CPU power for PPPoE.

I switched from PPPoE to DHCP long ago and my ISP changed nothing... I can use either one, and sometimes do for testing.

I've tried DHCP, but it does not work. My ISP explicitly requires a PPPoE connection, unfortunately :frowning:

Or switch to an x86-based router board.

Hello all,

I seem to be running into the same issue on a WRT1900AC router running LEDE. Just wondering what this "fastpath" is and how I can use it.
Have there been any enhancements to the PPPoE speeds in LEDE?

Update: enabling "NAT software offloading" under the firewall settings resolved the speed issue.
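
For reference, the LuCI checkbox corresponds to this UCI setting (a sketch; it needs a build with flow-offload support):

uci set firewall.@defaults[0].flow_offloading='1'
uci commit firewall
/etc/init.d/firewall restart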

Hi all, how do I configure fastpath?

You don't; it's a modem/DSL-line feature that is negotiated during the handshake between the DSLAM and the modem, with the decision made by the DSLAM.

How do I know if my modem has this feature? What do you mean by DSLAM?