L2 link aggregation on EA8500

Hello,

I have a Linksys EA8500 with the latest OpenWRT.

According to the ToH, the switch chip is a Qualcomm Atheros QCA8337. Taking a look at this chip's datasheet, there is a reference to link aggregation, so I guess this router should support it (I'm no expert, that's just my understanding).

In my home I have two CAT5e links between each floor, so I'd like to implement a 2 Gbit/s trunk (link aggregation / bonding) on each pair. For that, I have the EA8500 in the attic and two TP-Link switches on the ground floor and in the basement, respectively (diagram attached).

Why? Because even though my PC, NAS, media center, etc. all have 1 Gbit/s cards and links, the real throughput is never close to that. With 2 Gbit/s trunks I know I won't get close to 2 Gbit/s either, but I expect to get over 1 Gbit/s in aggregate, and therefore much closer to each link's full capacity. Last but not least, this would also provide some redundancy in case a cable breaks.

Another reason is that I'm a geek and would like to learn how to do it!

Anyway, I saw some posts regarding link aggregation here in the forum.
My understanding so far (please correct me if I'm wrong), is:

  • L2 link aggregation is not that common, although some people are trying to achieve it.
  • OpenWRT doesn't have any dedicated support for this L2 link aggregation.
  • The only way to implement it is with OS-level commands, but most of those seem to operate at layer 3 (routing) instead of layer 2 (switching); see the sketch right after this list.
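For what it's worth, a minimal sketch of those OS-level commands on a generic Linux box, using the kernel bonding driver, looks like this (on OpenWRT the driver comes from the kmod-bonding package; the interface names and the LACP mode are assumptions on my side):

```sh
# Load the kernel bonding driver (OpenWRT: opkg install kmod-bonding).
modprobe bonding

# Create a bond in 802.3ad (LACP) mode; the switch on the other end
# of the two cables must have LACP enabled on its member ports too.
ip link add bond0 type bond mode 802.3ad miimon 100

# Enslave two interfaces (placeholder names); they must be down first.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

ip link set bond0 up

# Verify LACP negotiation and per-slave state.
cat /proc/net/bonding/bond0
```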

Between the TP-Link switches I was able to configure the 2 Gbit/s trunk and it's working great. I just need to understand whether it's possible to do the same between the EA8500 and a TP-Link switch.

Any thoughts on this?

One thing to be aware of: although link aggregation may work across the switch's PHYs, the pathway to the CPU itself may be bandwidth limited. I don't have a datasheet for the IPQ8064, but it may be that the switch is connected over (R)GMII, which is spec'd for gigabit rates.


That's a good point I hadn't thought about before.

Doing a quick search online, I believe it wouldn't be an issue...

In comes IPQ8064, which has 2 Krait 300 CPUs clocked at up to 1.4 GHz, two network accelerator engines clocked at up to 730 MHz, capable of processing up to 5 Gbps of aggregate throughput through the SoC, all built on TSMC's 28nm LP process. There's a host of I/O in IPQ8064 as well: 3 PCIe 1x lanes, one SATA3 port, 2 USB 3.0 ports, and XGMII (10 Gbps media independent interface). There's also the same PCDDR3 1066 interface we saw with Snapdragon 600 for memory. IPQ8062 is a cut down variant of IPQ8064 with less performance and fewer interfaces, though I'm not clear on the exact differences.

Anyhow, the purpose here is also to test it and see how it goes. Any idea how to configure it at L2? (port bonding, not IP bonding)

Cheers,
Joaoabs


If the bonding gets handled completely by the switch fabric, you won't see a slowdown; if the traffic has to move through the CPU ports, you will be capped at around 500-650 MBit/s. Whether it stays in the fabric depends to a large part on the switch driver used by OpenWrt, which doesn't necessarily have the same offloading capabilities as the proprietary firmware (e.g. the NSS cores aren't used at all by OpenWrt).
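A quick way to check which path the traffic takes, assuming you can generate a large transfer through the router: watch the router's CPU while it forwards.

```sh
# While a big transfer runs through the router, watch the softirq
# share on the router itself (BusyBox top supports -d <seconds>).
# If "sirq" climbs towards a full core, the packets are moving
# through the CPU and the ~500-650 MBit/s cap will apply.
top -d 1

# Rough cross-check: do the ethernet interrupt counters race
# upwards during the transfer?
grep eth /proc/interrupts
```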


Would love to see some iperf stats... if you ever get a chance:

1 x 1 Gbit/s vs 2 x 1 Gbit/s bond

From what I've read, the performance gain might be in the +30% range...?
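If it ever gets that far, a possible test recipe (iperf3 assumed on both ends; 192.168.1.10 is a placeholder for the server address). One caveat: a LAG balances per flow, so a single TCP stream never exceeds one member link; you need several streams and a hash policy that actually spreads them.

```sh
# On the machine behind the trunk (e.g. the NAS):
iperf3 -s

# Baseline from the other side: one stream, ~940 Mbit/s on plain GbE.
iperf3 -c 192.168.1.10 -t 30

# Bonded test with parallel streams. With the default layer2 hash
# policy all flows between the same two MACs land on one link, so
# switch the bond (and, if supported, the switch) to layer3+4
# hashing first:
#   ip link set bond0 type bond xmit_hash_policy layer3+4
iperf3 -c 192.168.1.10 -t 30 -P 4
```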

I could only imagine that you create two interfaces, eth0.4 and eth0.5 (for example), with the needed VLAN config in the switch, and add these to a bond device. From my understanding it would also be necessary to assign a new MAC address to each of these newly created interfaces.
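A rough, untested sketch of that idea (the parent interface eth0, the VLAN IDs 4/5 and the locally administered MACs are all made up, and the switch's VLAN setup has to map each VLAN to one LAN port plus the tagged CPU port):

```sh
# The parent interface carrying the tagged VLANs must be up.
ip link set eth0 up

# One tagged sub-interface per physical link towards the other switch.
ip link add link eth0 name eth0.4 type vlan id 4
ip link add link eth0 name eth0.5 type vlan id 5

# Give each sub-interface its own (locally administered) MAC address.
ip link set eth0.4 address 02:00:00:00:00:04
ip link set eth0.5 address 02:00:00:00:00:05

# Bundle the two sub-interfaces into an LACP bond and bring it up.
ip link add bond0 type bond mode 802.3ad
ip link set eth0.4 master bond0
ip link set eth0.5 master bond0
ip link set bond0 up
```

One caveat with this construction: both sub-interfaces still share the single CPU-to-switch connection inside the router, so unless the aggregation is offloaded to the switch fabric (as discussed above) it adds redundancy rather than bandwidth.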

But then you definitely lose the possibility to route specific VLANs over the bond interface.

Feel free to correct me.

Regards
Sebastian