Transparently encrypting all traffic on a fiber link?

I have three sites, with more than 1km distance between each of them, that are connected via rented, dedicated single-mode optical fiber. I would like to use COTS hardware and software to encrypt data traversing these fiber links, ideally transparently to the devices (enterprise switching gear from Dell that lacks that kind of feature, unfortunately) at the respective ends.

Are there any standards/protocols/projects (either available in x86_64 OpenWrt, or on Linux-powered platforms in general) that I should look into to build a prototype for this kind of use case?

I have started playing around with a lab setup that uses Linux' MACsec (IEEE 802.1AE) support to encrypt an L2 link between two hosts, but I have found it troublesome to also bridge the MACsec device with another NIC on one side of that link, so that other Ethernet devices' frames can traverse that segment. I am actually not sure it's possible, since this seems to be an underexplored and less well documented area of Linux networking capabilities, and I haven't looked at packet captures yet to actually understand what is going on/where the frames are going exactly.
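
For reference, what I'm attempting looks roughly like the sketch below, assuming eth0 faces the fiber and eth1 faces the local switch. Interface names, the peer MAC, and the hex keys are all placeholders, and the static-key setup is lab-only (a real deployment would run MKA via wpa_supplicant):

```shell
# MACsec device on top of the fiber-facing NIC.
ip link add link eth0 macsec0 type macsec encrypt on

# Outbound SA with a placeholder 128-bit key (key id 00).
ip macsec add macsec0 tx sa 0 pn 1 on key 00 11111111111111111111111111111111

# Inbound channel + SA for the peer (replace MAC and key).
ip macsec add macsec0 rx port 1 address aa:bb:cc:dd:ee:ff
ip macsec add macsec0 rx port 1 address aa:bb:cc:dd:ee:ff sa 0 pn 1 on key 01 22222222222222222222222222222222

# Bridge the MACsec device with the switch-facing NIC so frames from
# other Ethernet devices traverse the encrypted segment.
ip link add br0 type bridge
ip link set macsec0 master br0
ip link set eth1 master br0
for dev in eth0 macsec0 eth1 br0; do ip link set "$dev" up; done
```

The bridging part at the end is exactly the bit I haven't gotten to behave yet.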

I would appreciate any tips or insights you might have to get me on the right track! :slight_smile:

Start with Wireguard on all ends.

The performance will depend on your hardware, but even with a low-tier x86 box I'd guess you'll manage Gbit speeds with no problems.

WireGuard gives you an L3 routed network to start with, so use separate IP ranges for all 3 sites.
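
On OpenWrt, a minimal UCI sketch for one site could look like this. The 10.99.0.0/24 tunnel range, the LAN range, the key placeholders, and the endpoint are assumptions for illustration, not anything from your setup:

```shell
# WireGuard interface for site 1 with a /24 tunnel network.
uci set network.wg0=interface
uci set network.wg0.proto='wireguard'
uci set network.wg0.private_key='<site1-private-key>'
uci add_list network.wg0.addresses='10.99.0.1/24'

# Peer: site 2 (repeat an analogous section for site 3).
uci add network wireguard_wg0
uci set network.@wireguard_wg0[-1].description='site2'
uci set network.@wireguard_wg0[-1].public_key='<site2-public-key>'
uci set network.@wireguard_wg0[-1].endpoint_host='<site2-endpoint>'
uci set network.@wireguard_wg0[-1].endpoint_port='51820'
uci add_list network.@wireguard_wg0[-1].allowed_ips='10.99.0.2/32'
uci add_list network.@wireguard_wg0[-1].allowed_ips='10.0.2.0/24'   # site 2's LAN
uci set network.@wireguard_wg0[-1].route_allowed_ips='1'

uci commit network
/etc/init.d/network reload
```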

If you want a single L2 broadcast domain across all of your sites, you will need to put GRE tunnel interfaces on top of that. Mind the difference: you need "gretap" (L2), not plain "gre" (L3). There you can easily configure which bridge device of your OpenWrt box the tunnel should be bridged with.
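
With plain iproute2 (outside of UCI), the gretap-on-top-of-WireGuard idea looks roughly like this; the 10.99.0.x addresses are assumed to be the WireGuard tunnel addresses of the two sites, and br-lan is assumed to be the local LAN bridge:

```shell
# L2 tunnel to site 2, carried inside the WireGuard link.
ip link add gt-site2 type gretap local 10.99.0.1 remote 10.99.0.2
ip link set gt-site2 up

# Attach it to the local LAN bridge so the broadcast domain spans sites.
ip link set gt-site2 master br-lan
```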

But while WireGuard only requires one WireGuard interface with two peers each, a GRE tunnel setup will require one GRE tunnel per direction, making it a total of 2 GRE tunnel devices per site and 6 GRE tunnel devices in total. Not entirely unmanageable, but this obviously has the potential of getting out of hand.

You might want to play around a little bit and hope your traffic doesn't loop. I never implemented this physically, only as virtual devices in VirtualBox. That worked decently well, but real routing might behave differently than my virtual network.


Are you sure about this? The rest of your answer seems rock solid, but this comment took me by surprise... and, in my experience, just one tunnel is needed; perhaps I am missing something here?


@eduperez, it depends on the topology and connectivity, i.e. 1 server and 2 clients vs. 3 peers, see star vs. fully connected.

@colo, you should carefully consider L3 tunneling as the actual need for L2 is often overestimated.


have some reading there too:

this might surely help also.

WireGuard is the easiest way to go.
Except for the L2 part.

Thanks, but... I still do not see the need to use two GRE tunnels between each pair of nodes. Why are tunnels not bidirectional?

FTR: I got my 2-node lab setup working today with MACsec + gretap + bridging, and will also explore the wg option. Looking forward to trying this out on something faster than puny 10-year-old Celeron CPUs :slight_smile: Thanks for the nudge towards GRE, that's what I'd been missing - it's simply been too long since I've had to deal with L2 protocols in any non-trivial capacity.


As far as I know, GRE on Linux supports point-to-multipoint: you configure the local GRE interface and then add each neighbor.
If all routers are running Linux, then WireGuard and EVPN/VXLAN (with FRR) could also be an option if you need layer 2 shizzle between the sites.
But MACsec makes total sense if you have the need of adding "enterprise" gear to the game :face_exhaling:
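
Without FRR's EVPN control plane, a static VXLAN variant of the WireGuard + VXLAN idea can be sketched like this; the VNI, port, and 10.99.0.x WireGuard-side addresses are made up, and flooding is done via static all-zero FDB entries (head-end replication), which EVPN would otherwise distribute dynamically:

```shell
# One VXLAN device replaces the per-site gretap interfaces.
ip link add vxlan100 type vxlan id 100 local 10.99.0.1 dstport 4789

# Flood BUM traffic to the other two sites (static head-end replication).
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 10.99.0.2
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 10.99.0.3

ip link set vxlan100 up
ip link set vxlan100 master br-lan
```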

PS: ATM I only see the need for more than one local GRE interface in redundant deployments, e.g. when binding a GRE interface to a specific WAN interface.
But... not quite sure... then I would maybe go the EVPN path to have dynamic routing and failover abilities.

Maybe I didn't make myself clear enough.

In theory, there's something like "multipoint GRE". But I've never used it, and OpenWrt doesn't provide config options for it. You might be able to work some command-line magic, but I wouldn't do that; I'd stick to what OpenWrt gives you in terms of configuration options.

So GRE, at least in the OpenWrt context, is bidirectional (meaning you can use the same link for tx and rx traffic), but it is not multipoint.

As far as OpenWrt settings go, GRE and GRETAP, in both their IPv4 and IPv6 variants, allow for only a single peer address. So if you want 3 sites to be connected, you need a link from 1 to 2, a link from 2 to 3, and a link from 3 to 1. This means each site ends up with two GRETAP interfaces, one targeting each of the other two sites.

You don't need two tunnels to the same site, but you do need an individual tunnel (which will end up being an individual interface) for each foreign site.
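
In UCI terms, one of a site's two GRETAP interfaces might look like the sketch below, if I'm reading the gre protocol helper's options right. The section name and the 10.99.0.x addresses are placeholders:

```shell
# gretap tunnel from site 1 to site 2, running over the WireGuard link.
uci set network.site2=interface
uci set network.site2.proto='gretap'
uci set network.site2.ipaddr='10.99.0.1'     # local tunnel endpoint
uci set network.site2.peeraddr='10.99.0.2'   # site 2's endpoint
uci set network.site2.network='lan'          # bridge into the lan network

# A second, analogous 'site3' interface would target 10.99.0.3.
uci commit network
```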

But as @vgaetera pointed out, I'd consider whether you really need L2 tunneling or if L3 routing is enough. What's the reason for L2 tunneling? Since your sites are at least 1km apart, you don't really expect, e.g., Wi-Fi clients to seamlessly roam around, do you?


There are many broken enterprise-grade solutions out there which require layer 2 connectivity. So if you have an office and two data centers nearby and have to deal with this kind of mess, then for one reason or another you have to push layer 2 traffic between these 3 sites. That some day shit will hit the fan, because stretched layer 2 is most of the time a bad idea, does not make this need go away, however.

Even if the current OpenWrt UCI does not support multipoint GRE config, it can easily be hacked together with a hook script. Or someone should dig into why setting multiple neighbors on a single GRE interface is not yet supported.


So, it's just me that misunderstood the comment, thanks!

Not that you are not correct, but I find the term "fully connected" misleading.

I.e. with (i)BGP networks or "gossip protocol" deployments, it's normally coined "mesh" or "full mesh", referencing an m:n or, respectively, n:(n-1) topology.

PS: Hub-and-Spoke ( Spoke–hub distribution paradigm) is sometimes also a nice topology...