Slack's Nebula on OpenWrt -- discussion thread

I'm trying to create a nebula package for OpenWrt and I got the binaries built (they are about 10 MB each when installed on the router). I can create outgoing connections from the router, but inbound connections do not work. I don't have the time to tinker with it (creating the interface, firewall zone/rules), but if you're willing to, let me know.

Makefile is available here, and the mipsel_24kc packages I've created are here.

PS. There is no init script yet.


I must admit that I only skimmed the introduction to Nebula, but what is the advantage of Nebula over, say, ZeroTier?

It says it uses AEAD for encryption, specifically GCM or ChaCha. Could it work with a construction such as aes-128-cbc-hmac-sha256, or something else that is more commonly hardware-accelerated on low(er)-end routers?

My goal here is to package nebula for OpenWrt (as-is, without patching). Your questions might be better asked at SlackHQ's github.

Any success on packaging nebula for OpenWrt?
I'm looking for a new router to use, and having nebula on it would be great.

Nebula seems to offer releases for most architectures found on routers.
There is a link to an issue where nebula on OpenWrt can be discussed.

@stangri: did you start packaging? Did you push anything anywhere?

Yes, it's been merged into the snapshots and the 21.02 snapshots, so it should be available in 21.02 when it releases. I didn't even send a PR for 19.07, as I felt the 21.02 release was imminent.


That is great, thanks.
Looking forward to the release.
I'm looking to run it as a lighthouse on the router.

It should work OOB and automagically create a firewall rule to allow inbound traffic on the port specified in the yml file. LMK how it goes.
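For reference, the port in question is the one in the `listen:` section of the yml file; these values are the upstream example defaults (a lighthouse should pin a fixed port, ordinary nodes can use 0 for a random one):

```yaml
# Port nebula will be listening on. The init script reads this value
# to create the matching firewall rule.
listen:
  host: 0.0.0.0
  port: 4242
```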

My home network is overly complicated and I can't get a public or even a dynamic IP, so I very much appreciate a software-defined network that does not require a ridiculously over-complicated setup like Zerotier or Tailsomething for self-hosting.

I really hope this is the Wireguard of software-defined networks.

I'm currently testing it between a PC, my router and a cloud server with a public IP configured as the "lighthouse", and basic pinging between the devices with the Nebula software installed works fine.

I'm trying to get the OpenWrt router (it's an x86 VM) to share devices on the local LAN with devices connected to Nebula. I've seen a couple of options in the config that could maybe do it, and I'm trying them out blindly:


  # local_allow_list allows you to filter which local IP addresses we advertise
  # to the lighthouses. This uses the same logic as `remote_allow_list`, but
  # additionally, you can specify an `interfaces` map of regular expressions
  # to match against interface names. The regexp must match the entire name.
  # All interface rules must be either true or false (and the default will be
  # the inverse). CIDR rules are matched after interface name rules.
  # Default is all local IP addresses.
  #local_allow_list:
    # Example to block tun0 and all docker interfaces.
    #interfaces:
      #tun0: false
      #'docker.*': false
    # Example to only advertise this subnet to the lighthouse.
    #"10.0.0.0/8": true

and

  # Unsafe routes allows you to route traffic over nebula to non-nebula nodes
  # Unsafe routes should be avoided unless you have hosts/services that cannot run nebula
  # NOTE: The nebula certificate of the "via" node *MUST* have the "route" defined as a subnet in its certificate
  unsafe_routes:
    #- route: 172.16.1.0/24
    #  via: 192.168.100.99
    #  mtu: 1300 #mtu will default to tun mtu if this option is not specified

Has anyone already done some playing around with them?

I've tried searching the web for more documentation, but I only found very basic setups that I was already able to figure out myself just by looking at the config file. The only place where people discuss anything more advanced than that is the Issues section of their GitHub.

Also @stangri, why doesn't the package include the default config file or create the /etc/nebula folder? I think those should be included at the very least.

I've figured it out with some assistance from an old issue thread on their GitHub: https://github.com/slackhq/nebula/issues/214#issuecomment-675220635

What I want to do:
I want to use Nebula to connect to hosts on my LAN from outside, which is basically what a normal VPN does, but with less bs involved. My home network setup is a bit complicated for unrelated reasons; to run a classic VPN on it I would need a network engineering certification to figure it out, and I'm not smart enough.

Some background info:

  • cloud server has public IP 123.123.123.123 (a fake IP I'm using in this example instead of the real one I used, for obvious reasons)
  • OpenWrt LAN network is 192.168.11.0/24
  • Nebula's own virtual network is 192.168.20.0/24

---OpenWrt router setup---
Installed the nebula and nebula-certs packages in OpenWrt (I actually recompiled a whole new master snapshot firmware from source with these packages included, but that's just how I roll, don't judge).

Created the /etc/nebula folder and copied in the default config.yml from https://github.com/slackhq/nebula/blob/master/examples/config.yml

created the main certs
nebula-cert ca -name "albydomain Inc"

created the certs for the "lighthouse" (the server with the public IP, which every other node uses to find each other)
nebula-cert sign -name "lighthouse1" -ip "192.168.20.1/24"

created the certs for the OpenWrt device that will share the LAN
nebula-cert sign -name "openwrt-router" -ip "192.168.20.2/24" -subnets "192.168.11.0/24"

created the certs for a test setup on a PC
nebula-cert sign -name "testPC1" -ip "192.168.20.10/24"

now I have a bunch of files in /etc/nebula; from there I copy ca.crt, lighthouse1.crt/lighthouse1.key and testPC1.crt/testPC1.key over to the respective machines.

On the OpenWrt router, these are the parts of the default config I changed in /etc/nebula/config.yml (note that hosts lives under the lighthouse section and inbound under the firewall section):

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/openwrt-router.crt
  key: /etc/nebula/openwrt-router.key

static_host_map:
  "192.168.20.1": ["123.123.123.123:4242"]

lighthouse:
  hosts:
    - "192.168.20.1"

firewall:
  inbound:
    - port: any
      proto: any
      host: any

I created a firewall zone in OpenWrt to masquerade and forward traffic from the raw nebula1 device. (In 21.02 there is no need to create an "unmanaged" interface in /etc/config/network and then assign it to a firewall zone any more; you can add raw devices to the firewall directly, like this:)

config zone
	option name 'nebula'
	option masq '1'
	list device 'nebula1'
	option forward 'REJECT'
	option input 'REJECT'
	option output 'ACCEPT'

config forwarding
	option src 'nebula'
	option dest 'lan'

and then restart the service to make it read the config and connect to the lighthouse
service nebula restart

---Virtual Cloud Server setup---
on the virtual server with the public IP, which will be my lighthouse, I installed Debian 11 and enabled key-based login on SSH (and disabled password login).
I then installed the ufw package for easy firewall management, since I have to open port 4242 on the lighthouse for the Nebula service.

ufw disable
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 4242
ufw enable

note that if your SSH runs on a nonstandard port (default is 22), you need to open that port too

and I downloaded the linux package from the github releases page https://github.com/slackhq/nebula/releases
aka https://github.com/slackhq/nebula/releases/download/v1.4.0/nebula-linux-amd64.tar.gz

then extracted it into the folder /opt/nebula (the "standard" folder for non-packaged software installed manually)

I moved the ca.crt, lighthouse1.crt and lighthouse1.key from the OpenWrt device to the cloud server, in that folder.

Then I added the same default config file again; these are the parts of it I changed in /opt/nebula/config.yml:

pki:
  ca: /opt/nebula/ca.crt
  cert: /opt/nebula/lighthouse1.crt
  key: /opt/nebula/lighthouse1.key

lighthouse:
  am_lighthouse: true
  # hosts must stay empty on a lighthouse node:
  # hosts:
  #   - "192.168.100.1"

#static_host_map:
#  "192.168.100.1": ["100.64.22.11:4242"]

tun:
  unsafe_routes:
    - route: 192.168.11.0/24
      via: 192.168.20.2
      mtu: 1500 # mtu will default to tun mtu if this option is not specified

firewall:
  inbound:
    - port: any
      proto: any
      host: any

nebula can be started manually on the server with

/opt/nebula/nebula -config /opt/nebula/config.yml

but this isn't the most amazing way to run it. I want it as a service, possibly a systemd service, so the init system will restart it if it crashes and also start it on boot.

so I downloaded the service file for systemd from the examples/service_scripts folder https://github.com/slackhq/nebula/tree/master/examples/service_scripts
(there is also a sysvinit script for people who value "init freedom", or for those using a distro like Alpine or Gentoo with OpenRC instead of systemd)
So I download it with
wget https://raw.githubusercontent.com/slackhq/nebula/master/examples/service_scripts/nebula.service

And of course I must edit this line to point to the right path for my server
ExecStart=/opt/nebula/nebula -config /opt/nebula/config.yml
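For context, the unit ends up looking roughly like this after my edit (a sketch of the shape from memory, not the exact upstream file; only the ExecStart path is the part I actually changed):

```ini
[Unit]
Description=Nebula overlay network daemon
After=network-online.target

[Service]
# Path adjusted for my manual install under /opt/nebula
ExecStart=/opt/nebula/nebula -config /opt/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target
```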

so now copy the service file into the systemd services folder, then enable it, start it, and check its status

cp nebula.service /etc/systemd/system/
systemctl enable nebula.service
systemctl start nebula.service
systemctl status nebula.service

---TestPC setup---
download the nebula package for your architecture and OS, move ca.crt, testPC1.crt and testPC1.key to the test PC, then add the default config file and make the following changes. (The paths for ca.crt and the other key files might differ on Windows or macOS; I used another Linux system, so for me everything is again in the /opt/nebula folder.)

pki:
  ca: /opt/nebula/ca.crt
  cert: /opt/nebula/testPC1.crt
  key: /opt/nebula/testPC1.key

static_host_map:
  "192.168.20.1": ["123.123.123.123:4242"]

lighthouse:
  hosts:
    - "192.168.20.1"

tun:
  unsafe_routes:
    - route: 192.168.11.0/24
      via: 192.168.20.2
      mtu: 1500 # mtu will default to tun mtu if this option is not specified

firewall:
  inbound:
    - port: any
      proto: any
      host: any

Thanks for sharing your fw config and the suggestion to create the /etc/nebula folder; I'll try to implement it in the next update. Including the "default" config is most likely a no-go, because:

  1. Some people may want to run the lighthouse on the router, some may want to run a regular node, configs are different.
  2. (Most importantly) without the CA certificate any config file is useless; the CA certificate is pointless without its private key; and shipping the config, certificate and private key together is pointless because including the private key creates an insecure configuration.

I didn't explain myself well. I was asking to include the "example config" from upstream.

I've called it a "default config" because I've seen cases of a "default config" that does nothing useful on its own but is full of comments and is only there to provide a framework for the user to edit:
- https://github.com/openwrt/packages/blob/master/net/keepalived/files/keepalived.config
- https://github.com/openwrt/packages/blob/master/utils/prometheus/files/prometheus.yml
- https://github.com/openwrt/packages/blob/38f01ad2c9738c6c5a5dbda85019bb9cf5bc10f8/admin/zabbix/Makefile#L215 (I know 100% that Zabbix does nothing unless you go in and configure your things)

P.S. I also forgot to thank you for your effort in packaging Nebula for OpenWrt; I'm pretty sure I could not have just used their binaries on OpenWrt.
I'm sorry if I came across as rude; above I was just focused on getting the thing to work and was frustrated by the lack of documentation.

Also, if Nebula does not let me down in the next few weeks, I'm probably going to add UCI support for it, at least to automate the firewall/NAT part of sharing the local LAN, in the hope that someone with web development experience can then do a luci-app for it.

Ah, I see. Nebula is a large binary, so including a lighthouse.yml.example and node.yml.example in /etc/nebula might not be a bad idea.

I've hit a roadblock creating the nebula-proto package (and luci-proto-nebula), so I haven't sent PR/merged it into packages, but if you're compiling your own, you may want to switch to nebula-service from my repo: https://github.com/stangri/source.openwrt.melmac.net/tree/master/nebula. It brings a newer nebula package.

yeah, I've seen your thread Need help creating a netifd compatible nebula package - #6 by jow

I'm not surprised. Everyone hits a roadblock when trying to use proto/netifd for anything that isn't very basic or a copy-paste of existing protos made by core devs, because there are no docs on netifd, and nobody besides the couple of core devs who wrote it knows how it works.

It just enrages me that multiple things I could script in an afternoon would require advanced reverse-engineering sessions if I tried to do them with netifd. For what benefit? Unless that situation changes, I'm not terribly interested in investing time in it.

For example, the ModemManager proto has similar issues, where the proto support is only partial (it can bring up the modem, but then fails to detect any status change, like the modem reporting that it's not connected or shutting down). And what's stopping them? They don't know enough about how netifd works: https://github.com/openwrt/packages/issues/14096

The bonding proto is also partial and relies on assumptions that make it useless for my use case(s), and it has a huge amount of code to do things that should only take a few lines (it also does not seem to work right with DSA switches, but that is probably a separate issue): https://github.com/openwrt/packages/issues/16779

I'd personally recommend staying away from the netifd/proto subsystem and just doing everything in the init script/UCI; then, for the luci-app-nebula, just add buttons in LuCI to restart the service or reload the config, as luci-app-openvpn does.

netifd support is required if you want to be able to select nebula as an interface for SSH access. Without it, SSH would have to be enabled on all interfaces.

If that is the only issue I think you can work around it.

Technically speaking, dropbear doesn't listen on interfaces at all: it listens on specific IP addresses.

You write an interface name, for example lan. With this setting you limit connections to clients that can reach the IP of that interface: for example, the LAN IP of the interface can only be reached by clients on the LAN network, but not from the WAN in the default firewall configuration. It feeds dropbear's -p option, which does the following: "Listen on specified address and TCP port. If just a port is given, listen on all addresses. Up to 10 can be specified (default 22 if none specified)." From https://openwrt.org/docs/guide-user/base-system/dropbear#dropbear

The dropbear init script takes interface names from the uci config, figures out the IPs of those interfaces, and adds them to the list of IP addresses dropbear will listen on: https://github.com/openwrt/openwrt/blob/master/package/network/services/dropbear/files/dropbear.init#L171
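For reference, the relevant uci section in /etc/config/dropbear looks like this (the Interface option is what limits listening to that interface's address):

```
config dropbear
	option Interface 'lan'
	option Port '22'
```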

So I think you can work around this with firewall rules alone: create an interface-less firewall zone linked to the raw device, as I did above, then add a NAT rule that remaps traffic arriving at the nebula device's IP to the LAN address or some other IP that dropbear is listening on (you might need the script to figure this out from the dropbear uci config).
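A sketch of that NAT rule as a uci redirect, assuming the nebula zone from my config above and a router LAN IP of 192.168.11.1 (both values are just examples to adjust):

```
config redirect
	option name 'nebula-ssh'
	option target 'DNAT'
	option proto 'tcp'
	option src 'nebula'
	option src_dport '22'
	option dest 'lan'
	option dest_ip '192.168.11.1'
	option dest_port '22'
```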

I'll make a last attempt to fix the netifd implementation to the point where it pushes the necessary information to ubus.