Difference between DSA and driver-level VLANs in the context of RPi4 (no switch)?

I run pi-hole in a container. It needs to serve all of my zones except iot (no WAN access is allowed on that zone), so that's lan, guest, and lxc.

Perhaps I am making the setup more complex than necessary by adding VLAN 4 and a dedicated interface for the lxc. I am happy to have it live alongside my guest zone on the guest interface.

What if I simplify the interfaces on the Pi, remove LXC altogether, and somehow get the container on the GUEST interface? Thoughts?

The next step for me is bringing up the bridge and the lxc interface, which I am having trouble doing. But first, what do you think about my simplification idea?

Do you have anything connecting to your Pi on VLAN 4? If this was made just for lxc, you can get rid of it.

Makes sense, here's how I would do it. Just let lxc create the interface and IP configuration. Get rid of this:

config interface 'lxc'
	option device 'br-lxc.4'
	option proto 'static'
	option ipaddr '10.0.4.1'
	option netmask '255.255.255.0'

Then, create a new firewall zone and include the lxc interface e.g.:
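
Something like this in /etc/config/firewall (a rough sketch: the zone name is a placeholder, the covered subnet just matches what we've been discussing, and the forwardings let lan/guest reach pi-hole while the container can reach the WAN for upstream DNS):

config zone
	option name 'lxc'
	option input 'ACCEPT'
	option output 'ACCEPT'
	option forward 'REJECT'
	list subnet '10.0.4.0/24'

config forwarding
	option src 'lxc'
	option dest 'wan'

config forwarding
	option src 'lan'
	option dest 'lxc'

config forwarding
	option src 'guest'
	option dest 'lxc'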


Let's see what happens when you don't specify a gateway. Also change the address to 10.0.4.1/24. This way, your devices can reach pi-hole over 10.0.4.1.

Does this all make sense?

Backing up: are all these changes based on the subinterfaces-of-eth0 setup? If so, here is the state I am at currently (not using VLAN filtering and bridges).

/etc/config/network

config interface 'loopback'
	option device 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fd1a:184b:b879::/48'
	option packet_steering '1'

config device
	option name 'eth0'
	option ipv6 '0'

config device
	option name 'eth1'
	option ipv6 '0'

config device
	option name 'eth0.1'
	option type '8021q'
	option ifname 'eth0'
	option vid '1'
	option ipv6 '0'

config device
	option name 'eth0.3'
	option type '8021q'
	option ifname 'eth0'
	option vid '3'
	option ipv6 '0'

config device
	option name 'eth0.4'
	option type '8021q'
	option ifname 'eth0'
	option vid '4'
	option ipv6 '0'

config device
	option name 'eth0.5'
	option type '8021q'
	option ifname 'eth0'
	option vid '5'
	option ipv6 '0'

config device
	option name 'br-lxc.4'
	option type 'bridge'
	option ipv6 '0'
	list ports 'eth0.4'

config device
	option name 'wg0'
	option ipv6 '0'

config interface 'wan'
	option device 'eth1'
	option proto 'dhcp'
	option peerdns '0'
	option delegate '0'
	list dns '1.1.1.1'
	list dns '1.0.0.1'

config interface 'lan'
	option proto 'static'
	option ipaddr '10.9.8.1'
	option netmask '255.255.255.0'
	option device 'eth0.1'

config interface 'guest'
	option proto 'static'
	option ipaddr '10.9.7.1'
	option netmask '255.255.255.0'
	option device 'eth0.3'

config interface 'lxc'
	option proto 'static'
	option ipaddr '10.0.4.1'
	option netmask '255.255.255.0'
	option device 'br-lxc.4'

config interface 'iot'
	option proto 'static'
	option ipaddr '10.9.5.1'
	option netmask '255.255.255.0'
	option device 'eth0.5'

config interface 'wg0'
	option proto 'wireguard'

No, we're doing routing now. We just create an interface for lxc and route the other subnets to it. Get rid of:

config device
	option name 'br-lxc.4'
	option type 'bridge'
	option ipv6 '0'
	list ports 'eth0.4'

config interface 'lxc'
	option proto 'static'
	option ipaddr '10.0.4.1'
	option netmask '255.255.255.0'
	option device 'br-lxc.4'

LXC should create an interface, which won't appear on the Interfaces page. Then set up the firewall as in my reply above.

Sorry, we covered a few setups. I am asking if I should revert to the one not using VLAN filtering and br0. I think so, but I want to be clear. You asked me to get rid of several sections from the configuration based on subinterfaces. Does my question make sense?

If so, I removed those two sections and now have this:

In-progress /etc/config/network

config interface 'loopback'
	option device 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fd1a:184b:b879::/48'
	option packet_steering '1'

config device
	option name 'eth0'
	option ipv6 '0'

config device
	option name 'eth1'
	option ipv6 '0'

config device
	option name 'eth0.1'
	option type '8021q'
	option ifname 'eth0'
	option vid '1'
	option ipv6 '0'

config device
	option name 'eth0.3'
	option type '8021q'
	option ifname 'eth0'
	option vid '3'
	option ipv6 '0'

config device
	option name 'eth0.4'
	option type '8021q'
	option ifname 'eth0'
	option vid '4'
	option ipv6 '0'

config device
	option name 'eth0.5'
	option type '8021q'
	option ifname 'eth0'
	option vid '5'
	option ipv6 '0'

config device
	option name 'wg0'
	option ipv6 '0'

config interface 'wan'
	option device 'eth1'
	option proto 'dhcp'
	option peerdns '0'
	option delegate '0'
	list dns '1.1.1.1'
	list dns '1.0.0.1'

config interface 'lan'
	option proto 'static'
	option ipaddr '10.9.8.1'
	option netmask '255.255.255.0'
	option device 'eth0.1'

config interface 'guest'
	option proto 'static'
	option ipaddr '10.9.7.1'
	option netmask '255.255.255.0'
	option device 'eth0.3'

config interface 'iot'
	option proto 'static'
	option ipaddr '10.9.5.1'
	option netmask '255.255.255.0'
	option device 'eth0.5'

config interface 'wg0'
	option proto 'wireguard'
...

I'm leaving the choice to you: whether to do VLAN filtering on a bridge or just create subinterfaces of eth0.
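
For reference, the two styles look roughly like this for VLAN 1 in /etc/config/network (a sketch only; whether the port is tagged or untagged depends on what sits downstream of eth0):

# bridge with VLAN filtering; interfaces would then use device 'br0.1'
config device
	option name 'br0'
	option type 'bridge'
	list ports 'eth0'

config bridge-vlan
	option device 'br0'
	option vlan '1'
	list ports 'eth0:t'

# plain 802.1q subinterface of eth0, as in your current config
config device
	option name 'eth0.1'
	option type '8021q'
	option ifname 'eth0'
	option vid '1'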

What I pointed out to remove is related to lxc; we're going to take a different approach to it.

You can get rid of this too if you don't use VLAN 4 for any devices connecting to the Pi.

config device
	option name 'eth0.4'
	option type '8021q'
	option ifname 'eth0'
	option vid '4'
	option ipv6 '0'

OK. I'd like to work off the subinterfaces configuration. I removed the 3 sections you called out. What do you suggest I use for the lxc configuration itself? Your suggestion was to omit the gateway therein and change the IP address assigned. Beyond that, do I still use the veth setup, and link to what?

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = 
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.4.1/24

Put lxc-test there (as the interface name at lxc.net.0.link =) and start lxc. See if it runs fine, then check whether the lxc-test interface is created by running the ip link command.
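
For example (assuming the container is named pihole, as above):

# start the container in the foreground so any errors are visible
lxc-start -n pihole -F

# then, from another shell on the host, check for the interface
ip link show lxc-test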

OK. Here is the current config:

/etc/config/network

config interface 'loopback'
	option device 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fd1a:184b:b879::/48'
	option packet_steering '1'

config device
	option name 'eth0'
	option ipv6 '0'

config device
	option name 'eth1'
	option ipv6 '0'

config device
	option name 'eth0.1'
	option type '8021q'
	option ifname 'eth0'
	option vid '1'
	option ipv6 '0'

config device
	option name 'eth0.3'
	option type '8021q'
	option ifname 'eth0'
	option vid '3'
	option ipv6 '0'

config device
	option name 'eth0.5'
	option type '8021q'
	option ifname 'eth0'
	option vid '5'
	option ipv6 '0'

config device
	option name 'wg0'
	option ipv6 '0'

config interface 'wan'
	option device 'eth1'
	option proto 'dhcp'
	option peerdns '0'
	option delegate '0'
	list dns '1.1.1.1'
	list dns '1.0.0.1'

config interface 'lan'
	option proto 'static'
	option ipaddr '10.9.8.1'
	option netmask '255.255.255.0'
	option device 'eth0.1'

config interface 'guest'
	option proto 'static'
	option ipaddr '10.9.7.1'
	option netmask '255.255.255.0'
	option device 'eth0.3'

config interface 'iot'
	option proto 'static'
	option ipaddr '10.9.5.1'
	option netmask '255.255.255.0'
	option device 'eth0.5'

config interface 'wg0'
	option proto 'wireguard'

Here is the lxc config:

lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = aarch64
lxc.rootfs.path = dir:/srv/lxc/pihole/rootfs
lxc.uts.name = pihole

lxc.net.0.type = veth
lxc.net.0.link = lxc-test
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.4.1/24

When I try to start it, it fails:

# lxc-start -n pihole -F
lxc-start: pihole: network.c: netdev_configure_server_veth: 708 No such file or directory - Failed to attach "vethU6Vom5" to bridge "lxc-test", bridge interface doesn't exist
lxc-start: pihole: network.c: lxc_create_network_priv: 3419 No such file or directory - Failed to create network device
lxc-start: pihole: start.c: lxc_spawn: 1826 Failed to create the network
lxc-start: pihole: start.c: __lxc_start: 2053 Failed to spawn container "pihole"
lxc-start: pihole: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: pihole: tools/lxc_start.c: main: 313 Additional information can be obtained by setting the --logfile and --logpriority options

I'm thinking the type of the network needs to be something else?

Can you change lxc.net.0.link = lxc-test to lxc.net.0.name = lxc-test?

Done. Now it starts.

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP qlen 1000
    link/ether 60:a4:b7:59:24:af brd ff:ff:ff:ff:ff:ff
4: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether dc:a6:32:02:c1:c2 brd ff:ff:ff:ff:ff:ff
131: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
132: eth0.3@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
133: eth0.5@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
134: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN qlen 1000
    link/[65534] 
137: ifb4eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN qlen 32
    link/ether 72:27:93:1b:43:a7 brd ff:ff:ff:ff:ff:ff
142: veth1ZgkoO@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether fe:64:43:d4:bd:dc brd ff:ff:ff:ff:ff:ff

I cannot ping it. If I attach to it with lxc-attach -n pihole, I cannot ping outside of it either.

Ok so let me quote this from linuxcontainers.org

veth: a virtual ethernet pair device is created with one side assigned to the container and the other side on the host. lxc.net.[i].veth.mode specifies the mode the veth parent will use on the host. The accepted modes are bridge and router. The mode defaults to bridge if not specified. In bridge mode the host side is attached to a bridge specified by the lxc.net.[i].link option. If the bridge link is not specified, then the veth pair device will be created but not attached to any bridge.

I assumed getting rid of lxc.net.0.link would make lxc switch to routing mode, but per the quote the mode defaults to bridge, so the veth pair was simply created and left unattached, and it did not get the name we specified either. Apparently, I can't even read something I quote.
Add lxc.net.0.veth.mode = router explicitly. Keep lxc.net.0.name = lxc-test as is.
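
The net section of the container config would then look roughly like this:

lxc.net.0.type = veth
lxc.net.0.name = lxc-test
lxc.net.0.veth.mode = router
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.4.1/24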

Anyway, set up the firewall for lxc-test as I stated above, and it should start working. Also check the IP configuration of lxc-test with ip a, and check whether a route to 10.0.4.0/24 is created with ip route.

I made the firewall zone, but still can't ping 10.0.4.1.

Some output (with my external IP address removed):

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP qlen 1000
    link/ether 60:a4:b7:59:24:af brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.xxx.xxx/22 brd xxx.xxx.xxx.255 scope global eth1
       valid_lft forever preferred_lft forever
4: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether dc:a6:32:02:c1:c2 brd ff:ff:ff:ff:ff:ff
131: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
    inet 10.9.8.1/24 brd 10.9.8.255 scope global eth0.1
       valid_lft forever preferred_lft forever
132: eth0.3@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
    inet 10.9.7.1/24 brd 10.9.7.255 scope global eth0.3
       valid_lft forever preferred_lft forever
133: eth0.5@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
    inet 10.9.5.1/24 brd 10.9.5.255 scope global eth0.5
       valid_lft forever preferred_lft forever
134: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN qlen 1000
    link/[65534] 
    inet 10.200.200.200/24 brd 10.200.200.255 scope global wg0
       valid_lft forever preferred_lft forever
137: ifb4eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN qlen 32
    link/ether 72:27:93:1b:43:a7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7027:93ff:fe1b:43a7/64 scope link 
       valid_lft forever preferred_lft forever
149: vethieaEl1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether fe:74:75:3b:f4:f9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc74:75ff:fe3b:f4f9/64 scope link 
       valid_lft forever preferred_lft forever

And

# ip route
default via xxx.xxx.xxx.1 dev eth1  src xxx.xxx.xxx.34 
10.0.4.1 dev vethieaEl1 scope link 
10.9.5.0/24 dev eth0.5 scope link  src 10.9.5.1 
10.9.7.0/24 dev eth0.3 scope link  src 10.9.7.1 
10.9.8.0/24 dev eth0.1 scope link  src 10.9.8.1 
10.200.200.0/24 dev wg0 scope link  src 10.200.200.200 
10.200.200.201 dev wg0 scope link 
10.200.200.202 dev wg0 scope link 
10.200.200.203 dev wg0 scope link 
xxx.xxx.xxx.0/22 dev eth1 scope link  src xxx.xxx.xxx.34 

Hmm, I still don't see 10.0.4.1/24 on the interface.

Replace lxc.net.0.name with lxc.net.0.veth.pair; this should create an interface with the name we specify.

After running lxc, try ip addr add 10.0.4.1/24 dev lxc-test. See what happens.

Yes, I too found that bit about lxc.net.0.veth.pair in the man page. So the name is now consistent with what I used in the firewall zone.

After starting the container I still could not ping it, but after I ran the command you suggested, I can ping it.

# ip addr add 10.0.4.1/24 dev lxc-test
# ping 10.0.4.1
PING 10.0.4.1 (10.0.4.1): 56 data bytes
64 bytes from 10.0.4.1: seq=0 ttl=64 time=0.133 ms
64 bytes from 10.0.4.1: seq=1 ttl=64 time=0.249 ms

And

# ip route
default via xxx.xxx.xxx.1 dev eth1  src xxx.xxx.xxx.34 
10.0.4.0/24 dev lxc-test scope link  src 10.0.4.1 
10.0.4.1 dev lxc-test scope link 
10.9.5.0/24 dev eth0.5 scope link  src 10.9.5.1 
10.9.7.0/24 dev eth0.3 scope link  src 10.9.7.1 
10.9.8.0/24 dev eth0.1 scope link  src 10.9.8.1 
10.200.200.0/24 dev wg0 scope link  src 10.200.200.200 
10.200.200.201 dev wg0 scope link 
10.200.200.202 dev wg0 scope link 
10.200.200.203 dev wg0 scope link 
xxx.xxx.xxx.0/22 dev eth1 scope link  src xxx.xxx.xxx.34 

And:

# ip a
...
137: ifb4eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN qlen 32
    link/ether 72:27:93:1b:43:a7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7027:93ff:fe1b:43a7/64 scope link 
       valid_lft forever preferred_lft forever
152: lxc-test@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether fe:d8:6e:bc:44:f3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.4.1/24 scope global lxc-test
       valid_lft forever preferred_lft forever
    inet6 fe80::fcd8:6eff:febc:44f3/64 scope link 
       valid_lft forever preferred_lft forever

Ok, sounds good. There must be an option to automatically specify the IP on the interface when lxc creates it. Moving on for now.

Can the container connect to the internet right now? Does everything work?

No, I cannot ssh into the container from the router. If I attach with lxc-attach I have no connectivity inside it either.

OK, the lxc.net.[i].ipv4.gateway option specifies the gateway to use inside the container; can you try putting 10.0.4.1 there?
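
In the container config that would just be one extra line, e.g.:

lxc.net.0.ipv4.gateway = 10.0.4.1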

P.S. I'm getting the feeling that just bridging to an interface we can control with LuCI is a better idea in terms of automation and ease of control. Anywho, this was quite entertaining.

Just create a bridge, e.g. lxcbr0.
Set up lxc to use lxcbr0 as the link, with IP 10.0.4.250/24 and gateway 10.0.4.1.
Create a network using the lxcbr0 interface, with IP 10.0.4.1/24.
Create a firewall zone for this network.

Voila?

The difference is that traffic will be routed to lxcbr0 instead of lxc-test, and we get control via LuCI, since the network with lxcbr0 appears on the Interfaces page and in /etc/config/network. Roughly like the sketch below.
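
The /etc/config/network side would gain something like this (a sketch; since lxcbr0 starts out with no ports, you may need bridge_empty so it comes up anyway):

config device
	option name 'lxcbr0'
	option type 'bridge'
	# assumption: keeps the bridge up with no ports; lxc attaches the veth later
	option bridge_empty '1'
	option ipv6 '0'

config interface 'lxc'
	option device 'lxcbr0'
	option proto 'static'
	option ipaddr '10.0.4.1'
	option netmask '255.255.255.0'

The firewall zone can then list the 'lxc' network instead of a raw device, so it shows up normally in LuCI.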

I changed the IP to the higher address and added the gateway:

lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = aarch64

# Container specific configuration
lxc.rootfs.path = dir:/srv/lxc/pihole/rootfs
lxc.uts.name = pihole

lxc.net.0.type = veth
lxc.net.0.veth.pair = lxc-test
lxc.net.0.veth.mode = router
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.4.250/24
lxc.net.0.ipv4.gateway = 10.0.4.1

It is now partially working.

  • There is no DNS resolution in the container
  • I cannot get DNS resolution working outside of the container from the lan zone
  • I can ssh to the container from the lan zone
  • I can connect to pihole's web interface from the lan zone

Here is the firewall forwarding page:

Interestingly, the 'lxc' firewall zone I created seems empty.
If I edit the 'lan' zone for example:

That's just a quirk of the LuCI visuals: it only shows networks assigned to the zone. Since we assigned an interface rather than a network, it shows as empty.
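
In other words, your zone presumably covers the device and/or subnet directly, something like this (a guess at what you have, not a required change):

config zone
	option name 'lxc'
	option input 'ACCEPT'
	option output 'ACCEPT'
	option forward 'REJECT'
	list device 'lxc-test'
	list subnet '10.0.4.0/24'

With no 'list network' entry, LuCI has nothing to display in the zone's network column.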

What do you think about this? You ought to remove lxc.net.0.veth.mode = router, since we're going back to bridging, and add lxc.net.0.link = lxcbr0.
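
The net section would then end up looking something like this (keeping the veth.pair name from before):

lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.veth.pair = lxc-test
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.4.250/24
lxc.net.0.ipv4.gateway = 10.0.4.1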