I run Pi-hole in the container. It needs to serve all of my zones except iot (no WAN access is allowed on that zone), so that's lan, guest, and lxc.
Perhaps I am overcomplicating the setup by adding VLAN 4 and a dedicated interface for the lxc. I'd be happy to have it live alongside my guest zone on the guest interface.
What if I simplify the interfaces on the Pi: remove lxc altogether and somehow get the container onto the guest interface. Thoughts?
My next step is bringing up the bridge and the lxc interface, which I am having trouble doing. But first, what do you think of my simplification idea?
Let's see what happens when you don't specify a gateway. Also, change the address to 10.0.4.1/24; this way, your devices can reach Pi-hole at 10.0.4.1.
Backing up, are all these changes for the subinterfaces-of-eth0 setup? If so, here is the state I am currently at (not using VLAN filtering and bridges).
/etc/config/network
config interface 'loopback'
	option device 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fd1a:184b:b879::/48'
	option packet_steering '1'

config device
	option name 'eth0'
	option ipv6 '0'

config device
	option name 'eth1'
	option ipv6 '0'

config device
	option name 'eth0.1'
	option type '8021q'
	option ifname 'eth0'
	option vid '1'
	option ipv6 '0'

config device
	option name 'eth0.3'
	option type '8021q'
	option ifname 'eth0'
	option vid '3'
	option ipv6 '0'

config device
	option name 'eth0.4'
	option type '8021q'
	option ifname 'eth0'
	option vid '4'
	option ipv6 '0'

config device
	option name 'eth0.5'
	option type '8021q'
	option ifname 'eth0'
	option vid '5'
	option ipv6 '0'

config device
	option name 'br-lxc.4'
	option type 'bridge'
	option ipv6 '0'
	list ports 'eth0.4'

config device
	option name 'wg0'
	option ipv6 '0'

config interface 'wan'
	option device 'eth1'
	option proto 'dhcp'
	option peerdns '0'
	option delegate '0'
	list dns '1.1.1.1'
	list dns '1.0.0.1'

config interface 'lan'
	option proto 'static'
	option ipaddr '10.9.8.1'
	option netmask '255.255.255.0'
	option device 'eth0.1'

config interface 'guest'
	option proto 'static'
	option ipaddr '10.9.7.1'
	option netmask '255.255.255.0'
	option device 'eth0.3'

config interface 'lxc'
	option proto 'static'
	option ipaddr '10.0.4.1'
	option netmask '255.255.255.0'
	option device 'br-lxc.4'

config interface 'iot'
	option proto 'static'
	option ipaddr '10.9.5.1'
	option netmask '255.255.255.0'
	option device 'eth0.5'

config interface 'wg0'
	option proto 'wireguard'
Sorry, we covered a few setups. I am asking whether I should revert to the one not using VLAN filtering and br0. I think so, but I want to be clear. You asked me to remove several sections from the subinterface-based configuration. Does my question make sense?
If so, I removed those two sections and now have this:
In-progress /etc/config/network:
config interface 'loopback'
	option device 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fd1a:184b:b879::/48'
	option packet_steering '1'

config device
	option name 'eth0'
	option ipv6 '0'

config device
	option name 'eth1'
	option ipv6 '0'

config device
	option name 'eth0.1'
	option type '8021q'
	option ifname 'eth0'
	option vid '1'
	option ipv6 '0'

config device
	option name 'eth0.3'
	option type '8021q'
	option ifname 'eth0'
	option vid '3'
	option ipv6 '0'

config device
	option name 'eth0.4'
	option type '8021q'
	option ifname 'eth0'
	option vid '4'
	option ipv6 '0'

config device
	option name 'eth0.5'
	option type '8021q'
	option ifname 'eth0'
	option vid '5'
	option ipv6 '0'

config device
	option name 'wg0'
	option ipv6 '0'

config interface 'wan'
	option device 'eth1'
	option proto 'dhcp'
	option peerdns '0'
	option delegate '0'
	list dns '1.1.1.1'
	list dns '1.0.0.1'

config interface 'lan'
	option proto 'static'
	option ipaddr '10.9.8.1'
	option netmask '255.255.255.0'
	option device 'eth0.1'

config interface 'guest'
	option proto 'static'
	option ipaddr '10.9.7.1'
	option netmask '255.255.255.0'
	option device 'eth0.3'

config interface 'iot'
	option proto 'static'
	option ipaddr '10.9.5.1'
	option netmask '255.255.255.0'
	option device 'eth0.5'

config interface 'wg0'
	option proto 'wireguard'
...
OK. I'd like to work off the subinterfaces configuration. I removed the 3 sections you called out. What do you suggest I use for the lxc configuration itself? Your suggestion was to omit the gateway there and change the assigned IP address. Beyond that, do I still use the veth setup, and link it to what?
Put lxc-test there (as the interface name at lxc.net.0.link =) and start the container. See if it runs fine, then check whether the lxc-test interface was created by running the ip link command.
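For reference, the network section of the container config would look something like this (a sketch only; the config path, the flags line, and the container-side name are my assumptions, and the address is the one suggested earlier in this thread):

```
# /var/lib/lxc/pihole/config — network section (sketch; path is an assumption)
lxc.net.0.type = veth
lxc.net.0.link = lxc-test             # bridge to attach the host side to
lxc.net.0.name = lxc-test             # interface name inside the container (assumption)
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.4.1/24  # per the address suggested earlier
```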
# lxc-start -n pihole -F
lxc-start: pihole: network.c: netdev_configure_server_veth: 708 No such file or directory - Failed to attach "vethU6Vom5" to bridge "lxc-test", bridge interface doesn't exist
lxc-start: pihole: network.c: lxc_create_network_priv: 3419 No such file or directory - Failed to create network device
lxc-start: pihole: start.c: lxc_spawn: 1826 Failed to create the network
lxc-start: pihole: start.c: __lxc_start: 2053 Failed to spawn container "pihole"
lxc-start: pihole: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: pihole: tools/lxc_start.c: main: 313 Additional information can be obtained by setting the --logfile and --logpriority options
I'm thinking the network type needs to be something else?
veth: a virtual ethernet pair device is created with one side assigned to the container and the other side on the host. lxc.net.[i].veth.mode specifies the mode the veth parent will use on the host. The accepted modes are bridge and router. The mode defaults to bridge if not specified. In bridge mode the host side is attached to a bridge specified by the lxc.net.[i].link option. If the bridge link is not specified, then the veth pair device will be created but not attached to any bridge.
Getting rid of lxc.net.0.link made LXC switch to router mode, but it did not give the interface the name we specified, for some reason. Apparently, I can't even read something I quote.
Add lxc.net.0.veth.mode = router. Keep lxc.net.0.name = lxc-test as is.
Anyway, set up the firewall for lxc-test as I stated above, and it should start working. Also check the IP configuration of lxc-test with ip a, and check whether the route to 10.0.4.0/24 is created with ip route.
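For example, a matching zone in /etc/config/firewall could look like this (a sketch; the zone name, the policies, and the forwarding target are assumptions):

```
config zone
	option name 'lxc'
	list device 'lxc-test'
	option input 'ACCEPT'
	option output 'ACCEPT'
	option forward 'REJECT'

config forwarding
	option src 'lxc'
	option dest 'wan'
```

Since lxc-test is a raw device rather than a configured OpenWrt interface, it is referenced with list device instead of list network.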
I made the firewall zone, but I still can't ping 10.0.4.1.
Some output (with my external IP address removed):
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP qlen 1000
link/ether 60:a4:b7:59:24:af brd ff:ff:ff:ff:ff:ff
inet xxx.xxx.xxx.xxx/22 brd xxx.xxx.xxx.255 scope global eth1
valid_lft forever preferred_lft forever
4: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether dc:a6:32:02:c1:c2 brd ff:ff:ff:ff:ff:ff
131: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
inet 10.9.8.1/24 brd 10.9.8.255 scope global eth0.1
valid_lft forever preferred_lft forever
132: eth0.3@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
inet 10.9.7.1/24 brd 10.9.7.255 scope global eth0.3
valid_lft forever preferred_lft forever
133: eth0.5@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether dc:a6:32:02:c1:c1 brd ff:ff:ff:ff:ff:ff
inet 10.9.5.1/24 brd 10.9.5.255 scope global eth0.5
valid_lft forever preferred_lft forever
134: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN qlen 1000
link/[65534]
inet 10.200.200.200/24 brd 10.200.200.255 scope global wg0
valid_lft forever preferred_lft forever
137: ifb4eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN qlen 32
link/ether 72:27:93:1b:43:a7 brd ff:ff:ff:ff:ff:ff
inet6 fe80::7027:93ff:fe1b:43a7/64 scope link
valid_lft forever preferred_lft forever
149: vethieaEl1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether fe:74:75:3b:f4:f9 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc74:75ff:fe3b:f4f9/64 scope link
valid_lft forever preferred_lft forever
And
# ip route
default via xxx.xxx.xxx.1 dev eth1 src xxx.xxx.xxx.34
10.0.4.1 dev vethieaEl1 scope link
10.9.5.0/24 dev eth0.5 scope link src 10.9.5.1
10.9.7.0/24 dev eth0.3 scope link src 10.9.7.1
10.9.8.0/24 dev eth0.1 scope link src 10.9.8.1
10.200.200.0/24 dev wg0 scope link src 10.200.200.200
10.200.200.201 dev wg0 scope link
10.200.200.202 dev wg0 scope link
10.200.200.203 dev wg0 scope link
xxx.xxx.xxx.0/22 dev eth1 scope link src xxx.xxx.xxx.34
Yes, I too found that bit about lxc.net.0.veth.pair in the man page. So the name is consistent now and matches what I used in the firewall zone.
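Putting the pieces together, the container's network section at this point would look roughly like this (reconstructed from this thread; the flags line is an assumption):

```
lxc.net.0.type = veth
lxc.net.0.veth.mode = router
lxc.net.0.veth.pair = lxc-test        # host-side interface name, matches the firewall zone
lxc.net.0.name = lxc-test             # interface name inside the container
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.4.1/24  # per the address suggested earlier
```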
After starting the container I still couldn't ping it, but after I ran the command you suggested, I can:
# ip addr add 10.0.4.1/24 dev lxc-test
# ping 10.0.4.1
PING 10.0.4.1 (10.0.4.1): 56 data bytes
64 bytes from 10.0.4.1: seq=0 ttl=64 time=0.133 ms
64 bytes from 10.0.4.1: seq=1 ttl=64 time=0.249 ms
And
# ip route
default via xxx.xxx.xxx.1 dev eth1 src xxx.xxx.xxx.34
10.0.4.0/24 dev lxc-test scope link src 10.0.4.1
10.0.4.1 dev lxc-test scope link
10.9.5.0/24 dev eth0.5 scope link src 10.9.5.1
10.9.7.0/24 dev eth0.3 scope link src 10.9.7.1
10.9.8.0/24 dev eth0.1 scope link src 10.9.8.1
10.200.200.0/24 dev wg0 scope link src 10.200.200.200
10.200.200.201 dev wg0 scope link
10.200.200.202 dev wg0 scope link
10.200.200.203 dev wg0 scope link
xxx.xxx.xxx.0/22 dev eth1 scope link src xxx.xxx.xxx.34
And:
# ip a
...
137: ifb4eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN qlen 32
link/ether 72:27:93:1b:43:a7 brd ff:ff:ff:ff:ff:ff
inet6 fe80::7027:93ff:fe1b:43a7/64 scope link
valid_lft forever preferred_lft forever
152: lxc-test@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether fe:d8:6e:bc:44:f3 brd ff:ff:ff:ff:ff:ff
inet 10.0.4.1/24 scope global lxc-test
valid_lft forever preferred_lft forever
inet6 fe80::fcd8:6eff:febc:44f3/64 scope link
valid_lft forever preferred_lft forever
OK, the lxc.net.[i].ipv4.gateway option specifies the gateway to use inside the container. Can you try putting 10.0.4.1 there?
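That is, one more line in the container config (sketch):

```
lxc.net.0.ipv4.gateway = 10.0.4.1
```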
P.S. I'm getting the feeling that just bridging to an interface we can control with LuCI is a better idea in terms of automation and ease of control. Anywho, this was quite entertaining.
Just create a bridge, e.g. lxcbr0:
1. Set up LXC to use lxcbr0 as the link, with IP 10.0.4.250/24 and gateway 10.0.4.1.
2. Create a network using the lxcbr0 interface, with IP 10.0.4.1/24.
3. Create a firewall zone for this network.
Voila?
The difference is that traffic will be routed to lxcbr0 instead of lxc-test, and we gain control via LuCI, since the network with lxcbr0 appears on the Interfaces page and in /etc/config/network.
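Sketched out, the two halves of the bridge approach would look roughly like this (the interface name, the container's address, and the bridge_empty option are assumptions; bridge_empty keeps a bridge with no physical ports up):

```
# /etc/config/network (sketch)
config device
	option name 'lxcbr0'
	option type 'bridge'
	option bridge_empty '1'

config interface 'lxc'
	option proto 'static'
	option device 'lxcbr0'
	option ipaddr '10.0.4.1'
	option netmask '255.255.255.0'

# container config (sketch)
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.4.250/24
lxc.net.0.ipv4.gateway = 10.0.4.1
```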