My Docker version is 20.10.22, with the default bridge docker.network.ip.address/16. This is a virtual bridge device that bridges the virtual LAN ports of all existing Docker containers. Its network segment is different from my home network home.network.ip.address/24, and its default gateway is WAN. This interface is assigned to the firewall zone Docker0 shown in the pictures above, and it has the static address docker.network.0.1. The config is similar to my LAN interface config, which works well. Yet the interface Docker0 reports the error "Network device is not present" regardless of how I configure the bridged ports.
Moreover, another solution I tried, without success, was setting up two NAT rules that statically rewrite docker.network.ip.address/16 addresses to my home network gateway and vice versa.
can you remove all firewall rules you added and try the following?
edit the docker zone; at the bottom, add the wan zone to "Allow forward to destination zones"
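in /etc/config/firewall that setting ends up as a forwarding section, roughly like this (zone names are a sketch based on your screenshots, adjust to yours):

```
# /etc/config/firewall (sketch; zone names assumed from your setup)
config forwarding
	option src 'docker'
	option dest 'wan'
```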
i cannot follow your network config, screenshots do not really help. could you please copy the content of /etc/config/network? pls remove sensitive details (wan isp account, for example) but keep the ip address details of the lan + docker interfaces.
and copy the content of /etc/config/dockerd too
After I checked /etc/config/dockerd, I found that the WAN interface was blocked by the Docker configuration, as shown in the figure. This is the default configuration, and there is no GUI option that exposes it.
After I deleted the arrow-pointed line, the containers have access to the internet.
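For reference, a sketch of what the relevant section of /etc/config/dockerd can look like; the line I deleted was the one blocking the wan interface (option names here are from my understanding of the dockerd package and may differ between versions):

```
config firewall 'firewall'
	option device 'docker0'
	option extra_iptables_args '--match conntrack ! --ctstate RELATED,ESTABLISHED'
	list blocked_interfaces 'wan'   # <- deleting this line restored internet access
```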
It has nothing to do with the firewall configuration you and I mentioned. I also tried skipping point 1 that you mentioned, and the container still has access.
with the config above, any traffic that is NOT related/established would be blocked on interface wan. meaning: traffic initiated from docker towards wan and the respective reply traffic is allowed, but no traffic initiated from wan towards docker, for example.
the docker network still needs to be allowed to actually access the internet - which can be controlled with the firewall zone setup, as suggested: allowing the docker to wan direction.
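as a sketch of what that blocking config translates to (not copied from a live box, so treat the exact rule as an assumption): dockerd inserts a DROP rule for the blocked interface into the DOCKER-USER chain, and the extra args narrow it to non-related/established traffic:

```
# sketch: what blocked_interfaces 'wan' plus the default extra_iptables_args
# roughly amount to in the DOCKER-USER chain
iptables -I DOCKER-USER -i wan -m conntrack ! --ctstate RELATED,ESTABLISHED -j DROP
```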
Thank you for your hint. I have edited the config and it works as well.
But I have two questions regarding the config
As far as I know, OpenWrt uses nftables instead of iptables, yet you are passing an extra iptables arg. I couldn't find this config option in the documentation. Could you tell me where to look it up, or briefly explain what extra_iptables_args means?
My firewall zone for WAN is set to reject incoming connections and only allow outgoing ones, which means, to my knowledge, that an incoming connection (or rather, the return traffic) will be accepted only if the outgoing connection was initiated first. Is it still not possible to filter out connections from outside the network when the connection was not initiated by my Docker container?
well, containers usually receive a dynamic ip address, so you have to fix that first (custom network, fixed subnet). then comment out the extra option and create a respective rule in the DOCKER-USER chain, something like -s 172.17.0.4 -o $wan -j ACCEPT, where $wan is your actual wan interface. but i have not tested it.
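putting the pieces together, something like this (untested sketch; the subnet, image name and wan interface are placeholders, adjust them to your setup):

```
# create a custom network with a fixed subnet so the container ip is stable
docker network create --subnet 172.30.0.0/24 mynet

# start the container with a fixed address on that network
docker run -d --network mynet --ip 172.30.0.4 my-image

# allow only that source towards wan in the DOCKER-USER chain
# (replace eth1 with your actual wan interface)
iptables -I DOCKER-USER -s 172.30.0.4 -o eth1 -j ACCEPT
```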
Thanks, I'll try it. But if I comment out the extra option, my container will not be able to connect to the internet, right? I'll try appending a custom rule to DOCKER-USER without commenting out the extra option, to see if this ACCEPT can be treated as an exception rule.
which rejects all traffic from $wan to docker0 unless it is related traffic. but it does not control the docker to $wan direction at all. so no, your container will still be able to connect to the internet - in theory, but you can test it easily.
when you say expose your docker to wan, do you mean allowing uncontrolled traffic initiated from wan to be accepted by your docker, or allowing reply traffic for a data flow initiated by your docker container?
for example, if you are using AdGuard Home or Pi-hole as a filtering DNS service, it obviously needs a connection to an upstream DNS server (i.e. access to the internet). but that's traffic initiated by your docker container, and you only want to allow the reply traffic from wan. which is already covered by the rule above (plus the zone setting).
but if you want to expose e.g. a containerized nginx web server, i.e. let any random internet client access your web server, that's a different story as i see it: you need to punch a hole in your firewall anyhow (see luci / firewall / port forward).
so, not sure about your use case, but a) you may want to re-think whether it really needs to expose anything, and b) if so, maybe it is better to use the standard owrt approach (port forwarding instead of playing with dockerd) and to harden docker security (privileged vs unprivileged containers etc).
though, i think it is not really recommended to expose docker containers directly on wan.
But... if in the end there is a random web application exposed via a proxy, then where is the difference? Sure, databases and such that don't need to be accessible should not be reachable. Just ensure that you only allow the ports you really need. (I just hate all this unnecessary port forwarding and NAT shizzle. We have reached a point where most "young" people do not even know that we have routing, and do not need NAT in many situations... /rant)
not sure what your point is that differs from mine, as i feel we agree on the "allow only the ports you need, if you need them" part.
"random web application exposed via a Proxy then where is the difference": application gateways exist for a reason, to expose web application securely: protocol analysis, DOS protection, TLS policy, WAF etc. but web (application) security is a totally different topic imho.
My intention is to deploy Snapdrop and Nextcloud in Docker and enable secure file sharing between the home network and WAN.
Device R (short for Remote) is the device I want to share files with from my home device H (short for Home).
My first attempt was to use a VPN to connect R to my home network, but it was not successful due to some restrictions.
Then I tried the following solution with my DDNS ready:
Deploy a stand-alone Snapdrop container with internal IP 172.17.0.4
Forward external port 50080 to 172.17.0.4:80, and forward external port 50443 to 172.17.0.4:443
Set up a custom NAT rule to translate the source addresses of connections reaching 172.17.0.4 via external ports 50080 and 50443 to 192.168.1.1 (the home network address)
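Step 2 would look roughly like this as redirect sections in /etc/config/firewall (a sketch; the zone name 'docker', the rule names and the container IP are assumptions from my setup):

```
config redirect
	option name 'snapdrop-http'
	option src 'wan'
	option src_dport '50080'
	option dest 'docker'
	option dest_ip '172.17.0.4'
	option dest_port '80'
	option target 'DNAT'

config redirect
	option name 'snapdrop-https'
	option src 'wan'
	option src_dport '50443'
	option dest 'docker'
	option dest_ip '172.17.0.4'
	option dest_port '443'
	option target 'DNAT'
```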
In this case, I'm expecting to expose this Snapdrop service to the public network and share the files with my own Cert.
I'm not quite sure whether this NAT will make Snapdrop think that device R is in the same home network as H, since I've been stuck setting up the extra arguments for iptables: I received the error message "bad argument ACCEPT". In the worst case I'll switch to Nextcloud, create an account with a 2 GB storage quota (on my NAS, separate from my OpenWrt router) and also inject my own cert. However, I still prefer Snapdrop, since it is real-time file transmission and the shared file doesn't land in my storage, while Nextcloud is a net-disk-based solution that I'm trying to avoid even if the service is deployed by myself.
Well, I know one can never be too careful when it comes to security. But using one Docker container for real-time file sharing and only exposing the needed ports to the internet is the most secure solution I could figure out with my poor knowledge XD. If there are other secure file sharing solutions over the internet, I'd also like to give them a shot.
ok. don't do this if your files have any value for you.
set up a wireguard vpn; there are plenty of guides on how to set it up on owrt. then you can access your nextcloud securely without exposing it to the internet.
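a minimal sketch of the server side in /etc/config/network (keys, addresses and port are placeholders; see the owrt wireguard guides for the full setup, including the firewall zone):

```
config interface 'wg0'
	option proto 'wireguard'
	option private_key 'SERVER_PRIVATE_KEY'
	option listen_port '51820'
	list addresses '10.0.0.1/24'

config wireguard_wg0
	option description 'remote device R'
	option public_key 'CLIENT_PUBLIC_KEY'
	list allowed_ips '10.0.0.2/32'
```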
for what purpose? how would it stop hackers from cracking your nextcloud?
keep in mind that once you expose NC to the internet it is really exposed: anybody can try to connect to it, not just you. CVEs related to NC link these are the possible attack vectors, so it is your call whether you open the opportunity for hackers to break into your system by exposing your server.
if you are referring to a (self-signed) server cert, that will not be enough to close these vulnerabilities. with a (self-signed) server cert any client can still connect (that's the whole purpose of a server cert, i.e. to assure clients that the server they are connecting to is the one they are supposed to connect to; it works for any client). if you mean a client cert to identify the client, the vulnerabilities are still there.
a traditional firewall is for network protection, not application protection. if a port is open and behind it sits a stupid application with a known default user/password, a weak password policy, sensitivity to brute-force login, no input validation etc. etc., then it is like an invitation for hackers to break in, steal your data, install malware and so on.