OpenWrt as a server operating system

Hi.

Let's say I want to host a website on a (virtual) server. If OpenWrt comes with all the packages I need, would it be safe to use OpenWrt? Another scenario would be to run a snapshot version (until the next release comes out) and use Docker to provide the necessary software. The latter might be the safer approach, but let's set aside the safety of the web servers themselves and talk about the operating system's safety for this purpose in general.

OpenWrt is widely used in home/office gateways and should provide a safe enough environment for those connections, so why wouldn't it be safe enough for a server? It should be... but maybe there's something that should be reconsidered (preferably on the safety side of this).

Here's my plan:

  • arch: x86_64
  • br-lan is tied to a dummy network interface
  • wan is tied to the Ethernet interface that is the server's actual connection
  • LuCI is set to run on another port (for example :8080)
  • only IPv4 is used (at the moment, at least)
  • access to LuCI/SSH is available only through VPN, preferably ZeroTier (see the sketch after this list)
  • most probably Caddy as a proxy and Nginx as the web server
  • Exim4 for mail forwarding
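
For illustration, roughly how I plan to pin LuCI to the VPN side (the uhttpd section 'main' is the stock one; the 10.144.0.1 address is a placeholder for whatever ZeroTier assigns):

    # serve LuCI only on the ZeroTier address, on port 8080
    uci delete uhttpd.main.listen_http
    uci add_list uhttpd.main.listen_http='10.144.0.1:8080'
    uci delete uhttpd.main.listen_https      # no TLS listener needed inside the VPN
    uci commit uhttpd
    /etc/init.d/uhttpd restart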

Partition plan:
1: boot
2: recovery (the currently installed OS version, but with the bare minimum to operate)
3: root filesystem (same as recovery plus server/extra software)
4: swap
5: spare (empty partition used for upgrades: I can install the new version there and set it up before overwriting recovery with it, and then re-create the root filesystem; see the sketch below)
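
Sketched with sgdisk it would look roughly like this (the device and the sizes are placeholders, matching what I currently have in mind):

    sgdisk -Z /dev/sda                                 # start from a clean GPT
    sgdisk -n 1:0:+64M -c 1:boot     /dev/sda
    sgdisk -n 2:0:+2G  -c 2:recovery /dev/sda
    sgdisk -n 3:0:+16G -c 3:rootfs   /dev/sda
    sgdisk -n 4:0:+1G  -t 4:8200 -c 4:swap /dev/sda    # 8200 = Linux swap
    sgdisk -n 5:0:+2G  -c 5:spare    /dev/sda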

Pros:

  • very lightweight and fast OS
  • small footprint
  • can contain only the minimum software necessary for its purpose

Cons:

  • limited software selection compared to bigger OSes
  • ???

So, I am mostly interested in the safety of the server when using OpenWrt as its operating system, but of course other issues that come to mind are also welcome. And good ideas too :blush:

My current server runs CentOS 8 on a budget KVM host. There's no "easy way" to install OpenWrt there, but it's not that hard either: there's a limited set of ISOs I can boot from, and I could use any of those to write the combined image to disk, boot it, do the initial setup through a VNC connection and go from there. So that's how I would set it up. The next question is whether it's worth my while, and further, whether it will be destroyed due to some security issue.

Why not keep CentOS? Well, like I said, I chose the budget plan and CentOS is very heavy, especially since I use Cockpit for management, which isn't very lightweight. Actually, I already took a second KVM server for one month and have the system up and running (except the web and mail server parts), so it's totally doable.

LuCI would be great for the server's management, and since access to LuCI/SSH/etc. is available only through the LAN (dummy) and VPN, I think the main security issues are with the firewall, kernel and so on. But iptables is, AFAIK, the most used and one of the most reliable firewalls, and most servers probably run Linux, so my first thought is that this wouldn't be an issue; I'm just looking for a second opinion :slight_smile:

it all comes down to the general OS setup and daemon versions/setup...

the short answer is: if you know what you are doing... then sure... simple web services are doable in a light and reasonably secure manner...

the long answer... at least for the majority of half-modern web daemons with any semblance of complexity... is that a full distro offers much more in terms of facility, scalability, guidance and support in that space.

1 Like

You're not going to lower maintenance; rather, you'll increase it, because of what OpenWrt targets.
See the following threads about the topic

1 Like

I know what I am doing. The software limitation is not an issue either, since I can compile my own packages; I used to compile the whole OS, but I've stopped doing that, since THAT takes a lot of effort to keep up with the update train...

My current server setup on CentOS runs the server software in containers through Podman, so I am likely to take this route with OpenWrt as well (though starting containers with Podman had an issue and I reverted to Docker... but there's also a nice LuCI app for maintaining Docker containers), so updating the server software would not be a big issue, since it's not tied that much to OpenWrt itself.
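
For reference, the LuCI app I mean (luci-app-dockerman in the packages feed, if I have the name right) installs with:

    opkg install luci-app-dockerman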

So, it seems nobody is saying that OpenWrt's core would be any more vulnerable than other distributions with regard to firewalling? That is my main concern here.

Maintenance is not an issue. I explained my hard disk configuration; I use OpenWrt on all my home routers, and a few of them are PC Engines APUs, so I use a similar hardware configuration there as well to support updating. Instead of just flashing a new OS (in this case, using dd directly), I rather run multiple versions and keep the old one until I am finished setting up the new one. And I trust my own skills more than running yum update just to see which parts broke. Maintaining OpenWrt this way is more natural for me than updating something automatically and then googling around to see whether someone else had the same issue and whether there's a hacky way around it.

A full distro's capability is not what I am after, since I need my system running, not updating itself automatically; there might be a few months between my checks that the server is running fine, so automatic updates could be an issue in that case. I would rather run with outdated but proven components that work, and then update to fit my needs when I have a chance. And since my needs here are quite modest, there's not much updating, and the system does not need to grow much. I want to set up a lightweight system that just sits there doing its thing. As for a full distro, you forgot to mention that it also brings complexity, which is much smaller with minimal distros like OpenWrt. Alpine would be my next choice, but then I would be limited to doing everything through the CLI, which is mostly fine, except when I just want to see whether there's enough free RAM and whether the CPU usage has been stuck high until the next reboot, like it does with CentOS and my setup.

The website on the server is not critical. I have a few iOS apps, and Apple requires me to host a support site, so it's there; not many hits even on a weekly basis. Exim4 is there just to redirect mail for my domain to my Gmail account ("cool email address"), though I usually share my Gmail address with everyone instead... It's just something that can be done, so why not do it too, since it won't affect the server's performance much.

This is starting to seem like a decent choice to consider, for my mileage. Not for everyone, but for me it might be just what I am looking for.

in general i'm on your side... and you get maximum credit for posing your question to others in a clear, concise and well documented scenario ( perhaps a teensy bit too much, which is fine but reduces responses by putting off casual / scanning readers )

that said...

  1. if you truly did 'know what you were doing' you would not run luci, period
  2. you would focus more on the hardware requirements
  3. you would take a 'service centric' approach... focussing first on truly core services... and assess suitability (codebase, hw)... implement... then revise... ( you have actually done this quite well... just need to zoom in on the key elements moving forward )

not in any way directed as criticism... moreso... slow down and take a pragmatic and open approach to each key factor... clearly...

from where i sit... that would be;

  • mail server requirement / choice / patchlevel / featureset / exposure
    and
  • need for gui access

for the other stuff just go ahead and try... ( one at a time... )

this is a little misguided... and taken verbatim it will also get you into trouble...

  • run stable = good
  • outdated = warning bells
  • updates = good
  • auto-updates = configurable
2 Likes

With the recent additions of SELinux to master, OpenWrt is moving in a direction that could make this a viable use case.

1 Like

that said...

  1. if you truly did 'know what you were doing' you would not run luci, period
  2. you would focus more on the hardware requirements
  3. you would take a 'service centric' approach... focussing first on truly core services... and assess suitability (codebase, hw)... implement... then revise... ( you have actually done this quite well... just need to zoom in on the key elements moving forward )

Using a CLI is just damn clumsy on a mobile phone. Actually, I have long looked for a LuCI theme that would fit the iPhone's display even better. I also like to use mosh, and there's even a (commercial) app for it, but still, on the go it's far easier (and safer) to check the status of a device/server with something that doesn't come in magnifying-glass-sized letters nor requires me to type. It's out of scope, but I am a truck driver, and even though you shouldn't use a phone while driving, I most often connect to my VPN and check the server's status from Cockpit even though it's forbidden. If I add the status of my containers to the overview (requires an additional Lua script), I think I've got everything I need.

The settings I have made through LuCI amount to setting the timezone, as EET-2EEST,M3.5.0/3,M10.5.0/4 is a bit difficult to remember. Other than that, there's the package manager; I don't recall installing anything through it, but it's a great way to see whether you have installed something that can/should be removed, since opkg list throws out a long list that I prefer to view in my browser instead.
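
For the record, that one setting is also a one-liner from the command line; LuCI just writes this UCI value:

    uci set system.@system[0].timezone='EET-2EEST,M3.5.0/3,M10.5.0/4'
    uci commit system
    /etc/init.d/system reload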

I do create my containers through the CLI: I make the directories used as the containers' mount points under one directory, and I usually also put a create.sh there that contains all the parameters used to create the container. Yet I want a simple interface that I can use to check that my containers are still alive and to view their logs, since doing that through the console once again sucks.

Also, since I am a bit paranoid, I want a simple user interface where I can verify that my firewall rules are up to date, even though they always are :slight_smile: but my zones, forwards, rules and redirects are better managed through the configuration file; modifying /etc/firewall.user is not necessary in my setup. I do need to add a non-root user for the HTTP services, as I want to separate ownership inside the containers from the rest of the system; but this user isn't allowed to log in to OpenWrt, so the only login user is the provided one: root.
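
Since a stock OpenWrt image has no useradd, that non-root user gets appended by hand, roughly like this (name and uid are placeholders):

    # locked password ('!') and nologin shell: owns files, can never log in
    echo 'websvc:x:1000:1000:http services:/var:/bin/false' >> /etc/passwd
    echo 'websvc:!:0:0:99999:7:::' >> /etc/shadow
    echo 'websvc:x:1000:' >> /etc/group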

Hardware requirements are what they are; the server is hosted on the other side of the globe, and since I am using a budget solution, there is not much leverage or possibility to request anything special. So focusing on hardware requirements is moot: it is what it is. But a Xeon with 1 GB runs at 50% memory usage on CentOS with my setup, so it's likely I can go below that with a lighter-weight distribution, since alternatives like ubus need a lot less than dbus.

The hardware provided is designed for running containers.

If one isn't ready to manage their server manually, they might be better off with a plain hosting solution that provides a set of required tools for general usage.

My current containerized setup as a whole is the following:

  • caddy (proxy)
  • nginx
  • php-fpm
  • mariadb
  • exim4
  • certbot

And then there is an application I wrote in C (I could have done it in Python, but once again I prefer a minimal footprint, so a C binary works better) that runs on the main system and creates a socket that is made available to Certbot. Once the SSL certificates are updated, Certbot executes a script that restarts the web services (currently Cockpit is also restarted, since without a proper certificate I cannot use it from iOS: the login process doesn't work with a self-signed certificate, a known issue) by sending a command to that socket. This application runs as a forked daemon and is very simple: it only allows restarting a service, and it only agrees to restart it if the hash of the SSL certificates has changed since the last restart.
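
The daemon itself is my own C code, but the Certbot side of the handshake is roughly this (the socket path, command word and script name are placeholders; busybox nc can't speak unix sockets, so socat does the talking):

    #!/bin/sh
    # deploy hook: poke the restarter daemon after a successful renewal;
    # the daemon itself verifies the certificate hash changed before acting
    echo 'restart web' | socat - UNIX-CONNECT:/run/restarter.sock

hooked up with:

    certbot renew --deploy-hook /usr/local/bin/notify-restart.sh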

All my containers are customised for my own needs, by myself, and they use Alpine as it also has a small footprint.

The mail server must be as lightweight as possible but needs to support forwarding; Exim4 does just that. I don't know whether by "gui access" you mean a GUI or a web interface, since a real GUI is out of the question.

run stable: CentOS does not run stable. I have a cron job that reboots it once a week, since after two weeks the CPU load climbs to 89-95% and never comes down unless I reboot.
auto-updates: broke my system last time, since I use custom components that are not always compatible with the rest of the system. I also have a mixed package base from Fedora. It works, but.....
updates: manual updates pose the same threat on a customised system. OpenWrt (snapshot) already comes with everything I need, so the only thing missing is the previously mentioned application that restarts the web services when needed.

You are building a sand castle.
This can give you a lot of fun and hacking experience.
But it will cost tons of time and effort wasted on maintenance.

If you have enough free time today, tomorrow you may not.
And sometimes issues just happen, without asking whether you are free or not.

1 Like

What on earth are you all talking about?
I've been running this setup for a year now, and all the maintenance done is that updates are no longer installed automatically, the weekly reboots are automated, and once there was an update to the base Alpine image, which was a no-brainer: I just re-created my containers, as they were already set to use the latest tag.

All configuration for each part is provided within a mount point, there's support for dynamic startup scripts, and all logs are exported to files and rotated at my chosen interval and count.

Once you have created a solid setup, there is no need to adjust it all the time. My setup is fully automated: PHP uses a socket that is exported to Nginx, as is MariaDB's socket. The SSL certificates are exposed to CentOS, Caddy, Nginx and Certbot. I have my own custom script for Certbot that I put great effort into, even though it is a fork of an existing publicly available script. Every container runs a single application. The socket of my web-service restarter is exposed to Certbot as well, and it includes my client application that sends the command required for a restart.

I am not quite sure why you think this is something difficult to set up in an OpenWrt environment; in my opinion it's far more complex to set up with CentOS, since it's not as streamlined as OpenWrt, and for the base installation I didn't need much that didn't already come with OpenWrt. If I recall correctly, everything installed in recovery on top of the stock image is: nano, luci, zerotier, the dummy kmod, some filesystem modules with their fsck's and mkfs's, gdisk, and some crypto modules to support hardware crypto. That's it. Rootfs is a copy of the recovery partition set to boot as default; to rootfs I installed the Docker components, and as a test it runs Exim4 at the moment. The overlay filesystem is not used on my install, as my upgrade plans don't include anything that would take advantage of it: two systems instead, and a third when upgrading.

But enlighten me and let me know: what maintenance are you talking about?

Sh*t happens, sometimes. More often if you are unprepared and haven't put much effort into what you do. It took a few hours to set this up, but server deployment with the complete existing setup takes 15 minutes.

This thread has gone widely off-topic; maybe it's time to restate the original question and its variations:

  • Is there something that should be taken into consideration if one decides to use OpenWrt as a server operating system, given that the default iptables setup suits a gateway better than a server?
  • Is there a reason why iptables is a bad idea from the start, or iptables as used in OpenWrt, where the kernel currently isn't SELinux-patched?
  • If so, is there an alternative firewalling solution? Which would you prefer, and why? What are its advantages over iptables?
  • And if there is none, why are all the available solutions a bad idea for this usage? (And here we are talking about stock packages, not custom builds.)

When the situation is the following (a sketch of such a ruleset follows the list):

  • only IPv4
  • server applications run in containers
  • the machine is x86_64
  • the LAN bridge is actually a dummy device
  • administration is available only through VPN
  • all open ports are forwarded to containers
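
To make the question concrete, this is roughly the /etc/config/firewall I have in mind (zone and network names are placeholders; Docker also injects its own DNAT rules for published container ports):

    config zone
            option name 'wan'
            option network 'wan'
            option input 'DROP'        # nothing reaches the host from the internet...
            option output 'ACCEPT'
            option forward 'DROP'

    config rule                        # ...except the published web ports
            option name 'Allow-Web'
            option src 'wan'
            option proto 'tcp'
            option dest_port '80 443'
            option target 'ACCEPT'

    config zone                        # ssh/LuCI reachable only from here
            option name 'vpn'
            option network 'zerotier'
            option input 'ACCEPT'
            option output 'ACCEPT'
            option forward 'DROP'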

What he's trying to tell you is that package management, packaging and the upgrade path(s) are far from ideal for your use case.

What he's trying to tell you is that package management, packaging and the upgrade path(s) are far from ideal for your use case.

OK. Well, then there's some misunderstanding here. Like I said, I don't upgrade my current setup either, since there's a high chance of breaking it when packages come from the repositories of different distributions that are only partially compatible; trial and error, basically.
And I don't plan to upgrade too often with OpenWrt either. I run several routers at home, all with OpenWrt (and that's why I have terrible wifi...); my main gateway has been running the same system for 4 years. I plan to update when the next release is out.

Although, I do plan to update my server from time to time; give or take six months could be ideal. I am far too busy with my projects to tinker with one project all the time. I am pretty sure I showed my partition mapping in an earlier post, but I can't find it now for some reason, so here it goes again:
boot, recovery, rootfs, swap, work

That's why there is this last partition, work: it isn't used for anything unless I am upgrading. The idea is that I store the updated kernel in boot, update grub.conf (vmlinuz-RELEASE) and un-comment the entry already waiting there.
After this I extract the new rootfs to work and boot from there, build my system there, and finally overwrite recovery and rootfs from it; then I just copy my container mount points to the new installation, re-create the containers, and there we go.
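
In shell terms the flow is roughly this (device names and file names are placeholders):

    # unpack the new release onto the spare/work partition
    mkfs.ext4 -L work /dev/sda5
    mount /dev/sda5 /mnt
    tar -C /mnt -xzf openwrt-RELEASE-x86-64-generic-rootfs.tar.gz
    cp openwrt-RELEASE-x86-64-vmlinuz /boot/vmlinuz-RELEASE
    # un-comment the waiting grub.conf entry (root=/dev/sda5), reboot into work,
    # build the system there, then overwrite recovery and rootfs from the result
    # and copy the container mount points over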

I never upgrade a package unless it's independent of other packages and libraries, or unless its dependencies match the current versions of the libraries. And I usually won't go that way even then, once the system is set up and tuned; there's an old saying: never touch a working system. So like I said, maintenance is not the issue here. Maintenance is obsolete, since the system is rebuilt when an upgrade is necessary and available.

Rootfs's size is 16G. The mounted data for the containers takes approximately 300M (it contains a database too). Work is 2G. I usually also download the available packages and set up local repositories, so in an emergency packages can be installed locally too (through VNC). I plan to do that this time as well; I actually had a script somewhere that automates the process one feed at a time. This is because I once installed a snapshot and, a year later... the feeds were incompatible. Lesson learned.
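
The local feed is just a directory with the downloaded .ipk files and their Packages.gz index; opkg gets pointed at it like this (the path is a placeholder):

    echo 'src/gz local file:///srv/opkg' >> /etc/opkg/customfeeds.conf
    opkg update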

But the main point probably is that I got a blessing for my firewall concerns...

And although my account is new, I've been a member of this community and running OpenWrt for about 15 years; I just haven't logged in here for a while, and noticed that my account was gone.

A lot of text for a simple headline question.

I'll answer the headline question.

If I look at your question from a cyber security and information security standpoint, the answer is that you should not have the firewall, router, switch and DHCP server of OpenWrt on the same hardware as the server, which I guess you are using for information storage?

It is a lot safer and easier to have a Synology or a Raspberry Pi (in practical terms a full-blown Debian system) with a hard drive as a server inside the network.

Your network is advanced enough that you can look at the problem as a small-business solution instead of a home solution. The router at the cybersecurity front line needs to be updated more often than every 4 years. The rest, inside the network, is defense in depth.

But security maintenance is a lot easier if these two functions are separated.

2 Likes

The server's DHCP and router functions are disabled. Its NIC doesn't have switch capability, and it has only one physical interface. The dummy interface is there to provide an anchor for the VPN.

From that list, only the firewall remains.

And this is not a router. This is an Intel Xeon server. I don't have physical access to it; it is located on the other side of the world, and I pay money for it. It outranks a Synology or an RPi in computing power any time, like most of my hardware, whether in a remote location or not.

Skip the quoted text for a shorter post. The question is at the end.

My home is a completely different scenario: I run 3 servers there, 2 with Atoms for low power consumption and one Xeon SoC on a Supermicro board with 64 GB of ECC RAM, and I still think I should have gotten one with 8 cores instead of 4, and that I should upgrade my main server's RAM to 128 GB. I only use WD Reds as HDDs and Crucial disks as SSDs, and RAM only from Crucial. I host a distributed compile farm and my own git server where I commit changes to my code. So yes, it is pretty advanced. I am not worried about my home's security at all. I have upgraded my gateway but haven't yet had time to install and set it up; when I do, I will also install fresh.

So now, the description:

  • server with OpenWrt; maintenance periods: long.
  • I won't be logged in there often; I do need a simple web-based interface over a secure connection, reachable from most mobile platforms, where I can see the status of my server.
  • I won't visit the website for long stretches; there's a contact form that posts directly to my email. I created the site, so I know exactly what's there, and therefore it's unlikely I'll visit it often. No reason to.
  • It should just sit and do its thing. Indefinitely, hopefully.

And the question:
So, if ports to administration features like SSH are blocked, how big is the risk that my server gets unwelcome visitors?

how big is the risk that my server gets unwelcome visitors?

100%.

But just before this, you yourself said you already have total control of your cyber security and no concerns in that matter.
And then you ask how bad your cyber security is?

Now the server is on the other side of the globe, and anyway you have many servers?
If you have the server on the other side of the globe, you don't control it anyway, so the basic question isn't a question. Or which server are we talking about (in a sentence of max 10 words!)?

You change the information you give us, and the actual question, in every post you make after you get an answer that is never good enough in this thread, and every time you drown everything in way too long texts!?
Why?
Just get to the real question in the first post.

1 Like

What? 100% sure that iptables gives in and opens the SSH port to unwelcome visitors? I am genuinely surprised.

Sorry, it seems I can't put it in 10 words...

The servers behind my home connection are not very usable from outside the LAN, since there is no landline available in the area; the only connection method is mobile broadband, and reception there is not very good. I have a plan that allows up to 200 Mbps (incoming), but my PCIe modem cannot handle more than 150 Mbps, and yet speedtest reports 2 Mbps :wink:

The servers at my home do not share content over the internet; they store files and provide different virtual machines for different purposes, accessible in ways that suit each purpose. For example, there is a virtual machine that runs a vehicle diagnostics system and connects over a Bluetooth serial connection.

I am located in Finland; my server, which serves Finland, is located in LA, USA, just because I got a nice deal there with unlimited bandwidth. This server in the USA is the one I am talking about. My servers at home run FreeBSD and handle backups/storage/distcc/other services. The purpose of the US server is basically to be a web server and mail forwarder; as an iOS developer it's mandatory to host a support/marketing site, so that's the reason.

The real question has remained the same all along, and so has the information. I asked about OpenWrt's firewalling security when used as a server OS, and I never said it's a router.

But here's the question again, with more than 10 words: if I run a server with OpenWrt as the base OS, is the provided iptables enough to block access to SSH and whatever method I decide to use for administration, when I block those ports from the public and allow them only from the VPN?

OpenWrt is just an OS that has iptables. Since you aren't using it in its intended role, you'll have to provide custom iptables rules to do what you want. It can do anything that iptables can do. It'll be up to you to do it correctly.

1 Like

Yes, this is the answer I was looking for. By default it will not be the best solution, but with changes it will do just fine.

Why not use a Debian distro made for servers, with APT for package updates?

3 Likes

That would absolutely be my recommendation as well.

2 Likes