Given how prevalent the need for reliable internet access has become in recent years, I'm a strong proponent of keeping things simple - not just during good times, but also in the failure case.
For me, that implies:
- running OpenWrt on x86_64 (dedicated hardware), yes
- running OpenWrt in a virtual machine (aside from testing or for very specific -optional- subnets), hell no
Yes, OpenWrt works quite well as a virtual machine, and there's nothing wrong with doing that for experiments or to feed a lab network (e.g. of virtual machines) with internet access. Relying on it for daily operation of your basic network is another matter, unless you're really in an enterprise environment with hot-failover, live migration and standby resources.
- if you do this nevertheless, keep track of the stacking order (cold boot, re-bootstrapping) and its implicit dependencies
- try to keep policy decisions about the configuration in one place; don't end up configuring your network in multiple places (worst-case example: managed switch <--> hypervisor <--> router-VM), as that will get out of sync quickly.
- yes, having a dedicated ethernet card each for WAN and LAN (even if the latter is normally a segmented trunk port for various LAN subnets) helps a lot.
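For illustration, a segmented LAN trunk on a current (DSA-based, 21.02+) OpenWrt x86 install might look roughly like this in /etc/config/network. The NIC name, VLAN IDs and subnets are assumptions for the sketch, not a drop-in config:

```
# /etc/config/network (sketch; 'eth1', VLANs 10/20 and the subnets are hypothetical)

config device
	option name 'br-lan'
	option type 'bridge'
	# dedicated LAN NIC, carrying tagged VLANs towards the managed switch
	list ports 'eth1'

config bridge-vlan
	option device 'br-lan'
	option vlan '10'
	list ports 'eth1:t'

config bridge-vlan
	option device 'br-lan'
	option vlan '20'
	list ports 'eth1:t'

config interface 'lan'
	option device 'br-lan.10'
	option proto 'static'
	option ipaddr '192.168.10.1'
	option netmask '255.255.255.0'

config interface 'guest'
	option device 'br-lan.20'
	option proto 'static'
	option ipaddr '192.168.20.1'
	option netmask '255.255.255.0'
```

The point being: all subnet policy lives in this one file on the router, and the switch only needs to pass the tagged VLANs through.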
- keep functionality simple and easily replaceable
- keep cold-standby replacements; they don't need to be feature-complete or match your normal performance, just enough to bootstrap your network again (e.g. an old plastic router will do for a few days, even if you're normally used to 1 GBit/s WAN speeds or above).
- resist the temptation to overload the OpenWrt installation just because it has resources (CPU cycles, storage, RAM) to spare; that's a security nightmare waiting to happen
It's really tempting to add additional server instances to an x86 OpenWrt system, but security and the ability to properly audit your system quickly suffer that way.
Don't become a slave to your own technology - you're at home, not at work.
/K.I.S.S.