Linux, the kernel for OpenWrt, uses RAM as appropriate, based on general rules. Just like on a desktop with 32 GB of memory and only 2 GB in use, if your computational needs don't require more RAM, it isn't used.
I'll rephrase the question again. Most OpenWRT packages are specifically built to run on low-resource systems, and are hence optimized at compile time for small package size and small memory footprint.
If you were compiling OpenWRT specifically for systems with larger memory sizes, is there anything that could be tweaked to improve performance/reduce bottlenecks by making use of more RAM?
Maybe the answer is simply 'no'. If so, fair enough!
Yes: use “standard” C libraries and utilities, make it easy to run an arbitrary build of any source, ...
OpenWrt is highly optimized for running on tiny flash, with tiny RAM, and slow processors to be able to primarily route traffic and manage wireless connectivity. LuCI is only the tip of the iceberg. As soon as you remove those design limitations, the choices made look a lot more like a desktop/server distro.
tl;dr: the only viable choice to "not waste your hardware" is to run OpenWrt as a VM as part of a bigger "hyperconverged" system, for example using your home NAS/server/HTPC to run it.
Even changing options won't change the fact that the job does not really need that much resources.
Eh, the answer is yes: you can recompile it against a standard Linux distro C library, use a different compiler optimization level (it currently optimizes for size, for obvious reasons), and so on.
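For reference, both of those knobs live in the OpenWrt buildroot. A minimal .config sketch of the idea (symbol names are from mainline OpenWrt menuconfig and may shift between releases, so verify them in `make menuconfig` under "Advanced configuration options" before relying on them):

```
# Enable developer/toolchain options
CONFIG_DEVEL=y
CONFIG_TOOLCHAINOPTS=y
# Switch the C library from musl to glibc
CONFIG_LIBC="glibc"
# Optimize for speed instead of the default size-oriented -Os
CONFIG_TARGET_OPTIMIZATION="-O2 -pipe"
```

After changing the libc or target optimization flags you need a full toolchain rebuild, not just a package rebuild.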
The point is that it won't make a huge difference, especially on performance. x86 hardware is so blatantly overpowered if compared to embedded systems that even a 10% increase isn't going to matter for routing and firewall jobs.
I have worked with high end custom router systems (basically a server box doing router/firewall work) running modern Debian (normal desktop distribution, the base of Ubuntu) that aren't really using more than 128 MB of RAM unless you really start pushing hundreds of clients doing whatever.
Not really... and it doesn't need more, unless one adds more packages.
Alternate distros are similarly efficient - I've got a FreeBSD-based box that runs my primary network with 4 GB of RAM, and even loaded it realistically uses about half a GB, and that's with a fair amount of bells and whistles...
That team has done a good job on x86 all told... but I've found that OpenWRT can be just as efficient on a smaller memory footprint.
You can't. A common DNS entry won't exceed 512 bytes. So 3.5GB can hold 7 million entries. I just queried some random domains, and most domains have a TTL of 7 days. So you would roughly have to query a million unique domains a day to use 3.5GB.
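The arithmetic above checks out; a quick sketch, assuming the 512-byte-per-entry upper bound and 7-day average TTL figures from the post:

```python
ENTRY_BYTES = 512                  # upper bound per cached DNS entry (from the post)
CACHE_BYTES = int(3.5 * 1024**3)   # 3.5 GiB of RAM devoted to the cache

# How many entries fit in the cache
max_entries = CACHE_BYTES // ENTRY_BYTES
print(max_entries)                 # 7340032, i.e. ~7.3 million entries

# With a 7-day TTL, entries expire after a week, so to keep the cache
# full you would need roughly this many unique domains queried per day
TTL_DAYS = 7
unique_per_day = max_entries // TTL_DAYS
print(unique_per_day)              # 1048576, ~1 million unique domains/day
```

In practice most entries are far smaller than 512 bytes, so the real number of domains needed to fill 3.5 GB is even higher.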
How about a transparent caching proxy? Although I admit it's a bit too late for that, as most sites are moving to https.
A more useful way would be to run either a desktop/server distro or a bare-metal hypervisor on the hardware and run OpenWrt as a pure router/firewall (no services) in a container, and then your service hosts in other containers.
OpenWRT mainly targets small-footprint, relatively low-performance devices, and the distro is tuned for that kind of device. You can make some adjustments and change a few lines of code to improve the situation in terms of optimization, but if you want more features available you're probably better off looking for another distro.
Some great suggestions, however not quite my use-case.
I've started using Intel NUC IoT devices for routing/AP type jobs. These are cheap, low-power, wide temperature range, and compact. They tend to be quite processor-restricted (single/dual-core Intel Atoms at ~1.4 GHz) but are almost always supplied with 4 GB RAM modules.
I guess maybe this is where the LEDE schism started (focusing on a wider array of embedded devices), and I don't want to reopen that wound, but there is an argument that as flash and RAM become very cheap as commodities, the flash and RAM available on IoT/micro-PC devices is going to start going up-up-up.
I would love, for example, for OpenWRT to start supporting larger, more intensive packages - things like UniFi Controller/UniFi Video. I do think there is a gap in the market here, because the other resource constraint on the NUC devices is that the embedded storage is only 4 GB. So it's bizarrely massively over-sized for OpenWRT, yet at the same time barely enough to run Ubuntu Server 18.04 with UniFi Video installed (to get it to install I had to go package-by-package and clean APT each time, as the size of the archives plus the installed packages pushed it over the 3.5 GB available).
Maybe a fork to two builds... one targeted at router hardware, one targeted at IOT hardware?