I want advice (opinions) on which is better for making ubus RPC calls: ssh, or https via uhttpd-mod-rpc.
I have multiple APs (around 10) that I want to collect telemetry from and control the configs of. I have a linux box that I use to control these APs. I already have custom ubus methods for all the actions I want to perform and the data I want to collect.
My reasoning for using ssh only:
I have already set up public/private keys; this way I only have one service running on each AP and only one port open, instead of another service and another port.
I don't need to generate a self-signed CA for setting up https
ssh is just as fast and secure as https.
although working with ssh is not as easy and clean as https
one of the members on IRC made a good point: with uhttpd-mod-rpc I can use the rpcd ACLs to control which ubus calls are allowed, but I am the only one calling these methods, so this doesn't pose a risk
I don't know how to have connections to multiple APs open from the linux box at the same time.
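On the simultaneous-connections point: from a single linux box you can fan ssh calls out with a thread pool, one worker (and one connection) per AP. A minimal sketch, assuming key-based auth is already in place; the hostnames and the "system info" call are placeholders for your own APs and custom methods:

```python
# Sketch only: fan out ubus-over-ssh calls to several APs at once.
# Hostnames and the "system info" call are placeholders.
import json
import subprocess
from concurrent.futures import ThreadPoolExecutor

APS = [f"ap-{n:02d}.lan" for n in range(1, 11)]  # hypothetical names

def ubus_ssh_cmd(host, obj, method, args=None):
    """Build the ssh command line that runs ubus on the AP."""
    payload = json.dumps(args or {})
    # Quote the JSON for the remote shell (fine as long as the
    # payload itself contains no single quotes).
    return ["ssh", f"root@{host}", "ubus", "call", obj, method,
            f"'{payload}'"]

def call_ap(host):
    """Run one ubus call on one AP and parse its JSON reply."""
    out = subprocess.run(ubus_ssh_cmd(host, "system", "info"),
                         capture_output=True, text=True, timeout=10)
    return host, json.loads(out.stdout)

# Usage on the controller (one concurrent connection per AP):
#   with ThreadPoolExecutor(max_workers=len(APS)) as pool:
#       for host, info in pool.map(call_ap, APS):
#           print(host, info)
```

If you end up polling frequently, OpenSSH connection multiplexing (ControlMaster/ControlPersist) can also cut the per-call handshake cost.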
for uhttpd:
easy to work with on the client side because it's json and http
just as secure as ssh
easy on the client side to have multiple connections with many servers (that is with many APs from a single linux box)
but I have to set up a password, which I have to store on the linux box.
Please let me know if I am thinking about this correctly. I appreciate it.
I'd never trust either uhttpd or dropbear for automated "root" access.
I'd consider MQTT for your telemetry, over TLS if appropriate. Being "push" rather than "pull", you get some advantages there in that it is a local process that acquires the data, rather than a remote request.
I use git over SSH for configuration management; rsync over SSH is another good option. For "SSH" I prefer OpenSSH, or an equivalently secure implementation, especially if you're using command-specific keys.
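To make "command-specific keys" concrete: in OpenSSH's authorized_keys you can pin a key to a single command, so even automated "root" access is limited to that one action. An illustrative entry (the command, key material, and comment are placeholders):

```
# /root/.ssh/authorized_keys on the AP: this key may only run one command
command="ubus call system info",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA... controller@linuxbox
```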
If you've got the "disk space" and memory, yes. I build packages into my images directly now. As I found out, squashfs can save a significant amount of space. Now that I can build a complete image from scratch in under half an hour, and with a proper /etc/sysupgrade.conf preserve all the config I want, the pain of upgrading isn't what it once was -- "just" build a new image and flash it, as all the needed packages are then current and no download/install is required.
One more question about openssh: I am going to install openssh-server; should I also install openssh-client, openssh-client-utils, and openssh-keygen? I will be generating keys on the linux box, not on the AP, and transferring the public key to the AP via the firmware.
Since I use git fetch from the OpenWRT box, I find the client valuable. It is pretty big, though. If you're not ssh-ing out of the box, then the tiny dropbear client, used against trusted servers, may meet your needs.
My setup: a linux box (as a controller/router) and 10 APs (as dumb APs) connected to the linux box for dns, dhcp, etc. I will be using the APs just as access points and not as routers.
My ideal situation for configuration:
make a change on the controller
generate the uci config file on the controller and send it to the AP (I was thinking rsync)
or send an rpc command to the AP (don't know yet: json-rpc vs ssh)
for telemetry:
small data (like you said MQTT)
but for heavy data and custom requests (don't know yet)
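For the "generate uci config file on the controller" step above, rendering the file is just text templating; the result can then be rsync'd to /etc/config/ on the AP. A sketch where the section name and options are made up:

```python
# Sketch only: render a named uci config section on the controller.
# The wifi-iface section and its options here are illustrative.
def uci_section(config_type, name, options):
    """Render one named uci section as text."""
    lines = [f"config {config_type} '{name}'"]
    for key, value in options.items():
        lines.append(f"\toption {key} '{value}'")
    return "\n".join(lines) + "\n"

wifi = uci_section("wifi-iface", "guest", {
    "device": "radio0",
    "mode": "ap",
    "ssid": "guest-net",
    "encryption": "psk2",
})
# Write the rendered text to a file, push it to the AP's /etc/config/,
# then trigger a reload (e.g. "wifi reload") over ssh or rpc.
```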
If you know what those requests might be, you can listen for specific topics with MQTT, then send the data: blah/blah/AP-09/command/send/log or the like -- not executing the content, but responding to a pre-defined topic or token. Yes, there is the possibility of a DoS attack, but if you encrypt the channel and use auth on it (as well as on access to the MQTT broker to which your clients connect), then, at least for me, the risks are reasonable. Between AWS using a known cert and Let's Encrypt for your own, you should be able to establish identity reasonably well these days, without resorting to self-signed certificates.
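The "respond to a pre-defined token, never execute the content" pattern can be sketched without any broker library. The topic layout below is illustrative; a real deployment would hang this off an MQTT client such as paho-mqtt, over TLS:

```python
# Sketch only: dispatch pre-defined command topics; payloads are
# never executed. Topic layout ("site/ap/AP-09/command/send/log")
# is illustrative.
def topic_matches(filter_, topic):
    """MQTT-style topic filter match supporting '+' and trailing '#'."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True
        if i >= len(t_parts) or (f != "+" and f != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

# Whitelist of known actions; anything else is ignored.
COMMANDS = {
    "send/log": lambda ap: f"publishing log for {ap}",
}

def handle(topic):
    """Respond only to known tokens under the command subtree."""
    if not topic_matches("site/ap/+/command/#", topic):
        return None
    parts = topic.split("/")
    ap, action = parts[2], "/".join(parts[4:])
    if action in COMMANDS:
        return COMMANDS[action](ap)
    return None
```

Because the handler only maps known tokens to local actions, a hostile publisher can at worst trigger a legitimate report, not run arbitrary content.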
That's how I handle many things at home, including mission-critical systems like turning on the espresso maker. Not the fireplace though -- that will always be a manual switch, at least to turn on!