I've solved my problem, but hopefully this thread can help others, or someone can tell me why I had to do it the way I did.
Basically, I wanted to put a LEDE instance in the cloud, which seems a bit strange, but the reason for this was to use the excellent LuCI interface, specifically luci-app-statistics and collectd. The reason is, I have some remote probes which I wanted to all send their information back to one master server. I did investigate trying to build collectd on its own, but I'm not a Linux expert, and I thought: why go through all that when LEDE/OpenWrt gives me that already?
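For anyone wanting to do the same, the probe-to-master reporting is handled by collectd's network plugin. A minimal sketch of the raw collectd.conf settings (25826 is collectd's default port; the master address below is a placeholder, and on LEDE these settings are normally managed through the luci-app-statistics setup pages rather than edited by hand):

```
# On the master (the cloud instance) - listen for incoming stats
LoadPlugin network
<Plugin network>
    Listen "0.0.0.0" "25826"
</Plugin>

# On each remote probe - send stats to the master
LoadPlugin network
<Plugin network>
    Server "203.0.113.10" "25826"
</Plugin>
```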
I've built it on Vultr (it's a bit cheaper than Digital Ocean, and just as easy to use). Here's the funny thing though: I wanted the stats to be persistent across reboots (they default to /tmp on install, so you lose all stats if you need to reboot), but when I used the LEDE x86 image from scratch (using this guide), the whole root filesystem would come up as read-only when I changed the directory from /tmp/rrd to something like /etc/rrd.
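For reference, the change in question is the rrdtool data directory in /etc/config/luci_statistics. A sketch (option name as I remember it from a stock build; check yours before copying):

```
config statistics 'rrdtool'
	option DataDir '/etc/rrd'	# default is '/tmp/rrd', which is wiped on reboot
```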
I tried various solutions (thankfully Vultr bills you at an hourly rate, so you can destroy images at will and then build a new one; they really are very good!), but in the end, the only way to get LEDE as the working build was to install the Chaos Calmer OpenWrt image and then upgrade to LEDE! I have no idea why I had to do it this way, hence the post. I am guessing it is to do with either the mounting of the disks, or some rw/ro permissions in LEDE.
Any thoughts would be most welcome. I'm happy with what I have now: all my remote LEDE/OpenWrt/Raspberry Pi boxes running collectd are happily sending their stats back to the Vultr server, where they can be accessed via the normal LuCI web interface.
Have you tried importing the LEDE image as a snapshot and deploying that?
I was using it like that for some time
Thanks for the reply. No, I hadn't. I have saved the working build as a snapshot, but didn't know you could import a base LEDE image as an original image. I used a custom ISO (systembuild32, I think); this gave me the base Linux, I inserted the CC image, got rid of the systembuild32 ISO, and then upgraded to LEDE.
Looking at my Vultr dashboard, how would I do this? I see how you can take a snapshot, but not how to import one.
It might be that the missing image padding is causing your issues. Try building your deployment upon a padded image:
dd if=lede-17.01.4-x86-64-combined-ext4.img of=padded.img bs=500M conv=sync
You can pull snapshots from http or https remote.
But it can only be done from this link; I have not been able to find it in the dashboard, though.
Do you know if, with this method, there would be more drive space on the LEDE image? One of the frustrations of this is that I am paying for a 25GB instance, but only seem to get a 250MB LEDE image, which, once all the packages I want are installed, only leaves about 150MB for logs/rrd data, etc.
Yes, it should be 500M minus the size of the rootfs contents. You can adjust it accordingly.
OK, thanks for both the ideas. I will have a bash and see how it goes. @jow, could you explain further to a noob like me why the likes of collectd/rrd, upon changing from /tmp to /etc, cause the whole / to become read-only?
I'm keen to understand the underlying reason
A less resource-intensive approach is this:
# create a 4GB padded.bin full of zeros
dd if=/dev/zero of=padded.bin bs=1M count=4096
# place original image into the beginning
dd if=lede-17.01.4-x86-64-combined-ext4.img of=padded.bin conv=notrunc
This should yield a 4GB-sized padded.bin file without requiring 4GB of RAM to create it.
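If you want to sanity-check that conv=notrunc really preserves the padded size before doing the full 4GB run, here's a scaled-down demo (file names are just examples):

```shell
# create an 8 MiB zero-filled "padded" file
dd if=/dev/zero of=padded.bin bs=1M count=8 2>/dev/null
# stand in a 1 MiB "firmware image"
dd if=/dev/urandom of=image.img bs=1M count=1 2>/dev/null
# copy the image over the start of the padding; notrunc stops dd truncating the output
dd if=image.img of=padded.bin conv=notrunc 2>/dev/null
# the padded file keeps its full 8 MiB size
stat -c %s padded.bin
```

The same commands with count=4096 and the real image name reproduce the steps above exactly.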
I spun up a new image. In my last post before this I tried it, but ran out of space on the Vultr instance, which got me thinking. The image has a graphical interface, which includes GParted. Once I had that running, I could see the other "wasted" 24.5GB. So I created a 24GB ext4 partition, used the original padded image, and then mounted the 24GB ext4 partition as /overlay. Now I have a nice clean LEDE image in the cloud with 24GB of usable space, so it should be future-proofed! As for collectd, I can move the rrd files out of /tmp and the filesystem remains read-write, so all seems good!
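For anyone repeating this, the /overlay mount goes in /etc/config/fstab. A sketch; the device node /dev/sda3 is an assumption, so check yours with block info first:

```
config mount
	option target '/overlay'
	option device '/dev/sda3'
	option fstype 'ext4'
	option enabled '1'
```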
If I've missed anything, please shout, but if not, thanks for the help.
Could you elaborate a bit more on the steps you followed to get your virtual router instance up-and-running at vultr.com? Or perhaps a better question is, what steps would you use now if you were starting from scratch?
I've been documenting what needs to be done as I go along. I started with a VMware box on my MacBook before progressing to the Vultr one, so there are subtle differences. I got most of it working through trial and error; the only sticking points were:
a) the image becoming read-only when I changed the /tmp directory for collectd (which was my main reason for using the cloud in the first place)
b) using the full space (25GB) offered by vultr on their $5 a month package
That said, once I get the chance, I'd like to put it up on a blog for others to follow. It's working great now.
What version of linux did you use as a starting point on Vultr.com?
I haven't had much luck using the DamnSmallLinux ISO referenced in the response to my original post on this subject (the post you linked to and called a "guide" in your first post in this thread). I haven't been able to get SSH access to it, and the Vultr console isn't getting the job done either (the pipe key "|" doesn't work for some reason, for example).
A little help getting past square one without having to wait for your blog post would be much appreciated!
Well, I'll answer my own question here: SystemRescueCD is what you want to use as your initial boot ISO on vultr.com; forget about DamnSmallLinux! This ISO has the key elements you need to install a LEDE image (in my case the ROOter build of LEDE).
SSH is enabled from the start; all you need to do is change the root password. The ISO also includes GParted, which allows you to partition the virtual SSD. You'll also need to mount it to a directory you create, in order to save the LEDE image. You'll want to use SSH as much as possible, as the Vultr console can be frustrating to use and doesn't seem to support the "|" key.
This guide is useful as a framework:
However, a number of things are different using vultr and the SystemRescueCD. ROOter is distributed differently than LEDE and as such I had to download the x86 zip and extract the img.gz file, which I then self-hosted on Dropbox in order to be able to use wget.
The post in this thread by @jow entitled "A less resource-intensive approach is this:" is essential to end up with read-write result and not read-only. As @cabbers mentioned in the next post after that, you can use GParted to create an additional partition to use the rest of the SSD space available.
I'm now working on getting SSH and LuCI running along with a WAN interface. Any tips on those elements @cabbers? I'll add to this post once I have LEDE/ROOter fully configured, but for now it boots, and I have console access.
Apologies Scott, I was going to reply but got caught up in work stuff. Yes, SystemRescueCD is the one I used as well: I mounted that as a custom ISO, and then used it through the boot and build process. I went about it slightly differently, and it may be a bit long-winded. I built a partition on the 25GB disk and used that as the destination directory for the padded image, then put the completed file (about 330MB) back into the original directory, then trashed the 25GB partition (this was recreated later on as a new partition of about 24.5GB, mounted as /overlay, thereby giving me a massive 24GB LEDE image to use). I had to do this as the build kept hanging (I could see from the Vultr console that CPU was constantly hitting 100%+, and I was also running out of disk space every time).
Once you have the LEDE image running, as before, you need to set a new password with passwd for SSH to work.
For WAN access you need to go into the console and edit the network config with vi /etc/config/network (using vi, as you can't connect to the internet yet to get nano).
I'm not going to go through how to use vi, but suffice to say, take out all the stuff for the 'LAN' interface, leaving you with the below:
root@Cloud-LEDE-3:~# cat /etc/config/network
config interface 'loopback'
	option ifname 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fde5:5450:328b::/48'

config interface 'wan'
	option ifname 'eth0'
	option proto 'dhcp'
Do a /etc/init.d/network restart and you should pick up the WAN IP assigned by Vultr; you should now be connected to the internet.
You also need to stop the firewall (I also disabled it) using
/etc/init.d/firewall stop and
/etc/init.d/firewall disable
This will then allow you access to SSH and LuCI. You can start it again once you are happy with the build. And once you ARE happy, you can remove the SystemRescueCD ISO, and you should boot automatically into LEDE.