Context
As I wrote at the beginning of the selfhosting series about my docker swarm cluster, I have so far used a GlusterFS volume mounted on all nodes of the swarm. It has worked well until now, but the truth is I believe it was too heavy for the raspberry pi (mainly the pi 2 and 3), and I had a few issues with it over the years (mounts coming up with only part of the data, and other small glusterfuck^^).
As I finally invested in and installed a NAS1 at home recently (more on this later), I decided to leverage it for the storage of the cluster.
Disclaimer: I know this is not the best way of managing this, especially with heavier usage, and that GlusterFS is made for this and is one of the best choices. But for a selfhosted cluster based on raspberry pi with almost no traffic, it was overkill.
Setup
On each raspberry pi, I had to install sshfs:
sudo apt install sshfs
Then, enable user_allow_other to manage access permissions and rights through sshfs:
sudo vim /etc/fuse.conf
And uncomment user_allow_other.
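If you prefer a one-liner over editing the file by hand, something like this should work on Debian-based systems, assuming the option is present but commented out with the default # prefix:
# uncomment user_allow_other in /etc/fuse.conf
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf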
Create the directory where the remote directory will be mounted, e.g. (on all of the cluster's nodes):
sudo mkdir -p /mnt/nas_storage/
Now, create an ssh key (without passphrase) on each node where one does not already exist, with ssh-keygen.
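For example, a minimal invocation could look like this (the key type and path are my choice, adapt them to your setup):
# generate a passphrase-less ed25519 key
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519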
Add each key to the authorized_keys file of the right user on the server.
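Assuming password authentication is enabled on the server for this initial step, ssh-copy-id can do it for you (the user, host and key path below are placeholders):
# append the public key to the remote user's authorized_keys
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server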
You can test that it works as expected (and add the server to the known_hosts file):
sshfs -o allow_other user@server:/path/to/remote/directory/ /mnt/nas_storage
If your data appear in /mnt/nas_storage, it works. You can unmount it for now:
fusermount -u /mnt/nas_storage
To automount the sshfs directory on demand, edit the /etc/fstab file and add:
user@server:/path/to/remote/directory /mnt/nas_storage/ fuse.sshfs noauto,x-systemd.automount,_netdev,users,IdentityFile=/path/to/sshkey,allow_other,reconnect 0 0
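Note that IdentityFile should point to the private key, not the .pub file, since no ssh agent is available when the mount happens. To pick up the new fstab entry without rebooting, you can reload systemd and trigger the automount by simply accessing the mount point:
sudo systemctl daemon-reload
ls /mnt/nas_storage/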
1. Based on Helios64 for the hardware, armbian and openmediavault for the software. Detailed blog post to follow.