S3 backed docker volumes
HA Vaultwarden? You'll want more than just HA for passwords - think attachments and notes too. Using S3/Gdrive for storage with rclone.
I’ve been trying to figure out how to make my Vaultwarden redundant - meaning the attachments, notes and whatnot besides the passwords (as those are already stored in an HA cluster).
After much research I found a solution, which I am currently testing and will write more about as it progresses! It uses S3-backed storage - the backend I chose is MinIO, which can be self-hosted. From the rclone documentation you can figure out how to add other backends like AWS S3, Google Drive, OneDrive and iDrive e2.
Needless to say, this is on top of the regular backups I run (which I will write about later) for configurations and important files.
Let’s get started
You will need to run this on every node that you want to have access to the backend.
rclone Docker plugin
To provide the S3 backend we will be using the rclone Docker plugin. For that, we need to install it and create the needed directories.
sudo apt-get -y install fuse
sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache
docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions
docker plugin list
The explanation for these commands is:
- sudo apt-get -y install fuse: installs FUSE (Filesystem in Userspace) on your system without asking for confirmation. FUSE is necessary for file systems that run in user space, which is how the rclone plugin mounts remote storage.
- sudo mkdir -p /var/lib/docker-plugins/rclone/config and sudo mkdir -p /var/lib/docker-plugins/rclone/cache: create the directories that will store the configuration file and cache data for the rclone Docker plugin. The -p flag ensures that any necessary parent directories are created as well.
- docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions: installs the rclone Docker volume plugin from the Docker plugin registry, tagged specifically for the amd64 architecture. It’s installed with verbose output (args="-v"), given an alias (rclone), and granted all permissions it requests, which allows it to function without restrictions inside your Docker environment.
- docker plugin list: lists all installed Docker plugins on your system, showing each plugin’s name, tag, description, and whether it’s enabled.
After the last command you should see:
ID NAME DESCRIPTION ENABLED
9a249a6a3d48 rclone:latest Rclone volume plugin for Docker true
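Side note: if you ever need to change the plugin’s settings later (verbosity, args and so on), Docker requires the plugin to be disabled first. A sketch, based on the tunables the rclone plugin documentation describes - RCLONE_VERBOSE is assumed to be one of them:
docker plugin disable rclone
docker plugin set rclone RCLONE_VERBOSE=2
docker plugin enable rclone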
Great! We have the plugin installed; now we need to configure it, either in Docker Compose or as a CLI volume configuration.
Before we move to the next step, we need to configure the backend. This can be done by running the rclone config command on a different host (or the same one, if you have rclone installed on it) and copying the output into the file mentioned below, or by setting the file manually.
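If you’d rather not copy config output around, the same section can be generated non-interactively with rclone’s config create subcommand. A minimal sketch, assuming rclone is installed on the node; the credential and endpoint values are placeholders, and the --config flag writes straight into the plugin’s config path:
sudo rclone config create minio-vaultwarden s3 \
  provider Minio \
  access_key_id <Access_Key> \
  secret_access_key <Secret_Key> \
  region <region> \
  endpoint http://<IP/Host>:<Port> \
  --config /var/lib/docker-plugins/rclone/config/rclone.conf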
To set the file manually, create it with:
sudo nano /var/lib/docker-plugins/rclone/config/rclone.conf
Inside we are going to have the following:
# Name of the backend
[minio-vaultwarden]
# Type of backend
type = s3
# Provider we are connecting to
provider = Minio
# Your credentials
access_key_id = <Access_Key>
secret_access_key = <Secret_Key>
# ACL, matching how you defined the bucket
acl = <private/public>
# Region
region = <region>
# Endpoint to connect to
endpoint = http[s]://<IP/Host>:<Port>
# Left empty on purpose, configure if you need to
location_constraint =
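Before wiring the backend into Docker, it’s worth a quick sanity check that the credentials and endpoint work. A sketch, assuming rclone is installed on this node and pointed at the plugin’s config file - mkdir creates the bucket we’ll use later if it doesn’t exist yet:
sudo rclone --config /var/lib/docker-plugins/rclone/config/rclone.conf mkdir minio-vaultwarden:vaultwarden-attachsend
sudo rclone --config /var/lib/docker-plugins/rclone/config/rclone.conf lsd minio-vaultwarden: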
Docker Compose
We will modify our vaultwarden compose file to the following configuration:
services:
  vaultwarden:
    image: vaultwarden/server:1.30.1-alpine
    hostname: vaultwarden
    container_name: vaultwarden
    restart: always
    # ports:
    #   - 80:80
    env_file:
      - ".env"
    volumes:
      - ./vw-data:/data
      - vaultwarden-attachsend:/minio-backend

volumes:
  vaultwarden-attachsend:
    driver: rclone
    driver_opts:
      remote: 'minio-vaultwarden:vaultwarden-attachsend'
      allow_other: 'true'
      vfs_cache_mode: full
      poll_interval: 0
In the volume definition, minio-vaultwarden is the name of the backend we defined earlier and vaultwarden-attachsend is the bucket name; /attachments is a folder in the bucket where we are saving files to.
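If you’d rather go the CLI route mentioned earlier instead of compose, the equivalent volume can be created directly. A sketch with the same remote and options; note that a volume created this way would be declared as external in the compose file:
docker volume create vaultwarden-attachsend \
  -d rclone \
  -o remote=minio-vaultwarden:vaultwarden-attachsend \
  -o allow_other=true \
  -o vfs_cache_mode=full \
  -o poll_interval=0
docker volume ls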
I use the SENDS_FOLDER and ATTACHMENTS_FOLDER variables to point these folders at the mounted bucket in the Vaultwarden configuration: SENDS_FOLDER=/minio-backend/sends and ATTACHMENTS_FOLDER=/minio-backend/attachments, as shown in the snippet below.
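Since the compose file loads .env, the relevant part of that file could look like this (a sketch; only these two variables matter for this setup, the rest of your Vaultwarden environment stays as it is):
# Point Vaultwarden's sends and attachments at the rclone-backed mount
SENDS_FOLDER=/minio-backend/sends
ATTACHMENTS_FOLDER=/minio-backend/attachments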
You can see that we are NOT syncing everything over to the bucket. The reason is that S3 buckets are not built for frequent on-demand access to folder structures; they are built for durability and accessibility rather than low-latency storage. While Vaultwarden is a stateless app and we could just throw the entire data directory into the bucket, limiting it to attachments and sends is good practice for your data.
Do note that data is cached on the volume but still has to be retrieved from the backend, and you pay for bandwidth and storage with S3 providers - take that into account.
Go ahead and start your container 😄
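Once it’s up, you can verify the mount from both sides. A quick check, assuming the container name and remote from above:
# Inside the container: the bucket should be mounted at /minio-backend
docker exec vaultwarden ls -la /minio-backend
# From the node: files written by Vaultwarden should show up in the bucket
sudo rclone --config /var/lib/docker-plugins/rclone/config/rclone.conf ls minio-vaultwarden:vaultwarden-attachsend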
Troubleshooting
- Context deadline exceeded
Error response from daemon: Post "http://plugin.moby.localhost/VolumeDriver.Mount": context deadline exceeded
Bring your Docker stack down, clear the plugin cache, and bring your compose stack back up:
docker compose down
sudo rm -rf /var/lib/docker-plugins/rclone/cache/*
docker compose up -d