
Migrating Uptime Kuma on Docker

Sometimes you need to migrate servers, and that is the case for my instance of Uptime Kuma, hence this post. Having just grabbed another instance at HostHatch, I have decided to move my non-critical workloads away from Linode.

Some links may be affiliate links that keep this site running.

Anyway, as it is time to migrate, I’d like to take my historic data and configuration with me so that I don’t need to adjust the scripts I have running. We discussed all the configuration in Uptime Kuma Part 1 & Part 2, including the Cloudflare tunnel I use; the only change needed there is to point your CNAME/A record to the new IP once this migration is done.

I host the majority of my cloud instances on a HostHatch VPS (Virtual Private Server) in Asia for a steal. Some of the other hosts I use are RackNerd (US) and WebHorizon (Asia + Europe). I decided it is time to move away from Linode - a great service, but I am looking to reduce my instance billing. For comparison, I save more than 50% on HostHatch compared to Linode ($3.33 versus $8). Don't get me wrong: if this were an extremely (like REALLY) critical application, I would keep it on Linode.

Let’s get to it

Old host

If you used a custom compose file that maps the data directory to your local directory, you just have to copy it over and you are done! However, this is not the case with Linode, where the data folder is sitting on a docker volume.

docker volume ls

The command above lists all the Docker volumes we have; we are looking specifically for the uptime-kuma volume.
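If you want to double-check that the volume exists and see where Docker keeps it on disk, a quick sketch using the standard `--format` flags of the Docker CLI:

```shell
# Confirm a volume named exactly "uptime-kuma" exists
docker volume ls --format '{{.Name}}' | grep -x 'uptime-kuma'

# Show the host path where Docker stores the volume's data
docker volume inspect uptime-kuma --format '{{ .Mountpoint }}'
```

The mountpoint is useful to know, but we won't copy from it directly; tarring from inside the container avoids permission surprises.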

[Image: UptimeKuma docker volume]

Once this is done, we need to get the data on the volume out somehow. I experimented with various commands and found that tarring the directory inside the container, then extracting it on the other side, was easier than using the built-in export command, even though it means typing a few more commands in the end.

On the old host, type in:

docker exec uptime-kuma tar cvf /tmp/backup-uptimekuma.tar /app/data

This creates a tar of the /app/data directory and saves it inside the container under /tmp. We then copy the file from the container to our host using the built-in docker cp command as follows:

docker cp uptime-kuma:/tmp/backup-uptimekuma.tar ~/backup-uptimekuma.tar

[Image: UptimeKuma export volume]

Once that is done, you need to transfer this file to your new host - if it is not a critical application (which for us it isn’t), we recommend hosting it on HostHatch, WebHorizon or RackNerd; for more critical ones, do check out Linode or your favourite enterprise provider, though there is a small premium for it.
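Any file-transfer tool works for this step; a minimal sketch with scp (the IP below is a documentation placeholder - substitute your new host's user and address):

```shell
# Copy the backup tar to the new host's home directory
# (203.0.113.10 is a placeholder IP - use your own server's address)
scp ~/backup-uptimekuma.tar root@203.0.113.10:~/
```

rsync works just as well if you prefer resumable transfers for larger archives.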

If you have a Cloudflare tunnel running, I’d recommend stopping the container using docker stop uptime-kuma.

Do not delete your running instance until you confirm everything on the new one is in order and working!

New host

I like to manage the docker infrastructure through docker compose files, and so for this I created a new one which is detailed below:

version: '3'

services:

  uptime-kuma:
    container_name: uptime-kuma
    hostname: uptime-kuma
    image: louislam/uptime-kuma:1
    volumes:
      - uptime-kuma:/app/data
    restart: always

volumes:
  uptime-kuma:
    name: uptime-kuma

Once that is done, we bring up the container using the docker compose up -d command. If you followed the previous guides, you already have a Cloudflare tunnel up and running; you can keep that tunnel up, just make sure to bring down the old host if you haven’t done so.
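Before importing, it is worth confirming the fresh container is actually up; a quick check, assuming the container_name from the compose file above:

```shell
# Start (or ensure) the stack is running
docker compose up -d

# Filter by our container_name; Status should read "Up ..."
docker ps --filter name=uptime-kuma --format '{{.Names}}: {{.Status}}'
```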

We will now copy over the tar into the container, and extract the file inside of it using:

docker cp ~/backup-uptimekuma.tar uptime-kuma:/tmp/backup-uptimekuma.tar
docker exec uptime-kuma tar xvf /tmp/backup-uptimekuma.tar -C /

All that is left is to restart the container and we are good to go!

docker compose down && docker compose up -d
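To verify the restore actually landed, you can peek inside the container; you should see the SQLite database (kuma.db in Uptime Kuma v1) among the restored files:

```shell
# List the restored data directory inside the container;
# kuma.db and the upload/ directory should be present
docker exec uptime-kuma ls /app/data
```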

[Image: UptimeKuma importing volume]

If you are using A records, just point your DNS to the new IP of your server.

If you are using the previous scripts we published in Uptime Kuma - Part 1, you just need to update your DNS records - or do nothing if you use a tunnel. If you are pointing at the IP directly, you’ll need to update all your scripts.

You can go ahead and check the logs for any issues by running docker compose logs -f.

Cover image credits go to Jeffery Hamilton / Unsplash

This post is licensed under CC BY 4.0 by the author.