High Availability self-hosted Ghost instance

Create a highly available Ghost blog. Here we define all the steps needed to migrate to such a solution.

I received a comment on Reddit that my current install of Ghost is not accessible from a location in the U.S., which made me wonder whether the site is unreachable from other locations as well.

The backend of this website runs on an HA cluster of MariaDB servers, so the database is accessible at all times. The web tier, though, lives on only one server, because Ghost requires quite a bit of content to stay in its folders to function - such as these articles and feature images.

How, then, do I make this highly available?

Some links may be affiliate links that keep this site running.

Well, the solution came to me when I was looking for an S3 backend for Docker. Using S3 as the backend for files that do not require constant modification helps with syncing them across different locations in a timely manner. This also allows me to scale this single instance to three, serving the U.S., EU, and Asia. I can balance and adjust traffic for latency, as well as issue certificates, using Technitium Apps.

You will need a self-hosted MySQL database (a Galera cluster) for this configuration - meaning a database you can connect to from both hosts, so that at least the web app is highly available.
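
As a rough sketch, creating that database and a user that can connect from any host looks like this on one Galera node (database name, user, and password are placeholders - adjust them to your setup; the change replicates to the rest of the cluster):

mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS ghost_db;
CREATE USER IF NOT EXISTS 'ghost_user'@'%' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON ghost_db.* TO 'ghost_user'@'%';
FLUSH PRIVILEGES;
SQL

In practice you may want to restrict the user to your web servers' addresses instead of '%'.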

I leaned towards moving Ghost to Docker for a few reasons: version control, consistency, logging, and extensibility. By switching to a Docker setup I can bring my own reverse proxy (Caddy) into the mix and enforce better filtering on the web layer using CrowdSec. A Docker setup will also allow me to sync specific folders over Docker volumes, such as the /content folder where all the files mentioned earlier reside.

I host the majority of my cloud instances on HostHatch VPS (Virtual Private Server) instances (in Asia) for a steal. Some of the other hosts I use are RackNerd (US) and WebHorizon (Asia + Europe), and I decided that it is time to move away from Linode - which is a great service, but I am looking to reduce my spending on instances. For comparison, I save more than 50% on HostHatch compared to Linode ($3.33 compared to $8). Don't get me wrong: if this were an extremely (like REALLY) critical application, I would keep it on Linode.

Let’s get started

Migrating from a Ghost install to Docker

As I am setting up a new server in an additional location, I started with a fresh provisioning of the server and hardening it, as well as installing Docker. We will first need to create an external network - this will help isolate containers better.

docker network create caddy_net
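
You can confirm the network exists before moving on:

docker network ls --filter name=caddy_net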

Caddy

We will start with creating the configuration for Caddy. We keep this as a separate definition from the rest, as it allows us better control over our Docker stacks.

Run the following commands to create our folder structure:

sudo mkdir -p /docker/caddy
sudo chown $USER:$USER -R /docker
nano /docker/caddy/docker-compose.yaml

The Docker Compose configuration is below; check out the latest Caddy release and replace the version tag below accordingly.

version: "3.9"

services:
  caddy:
    image: caddy:2.7.5
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./caddy_data:/data
      - ./caddy_config:/config
    networks:
      - caddy_net
    # Optional: loads variables such as CF_API_TOKEN (used by the DNS-01 setup below)
    env_file:
      - ".env"

networks:
  caddy_net:
    external: true

We will now prepare our Caddyfile for the Ghost instance.

sudo nano /docker/caddy/Caddyfile

Paste the following configuration inside:

{
    # Your email for LetsEncrypt registration.
    email youremail@domain.com 

    # Staging environment, comment after your test run works
    acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

yourdomain.com {
    reverse_proxy ghost:2368
}

# Redirect traffic from 'www' to yourdomain.
www.yourdomain.com {
    redir https://yourdomain.com{uri}
}

The configuration above uses an HTTP-01 challenge; if you’d like to use a DNS-01 challenge, check out the Caddy documentation.

Some provider-specific configuration is below to make it easier:

Cloudflare DNS-01 Challenge

You will need to use the image slothcroissant/caddy-cloudflaredns:v2.7.5 in your compose file.
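
As a sketch, the compose change is just the image: line, plus the .env file the compose definition already references, holding the token the tls blocks below expect (the value is a placeholder - use a Cloudflare API token with DNS edit permissions for your zone):

cat > /docker/caddy/.env <<'EOF'
CF_API_TOKEN=your-cloudflare-api-token
EOF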

yourdomain.com {
    reverse_proxy ghost:2368
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
}

www.yourdomain.com {
    redir https://yourdomain.com{uri}
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
}

DNS-01 - RFC2136 Configuration

You will need to build your own image using the module here: http://github.com/caddy-dns/lego-deprecated
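
I won't go through a full build here, but as a sketch it follows the standard builder-image pattern from the official Caddy Docker docs, with the module path taken from the link above:

# Dockerfile - bake the lego-deprecated DNS module into Caddy
FROM caddy:2.7.5-builder AS builder
RUN xcaddy build --with github.com/caddy-dns/lego-deprecated

FROM caddy:2.7.5
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

Build it with docker build -t caddy-rfc2136 . and point the image: line of your compose file at the resulting tag (the tag name is arbitrary).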

Now that we have set up Caddy, we can continue on to configuring Ghost.

Make sure to open your firewall on ports 80 & 443.
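
If you use ufw, for example, that looks like this (443/udp is optional but matches the HTTP/3 port mapping in the compose file):

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 443/udp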

Ghost Configuration

First, you have to follow the configuration to set up an S3-backed volume using rclone.
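
In short - and as a rough sketch only, following the rclone Docker volume plugin documentation and assuming a MinIO backend to match the minio: remote used in the compose file below (endpoint and keys are placeholders):

# Directories the plugin expects for its config and VFS cache
sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache

# Install the plugin under the alias "rclone" so compose can use "driver: rclone"
sudo docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions

# Define the "minio" remote the volumes below reference
sudo tee /var/lib/docker-plugins/rclone/config/rclone.conf >/dev/null <<'EOF'
[minio]
type = s3
provider = Minio
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://s3.example.com:9000
EOF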

Then we will configure our ghost docker-compose file:

sudo mkdir /docker/ghost
nano /docker/ghost/docker-compose.yaml

Paste this inside the file; you can find the Ghost Docker image versions here:

version: '3.8'

services:

  ghost:
    image: ghost:5.75.1
    restart: unless-stopped
    networks:
      - caddy_net
    volumes:
      - /docker/ghost/log:/var/lib/logs
      - ghost-content:/var/lib/ghost/content
      - ghost-themes:/var/lib/ghost/current/content/themes
      - ./config.production.json:/var/lib/ghost/config.production.json:ro
      
volumes:
  ghost-content:
    driver: rclone
    driver_opts:
      remote: 'minio:ghost-content'
      allow_other: 'true'
      vfs_cache_mode: full
      poll_interval: 0

  ghost-themes:
    driver: rclone
    driver_opts:
      remote: 'minio:ghost-themes'
      allow_other: 'true'
      vfs_cache_mode: full
      poll_interval: 0

networks:
  caddy_net:
    external: true

There is some symlinking happening with the themes: for some reason the developers put the themes in a different folder than the rest of the content, and it is then linked over. That is why you see me also defining a specific bucket for the themes.
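
If you're curious, once the container is up you can see the linking for yourself:

cd /docker/ghost
docker compose exec ghost ls -l /var/lib/ghost/content/themes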

We will also create an external configuration file for all the settings:

nano /docker/ghost/config.production.json

Paste the following inside; you can find all the configuration variables here:

{
  "url": "http://YOUR_DOMAIN",
  "server": {
    "port": 2368,
    "host": "0.0.0.0"
  },
  "database": {
    "client": "mysql",
    "connection": {
      "host": "DBHOST",
      "user": "GHOST_DBUSER",
      "password": "GHOST_DBPASS",
      "database": "GHOST_DB"
    }
  },
  "mail": {
    "from": "SMTP_SERVER",
    "transport": "Direct",
    "options": {
      "service": "SMTP_SERVICE_NAME",
      "host": "SMTP_HOST",
      "port": 587,
      "secure": false,
      "requireTLS": true,
      "auth": {
        "user": "SMTP_USER",
        "pass": "SMTP_PASS"
      }
    }
  },
  "logging": {
    "transports": [
      "file",
      "stdout"
    ]
  },
  "process": "systemd",
  "paths": {
    "contentPath": "/var/lib/ghost/content"
  },
  "logging": {
    "path": "/var/lib/logs",
    "useLocalTime": true,
    "level": "info",
    "rotation": {
      "enabled": true,
      "count": 15,
      "period": "1d"
    },
    "transports": ["stdout", "file"]
  }
}

Make sure to back up your Ghost instance and database first, just in case. Also make sure to put the contents of the content folder into the S3 bucket you configured earlier (if you have an existing Ghost instance, don't forget to copy the themes that live in /var/lib/ghost/current/content/themes of the Ghost install into their own bucket).
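
Assuming rclone is configured with the same minio remote on the old server, the upload can look like this (the source paths are hypothetical - use your install's actual locations):

rclone copy /var/lib/ghost/content minio:ghost-content --progress
rclone copy /var/lib/ghost/current/content/themes minio:ghost-themes --progress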

(Screenshot: Ghost content uploaded to the S3 bucket)

Starting your containers

Let’s start the container. This may take some time because of the mounting of the S3 volume.

cd /docker/ghost
docker compose up -d && docker compose logs -f

(Screenshot: Ghost container startup)

Everything looks good so far! That means we can go ahead and start the Caddy instance (make sure nginx is disabled if you are running everything on one server). We are looking for the lines in red in the log, which mean that certificates were obtained successfully and Caddy started.

cd /docker/caddy
docker compose up -d && docker compose logs -f

(Screenshot: TLS certificate obtained by Caddy)

We now have a working server with Ghost backed by an S3 volume - what do we have left? All that is left is to repeat these steps on the other servers you want to configure, and to move your original server to this configuration. If you were running a “bare-metal” install, meaning you installed Ghost on your OS rather than in Docker, you’ll need to stop Ghost and disable nginx on startup.

Disabling the original Ghost and nginx

This is a simple one. We are not going to delete anything yet, as we want to keep it as a sort of backup for a while, until we feel more comfortable running in this configuration.

sudo systemctl stop ghost_<your site name>.service
sudo systemctl disable ghost_<your site name>.service
sudo systemctl stop nginx.service
sudo systemctl disable nginx.service

What’s next?

Next up is geo-balancing your hosts according to where visitors are coming from! Check out this post on how I achieve that AND failover with Technitium DNS.
