
WordPress Containerized with SQLite

Building a WordPress site in a container, backed by SQLite

What’s this about?

I have a handful of WordPress sites I’ve built for friends and family over the years. They’ve each been built and deployed in different ways as my development skills changed over that timeframe, which means each time I need to revisit one of the sites, I have to relearn how it’s deployed and how to make updates.

I want to rethink how I deploy these WordPress sites in a way that is

  • modern
  • repeatable
  • reliable
  • cheap

Currently, each site has its own dedicated VM, and all pieces of the stack run on that VM. None of them have backups, so if I lose any of the VMs, I’m screwed. On some of them, I’ve been doing development in production, and I can’t guarantee that the parts I maintain are even stored in a repository.

So the solution I hope to create here

  • Makes the runtime ephemeral
  • Keeps state in a more reliable location
  • Treats the environment as “immutable”, where changes must be committed to the repository and redeployed.

The New Stack

Runtime

Rather than a VM-based deployment, I’ve chosen to containerize each site. Here are a few reasons I prefer to containerize these sites going forward:

  • The runnable artifact is easily portable.
  • The production environment is easily repeatable locally.
  • There’s a wide variety of tooling around containers that I enjoy.
  • Builds are fast.

By using containers, I’m pushed to treat the runtime as more ephemeral than I would with a VM. Treating the containers as though they won’t live for long encourages better practices around the stability of the stateful portions of the site. Realistically, this is how I should’ve been treating my VMs too, but I naively never bothered. A brand new architecture provides a wide-open opportunity to push myself toward better practices.

Database

WordPress requires a MySQL database. I don’t have anything against MySQL, but operating it requires more thought than handling an SQLite database. SQLite is not supported by WordPress, but the community has created shims that add SQLite support. I’m using this one without issue so far. Since it’s not officially supported, there’s certainly some risk of errors, but I’m okay with that.

SQLite isn’t necessarily the best solution, particularly for a container-based architecture. For one thing, it limits your scaling options to vertical scaling. It also means I can’t deploy a highly available system; if the container crashes, the site remains down until a new container replaces it.

For my purposes, however, these are not concerns I worry about. I’m primarily running portfolio and brochure websites with low traffic. If there’s downtime, that’s okay with me. No one is dying or losing sleep over this kind of downtime.

The SQLite database that the application makes use of does reside directly on the container, which means if the container goes down, the database disappears with it. This is obviously not a good scenario.

To alleviate that, I’m using Litestream. Litestream continuously streams SQLite changes to a variety of external storage backends. Litestream becomes the primary process in the container, and starts Apache as a subprocess to handle requests to the WordPress site. On startup, Litestream restores the database from external storage, then continues to replicate the database back to external storage. So when a container goes down, I have a copy of the database for the next container to start from.

Uploads

With the container runtime being treated as ephemeral, the uploads directory can’t live directly on the container. Just as with the SQLite database, if not handled differently, each time the container is removed, all of the uploads would disappear with it.

Uploads are a little easier to handle than the database, since managing assets in an ephemeral environment is a more common problem than the issues we get with SQLite; any highly available application needs to think about it. The common solution, which I employ here, is to put uploads in an object store separate from the container. HumanMade’s S3 Uploads plugin for WordPress does exactly this for me. Uploaded files get pushed to an S3-compatible object store, and when links to an upload are generated, the plugin rewrites the URL to point at the object store.

A few caveats

Composer

Since none of the sites I’m migrating have used Composer, I’d rather not add another component to the new stack to be concerned about. One reason to consider Composer is to install WordPress for me, but the official WordPress container image already does that. The other reason is to install plugins, but I’m comfortable installing plugins directly into the container image at build time. In my opinion, that’s even simpler than using Composer, because I don’t need to learn how to do things “the Composer way”. I can just copy files around and call it a day.

Auto-Updating

Automatic updates to WordPress core, plugins, or third-party themes are pointless here, since those updates will be lost when the container is replaced. The correct way to update those dependencies is to change the versions in the Containerfile and rebuild/redeploy.

I’ve specifically built these sites so that non-technical users can update content themselves, without having to call me up. I need a way to communicate to them: “Just don’t bother updating these - I’ll handle it.” With the base assumption that they’ll forget anything I tell them, it’s better to find a way in the code to prevent them from performing updates or from seeing that anything is outdated. I don’t do so in this post, but it’s worth exploring different options to manage this.
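
One option worth exploring: WordPress ships constants that disable auto-updates and hide the update/install UI from the admin screens entirely. Set in wp-config.php, they would look roughly like this (a sketch of an approach this post does not actually use):

```php
<?php
// Candidate wp-config.php additions (not used in this post):
// stop core from auto-updating, since versions belong in the Containerfile...
define( 'AUTOMATIC_UPDATER_DISABLED', true );
// ...and remove the plugin/theme install and update UI from wp-admin entirely.
define( 'DISALLOW_FILE_MODS', true );
```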

The idea is for the code in the container to never change from what’s built into the image, so that you know redeploying the image won’t have undesired side effects from using different versions of packages.

Step by Step

This step by step assumes an empty repository as a starting point. See the final repository here.

Podman/Docker
I use the podman and podman-compose CLI tools for interacting with container images. They are mostly interchangeable with the docker and docker-compose CLI tools, so wherever you see podman, you should be able to substitute docker without issue.
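
If you only have the Docker tools installed, a pair of shell aliases makes the rest of this post's commands copy-pasteable as-is (assuming docker and docker-compose are on your PATH):

```shell
# Map the podman command names used throughout this post onto Docker's CLI.
alias podman='docker'
alias podman-compose='docker-compose'
```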

Database

Action

Create two files: Containerfile and wp-config.php. These install and configure the SQLite shim.

Containerfile:

FROM docker.io/alpine:latest as downloader
RUN apk add curl
RUN curl https://raw.githubusercontent.com/aaemnnosttv/wp-sqlite-db/v1.1.0/src/db.php -o /db.php

FROM docker.io/wordpress:5.8.1-php7.4-apache
COPY --from=downloader /db.php /usr/src/wordpress/wp-content/db.php
COPY ./wp-config.php /usr/src/wordpress/wp-config.php

wp-config.php:

<?php 

/** Absolute path to the WordPress directory. */
if ( ! defined( 'ABSPATH' ) ) {
	define( 'ABSPATH', __DIR__ . '/' );
}

define('DB_DIR', ABSPATH . 'wp-content/database');
define('DB_FILE', getenv('DB_FILE') ?: 'db');
$table_prefix  = 'wp_';

/**
 * If we're behind a proxy server and using HTTPS, we need to alert WordPress of that fact
 * see also https://wordpress.org/support/article/administration-over-ssl/#using-a-reverse-proxy
 */
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
    $_SERVER['HTTPS'] = 'on';
}
if (isset($_SERVER['HTTP_X_FORWARDED_HOST'])) {
    $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_X_FORWARDED_HOST'];
}

/** Sets up WordPress vars and included files. */
require_once ABSPATH . 'wp-settings.php';

Current Directory Structure:

.
├── Containerfile
└── wp-config.php
Description

The Containerfile installs the SQLite shim and copies our wp-config.php file into the image. A few Containerfile practices I employ that are worth noting, in case you’re unfamiliar:

  1. Containerfile is just a Dockerfile, but employs a vendor-agnostic naming convention.
  2. Uses a Multistage Build. See tip below for more info.
  3. Uses the fully qualified image name. Podman on my machine expects the fully qualified image name, whereas Docker assumes docker.io when the registry isn’t specified; Docker will still read this fine. docker.io is the registry for images you’ll find on Docker Hub.

The shim works out of the box. It will create a database file in wp-content/database. I’ve set the DB_DIR and DB_FILE constants in wp-config.php explicitly for two reasons:

  1. I don’t want to revisit the shim’s source code in the distant future to find the default location of my database.
  2. Later in the Step by Step, we’ll use Litestream to copy the database into the container at startup. To keep the database location consistent between the shim and Litestream, it’s safer to define it explicitly.

The wp-config.php also has some boilerplate I borrowed from the default config file that the WordPress image uses when one isn’t provided.

Disappearing Database
Be aware that the database will be lost when the container stops. Persistence will be addressed later.
Multistage Build

A multistage build, in concept, creates several images from which you can inherit artifacts. When a Containerfile contains several image definitions, we refer to the images as stages. Each stage can grab artifacts from stages defined earlier in the file. The reason to do this here is to keep the final image as minimally additive to the base WordPress image as possible.

In the current situation, my final image does not need curl installed, but I need curl or a similar tool to download a remote file. There are other ways to get remote files into an image, but later we’ll add some plugins which need to be unzipped. We can add any other tools we need, such as unzip, do the work of unzipping archives in this earlier stage, then copy only the things we need into the final image. That reduces the installed packages in the final image, and we don’t need to concern ourselves with cleaning up interim artifacts.

Here’s why I think it’s important to be minimally additive to the final image:

  1. I trust the maintainers of the official WordPress image to invest more resources into the security of the image than I am able to. Installing new packages has the potential to introduce unpatched security issues into my image.
  2. The smaller that I can keep the final image in size, the faster my production environment can download the image and start running it. A significant chunk of startup time of an image on a fresh production node is getting a copy of the image to read. Fewer bytes = less time to download.
Verify

Make sure that a SQLite database file is created and populated. The database file is created immediately, but isn’t populated until you’ve completed the site setup.

Startup

podman build -t wp -f Containerfile .
podman run -d -p 8080:80 --name wp wp

Setup

Visit http://localhost:8080/wp-admin and set up the site.

Inspect

podman cp wp:/var/www/html/wp-content/database/db .
sqlite3 db

You should be able to poke around the database and see that it was populated!
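
For example, here are the kinds of sanity queries I’d run (table names assume the default wp_ prefix from wp-config.php). To keep the snippet self-contained it builds a tiny stand-in database first; against the real copied file you’d run only the SELECT:

```shell
# Stand-in database for illustration -- replace demo.db with the copied 'db' file.
sqlite3 demo.db "CREATE TABLE wp_options (option_name TEXT, option_value TEXT);"
sqlite3 demo.db "INSERT INTO wp_options VALUES ('siteurl', 'http://localhost:8080');"
# The query you'd actually run: confirm the site URL WordPress stored at setup.
sqlite3 demo.db "SELECT option_value FROM wp_options WHERE option_name = 'siteurl';"
```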

Cleanup:

podman stop wp && podman rm wp

Theme

For the sake of simplicity, this Step by Step will copy a free theme, the SKT Software theme, into the repository.

Action

“Create” a theme:

curl -O https://downloads.wordpress.org/theme/skt-software.3.0.zip
unzip skt-software.3.0.zip
rm skt-software.3.0.zip

Add the theme to the Containerfile

--- a/Containerfile
+++ b/Containerfile
@@ -4,4 +4,5 @@ RUN curl https://raw.githubusercontent.com/aaemnnosttv/wp-sqlite-db/v1.1.0/src/d
 
 FROM docker.io/wordpress:5.8.1-php7.4-apache
 COPY --from=downloader /db.php /usr/src/wordpress/wp-content/db.php
+COPY ./skt-software /var/www/html/wp-content/themes/skt-software
 COPY ./wp-config.php /usr/src/wordpress/wp-config.php

Current Directory Structure:

.
├── Containerfile
├── skt-software
│   ├── ...
└── wp-config.php
Description

For the sake of simplicity in the Step by Step, I’m electing to use a free theme, rather than create one. There’s nothing special about the one I chose.

The main idea is to create a directory for your theme, and keep all theme files there. Don’t pollute the root directory with theme files; leave that for build and config files.

Notice that the SQLite shim and wp-config.php went into /usr/src/wordpress/, but the theme is going into /var/www/html/. When the container starts, the entrypoint copies files from /usr/src/wordpress/ to /var/www/html/. That allows us to copy themes and plugins directly into where they’ll be served from, but any other WordPress files we want to modify need to be placed in the source directory to be copied in at run time.

Disappearing Database
Be aware that the database will be lost when the container stops. Persistence will be addressed later.
Verify

Startup

podman build -t wp -f Containerfile .
podman run -d -p 8080:80 --name wp wp

Setup

  1. Visit http://localhost:8080/wp-admin and set up the site.
  2. Under Appearance > Themes, select the SKT Software theme.

Inspect

Visit the frontend at http://localhost:8080/, and see the theme in use.

Cleanup:

podman stop wp && podman rm wp

Plugins

Adding a single third-party plugin for demonstration.

Action

Install plugin from wordpress.org:

--- a/Containerfile
+++ b/Containerfile
@@ -1,8 +1,12 @@
 FROM docker.io/alpine:latest as downloader
-RUN apk add curl
+RUN apk add curl unzip
 RUN curl https://raw.githubusercontent.com/aaemnnosttv/wp-sqlite-db/v1.1.0/src/db.php -o /db.php
+RUN curl -O https://downloads.wordpress.org/plugin/advanced-custom-fields.5.10.2.zip \
+    && unzip advanced-custom-fields.5.10.2.zip \
+    && rm advanced-custom-fields.5.10.2.zip
 
 FROM docker.io/wordpress:5.8.1-php7.4-apache
 COPY --from=downloader /db.php /usr/src/wordpress/wp-content/db.php
+COPY --from=downloader /advanced-custom-fields /var/www/html/wp-content/plugins/advanced-custom-fields
 COPY ./skt-software /var/www/html/wp-content/themes/skt-software
 COPY ./wp-config.php /usr/src/wordpress/wp-config.php

Current Directory Structure:

.
├── Containerfile
├── skt-software
│   ├── ...
└── wp-config.php
Description

Installing a plugin simply means taking the plugin directory and copying it into /var/www/html/wp-content/plugins/. The plugin can come from anywhere: your repository, a remote repository, wordpress.org, etc.

In this case, I’m installing Advanced Custom Fields from wordpress.org. Notice in the top-right of the page, there’s a Download button. This is a direct link to the zip archive of the plugin.

Disappearing Database
Be aware that the database will be lost when the container stops. Persistence will be addressed later.
Verify

Startup

podman build -t wp -f Containerfile .
podman run -d -p 8080:80 --name wp wp

Setup

  1. Visit http://localhost:8080/wp-admin and set up the site.
  2. Under Appearance > Themes, select the SKT Software theme.
  3. Under Plugins, activate the Advanced Custom Fields plugin.

Cleanup:

podman stop wp && podman rm wp

Local Development

Action

Mount the theme directory:

podman run -d -p 8080:80 --name wp -v "$(pwd)/skt-software":/var/www/html/wp-content/themes/skt-software wp
Description
By mounting the theme directory, any changes you make to the theme directory will immediately be reflected in the container, and thus on localhost:8080.
Disappearing Database
Be aware that the database will be lost when the container stops. Persistence will be addressed later.
Verify

Setup

  1. Visit http://localhost:8080/wp-admin and set up the site.
  2. Under Appearance > Themes, select the SKT Software theme.
  3. Visit the homepage: http://localhost:8080/

Inspect

Make some change to the theme files. I changed the output of footer.php.

--- a/skt-software/footer.php
+++ b/skt-software/footer.php
@@ -41,11 +41,11 @@ if ( is_active_sidebar( 'fc-1' ) || is_active_sidebar( 'fc-2' ) || is_active_sid
 <div id="copyright-area">
 <div class="copyright-wrapper">
 <div class="container">
-     <div class="copyright-txt"><?php esc_html_e('SKT Software','skt-software'); ?></div>
+     <div class="copyright-txt"><?php esc_html_e('SKT Software - also, hello!','skt-software'); ?></div>
      <div class="clear"></div>
 </div>           
 </div>
 </div><!--end .footer-wrapper-->
 <?php wp_footer(); ?>
 </body>
-</html>
\ No newline at end of file
+</html>^M

Refresh the homepage, and see the footer change.

Cleanup:

podman stop wp && podman rm wp

Data Persistence

Minio

Action

Create a directory for the database, and don’t commit it:

mkdir -p .local/database

.gitignore

.local/database

Create an entrypoint to create the bucket

.local/makebucket-entrypoint.sh

#!/bin/sh

/usr/bin/mc config host add --quiet --api s3v4 local http://minio:9000 minioadmin minioadmin;
/usr/bin/mc mb --quiet local/litestream/;
/usr/bin/mc policy set public local/litestream;
chmod +x .local/makebucket-entrypoint.sh

Define containers:

compose.yml

version: '3'

services:
  minio:
    command: server /data --console-address :9001
    container_name: minio
    image: quay.io/minio/minio:latest
    ports:
      - 9000:9000
      - 9001:9001
    volumes:
      - ./.local/database:/data
  makebucket:
    depends_on:
      - minio
    image: quay.io/minio/mc:latest
    volumes:
      - ./.local/makebucket-entrypoint.sh:/entrypoint
    entrypoint: /entrypoint

Current Directory Structure:

.
├── .gitignore
├── .local
│   ├── database
│   ├── makebucket-entrypoint.sh
├── compose.yml
├── Containerfile
├── skt-software
│   ├── ...
└── wp-config.php
Description

To persist data when the WordPress container shuts down, we’ll use Litestream. Litestream is a tool built specifically for SQLite replication, and supports numerous backends to replicate to. For local development, we’ll use Minio in a separate container to emulate an S3 bucket.

Here, we create a compose file, since we’re going to coordinate multiple containers together. In this section, I only want to get Minio working locally and make sure I can persist data, and in the next section we can add the WordPress container as well.

There are two containers being created here. The first we’re calling minio, which is the Minio server process that will host the local S3 bucket. The second is makebucket, which uses an image that provides Minio’s mc command. mc is a CLI tool similar to the aws s3 CLI. The container remains alive only long enough to do some startup work, then it exits.

In this case, I want to make sure that a bucket is created for us to put objects into, and that those objects persist on container restarts.

Note that I’ve added the entrypoint script as a file, though I could have put it directly in the compose.yml file since it’s only used for local development. docker-compose is able to parse that type of inline entrypoint just fine, but podman-compose is not able to at the time of this writing. Rather than fight the issue in podman-compose, it’s easier to make the entrypoint a separate file.
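
For reference, the inline form that docker-compose accepts would look roughly like the following sketch (the same mc commands, expressed as a list-style entrypoint; again, podman-compose couldn’t parse this at the time of writing):

```yaml
makebucket:
  depends_on:
    - minio
  image: quay.io/minio/mc:latest
  entrypoint:
    - /bin/sh
    - -c
    - |
      /usr/bin/mc config host add --quiet --api s3v4 local http://minio:9000 minioadmin minioadmin
      /usr/bin/mc mb --quiet local/litestream/
      /usr/bin/mc policy set public local/litestream
```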

So the chain of operations right now is: the minio container starts first, then the makebucket container starts and runs the mc commands to create a bucket for us to use.

Verify

Setup

podman-compose -f compose.yml up

Inspect

  • Add a file to the bucket
  • Restart the containers
    
    podman-compose -f compose.yml down && podman-compose -f compose.yml up
    
  • Check that the file is still in the bucket
  • Validate the volume mount
    
    ls .local/database/litestream
    

Cleanup:

podman-compose -f compose.yml down

Litestream

Action

litestream.yml

dbs:
  - path: /var/www/html/wp-content/database/db
    replicas:
      - type: s3
        bucket: litestream
        path:   wp
        region: us-east-1
        endpoint: http://localhost:9000

entrypoint.sh

#!/usr/bin/env bash

set -ex

# Restore the database if it does not already exist.
if [ -f /var/www/html/wp-content/database/db ]; then
    echo "Database already exists, skipping restore"
else
    echo "No database found, restoring from replica if exists"
    mkdir -p /var/www/html/wp-content/database
    litestream restore -v -if-replica-exists -config /etc/litestream.yml /var/www/html/wp-content/database/db
    chown -R www-data:www-data /var/www/html/wp-content/database
fi

# Run litestream with your app as the subprocess.
# docker-entrypoint.sh is copied by the wordpress base image and set as the ENTRYPOINT
# apache2-foreground is the default CMD
exec litestream replicate -config /etc/litestream.yml -exec "docker-entrypoint.sh apache2-foreground"

Containerfile

--- a/Containerfile
+++ b/Containerfile
@@ -4,9 +4,16 @@ RUN curl https://raw.githubusercontent.com/aaemnnosttv/wp-sqlite-db/v1.1.0/src/d
 RUN curl -O https://downloads.wordpress.org/plugin/advanced-custom-fields.5.10.2.zip \
     && unzip advanced-custom-fields.5.10.2.zip \
     && rm advanced-custom-fields.5.10.2.zip
+RUN curl -OL https://github.com/benbjohnson/litestream/releases/download/v0.3.5/litestream-v0.3.5-linux-amd64-static.tar.gz \
+    && tar -C / -xzf litestream-v0.3.5-linux-amd64-static.tar.gz
 
 FROM docker.io/wordpress:5.8.1-php7.4-apache
+COPY --from=downloader /litestream /usr/local/bin/litestream
 COPY --from=downloader /db.php /usr/src/wordpress/wp-content/db.php
 COPY --from=downloader /advanced-custom-fields /var/www/html/wp-content/plugins/advanced-custom-fields
+COPY --chown=www-data:www-data ./litestream.yml /etc/litestream.yml
+COPY --chown=www-data:www-data ./entrypoint.sh /scripts/entrypoint.sh
 COPY ./skt-software /var/www/html/wp-content/themes/skt-software
 COPY ./wp-config.php /usr/src/wordpress/wp-config.php
+
+ENTRYPOINT ["/scripts/entrypoint.sh"]

compose.yml

25
--- a/compose.yml
+++ b/compose.yml
@@ -1,6 +1,22 @@
 version: '3'
 
 services:
+  wp:
+    build:
+      context: .
+      dockerfile: Containerfile
+    container_name: wp
+    depends_on:
+      - makebucket
+    ports:
+      - 8080:80
+    environment:
+      - LITESTREAM_ACCESS_KEY_ID=minioadmin
+      - LITESTREAM_SECRET_ACCESS_KEY=minioadmin
+    volumes:
+      - ./skt-software:/var/www/html/wp-content/themes/skt-software
+      - ./.local/wp-entrypoint.sh:/local-entrypoint
+    command: /local-entrypoint
   minio:
     command: server /data --console-address :9001
     container_name: minio

.local/wp-entrypoint.sh

#!/bin/sh

until curl -f http://localhost:9000/minio/health/live; do
    >&2 echo "minio api not available - sleeping"
    sleep 1
done

>&2 echo "minio api available; running entrypoint"
/scripts/entrypoint.sh

Current Directory Structure:

.
├── .gitignore
├── .local
│   ├── database
│   ├── makebucket-entrypoint.sh
│   ├── wp-entrypoint.sh
├── compose.yml
├── Containerfile
├── entrypoint.sh
├── litestream.yml
├── skt-software
│   ├── ...
└── wp-config.php
Description

Using Litestream isn’t an overly complicated ordeal, but there are several files being touched for this portion.

The first is litestream.yml. There’s nothing in the config file that couldn’t be passed on the CLI, but since one of my goals is to be able to revisit the project long into the future with few knowledge gaps, it’s better to have a structured definition. The config file is pretty self-explanatory.

WordPress gets a new entrypoint, which is responsible for restoring the existing database, and replicating any new changes made to the database. It also starts the default entrypoint as a subprocess. A few notes about entrypoint.sh:

  • The conditional allows you to place your own database file into the container, but note that Litestream will replicate it to the external storage.
  • The chown ensures that the Apache user is able to utilize the database.
  • The last line of entrypoint.sh begins replication of the database, and with the -exec flag, we start a subprocess, which is the WordPress image’s default entrypoint.

Containerfile has been updated to install Litestream, and add the new Litestream config and entrypoint that were just created.

Rather than starting the WordPress container directly as we have been so far, we add it to the compose.yml to start along with Minio. This has the added benefit of declaring some startup order, to make sure that the makebucket container has started before WordPress starts.

However, there’s a bit of a race condition between the wp and makebucket containers. depends_on only waits for the depended-upon container to start, not necessarily to be ready. wp requires the bucket to have been created, and it generally starts faster for me than makebucket is able to complete. This is really only an issue during local development, so I don’t want to add extra logic directly to entrypoint.sh. Rather, I continue the pattern of adding local-only workarounds to .local/.

So for local development, there’s also a wrapper around the WordPress entrypoint named .local/wp-entrypoint.sh. It pings the health endpoint on the minio container until it returns a success code, then it begins the normal entrypoint. This lets the root directory code look like it should for production, while local development still works. Notice in compose.yml that we override the entrypoint defined in the Containerfile to use .local/wp-entrypoint.sh, which is mounted into the container at run time.

Why not use a volume mount?

For offline persistence, the easy answer would be for the container to mount a volume, and persist the database file there. There are two reasons I wouldn’t do that in this case.

The first is that the WordPress image runs the Apache webserver as user www-data, but any volume mount I’ve tried to add is owned by root. So when www-data tries to create or update an SQLite file in the volume mount, it fails with permission errors. This is an old issue in the Moby project (Docker), and the same issue exists in Podman. It’s been around for so long because it’s not an easy problem to solve. The GitHub issue contains a couple of workarounds, my favorite being starting another container to change ownership of the mount on startup. However, in our case, there’s a second reason not to use volume mounts, and for that reason, I’d prefer to just avoid the issue altogether.
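
For illustration, that workaround would look something like the following hypothetical compose service, which I don’t use in this post (33 is www-data’s UID/GID inside the Debian-based WordPress image):

```yaml
# Hypothetical init-style container: fix ownership of a shared volume
# before WordPress writes to it. Not part of this post's stack.
fix-perms:
  image: docker.io/alpine:latest
  command: chown -R 33:33 /data
  volumes:
    - dbdata:/data
```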

The second reason is that we’re using Litestream for data persistence in production. For consistency between environments, it makes sense to run Litestream for local development as well. You could certainly persist your local database to a cloud vendor’s bucket, but for local development it’s easier to start a Minio container and create a bucket there, so that you can develop offline.

Verify

Setup:

Start with a clean slate, to verify that everything works when there is no existing database.

rm -rf .local/database
mkdir -p .local/database
podman-compose -f compose.yml down
podman-compose -f compose.yml up --build

Visit http://localhost:8080/wp-admin and set up the site. (The database is not saved/created until you’ve completed setup.)

Inspect

See that the database was persisted to Minio

Visit http://localhost:9001/buckets/litestream/browse and see that there’s a directory wp/.

ls .local/database/litestream

Restore a copy of the database locally, and inspect it

  • Install Litestream locally, if you haven’t already
  • Set access variables. Note that I had an issue where my environment already had AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set with actual AWS credentials, and Litestream appeared to favor those environment variables over the LITESTREAM_ variables, so I had to overwrite them in the shell session.
export LITESTREAM_ACCESS_KEY_ID=minioadmin
export LITESTREAM_SECRET_ACCESS_KEY=minioadmin
litestream restore -o wp.db s3://litestream.localhost:9000/wp
sqlite3 wp.db

Reboot the containers to verify that the database is restored

podman-compose -f compose.yml down
podman-compose -f compose.yml up

Visit http://localhost:8080/wp-admin. If you can login, then the database was successfully restored.

Cleanup:

podman-compose -f compose.yml down

Deployability

Action

.local/litestream.yml

dbs:
  - path: /var/www/html/wp-content/database/db
    replicas:
      - type: s3
        bucket: litestream
        path:   wp
        region: us-east-1
        endpoint: http://localhost:9000

litestream.yml

--- a/litestream.yml
+++ b/litestream.yml
@@ -1,8 +1 @@
-dbs:
-  - path: /var/www/html/wp-content/database/db
-    replicas:
-      - type: s3
-        bucket: litestream
-        path:   wp
-        region: us-east-1
-        endpoint: http://localhost:9000
+# Add your production config here

compose.yml

--- a/compose.yml
+++ b/compose.yml
@@ -14,6 +14,7 @@ services:
       - LITESTREAM_ACCESS_KEY_ID=minioadmin
       - LITESTREAM_SECRET_ACCESS_KEY=minioadmin
     volumes:
+      - ./.local/litestream.yml:/etc/litestream.yml
       - ./.local/wp-entrypoint.sh:/local-entrypoint
     command: /local-entrypoint
   minio:

wp-config.php

--- a/wp-config.php
+++ b/wp-config.php
@@ -5,6 +5,9 @@ if ( ! defined( 'ABSPATH' ) ) {
 	define( 'ABSPATH', __DIR__ . '/' );
 }
 
+define('WP_HOME', getenv('SITENAME') ?: 'http://localhost:8080');
+define('WP_SITEURL', getenv('SITENAME') ?: 'http://localhost:8080');
+
 define('DB_DIR', ABSPATH . 'wp-content/database');
 define('DB_FILE', getenv('DB_FILE') ?: 'db');
 $table_prefix  = 'wp_';
Description

What’s been created up to this point is a reproducible container image for running a WordPress site. The container does contain state in the form of an SQLite database, but since that state is streamed to external storage, the container can be handled as though it were stateless.

The only real difference between running this container locally vs. in a production environment is where the SQLite database is replicated to. For local development, I’ve described an offline environment; as long as the image has been pre-built, development can be done entirely offline. For production, we point Litestream at more reliable, online storage.

To make things simpler, we keep two litestream.yml files: one for local development, and one for production. Following the pattern set so far, I placed the local copy at .local/litestream.yml and mounted it into the wp container at run time, overwriting the production file that was copied into the image at build time. Everything else can remain as it was.

There’s also one last change to make your life easier when switching the database over for production: set the site URL and home constants. If those get set in the database, they can cause some annoying-to-debug redirects. Set the SITENAME variable in your deployment to the URL you’ll use in production; it falls back to localhost when not set.
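
The fallback of getenv('SITENAME') ?: 'http://localhost:8080' in wp-config.php mirrors shell default-expansion, which makes the two cases easy to sanity-check (https://example.com stands in for a real production URL):

```shell
# Unset or empty SITENAME falls back to the local development URL.
SITENAME=""
echo "${SITENAME:-http://localhost:8080}"
# A populated SITENAME wins, just like the PHP ?: fallback.
SITENAME="https://example.com"
echo "${SITENAME:-http://localhost:8080}"
```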

Verify

Setup:

podman-compose -f compose.yml up --build

Inspect

Just check that the config file is still what it used to be, which is the contents of what is now .local/litestream.yml

podman exec wp cat /etc/litestream.yml

Cleanup:

podman-compose -f compose.yml down

And that’s it!

See the final repository here

Migration

The original purpose of this exercise was to create a more maintainable deployment process for several existing WordPress sites. I found myself conflating the migration process with the setup process while originally writing the post, so I decided to break it into two separate pieces.

Now that I’ve created a base image, I’m going to outline a few of the common migration tasks I’ve found myself repeating with the sites I’ve handled thus far.

My typical deployment prior to containerization was a LAMP-stack VM which ran all of the site’s components. These were small sites, so I didn’t worry much about offloading assets to object storage, regular backups of the database, etc… So migration means getting everything off the VM and into the new architecture.

The things that I’m most concerned about are the stateful portions of the VM, namely the database and user uploads.

Diffs
Diffs in this Migration portion of the post are relative to the repository created in the step-by-step above.

Database

Action

Install conversion tool:

curl -O https://raw.githubusercontent.com/dumblob/mysql2sqlite/master/mysql2sqlite
chmod +x mysql2sqlite

Optionally, move the executable into your $PATH.

Dump the database:

mysqldump --skip-extended-insert --compact --no-tablespaces --user [USERNAME] --password [DBNAME] > dump.sql

Convert to SQLite:

./mysql2sqlite dump.sql | sqlite3 dump_sqlite.db
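Before pointing Litestream at the converted file, it’s worth a quick sanity check that the conversion produced a readable database (this assumes the sqlite3 CLI is installed; wp_posts is the standard table name given the wp_ prefix). Demonstrated here against a throwaway database so the commands are self-contained; run the same queries against dump_sqlite.db:

```shell
db=demo.db
# Stand-in for the converted database; against the real file, skip this step
sqlite3 "$db" "CREATE TABLE wp_posts (ID INTEGER PRIMARY KEY, post_title TEXT);
               INSERT INTO wp_posts (post_title) VALUES ('Hello world!');"
# Verify the file is a valid, uncorrupted SQLite database
sqlite3 "$db" "PRAGMA integrity_check;"            # prints: ok
# Verify the data actually came across
sqlite3 "$db" "SELECT COUNT(*) FROM wp_posts;"     # prints: 1
rm "$db"
```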

Create replica:

litestream replicate -config ./litestream.yml
Description

The mysql2sqlite tool is the evolution of a few predecessors, which the project’s README credits. It mostly works as documented, but I did hit an issue where my MySQL user didn’t have all of the privileges it needed to produce the dump.

mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces

After a not-very-extensive search through the interwebz, I landed on adding the --no-tablespaces flag, which resolved the error.

I’m assuming you have an existing and valid litestream.yml at this point. If you don’t, you need to create one first or pass the equivalent arguments to the replicate command. Follow the Litestream documentation for more help.

If all went well, you should have a new directory containing the “first generation” of your Litestream replica in the external storage you configured in litestream.yml.

Uploads

Action

Containerfile

--- a/Containerfile
+++ b/Containerfile
@@ -1,6 +1,8 @@
 FROM docker.io/alpine:latest as downloader
 RUN apk add curl unzip
 RUN curl https://raw.githubusercontent.com/aaemnnosttv/wp-sqlite-db/v1.1.0/src/db.php -o /db.php
+RUN curl -OL https://github.com/humanmade/S3-Uploads/releases/download/2.3.0/manual-install.zip \
+    && unzip manual-install.zip -d /s3-uploads
 RUN curl -O https://downloads.wordpress.org/plugin/advanced-custom-fields.5.10.2.zip \
     && unzip advanced-custom-fields.5.10.2.zip \
     && mv advanced-custom-fields /advanced-custom-fields
@@ -10,6 +12,7 @@ RUN curl -OL https://github.com/benbjohnson/litestream/releases/download/v0.3.5/
 FROM docker.io/wordpress:5.8.1-php7.4-apache
 COPY --from=downloader /litestream /usr/local/bin/litestream
 COPY --from=downloader /db.php /usr/src/wordpress/wp-content/db.php
+COPY --from=downloader /s3-uploads /usr/src/wordpress/wp-content/plugins/s3-uploads
 COPY --from=downloader /advanced-custom-fields /var/www/html/wp-content/plugins/advanced-custom-fields
 COPY --chown=www-data:www-data ./litestream.yml /etc/litestream.yml
 COPY --chown=www-data:www-data ./entrypoint.sh /scripts/entrypoint.sh

wp-config.php

--- a/wp-config.php
+++ b/wp-config.php
@@ -9,6 +9,15 @@ define('DB_DIR', ABSPATH . 'wp-content/database');
 define('DB_FILE', getenv('DB_FILE') ?: 'db');
 $table_prefix  = 'wp_';
 
+/** humanmade/S3-Uploads */
+define('S3_UPLOADS_BUCKET', getenv('UPLOADS_BUCKET'));
+define('S3_UPLOADS_REGION', getenv('BUCKET_REGION') ?: 'us-east-1');
+// You can set key and secret directly:
+define('S3_UPLOADS_KEY', getenv('UPLOADS_ACCESS_KEY_ID'));
+define('S3_UPLOADS_SECRET', getenv('UPLOADS_SECRET_ACCESS_KEY'));
+// Or if using IAM instance profiles, you can use the instance's credentials:
+//define('S3_UPLOADS_USE_INSTANCE_PROFILE', true);
+
 /**
  * If we're behind a proxy server and using HTTPS, we need to alert WordPress of that fact
  * see also https://wordpress.org/support/article/administration-over-ssl/#using-a-reverse-proxy

Sync to Object Storage

# export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
aws s3 sync wp-content/uploads s3://[BUCKET]/uploads/
Description

Since WordPress is now containerized and “stateless” (the database is replicated externally, so we can treat the container as though it were stateless), user uploads can’t go into the container. They need to be stored and served externally.

I’m putting my user uploads into an S3 compatible object storage, and using this S3 Uploads plugin for WordPress, which handles uploading to the object storage and rewriting URLs to come from the storage.

For the actual migration from my VM to the object storage, I created a set of credentials specifically for the VM, then deleted them after the sync.

Make sure the bucket is publicly readable.
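On an AWS-style S3 service, public readability can be granted with a bucket policy along these lines (the bucket name is a placeholder; S3-compatible providers vary, so check your provider’s docs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadUploads",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-uploads-bucket/uploads/*"
    }
  ]
}
```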

If you are keeping the Litestream replica in a different bucket than the user uploads (I recommend keeping them separate; it makes bucket permissioning simpler), then you’ll need to inject separate credentials into the container for Litestream and the S3 Uploads plugin.

Litestream can read LITESTREAM_ACCESS_KEY_ID and LITESTREAM_SECRET_ACCESS_KEY, so it’s easy enough to inject credentials under those names for Litestream. However, in my experience, if credentials named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are also present, Litestream will prefer those. So I inject the S3 Uploads credentials under a different prefix (UPLOADS_) and read the matching environment variables in the config file. This way there are no AWS_ environment variables at all, keeping the distinction between Litestream and S3 Uploads credentials clear.
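A sketch of what that separation might look like in the compose file (the service name, bucket name, and region are placeholders; values are interpolated from the host environment):

```yaml
services:
  wp:
    environment:
      # Read natively by Litestream for the database replica
      LITESTREAM_ACCESS_KEY_ID: ${LITESTREAM_ACCESS_KEY_ID}
      LITESTREAM_SECRET_ACCESS_KEY: ${LITESTREAM_SECRET_ACCESS_KEY}
      # Read by wp-config.php and handed to the S3 Uploads plugin
      UPLOADS_ACCESS_KEY_ID: ${UPLOADS_ACCESS_KEY_ID}
      UPLOADS_SECRET_ACCESS_KEY: ${UPLOADS_SECRET_ACCESS_KEY}
      UPLOADS_BUCKET: my-uploads-bucket
      BUCKET_REGION: us-east-1
      # Deliberately no AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
```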