MariaDB


In addition to Postgres, some deployments will only leverage MySQL or MariaDB. I have opted to use the MariaDB Helm chart from Bitnami to deploy a central MariaDB instance for these deployments. Unfortunately, at the time of this writing, the chart does not have the same backup section that the Postgres chart does. However, we can easily create our own backup job.

For this CronJob we'll set up an extra NFS volume and volumeMount in the Bitnami Helm chart, and we'll grant the account created by the chart the privileges needed to dump every database. Finally, we'll place a shell script on that same volume so it can be executed from the pod.

This example shows the configuration applied to the secondary server in a replication architecture; if you are using a standalone configuration you will need to adjust accordingly. Also, because this is a shared instance, the account created by the chart is not tied to an actual database, so I'm using that user for the backups.

First, we need to adjust the values.yaml for the Helm chart. We're going to define the user and password in the configuration so the MariaDB client can run without challenging for credentials, and we're also going to mount our backup volume.

secondary:
...
  configuration: |-
    ...        
    [client]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    default-character-set=UTF8
    user={ { .Values.auth.username }}
    password={ { .Values.auth.password }}
    ...
...
  ## @param secondary.extraVolumes Optionally specify extra list of additional volumes to the MariaDB secondary pod(s)
  ##
  extraVolumes: #[]
    - name: backup
      persistentVolumeClaim:
        claimName: backup-2
  ## @param secondary.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the MariaDB secondary container(s)
  ##
  extraVolumeMounts: #[]
    - name: backup
      mountPath: /backup

Due to the way Grav interprets code blocks, there is a space between the two { characters above; remove it in your actual values.yaml.

The user= and password= lines in the client portion of the configuration reference the user created in the auth: portion of the chart. You can substitute static values if you are creating a different user for your backups. I'm also using an existing Persistent Volume Claim named backup-2 in this example; you can substitute your own PVC or define whatever storage you need.
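
Before upgrading the release, it's worth confirming that claim exists and is bound; a quick check, assuming the MariaDB release runs in the database namespace:

kubectl get pvc backup-2 -n database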

Update your MariaDB Helm installation so those changes are present, and then we can move on to preparing the user in MariaDB.
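
A minimal sketch of that upgrade, assuming the Bitnami repository is added as bitnami, the release is named mariadb, and it lives in the database namespace:

helm upgrade mariadb bitnami/mariadb -n database -f values.yaml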

Using Adminer or another database administration tool of your choice, we need to make some changes to the created user. You could also create a new user if needed.

GRANT SELECT, SHOW VIEW, EVENT, TRIGGER, RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT ON *.* TO `dbuser`@`%`;
FLUSH PRIVILEGES;

Now the dbuser account has the necessary privileges to create the backup of all databases.
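
If you would rather verify from the pod itself, a quick sanity check (a sketch, assuming the secondary pod is named mariadb-secondary-0 in the database namespace and the [client] credentials above are in place):

kubectl exec -n database mariadb-secondary-0 -c mariadb -- mariadb -e "SHOW GRANTS"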

We also need to create a simple shell script that will back up all the databases. I added a command to remove any files older than 30 days; adjust this number to your needs. This file is stored at the root of the PVC that is mounted as /backup on the secondary server in the Helm chart.

mariadb-backup.sh

#!/bin/bash

# Dump every database into a timestamped file on the backup volume
mariadb-dump --all-databases > /backup/mariadb/backup-$(date '+%Y-%m-%d-%H-%M').sql

# Prune backups older than 30 days
find /backup/mariadb -type f -mtime +30 -delete

Don't forget to make this script executable with a chmod 755.
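
If you wrote the script locally rather than directly on the NFS share, one way to copy it onto the volume and mark it executable (again assuming the secondary pod is mariadb-secondary-0 in the database namespace):

kubectl cp mariadb-backup.sh database/mariadb-secondary-0:/backup/mariadb-backup.sh -c mariadb
kubectl exec -n database mariadb-secondary-0 -c mariadb -- chmod 755 /backup/mariadb-backup.sh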

Now we can create the service account (with a role and binding) and the CronJob to execute the above backup script daily. The Role and RoleBinding are created in the database namespace, where the MariaDB pod runs, so that the service account in the backup namespace is allowed to exec into that pod.

00-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: backup
  labels:
    name: backup

01-config.yaml

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: database
  name: kubectl-exec
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - ""
  resources:
  - pods/exec
  verbs:
  - create

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubectl-exec
  namespace: database
subjects:
- kind: ServiceAccount
  name: sa-kubectl-exec
  namespace: backup
roleRef:
  kind: Role
  name: kubectl-exec
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-kubectl-exec
  namespace: backup

02-cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  labels:
    app: mariadb-backup
  name: mariadb-backup
  namespace: backup
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-kubectl-exec
          containers:
            - name: mariadb-backup
              image: bitnami/kubectl:latest
              imagePullPolicy: Always
              env:
                - name: TZ
                  value: "America/NewYork"
              volumeMounts:
                - name: kubectl-config
                  mountPath: /.kube
              resources:
                limits:
                  cpu: '500m'
                  memory: 100Mi
                requests:
                  cpu: 50m
                  memory: 10Mi
              command: 
                - "/bin/sh"
                - "-c"
                - "kubectl exec -it -n database mariadb-secondary-0 -c mariadb -- /backup/mariadb-backup.sh"
          restartPolicy: OnFailure
          volumes:
            - name: kubectl-config
              secret:
                secretName: kubectl-config
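
The CronJob also expects a secret named kubectl-config in the backup namespace containing a kubeconfig with access to the database namespace. One way to create it (a sketch that reuses a local kubeconfig; the key name config makes the file appear at /.kube/config where the volume is mounted, and for production you should scope those credentials down):

kubectl create secret generic kubectl-config -n backup --from-file=config="$HOME/.kube/config"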

I like to create a shell script to deploy these files, as it makes it easy to redeploy any changes I make.

build-mariadb-backup.sh

#!/bin/bash

kubectl apply -f 00-namespace.yaml \
              -f 01-config.yaml \
              -f 02-cronjob.yaml

Now we can execute the above bash script to deploy the CronJob.
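
To test it without waiting for the schedule, you can trigger a one-off run from the CronJob (the job name here is arbitrary):

kubectl create job --from=cronjob/mariadb-backup mariadb-backup-manual -n backup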

This will create the backup files. You'll now want to use something like Rclone to send these files to an offsite location.
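
A sketch of such an offsite copy, assuming an Rclone remote named offsite is already configured and the command runs on a host that mounts the same NFS share:

rclone sync /backup/mariadb offsite:mariadb-backups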