PostgreSQL


By far, the majority of my databases run in PostgreSQL, and I use the Bitnami Helm chart to deploy it in my Kubernetes cluster. One of the nice things about the Bitnami chart is that it includes a section that will create a pg_dump CronJob for you. I simply configure this in the chart and then let my Rclone backups copy the dumps to offsite storage. We only need to modify the backup portion of the chart's values. Below is an example of what I have done; for brevity, I'm showing only the relevant edits I've made in this section.

...
backup:
  ## @param backup.enabled Enable the logical dump of the database "regularly"
  enabled: true
...
    ## @param backup.cronjob.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if backup.cronjob.resources is set (backup.cronjob.resources is recommended for production).
    ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
    ##
    resourcesPreset: "nano"
...     
    storage:
      ## @param backup.cronjob.storage.enabled Enable using a `PersistentVolumeClaim` as backup data volume 
      ##
      enabled: true
      ## @param backup.cronjob.storage.existingClaim Provide an existing `PersistentVolumeClaim` (only when `architecture=standalone`)
      ## If defined, PVC must be created manually before volume will be bound
      ##
      existingClaim: "backup-1"
      ## @param backup.cronjob.storage.resourcePolicy Setting it to "keep" to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart deleted
      ##
      resourcePolicy: ""
      ## @param backup.cronjob.storage.storageClass PVC Storage Class for the backup data volume
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ## set, choosing the default provisioner.
      ##
      storageClass: ""
      ## @param backup.cronjob.storage.accessModes PV Access Mode
      ##
      accessModes:
        - ReadWriteMany
      ## @param backup.cronjob.storage.size PVC Storage Request for the backup data volume
      ##
      size: 700Gi
      ## @param backup.cronjob.storage.annotations PVC annotations
      ##
      annotations: {}
      ## @param backup.cronjob.storage.mountPath Path to mount the volume at
      ##
      mountPath: /backup/pgdump
      ## @param backup.cronjob.storage.subPath Subdirectory of the volume to mount at
      ## and one PV for multiple services.
      ##
      subPath: ""
      ## Fine tuning for volumeClaimTemplates
      ##
      volumeClaimTemplates:
        ## @param backup.cronjob.storage.volumeClaimTemplates.selector A label query over volumes to consider for binding (e.g. when using local volumes)
        ## A label query over volumes to consider for binding (e.g. when using local volumes)
        ## See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#labelselector-v1-meta for more details
        ##
        selector: {}
    ## @param backup.cronjob.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the backup container
    ##
    extraVolumeMounts: []
    ## @param backup.cronjob.extraVolumes Optionally specify extra list of additional volumes for the backup container
    ##
    extraVolumes: []
...

I'm using an existing PVC I created named backup-1 for this example. You can create your own PersistentVolumeClaim, or define the volume directly in the chart by configuring backup.cronjob.extraVolumeMounts and backup.cronjob.extraVolumes.
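If you'd rather not pre-create a PVC, the extraVolumes/extraVolumeMounts route looks roughly like the sketch below. This is an assumption-laden example, not my actual config: the NFS server address, export path, and volume name are all placeholders you would replace with your own.

```yaml
# Hypothetical example: mount an NFS export directly into the backup container
# instead of using backup.cronjob.storage.existingClaim.
backup:
  cronjob:
    extraVolumes:
      - name: pgdump-nfs          # placeholder volume name
        nfs:
          server: 10.0.0.50       # placeholder NFS server
          path: /exports/pgdump   # placeholder export path
    extraVolumeMounts:
      - name: pgdump-nfs
        mountPath: /backup/pgdump # must match where the dump job writes
```

Either approach works; the existing-claim route just keeps the volume's lifecycle independent of the Helm release.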

Once this is deployed, a job will run daily at midnight and create a dump of the PostgreSQL server, including all databases, roles, and other configuration, to the volume you've specified, which you can then back up offsite.
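When you eventually need to restore, the dump is a plain SQL file, so psql can replay it. A rough sketch, assuming a pod named backup-shell with the backup volume mounted and a dump file written to /backup/pgdump (the exact filename the CronJob produces will differ, so list the directory first):

```shell
# List the dumps on the backup volume to find the one you want
kubectl exec -it backup-shell -- ls -lh /backup/pgdump

# Replay the chosen dump against the server; DUMP_FILE is a placeholder
kubectl exec -it backup-shell -- \
  psql -h my-postgresql -U postgres -f /backup/pgdump/DUMP_FILE
```

Because the dump includes roles and database definitions, restoring into a fresh, empty server is the simplest path; restoring over existing objects will produce "already exists" errors you'd need to clean up first.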