💾 Backup and Restore

Here are the storage backends worth backing up:
PostgreSQL
Laputa Storage & MongoDB
Curity config
S3
PostgreSQL
Backup
You can run a Kubernetes CronJob to dump the database. Set these parameters in the values.yaml:
postgresql:
  backup:
    ## @param backup.enabled Enable the logical dump of the database "regularly"
    enabled: false
    cronjob:
      ## @param backup.cronjob.schedule Set the cronjob parameter schedule
      schedule: '@daily'
      ## @param backup.cronjob.timeZone Set the cronjob parameter timeZone
      timeZone: ''
      ## @param backup.cronjob.command Set backup container's command to run
      command:
        - /bin/bash
        - -c
        - PGPASSWORD="${PGPASSWORD:-$(< "$PGPASSWORD_FILE")}" pg_dumpall --clean --if-exists --load-via-partition-root --quote-all-identifiers --no-password --file="${PGDUMP_DIR}/pg_dumpall-$(date '+%Y-%m-%d-%H-%M').pgdump"
      storage:
        ## @param backup.cronjob.storage.enabled Enable using a `PersistentVolumeClaim` as backup data volume
        ##
        enabled: true
        ## @param backup.cronjob.storage.existingClaim Provide an existing `PersistentVolumeClaim` (only when `architecture=standalone`)
        ## If defined, PVC must be created manually before volume will be bound
        ##
        existingClaim: ''
        ## @param backup.cronjob.storage.resourcePolicy Setting it to "keep" to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted
        ##
        resourcePolicy: ''
        ## @param backup.cronjob.storage.storageClass PVC Storage Class for the backup data volume
        ## If defined, storageClassName: <storageClass>
        ## If set to "-", storageClassName: "", which disables dynamic provisioning
        ## If undefined (the default) or set to null, no storageClassName spec is
        ## set, choosing the default provisioner.
        ##
        storageClass: ''
        ## @param backup.cronjob.storage.accessModes PV Access Mode
        ##
        accessModes:
          - ReadWriteOnce
        ## @param backup.cronjob.storage.size PVC Storage Request for the backup data volume
        ##
        size: 8Gi
        ## @param backup.cronjob.storage.annotations PVC annotations
        ##
        annotations: {}
        ## @param backup.cronjob.storage.mountPath Path to mount the volume at
        ##
        mountPath: /backup/pgdump
        ## @param backup.cronjob.storage.subPath Subdirectory of the volume to mount at
        ## and one PV for multiple services.
        ##
        subPath: ''
        ## Fine tuning for volumeClaimTemplates
        ##
        volumeClaimTemplates:
          ## @param backup.cronjob.storage.volumeClaimTemplates.selector A label query over volumes to consider for binding (e.g. when using local volumes)
          ## A label query over volumes to consider for binding (e.g. when using local volumes)
          ## See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#labelselector-v1-meta for more details
          ##
          selector: {}
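Once the chart is deployed with backup.enabled set to true, it creates a CronJob. A quick way to check it and trigger an immediate run (the CronJob name is an assumption; list the CronJobs first and adjust):

# List the CronJobs created by the chart
kubectl get cronjob -n <namespace>

# Trigger a one-off backup without waiting for the schedule
kubectl create job --from=cronjob/<backup cronjob name> manual-pg-backup -n <namespace>

# Follow the dump
kubectl logs -n <namespace> job/manual-pg-backup -f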
Restore
Restoring data to a database may overwrite existing records and result in data loss if not performed carefully. Ensure that a full backup is taken prior to any restore operation. Proceed only if you fully understand the implications of the restoration process. The authors of this document are not responsible for any data loss or system issues resulting from improper use.
Find a way to connect to the database. You can forward the service port if you have admin access to the Kubernetes cluster:
kubectl port-forward -n <namespace> svc/toucan-stack-postgresql-primary 5432:5432
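If the dump file you want to restore is still on the backup PVC created by the CronJob, copy it to your machine first. A minimal sketch, assuming the PVC is named toucan-stack-postgresql-backup and using a throwaway pod called pgdump-reader (adjust both names to your setup):

# Spin up a temporary pod that mounts the backup PVC
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pgdump-reader
spec:
  restartPolicy: Never
  containers:
    - name: reader
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: backup
          mountPath: /backup/pgdump
  volumes:
    - name: backup
      persistentVolumeClaim:
        claimName: toucan-stack-postgresql-backup
EOF

# Copy the dump locally, then clean up
kubectl cp <namespace>/pgdump-reader:/backup/pgdump/<backup file> ./<backup file>
kubectl delete pod pgdump-reader -n <namespace>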
Restore the backup using the command:
psql -h localhost -p 5432 -U toucan-admin -d "$DATABASE" -f "<backup file>" -v ON_ERROR_STOP=1
MongoDB
This guide walks you through the steps required to back up and restore Toucan's MongoDB database from outside Kubernetes.
A better approach is to run the dump from a Kubernetes CronJob, as sketched below: you won't need to run the command manually or forward the port.
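A minimal sketch of such a CronJob, assuming the bitnami/mongodb image (for the mongodump binary), the secret and service names used in the commands below, and a pre-created PVC named mongodb-backup-pvc for the archives:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongodb-backup
  namespace: toucan
spec:
  schedule: '@daily'
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: mongodump
              image: bitnami/mongodb:latest # assumption: any image shipping mongodump works
              env:
                - name: MONGODB_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: toucan-stack-mongodb
                      key: mongodb-root-password
              command:
                - /bin/bash
                - -c
                - mongodump --uri="mongodb://admin:${MONGODB_ROOT_PASSWORD}@toucan-stack-mongodb:27017" --gzip --archive="/backup/mongodump-$(date '+%Y-%m-%d-%H-%M').gz"
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: mongodb-backup-pvc # assumption: create this PVC beforehand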
Backup
Get the MongoDB credentials:
kubectl get secret --namespace toucan toucan-stack-mongodb -o jsonpath='{.data.mongodb-root-password}' | base64 --decode
Forward the MongoDB port if you have admin access to the Kubernetes cluster:
kubectl port-forward -n <namespace> svc/toucan-stack-mongodb 27017:27017
Mongo dump:
mongodump --uri="mongodb://admin:<password>@localhost:27017" --gzip --archive="<backup file>"
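To avoid pasting the password on the command line, you can read it from the secret into a shell variable first and reuse it (the archive name here is just an example):

MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace toucan toucan-stack-mongodb \
  -o jsonpath='{.data.mongodb-root-password}' | base64 --decode)

mongodump --uri="mongodb://admin:${MONGODB_ROOT_PASSWORD}@localhost:27017" --gzip --archive="mongodump-$(date '+%Y-%m-%d').gz"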
Restore
Get the MongoDB credentials:
kubectl get secret --namespace toucan toucan-stack-mongodb -o jsonpath='{.data.mongodb-root-password}' | base64 --decode
Forward the MongoDB port if you have admin access to the Kubernetes cluster:
kubectl port-forward -n <namespace> svc/toucan-stack-mongodb 27017:27017
Mongo restore:
mongorestore --verbose --uri="mongodb://admin:<password>@localhost:27017" --gzip --archive="<backup file>"
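As a quick sanity check after the restore, you can list the databases, for example with mongosh if it is installed alongside mongorestore:

mongosh "mongodb://admin:<password>@localhost:27017" --eval 'db.adminCommand({ listDatabases: 1 })'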
Persistent Volume backup (Laputa Storage, MongoDB, Curity config, PostgreSQL)
Persistent Volume backups are filesystem snapshots, not logical backups.
Backup
Because you are running on Kubernetes, a storage provisioner was used to provision the volume for Laputa. Most storage provisioners can back up the data automatically.
See your cloud provider's or the storage provisioner's documentation for more information.
Here's a non-exhaustive list of storage provisioners with backup capabilities:
OVH, GKE, EKS, ... via their web UI.
You can also look at external-snapshotter if you want a Kubernetes-native way to handle snapshots. Some providers implement its API:
EKS - CSI Snapshot Controller (they run their own in-house CSI snapshot controller).
There is no VolumeSnapshot support for the local-path-provisioner. (You should simply snapshot the host volume via the web UI of your cloud provider.)
external-snapshotter is not periodic: VolumeSnapshot objects must be created manually.
It is worth noting that there is probably an open source project offering this feature, like this one.
To create a snapshot using external-snapshotter:
Create a VolumeSnapshotClass (if not present):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cinder-snapclass-in-use-v1
deletionPolicy: Delete
driver: cinder.csi.openstack.org # OVH or OpenStack
parameters: # Read the CSI driver documentation for more information
  force-create: 'true'
You can verify your snapshot class by using kubectl get volumesnapshotclass.
Create a VolumeSnapshot:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgresql-snapshot
  namespace: toucan
spec:
  volumeSnapshotClassName: csi-cinder-snapclass-in-use-v1
  source:
    persistentVolumeClaimName: postgresql-pvc
After creating the snapshot, you should get a VolumeSnapshotContent.
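You can check both objects and wait for the snapshot to become ready (the jsonpath form of kubectl wait requires a reasonably recent kubectl):

kubectl get volumesnapshot -n toucan postgresql-snapshot
kubectl get volumesnapshotcontent
kubectl wait --for=jsonpath='{.status.readyToUse}'=true volumesnapshot/postgresql-snapshot -n toucan --timeout=5m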
Restore
You can restore a snapshot using the Web UI of your cloud provider.
If you are using external-snapshotter, to restore a snapshot you need to reference the VolumeSnapshot in the PersistentVolumeClaim. Set these parameters in the values to restore the snapshot:
postgresql:
  persistence:
    enabled: true
    storageClass: csi-cinder-high-speed
    dataSource:
      name: postgresql-snapshot
      kind: VolumeSnapshot
      apiGroup: snapshot.storage.k8s.io
    size: 10Gi
If you fear losing data, remember to set the reclaim policy (persistentVolumeReclaimPolicy) to Retain on the PersistentVolume. You should delete the existing StatefulSet and PersistentVolumeClaim before upgrading the chart, as shown in the sketch below. If the reclaim policy is set to Delete, the previous data will be lost.
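A possible sequence for that preparation, assuming the default resource names of the chart in the toucan namespace (check yours with kubectl get pv,pvc,statefulset first):

# Keep the underlying volume even after its PVC is deleted
kubectl patch pv <pv name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# Remove the old StatefulSet and PVC so the chart can recreate them from the snapshot
kubectl delete statefulset -n toucan toucan-stack-postgresql-primary
kubectl delete pvc -n toucan data-toucan-stack-postgresql-primary-0

# Upgrade the release with the dataSource values shown above
helm upgrade toucan-stack <chart> -n toucan -f values.yaml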