Backup and Restore

Here are the storage systems worth backing up:
PostgreSQL
Laputa Storage & MongoDB
Curity config
S3
PostgreSQL
Backup
You can run a Kubernetes CronJob to dump the database. Set these parameters in the values.yaml:
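A minimal sketch of what those values might look like, assuming a chart that ships a pg_dump CronJob; the postgresql.backup.* key names here are hypothetical, so check your chart's values reference for the real ones:

```yaml
# Hypothetical values.yaml keys -- check your chart's values reference
# for the actual names. This illustrates a typical pg_dump CronJob.
postgresql:
  backup:
    enabled: true
    schedule: "0 2 * * *"   # standard cron syntax: every day at 02:00
    retention: 7            # number of dumps to keep before rotating
    storage:
      size: 10Gi            # PVC where the dumps are written
```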
Restore
Restoring data to a database may overwrite existing records and result in data loss if not performed carefully. Ensure that a full backup is taken prior to any restore operation. Proceed only if you fully understand the implications of the restoration process. The authors of this document are not responsible for any data loss or system issues resulting from improper use.
Find a way to connect to the database. You can forward the service port if you have admin access to the Kubernetes cluster:
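For example, assuming the PostgreSQL Service is named postgresql and listens on 5432 (adjust the namespace and service name to your deployment):

```bash
# Forward the PostgreSQL service to localhost:5432.
kubectl port-forward -n <namespace> svc/postgresql 5432:5432
```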
Restore the backup using the command:
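A sketch with placeholder credentials, assuming a plain-SQL dump named backup.sql (use pg_restore instead for custom-format dumps):

```bash
# Restore a plain-text dump created with pg_dump.
psql -h 127.0.0.1 -p 5432 -U <user> -d <database> -f backup.sql

# For a custom-format dump (created with pg_dump -Fc), use pg_restore:
pg_restore -h 127.0.0.1 -p 5432 -U <user> -d <database> --clean backup.dump
```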
MongoDB
This guide walks you through the steps required to back up and restore Toucan's MongoDB database from outside of Kubernetes.
A better approach is to use a Kubernetes CronJob to dump the database, in which case you won't need to forward the port or run the backup command manually.
Backup
Get the MongoDB credentials:
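For example, assuming the credentials live in a Secret named mongodb under a mongodb-root-password key (both names vary by chart):

```bash
# Decode the root password from the Kubernetes Secret.
kubectl get secret -n <namespace> mongodb \
  -o jsonpath='{.data.mongodb-root-password}' | base64 -d
```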
Forward the MongoDB port if you have admin access to the Kubernetes cluster:
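For example, assuming the Service is named mongodb:

```bash
# Forward the MongoDB service to localhost:27017.
kubectl port-forward -n <namespace> svc/mongodb 27017:27017
```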
Mongo dump:
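A sketch of the dump through the forwarded port, using the credentials retrieved above (username and password are placeholders):

```bash
# Dump all databases to ./dump.
mongodump --host 127.0.0.1 --port 27017 \
  --username root --password <password> \
  --authenticationDatabase admin \
  --out ./dump
```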
Restore
Get the MongoDB credentials:
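Same as for the backup, assuming the Secret is named mongodb:

```bash
kubectl get secret -n <namespace> mongodb \
  -o jsonpath='{.data.mongodb-root-password}' | base64 -d
```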
Forward the MongoDB port if you have admin access to the Kubernetes cluster:
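Again, assuming the Service is named mongodb:

```bash
kubectl port-forward -n <namespace> svc/mongodb 27017:27017
```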
Mongo restore:
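A sketch of the restore from the ./dump directory produced by mongodump; note that --drop replaces existing collections, so take a backup first:

```bash
# Restore all databases from ./dump, dropping existing collections first.
mongorestore --host 127.0.0.1 --port 27017 \
  --username root --password <password> \
  --authenticationDatabase admin \
  --drop ./dump
```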
Garage S3
Check out the Garage documentation for backup and restore.
Persistent Volume backup (Laputa Storage, Garage S3, MongoDB, Curity config, PostgreSQL)
Persistent Volume backups are filesystem snapshots, and not logical backups.
Backup
Since you are running on Kubernetes, a storage provisioner was used to provision Laputa's volume. Most storage provisioners can back up the data automatically.
See either your cloud provider or the storage provisioner documentation for more information.
Here's a non-exhaustive list of storage provisioners with backup capabilities:
Cloud provider CSI drivers, including OVH, GKE, EKS, ... (snapshots via the provider's web UI).
You can also look at external-snapshotter if you want a Kubernetes-native way to handle snapshots. Some providers implement its API:
EKS - CSI Snapshot Controller (AWS maintains its own in-house CSI snapshot controller).
There is no VolumeSnapshot support for the local-path-provisioner. (You should simply snapshot the host volume via the web UI of your cloud provider.)
external-snapshotter is not periodic: each VolumeSnapshot must be created manually.
It is worth noting that there are probably open source projects that automate this, like this one.
To create a snapshot using external-snapshotter:
Create a VolumeSnapshotClass (if not present):
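For example (this sketch uses the AWS EBS CSI driver as an illustration; set driver to whatever your provisioner uses):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
# The CSI driver handling your volumes (illustrative value).
driver: ebs.csi.aws.com
# Retain keeps the underlying snapshot even if the Kubernetes object is deleted.
deletionPolicy: Retain
```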
You can verify your snapshot class by using kubectl get volumesnapshotclass.
Create a VolumeSnapshot:
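For example, assuming a PVC named laputa-storage and the snapshot class created above (both names are illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: laputa-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    # The PVC to snapshot (illustrative name).
    persistentVolumeClaimName: laputa-storage
```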
After creating the snapshot, you should get a VolumeSnapshotContent.
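You can check both objects with kubectl (names follow the example above):

```bash
kubectl get volumesnapshot laputa-snapshot
kubectl get volumesnapshotcontent
```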
Restore
You can restore a snapshot using the Web UI of your cloud provider.
If you are using external-snapshotter, restore a snapshot by referencing the VolumeSnapshot in the PersistentVolumeClaim. Set these parameters in the values to restore the snapshot:
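A minimal sketch, assuming the chart exposes the PVC configuration under a persistence key (the exact path depends on your chart); the essential part is the dataSource stanza, which tells Kubernetes to pre-populate the new volume from the snapshot:

```yaml
# Hypothetical values.yaml keys -- check your chart's values reference.
persistence:
  dataSource:
    name: laputa-snapshot             # the VolumeSnapshot to restore from
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```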
If you fear losing data, remember to set the persistentVolumeReclaimPolicy to Retain on the PersistentVolume. You should delete the existing StatefulSet and PersistentVolumeClaim before upgrading the chart. If the reclaim policy is set to Delete, the previous data will be lost.