⚙️ Configure persistence

Parameters

Each component of the stack exposes a persistence field in our Helm Charts.

Here's the list of fields:

yaml: values.override.yaml
laputa:
  persistence: # ...

curity:
  admin:
    persistence: # ...

postgresql:
  primary:
    persistence: # ...

mongodb:
  persistence: # ...

layout-redis:
  master:
    persistence: # ...

laputa-redis:
  master:
    persistence: # ...

impersonate-redis:
  master:
    persistence: # ...

Persistence can be enabled or disabled per component, depending on the needs of the deployment. For example, most Redis instances don't need to be persisted, so their persistence can be disabled.
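
A minimal sketch of such an override, keeping laputa persisted while disabling persistence for one of the Redis instances (the values are illustrative, adapt them to your deployment):

yaml: values.override.yaml
laputa:
  persistence:
    enabled: true
    size: 8Gi

layout-redis:
  master:
    persistence:
      enabled: false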

Here's a common list of options for the persistence field:

yaml: values.override.yaml: laputa > persistence
## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
##
persistence:
  ## @param laputa.persistence.enabled Enable persistence using Persistent Volume Claims
  ##
  enabled: true
  ## @param laputa.persistence.storageClass Storage class of backing PVC
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: ''
  ## @param laputa.persistence.annotations Persistent Volume Claim annotations
  ##
  annotations: {}
  ## @param laputa.persistence.accessModes Persistent Volume Access Modes
  ##
  accessModes:
    - ReadWriteOnce
  ## @param laputa.persistence.size Size of data volume
  ##
  size: 8Gi
  ## @param laputa.persistence.existingClaim The name of an existing PVC to use for persistence
  ##
  existingClaim: ''
  ## @param laputa.persistence.selector Selector to match an existing Persistent Volume for the data PVC
  ## If set, the PVC can't have a PV dynamically provisioned for it
  ## E.g.
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  ##
  selector: {}
  ## @param laputa.persistence.dataSource Custom PVC data source
  ##
  dataSource: {}

subPath and mountPath have been purposely omitted from the example above. They are used internally to debug the Helm Chart.

We do not recommend editing subPath and mountPath.

Dynamic provisioning

To dynamically provision storage, we use a StorageClass. A StorageClass is a Kubernetes resource that references a storage provisioner, which is itself provided by your cloud provider or by your Kubernetes administrator.

You can list the available StorageClasses with:

shell
kubectl get sc
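
The output looks something like this (illustrative, the available classes depend on your cluster):

shell
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  24h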

For the sake of the example, we will use the local-path StorageClass, which is backed by Rancher's Local Path Provisioner.

Your StorageClass should look like this:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

To inspect the resource definition, you can run:

shell
kubectl get sc local-path -o yaml

Behind the scenes, the Helm Chart implements persistence as a PersistentVolumeClaimTemplate, which generates one PersistentVolumeClaim per replica of the StatefulSet.
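
As an illustration, here is roughly what such a template looks like inside a StatefulSet spec (structure only; the template name data is hypothetical and chart-internal):

yaml
apiVersion: apps/v1
kind: StatefulSet
# ...
spec:
  # ...
  volumeClaimTemplates:
    - metadata:
        name: data # hypothetical template name
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: local-path
        resources:
          requests:
            storage: 8Gi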

To request a volume, simply set:

yaml: values.override.yaml
persistence:
  storageClass: local-path
  size: 8Gi

Upon deployment, the StatefulSet will create a PersistentVolumeClaim with the requested size, which will trigger the creation of a PersistentVolume.
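
You can watch the claim and its volume appear with:

shell
kubectl get pvc,pv

Each claim is named after the template and the replica ordinal, e.g. data-laputa-0 for the first replica of a StatefulSet named laputa (hypothetical names, as above).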

Static provisioning

For ease of use, we highly recommend dynamic provisioning in production. You can always fall back to static provisioning if you need to.

For the sake of the example, we will use hostPath with a node selector. If you are interested in other options, check out Container Storage Interface (CSI) drivers.

To statically provision a volume, you need to create a PersistentVolume:

yaml: pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-volume-0
  labels:
    app: my-app
spec:
  capacity:
    storage: 8Gi
  hostPath:
    path: /path/to/host/volume
    type: DirectoryOrCreate
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain # Highly recommended in static provisioning
  storageClassName: '' # Set an empty string to disable dynamic provisioning
  volumeMode: Filesystem
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - controller-0 # The name of the node

We bind the PersistentVolume to the node since the data is local to the node.

If you need an alternative that supports data migration, use block storage provided by your cloud provider or by your own storage infrastructure.

Apply it:

shell
kubectl apply -f pv.yaml
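
The volume should now be listed with a STATUS of Available, waiting for a claim to bind to it:

shell
kubectl get pv my-volume-0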

Then, configure the Helm Chart to select the existing volume:

yaml: values.override.yaml
persistence:
  selector:
    matchLabels:
      app: my-app # Must match the labels of the PV
  storageClass: '-'
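
Upon deployment, the generated PersistentVolumeClaim will match the labeled PersistentVolume and bind to it. You can verify the binding with:

shell
kubectl get pvc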
