# Configure persistence

## Parameters

To configure persistence for our Helm Charts, each component of the stack exposes a `persistence` field.

Here's the list of fields:

{% code title="yaml: values.override.yaml" %}

```yaml
laputa:
  persistence: # ...

curity:
  admin:
    persistence: # ...

postgresql:
  primary:
    persistence: # ...

mongodb:
  persistence: # ...

garage:
  persistence:
    meta:
      # ...
    data:
      # ...

layout-redis:
  master:
    persistence: # ...

laputa-redis:
  master:
    persistence: # ...

impersonate-redis:
  master:
    persistence: # ...

dataexecution-redis:
  master:
    persistence: # ...
```

{% endcode %}

Here, you can enable or disable persistence for each component based on the needs of your deployment. For example, most Redis instances don't need to be persisted, so their persistence can be disabled.
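As a sketch, an override that keeps the databases persistent but disables persistence for two of the Redis instances might look like this (the component names come from the list above; the sizes are illustrative and should be adjusted to your deployment):

{% code title="yaml: values.override.yaml" %}

```yaml
postgresql:
  primary:
    persistence:
      enabled: true
      size: 8Gi

mongodb:
  persistence:
    enabled: true
    size: 8Gi

layout-redis:
  master:
    persistence:
      enabled: false

laputa-redis:
  master:
    persistence:
      enabled: false
```

{% endcode %}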

Here's a common list of options for the `persistence` field:

{% code title="yaml: values.override.yaml: laputa > persistence" overflow="wrap" %}

```yaml
## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
##
persistence:
  ## @param laputa.persistence.enabled Enable persistence using Persistent Volume Claims
  ##
  enabled: true
  ## @param laputa.persistence.storageClass Storage class of backing PVC
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: ''
  ## @param laputa.persistence.annotations Persistent Volume Claim annotations
  ##
  annotations: {}
  ## @param laputa.persistence.accessModes Persistent Volume Access Modes
  ##
  accessModes:
    - ReadWriteOnce
  ## @param laputa.persistence.size Size of data volume
  ##
  size: 8Gi
  ## @param laputa.persistence.existingClaim The name of an existing PVC to use for persistence
  ##
  existingClaim: ''
  ## @param laputa.persistence.selector Selector to match an existing Persistent Volume for the data PVC
  ## If set, the PVC can't have a PV dynamically provisioned for it
  ## E.g.
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  ##
  selector: {}
  ## @param laputa.persistence.dataSource Custom PVC data source
  ##
  dataSource: {}
```

{% endcode %}

{% hint style="info" %}
`subPath` and `mountPath` have been purposely omitted from the example above. They are used internally to debug the Helm Chart.

We do not recommend editing `subPath` and `mountPath`.
{% endhint %}

## Dynamic provisioning

To dynamically provision storage, we use a `StorageClass`. A `StorageClass` is a Kubernetes resource defined by a storage provisioner, which is itself provided by your cloud provider or your Kubernetes administrator.

You can list the available storage classes with:

{% code title="shell" overflow="wrap" %}

```shell
kubectl get sc
```

{% endcode %}
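On a cluster where the Rancher local-path provisioner is installed (k3s ships it by default), the output looks roughly like this; names, defaults, and ages will vary with your cluster:

{% code title="shell" overflow="wrap" %}

```
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  24h
```

{% endcode %}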

For the sake of this example, we will use `local-path` as our storage provisioner.

Your `StorageClass` should look like this:

{% code title="yaml" overflow="wrap" %}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

{% endcode %}

{% hint style="info" %}
To inspect the resource definition, you can run:

{% code title="shell" overflow="wrap" %}

```shell
kubectl get sc local-path -o yaml
```

{% endcode %}
{% endhint %}

{% tabs %}
{% tab title="Using the PVC template" %}
Behind the scenes of the Helm Chart, `persistence` is actually a `PersistentVolumeClaimTemplate`. This is used to generate a `PersistentVolumeClaim` for each replica of the StatefulSet.

To request a volume, simply set:

{% code title="yaml: values.override.yaml" %}

```yaml
persistence:
  storageClass: local-path
  size: 8Gi
```

{% endcode %}

Upon deployment, the `StatefulSet` will create a `PersistentVolumeClaim` with the requested size, which will trigger the creation of a `PersistentVolume`.
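As an illustration, the claim generated for the first replica would look roughly like the following. The exact name depends on the chart's StatefulSet and volume claim template names, so treat `data-laputa-0` as a placeholder:

{% code title="yaml" overflow="wrap" %}

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-laputa-0 # Placeholder: <template-name>-<statefulset-name>-<ordinal>
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```

{% endcode %}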
{% endtab %}

{% tab title="Using an existing PVC" %}
If you find the PVC template too limiting, you can always create your own PVC and reference it through the `existingClaim` field.

{% code title="yaml: pvc.yaml" overflow="wrap" %}

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```

{% endcode %}

Apply it:

{% code title="shell" overflow="wrap" %}

```shell
kubectl apply -f pvc.yaml
```

{% endcode %}

Then, set the `existingClaim` field:

{% code title="yaml: values.override.yaml" %}

```yaml
persistence:
  existingClaim: my-pvc
```

{% endcode %}

Feel free to check the available parameters of a `PersistentVolumeClaim` in the [Kubernetes documentation - Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims).
{% endtab %}
{% endtabs %}

## Static provisioning

{% hint style="info" %}
We **highly** recommend dynamic provisioning in production for its ease of use. You can always fall back to static provisioning if you need to.
{% endhint %}

For the sake of this example, we will use `hostPath` with a node selector. If you are interested in other options, check out [Container Storage Interface (CSI) Drivers](https://kubernetes-csi.github.io/docs/drivers.html).

To statically provision a volume, you need to create a `PersistentVolume`:

{% code title="yaml: pv.yaml" overflow="wrap" %}

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-volume-0
  labels:
    app: my-app
spec:
  capacity:
    storage: 8Gi
  hostPath:
    path: /path/to/host/volume
    type: DirectoryOrCreate
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain # Highly recommended in static provisioning
  storageClassName: '' # Set an empty string to disable dynamic provisioning
  volumeMode: Filesystem
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - controller-0 # The name of the node
```

{% endcode %}

{% hint style="info" %}
We bind the `PersistentVolume` to a specific node since the data is local to that node.

If you need an alternative that supports data migration, use block storage provided by your cloud provider or by your own storage infrastructure.
{% endhint %}

Apply it:

{% code title="shell" overflow="wrap" %}

```shell
kubectl apply -f pv.yaml
```

{% endcode %}

{% tabs %}
{% tab title="Using the PVC template" %}
Simply set:

{% code title="yaml: values.override.yaml" %}

```yaml
persistence:
  selector:
    matchLabels:
      app: my-app # Must match the labels of the PV
  storageClass: '-'
```

{% endcode %}
{% endtab %}

{% tab title="Using an existing PVC" %}
You will probably want to claim the volume with a `PersistentVolumeClaim`:

{% code title="yaml: pvc.yaml" overflow="wrap" %}

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: ''
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  volumeName: my-volume-0 # Must match the name of the PV
  # Alternatively, you can also set a selector.
```

{% endcode %}

Apply it:

{% code title="shell" overflow="wrap" %}

```shell
kubectl apply -f pvc.yaml
```

{% endcode %}

Then, set the `existingClaim` field:

{% code title="yaml: values.override.yaml" %}

```yaml
persistence:
  existingClaim: my-pvc
```

{% endcode %}
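As an alternative to pinning `volumeName`, you can match the `PersistentVolume` by label instead; a sketch of the same PVC, assuming the `app: my-app` label set on the PV above:

{% code title="yaml: pvc.yaml" overflow="wrap" %}

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: ''
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  selector:
    matchLabels:
      app: my-app # Must match the labels of the PV
```

{% endcode %}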
{% endtab %}
{% endtabs %}
