⛴️ Deploy Toucan in an Air-Gapped Environment

In this section, we will deploy Toucan in an air-gapped environment using Helm Charts. We'll assume this configuration:

  • Traffic is only exposed internally:

    • The machine can only be contacted through private networks (VPC or VPN).

    • The machine cannot be reached from the internet and is strictly blocked by the firewall.

  • A private DNS is configured to resolve auth-toucan.example.com and toucan.example.com to the machine's IP (see the sketch below).
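
If you do not operate a dedicated private DNS server, a minimal stand-in is an /etc/hosts entry on each client machine. This is only a sketch; 10.0.0.10 is a placeholder for your machine's actual private IP:

shell: user@client
# Map both hostnames to the Toucan machine's private IP (placeholder address).
echo "10.0.0.10 toucan.example.com auth-toucan.example.com" | sudo tee -a /etc/hosts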

This guide assumes a strictly air-gapped environment:

  • First, you work in a networked environment: you have internet access and can download files onto removable storage (e.g. a USB stick).

  • Then, you deploy the Toucan Stack in your air-gapped environment.

Description and additional requirements

This guide does NOT cover the deployment of Kubernetes in an air-gapped environment. If you need it, we recommend reading the k3s - Air-Gap Install guide and using its Private Registry method.

In this guide, we will proceed as follows:

  1. In a networked environment, you have access to the internet and will download files.

  2. In an air-gapped environment, you have no internet access and will need to load these files into Kubernetes:

    1. The Helm Charts will be stored on the deployment server, which contains the tools for installing the Toucan Stack.

    2. The Docker images will be hosted on a local registry.

    3. The deployment will use that local registry to pull the images.

Therefore, you will need:

  • Storage (e.g. a USB stick) to transfer files from the networked environment to the air-gapped environment. The recommended size is 10 GB.

  • After the container images are uploaded to the local registry, their layers are stored uncompressed, so the registry will require at least twice that amount of disk space.

Preparations in the networked environment

1. Download the Private Registry

If you are using minikube, k3s, or another Kubernetes distribution, it may already provide a private registry, or a way to load container images directly into the container runtime. Check the documentation of your Kubernetes distribution.

If not, in this guide, we'll install zot as a private registry.

  1. Download the Helm Chart

    shell: user@home:/work
    helm pull --repo https://zotregistry.dev/helm-charts zot
  2. Download the container image:

shell: user@home:/work
helm template --repo https://zotregistry.dev/helm-charts zot zot --skip-tests | grep 'image:' | awk '{print $2}' | sort | uniq | while read image; do
  # Strip surrounding quotes, then pull and save each image as a gzip-compressed tarball.
  image=$(echo "$image" | sed 's/"//g' | sed "s/'//g")
  echo "Transferring $image"
  docker pull "$image"
  mkdir -p "$(dirname "$image")"
  docker save "$image" | gzip > "$(echo $image | sed 's/:/-/')".tar.gz
done

2. Download Cert-manager

In an air-gapped environment, you will need to handle TLS certificates without using public ACME servers. cert-manager is recommended to automatically handle the PKI for you.

  1. Download the Helm Chart

shell: user@home:/work
helm pull --repo https://charts.jetstack.io/ cert-manager
  2. Download the container images:

shell: user@home:/work
helm template --kube-version 1.31 --repo https://charts.jetstack.io/ cert-manager cert-manager --skip-tests | grep 'image:' | awk '{print $2}' | sort | uniq | while read image; do
  image=$(echo "$image" | sed 's/"//g' | sed "s/'//g")
  docker pull "$image"
done

3. Download Toucan-Stack

  1. Download the Helm Chart

shell: user@home:/work
helm pull oci://quay.io/toucantoco/charts/toucan-stack
  2. Download the container images:

shell: user@home:/work
helm template --set curity.config.license.secretName=dummy toucan-stack oci://quay.io/toucantoco/charts/toucan-stack --skip-tests | grep 'image:' | awk '{print $2}' | sort | uniq | while read image; do
  image=$(echo "$image" | sed 's/"//g' | sed "s/'//g")
  docker pull "$image"
done
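
Note that the two loops above only pull the images into your local Docker daemon. If you also need to move them to the air-gapped environment on removable storage (as done for zot), you can additionally save each pulled image to a compressed tarball. A minimal sketch, assuming every image you need is present and correctly tagged in your local daemon:

shell: user@home:/work
# Save every locally pulled image as a gzip-compressed tarball.
# Filter the list if your daemon also contains unrelated or untagged images.
docker images --format '{{.Repository}}:{{.Tag}}' | while read image; do
  mkdir -p "$(dirname "$image")"
  docker save "$image" | gzip > "$(echo "$image" | sed 's/:/-/')".tar.gz
done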

You should now have every file required for the air-gapped installation!

Installation in the air-gapped environment

1. Deploy the Private Registry

Since there is no registry available to host the container registry's own image (a chicken-and-egg problem), we need to deploy the private registry manually.

  1. Transfer the zot container image file directly onto the Kubernetes node.

shell: user@home:/work
scp ./ghcr.io/project-zot/zot-linux-amd64-v*.tar.gz root@<node-ip>:zot-linux-amd64.tar.gz

If you are in a multi-node setup, do this for every node. Since the registry requires a volume, it is better to pin the registry to a single node using a nodeSelector and to use a local-path/hostPath volume.

  2. Import the image into the container runtime:

shell: root@node-0:/home/user
gunzip zot-linux-amd64.tar.gz

# containerd
ctr --namespace k8s.io image import zot-linux-amd64.tar

# OR, docker
docker load -i zot-linux-amd64.tar

You might need to pass --address to ctr since some distributions move the containerd.sock:

  • Minikube: /run/docker/containerd/containerd.sock

  • k0s: /run/k0s/containerd.sock
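
For example, on k0s the same import becomes:

shell: root@node-0:/home/user
# Same import as above, pointing ctr at k0s's containerd socket.
ctr --address /run/k0s/containerd.sock --namespace k8s.io image import zot-linux-amd64.tar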

  3. Deploy the registry using Helm:

shell: user@home:/work
helm upgrade --install zot ./zot*.tgz \
  --set persistence=true \
  --set pvc.create=true \
  --set pvc.storageClassName=local-path \
  --set service.nodePort=32000 \
  --set nodeSelector."kubernetes\.io/hostname"=node-0
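
Before pushing images, you can check that the registry pod reached the Running state (the exact pod name depends on the chart release):

shell: user@home:/work
# The zot pod should be Running on node-0 before you push images to it.
kubectl get pods -o wide | grep zot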

2. Transfer the images to the Private Registry

  1. Edit /etc/docker/daemon.json to indicate that <node-0 ip>:32000 is not secured by TLS:

json: user@home:/etc/docker/daemon.json
{
  "insecure-registries": ["<node-0 ip>:32000"]
}

And run:

shell: user@home:/work
sudo systemctl restart docker
  2. Tag the images you pulled in the earlier steps for the private registry:

shell: user@home:/work
docker tag <registry>/<repo>/<image>:<tag> <node-0 ip>:32000/<repo>/<image>:<tag>

If you don't remember which images you pulled, you can use the following commands to tag and push all of them to the private registry:

shell: user@home:/work
# Strip the registry hostname (the first path component) so the image can be re-tagged under the private registry.
remove_registry_hostname() {
  image_name="$1"
  slash_count=$(echo "$image_name" | grep -o "/" | wc -l)

  # Images with two or more slashes start with a registry hostname; drop it.
  if [ "$slash_count" -ge 2 ]; then
    echo "$image_name" | cut -d '/' -f2-
  else
    echo "$image_name"
  fi
}

helm template --kube-version 1.31 cert-manager ./cert-manager*.tgz --skip-tests | grep 'image:' | awk '{print $2}' | sort | uniq | while read image; do
  image=$(echo "$image" | sed 's/"//g' | sed "s/'//g")
  docker tag "$image" "<node-0 ip>:32000/$(remove_registry_hostname "$image")"
  docker push "<node-0 ip>:32000/$(remove_registry_hostname "$image")"
done

helm template --set curity.config.license.secretName=dummy toucan-stack ./toucan-stack*.tgz --skip-tests | grep 'image:' | awk '{print $2}' | sort | uniq | while read image; do
  image=$(echo "$image" | sed 's/"//g' | sed "s/'//g")
  docker tag "$image" "<node-0 ip>:32000/$(remove_registry_hostname "$image")"
  docker push "<node-0 ip>:32000/$(remove_registry_hostname "$image")"
done

Replace <node-0 ip> with the IP address of the node where you've deployed the registry.
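
Optionally, verify that the images landed in the registry by querying the standard OCI distribution API, assuming the NodePort is reachable from your workstation:

shell: user@home:/work
# List the repositories now stored in the private registry.
curl http://<node-0 ip>:32000/v2/_catalog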

3. Deploy cert-manager

Deploy cert-manager

Simply run:

shell: user@home:/work
helm upgrade --install \
  cert-manager ./cert-manager*.tgz \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true \
  --set image.repository="<node-0 ip>:32000/jetstack/cert-manager-controller" \
  --set cainjector.image.repository="<node-0 ip>:32000/jetstack/cert-manager-cainjector" \
  --set webhook.image.repository="<node-0 ip>:32000/jetstack/cert-manager-webhook" \
  --set startupapicheck.image.repository="<node-0 ip>:32000/jetstack/cert-manager-startupapicheck"
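
Before configuring issuers, make sure the cert-manager pods are up:

shell: /work/
# The controller, cainjector, and webhook pods should all become Ready.
kubectl get pods --namespace cert-manager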

Configure a TLS certificate issuer

This guide uses cert-manager to manage the TLS certificates for the ingress controller. cert-manager automatically fetches certificates from ACME (Automatic Certificate Management Environment) servers, such as Let's Encrypt.

If you prefer to manage certificates yourself, you can use kubectl to import the secrets directly. However, we strongly recommend using cert-manager to manage the private and public TLS certificates for the ingress controller.

To generate certificates, we need a ClusterIssuer resource. We assume you have no Certificate Authority (CA) in your air-gapped environment, so we will create one.

  1. Create a self-signed issuer:

yaml: /work/selfsigned-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: cert-manager
spec:
  selfSigned: {}

This self-signed issuer will be used to create a Certificate Authority. Deploy it:

shell: /work/
kubectl apply -f selfsigned-issuer.yaml
  2. Create a Certificate for the Certificate Authority:

yaml: /work/ca-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ca-certificate
  namespace: cert-manager
spec:
  secretName: root-ca
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
  isCA: true
  duration: 43800h # 5 years
  renewBefore: 720h # 30 days before expiry
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  subject:
    organizations: [Toucan Toco]
    countries: [FR]
    organizationalUnits: [IT]
    localities: [Paris]
  commonName: Toucan Root CA

Deploy it:

shell: /work/
kubectl apply -f ca-certificate.yaml

You should share the CA's certificate with your users. Use kubectl get secret root-ca -n cert-manager -o jsonpath='{.data.tls\.crt}' | base64 -d to retrieve it.

Users will need to import it in their browser. See this guide.
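
For example, to export the certificate to a file you can distribute (the file name is only a suggestion):

shell: /work/
# Extract the CA certificate from the secret and write it to root-ca.crt.
kubectl get secret root-ca -n cert-manager -o jsonpath='{.data.tls\.crt}' | base64 -d > root-ca.crt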

  3. Create a ClusterIssuer that can be used by the whole cluster to fetch certificates from the CA:

yaml: /work/cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: private-cluster-issuer
spec:
  ca:
    secretName: root-ca

Deploy it:

shell: /work/
kubectl apply -f cluster-issuer.yaml
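
You can confirm that the issuer picked up the root-ca secret before moving on:

shell: /work/
# The ClusterIssuer should report READY=True.
kubectl get clusterissuer private-cluster-issuer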

4. Deploy Toucan Stack

1. Create a namespace

Create a Kubernetes namespace to deploy the Toucan Stack Helm charts in.

shell: /work/
kubectl create namespace toucan

Namespaces are used to avoid name conflicts between different projects. Since we are deploying a stack of services, we can use the same namespace for all of them, and avoid conflicting with your own projects.

2. Deploy the Curity secret

You should have a JSON file in this format:

json: Subscription_YYY-MM-DD.json
{
  "Company": "[email protected]",
  "Edition": "Community",
  "Environment": "",
  "Expires": "2025-12-13",
  "Feature": "",
  "GracePeriod": "30",
  "Groups": "1",
  "Issued": "2024-12-13",
  "Licence": "ey...",
  "License": "ey...",
  "Product": "Identity Server",
  "Tier": "Subscription",
  "Type": "Production",
  "Version": "4.3",
  "expired": false
}

Copy the value from the License or Licence field, and create the secret with:

shell: /work/
# generic: The type of secret to create.
# --namespace: The namespace to create the secret in.
# toucan-curity: The name of the secret to create.
# --from-literal: The key and value of the secret to create.
kubectl create secret generic --namespace toucan toucan-curity \
  --from-literal=license=<value>

Replace <value> with the value from the JSON file, i.e. the License or Licence field.
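
You can verify that the secret exists and contains the license key:

shell: /work/
# The output should list a single "license" data key.
kubectl describe secret toucan-curity --namespace toucan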

3. Deploy the Helm charts

Since we are using Helm, we can patch the necessary values to inject the credentials and secrets. We also need to expose the service to the external network and secure it with TLS.

  1. Create the values file, which will override the default values.

shell: /work/
touch values.override.yaml # You can name it whatever you want
  2. (Optional) For strict air-gapped environments, assuming you have transferred the container images to your air-gapped container registry, add these lines to values.override.yaml to override the registry:

yaml: /work/values.override.yaml
global:
  imageRegistry: 'localhost:32000'
  security:
    allowInsecureImages: true

# Gotenberg doesn't use global.imageRegistry
gotenberg:
  image:
    repository: localhost:32000/gotenberg/gotenberg
  3. Add these lines to disable the password dictionary check, since it requires an internet connection:

yaml: /work/values.override.yaml
curity:
  config:
    credentialPolicy:
      # Since we're air-gapped, we cannot download the dictionary.
      dictionary:
        enabled: false
  4. Add these lines to inject the Curity secret:

yaml: /work/values.override.yaml
# ...

curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license
  5. Add this line to select your storage provisioner:

yaml: /work/values.override.yaml
global:
  # ...
  defaultStorageClass: local-path
# ...

You can fetch the available storage classes with:

shell: /work/
kubectl get storageclass

You should see something like this:

shell: /work/
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path           rancher.io/local-path   Delete          WaitForFirstConsumer   false                  121d
  6. (Optional) Override the volume size:

yaml: /work/values.override.yaml
# ...
laputa:
  persistence:
    size: 10Gi

curity:
  # ...
  admin:
    persistence:
      size: 8Gi

postgresql:
  primary:
    persistence:
      size: 10Gi

mongodb:
  persistence:
    size: 8Gi

NOTE: This is only useful if you are using a storage provisioner that enforces volume sizing. The local-path-provisioner does NOT, so these values have no effect with it, but many cloud provider and block storage provisioners do honor them.

  7. Expose the Toucan Stack by adding these lines:

yaml: /work/values.override.yaml
global:
  # ...

  ## global.hostname configures the helm chart to use toucan.example.com as the "public" domain.
  hostname: toucan.example.com

nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: private-cluster-issuer # private-cluster-issuer references the previously created ClusterIssuer
    tls: true

curity:
  # ...
  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      annotations:
        cert-manager.io/cluster-issuer: private-cluster-issuer # private-cluster-issuer references the previously created ClusterIssuer
      tls: true

Annotations are used by controllers like cert-manager to trigger side effects.

  8. Lastly, you need to inject the CA's certificate into the internal services that use toucan.example.com:

yaml: /work/values.override.yaml
laputa:
  config:
    common:
      REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt

  extraVolumes:
    - name: spicedb-certs
      secret:
        secretName: '{{ template "toucan-stack.spicedb.tls.secretName" . }}'
        items:
          - key: ca.crt
            path: ca.crt
    - name: ca-bundle
      secret:
        secretName: 'toucan.example.com-cert'
        items:
          - key: ca.crt
            path: ca-certificates.crt

  extraVolumeMounts:
    - name: spicedb-certs
      mountPath: /spicedb-certs
    - name: ca-bundle
      mountPath: /etc/ssl/certs

layout:
  extraEnvVars:
    - name: NODE_EXTRA_CA_CERTS
      value: /etc/ssl/certs/ca-certificates.crt

  extraVolumes:
    - name: spicedb-certs
      secret:
        secretName: '{{ template "toucan-stack.spicedb.tls.secretName" . }}'
        items:
          - key: ca.crt
            path: ca.crt
    - name: ca-bundle
      secret:
        secretName: 'toucan.example.com-cert'
        items:
          - key: ca.crt
            path: ca-certificates.crt

  extraVolumeMounts:
    - name: spicedb-certs
      mountPath: /spicedb-certs
    - name: ca-bundle
      mountPath: /etc/ssl/certs

dataset:
  extraEnvVars:
    - name: SSL_CERT_FILE # For httpx
      value: /etc/ssl/certs/ca-certificates.crt
    - name: REQUESTS_CA_BUNDLE # For requests
      value: /etc/ssl/certs/ca-certificates.crt

  extraVolumes:
    - name: spicedb-certs
      secret:
        secretName: '{{ template "toucan-stack.spicedb.tls.secretName" . }}'
        items:
          - key: ca.crt
            path: ca.crt
    - name: ca-bundle
      secret:
        secretName: 'toucan.example.com-cert'
        items:
          - key: ca.crt
            path: ca-certificates.crt

  extraVolumeMounts:
    - name: spicedb-certs
      mountPath: /spicedb-certs
    - name: ca-bundle
      mountPath: /etc/ssl/certs

impersonate:
  extraVolumes:
    - name: ca-bundle
      secret:
        secretName: 'toucan.example.com-cert'
        items:
          - key: ca.crt
            path: ca-certificates.crt

  extraVolumeMounts:
    - name: ca-bundle
      mountPath: /etc/ssl/certs

vault:
  server:
    extraVolumes:
      - name: ca-bundle
        secret:
          secretName: 'toucan.example.com-cert'
          items:
            - key: ca.crt
              path: ca-certificates.crt

    extraVolumeMounts:
      - name: ca-bundle
        mountPath: /etc/ssl/certs/
  9. At this point, your values.override.yaml should look like this (minus the volume size overrides):

yaml: /work/values.override.yaml
global:
  imageRegistry: 'localhost:32000'
  security:
    allowInsecureImages: true
  imagePullSecrets:
    - dockerconfigjson
  defaultStorageClass: local-path
  hostname: toucan.example.com

nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: private-cluster-issuer
    tls: true

laputa:
  config:
    common:
      REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt

  extraVolumes:
    - name: spicedb-certs
      secret:
        secretName: '{{ template "toucan-stack.spicedb.tls.secretName" . }}'
        items:
          - key: ca.crt
            path: ca.crt
    - name: ca-bundle
      secret:
        secretName: 'toucan.example.com-cert'
        items:
          - key: ca.crt
            path: ca-certificates.crt

  extraVolumeMounts:
    - name: spicedb-certs
      mountPath: /spicedb-certs
    - name: ca-bundle
      mountPath: /etc/ssl/certs

layout:
  extraEnvVars:
    - name: NODE_EXTRA_CA_CERTS
      value: /etc/ssl/certs/ca-certificates.crt

  extraVolumes:
    - name: spicedb-certs
      secret:
        secretName: '{{ template "toucan-stack.spicedb.tls.secretName" . }}'
        items:
          - key: ca.crt
            path: ca.crt
    - name: ca-bundle
      secret:
        secretName: 'toucan.example.com-cert'
        items:
          - key: ca.crt
            path: ca-certificates.crt

  extraVolumeMounts:
    - name: spicedb-certs
      mountPath: /spicedb-certs
    - name: ca-bundle
      mountPath: /etc/ssl/certs

dataset:
  extraEnvVars:
    - name: SSL_CERT_FILE # For httpx
      value: /etc/ssl/certs/ca-certificates.crt
    - name: REQUESTS_CA_BUNDLE # For requests
      value: /etc/ssl/certs/ca-certificates.crt

  extraVolumes:
    - name: spicedb-certs
      secret:
        secretName: '{{ template "toucan-stack.spicedb.tls.secretName" . }}'
        items:
          - key: ca.crt
            path: ca.crt
    - name: ca-bundle
      secret:
        secretName: 'toucan.example.com-cert'
        items:
          - key: ca.crt
            path: ca-certificates.crt

  extraVolumeMounts:
    - name: spicedb-certs
      mountPath: /spicedb-certs
    - name: ca-bundle
      mountPath: /etc/ssl/certs

impersonate:
  extraVolumes:
    - name: ca-bundle
      secret:
        secretName: 'toucan.example.com-cert'
        items:
          - key: ca.crt
            path: ca-certificates.crt

  extraVolumeMounts:
    - name: ca-bundle
      mountPath: /etc/ssl/certs

gotenberg:
  image:
    repository: localhost:32000/gotenberg/gotenberg

vault:
  server:
    extraVolumes:
      - name: ca-bundle
        secret:
          secretName: 'toucan.example.com-cert'
          items:
            - key: ca.crt
              path: ca-certificates.crt

    extraVolumeMounts:
      - name: ca-bundle
        mountPath: /etc/ssl/certs/

curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license

    credentialPolicy:
      dictionary:
        enabled: false

  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      annotations:
        cert-manager.io/cluster-issuer: private-cluster-issuer
      tls: true
  10. Deploy the Toucan Stack:

shell: /work/
helm upgrade --install toucan-stack ./toucan-stack*.tgz \
  --namespace toucan \
  --values ./values.override.yaml

If the installation fails with:

shell: /work/
Error: INSTALLATION FAILED: failed post-install: 1 error occurred:
        * timed out waiting for the condition

You should check the health of the deployment: use kubectl get <deployments/statefulsets/pods> -n toucan to check the status of the resources, and kubectl logs <pod-name> -c <container-name> -n toucan to inspect their logs.

We highly recommend using a Kubernetes GUI, such as Headlamp, for troubleshooting.
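
As a starting point, the checks mentioned above look like this (pod and container names will differ in your cluster):

shell: /work/
# Overall status of the stack's workloads.
kubectl get deployments,statefulsets,pods --namespace toucan
# Logs of a failing pod (replace the placeholders with your own names).
kubectl logs <pod-name> -c <container-name> --namespace toucan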

  11. To get the Admin password, run the following command:

shell: /work/
kubectl get secret --namespace toucan toucan-stack-auth -o jsonpath='{.data.toucan-admin-password}' | base64 --decode
  12. You should now be able to access the Toucan Stack at https://toucan.example.com and log in with the admin credentials. Enter [email protected] as the username and the password you retrieved in the previous step.
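
If the browser complains about the certificate, you can check the TLS chain from the command line using the CA certificate exported earlier, assuming you saved it as root-ca.crt:

shell: /work/
# A successful response confirms the ingress serves a certificate signed by your private CA.
curl --cacert root-ca.crt -I https://toucan.example.com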
