⛴️ Deploy Toucan using Helm Charts

In this section, we will deploy Toucan using Helm Charts. We'll assume this configuration:

  • Traffic is exposed publicly.

  • Public DNS is configured to forward auth-toucan.example.com and toucan.example.com to the machine IP.

If you have an air-gapped environment, we recommend reading this guide: Toucan - Air-Gapped Deployment.

(optional) Self-hosting a Kubernetes distribution

Since we are introducing Kubernetes with the v3 release, this section will help you deploy a Kubernetes distribution on your infrastructure. Note that if your cloud provider offers a managed Kubernetes distribution, we heavily recommend using it. Some cloud providers, like Scaleway, also offer a free Kubernetes control plane.

We will opt for the most standard deployment while keeping it simple, so you can easily maintain it and find documentation online.

We chose k3s, an easy-to-use Kubernetes distribution, which is also suitable for production deployments on any machine.

Installing k3s

1. SSH on host(s)

shell: /work/
ssh root@<host>

2. Install the k3s control plane

In k3s, control nodes are also worker nodes. To choose between a single-node and a multi-node control plane, you should consider your fault-tolerance requirements.

The worker plane does not impact the availability of the control plane: a fully failing worker plane will not bring down the control plane. This is why some cloud providers offer a free control plane.

A control plane of $N$ nodes tolerates $\lfloor\frac{N-1}{2}\rfloor$ failures (for example, 3 control nodes tolerate 1 failure, 5 tolerate 2). A failing control plane brings down the entire cluster.

Here's a list of recommendations based on the number of nodes available:

Nodes   Setup
1       1 Control+Worker
2       1 Control+Worker, 1 Worker
3       1 Control+Worker, 2 Workers
4       1 Control, 3 Workers
5       3 Control, 2 Workers; or 2 Control, 1 Control+Worker, 2 Workers
...     ...

Basically, if you plan to run a highly available control plane, make sure at least $\lfloor\frac{N-1}{2}\rfloor$ nodes are pure control nodes.

Single node control plane.

Simply run:

shell: root@control-1:~/
# curl: Download a file from the web
# -s: silent
# -f: fail fast on server errors instead of printing the error page
# -L: follow redirects
# | sh -s -: run the script while downloading, passing the flags that follow to it
# --disable=traefik: disable embedded Traefik
curl -sfL https://get.k3s.io | sh -s - --disable=traefik

Traefik is disabled here because we'll deploy NGINX Ingress controller in another step.

The script is idempotent and can be used to upgrade an existing installation. By default, the configuration is made for a single-node control plane (SQLite is used as the database).
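To quickly check that the installation succeeded, you can inspect the service and query the node directly on the host (k3s kubectl is the kubectl binary bundled with k3s):

shell: root@control-1:~/
# Check that the k3s service is running
systemctl status k3s --no-pager

# List the cluster nodes with the bundled kubectl
k3s kubectl get nodes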

If you prefer to use etcd, run the following command instead:

shell: root@control-1:~/
# curl: Download a file from the web
# -s: silent
# -f: fail fast on server errors instead of printing the error page
# -L: follow redirects
# | sh -s -: run the script while downloading, passing the flags that follow to it
# --disable=traefik: disable embedded Traefik
# --cluster-init: initialize an embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - --disable=traefik \
  --cluster-init

You can also run this command on an existing server to migrate from SQLite to etcd.

By default, k3s mixes the control plane and the worker plane.

If you wish to separate the control plane from the worker plane, add a taint:

shell: root@control-1:~/
curl -sfL https://get.k3s.io | sh -s - --disable=traefik \
  ... \
  --node-taint node-role.kubernetes.io/control-plane=true:NoSchedule
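
To confirm the taint was applied, you can inspect the node on the host; a quick check, assuming the node is named control-1:

shell: root@control-1:~/
# Show the taints applied to this node
k3s kubectl describe node control-1 | grep -i taints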

3. (optional) Install the k3s worker plane

To join a cluster, fetch the token from the control plane:

shell: root@control-0:~/
# cat: display the content of a file
cat /var/lib/rancher/k3s/server/node-token

And use it on the worker node:

shell: root@worker-0:~/
curl -sfL https://get.k3s.io | K3S_URL=https://<ip or hostname of control-0>:6443 K3S_TOKEN=mynodetoken sh -

Fetch the super admin Kubernetes credentials

shell: root@control-0:~/
cat /etc/rancher/k3s/k3s.yaml

Merge the configuration into your Kubernetes configuration ~/.kube/config:

yaml: ~/.kube/config.yaml
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: LS0t...
      server: https://<ip or hostname of control-0>:6443
    name: default # Can be renamed
contexts:
  - context:
      cluster: default # points to clusters.cluster[].name
      user: default # points to users.user[].name
    name: default # Can be renamed
current-context: default # points to contexts.context[].name
kind: Config
preferences: {}
users:
  - name: default # Can be renamed
    user:
      client-certificate-data: LS0t...
      client-key-data: LS0t...
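
If ~/.kube/config already contains other clusters, one common way to merge is the KUBECONFIG environment variable combined with kubectl config view --flatten. A minimal sketch, assuming you saved the fetched file as ./k3s.yaml (hypothetical filename) and already replaced the server address in it:

shell: /work/
# Merge the current kubeconfig with the k3s one into a single flattened file
KUBECONFIG=~/.kube/config:./k3s.yaml kubectl config view --flatten > /tmp/merged-kubeconfig

# Review the result, then replace the original
mv /tmp/merged-kubeconfig ~/.kube/config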

The directory ~/.kube doesn't exist?

Run:

shell: ~/
# Create the directory
mkdir -p ~/.kube

# Create the file
touch ~/.kube/config

# Make the directory only readable by the current user
chmod 700 ~/.kube

# Make the file only readable by the current user
chmod 600 ~/.kube/config

The server address must be replaced with the address of your control node (the address you use for SSH, for example), and you must allow port 6443/tcp on your firewall.
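
How you open the port depends on your environment; as an illustration only, assuming the host firewall is ufw (yours may be iptables, nftables, or a cloud security group):

shell: root@control-0:~/
# Allow the Kubernetes API port (assumption: ufw is the active firewall)
ufw allow 6443/tcp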

Access is protected using mTLS, which is as strong as a VPN or SSH. Be sure not to share your credentials with anyone.

To test the connection, run:

shell: /work/
kubectl get nodes

You should get something similar to:

shell: ~/
NAME            STATUS   ROLES                  AGE      VERSION
controller-0    Ready    control-plane,master   1d       v1.32.3+k3s1

You can close the SSH port and use kubectl directly. Do note that the client or server certificates could rotate, so you would need to refetch the credentials.

Install NGINX Ingress Controller

Since you've installed k3s, ServiceLB is available by default, which allows you to expose services publicly.

If you are hosting k3s on a cloud, you might also be interested in using your cloud provider's Cloud Controller Manager to link your cluster to your cloud. Follow this guide to disable the built-in Cloud Controller Manager, then follow your cloud provider's guide to deploy theirs.

If you opt for a Kubernetes distribution without ServiceLB and cannot use a Cloud Controller Manager, you might be interested in MetalLB.

If you've got kubectl ready, helm is also ready, since it uses the same kubeconfig.

Run:

shell: /work/
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

That's it! To check if your ingress controller is ready, check the health of the service:

shell: /work/
# svc: service
# -n: namespace
kubectl get svc -n ingress-nginx

You should get something similar to:

shell: /work/
# LoadBalancer: The service is exposed externally, and should have an external IP.
# ClusterIP: The service is only exposed internally.
# PORT(S) <left>:<right>: The ports exposed by the service. The left is the port exposed on the load balancer (and internally), the right is the node port it maps to.
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.43.181.110   192.168.122.249   80:32601/TCP,443:32465/TCP   42s
ingress-nginx-controller-admission   ClusterIP      10.43.46.48     <none>            443/TCP                      42s

The most important one is the LoadBalancer type, which means you can access the ingress controller through your public IP. Look at the EXTERNAL-IP field and check that it matches your network interface IP (run ip a on the host and check the IP of eth0). If it is empty, there is an issue with the Cloud Controller Manager, or with MetalLB/ServiceLB.
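
For example, you can compare the two from your workstation; a quick check, assuming the interface is eth0 as above and that <host> is the machine you SSH into:

shell: /work/
# IPv4 addresses of eth0 on the host
ssh root@<host> ip -4 addr show eth0

# External IP assigned to the ingress controller by ServiceLB/MetalLB/CCM
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'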

Have you configured the DNS?

Now that ports 80 and 443 are open, you can test your DNS records by trying to access your domains, http://toucan.example.com and http://auth-toucan.example.com, where you should get either a 404 error or a 200 success.
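
You can also test from the command line; a minimal check using curl (-I only fetches the response headers):

shell: /work/
# Expect an HTTP 404 or 200 response from the ingress controller
curl -I http://toucan.example.com
curl -I http://auth-toucan.example.com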

If you don't get any of these, here's a list of errors and possible solutions:

ERR_NAME_NOT_RESOLVED / DNS_PROBE_FINISHED_NXDOMAIN
  Cause: The domain name does not exist or the DNS cannot resolve it.
  Solution: Reconfigure your DNS. Set an A record with the key toucan.example.com. and the value <your machine IP>.

ERR_CONNECTION_REFUSED
  Cause: The machine is reachable, but port 80 is closed: no application is listening on it.
  Solution: Check the health of your ingress deployment.

ERR_CONNECTION_TIMED_OUT
  Cause: A firewall is blocking the request, or a misconfigured routing table is dropping the response.
  Solution: Check the firewall on your cloud or on the machine (iptables, nftables, ufw), for both inbound and outbound traffic. Check the machine's routing table and verify that packets come and go through the correct network interface (ip route, ip a, ip neigh).

ERR_SSL_PROTOCOL_ERROR
  Cause: You are trying to access the HTTPS version on a port expecting HTTP.
  Solution: Check the address in the browser; it should start with http://.

Configure TLS with cert-manager

This guide uses cert-manager to manage the TLS certificates for the ingress controller. cert-manager automatically fetches certificates from ACME (Automatic Certificate Management Environment) servers, such as Let's Encrypt.

If you plan to manage certificates externally, you might be interested in the External Secrets Operator, a service used to import secrets securely from external sources.

Lastly, if you opt for neither and prefer to self-manage certificates, you can use kubectl to import them as secrets directly.
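
For the self-managed option, here is a minimal sketch, assuming you already have a certificate and key on disk (tls.crt and tls.key are placeholder file names) and that you reuse the secret name referenced later in values.override.yaml; the toucan namespace is created later in this guide:

shell: /work/
# Import a self-managed certificate as a TLS secret
kubectl create secret tls toucan.example.com-cert \
  --namespace toucan \
  --cert=tls.crt \
  --key=tls.key

The rest of this guide assumes cert-manager, which you can install with Helm: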

shell: /work/
helm install \
  cert-manager jetstack/cert-manager \
  --repo https://charts.jetstack.io \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

To quickly set up certificates with Let's Encrypt, we can deploy a ClusterIssuer using the HTTP-01 method.

  1. Create a file named cluster-issuer.yaml:

yaml: /work/cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: http01
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    preferredChain: 'ISRG Root X1'
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      # This is your identity with your ACME provider. Any secret name
      # may be chosen. It will be populated with data automatically,
      # so generally nothing further needs to be done with
      # the secret. If you lose this identity/secret, you will be able to
      # generate a new one and generate certificates for any/all domains
      # managed using your previous account, but you will be unable to revoke
      # any certificates generated using that previous account.
      name: example-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx

ACME servers check the ownership of the domain name using the HTTP-01 or DNS-01 challenge.

HTTP-01 checks ownership by connecting to TCP port 80 and sending a GET request to the /.well-known/acme-challenge/ path. For example, if you ask for a certificate for example.com, the request will be sent to http://example.com/.well-known/acme-challenge/. If your DNS is configured to forward example.com to the machine IP, the ACME server will be able to reach that URL, and the challenge will be completed.

DNS-01 checks ownership by using a DNS provider API to set a TXT record such as _acme-challenge.example.com. For example, if you ask for a certificate for example.com, the ACME server will look up _acme-challenge.example.com at your DNS provider. If your DNS is auto-configured by cert-manager, the challenge will be completed.

Learn more at: Let's Encrypt - Challenge Types.

In the long term, DNS-01 is preferred because it can generate wildcard certificates and doesn't require opening port 80. However, it requires hosting the DNS API credentials in the cluster.

  2. Deploy it:

shell: /work/
kubectl apply -f cluster-issuer.yaml
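
You can then verify that the issuer registered with the ACME server; the READY column should report True after a few seconds:

shell: /work/
# Check the status of the ClusterIssuer
kubectl get clusterissuer http01

# Inspect the events and conditions if it is not ready
kubectl describe clusterissuer http01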

Deploy Toucan Stack

1. Create a namespace

Create a Kubernetes namespace to deploy the Toucan Stack Helm charts in.

shell: /work/
kubectl create namespace toucan

Namespaces are used to avoid name conflicts between different projects. Since we are deploying a stack of services, we can use the same namespace for all of them, and avoid conflicting with your own projects.

2. Deploy the Quay registry credentials

After gaining access to Toucan Toco's Quay registry, you can store your credentials in Kubernetes by running the following command:

shell: /work/
# Replace <username> and <password> with your credentials
# docker-registry: The type of secret to create.
# --namespace: The namespace to create the secret in.
# dockerconfigjson: The name of the secret to create.
# --docker-server: The server address of the registry.
# --docker-username: The username for the registry.
# --docker-password: The password for the registry.
kubectl create secret docker-registry --namespace toucan dockerconfigjson \
  --docker-server=quay.io \
  --docker-username=<username> \
  --docker-password=<password>

To fetch your Quay credentials, you can generate an encrypted password on Quay.io:

  1. Go to Account Settings.

  2. Go to the "Gear" menu in the left-side menu.

  3. Click on "Generate Encrypted Password"

    Fetching Quay encrypted password

    Use your username and the encrypted password in the --docker-username and --docker-password flags.
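
Once the secret has been created with your real credentials, you can verify its content; a quick check that decodes the stored Docker config (it should contain quay.io and your username):

shell: /work/
# Decode the stored registry credentials
kubectl get secret dockerconfigjson --namespace toucan \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode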

3. Deploy the Curity secret

You should have a JSON file in this format:

json: Subscription_YYY-MM-DD.json
{
  "Company": "[email protected]",
  "Edition": "Community",
  "Environment": "",
  "Expires": "2025-12-13",
  "Feature": "",
  "GracePeriod": "30",
  "Groups": "1",
  "Issued": "2024-12-13",
  "Licence": "ey...",
  "License": "ey...",
  "Product": "Identity Server",
  "Tier": "Subscription",
  "Type": "Production",
  "Version": "4.3",
  "expired": false
}

Copy the value from the License or Licence field, and create the secret with:

shell: /work/
# Replace <value> with your license
# generic: The type of secret to create.
# --namespace: The namespace to create the secret in.
# toucan-curity: The name of the secret to create.
# --from-literal: The key and value of the secret to create.
kubectl create secret generic --namespace toucan toucan-curity \
  --from-literal=license=<value>

4. Deploy the Helm charts

Since we are using Helm, we can patch the necessary values to inject the credentials and secrets. We also need to expose the service to the external network and secure it with TLS.

  1. Create the values file, which will override the default values.

shell: /work/
touch values.override.yaml # You can name it whatever you want

  2. Add these lines to values.override.yaml to inject the registry credentials:

yaml: /work/values.override.yaml
global:
  imagePullSecrets:
    - dockerconfigjson

  3. Add these lines to inject the Curity secret:

yaml: /work/values.override.yaml
# ...

curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license

  4. Add this line to select your storage provisioner:

yaml: /work/values.override.yaml
global:
  # ...
  defaultStorageClass: local-path
# ...

You can fetch the available storage classes with:

shell: /work/
kubectl get storageclass

You should see something like this:

shell: /work/
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path           rancher.io/local-path   Delete          WaitForFirstConsumer   false                  121d

  5. (Optional) Override the volume sizes:

yaml: /work/values.override.yaml
# ...
laputa:
  persistence:
    size: 10Gi

curity:
  # ...
  admin:
    persistence:
      size: 8Gi

postgresql:
  primary:
    persistence:
      size: 10Gi

mongodb:
  persistence:
    size: 8Gi

NOTE: This is only useful if you are using a storage provisioner that handles sizing. The local-path provisioner does NOT, so these values have no effect with it; however, many cloud-provider and block-storage provisioners do honor them.

  6. Expose the Toucan Stack by adding these lines:

yaml: /work/values.override.yaml
global:
  # ...

  ## global.hostname configures the Helm chart to use toucan.example.com as the "public" domain.
  hostname: toucan.example.com

nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: http01 # http01 references the previously created ClusterIssuer
    extraTls:
      - hosts:
          - toucan.example.com
        secretName: 'toucan.example.com-cert' # This secret will be generated.

curity:
  # ...
  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      annotations:
        cert-manager.io/cluster-issuer: http01 # http01 references the previously created ClusterIssuer
      extraTls:
        - hosts:
            - auth-toucan.example.com
          secretName: 'auth-toucan.example.com' # This secret will be generated.

Annotations are used by controllers like cert-manager to trigger side effects.

  7. At this point, your values.override.yaml should look like this (minus the volume size overrides):

yaml: /work/values.override.yaml
global:
  imagePullSecrets:
    - dockerconfigjson
  defaultStorageClass: local-path
  hostname: toucan.example.com

nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: http01
    extraTls:
      - hosts:
          - toucan.example.com
        secretName: 'toucan.example.com-cert'

curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license

  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      annotations:
        cert-manager.io/cluster-issuer: http01
      extraTls:
        - hosts:
            - auth-toucan.example.com
          secretName: 'auth-toucan.example.com'

  8. Deploy the Toucan Stack:

shell: /work/
helm upgrade --install toucan-stack oci://quay.io/toucantoco/charts/toucan-stack \
 --namespace toucan \
 --values ./values.override.yaml

If the installation fails with:

Error: INSTALLATION FAILED: failed post-install: 1 error occurred:
        * timed out waiting for the condition

You should check the health of the deployment: use kubectl get to check the status of the deployments, statefulsets, and pods in the toucan namespace, and kubectl logs to inspect the logs of a failing container, as shown below.
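
A minimal troubleshooting pass could look like this (pod and container names are placeholders):

shell: /work/
# List the workloads and spot pods that are not Running/Ready
kubectl get deployments,statefulsets,pods -n toucan

# Show the events of a failing pod (image pull errors, scheduling issues, ...)
kubectl describe pod <pod-name> -n toucan

# Read the logs of a specific container in that pod
kubectl logs <pod-name> -c <container-name> -n toucan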

We highly recommend using a Kubernetes GUI, such as Headlamp, for troubleshooting.

If you want to pin the version, simply set the --version flag:

shell: /work/
helm upgrade --install toucan-stack oci://quay.io/toucantoco/charts/toucan-stack \
 --namespace toucan \
 --version v1.0.0 \
 --values ./values.override.yaml

If you want to customize the values, you can fetch the default values with:

shell: /work/
helm show values oci://quay.io/toucantoco/charts/toucan-stack | less

It's quite long, so we recommend using a YAML editor able to "group" the values.

  9. To get the admin password, run the following command:

shell: /work/
kubectl get secret --namespace toucan toucan-stack-auth -o jsonpath='{.data.toucan-admin-password}' | base64 --decode

  10. You should be able to access the Toucan Stack at https://toucan.example.com and log in with the admin credentials. Enter [email protected] as the username and the password you got from the previous step.
