⛴️ Deploy Toucan using Helm Charts
In this section, we will deploy Toucan using Helm Charts. We'll assume this configuration:
- Traffic is exposed publicly.
- Public DNS is configured to forward auth-toucan.example.com and toucan.example.com to the machine IP.
If you have an air-gapped environment, we recommend reading this guide instead: Toucan - Air-Gapped Deployment.
NOTE: This guide helps you deploy a simple "one-shot", "all-in-one" Toucan Stack, which might not be suitable for production.
We strongly recommend using an external PostgreSQL database, as the embedded one might not be suitable for production.
Please follow this guide to connect your external database: Toucan - External Database
If you still wish to deploy PostgreSQL inside Kubernetes, we recommend using CloudNativePG (a minimal install sketch follows this list), which:
- Supports failover and multiple standby replicas.
- Supports backups and restores.
- Supports migrating data from another PostgreSQL instance.
- Supports audit logging, monitoring, and more.
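As a minimal sketch of what the operator install looks like (the chart repository and chart name are the ones published by the CloudNativePG project; the namespace is our own choice):
# Add the CloudNativePG Helm repository and install the operator
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install cnpg cnpg/cloudnative-pg \
  --namespace cnpg-system --create-namespace
You would then declare your PostgreSQL cluster with a CloudNativePG Cluster resource; refer to the CloudNativePG documentation for details.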
(optional) Self-hosting a Kubernetes distribution
Since we are introducing Kubernetes with the v3 release, this section will help you deploy a Kubernetes distribution on your infrastructure. Note that if your cloud provider offers a Kubernetes distribution, we heavily recommend using it. Some cloud providers, like Scaleway, also offer a free Kubernetes control plane.
We will opt for the most standard deployment while keeping it simple, so it is easy to maintain and well covered by online documentation.
We chose k3s, an easy-to-use Kubernetes distribution, which is also suitable for production deployments on any machine.
Installing k3s
SSH into the host(s)
ssh root@<host>
Install the k3s control plane
In K3s, control nodes are also worker nodes. To choose between a single-node and a multi-node control plane, consider your fault-tolerance requirements.
The worker plane does not impact the availability of the control plane: even a fully failing worker plane will not bring it down. This is why some cloud providers offer a free control plane.
The control plane only tolerates a limited number of failures: a failed control plane brings down the entire cluster.
Here's a list of recommendations based on the number of nodes available:

Nodes  Recommended layout
1      1 Control+Worker
2      1 Control+Worker, 1 Worker
3      1 Control+Worker, 2 Workers
4      1 Control, 3 Workers
5      3 Control, 2 Workers; or 2 Control, 1 Control+Worker, 2 Workers
...    ...
In short, if you plan to run a highly available control plane, you need at least three control nodes (etcd requires a quorum), and ideally some of them should be pure control nodes (see the taint sketch below).
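Once your nodes are up, one common way to keep a node control-only is to taint it so ordinary workloads are not scheduled there. A sketch (the node name is a placeholder; the taint key is the standard Kubernetes control-plane taint):
# Prevent ordinary workloads from being scheduled on a pure control node
kubectl taint nodes <control-node-name> node-role.kubernetes.io/control-plane=:NoSchedule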

Simply run:
# curl: Download a file from the web
# -s: silent
# -f: fail silently
# -L: follow redirects
# | sh -: run the script while downloading
# --disable=traefik: disable embedded Traefik
curl -sfL https://get.k3s.io | sh -s - --disable=traefik
Traefik is disabled here because we'll deploy the NGINX Ingress Controller in a later step.
The script is idempotent and can be used to upgrade an existing installation. By default, the configuration targets a single-node control plane (SQLite is used as the datastore).
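To quickly verify the installation (a sanity check of ours, not part of the official script output):
# Check that the k3s service is running
systemctl status k3s
# List cluster nodes using the kubectl bundled with k3s
k3s kubectl get nodes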
If you prefer to use etcd, run the following command instead:
# curl: Download a file from the web
# -s: silent
# -f: fail silently
# -L: follow redirects
# | sh -: run the script while downloading
# --disable=traefik: disable embedded Traefik
# --cluster-init: initialize the etcd cluster
curl -sfL https://get.k3s.io | sh -s - --disable=traefik \
--cluster-init
You can also run this command to migrate from SQLite to etcd.
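Once etcd is enabled, additional control nodes can join the cluster. A sketch, with placeholders in angle brackets (the token can be read on control-0 from /var/lib/rancher/k3s/server/node-token):
# Join an additional server (control) node to the etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server \
  --server https://<ip or hostname of control-0>:6443 \
  --disable=traefik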
(optional) Install the k3s worker plane
To join a cluster, fetch the token from the control plane:
# cat: display the content of a file
cat /var/lib/rancher/k3s/server/node-token
And use it on the worker node:
curl -sfL https://get.k3s.io | K3S_URL=https://<ip or hostname of control-0>:6443 K3S_TOKEN=mynodetoken sh -
Fetch the super admin Kubernetes credentials
cat /etc/rancher/k3s/k3s.yaml
Merge the configuration into your Kubernetes configuration at ~/.kube/config:
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: LS0t...
      server: https://<ip or hostname of control-0>:6443
    name: default # Can be renamed
contexts:
  - context:
      cluster: default # points to clusters.cluster[].name
      user: default # points to users.user[].name
    name: default # Can be renamed
current-context: default # points to contexts.context[].name
kind: Config
preferences: {}
users:
  - name: default # Can be renamed
    user:
      client-certificate-data: LS0t...
      client-key-data: LS0t...
The server address must be replaced with the address of your control node (the address you use for SSH, for example), and you must allow port 6443/tcp on your firewall.
Access is protected using mTLS, which is as strong as a VPN or SSH. Be sure not to share your credentials with anyone.
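If you prefer not to merge the file by hand, kubectl can do it for you. A sketch, assuming you saved the fetched file locally as ./k3s.yaml with the server address already replaced:
# Merge both configs and flatten embedded credentials into a single file
KUBECONFIG=~/.kube/config:./k3s.yaml kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
chmod 600 ~/.kube/config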
To test the connection, run:
kubectl get nodes
You should get something similar to:
NAME           STATUS   ROLES                  AGE   VERSION
controller-0   Ready    control-plane,master   1d    v1.32.3+k3s1
You can now close the SSH port and use kubectl directly. Do note that the client or server certificates may rotate, in which case you would need to refetch the credentials.
Install NGINX Ingress Controller
If you have kubectl configured, Helm is ready to use as well, since it relies on the same kubeconfig.
Run:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
That's it! To check if your ingress controller is ready, check the health of the service:
# svc: service
# -n: namespace
kubectl get svc -n ingress-nginx
You should get something similar to:
# LoadBalancer: The service is exposed externally, and should have an external IP.
# ClusterIP: The service is only exposed internally.
# PORTS <left>:<right>: The ports exposed by the service. The left is the port exposed on the load balancer and internally; the right is the NodePort opened on every node.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.43.181.110 192.168.122.249 80:32601/TCP,443:32465/TCP 42s
ingress-nginx-controller-admission ClusterIP 10.43.46.48 <none> 443/TCP 42s
The most important one is the LoadBalancer service, which means you can access the ingress controller through your public IP. Look at the EXTERNAL-IP field and check that it matches your network interface IP (run ip a on the host and check the IP of eth0). If it is empty, you have an issue with the Cloud Controller Manager, or with MetalLB/ServiceLB.
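You can also probe the controller directly from your workstation. With no Ingress rules deployed yet, ingress-nginx should answer with a 404 from its default backend, which at least proves the controller is reachable:
# -i: include the response headers in the output
curl -i http://<your machine IP>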
Common errors when testing from a browser:

ERR_NAME_NOT_RESOLVED / DNS_PROBE_FINISHED_NXDOMAIN
Cause: The domain name does not exist or the DNS cannot resolve it.
Fix: Reconfigure your DNS. Set an A record with the key toucan.example.com. and the value <your machine IP>.

ERR_CONNECTION_REFUSED
Cause: The host is reachable, but port 80 is closed: no application is listening on it.
Fix: Check the health of your ingress deployment.

ERR_CONNECTION_TIMED_OUT
Cause: A firewall is blocking the request, or a misconfigured routing table is dropping the response.
Fix: Check the firewall on your cloud provider or on the machine (iptables, nftables, ufw); look at both inbound and outbound traffic. Check the machine routing table and verify that packets come and go through the correct network interface (ip route, ip a, ip neigh).

ERR_SSL_PROTOCOL_ERROR
Cause: You are trying to access the HTTPS version on a port expecting HTTP.
Fix: Check the address in the browser; it should start with http://.
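For the ERR_CONNECTION_TIMED_OUT case, if the machine firewall is ufw, opening web traffic looks like this (a sketch for ufw only; adapt it to iptables/nftables or your cloud firewall):
# Allow inbound HTTP/HTTPS and review the resulting rules
ufw allow 80/tcp
ufw allow 443/tcp
ufw status verbose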
Deploy Toucan Stack
Create a namespace
Create a Kubernetes namespace to deploy the Toucan Stack Helm charts in.
kubectl create namespace toucan
Namespaces are used to avoid name conflicts between different projects. Since we are deploying a stack of services, we can use the same namespace for all of them and avoid conflicts with your own projects.
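Optionally, to avoid repeating --namespace toucan in every command, you can make it the default for your current context:
# Use "toucan" as the default namespace for subsequent kubectl commands
kubectl config set-context --current --namespace=toucan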
Deploy the Quay registry credentials to Kubernetes and log in with Helm
After gaining access to Toucan Toco's Quay registry, you can pass the credentials to Kubernetes by running the following command:
# Replace <username> and <password> with your credentials
# docker-registry: The type of secret to create.
# --namespace: The namespace to create the secret in.
# dockerconfigjson: The name of the secret to create.
# --docker-server: The server address of the registry.
# --docker-username: The username for the registry.
# --docker-password: The password for the registry.
kubectl create secret docker-registry --namespace toucan dockerconfigjson \
--docker-server=quay.io \
--docker-username=<username> \
--docker-password=<password>
This will allow Kubernetes to fetch container images from the Quay registry.
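To verify the secret was stored correctly, you can decode it; .dockerconfigjson is the standard payload key for this secret type:
# Print the stored registry credentials (the dot in the key must be escaped)
kubectl get secret --namespace toucan dockerconfigjson \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode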
To sign in to the Quay registry with Helm, run the following command:
helm registry login quay.io
This will allow Helm to fetch the Toucan Stack Helm charts from the Quay registry.
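To confirm the login works, you can inspect the chart metadata without installing anything (this uses the same chart reference as the deployment step later in this guide):
# Print the chart metadata straight from the OCI registry
helm show chart oci://quay.io/toucantoco/charts/toucan-stack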
Deploy the Curity secret
You should have a JSON file in this format:
{
  "Company": "[email protected]",
  "Edition": "Community",
  "Environment": "",
  "Expires": "2025-12-13",
  "Feature": "",
  "GracePeriod": "30",
  "Groups": "1",
  "Issued": "2024-12-13",
  "Licence": "ey...",
  "License": "ey...",
  "Product": "Identity Server",
  "Tier": "Subscription",
  "Type": "Production",
  "Version": "4.3",
  "expired": false
}
Copy the value from the License or Licence field, and create the secret with:
# Replace <value> with your license
# generic: The type of secret to create.
# --namespace: The namespace to create the secret in.
# toucan-curity: The name of the secret to create.
# --from-literal: The key and value of the secret to create.
kubectl create secret generic --namespace toucan toucan-curity \
--from-literal=license=<value>
Deploy the Helm charts
Since we are using Helm, we can patch the necessary values to inject the credentials and secrets. We also need to expose the service to the external network and secure it with TLS.
Create the values file, which will override the default values.
touch values.override.yaml # You can name it whatever you want
Add these lines to values.override.yaml to inject the registry credentials:
global:
  imagePullSecrets:
    - dockerconfigjson
Add these lines to inject the Curity secret:
# ...
curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license
Add these lines to select your storage provisioner:
global:
  # ...
  defaultStorageClass: local-path
  # ...
(Optional) Override the volume size:
# ...
laputa:
  persistence:
    size: 10Gi
curity:
  # ...
  admin:
    persistence:
      size: 8Gi
postgresql:
  primary:
    persistence:
      size: 10Gi
mongodb:
  persistence:
    size: 8Gi
Configure TLS for the Toucan Stack:
Create this file (for example, tls-secret.yaml):
apiVersion: v1
kind: Secret
metadata:
  name: toucan.example.com-tls
stringData:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
type: kubernetes.io/tls
---
apiVersion: v1
kind: Secret
metadata:
  name: auth-toucan.example.com-tls
stringData:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
type: kubernetes.io/tls
Deploy the certificates with:
kubectl apply -n <namespace> -f tls-secret.yaml
Expose the Toucan Stack by adding these lines:
global:
  # ...
  ## global.hostname configures the Helm chart to use toucan.example.com as the "public" domain.
  hostname: toucan.example.com
nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    tls: true
    extraTls:
      - hosts:
          - toucan.example.com
        secretName: 'toucan.example.com-tls' # The TLS secret deployed earlier.
curity:
  # ...
  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      tls: true
      extraTls:
        - hosts:
            - auth-toucan.example.com
          secretName: 'auth-toucan.example.com-tls' # The TLS secret deployed earlier.
At this point, your values.override.yaml should look like this (minus the volume size overrides):
global:
  imagePullSecrets:
    - dockerconfigjson
  defaultStorageClass: local-path
  hostname: toucan.example.com
nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    tls: true
    extraTls:
      - hosts:
          - toucan.example.com
        secretName: 'toucan.example.com-tls' # The TLS secret deployed earlier.
curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license
  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      tls: true
      extraTls:
        - hosts:
            - auth-toucan.example.com
          secretName: 'auth-toucan.example.com-tls' # The TLS secret deployed earlier.
Deploy the Toucan Stack:
helm upgrade --install toucan-stack oci://quay.io/toucantoco/charts/toucan-stack \
--namespace toucan \
--values ./values.override.yaml
To get the admin password, run the following command:
kubectl get secret --namespace toucan toucan-stack-auth -o jsonpath='{.data.toucan-admin-password}' | base64 --decode
You should be able to access the Toucan Stack at https://toucan.example.com and log in with the admin credentials. Enter [email protected] for the username and the password you got from the previous step.
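Before logging in, you can check that every pod of the stack is up:
# All pods should eventually reach the Running (or Completed) status
kubectl get pods --namespace toucan --watch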