Deploy Toucan using Helm Charts
In this section, we will deploy Toucan using Helm Charts. We'll assume this configuration:
Traffic is exposed publicly.
Public DNS is configured to forward auth-toucan.example.com and toucan.example.com to the machine IP.
If you have an air-gapped environment, we recommend reading this guide: Toucan - Air-Gapped Deployment.
NOTE: This guide helps you deploy a simple "one-shot" "all-in-one" Toucan Stack, which might not be suitable for production.
We strongly recommend using an external PostgreSQL database, as the embedded one might not be suitable for production:
Please follow this guide to connect to your external database: Toucan - External Database
If you still wish to deploy PostgreSQL inside Kubernetes, we recommend using CloudNativePG:
Supports failover and multiple standby replicas.
Supports backups and restores.
Supports migrating data from another PostgreSQL instance.
Supports audit logging, monitoring, and more.
(optional) Self-hosting a Kubernetes distribution
Since we are introducing Kubernetes with the v3 release, this section will help you deploy a Kubernetes distribution on your infrastructure. Note that if your cloud provider offers a managed Kubernetes distribution, we heavily recommend using it. Some cloud providers, like Scaleway, also offer a free Kubernetes control plane.
We opt for the most standard deployment while keeping it simple, so you can easily maintain it and find documentation online.
We chose k3s, an easy-to-use Kubernetes distribution, which is also suitable for production deployments on any machine.
Installing k3s
SSH into the host(s)
Install the k3s control plane
In K3s, control nodes are also worker nodes. To choose between a single-node and a multi-node control plane, you should consider your fault-tolerance requirements.
The worker plane does not impact the availability of the control plane: even a fully failing worker plane will not bring the control plane down. This is why some cloud providers offer a free control plane.
A control plane of N nodes tolerates ⌊(N−1)/2⌋ failures. A failed control plane brings down the entire cluster.
Here's a list of recommendations based on the number of nodes available:

Nodes | Recommended layout
1     | 1 Control+Worker
2     | 1 Control+Worker, 1 Worker
3     | 1 Control+Worker, 2 Workers
4     | 1 Control, 3 Workers
5     | 3 Control, 2 Workers; or 2 Control, 1 Control+Worker, 2 Workers
...   | ...
Basically, if you plan to run a highly available control plane, make sure you have at least three pure control nodes and keep the count odd: a control plane of N nodes tolerates ⌊(N−1)/2⌋ failures.

Simply run:
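The command itself is missing above; based on the standard k3s installer, it might look like this (disabling Traefik is the part that matters for this guide):

```shell
# Install the k3s server, disabling the bundled Traefik ingress controller
curl -sfL https://get.k3s.io | sh -s - server --disable=traefik
```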
Traefik is disabled here because we'll deploy NGINX Ingress controller in another step.
The script is idempotent and can be used to upgrade an existing installation. By default, the configuration targets a single-node control plane (SQLite is used as the datastore).
If you prefer to use ETCD, you can run the following command:
You can also use this command to migrate from SQLite to ETCD.
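A sketch of that command, mirroring the install command above (the `--cluster-init` flag is documented by k3s to initialize embedded etcd, and re-running it on an existing SQLite-backed server migrates the data):

```shell
# Switch the datastore to embedded etcd (or migrate an existing SQLite server)
curl -sfL https://get.k3s.io | sh -s - server --disable=traefik --cluster-init
```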
Note: While a high-availability control plane may seem attractive, it entails additional overhead in maintenance and disaster recovery. You MUST keep a majority (at least ⌊N/2⌋+1) of control plane nodes healthy at all times.

If you need to set up ETCD, you'll need an odd number of nodes. The fault tolerance is ⌊(N−1)/2⌋ (3 nodes = 1 failure, 5 nodes = 2 failures, ...), which avoids quorum loss during leader election.
And then you can run the following command:
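A sketch of the first server's install, assuming a shared cluster token of your choosing:

```shell
# First control-plane node: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret-token> sh -s - server \
  --cluster-init \
  --disable=traefik
```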
Traefik is disabled here because we'll deploy NGINX Ingress controller in another step.
Make the second node join the ETCD cluster:
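Assuming the same token as the first node, the join command might look like:

```shell
# Subsequent control-plane nodes join via the first node's address
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret-token> sh -s - server \
  --server https://<first-node-ip>:6443 \
  --disable=traefik
```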
Do the same for the third node.
Read more on k3s - High Availability Embedded etcd.
The script is idempotent and can be used to upgrade an existing installation.
By default, K3s mixes the control plane and the worker plane.
If you wish to separate the control plane from the worker plane, add a taint:
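The k3s documentation suggests the CriticalAddonsOnly taint for dedicated control nodes; a sketch:

```shell
# Dedicate this server to the control plane at install time
curl -sfL https://get.k3s.io | sh -s - server --disable=traefik \
  --node-taint CriticalAddonsOnly=true:NoExecute

# Or taint an already-registered node with kubectl
kubectl taint nodes <node-name> CriticalAddonsOnly=true:NoExecute
```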
(optional) Install the k3s worker plane
To join a cluster, fetch the token from the control plane:
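On the control node, the join token lives at a fixed path:

```shell
# Print the cluster join token
sudo cat /var/lib/rancher/k3s/server/node-token
```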
And use it on the worker node:
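Setting K3S_URL makes the installer run k3s in agent (worker) mode:

```shell
# On the worker node: join the cluster as an agent
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<control-node-ip>:6443 K3S_TOKEN=<node-token> sh -
```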
Fetch the super admin Kubernetes credentials
Merge the configuration into your Kubernetes configuration ~/.kube/config:
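A sketch, assuming you reach the control node over SSH and have no existing ~/.kube/config to preserve (merge by hand if you do):

```shell
# Fetch the kubeconfig written by k3s on the control node
ssh <user>@<control-node-ip> sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
chmod 600 ~/.kube/config
# Point the client at the control node instead of the loopback address
sed -i 's/127.0.0.1/<control-node-ip>/' ~/.kube/config
```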
The directory ~/.kube doesn't exist?
Run:
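```shell
# Create the kubeconfig directory, readable only by you
mkdir -p ~/.kube
chmod 700 ~/.kube
```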
The server address must be replaced with the address of your control node (the address you use for SSH for example) and you must allow port 6443/tcp on your firewall.
The access is protected using mTLS, which is as strong as a VPN or SSH. Be sure not to share your credentials with anyone.
To test the connection, run:
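A simple connectivity check is listing the cluster's nodes:

```shell
kubectl get nodes -o wide
```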
You should get something similar to:
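The node name and version below are placeholders; what matters is the Ready status:

```
NAME     STATUS   ROLES                  AGE   VERSION
node-1   Ready    control-plane,master   2m    v1.29.4+k3s1
```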
You can close the SSH port and use kubectl directly. Do note that the client or server certificates may rotate, in which case you would need to re-fetch the credentials.
Install NGINX Ingress Controller
Since you've installed k3s, by default, you have ServiceLB ready, which will allow you to expose services to the public.
If you are hosting k3s on a cloud provider, you might also want to use your provider's Cloud Controller Manager to integrate your cluster with the cloud. Follow this guide to disable the built-in Cloud Controller Manager, then follow your cloud provider's guide to deploy theirs.
If you opt for a Kubernetes distribution without ServiceLB and cannot use a Cloud Controller Manager, you might be interested in MetalLB.
If kubectl is configured, Helm is ready as well: it uses the same kubeconfig.
Run:
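The chart name and repository below are the upstream ingress-nginx defaults:

```shell
# Install the NGINX Ingress Controller into its own namespace
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```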
That's it! To check if your ingress controller is ready, check the health of the service:
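Assuming the default release name above, the controller's service is named ingress-nginx-controller:

```shell
kubectl get service ingress-nginx-controller -n ingress-nginx
```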
You should get something similar to:
The most important one is the LoadBalancer type, which means you can access the ingress controller through your public IP. Look for the EXTERNAL-IP field, and check if it matches your network interface IP (check with ip a on the host, and check the ip of eth0). If it is empty, it means you have an issue with the Cloud Controller Manager, or MetalLB/ServiceLB.
Have you configured the DNS?
Now that ports 80 and 443 are open, you can test your DNS records by trying to access your domains: http://toucan.example.com and http://auth-toucan.example.com. You should get either a 404 error or a 200 success.
If you get neither, here's a list of errors and possible solutions:

ERR_NAME_NOT_RESOLVED / DNS_PROBE_FINISHED_NXDOMAIN
Cause: The domain name does not exist or the DNS cannot resolve it.
Solution: Reconfigure your DNS. You should set an A record with the key toucan.example.com. and the value <your machine IP>.

ERR_CONNECTION_REFUSED
Cause: Port 80 is reachable, but no application is listening on it.
Solution: Check the health of your ingress deployment.

ERR_CONNECTION_TIMED_OUT
Cause: A firewall is blocking the request, or a misconfigured routing table is dropping the response.
Solution: Check the firewall on your cloud and on the machine (iptables, nftables, ufw); look at both inbound and outbound traffic. Check the machine routing table and verify packets come and go through the correct network interface (ip route, ip a, ip neigh).

ERR_SSL_PROTOCOL_ERROR
Cause: Trying to access the HTTPS version on a port expecting HTTP.
Solution: Check the address in the browser; it should start with http://.
Deploy Toucan Stack
Create a namespace
Create a Kubernetes namespace to deploy the Toucan Stack Helm charts in.
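The namespace name toucan matches the -n toucan flag used later in this guide:

```shell
kubectl create namespace toucan
```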
Namespaces are used to avoid name conflicts between different projects. Since we are deploying a stack of services, we can use the same namespace for all of them, and avoid conflicting with your own projects.
Deploy the Quay registry credentials to Kubernetes and login with Helm
After gaining access to Toucan Toco's Quay registry, you can provide the credentials to Kubernetes by running the following command:
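A sketch; the secret name toucan-quay-credentials is an assumption, so reuse whatever name the chart values expect:

```shell
# Create an image-pull secret for the Quay registry
kubectl create secret docker-registry toucan-quay-credentials \
  --namespace toucan \
  --docker-server=quay.io \
  --docker-username=<quay-username> \
  --docker-password=<quay-encrypted-password>
```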
This will allow Kubernetes to fetch container images from the Quay registry.
To sign in to the Quay registry with Helm, run the following command:
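With the same username and encrypted password as above:

```shell
# Authenticate Helm against the Quay OCI registry
helm registry login quay.io \
  --username <quay-username> \
  --password <quay-encrypted-password>
```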
This will allow Helm to fetch the Toucan Stack Helm charts from the Quay registry.
To fetch your Quay credentials, you can generate an encrypted password on Quay.io:
Go to Account Settings.
Go to the "Gear Menu" in the left side menu.
Click on "Generate Encrypted Password".
Fetching Quay encrypted password
Use the encrypted password and username in the --docker-username and --docker-password flags.
Deploy the Curity secret
You should have a JSON file in this format:
Copy the value from the License or Licence field, and create the secret with:
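A sketch; the secret name curity-license and the key license are assumptions, so align them with what your values file references:

```shell
# Store the Curity license value in a Kubernetes secret
kubectl create secret generic curity-license \
  --namespace toucan \
  --from-literal=license='<license-value>'
```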
Deploy the Helm charts
Since we are using Helm, we can patch the necessary values to inject the credentials and secrets. We also need to expose the service to the external network and secure it with TLS.
Create the values file, which will override the default values.
Add this line to values.override.yaml to inject the registry credentials:
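The key path below is an assumption (a global imagePullSecrets list, as exposed by most charts); check the chart's default values for the real path:

```yaml
# Hypothetical key names - verify against the chart's default values
global:
  imagePullSecrets:
    - name: toucan-quay-credentials
```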
Add this line to inject the Curity secret:
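The keys below are hypothetical; adjust them to the chart's actual schema:

```yaml
# Hypothetical key names - reference the secret created earlier
curity:
  license:
    existingSecret: curity-license
```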
Add this line to select your storage provisioner:
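local-path is the default provisioner shipped with k3s; the key name itself is an assumption:

```yaml
# Hypothetical key name - select the storage class for persistent volumes
global:
  storageClass: local-path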
You can fetch the available storage classes with:
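```shell
kubectl get storageclass
```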
You should see something like this:
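On a default k3s install, the output resembles:

```
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  10m
```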
(Optional) Override the volume size:
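The structure below is hypothetical; check the chart's persistence values for the real keys:

```yaml
# Hypothetical keys - only meaningful with a size-aware provisioner
postgresql:
  persistence:
    size: 20Gi
```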
NOTE: This is only useful if you are using a storage provisioner that handles sizing. The local-path-provisioner does NOT, so these values have no effect there, but many cloud providers and block-storage provisioners do handle sizing.
Configure TLS for the Toucan Stack:
SUGGESTION: We recommend using cert-manager to issue TLS certificates, which is able to rotate certificates on a regular basis.
You can also use external-secrets, to fetch TLS certificates from a secret manager like AWS Secrets Manager, Hashicorp Vault, etc.
Create these files:
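A sketch using cert-manager's Issuer and Certificate resources, assuming you want Let's Encrypt certificates via an ACME HTTP-01 challenge solved by the NGINX ingress (replace <your-email> with a real contact address):

```yaml
# issuer.yaml - an ACME issuer scoped to the toucan namespace
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: toucan
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
---
# certificate.yaml - one certificate covering both hostnames
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: toucan-tls
  namespace: toucan
spec:
  secretName: toucan-tls
  issuerRef:
    name: letsencrypt
  dnsNames:
    - toucan.example.com
    - auth-toucan.example.com
```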
Deploy the certificates with:
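Assuming you saved the Issuer and Certificate manifests as issuer.yaml and certificate.yaml:

```shell
kubectl apply -f issuer.yaml -f certificate.yaml
```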
Expose the Toucan Stack by adding these lines:
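The key names below are hypothetical; check the chart's ingress values for the real schema:

```yaml
# Hypothetical key names - expose the stack through the NGINX ingress with TLS
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/issuer: letsencrypt
  tls:
    - secretName: toucan-tls
      hosts:
        - toucan.example.com
        - auth-toucan.example.com
```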
Annotations are used by controllers like cert-manager to trigger side effects.
At this point, your values.override.yaml should look like this (minus the volume size overrides):
Deploy the Toucan Stack:
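A sketch; <chart-path> is a placeholder for the OCI chart path provided by Toucan Toco:

```shell
# Install (or upgrade) the Toucan Stack with your overrides
helm upgrade --install toucan oci://quay.io/<chart-path> \
  --namespace toucan \
  --values values.override.yaml
```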
If the installation fails with:
You should check the health of the deployment. Use kubectl get <deployments/statefulsets/pods> -n toucan to check its status, and kubectl logs <pod-name> -c <container-name> -n toucan to check the logs of a pod.
We highly recommend using a Kubernetes GUI for troubleshooting, such as Headlamp.
If you want to lock the version, simply set the --version flag:
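Same command as above (with the placeholder chart path), pinned to a specific chart version:

```shell
helm upgrade --install toucan oci://quay.io/<chart-path> \
  --namespace toucan \
  --values values.override.yaml \
  --version <chart-version>
```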
If you want to customize the values, you can fetch the default values with:
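Again with the placeholder chart path:

```shell
# Dump the chart's default values for reference
helm show values oci://quay.io/<chart-path> > values.default.yaml
```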
It's quite long, so we recommend using a YAML editor that can fold ("group") the values.
To get the Admin password, run the following command:
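A sketch; the secret and key names are assumptions, so check the chart's documentation for the real ones:

```shell
# Hypothetical secret/key names - decode the generated admin password
kubectl get secret toucan-admin-credentials -n toucan \
  -o jsonpath='{.data.password}' | base64 -d
```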
You should be able to access the Toucan Stack at https://toucan.example.com and log in with the admin credentials. Enter [email protected] for the username and the password you got from the previous step.