⛴️ Deploy Toucan using Helm Charts
In this section, we will deploy Toucan using Helm Charts. We'll assume this configuration:
- Traffic is exposed publicly.
- Public DNS is configured to forward auth-toucan.example.com and toucan.example.com to the machine IP.
If you have an air-gapped environment, we recommend reading this guide: Toucan - Air-Gapped Deployment.
(optional) Self-hosting a Kubernetes distribution
Since we are introducing Kubernetes with the v3 release, this section will help you deploy a Kubernetes distribution on your infrastructure. Note that if your cloud provider offers a managed Kubernetes distribution, we strongly recommend using it. Some cloud providers, like Scaleway, also offer a free Kubernetes control plane.
We will opt for the most standard deployment while keeping it simple, so you can easily maintain it and find documentation online. We chose k3s, an easy-to-use Kubernetes distribution that is also suitable for production deployments on any machine.
Installing k3s
SSH into the host(s)
ssh root@<host>
Install the k3s control plane
In k3s, control nodes are also worker nodes. To choose between a single-node and a multi-node control plane, consider your fault-tolerance requirements:
The worker plane does not impact the availability of the control plane: a fully failing worker plane will not bring down the control plane. This is why some cloud providers offer a free control plane.
The control plane must tolerate failures: a failing control plane brings down the entire cluster.
Here's a list of recommendations based on the number of nodes available:

| Nodes | Recommended layout |
| --- | --- |
| 1 | 1 Control+Worker |
| 2 | 1 Control+Worker, 1 Worker |
| 3 | 1 Control+Worker, 2 Workers |
| 4 | 1 Control, 3 Workers |
| 5 | 3 Controls, 2 Workers; or 2 Controls, 1 Control+Worker, 2 Workers |
| ... | ... |
Basically, if you plan to run a highly available control plane, make sure that at least some of your nodes are pure control nodes.

Simply run:
# curl: Download a file from the web
# -s: silent
# -f: fail silently
# -L: follow redirects
# | sh -: run the script while downloading
# --disable=traefik: disable embedded Traefik
curl -sfL https://get.k3s.io | sh -s - --disable=traefik
Traefik is disabled here because we'll deploy NGINX Ingress controller in another step.
The script is idempotent and can be used to upgrade an existing installation. By default, the configuration targets a single-node control plane (SQLite is used as the datastore).
If you prefer to use etcd, run the following command:
# curl: Download a file from the web
# -s: silent
# -f: fail silently
# -L: follow redirects
# | sh -s -: run the script while downloading, passing the extra arguments
# --disable=traefik: disable embedded Traefik
# --cluster-init: initialize the etcd cluster
curl -sfL https://get.k3s.io | sh -s - --disable=traefik \
--cluster-init
You can also run this command to migrate an existing installation from SQLite to etcd.
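If you initialized etcd with --cluster-init, additional control-plane nodes can join the cluster afterwards. A sketch based on the k3s documentation; <ip or hostname of control-0> and <token> are placeholders, and the token can be read from /var/lib/rancher/k3s/server/token on the first server:

```shell
# On each additional control-plane node:
# server: run in server (control-plane) mode and join the existing cluster
# --server: address of the first control-plane node
# --token: the shared secret fetched from the first server
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://<ip or hostname of control-0>:6443 \
  --token <token> \
  --disable=traefik
```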
(optional) Install the k3s worker plane
To join a cluster, fetch the token from the control plane:
# cat: display the content of a file
cat /var/lib/rancher/k3s/server/node-token
And use it on the worker node:
curl -sfL https://get.k3s.io | K3S_URL=https://<ip or hostname of control-0>:6443 K3S_TOKEN=<token> sh -
Fetch the super admin Kubernetes credentials
cat /etc/rancher/k3s/k3s.yaml
Merge the configuration into your Kubernetes configuration ~/.kube/config:
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: LS0t...
      server: https://<ip or hostname of control-0>:6443
    name: default # Can be renamed
contexts:
  - context:
      cluster: default # points to clusters.cluster[].name
      user: default # points to users.user[].name
    name: default # Can be renamed
current-context: default # points to contexts.context[].name
kind: Config
preferences: {}
users:
  - name: default # Can be renamed
    user:
      client-certificate-data: LS0t...
      client-key-data: LS0t...
The directory ~/.kube doesn't exist? Run:
# Create the directory
mkdir -p ~/.kube
# Create the file
touch ~/.kube/config
# Make the directory only readable by the current user
chmod 700 ~/.kube
# Make the file only readable by the current user
chmod 600 ~/.kube/config
The server address must be replaced with the address of your control node (the address you use for SSH, for example), and you must allow port 6443/tcp on your firewall.
Access is protected using mTLS, which is as strong as a VPN or SSH. Be sure not to share your credentials with anyone.
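Rather than copy-pasting, you can fetch and adapt the file from your workstation. A sketch, assuming SSH access as root; <control-0> and <ip or hostname of control-0> are placeholders for your control node's address:

```shell
# Fetch the kubeconfig from the control node
scp root@<control-0>:/etc/rancher/k3s/k3s.yaml /tmp/k3s.yaml
# The file points at 127.0.0.1; rewrite it to the reachable address
sed -i 's/127\.0\.0\.1/<ip or hostname of control-0>/' /tmp/k3s.yaml
# Merge it into ~/.kube/config manually, or use it directly:
KUBECONFIG=/tmp/k3s.yaml kubectl get nodes
```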
To test the connection, run:
kubectl get nodes
You should get something similar to:
NAME STATUS ROLES AGE VERSION
controller-0 Ready control-plane,master 1d v1.32.3+k3s1
You can close the SSH port and use kubectl directly. Do note that the client or server certificates may rotate, in which case you would need to refetch the credentials.
Install NGINX Ingress Controller
If you've got kubectl ready, helm is also ready: it uses the same kubeconfig. Run:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
That's it! To check if your ingress controller is ready, check the health of the service:
# svc: service
# -n: namespace
kubectl get svc -n ingress-nginx
You should get something similar to:
# LoadBalancer: the service is exposed externally, and should have an external IP.
# ClusterIP: the service is only exposed internally.
# PORT(S) <left>:<right>: the ports exposed by the service. The left one is the port exposed
# on the load balancer and internally; the right one is the NodePort opened on each node.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.43.181.110 192.168.122.249 80:32601/TCP,443:32465/TCP 42s
ingress-nginx-controller-admission ClusterIP 10.43.46.48 <none> 443/TCP 42s
The most important one is the LoadBalancer type, which means you can access the ingress controller through your public IP. Look for the EXTERNAL-IP field and check that it matches your network interface IP (run ip a on the host and check the IP of eth0). If it is empty, you have an issue with the Cloud Controller Manager, or with MetalLB/ServiceLB.
Have you configured the DNS?
Now that ports 80 and 443 are open, you can test your DNS records by trying to access your domains: http://toucan.example.com and http://auth-toucan.example.com, where you should get either a 404 error or a 200 success.
If you get neither, here's a list of errors and possible solutions:
ERR_NAME_NOT_RESOLVED / DNS_PROBE_FINISHED_NXDOMAIN
The domain name does not exist or the DNS cannot resolve it.
Reconfigure your DNS: set an A record with the key toucan.example.com. and the value <your machine IP>.
ERR_CONNECTION_REFUSED
Port 80 is accessible, but closed: no application is running on this port.
Check the health of your ingress deployment.
ERR_CONNECTION_TIMED_OUT
A firewall is blocking the request, or a misconfigured routing table is dropping the response.
Check the firewall on your cloud and on the machine (iptables, nftables, ufw); look at both inbound and outbound traffic. Check the machine routing table and verify that packets come and go through the correct network interface (ip route, ip a, ip neigh).
ERR_SSL_PROTOCOL_ERROR
You are trying to access the HTTPS version on a port expecting HTTP.
Check the address in the browser; it should start with http://.
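The errors above can also be diagnosed from a terminal. A sketch using standard tools; replace toucan.example.com with your own domain:

```shell
# Check that the DNS record resolves to your machine's public IP
dig +short toucan.example.com
# Check what answers on port 80; expect an HTTP status line (404 or 200)
curl -sI http://toucan.example.com | head -n 1
```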
Configure TLS with cert-manager
This guide uses cert-manager to manage the TLS certificates for the ingress controller. cert-manager automatically fetches certificates from ACME (Automatic Certificate Management Environment) servers, such as Let's Encrypt.
If you plan to manage certificates externally, you might be interested in the External Secrets Operator, a service used to import secrets securely from external sources.
Lastly, if you prefer to self-manage certificates, you can use kubectl to import secrets directly.
helm install \
cert-manager jetstack/cert-manager \
--repo https://charts.jetstack.io \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
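Before going further, you can check that cert-manager started correctly by listing its pods; its three deployments (controller, webhook, cainjector) should all be Running:

```shell
# -n: namespace
kubectl get pods -n cert-manager
```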
To quickly set up certificates with Let's Encrypt, we can deploy a ClusterIssuer using the HTTP01 challenge method.
Create a file named cluster-issuer.yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: http01
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory
    preferredChain: 'ISRG Root X1'
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      # This is your identity with your ACME provider. Any secret name
      # may be chosen. It will be populated with data automatically,
      # so generally nothing further needs to be done with
      # the secret. If you lose this identity/secret, you will be able to
      # generate a new one and generate certificates for any/all domains
      # managed using your previous account, but you will be unable to revoke
      # any certificates generated using that previous account.
      name: example-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
Deploy it:
kubectl apply -f cluster-issuer.yaml
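You can then verify that the issuer registered successfully with the ACME server; the READY column should show True:

```shell
kubectl get clusterissuer http01
# If READY is not True, inspect the status conditions for the reason:
kubectl describe clusterissuer http01
```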
Deploy Toucan Stack
Create a namespace
Create a Kubernetes namespace to deploy the Toucan Stack Helm charts in.
kubectl create namespace toucan
Namespaces are used to avoid name conflicts between different projects. Since we are deploying a stack of services, we can use the same namespace for all of them, and avoid conflicting with your own projects.
Deploy the Quay registry credentials
After gaining access to Toucan Toco's Quay registry, you can store the credentials in Kubernetes by running the following command:
# Replace <username> and <password> with your credentials
# docker-registry: The type of secret to create.
# --namespace: The namespace to create the secret in.
# dockerconfigjson: The name of the secret to create.
# --docker-server: The server address of the registry.
# --docker-username: The username for the registry.
# --docker-password: The password for the registry.
kubectl create secret docker-registry --namespace toucan dockerconfigjson \
--docker-server=quay.io \
--docker-username=<username> \
--docker-password=<password>
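To double-check what was stored, you can decode the secret. The data key is named .dockerconfigjson, and the leading dot needs escaping in the jsonpath expression:

```shell
# Print the decoded Docker config stored in the secret
kubectl get secret --namespace toucan dockerconfigjson \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
```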
Deploy the Curity secret
You should have a JSON file in this format:
{
  "Company": "[email protected]",
  "Edition": "Community",
  "Environment": "",
  "Expires": "2025-12-13",
  "Feature": "",
  "GracePeriod": "30",
  "Groups": "1",
  "Issued": "2024-12-13",
  "Licence": "ey...",
  "License": "ey...",
  "Product": "Identity Server",
  "Tier": "Subscription",
  "Type": "Production",
  "Version": "4.3",
  "expired": false
}
Copy the value from the License or Licence field, and create the secret with:
# Replace <value> with your license
# generic: The type of secret to create.
# --namespace: The namespace to create the secret in.
# toucan-curity: The name of the secret to create.
# --from-literal: The key and value of the secret to create.
kubectl create secret generic --namespace toucan toucan-curity \
--from-literal=license=<value>
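If you prefer not to copy the value by hand, you can extract it from the JSON file with jq. A sketch; license.json is a hypothetical name for the file shown above, and jq must be installed:

```shell
# jq -r: print the raw string value, without surrounding quotes
LICENSE_VALUE=$(jq -r '.License' license.json)
kubectl create secret generic --namespace toucan toucan-curity \
  --from-literal=license="$LICENSE_VALUE"
```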
Deploy the Helm charts
Since we are using Helm, we can patch the necessary values to inject the credentials and secrets. We also need to expose the service to the external network and secure it with TLS.
Create the values file, which will override the default values.
touch values.override.yaml # You can name it whatever you want
Add these lines to values.override.yaml to inject the registry credentials:
global:
  imagePullSecrets:
    - dockerconfigjson
Add these lines to inject the Curity secret:
# ...
curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license
Add these lines to select your storage provisioner:
global:
  # ...
  defaultStorageClass: local-path
  # ...
(Optional) Override the volume size:
# ...
laputa:
  persistence:
    size: 10Gi
curity:
  # ...
  admin:
    persistence:
      size: 8Gi
postgresql:
  primary:
    persistence:
      size: 10Gi
mongodb:
  persistence:
    size: 8Gi
Expose the Toucan Stack by adding these lines:
global:
  # ...
  ## global.hostname configures the helm chart to use toucan.example.com as the "public" domain.
  hostname: toucan.example.com
nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: http01 # http01 references the previously created ClusterIssuer
    extraTls:
      - hosts:
          - toucan.example.com
        secretName: 'toucan.example.com-cert' # This secret will be generated.
curity:
  # ...
  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      annotations:
        cert-manager.io/cluster-issuer: http01 # http01 references the previously created ClusterIssuer
      extraTls:
        - hosts:
            - auth-toucan.example.com
          secretName: 'auth-toucan.example.com' # This secret will be generated.
At this point, your values.override.yaml should look like this (minus the volume size overrides):
global:
  imagePullSecrets:
    - dockerconfigjson
  defaultStorageClass: local-path
  hostname: toucan.example.com
nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: http01
    extraTls:
      - hosts:
          - toucan.example.com
        secretName: 'toucan.example.com-cert'
curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license
  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      annotations:
        cert-manager.io/cluster-issuer: http01
      extraTls:
        - hosts:
            - auth-toucan.example.com
          secretName: 'auth-toucan.example.com'
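Before installing, you can render the chart locally to catch indentation or schema errors in your overrides. A sketch, assuming Helm >= 3.8 for OCI registry support; helm template makes no changes on the cluster:

```shell
# Render the manifests locally; a non-zero exit means the values are invalid
helm template toucan-stack oci://quay.io/toucantoco/charts/toucan-stack \
  --namespace toucan \
  --values ./values.override.yaml > /dev/null && echo "values OK"
```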
Deploy the Toucan Stack:
helm upgrade --install toucan-stack oci://quay.io/toucantoco/charts/toucan-stack \
--namespace toucan \
--values ./values.override.yaml
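The first deployment pulls several images and can take a few minutes. You can watch the pods come up; all of them should eventually reach Running or Completed:

```shell
# --watch: stream updates until you interrupt with Ctrl-C
kubectl get pods --namespace toucan --watch
```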
To get the Admin password, run the following command:
kubectl get secret --namespace toucan toucan-stack-auth -o jsonpath='{.data.toucan-admin-password}' | base64 --decode
You should be able to access the Toucan Stack at https://toucan.example.com and log in with the admin credentials. Enter [email protected] for the username, and the password you got from the previous step.