# Deploy Toucan using Helm Charts

In this section, we will deploy Toucan using Helm Charts. We'll assume this configuration:

* Traffic is exposed **publicly**.
* **Public DNS** is configured to forward `auth-toucan.example.com` and `toucan.example.com` to the machine IP.

If you have an air-gapped environment, we recommend reading this guide: [Toucan - Air-Gapped Deployment](https://docs-v3.toucantoco.com/self-hosted-toucan/getting-started/air-gapped).

{% hint style="danger" %}
**NOTE**: This guide helps you deploy a simple "one-shot" "all-in-one" Toucan Stack, which might not be suitable for production.

We **heavily** recommend using an external PostgreSQL database, as the embedded one might not be suitable for production:

* Follow this guide to connect to your external database: [Toucan - External Database](https://docs-v3.toucantoco.com/self-hosted-toucan/configuration/external-database)
* If you still wish to deploy PostgreSQL inside Kubernetes, we recommend using [CloudNativePG](https://cloudnative-pg.io/):
  * Supports failover and multiple standby replicas.
  * Supports backups and restores.
  * Supports migrating data from another PostgreSQL instance.
  * Supports audit logging, monitoring, and more.
{% endhint %}

## (optional) Self-hosting a Kubernetes distribution

Since we are introducing Kubernetes with the v3 release, this section will help you deploy a Kubernetes distribution on your infrastructure. Note that if your cloud provider offers a managed Kubernetes distribution, we heavily recommend using it. Some cloud providers like [Scaleway](https://www.scaleway.com/en/kubernetes-kapsule/) also offer a free Kubernetes control plane.

**We will opt for the most standard deployment that remains simple, so you can easily maintain it and find documentation online.**

We chose [k3s](https://k3s.io), an easy-to-use Kubernetes distribution, which is also suitable for production deployments on any machine.

### Installing k3s

{% stepper %}
{% step %}
**SSH into the host(s)**

{% code title="shell: /work/" overflow="wrap" %}

```shell
ssh root@<host>
```

{% endcode %}
{% endstep %}

{% step %}
**Install the k3s control plane**

In K3s, control nodes are also worker nodes. To choose between a single-node and a multi-node control plane, consider your fault tolerance requirements.

**The worker plane does not impact the availability of the control plane.** A fully failing worker plane will not bring down the control plane. This is why some cloud providers offer a free control plane.

**A control plane tolerates** $$\lfloor\frac{N-1}{2}\rfloor$$ **failures.** A failing control plane brings down the entire cluster.

Here's a list of recommendations based on the number of nodes available:

| Nodes | Setup                                                           |
| ----- | --------------------------------------------------------------- |
| **1** | 1 Control+Worker                                                |
| **2** | 1 Control+Worker, 1 Worker                                      |
| **3** | 1 Control+Worker, 2 Workers                                     |
| **4** | 1 Control, 3 Workers                                            |
| **5** | 3 Control, 2 Workers; Or 2 Control, 1 Control+Worker, 2 Workers |
| ...   | ...                                                             |

Basically, if you plan to run a highly available control plane, make sure at least $$\lfloor\frac{N-1}{2}\rfloor$$ of your nodes are dedicated control nodes.

{% tabs %}
{% tab title="Single node control plane" %}
![Single node control plane.](https://1809014303-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FZxYYf1KpgarKMgMsDCrw%2Fuploads%2Fgit-blob-212834085801cce12b27ccf97e91cbcb2c2817a6%2Fcontrolplane.drawio.png?alt=media)

Simply run:

{% code title="shell: root\@control-1:\~/" overflow="wrap" %}

```shell
# curl: Download a file from the web
# -s: silent
# -f: fail silently
# -L: follow redirects
# | sh -: run the script while downloading
# --disable=traefik: disable embedded Traefik
curl -sfL https://get.k3s.io | sh -s - --disable=traefik
```

{% endcode %}

Traefik is disabled here because we'll deploy NGINX Ingress controller in another step.

**The script is idempotent and can be used to upgrade an existing installation.** By default, the configuration targets a single-node control plane (SQLite is used as the datastore).

If you prefer to use etcd, you can run the following command:

{% code title="shell: root\@control-1:\~/" overflow="wrap" %}

```shell
# curl: Download a file from the web
# -s: silent
# -f: fail silently
# -L: follow redirects
# | sh -: run the script while downloading
# --disable=traefik: disable embedded Traefik
# --cluster-init: initialize the etcd cluster
curl -sfL https://get.k3s.io | sh -s - --disable=traefik \
  --cluster-init
```

{% endcode %}

You can also run this command to **migrate from SQLite to etcd.**
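
A quick, hedged way to confirm etcd is now in use (the path below is the k3s default data directory; adjust it if you changed the data directory):

{% code title="shell: root\@control-1:\~/" overflow="wrap" %}

```shell
# After the migration, the etcd data directory should exist and be populated
ls /var/lib/rancher/k3s/server/db/etcd
```

{% endcode %}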
{% endtab %}

{% tab title="High available control plane" %}
{% hint style="info" %}
**Note:** While it may seem attractive to use a highly available control plane, it does entail additional overhead in terms of [maintenance](https://etcd.io/docs/v3.5/op-guide/clustering/) and [disaster recovery](https://etcd.io/docs/v3.5/op-guide/recovery/). You **MUST** have at least $$\lfloor\frac{N}{2}\rfloor+1$$ control plane nodes healthy at all times to keep quorum.
{% endhint %}

![Highly available control plane.](https://1809014303-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FZxYYf1KpgarKMgMsDCrw%2Fuploads%2Fgit-blob-38560526a132dfdad5ab0d2a49995f40c176c475%2Fhacontrolplane.drawio.png?alt=media)

If you need to set up etcd, you'll need an **odd number of nodes**. The fault tolerance is $$\lfloor\frac{N-1}{2}\rfloor$$ (3 nodes = 1 failure, 5 nodes = 2 failures, ...), to avoid a [quorum](https://en.wikipedia.org/wiki/Quorum) loss during leader election.

Then run the following command:

{% code title="shell: root\@control-1:\~/" overflow="wrap" %}

```shell
# curl: Download a file from the web
# -s: silent
# -f: fail silently
# -L: follow redirects
# | K3S_TOKEN=SECRET sh -: run the script while downloading, and load the server shared secret
# --disable=traefik: disable embedded Traefik
# --cluster-init: initialize the etcd cluster
# --tls-san: add a Subject Alternative Name to the server certificate.
# Replace <FIXED_IP> with the IP used by the other nodes to contact this node.
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - --disable=traefik \
  --cluster-init \
  --tls-san=<FIXED_IP> # Optional, needed if using a fixed registration address
```

{% endcode %}

Traefik is disabled here because we'll deploy NGINX Ingress controller in another step.

Make the second node join the etcd cluster:

{% code title="shell: root\@control-2:\~/" overflow="wrap" %}

```shell
# curl: Download a file from the web
# -s: silent
# -f: fail silently
# -L: follow redirects
# | K3S_TOKEN=SECRET sh -: run the script while downloading, and load the server shared secret
# --disable=traefik: disable embedded Traefik
# --server: join the existing etcd cluster via control-1
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - --disable=traefik \
  --server https://<ip or hostname of control-1>:6443
```

{% endcode %}

**Do the same for the third node.**

Read more on [k3s - High Availability Embedded etcd](https://docs.k3s.io/datastore/ha-embedded).

**The script is idempotent and can be used to upgrade an existing installation.**
{% endtab %}
{% endtabs %}

{% hint style="info" %}
By default, K3s mixes the control plane and the worker plane.

If you wish to separate the control plane from the worker plane, add a [taint](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/):

```shell
curl -sfL https://get.k3s.io | sh -s - --disable=traefik \
  ... \
  --node-taint node-role.kubernetes.io/control-plane=true:NoSchedule
```

{% endhint %}
{% endstep %}

{% step %}
**(optional) Install the k3s worker plane**

To join a cluster, fetch the token from the control plane:

{% code title="shell: root\@control-0:\~/" overflow="wrap" %}

```shell
# cat: display the content of a file
cat /var/lib/rancher/k3s/server/node-token
```

{% endcode %}

And use it on the worker node:

{% code title="shell: root\@worker-0:\~/" overflow="wrap" %}

```shell
# K3S_URL and K3S_TOKEN: the control plane address and the node token fetched above
curl -sfL https://get.k3s.io | K3S_URL=https://<ip or hostname of control-0>:6443 K3S_TOKEN=mynodetoken sh -
```

{% endcode %}
{% endstep %}
{% endstepper %}

#### Fetch the super admin Kubernetes credentials

{% code title="shell: root\@control-0:\~/" overflow="wrap" %}

```shell
cat /etc/rancher/k3s/k3s.yaml
```

{% endcode %}

**Merge the configuration** into your Kubernetes configuration `~/.kube/config`:

{% code title="yaml: \~/.kube/config.yaml" overflow="wrap" %}

```yaml
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: LS0t...
      server: https://<ip or hostname of control-0>:6443
    name: default # Can be renamed
contexts:
  - context:
      cluster: default # points to clusters.cluster[].name
      user: default # points to users.user[].name
    name: default # Can be renamed
current-context: default # points to contexts.context[].name
kind: Config
preferences: {}
users:
  - name: default # Can be renamed
    user:
      client-certificate-data: LS0t...
      client-key-data: LS0t...
```

{% endcode %}
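
If you prefer not to edit the file by hand, `kubectl` can merge kubeconfig files for you. A minimal sketch, assuming you copied the k3s config to `/tmp/k3s.yaml` and already replaced the server address inside it:

{% code title="shell: /work/" overflow="wrap" %}

```shell
# Back up the current configuration first
cp ~/.kube/config ~/.kube/config.bak

# Merge both files and flatten the result into a single config
KUBECONFIG=~/.kube/config:/tmp/k3s.yaml kubectl config view --flatten > /tmp/merged.yaml
mv /tmp/merged.yaml ~/.kube/config
```

{% endcode %}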

{% hint style="info" %}
**The directory `~/.kube` doesn't exist?**

Run:

{% code title="yaml: \~/" overflow="wrap" %}

```shell
# Create the directory
mkdir -p ~/.kube

# Create the file
touch ~/.kube/config

# Make the directory only readable by the current user
chmod 700 ~/.kube

# Make the file only readable by the current user
chmod 600 ~/.kube/config
```

{% endcode %}
{% endhint %}

Replace the server address with the address of your control node (for example, the address you use for SSH), and allow port 6443/tcp on your firewall.
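
For example, with `ufw` (a sketch; adapt to your distribution's firewall tooling):

{% code title="shell: root\@control-0:\~/" overflow="wrap" %}

```shell
# Allow the Kubernetes API server port
ufw allow 6443/tcp
```

{% endcode %}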

Access is protected using mTLS, which is as strong as a VPN or SSH. **Be sure not to share your credentials with anyone.**

To test the connection, run:

{% code title="shell: /work/" overflow="wrap" %}

```shell
kubectl get nodes
```

{% endcode %}

You should get something similar to:

{% code title="shell: \~/" %}

```shell
NAME            STATUS   ROLES                  AGE      VERSION
controller-0    Ready    control-plane,master   1d       v1.32.3+k3s1
```

{% endcode %}

**You can close the SSH port and use `kubectl` directly.** Note that the client or server certificates may rotate, in which case you would need to refetch the credentials.

### Install NGINX Ingress Controller

{% hint style="info" %}
Since you've installed k3s, by default, you have [ServiceLB](https://docs.k3s.io/networking/networking-services#service-load-balancer) ready, which will allow you to expose services to the public.

If you are hosting k3s on a cloud, you might also be interested in using the Cloud Controller Manager of your cloud provider to link your cluster to your cloud. Follow this [guide](https://docs.k3s.io/networking/networking-services#deploying-an-external-cloud-controller-manager) to disable the built-in Cloud Controller Manager. You'll need to follow the guide of your cloud provider to deploy the Cloud Controller Manager.

If you opt for a Kubernetes distribution without ServiceLB and cannot use a Cloud Controller Manager, you might be interested in [MetalLB](https://metallb.io).
{% endhint %}

If `kubectl` is ready, `helm` is ready too: it talks to the cluster through the same kubeconfig (assuming `helm` is installed locally).

Run:

{% code title="shell: /work/" overflow="wrap" %}

```shell
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```

{% endcode %}

That's it! To check if your ingress controller is ready, check the health of the service:

{% code title="shell: /work/" overflow="wrap" %}

```shell
# svc: service
# -n: namespace
kubectl get svc -n ingress-nginx
```

{% endcode %}

You should get something similar to:

{% code title="shell: /work/" %}

```shell
# LoadBalancer: The service is exposed externally and should have an external IP.
# ClusterIP: The service is only exposed internally.
# PORT(S) <left>:<right>: the left is the port exposed on the load balancer (and internally); the right is the NodePort opened on every node.
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.43.181.110   192.168.122.249   80:32601/TCP,443:32465/TCP   42s
ingress-nginx-controller-admission   ClusterIP      10.43.46.48     <none>            443/TCP                      42s
```

{% endcode %}

The most important one is the `LoadBalancer` type, which means you can access the ingress controller through your public IP. Look at the `EXTERNAL-IP` field and check that it matches your network interface IP (run `ip a` on the host and check the IP of `eth0`). If it is empty, there is an issue with the Cloud Controller Manager, or with MetalLB/ServiceLB.
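
To read the external IP directly, here is a hedged one-liner using kubectl's JSONPath output:

{% code title="shell: /work/" overflow="wrap" %}

```shell
# Print the IP assigned to the ingress controller's LoadBalancer service
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

{% endcode %}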

{% hint style="info" %}
**Have you configured the DNS?**

Now that ports 80 and 443 are open, you can test your DNS records by accessing your domains: <http://toucan.example.com> and <http://auth-toucan.example.com>, where you should get either a 404 error or a 200 success.

If you get neither, here's a list of errors and possible solutions:
{% endhint %}

| Error                                                 | Cause                                                                                                        | Solution                                                                                                                                                                                                                                                                                                                                         |
| ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `ERR_NAME_NOT_RESOLVED`/`DNS_PROBE_FINISHED_NXDOMAIN` | The domain name does not exist or the DNS cannot resolve it.                                                 | <p>Reconfigure your DNS.<br>You should set an A record with the key: <code>toucan.example.com.</code> and the value <code>\<your machine IP></code></p>                                                                                                                                                                                          |
| `ERR_CONNECTION_REFUSED`                              | The host is reachable, but port 80 is closed. No application is listening on this port.                      | Check the health of your ingress deployment.                                                                                                                                                                                                                                                                                                     |
| `ERR_CONNECTION_TIMED_OUT`                            | <p>A firewall is blocking the request.<br><br>Or, a misconfigured routing table is failing the response.</p> | <p>Check the firewall on your cloud or on the machine (<code>iptables</code>, <code>nftables</code>, <code>ufw</code>). Look for inbound and outbound traffic.<br><br>Check the machine routing table. Look if the packets come and go from the correct network interface. (<code>ip route</code>, <code>ip a</code>, <code>ip neigh</code>)</p> |
| `ERR_SSL_PROTOCOL_ERROR`                              | Trying to access the HTTPS version on a port expecting HTTP.                                                 | Check the address in the browser; it should start with `http://`.                                                                                                                                                                                                                                                                                |
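
You can also diagnose from a terminal. A minimal sketch, assuming `dig` and `curl` are installed:

{% code title="shell: /work/" overflow="wrap" %}

```shell
# The A record should resolve to your machine IP
dig +short toucan.example.com

# The ingress controller should answer (a 404 from NGINX is expected at this stage)
curl -sI http://toucan.example.com
```

{% endcode %}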

## Deploy Toucan Stack

{% stepper %}
{% step %}
**Create a namespace**

Create a Kubernetes namespace to deploy the Toucan Stack Helm charts in.

{% code title="shell: /work/" overflow="wrap" %}

```shell
kubectl create namespace toucan
```

{% endcode %}

Namespaces are used to avoid name conflicts between different projects. Since we are deploying a stack of services, we can use the same namespace for all of them, and avoid conflicting with your own projects.
{% endstep %}

{% step %}
**Deploy the Quay registry credentials to Kubernetes and login with Helm**

After gaining access to Toucan Toco's Quay registry, you can store the credentials in Kubernetes by running the following command:

{% code title="shell: /work/" overflow="wrap" %}

```shell
# Replace <username> and <password> with your credentials
# docker-registry: The type of secret to create.
# --namespace: The namespace to create the secret in.
# dockerconfigjson: The name of the secret to create.
# --docker-server: The server address of the registry.
# --docker-username: The username for the registry.
# --docker-password: The password for the registry.
kubectl create secret docker-registry --namespace toucan dockerconfigjson \
  --docker-server=quay.io \
  --docker-username=<username> \
  --docker-password=<password>
```

{% endcode %}

This will allow Kubernetes to fetch container images from the Quay registry.
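
To verify the secret (a hedged check; `.dockerconfigjson` is the standard data key for this secret type):

{% code title="shell: /work/" overflow="wrap" %}

```shell
# Decode the stored registry credentials
kubectl get secret --namespace toucan dockerconfigjson \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
```

{% endcode %}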

To sign in to the Quay registry with Helm, run the following command:

{% code title="shell: /work/" overflow="wrap" %}

```shell
helm registry login quay.io
```

{% endcode %}

This will allow Helm to fetch the Toucan Stack Helm charts from the Quay registry.

{% hint style="info" %}
To fetch your Quay credentials, you can generate an encrypted password on Quay.io:

1. Go to **Account Settings**.
2. Go to the "Gear Menu" ![gear](https://1809014303-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FZxYYf1KpgarKMgMsDCrw%2Fuploads%2Fgit-blob-6e55fb7e4deda003226c680ea4e10a177f84c825%2Fgear.png?alt=media) on the left side menu.
3. Click on "Generate Encrypted Password"

   ![Fetching Quay encrypted password](https://1809014303-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FZxYYf1KpgarKMgMsDCrw%2Fuploads%2Fgit-blob-3e77abb521882dbd5a451d4734c5abc40175ac6b%2Fquaycreds.png?alt=media)

   Use your username and the encrypted password in the `--docker-username` and `--docker-password` flags.
{% endhint %}
{% endstep %}

{% step %}
**Deploy the Curity secret**

You should have a JSON file in this format:

{% code title="json: Subscription\_YYY-MM-DD.json" overflow="wrap" %}

```json
{
  "Company": "user@example.com",
  "Edition": "Community",
  "Environment": "",
  "Expires": "2025-12-13",
  "Feature": "",
  "GracePeriod": "30",
  "Groups": "1",
  "Issued": "2024-12-13",
  "Licence": "ey...",
  "License": "ey...",
  "Product": "Identity Server",
  "Tier": "Subscription",
  "Type": "Production",
  "Version": "4.3",
  "expired": false
}
```

{% endcode %}

Copy the value from the `License` or `Licence` field, and create the secret with:

{% code title="shell: /work/" overflow="wrap" %}

```shell
# Replace <value> with your license
# generic: The type of secret to create.
# --namespace: The namespace to create the secret in.
# toucan-curity: The name of the secret to create.
# --from-literal: The key and value of the secret to create.
kubectl create secret generic --namespace toucan toucan-curity \
  --from-literal=license=<value>
```

{% endcode %}
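
If `jq` is installed, you can extract the license and create the secret in one step (a sketch; the file name is an example):

{% code title="shell: /work/" overflow="wrap" %}

```shell
# Read the License (or Licence) field and pipe it into the secret
kubectl create secret generic --namespace toucan toucan-curity \
  --from-literal=license="$(jq -r '.License // .Licence' Subscription_YYY-MM-DD.json)"
```

{% endcode %}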
{% endstep %}

{% step %}
**Deploy the Helm charts**

Since we are using Helm, we can patch the necessary values to inject the credentials and secrets. We also need to expose the service to the external network and secure it with TLS.

1. Create the values file, which will override the default values.

{% code title="shell: /work/" overflow="wrap" %}

```shell
touch values.override.yaml # You can name it whatever you want
```

{% endcode %}

2. Add these lines to `values.override.yaml` to inject the registry credentials:

{% code title="yaml: /work/values.override.yaml" %}

```yaml
global:
  imagePullSecrets:
    - dockerconfigjson
```

{% endcode %}

3. Add these lines to inject the Curity secret:

{% code title="yaml: /work/values.override.yaml" %}

```yaml
# ...

curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license
```

{% endcode %}

4. Add this line to select your storage provisioner:

{% code title="yaml: /work/values.override.yaml" %}

```yaml
global:
  # ...
  defaultStorageClass: local-path
# ...
```

{% endcode %}

{% hint style="info" %}
You can fetch the available storage classes with:

{% code title="shell: /work/" overflow="wrap" %}

```shell
kubectl get storageclass
```

{% endcode %}

You should see something like this:

{% code title="shell: /work/" %}

```shell
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path           rancher.io/local-path   Delete          WaitForFirstConsumer   false                  121d
```

{% endcode %}
{% endhint %}

5. (Optional) Override the volume size:

{% code title="yaml: /work/values.override.yaml" %}

```yaml
# ...
laputa:
  persistence:
    size: 10Gi

curity:
  # ...
  admin:
    persistence:
      size: 8Gi

postgresql:
  primary:
    persistence:
      size: 10Gi

mongodb:
  persistence:
    size: 8Gi
```

{% endcode %}

{% hint style="info" %}
**NOTE**: This is only useful if you are using a storage provisioner that handles sizing. The `local-path-provisioner` does NOT, so these values have no effect with it; many cloud provider and block storage provisioners do honor them.
{% endhint %}

6. Configure TLS for the Toucan Stack:

{% hint style="info" %}
**SUGGESTION**: We recommend using [cert-manager](https://cert-manager.io/) to issue TLS certificates; it can renew them automatically.

You can also use [external-secrets](https://github.com/external-secrets/external-secrets) to fetch TLS certificates from a secret manager like AWS Secrets Manager, HashiCorp Vault, etc.
{% endhint %}

Create this file, which contains both TLS secrets:

{% code title="yaml: tls-secret.yaml" overflow="wrap" %}

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: toucan.example.com-tls
stringData:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
type: kubernetes.io/tls
---
apiVersion: v1
kind: Secret
metadata:
  name: auth-toucan.example.com-tls
stringData:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
type: kubernetes.io/tls
```

{% endcode %}

Deploy the certificates with:

{% code title="shell: /work/" overflow="wrap" %}

```shell
kubectl apply -n <namespace> -f tls-secret.yaml
```

{% endcode %}
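
Alternatively, if the certificate and key already exist as files, `kubectl create secret tls` can build the secrets for you (a sketch; the file names are assumptions, and note this form does not include `ca.crt`):

{% code title="shell: /work/" overflow="wrap" %}

```shell
kubectl create secret tls toucan.example.com-tls \
  --cert=tls.crt --key=tls.key -n <namespace>
kubectl create secret tls auth-toucan.example.com-tls \
  --cert=auth-tls.crt --key=auth-tls.key -n <namespace>
```

{% endcode %}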

7. Expose the Toucan Stack by adding these lines:

{% code title="yaml: /work/values.override.yaml" %}

```yaml
global:
  # ...

  ## global.hostname configures the Helm chart to use toucan.example.com as the "public" domain.
  hostname: toucan.example.com

canopee:
  ingress:
    enabled: true
    ingressClassName: nginx
    tls: true
    extraTls:
      - hosts:
          - toucan.example.com
        secretName: 'toucan.example.com-tls' # The TLS secret deployed in step 6.

curity:
  # ...
  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      tls: true
      extraTls:
        - hosts:
            - auth-toucan.example.com
          secretName: 'auth-toucan.example.com-tls' # The TLS secret deployed in step 6.
```

{% endcode %}

{% hint style="info" %}
Annotations are used by controllers like cert-manager to trigger side effects.
{% endhint %}

8. At this point, your `values.override.yaml` should look like this (minus the volume size overrides):

{% code title="yaml: /work/values.override.yaml" %}

```yaml
global:
  imagePullSecrets:
    - dockerconfigjson
  defaultStorageClass: local-path
  hostname: toucan.example.com

canopee:
  ingress:
    enabled: true
    ingressClassName: nginx
    tls: true
    extraTls:
      - hosts:
          - toucan.example.com
        secretName: 'toucan.example.com-tls' # The TLS secret deployed in step 6.

curity:
  config:
    license:
      secretName: toucan-curity
      secretKey: license

  runtime:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: auth-toucan.example.com
      tls: true
      extraTls:
        - hosts:
            - auth-toucan.example.com
          secretName: 'auth-toucan.example.com-tls' # The TLS secret deployed in step 6.
```

{% endcode %}

9. Deploy the Toucan Stack:

{% code title="shell: /work/" overflow="wrap" %}

```shell
helm upgrade --install toucan-stack oci://quay.io/toucantoco/charts/toucan-stack \
 --namespace toucan \
 --values ./values.override.yaml
```

{% endcode %}
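
While Helm waits for the post-install hooks, you can watch the pods come up in another terminal:

{% code title="shell: /work/" overflow="wrap" %}

```shell
# -w: watch for changes (Ctrl+C to stop)
kubectl get pods -n toucan -w
```

{% endcode %}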

{% hint style="info" %}
If the installation fails with:

```shell
Error: INSTALLATION FAILED: failed post-install: 1 error occurred:
        * timed out waiting for the condition
```

You should check the health of the deployment: use `kubectl get <deployments/statefulsets/pods> -n toucan` to check its status, and `kubectl logs <pod-name> -c <container-name> -n toucan` to inspect the logs.

We **highly recommend** using a Kubernetes GUI for troubleshooting, such as [Headlamp](https://headlamp.dev).
{% endhint %}

{% hint style="info" %}
If you want to pin the chart version, simply set the `--version` flag:

{% code title="shell: /work/" overflow="wrap" %}

```shell
helm upgrade --install toucan-stack oci://quay.io/toucantoco/charts/toucan-stack \
 --namespace toucan \
 --version v1.0.0 \
 --values ./values.override.yaml
```

{% endcode %}

If you want to customize the values, you can fetch the default values with:

{% code title="shell: /work/" overflow="wrap" %}

```shell
helm show values oci://quay.io/toucantoco/charts/toucan-stack | less
```

{% endcode %}

It's quite long, so we recommend using a YAML editor that can fold or group the values.
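
For example, with [yq](https://github.com/mikefarah/yq) you can inspect a single subtree (a hedged sketch; the `curity` key matches the values used above):

{% code title="shell: /work/" overflow="wrap" %}

```shell
helm show values oci://quay.io/toucantoco/charts/toucan-stack | yq '.curity'
```

{% endcode %}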
{% endhint %}

10. To get the admin password, run the following command:

{% code title="shell: /work/" overflow="wrap" %}

```shell
kubectl get secret --namespace toucan toucan-stack-auth -o jsonpath='{.data.toucan-admin-password}' | base64 --decode
```

{% endcode %}

11. You should be able to access the Toucan Stack at <https://toucan.example.com> and log in with the admin credentials. Enter `admin@example.com` as the username and the password from the previous step.
{% endstep %}
{% endstepper %}
