📄Logs
Most logs are parsable, so you can process them with Filebeat, Fluentd, Vector, etc.
If you wish to read the logs "as-is", you can use kubectl logs -f.
Example:
# kubectl logs -n <namespace> <pod> <container>
kubectl logs -n toucan toucan-stack-laputa-0 laputa
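To follow the logs in real time, you can add -f, and optionally --tail to limit how much history is printed first (the tail value below is just an example):

kubectl logs -f --tail=100 -n toucan toucan-stack-laputa-0 laputa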
We do not provide any log aggregation service in the Helm Charts, because you can fetch all the logs from Kubernetes directly or collect them with your own sidecar container.
Method 1: Fetch logs from Kubernetes directly
For example, the following Vector configuration collects the logs of every container running on the node and forwards them to Elasticsearch:
api:
  address: 0.0.0.0:8686
  enabled: false
  playground: true
data_dir: /vector-data-dir
sources:
  # Fetch logs from all containers
  k8s:
    type: kubernetes_logs
transforms:
  # Convert to JSON, if possible
  parser:
    inputs:
      - k8s
    source: |
      structured, err = parse_json(.message)
      if err == null {
        . = merge!(., structured)
      }
    type: remap
sinks:
  # Forward to Elasticsearch
  elasticsearch:
    api_version: v8
    compression: gzip
    endpoints: ["http://elasticsearch:9200"]
    healthcheck:
      enabled: false
    inputs:
      - parser
    mode: bulk
    request:
      headers:
        AccountID: "0"
        ProjectID: "0"
        VL-Msg-Field: message,msg,_msg,log.msg,log.message,log
        VL-Stream-Fields: stream,kubernetes.pod_name,kubernetes.container_name,kubernetes.pod_namespace
        VL-Time-Field: timestamp
    type: elasticsearch
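One way to deploy this configuration is with Vector's official Helm chart, which runs Vector as a DaemonSet when its role value is set to Agent. A minimal sketch, assuming you put the configuration above under the chart's customConfig value in a file named vector-values.yaml (the file name and namespace below are just examples; check the chart documentation for your chart version):

# vector-values.yaml should contain:
#   role: Agent
#   customConfig:
#     <the Vector configuration shown above>
helm repo add vector https://helm.vector.dev
helm repo update
helm install vector vector/vector \
  --namespace vector --create-namespace \
  --values vector-values.yaml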
Method 2: Fetch logs using a sidecar container
If you prefer to process logs with a sidecar container, you can use the sidecars parameter in the Helm Charts. For example, to run Vector next to the laputa container:
laputa:
  sidecars:
    - name: vector
      image: docker.io/timberio/vector:0.44.0-distroless-libc
      volumeMounts:
        - name: vector-config
          mountPath: /etc/vector
        - name: vector-data-dir
          mountPath: /vector-data-dir
        - name: logs
          mountPath: /logs
  extraVolumeMounts:
    - name: logs
      mountPath: /app/logs
  extraVolumes:
    - name: vector-config
      configMap:
        name: vector-config
    - name: vector-data-dir
      emptyDir: {} # You can change this to `persistentVolumeClaim` if you want
    - name: logs
      emptyDir: {}
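The sidecar above expects a ConfigMap named vector-config, mounted at /etc/vector where Vector looks for its configuration by default. This ConfigMap is not created by the Helm Charts. A minimal sketch, assuming the application writes plain-text log files under /app/logs (shared with the sidecar as /logs through the logs volume) and that you forward them to an Elasticsearch-compatible backend at an example endpoint:

apiVersion: v1
kind: ConfigMap
metadata:
  name: vector-config
  namespace: toucan # same namespace as the pod running the sidecar
data:
  vector.yaml: |
    data_dir: /vector-data-dir
    sources:
      # Read the log files shared through the `logs` volume
      app_logs:
        type: file
        include:
          - /logs/**/*.log
    sinks:
      # Forward to Elasticsearch (example endpoint, adjust to your backend)
      elasticsearch:
        type: elasticsearch
        inputs:
          - app_logs
        endpoints: ["http://elasticsearch:9200"]

With this in place, the application writes its files under /app/logs while the Vector sidecar reads them from /logs through the shared emptyDir volume.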