Kubernetes

Fluentd + Kubernetes

Kubernetes provides two logging endpoints for applications and cluster logs:

  1. Stackdriver Logging, for use with Google Cloud Platform
  2. Elasticsearch

Behind the scenes, there is a logging agent that takes care of the log collection, parsing and distribution: Fluentd.

This document focuses on how to deploy Fluentd in Kubernetes and how to extend its configuration to send your logs to different destinations.

Getting Started

This document assumes that you have a Kubernetes cluster running or at least a local (single) node that can be used for testing purposes.
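
Before moving on, it helps to confirm that kubectl can reach the cluster. As a quick sanity check (this assumes kubectl is installed and configured with a valid kubeconfig for your cluster):

$ kubectl cluster-info
$ kubectl get nodes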

Before getting started, make sure you understand or have a basic idea about the following concepts from Kubernetes:

  • Node

    A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components...

  • Pod

    A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. Pods are always co-located and co-scheduled, and run in a shared context...

  • DaemonSet

    A DaemonSet ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to them. As nodes are removed from the cluster, those pods are garbage collected. Deleting a DaemonSet will clean up the pods it created...
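
These objects can be inspected directly with kubectl. As a rough sketch (resource names and namespaces will differ from cluster to cluster):

$ kubectl get pods --all-namespaces
$ kubectl get daemonsets --namespace kube-system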

Since applications run in Pods, and Pods may exist across multiple nodes, we need a dedicated Fluentd Pod that takes care of log collection on each node: a Fluentd DaemonSet.

Fluentd DaemonSet

For Kubernetes, a DaemonSet ensures that all (or some) nodes run a copy of a pod. To solve log collection, we are going to implement a Fluentd DaemonSet.

Fluentd is flexible and has the proper plugins to distribute logs to different third-party applications, such as databases or cloud services, so the principal question is: Where will the logs be stored? Once that question is answered, we can move forward and configure our DaemonSet.
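
The answer to that question is expressed through Fluentd's output configuration. As a minimal sketch only (this is not the exact configuration shipped in the DaemonSet image; the host, port and options are illustrative), sending everything to Elasticsearch with the elasticsearch output plugin looks like this:

<match **>
  @type elasticsearch
  host elasticsearch-logging
  port 9200
  logstash_format true
</match>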

The following steps will focus on sending the logs to an Elasticsearch Pod:

Get Fluentd DaemonSet sources

We have created a Fluentd DaemonSet that has the proper rules and container image ready to get started:

Please grab a copy of the repository from the command line using Git:

$ git clone https://github.com/fluent/fluentd-kubernetes-daemonset

DaemonSet Content

The cloned repository contains several configurations that allow you to deploy Fluentd as a DaemonSet. The Docker container image distributed with the repository also comes pre-configured so that Fluentd can gather all the logs from the Kubernetes node's environment and append the proper metadata to the logs.

This repository has several presets, based on Alpine or Debian images, for popular outputs.
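
The preset manifests live at the top of the repository checkout and can be listed from the command line. As a rough sketch (exact filenames vary between versions of the repository):

$ cd fluentd-kubernetes-daemonset
$ ls fluentd-daemonset-*.yaml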

Logging to Elasticsearch

Requirements

From the fluentd-kubernetes-daemonset/ directory, find the YAML configuration file for the Elasticsearch output, fluentd-daemonset-elasticsearch.yaml:

As an example, let's see a part of this file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  ...
spec:
    ...
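    # the nested "spec" below is the Pod template's spec (spec.template.spec)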
    spec:
      containers:
      - name: fluentd
        image: quay.io/fluent/fluentd-kubernetes-daemonset
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-logging"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name:  FLUENT_ELASTICSEARCH_SSL_VERIFY
            value: "true"
          - name:  FLUENT_ELASTICSEARCH_SSL_VERSION
            value: "TLSv1_2"
        ...

This YAML file contains four relevant environment variables that are used by Fluentd when the container starts:

Environment Variable | Description | Default
FLUENT_ELASTICSEARCH_HOST | Specify the Elasticsearch host name or IP address. | elasticsearch-logging
FLUENT_ELASTICSEARCH_PORT | Elasticsearch TCP port. | 9200
FLUENT_ELASTICSEARCH_SSL_VERIFY | Whether to verify SSL certificates or not. | true
FLUENT_ELASTICSEARCH_SSL_VERSION | Specify the version of TLS to use. | TLSv1_2

Any required changes must be made in the YAML file before deployment. The defaults assume that at least one Elasticsearch Pod, reachable as elasticsearch-logging, exists in the cluster.
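
Once the YAML file is ready, the DaemonSet can be deployed and verified with kubectl. The following is a minimal sketch that assumes the Elasticsearch manifest from the repository (fluentd-daemonset-elasticsearch.yaml) and the metadata shown above (name fluentd, namespace kube-system):

$ kubectl apply -f fluentd-daemonset-elasticsearch.yaml
$ kubectl get daemonset fluentd --namespace kube-system
$ kubectl logs daemonset/fluentd --namespace kube-system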

If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.