Commit

Merge pull request #15 from santhoshkvuda/070223
Release v2.0.1
santhoshkvuda authored Feb 7, 2023
2 parents 1a44e24 + 8e59c52 commit 1da38c7
Showing 41 changed files with 11,985 additions and 213 deletions.
57 changes: 57 additions & 0 deletions .gitignore
@@ -0,0 +1,57 @@
.DS_Store

####
## Ignore PEM files
####

**.pem

####
## gitignore for terraform artifacts
####

# Local .terraform directories
**/.terraform/*

## Terraform lock files
*.terraform.lock.hcl

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log
crash.*.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# passwords, private keys, and other secrets. These should not be part of version
# control, as they are potentially sensitive and subject to change depending on
# the environment.
*.tfvars
*.tfvars.json

# Include sample tfvars
!terraform-sample.tfvars

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using negated pattern
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

# Ignore CLI configuration files
.terraformrc
terraform.rc

# Ignore util dir
logan/util/*


11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,16 @@
# Change Log

## 2023-02-07
### Added
- Create a new mount (rw) using the value provided for baseDir.
- Expose "encoding" parameter of Fluentd's tail plugin as part of values.yaml, which allows users to override default encoding (ASCII-8BIT) for applicable logs/log types.
- Partial CRI logs handling.
- Oracle Resource Manager / Terraform support for deploying the solution.
### Changed
- Modified /var/log to mount as read-only by default, except when /var/log is set as baseDir (to store Fluentd state, buffers, etc.).
### Breaking Changes
- The Logging Analytics Fluentd output plugin log location is now derived from baseDir instead of the value of fluentd:ociLoggingAnalyticsOutputPlugin:plugin_log_location. The default value remains unchanged, so this change is breaking only if plugin_log_location was set to a custom value.

## 2022-08-30
### Added
- Helm chart templatisation/parameterisation to provide granular level control on the chart and its values.
129 changes: 127 additions & 2 deletions README.md
@@ -4,10 +4,13 @@

This provides an end-to-end monitoring solution for Oracle Container Engine for Kubernetes (OKE) and other forms of Kubernetes Clusters using Logging Analytics, Monitoring and other Oracle Cloud Infrastructure (OCI) Services.

![Sample Services Dashboard](https://user-images.githubusercontent.com/80283985/153080889-62b30482-5a9c-4244-92e3-e7a4df5ba33e.png)
![Kubernetes Cluster Summary Dashboard](logan/images/kubernetes-cluster-summary-dashboard.png)

![Kubernetes Nodes Dashboard](logan/images/kubernetes-nodes-dashboard.png)

![Topology Based Exploration](https://user-images.githubusercontent.com/80283985/153081174-f22dcf71-d994-4dc5-ad42-9f424c3f1573.png)
![Kubernetes Workloads Dashboard](logan/images/kubernetes-workloads-dashboard.png)

![Kubernetes Pods Dashboard](logan/images/kubernetes-pods-dashboard.png)

## Logs

@@ -75,6 +78,26 @@ The following are the list of objects supported at present:

## Installation Instructions

### Deploy using Oracle Resource Manager

> **_NOTE:_** If you aren't already signed in, enter the tenancy and user credentials when prompted. Review and accept the terms and conditions. If you aren't on-boarded to OCI Logging Analytics, refer to the [Pre-requisites](#pre-requisites) section to enable Logging Analytics in the region where you want to deploy the stack. The default container image available through the deployment is intended for demo/non-production use cases only; we recommend referring to the [Docker Image](#docker-image) section to build your own image.
- Click to deploy the stack

[![Deploy to Oracle Cloud][orm_button]][oci_kubernetes_monitoring_stack]

- Select the region and compartment where you want to deploy the stack.

- Follow the on-screen prompts and instructions to create the stack.

- After creating the stack, click Terraform Actions, and select Plan.

- Wait for the job to be completed, and review the plan.

- To make any changes, return to the Stack Details page, click Edit Stack, and make the required changes. Then, run the Plan action again.

- If no further changes are necessary, return to the Stack Details page, click Terraform Actions, and select Apply.

### Pre-requisites

- Logging Analytics Service must be enabled in the given OCI region before trying out the following Solution. Refer [Logging Analytics Quick Start](https://docs.oracle.com/en-us/iaas/logging-analytics/doc/quick-start.html) for details.
@@ -393,3 +416,105 @@ subjects:
name: <serviceaccount>
namespace: <namespace>
```

### How to set encoding for logs?

**Note**: This is supported only through the helm chart based deployment.

By default, the Fluentd tail plugin used to collect the various logs has its encoding set to ASCII-8BIT. To override the default encoding, use one of the following approaches.

#### Global level

Set a value for encoding under the fluentd:tailPlugin section of values.yaml; this applies to all logs collected from the cluster.

```
fluentd:
...
...
tailPlugin:
...
...
encoding: <ENCODING-VALUE>
```

#### Specific log type level

The encoding can also be set at the level of individual log types such as kubernetesSystem, linuxSystem, and genericContainerLogs; this applies to all logs under that log type.

```
fluentd:
...
...
kubernetesSystem:
...
...
encoding: <ENCODING-VALUE>
```

```
fluentd:
...
...
genericContainerLogs:
...
...
encoding: <ENCODING-VALUE>
```

#### Specific log level

The encoding can also be set at the individual log level, which takes precedence over all other levels.

```
fluentd:
...
...
kubernetesSystem:
...
...
logs:
kube-proxy:
encoding: <ENCODING-VALUE>
```

```
fluentd:
...
...
customLogs:
custom-log1:
...
...
encoding: <ENCODING-VALUE>
...
...
```
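Putting the three levels together in a single values.yaml fragment: the per-log setting takes precedence over the log-type setting, which in turn takes precedence over the global one. (The encoding values below are purely illustrative.)

```
fluentd:
  tailPlugin:
    encoding: UTF-8             # global: applies to all tailed logs
  kubernetesSystem:
    encoding: UTF-8             # log type: overrides the global value for kubernetesSystem logs
    logs:
      kube-proxy:
        encoding: ISO-8859-1    # individual log: overrides both, for kube-proxy alone
```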

## Importing Logging Analytics Kubernetes Dashboards

The dashboards are imported automatically when the Kubernetes solution is deployed using the [Oracle Resource Manager stack](#deploy-using-oracle-resource-manager). The following steps can be used to import the dashboards manually into your tenancy.

1. Download and configure the [OCI CLI](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm), or open Cloud Shell, where the OCI CLI is pre-installed. Alternative methods such as the REST API, SDKs, or Terraform can also be used.
1. Find the **OCID** of the compartment where the dashboards need to be imported.
1. Download the dashboard JSONs from [here](logan/terraform/oke/modules/dashboards/dashboards_json/).
1. **Replace** all instances of the keyword "`${compartment_ocid}`" in the JSONs with the **Compartment OCID** identified in STEP 2.
   - The following commands can be used for quick reference in a Linux/Cloud Shell environment:

```
sed -i "s/\${compartment_ocid}/<Replace-with-Compartment-OCID>/g" cluster.json
sed -i "s/\${compartment_ocid}/<Replace-with-Compartment-OCID>/g" node.json
sed -i "s/\${compartment_ocid}/<Replace-with-Compartment-OCID>/g" workload.json
sed -i "s/\${compartment_ocid}/<Replace-with-Compartment-OCID>/g" pod.json
```
1. Run the following commands to import the dashboards.

```
oci management-dashboard dashboard import --from-json file://cluster.json
oci management-dashboard dashboard import --from-json file://node.json
oci management-dashboard dashboard import --from-json file://workload.json
oci management-dashboard dashboard import --from-json file://pod.json
```
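For illustration, the substitution step can be exercised end-to-end on a tiny stand-in file. The compartment OCID and `sample.json` below are hypothetical placeholders; the real dashboard JSONs downloaded in step 3 are processed the same way.

```
# Hypothetical compartment OCID, for illustration only
COMPARTMENT_OCID="ocid1.compartment.oc1..exampleuniqueid"

# Tiny stand-in for one of the downloaded dashboard JSONs
printf '{"compartmentId": "${compartment_ocid}"}\n' > sample.json

# Same substitution as in step 4, parameterised via a shell variable
sed -i "s/\${compartment_ocid}/${COMPARTMENT_OCID}/g" sample.json

cat sample.json
# → {"compartmentId": "ocid1.compartment.oc1..exampleuniqueid"}
```

Note that `sed -i` as written is GNU sed syntax (as found in Cloud Shell); on macOS/BSD sed, use `sed -i ''` instead.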

[orm_button]: https://oci-resourcemanager-plugin.plugins.oci.oraclecloud.com/latest/deploy-to-oracle-cloud.svg

[oci_kubernetes_monitoring_stack]: https://cloud.oracle.com/resourcemanager/stacks/create?zipUrl=https://github.com/oracle-quickstart/oci-kubernetes-monitoring/releases/latest/download/oci-kubernetes-monitoring-stack.zip
2 changes: 1 addition & 1 deletion logan/helm-chart/Chart.yaml
@@ -15,7 +15,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 2.0.0
version: 2.0.1

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
77 changes: 75 additions & 2 deletions logan/helm-chart/templates/configmap-logs.yaml
@@ -46,7 +46,7 @@ data:
config_file_location {{ .Values.oci.path }}/{{ .Values.oci.file }}
profile_name "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.profile_name }}"
{{- end }}
plugin_log_location "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.plugin_log_location }}"
plugin_log_location "{{ .Values.fluentd.baseDir }}"
plugin_log_level "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.plugin_log_level }}"
plugin_log_file_size "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.plugin_log_file_size }}"
plugin_log_file_count "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.plugin_log_file_count }}"
@@ -90,6 +90,13 @@ data:
pos_file {{ $.Values.fluentd.baseDir }}/oci_la_fluentd_outplugin/pos/{{ $name }}.logs.pos
tag oci.oke.{{ $name }}.*
read_from_head "{{ $.Values.fluentd.tailPlugin.readFromHead }}"
{{- if $logDefinition.encoding }}
encoding {{ $logDefinition.encoding }}
{{- else if $.Values.fluentd.kubernetesSystem.encoding }}
encoding {{ $.Values.fluentd.kubernetesSystem.encoding }}
{{- else if $.Values.fluentd.tailPlugin.encoding }}
encoding {{ $.Values.fluentd.tailPlugin.encoding }}
{{- end }}
<parse>
{{- if eq $runtime "docker" }}
@type json
@@ -134,6 +141,20 @@ data:
tag ${tag}
</record>
</filter>
# Concat filter to handle partial logs in CRI/ContainerD
# Docker can also have partial logs but handling is different for different docker versions. Considering Kubernetes/OKE moved to ContainerD/CRI since last 4-5 releases, ignoring docker handling.
# This filter can not be clubbed with concat filter for multiline as both are mutually exclusive.
{{- if eq $runtime "cri" }}
<filter oci.oke.{{ $name }}.**>
@type concat
key message
use_partial_cri_logtag true
partial_cri_logtag_key logtag
partial_cri_stream_key stream
# timeout scenario should not occur in general for partial logs handling
timeout_label "@NORMAL"
</filter>
{{- end }}
{{- if $logDefinition.multilineStartRegExp }}
# Concat filter to handle multi-line log records.
<filter oci.oke.{{ $name }}.**>
@@ -159,6 +180,13 @@ data:
pos_file {{ $.Values.fluentd.baseDir }}/oci_la_fluentd_outplugin/pos/{{ $name }}.logs.pos
tag oci.oke.{{ $name }}.*
read_from_head "{{ $.Values.fluentd.tailPlugin.readFromHead }}"
{{- if $logDefinition.encoding }}
encoding {{ $logDefinition.encoding }}
{{- else if $.Values.fluentd.linuxSystem.encoding }}
encoding {{ $.Values.fluentd.linuxSystem.encoding }}
{{- else if $.Values.fluentd.tailPlugin.encoding }}
encoding {{ $.Values.fluentd.tailPlugin.encoding }}
{{- end }}
<parse>
{{- if $logDefinition.multilineStartRegExp }}
@type multiline
@@ -217,6 +245,13 @@ data:
pos_file {{ .Values.fluentd.baseDir }}/oci_la_fluentd_outplugin/pos/syslog.logs.pos
tag oci.oke.syslog.messages.**
read_from_head "{{ .Values.fluentd.tailPlugin.readFromHead }}"
{{- if .Values.fluentd.linuxSystem.logs.syslog.encoding }}
encoding {{ .Values.fluentd.linuxSystem.logs.syslog.encoding }}
{{- else if .Values.fluentd.linuxSystem.encoding }}
encoding {{ .Values.fluentd.linuxSystem.encoding }}
{{- else if .Values.fluentd.tailPlugin.encoding }}
encoding {{ .Values.fluentd.tailPlugin.encoding }}
{{- end }}
<parse>
@type multiline
format_firstline {{ .Values.fluentd.linuxSystem.logs.syslog.multilineStartRegExp }}
@@ -325,6 +360,11 @@ data:
pos_file {{ $.Values.fluentd.baseDir }}/oci_la_fluentd_outplugin/pos/{{ $name }}.logs.pos
tag oci.oke.{{ $name }}.*
read_from_head "{{ $.Values.fluentd.tailPlugin.readFromHead }}"
{{- if $logDefinition.encoding }}
encoding {{ $logDefinition.encoding }}
{{- else if $.Values.fluentd.tailPlugin.encoding }}
encoding {{ $.Values.fluentd.tailPlugin.encoding }}
{{- end }}
<parse>
{{- if eq "false" ($logDefinition.isContainerLog | toString) }}
{{- if $logDefinition.multilineStartRegExp }}
@@ -368,7 +408,20 @@ data:
tag ${tag}
</record>
</filter>
# Concat filter to handle partial logs in CRI/ContainerD
# Docker can also have partial logs but handling is different for different docker versions. Considering Kubernetes/OKE moved to ContainerD/CRI since last 4-5 releases, ignoring docker handling.
# This filter can not be clubbed with concat filter for multiline as both are mutually exclusive.
{{- if and (ne "false" ($logDefinition.isContainerLog | toString)) (eq $runtime "cri") }}
<filter oci.oke.{{ $name }}.**>
@type concat
key message
use_partial_cri_logtag true
partial_cri_logtag_key logtag
partial_cri_stream_key stream
# timeout scenario should not occur in general for partial logs handling
timeout_label "@NORMAL"
</filter>
{{- end }}
{{- if and (ne "false" ($logDefinition.isContainerLog | toString)) ($logDefinition.multilineStartRegExp) }}
# Concat filter to handle multi-line log records.
<filter oci.oke.{{ $name }}.**>
@@ -397,6 +450,11 @@ data:
read_from_head "{{ .Values.fluentd.tailPlugin.readFromHead }}"
# Modify the exclude path once a specific container log config is explictly defined to avoid duplicate collection.
exclude_path [{{ $excludePath }}]
{{- if .Values.fluentd.genericContainerLogs.encoding }}
encoding {{ .Values.fluentd.genericContainerLogs.encoding }}
{{- else if .Values.fluentd.tailPlugin.encoding }}
encoding {{ .Values.fluentd.tailPlugin.encoding }}
{{- end }}
<parse>
{{- if eq $runtime "docker" }}
@type json
@@ -449,6 +507,21 @@ data:
</filter>
{{- end }}
# Concat filter to handle partial logs in CRI/ContainerD
# Docker can also have partial logs but handling is different for different docker versions. Considering Kubernetes/OKE moved to ContainerD/CRI since last 4-5 releases, ignoring docker handling.
# This filter can not be clubbed with concat filter for multiline as both are mutually exclusive.
{{- if eq $runtime "cri" }}
<filter oci.oke.containerlogs.**>
@type concat
key message
use_partial_cri_logtag true
partial_cri_logtag_key logtag
partial_cri_stream_key stream
# timeout scenario should not occur in general for partial logs handling
timeout_label "@NORMAL"
</filter>
{{- end }}
#customFluentd config
{{- if .Values.fluentd.customFluentdConf }}
{{- include "common.tplvalues.render" (dict "value" .Values.fluentd.customFluentdConf "context" $) | nindent 4 }}
4 changes: 2 additions & 2 deletions logan/helm-chart/templates/configmap-objects.yaml
@@ -18,7 +18,7 @@ data:
config_file_location {{ .Values.oci.path }}/{{ .Values.oci.file }}
profile_name "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.profile_name }}"
{{- end }}
plugin_log_location "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.plugin_log_location }}"
plugin_log_location "{{ .Values.fluentd.baseDir }}"
plugin_log_level "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.plugin_log_level }}"
plugin_log_file_size "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.plugin_log_file_size }}"
plugin_log_file_count "{{ .Values.fluentd.ociLoggingAnalyticsOutputPlugin.plugin_log_file_count }}"
@@ -103,4 +103,4 @@ data:
tag ${tag}
</record>
</filter>
{{- end }}
{{- end }}