
Enabled service to have multiple ports, and the serviceMonitor to mon… #988

Merged
merged 6 commits into aws:master on Sep 1, 2023

Conversation

tmakamure
Contributor

@tmakamure tmakamure commented Aug 25, 2023

Enabled the service to expose multiple ports and the ServiceMonitor to monitor multiple endpoints on the service; added relevant configuration examples as comments in values.yaml and updated the chart README for the new fields.

Issue

#987

Description of changes

  • Updated the serviceMonitor.yaml template to allow for more service endpoints to be monitored
  • Updated the service.yaml template to allow for more ports to be exposed on the fluent-bit service (see the template sketch after this list)
  • Updated the values.yaml file to add the new config keys, and comments on how to configure them
  • Updated the README file of the chart to add configuration notes for the new config fields
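
As a rough sketch of the templating approach (illustrative only, pieced together from the diff excerpts quoted in the review comments below and the rendered output in the Testing section; field names and exact layout in the merged templates may differ):

# serviceMonitor.yaml (sketch): append any extra endpoints after the default one
  endpoints:
  - port: monitor-agent
    scheme: http
    path: {{ .Values.serviceMonitor.telemetryPath }}
    interval: {{ .Values.serviceMonitor.interval }}
    scrapeTimeout: {{ .Values.serviceMonitor.timeout }}
{{- with .Values.serviceMonitor.extraEndpoints }}
{{- toYaml . | nindent 2 }}
{{- end }}

# service.yaml (sketch): render extra ports explicitly, field by field, after the default port
  ports:
  - name: monitor-agent
    port: {{ .Values.serviceMonitor.service.port }}
    protocol: TCP
    targetPort: {{ .Values.serviceMonitor.service.targetPort }}
{{- if .Values.serviceMonitor.service.extraPorts }}
{{- range .Values.serviceMonitor.service.extraPorts }}
  - name: {{ .name }}
    port: {{ .port }}
    protocol: {{ .protocol }}
    targetPort: {{ .targetPort }}
{{- end }}
{{- end }}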

Checklist

  • Added/modified documentation as required (such as the README.md for modified charts)
  • Incremented the chart version in Chart.yaml for the modified chart(s)
  • Manually tested. Describe what testing was done in the testing section below
  • Make sure the title of the PR is a good description that can go into the release notes

Testing

After making the changes to the chart, I ran helm template . in the chart directory and inspected the output to ensure it produces the desired result. An excerpt of the output is shown below:

---
# Source: aws-for-fluent-bit/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: aws-for-fluent-bit-0.1.28
    app.kubernetes.io/name: aws-for-fluent-bit
    app.kubernetes.io/instance: release-name-aws-for-fluent-bit
    app.kubernetes.io/version: "2.31.11"
    app.kubernetes.io/managed-by: Helm
  name: release-name-aws-for-fluent-bit
  namespace: default
spec:
  ports:
  - name: monitor-agent
    port: 2020
    protocol: TCP
    targetPort: 2020
  - name: metrics
    targetPort: 2021
    protocol: TCP
    port: 2021
  selector:
    app.kubernetes.io/name: aws-for-fluent-bit
    app.kubernetes.io/instance: release-name-aws-for-fluent-bit
  sessionAffinity: None
  type: ClusterIP
---
# Source: aws-for-fluent-bit/templates/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: release-name-aws-for-fluent-bit
  namespace: default
spec:
  endpoints:
  - port: monitor-agent
    scheme: http
    interval: 30s
    path: /api/v1/metrics/prometheus
    scrapeTimeout: 10s
  - interval: 10s
    path: /metrics
    port: metrics
    scheme: http
    scrapeTimeout: 10s
  jobLabel: app.kubernetes.io/instance
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      app.kubernetes.io/name: aws-for-fluent-bit
      app.kubernetes.io/instance: release-name-aws-for-fluent-bit

Prerequisites to the Testing Procedure

Changes in Values

  • Before testing the chart, the values.yaml file needs to be updated as below:

NB: Note that we remove the [] placeholders on lines 371 and 387 of values.yaml and set the service monitor to enabled.

serviceMonitor:
  service: 
    type: ClusterIP
    port: 2020
    targetPort: 2020
    extraPorts: # Removing the empty array placeholder [] because we are now populating the array
      - port: 2021
        targetPort: 2021
        protocol: TCP
        name: metrics
  ## When set true then use a ServiceMonitor to configure scraping
  enabled: true # Enabling the serviceMonitor
  interval: 30s
  telemetryPath: /api/v1/metrics/prometheus
  labels:
    # app: example-application
    # teamname: example
  timeout: 10s
  relabelings: []
  targetLabels: []
  metricRelabelings: []
  extraEndpoints: # Removing the empty array placeholder [] because we are now populating the array
    - port: metrics
      path: /metrics
      interval: 30s
      scrapeTimeout: 10s
      scheme: http
  • In addition, the ServiceMonitor object is defined by a CRD provided by the Prometheus community. If you are running helm template locally with Helm 3.x, please see below:

Templating locally

Helm does not automatically recognise CRDs; to be able to output the template including CRD-backed resources, we need to:

  • Download the ServiceMonitor CRD from the Prometheus Operator repo: https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd-full/monitoring.coreos.com_servicemonitors.yaml
  • Add this YAML file to the chart directory, either in the root alongside Chart.yaml and values.yaml or under the templates directory. This helps Helm understand the CRD resource.
  • Next, on line two of the serviceMonitor.yaml template there is a condition: {{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.serviceMonitor.enabled ) }} - which checks whether the cluster advertises the "monitoring.coreos.com/v1" API capability. For a dry run, since we are not installing on an actual cluster with the relevant API available, we may temporarily disable that check by changing the and to an or; the temporarily modified line is shown after the values excerpt at the end of this list. (If you are test-installing on an actual cluster, please see the next heading.)
  • Once all of this is set up, you can run helm template . and inspect the output for kind: Service and kind: ServiceMonitor.
  • Revert the condition in the serviceMonitor.yaml template back to and, and delete the ServiceMonitor CRD file from the chart directory.
  • In values.yaml, comment out lines 372-375 and 388-392, put back the empty array placeholders [] on lines 371 and 387, and set enabled: false for the serviceMonitor so that the config looks like below:
serviceMonitor:
  service: 
    type: ClusterIP
    port: 2020
    targetPort: 2020
    extraPorts: []
      # - port: 2021
      #   targetPort: 2021
      #   protocol: TCP
      #   name: metrics
  ## When set true then use a ServiceMonitor to configure scraping
  enabled: false
  interval: 30s
  telemetryPath: /api/v1/metrics/prometheus
  labels:
    # app: example-application
    # teamname: example
  timeout: 10s
  relabelings: []
  targetLabels: []
  metricRelabelings: []
  extraEndpoints: []
    # - port: metrics
    #   path: /metrics
    #   interval: 30s
    #   scrapeTimeout: 10s
    #   scheme: http
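
For reference, the temporarily relaxed capability check from the dry-run step above would read as follows (this is the only change to that line, and it must be reverted to and afterwards):

{{- if or ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.serviceMonitor.enabled ) }}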

Installing on test cluster

If you are testing by installing the Helm chart on a cluster, you need to ensure that the prometheus-operator CRDs are installed on the cluster first, in our case especially the ServiceMonitor CRD.

Example using a minikube cluster
serviceMonitor:
  service: 
    type: ClusterIP
    port: 2020
    targetPort: 2020
    extraPorts:
      - port: 2021
        targetPort: 2021
        protocol: TCP
        name: metrics
  ## When set true then use a ServiceMonitor to configure scraping
  enabled: true
  interval: 30s
  telemetryPath: /api/v1/metrics/prometheus
  labels:
    # app: example-application
    # teamname: example
  timeout: 10s
  relabelings: []
  targetLabels: []
  metricRelabelings: []
  extraEndpoints: 
    - port: metrics
      path: /metrics
      interval: 30s
      scrapeTimeout: 10s
      scheme: http
  • Run helm install aws-for-fluentbit . from the chart directory.
  • Run kubectl get serviceMonitor -n <namespace where chart is deployed> to list the service monitor.
  • Run kubectl describe serviceMonitor <serviceMonitor name from previous output> -n <the namespace where it is installed> and inspect the output.
  • To see the YAML output, run kubectl get serviceMonitor <serviceMonitor name> -n <namespace of serviceMonitor> -o yaml and inspect the output.
  • Do the same for the Service resource.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

tmakamure added 3 commits August 25, 2023 19:15
…itor multiple endpoints, added relevant configuration examples as comments to values.yaml
| `serviceMonitor.interval`| Set how frequently Prometheus should scrape |`30s`|
| `serviceMonitor.telemetryPath`| Set path to scrape metrics from |`/api/v1/metrics/prometheus`|
| `serviceMonitor.labels`| Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator |`[]`|
| `serviceMonitor.timeout`| Set timeout for scrape |`10s`|
| `serviceMonitor.relabelings`| Set relabel_configs as per [details](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) |`[]`|
| `serviceMonitor.targetLabels`| Set of labels to transfer on the Kubernetes Service onto the target. |`[]`|
| `serviceMonitor.metricRelabelings`| MetricRelabelConfigs to apply to samples before ingestion. |`[]`|
| `serviceMonitor.extraEndpoints`| Extra endpoints on the fluen-bit service for the serviceMonitor to monitor |`[]`|
Contributor

the fluen-bit service

Typo

Contributor Author

Good spot - fixing that now

PettitWesley
PettitWesley previously approved these changes Aug 25, 2023
Contributor

@PettitWesley PettitWesley left a comment

LGTM. Thanks!

@PettitWesley
Contributor

@tmakamure you need to update the chart version to get CI to pass:

❌ Chart aws-for-fluent-bit/ has the same Chart version as the last release 0.1.29
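
Bumping the version means editing Chart.yaml in the chart root, for example (a sketch; the version number below is hypothetical, any value newer than the last release works):

# Chart.yaml (excerpt, sketch)
name: aws-for-fluent-bit
version: 0.1.30   # hypothetical next version, must be greater than the last released 0.1.29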

Comment on lines +32 to +34
{{- end }}
{{- with .Values.serviceMonitor.extraEndpoints }}
{{- toYaml . | nindent 2 }}
Contributor

Is this template correct?

I tried generating it, and it didn't work:

This was in values.yaml

  metricRelabelings: []
  extraEndpoints: []
      - port: metrics
        path: /metrics
        interval: 30s
        scrapeTimeout: 10s
        scheme: http

And this was the error output from the template dry run:

Error: failed to parse stable/aws-for-fluent-bit/values.yaml: error converting YAML to JSON: yaml: line 387: did not find expected key
helm.go:84: [debug] error converting YAML to JSON: yaml: line 387: did not find expected key
failed to parse stable/aws-for-fluent-bit/values.yaml
helm.sh/helm/v3/pkg/cli/values.(*Options).MergeValues
	helm.sh/helm/v3/pkg/cli/values/options.go:55
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:212
main.newTemplateCmd.func2
	helm.sh/helm/v3/cmd/helm/template.go:82
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:990
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:918
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_amd64.s:1594

Contributor Author

@PettitWesley I have amended the test instructions, see below; the template is correct. The parse error above comes from keeping the [] placeholder while also adding list items underneath it, which is invalid YAML, so the placeholder has to be removed when the array is populated.
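
Copied from the amended values excerpt above, the populated block should read:

  extraEndpoints:
    - port: metrics
      path: /metrics
      interval: 30s
      scrapeTimeout: 10s
      scheme: http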

Contributor Author

@PettitWesley I have also updated the chart version

@@ -11,6 +11,14 @@ spec:
port: {{ .Values.serviceMonitor.service.port }}
protocol: TCP
targetPort: {{ .Values.serviceMonitor.service.targetPort }}
{{- if .Values.serviceMonitor.service.extraPorts }}
Contributor

I think the template for the extraEndpoints below needs to be similar to this, explicitly modelling each field.

Contributor Author

@PettitWesley the template is correct, I double-checked now. I have amended the instructions on how to do the test run; see the updated Prerequisites to the Testing Procedure section in the description above (I should have included that in the PR message too).


Contributor

@PettitWesley PettitWesley left a comment

Thanks!

@PettitWesley PettitWesley merged commit 463b7a4 into aws:master Sep 1, 2023
1 check passed
@PettitWesley
Contributor

I ran the dry run again without setting any of the new values, which shows that the current state of values.yaml and the templates won't break anyone who doesn't use these new fields (no regressions).
