Merge branch 'develop' into translations-a5d7ba7d
sunnyzanchi committed Oct 30, 2024
2 parents e4b761f + 3701611 commit f0f5ef2
Showing 30 changed files with 658 additions and 641 deletions.
@@ -33,7 +33,7 @@ We monitor and collect [metrics](/docs/integrations/kubernetes-integration/under

## Control plane component [#component]

The task of monitoring the Kubernetes control plane is a responsibility of the `nrk8s-controlplane` component, which by default is deployed as a DaemonSet. This component is automatically deployed to master nodes, through the use of a default list of `nodeSelectorTerms` which includes labels commonly used to identify master nodes, such as `node-role.kubernetes.io/control-plane` or `node-role.kubernetes.io/master`. Regardless, this selector is exposed in the `values.yml` file and therefore can be reconfigured to fit other environments.
Monitoring the Kubernetes control plane is the responsibility of the `nrk8s-controlplane` component, which is deployed as a DaemonSet by default. This component is automatically deployed to control plane nodes through a default list of `nodeSelectorTerms` that includes labels commonly used to identify control plane nodes, such as `node-role.kubernetes.io/control-plane`. This selector is exposed in the `values.yml` file, so it can be reconfigured to fit other environments.

Clusters that don't have any node matching these selectors won't get any pod scheduled, so no resources are wasted; this is functionally equivalent to disabling control plane monitoring altogether by setting `controlPlane.enabled` to `false` in the Helm chart.
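
For example, a minimal sketch of turning control plane monitoring off from the command line, assuming the `newrelic-infrastructure` chart is installed standalone under the release name and namespace shown (both hypothetical; when installing through the `nri-bundle` chart, the key is prefixed with the subchart name):

```shell
# Hypothetical release name and namespace; --reuse-values keeps previously set
# values (such as the license key) and only overrides controlPlane.enabled.
helm upgrade newrelic-infrastructure newrelic/newrelic-infrastructure \
  --namespace newrelic \
  --reuse-values \
  --set controlPlane.enabled=false
```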

@@ -47,7 +47,7 @@ Each component of the control plane has a dedicated section, which allows to ind

<img
title="Diagram showing a possible configuration scraping etcd with mTLS and API server with bearer Token."
alt="Diagram showing a possible configuration scraping etcd with mTLS and API server with bearer Token. The monitoring is a DaemonSet deployed on master nodes only."
alt="Diagram showing a possible configuration scraping etcd with mTLS and API server with bearer Token. The monitoring is a DaemonSet deployed on control plane nodes only."
src="/images/kubernetes_diagram_integration-cp.webp"
/>

@@ -153,7 +153,7 @@ Our integration accepts a secret with the following keys:

These certificates should be signed by the same CA etcd is using to operate.

How to generate these certificates is out of the scope of this documentation, as it will vary greatly between different Kubernetes distribution. Please refer to your distribution's documentation to see how to fetch the required etcd peer certificates. In Kubeadm, for example, they can be found in `/etc/kubernetes/pki/etcd/peer.{crt,key}` in the master node.
How to generate these certificates is outside the scope of this documentation, as it varies greatly between Kubernetes distributions. Please refer to your distribution's documentation to see how to fetch the required etcd peer certificates. In kubeadm, for example, they can be found in `/etc/kubernetes/pki/etcd/peer.{crt,key}` on the control plane node.

Once you have located or generated the etcd peer certificates, rename the files to match the keys we expect to be present in the secret, and create the secret in the cluster.
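
A rough sketch of that step, with a placeholder secret name and namespace; the key names used below are also placeholders, so substitute the exact key names the integration expects:

```shell
# Placeholder secret name, namespace, and key names; use the key names the
# integration expects and the peer certificate paths from your distribution.
kubectl create secret generic etcd-peer-certs \
  --namespace newrelic \
  --from-file=cert=/etc/kubernetes/pki/etcd/peer.crt \
  --from-file=key=/etc/kubernetes/pki/etcd/peer.key
```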

@@ -138,37 +138,33 @@ If you're running version 2, check out these common Kubernetes integration error
<CollapserGroup>
<Collapser
id="invalid-license"
title="Check that the master nodes have the correct labels"
title="Check that the control plane nodes have the correct labels"
>
Execute the following commands to manually find the master nodes:
Execute the following command to manually find the control plane nodes:
```shell
kubectl get nodes -l node-role.kubernetes.io/master=""
kubectl get nodes -l node-role.kubernetes.io/control-plane=""
```
```shell
kubectl get nodes -l kubernetes.io/role="master"
```
If the master nodes follow the labeling convention defined in the [Control plane component](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring/#component), you should get some output like:
If the control plane nodes follow the labeling convention defined in the [Control plane component](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring/#component), you should get some output like:
```shell
NAME STATUS ROLES AGE VERSION
ip-10-42-24-4.ec2.internal Ready master 42d v1.14.8
NAME STATUS ROLES AGE VERSION
ip-10-42-24-4.ec2.internal Ready control-plane 42d v1.14.8
```
If no nodes are found, there are two scenarios:
Your master nodes don't have the required labels that identify them as masters. In this case, you need to add both labels to your master nodes.
Your control plane nodes don't have the required labels that identify them as control plane nodes. In this case, you need to add the required label to your control plane nodes.
You're in a managed cluster and your provider is handling the master nodes for you. In this case, there is nothing you can do, since your provider is limiting the access to those nodes.
You're in a managed cluster and your provider is handling the control plane nodes for you. In this case, there's nothing you can do, since your provider limits access to those nodes.
</Collapser>
<Collapser
id="unable-connect"
title="Check that the integration is running on the master nodes"
title="Check that the integration is running on the control plane nodes"
>
To identify an integration pod running on a master node, replace `NODE_NAME` in the following command with one of the node names listed in the previous step:
To identify an integration pod running on a control plane node, replace `NODE_NAME` in the following command with one of the node names listed in the previous step:
```shell
kubectl get pods --field-selector spec.nodeName=NODE_NAME -l name=newrelic-infra --all-namespaces
@@ -177,7 +173,7 @@ If you're running version 2, check out these common Kubernetes integration error
The next command is the same as the previous one, except that it selects the node for you:
```shell
kubectl get pods --field-selector spec.nodeName=$(kubectl get nodes -l node-role.kubernetes.io/master="" -o jsonpath="{.items[0].metadata.name}") -l name=newrelic-infra --all-namespaces
kubectl get pods --field-selector spec.nodeName=$(kubectl get nodes -l node-role.kubernetes.io/control-plane="" -o jsonpath="{.items[0].metadata.name}") -l name=newrelic-infra --all-namespaces
```
If everything is correct, you should get some output like:
@@ -187,7 +183,7 @@ If you're running version 2, check out these common Kubernetes integration error
newrelic-infra-whvzt 1/1 Running 0 6d20h
```
If the integration is not running on your master nodes, check that the daemonset has all the desired instances running and ready.
If the integration is not running on your control plane nodes, check that the DaemonSet has all the desired instances running and ready.
```shell
kubectl get daemonsets -l app=newrelic-infra --all-namespaces
@@ -198,7 +194,7 @@ If you're running version 2, check out these common Kubernetes integration error
id="indicators"
title="Check that the control plane components have the required labels"
>
Refer to the [discovery of master nodes and control plane components documentation section](/docs/integrations/kubernetes-integration/installation/configure-control-plane-monitoring#discover-nodes-components) and look for the labels the integration uses to discover the components. Then run the following commands to see if there are any pods with such labels and the nodes where they are running:
Refer to the [discovery of control plane nodes and components documentation section](/docs/integrations/kubernetes-integration/installation/configure-control-plane-monitoring#discover-nodes-components) and look for the labels the integration uses to discover the components. Then run the following commands to see if there are any pods with such labels and the nodes where they are running:
```shell
kubectl get pods -l k8s-app=kube-apiserver --all-namespaces
@@ -228,9 +224,9 @@ If you're running version 2, check out these common Kubernetes integration error
<Collapser
id="cannot-list-pods-for-cluster"
title="Retrieve the verbose logs of one of the integrations running on a master node and check for the control plane components jobs"
title="Retrieve the verbose logs of one of the integrations running on a control plane node and check for the control plane components jobs"
>
To retrieve the logs, follow the instructions on [get logs from pod running on a master node](/docs/integrations/kubernetes-integration/troubleshooting/get-logs-version). The integration logs for every component the following message `Running job: COMPONENT_NAME`. Fro example:
To retrieve the logs, follow the instructions on [get logs from pod running on a control plane node](/docs/integrations/kubernetes-integration/troubleshooting/get-logs-version). The integration logs the following message for every component: `Running job: COMPONENT_NAME`. For example:
```shell
Running job: scheduler
@@ -270,7 +266,7 @@ If you're running version 2, check out these common Kubernetes integration error
The following command does the same as the previous one, but also chooses the pod for you:
```shell
kubectl exec -ti $(kubectl get pods --all-namespaces --field-selector spec.nodeName=$(kubectl get nodes -l node-role.kubernetes.io/master="" -o jsonpath="{.items[0].metadata.name}") -l name=newrelic-infra -o jsonpath="{.items[0].metadata.name}") -- wget -O - localhost:10251/metrics
kubectl exec -ti $(kubectl get pods --all-namespaces --field-selector spec.nodeName=$(kubectl get nodes -l node-role.kubernetes.io/control-plane="" -o jsonpath="{.items[0].metadata.name}") -l name=newrelic-infra -o jsonpath="{.items[0].metadata.name}") -- wget -O - localhost:10251/metrics
```
If everything is correct, you should get some metrics in the Prometheus format, something like this:
@@ -28,9 +28,9 @@ Please note that these versions had less flexible autodiscovery options, and d

In versions lower than v3, when the integration is deployed using `privileged: false`, the `hostNetwork` setting for the control plane component will also be set to `false`.

### Discovery of master nodes and control plane components [#discover-nodes-components]
### Discovery of control plane nodes and control plane components [#discover-nodes-components]

The Kubernetes integration relies on the [`kubeadm`](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) labeling conventions to discover the master nodes and the control plane components. This means that master nodes should be labeled with `node-role.kubernetes.io/master=""` or `kubernetes.io/role="master"`.
The Kubernetes integration relies on the [`kubeadm`](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) labeling conventions to discover the control plane nodes and the control plane components. This means that control plane nodes should be labeled with `node-role.kubernetes.io/control-plane=""`.
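
If a node that hosts control plane components is missing this label, a minimal sketch of adding it by hand (the node name is hypothetical):

```shell
# Hypothetical node name; applies the label the integration uses for discovery.
kubectl label node my-control-plane-node node-role.kubernetes.io/control-plane=""
```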

The control plane components should have either the `k8s-app` or the `tier` and `component` labels. See this table for accepted label combinations and values:

@@ -158,11 +158,11 @@ The control plane components should have either the `k8s-app` or the `tier` and
</tbody>
</table>

When the integration detects that it's running inside a master node, it tries to find which components are running on the node by looking for pods that match the labels listed in the table above. For every running component, the integration makes a request to its metrics endpoint.
When the integration detects that it's running inside a control plane node, it tries to find which components are running on the node by looking for pods that match the labels listed in the table above. For every running component, the integration makes a request to its metrics endpoint.
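
To approximate that discovery step by hand, you can query for control plane pods using one of the accepted label combinations; the values below follow kubeadm defaults and may differ in your distribution:

```shell
# List scheduler pods by the kubeadm-style labels and show the node they run on.
kubectl get pods --all-namespaces -l tier=control-plane,component=kube-scheduler -o wide
```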

### Configuration

Control plane monitoring is automatic for agents running inside master nodes. The only component that requires an extra step to run is etcd, because it uses mutual TLS authentication (mTLS) for client requests. The API Server can also be configured to be queried using the [Secure Port](https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips).
Control plane monitoring is automatic for agents running inside control plane nodes. The only component that requires an extra step to run is etcd, because it uses mutual TLS authentication (mTLS) for client requests. The API Server can also be configured to be queried using the [Secure Port](https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips).
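
To see how the API server is configured on a kubeadm-style cluster, for example whether a `--secure-port` flag is set, you can print the static pod's command line; the label and flag follow kubeadm defaults and may differ in your distribution:

```shell
# Print the kube-apiserver command line from its static pod spec and look for
# flags such as --secure-port.
kubectl get pod -n kube-system -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}'
```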

<Callout variant="important">
Control plane monitoring for [OpenShift](http://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001OH7iAAG) 4.x requires additional configuration. For more information, see the [OpenShift 4.x Configuration](#openshift-4x-configuration) section.
@@ -424,27 +424,21 @@ If you want to generate verbose logs and get version and configuration informati

<Collapser
id="logs-pod-kubestatemetrics"
title="Get logs from a pod running on a master node"
title="Get logs from a pod running on a control plane node"
>
To get the logs from a pod running on a master node:
To get the logs from a pod running on a control plane node:

1. Get the nodes that are labelled as master:
1. Get the nodes that are labeled as control plane:

```shell
kubectl get nodes -l node-role.kubernetes.io/master=""
```

Or,

```shell
kubectl get nodes -l kubernetes.io/role="master"
kubectl get nodes -l node-role.kubernetes.io/control-plane=""
```

Look for output similar to this:

```shell
NAME STATUS ROLES AGE VERSION
ip-10-42-24-4.ec2.internal Ready master 42d v1.14.8
NAME STATUS ROLES AGE VERSION
ip-10-42-24-4.ec2.internal Ready control-plane 42d v1.14.8
```

2. Get the New Relic pods that are running on one of the nodes returned in the previous step:
@@ -52,7 +52,7 @@ If you see a different result, follow the Kubernetes documentation to [enable ad

### Network requirements [#network-req]

For Kubernetes to talk to our `MutatingAdmissionWebhook`, the master node (or API server container, depending on how the cluster is set up) should allow egress for HTTPS traffic on port 443 to pods in all other nodes in the cluster.
For Kubernetes to talk to our `MutatingAdmissionWebhook`, the control plane node (or API server container, depending on how the cluster is set up) should allow egress for HTTPS traffic on port 443 to pods on all other nodes in the cluster.

This may require specific configuration depending on how your infrastructure is set up (on-premises, AWS, Google Cloud, etc.).
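
Once the integration is installed, a quick sanity check that the webhook is registered at all (this doesn't verify network reachability, and the webhook's exact name depends on how the chart was installed):

```shell
# List registered mutating webhooks and filter loosely for the New Relic one.
kubectl get mutatingwebhookconfigurations | grep -i newrelic
```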

@@ -71,7 +71,7 @@ Our integration is compatible and is continuously tested on the following Kubern
</td>
<td>
1.26 to 1.30
1.27 to 1.31
</td>
</tr>
</tbody>
@@ -25,7 +25,7 @@ The [`nr-k8s-otel-collector`](https://github.com/newrelic/helm-charts/tree/maste

* **DaemonSet collector**: Deployed on each worker node and responsible for gathering metrics from the node's underlying host, `cAdvisor`, and the `Kubelet`, and for collecting logs from the containers.

* **Deployment collector**: Deployed on the master node and responsible for gathering metrics of Kube state metrics and Kubernetes cluster events.
* **Deployment collector**: Deployed on the control plane node and responsible for gathering metrics from `kube-state-metrics` and collecting Kubernetes cluster events.

<img
title="K8s OpenTelemetry diagram"
@@ -47,7 +47,7 @@ You've completed the [Kubernetes installation procedure](/install/kubernetes/) a

In this case you can change the authentication behavior for each endpoint with the `controlplane.config.[component].autodiscover[].endpoints[].auth` config of the Helm [chart values](https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml).

* It's also possible that the controlplane component of the integration is not running on all master nodes. You can doublecheck that running this command:
* It's also possible that the controlplane component of the integration is not running on all control plane nodes. You can double-check that by running this command:
```bash
kubectl get pod -n <NEWRELIC_NAMESPACE> -l app.kubernetes.io/component=controlplane -o wide
@@ -0,0 +1,25 @@
---
title: 'Deprecation notice: Kubernetes'
subject: Kubernetes integration
releaseDate: '2024-10-29'
---

Effective Tuesday, October 29, 2024, our Kubernetes integration drops support for Kubernetes v1.26 and lower. The Kubernetes integration v3.30.0 and higher will only be compatible with Kubernetes versions 1.27 and higher. For more information, read this note or contact your account team.

## Background [#bg]

Enabling compatibility with the latest Kubernetes versions and adding new features to our Kubernetes offering prevents us from offering first-class support for versions v1.26 and lower.

## What's happening [#whats-happening]

* Most major Kubernetes cloud providers have already deprecated v1.26 and lower.

## What do you need to do [#what-to-do]

It's easy: [Upgrade your Kubernetes clusters](/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration#update) to a supported version.
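
If you're not sure which version your cluster and nodes are running, a quick check:

```shell
# Confirm the API server and the kubelet on every node report 1.27 or later.
kubectl version
kubectl get nodes
```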

## What happens if you don't make any changes to your account [#account]

The Kubernetes integration may continue to work with unsupported versions. However, we can't guarantee the quality of the solution as new releases may cause some incompatibilities.

Please note that we won't accept support requests for versions that have reached the end-of-life stage.
@@ -0,0 +1,23 @@
---
subject: Browser agent
releaseDate: "2024-10-28"
version: 1.270.2
features: []
bugs: ["Correct naming for logging pageUrl attribute"]
security: []
---

## v1.270.2

### Bug fixes

#### Correct naming for logging pageUrl attribute
Corrects naming for the logging pageUrl attribute by using the original URL of the page instead of the URL at the time the event occurs. Removes the origin attribute from the runtime model object.

## Support statement

New Relic recommends that you upgrade the agent regularly to ensure that you're getting the latest features and performance benefits. Older releases will no longer be supported when they reach [end-of-life](https://docs.newrelic.com/docs/browser/browser-monitoring/getting-started/browser-agent-eol-policy/). Release dates are reflective of the original publish date of the agent version.

New browser agent releases are rolled out to customers in small stages over a period of time. Because of this, the date the release becomes accessible to your account may not match the original publish date. Please see this [status dashboard](https://newrelic.github.io/newrelic-browser-agent-release/) for more information.

Consistent with our [browser support policy](https://docs.newrelic.com/docs/browser/new-relic-browser/getting-started/compatibility-requirements-browser-monitoring/#browser-types), v1.270.2 of the Browser agent was built for and tested against these browsers and version ranges: Chrome 119-129, Edge 119-129, Safari 16-17, and Firefox 121-131. For mobile devices, v1.270.2 was built and tested for Android OS 15 and iOS Safari 16-18.