From e34816b569cc796e9c3c863d1e5adb86f20f10ef Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Mon, 28 Aug 2023 11:50:24 -0700 Subject: [PATCH 01/24] fix(Accounts): Add target type query --- .../query-account-audit-logs-nrauditevent.mdx | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) diff --git a/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx b/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx index 060461a31e1..547320282f5 100644 --- a/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx +++ b/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx @@ -17,11 +17,11 @@ redirects: - /docs/data-apis/understand-data/event-data/query-account-audit-logs-nrauditevent --- -As an additional security measure for using and managing New Relic, you can use the `NrAuditEvent` event to view audit logs that show changes in your New Relic organization. +As an additional security measure for using and managing New Relic, you can use the `NrAuditEvent` event to view audit logs that show changes in your New Relic organization. ## What is the `NrAuditEvent`? [#attributes] -The `NrAuditEvent` is created to record some important types of configuration changes you and your users make in your New Relic organization. Data gathered includes the type of account change, what actor made the change, a human-readable description of the action taken, and a timestamp for the change. Reported information includes: +The `NrAuditEvent` is created to record some important types of configuration changes you and your users make in your New Relic organization. Data gathered includes the type of account change, what actor made the change, a human-readable description of the action taken, and a timestamp for the change. 
Reported information includes: * Users added or deleted * User permission changes @@ -146,6 +146,19 @@ Note that the query builder in the UI can only query one account at a time. If y SINCE 1 week ago LIMIT MAX ``` + + + + + The `targetType` attribute describes the object that changed, for example, account, role, user, alert conditions or notifications, logs, etc. + To generate a list of `targetType` values for your account, run the query below. Note that this query will only show `targetTypes` that have been touched. + + SELECT uniques(targetType) + FROM NrAuditEvent + SINCE 90 days ago + ### Changes made by specific users [#examples-who] From fa82bca3ae82b9b0fa337c5c4c1f3b9b4d2e1868 Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Mon, 28 Aug 2023 12:03:57 -0700 Subject: [PATCH 02/24] fix(accounts): Add sql formatting and reformat some queries --- .../query-account-audit-logs-nrauditevent.mdx | 74 ++++++++++++------- 1 file changed, 47 insertions(+), 27 deletions(-) diff --git a/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx b/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx index 547320282f5..91fa30338f4 100644 --- a/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx +++ b/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx @@ -57,8 +57,10 @@ Note that the query builder in the UI can only query one account at a time. If y > To view all changes to your New Relic account for a specific time frame, run this basic NRQL query: - ``` - SELECT * from NrAuditEvent SINCE 1 day ago + ```sql + SELECT * + FROM NrAuditEvent + SINCE 1 day ago ``` @@ -68,9 +70,11 @@ Note that the query builder in the UI can only query one account at a time. 
If y > To query what type of change to the account users was made the most frequently during a specific time frame, include the [`actionIdentifier` attribute](#actorIdentifier) in your query. For example: - ``` - SELECT count(*) AS Actions FROM NrAuditEvent - FACET actionIdentifier SINCE 1 week ago + ```sql + SELECT count(*) AS Actions + FROM NrAuditEvent + FACET actionIdentifier + SINCE 1 week ago ``` @@ -80,8 +84,11 @@ Note that the query builder in the UI can only query one account at a time. If y > To query for information about created accounts and who created them, you can use something like: - ``` - SELECT actorEmail, actorId, targetId FROM NrAuditEvent WHERE actionIdentifier = 'account.create' SINCE 1 month ago + ```sql + SELECT actorEmail, actorId, targetId + FROM NrAuditEvent + WHERE actionIdentifier = 'account.create' + SINCE 1 month ago ``` @@ -91,8 +98,10 @@ Note that the query builder in the UI can only query one account at a time. If y > When you include `TIMESERIES` in a NRQL query, the results are shown as a line graph. For example: - ``` - SELECT count(*) from NrAuditEvent TIMESERIES facet actionIdentifier since 1 week ago + ```sql + SELECT count(*) + FROM NrAuditEvent + TIMESERIES facet actionIdentifier since 1 week ago ``` @@ -104,17 +113,20 @@ Note that the query builder in the UI can only query one account at a time. 
If y To see all the changes made to users, you could use: - ``` - SELECT * FROM NrAuditEvent WHERE targetType = 'user' - SINCE this month + ```sql + SELECT * + FROM NrAuditEvent + WHERE targetType = 'user' + SINCE this month ``` If you wanted to narrow that down to see changes to [user type](/docs/accounts/accounts-billing/new-relic-one-user-management/user-type), you could use: - ``` - SELECT * FROM NrAuditEvent WHERE targetType = 'user' + ```sql + SELECT * FROM NrAuditEvent + WHERE targetType = 'user' AND actionIdentifier IN ('user.self_upgrade', 'user.change_type') - SINCE this month + SINCE this month ``` @@ -124,7 +136,7 @@ Note that the query builder in the UI can only query one account at a time. If y > To query updates for your synthetic monitors during a specific time frame, include the [`actionIdentifier`](/attribute-dictionary/nrauditevent/actionidentifier) attribute in your query. For example: - ``` + ```sql SELECT count(*) FROM NrAuditEvent WHERE actionIdentifier = 'synthetics_monitor.update_script' FACET actionIdentifier, description, actorEmail @@ -140,9 +152,10 @@ Note that the query builder in the UI can only query one account at a time. If y > To query what configuration changes were made to any workload, use the query below. The `targetId` attribute contains the GUID of the workload that was modified, which you can use for searches. Since changes on workloads are often automated, you might want to include the `actorType` attribute to know if the change was done directly by a user through the UI or through the API. - ``` + ```sql SELECT timestamp, actorEmail, actorType, description, targetId - FROM NrAuditEvent WHERE targetType = 'workload' + FROM NrAuditEvent + WHERE targetType = 'workload' SINCE 1 week ago LIMIT MAX ``` @@ -155,9 +168,11 @@ Note that the query builder in the UI can only query one account at a time. 
If y The `targetType` attribute describes the object that changed, for example, account, role, user, alert conditions or notifications, logs, etc. To generate a list of `targetType` values for your account, run the query below. Note that this query will only show `targetTypes` that have been touched. + ```sql SELECT uniques(targetType) FROM NrAuditEvent SINCE 90 days ago + ``` @@ -170,9 +185,10 @@ Note that the query builder in the UI can only query one account at a time. If y > To see detailed information about any user who made changes to the account during a specific time frame, include [`actorType = 'user'`](#actorType) in the query. For example: - ``` + ```sql SELECT actionIdentifier, description, actorEmail, actorId, targetType, targetId - FROM NrAuditEvent WHERE actorType = 'user' + FROM NrAuditEvent + WHERE actorType = 'user' SINCE 1 week ago ``` @@ -183,8 +199,9 @@ Note that the query builder in the UI can only query one account at a time. If y > To query account activities made by a specific person during the selected time frame, you must know their [`actorId`](#actorId). For example: - ``` - SELECT actionIdentifier FROM NrAuditEvent + ```sql + SELECT actionIdentifier + FROM NrAuditEvent WHERE actorId = 829034 SINCE 1 week ago ``` @@ -195,8 +212,9 @@ Note that the query builder in the UI can only query one account at a time. If y > To identify who ([`actorType`](#actorType)) has made the most changes to the account, include the [`actorEmail` attribute](#actorEmail) in your query. For example: - ``` - SELECT count(*) as Users FROM NrAuditEvent + ```sql + SELECT count(*) as Users + FROM NrAuditEvent WHERE actorType = 'user' FACET actorEmail SINCE 1 week ago ``` @@ -208,7 +226,7 @@ Note that the query builder in the UI can only query one account at a time. 
If y > To query updates from your synthetic monitors made by a specific user, include the [`actionIdentifier`](/attribute-dictionary/nrauditevent/actionidentifier) and [`actorEmail`](/attribute-dictionary/nrauditevent/actoremail) attribute in your query. For example: - ``` + ```sql SELECT count(*) FROM NrAuditEvent WHERE actionIdentifier = 'synthetics_monitor.update_script' FACET actorEmail, actionIdentifier, description @@ -226,9 +244,11 @@ Note that the query builder in the UI can only query one account at a time. If y > To see detailed information about changes to the account that were made using an API key during a specific time frame, include [`actorType = 'api_key'`](#actorType) in the query. For example: - ``` + ```sql SELECT actionIdentifier, description, targetType, targetId, actorAPIKey, actorId, actorEmail - FROM NrAuditEvent WHERE actorType = 'api_key' SINCE 1 week ago + FROM NrAuditEvent + WHERE actorType = 'api_key' + SINCE 1 week ago ``` From 3c6b3e4223afa0ff1421a8644a9a46be23257000 Mon Sep 17 00:00:00 2001 From: Reese Lee Date: Fri, 1 Sep 2023 15:40:54 -0700 Subject: [PATCH 03/24] Update changes-since-v3.mdx * Modifies language and formatting for increased clarity * Fixes some grammar, capitalization, and punctuation issues * Adds hyperlink to the updated `nri-bundle` chart so readers can more conveniently access it --- .../changes-since-v3.mdx | 57 +++++++++---------- 1 file changed, 26 insertions(+), 31 deletions(-) diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx index 046feae107b..26c0a4b150f 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx @@ -9,9 +9,9 @@ redirects: - 
/docs/kubernetes-pixie/kubernetes-integration/get-started/changes-since-v3 --- -From version 3 onwards, the Kubernetes solution of New Relic features an [architecture](/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-components/#architecture) which aims to be more modular and configurable, giving you more power to choose how the solution is deployed and making it compatible with more environments. +As of version 3, the New Relic Kubernetes integration features an [architecture](/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-components/#architecture) that aims to be more modular and configurable, giving you more power to choose how it is deployed and making it compatible with more environments. -Data reported by the Kubernetes Integration version 3 hasn't changed since version 2. For version 3, we focused on configurability, stability, and user experience. +Data reported by the Kubernetes integration version 3 hasn't changed since version 2. For version 3, we focused on configurability, stability, and user experience. The Kubernetes integration version 3 (`appVersion`) is included on the `nri-bundle` chart `version` 4. @@ -19,12 +19,12 @@ Data reported by the Kubernetes Integration version 3 hasn't changed since versi ## Migration Guide [#migration-guide] -To make migration from earlier versions as easy as possible, we have developed a compatibility layer that translates most of the options that could be specified in the old newrelic-infrastructure chart to their new counterparts. This compatibility layer is temporary and will be removed in the future. Therefore, we encourage you to read this guide carefully and migrate the configuration with human supervision. +To make migrating from earlier versions as easy as possible, we have developed a compatibility layer that translates most of the configurable options in the old `newrelic-infrastructure` chart to their new counterparts. 
This compatibility layer is temporary and will be removed in the future. We encourage you to read this guide carefully and migrate the configuration with human supervision. You can read more about the updated `newrelic-infrastructure` chart [here](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure#newrelic-infrastructure). ### Kube State Metrics (KSM) configuration [#ksm-config] - KSM monitoring works out of the box for most configurations, most users will not need to change this config. + KSM monitoring works out of the box for most configurations; most users will not need to change this config. * `disableKubeStateMetrics` has been replaced by `ksm.enabled`. The default is still the same (KSM scraping enabled). @@ -50,7 +50,7 @@ ksm: ### Control plane configuration [#controlplane-configuration] -Control plane configuration has changed substantially. If you previously had control plane monitoring enabled, we encourage you to take a look at the [Configure control plane monitoring](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring) dedicated page. +Control plane configuration has changed substantially. If you previously enabled control plane monitoring, we encourage you to take a look at our [Configure control plane monitoring](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring) documentation. The following options have been replaced by more comprehensive configuration, covered in the section linked above: @@ -61,15 +61,15 @@ The following options have been replaced by more comprehensive configuration, co ### Agent configuration [#agent-configuration] -Agent config file, previously specified in `config` has been moved to `common.agentConfig`. 
Format of the file has not changed, and the full range of options that can be configured can be found [here](/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/). +The agent config file, previously specified in `config`, has been moved to `common.agentConfig`. The format of the file has not changed, and the full range of options that can be configured can be found [here](/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/). -The following agent options were previously "aliased" in the root of the `values.yml` file, and are no longer available: +The following agent options were previously "aliased" in the root of the `values.yml` file, and are **no longer available**: * `logFile` has been replaced by `common.agentConfig.log_file`. * `eventQueueDepth` has been replaced by `common.agentConfig.event_queue_depth`. -* `customAttributes` has changed in format to a yaml object. The previous format, a manually json-encoded string e.g. `{"team": "devops"}` is deprecated. -* Previously, `customAttributes` had a default `clusterName` entry which might have unwanted consequences if removed. This is no longer the case, users may now safely override `customAttributes` on its entirety. -* `discoveryCacheTTL` has been completely removed, as the discovery is now performed using kubernetes informers which have a built-in cache. +* `customAttributes` has changed in format to a yaml object. The previous format, a manually JSON-encoded string e.g. `{"team": "devops"}`, is deprecated. +* Previously, `customAttributes` had a default `clusterName` entry that might have unwanted consequences if removed. This is no longer the case; users may now safely override `customAttributes` in its entirety. +* `discoveryCacheTTL` has been completely removed, as the discovery is now performed using Kubernetes informers, which have a built-in cache. 
### Integrations configuration [#integrations-configuration] @@ -92,7 +92,7 @@ integrations: integrations: # ... ``` -Moreover, now the `--port` and `--tls` flags are mandatory on the discovery command. In the past, the following would work: +Moreover, the `--port` and `--tls` flags are now mandatory in the discovery command. In the past, the following would work: ```yaml integrations: @@ -112,36 +112,31 @@ integrations: exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250 ``` -This change is required because in v2 and below, the `nrk8s-kubelet` component (or its equivalent) ran with `hostNetwork: true`, so `nri-discovery-kubernetes` could connect to the kubelet using `localhost` and plain http. For security reasons, this is no longer the case, hence the need to specify both flags from now on. +This change is required because in v2 and below, the `nrk8s-kubelet` component (or its equivalent) ran with `hostNetwork: true`, so `nri-discovery-kubernetes` could connect to the kubelet using `localhost` and plain http. For security reasons, this is no longer the case; hence, the need to specify both flags from now on. -For more details on how to configure on-host integrations in Kubernetes please check the [Monitor services in Kubernetes](/docs/kubernetes-pixie/kubernetes-integration/link-apps-services/monitor-services-running-kubernetes) page. +For more details on how to configure on-host integrations in Kubernetes, please check our [Monitor services in Kubernetes](/docs/kubernetes-pixie/kubernetes-integration/link-apps-services/monitor-services-running-kubernetes) documentation. 
### Miscellaneous chart values [#misc-chart-values] -While not related to the integration configuration, the following miscellaneous options for the helm chart have also changed: +While not related to the integration configuration, the following miscellaneous options for the Helm chart have also changed: * `runAsUser` has been replaced by `securityContext`, which is templated directly into the pods and more configurable. * `resources` has been removed, as now we deploy three different workloads. Resources for each one can be configured individually under: -* `ksm.resources` -* `kubelet.resources` -* `controlPlane.resources` -* Similarly, `tolerations` has been split into three and the previous one is no longer valid: -* `ksm.tolerations` -* `kubelet.tolerations` -* `controlPlane.tolerations` - - -* All three default to tolerate any value for `NoSchedule` and `NoExecute` - - + * `ksm.resources` + * `kubelet.resources` + * `controlPlane.resources` +* `tolerations` has been split into three and the previous one is no longer valid. All three default to tolerate any value for `NoSchedule` and `NoExecute`: + * `ksm.tolerations` + * `kubelet.tolerations` + * `controlPlane.tolerations` * `image` and all its subkeys have been replaced by individual sections for each of the three images that are now deployed: -* `images.forwarder.*` to configure the infrastructure-agent forwarder. -* `images.agent.*` to configure the image bundling the infrastructure-agent and on-host integrations. -* `images.integration.*` to configure the image in charge of scraping k8s data. + * `images.forwarder.*` to configure the infrastructure-agent forwarder. + * `images.agent.*` to configure the image bundling the infrastructure-agent and on-host integrations. + * `images.integration.*` to configure the image in charge of scraping k8s data. 
### Upgrade from v2 [#upgrade-from-v2] -In order to upgrade from the Kubernetes integration version 2 (included in [nri-bundle chart](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) versions 3.x), we strongly encourage you to create a `values-newrelic.yaml` file with your desired and configuration. If you had previously installed our chart from the CLI directly, for example using a command like the following: +In order to upgrade the Kubernetes integration from version 2 (included in [nri-bundle chart](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) versions 3.x), we strongly encourage you to create a `values-newrelic.yaml` file with your desired and configuration. If you had previously installed our chart from the CLI directly, for example using a command like the following: ```shell helm install newrelic/nri-bundle \ @@ -176,7 +171,7 @@ logging: enabled: true ``` -After doing this, and adapting any other setting you might have changed according to the [section above](#migration-guide), you can upgrade by running the following command: +After doing this, and adapting any other setting you might have changed according to the [migration guide above](#migration-guide), you can upgrade your `nri-bundle` by running the following command: ```shell helm upgrade newrelic newrelic/nri-bundle \ From 7797799b65a6a79d070814e2e480d670151813f0 Mon Sep 17 00:00:00 2001 From: Reese Lee Date: Fri, 1 Sep 2023 15:44:38 -0700 Subject: [PATCH 04/24] Update changes-since-v3.mdx * Adds hyperlink to the release notes page for the K8s integration --- .../advanced-configuration/changes-since-v3.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx index 26c0a4b150f..c9696113607 100644 --- 
a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx
+++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx
@@ -11,7 +11,7 @@ redirects:
As of version 3, the New Relic Kubernetes integration features an [architecture](/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-components/#architecture) that aims to be more modular and configurable, giving you more power to choose how it is deployed and making it compatible with more environments.
-Data reported by the Kubernetes integration version 3 hasn't changed since version 2. For version 3, we focused on configurability, stability, and user experience.
+Data reported by the Kubernetes integration version 3 hasn't changed since version 2. For version 3, we focused on configurability, stability, and user experience. See the latest release notes for the integration [here](/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/).
The Kubernetes integration version 3 (`appVersion`) is included on the `nri-bundle` chart `version` 4.
From 1866704dded1a18bdcb3e9dc2e32715d92c743b5 Mon Sep 17 00:00:00 2001 From: Reese Lee Date: Fri, 1 Sep 2023 16:23:43 -0700 Subject: [PATCH 05/24] Update link-otel-applications-kubernetes.mdx * Updates a hyperlink to the correct doc (for OTLP endpoints) * Modifies some language, punctuation, and formatting for improved clarity * Adds hyperlink to a related blog post for additional info --- .../link-otel-applications-kubernetes.mdx | 22 ++++++++++--------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes.mdx index c0b8fc570e6..a8662f9d72e 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes.mdx @@ -23,10 +23,10 @@ The steps in this guide enable your application to inject infrastructure-specifi ## Prerequisites [#prereqs] -To be successful with the steps below, you should already be familiar with OpenTelemetry and Kubernetes and have done the following: +To be successful with the steps below, you should already be familiar with OpenTelemetry and Kubernetes, and have done the following: * Created the following environment variables: - * `OTEL_EXPORTER_OTLP_ENDPOINT` ([New Relic endpoint for your region or purpose](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/#review-settings)) + * `OTEL_EXPORTER_OTLP_ENDPOINT` ([New Relic endpoint for your region or purpose](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/get-started/opentelemetry-set-up-your-app/#review-settings)) * `NEW_RELIC_API_KEY` () * Installed the [New Relic Kubernetes 
integration](/docs/kubernetes-pixie/kubernetes-integration/installation/kubernetes-integration-install-configure) in your cluster * Instrumented your applications with [OpenTelemetry](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/), and successfully sent data to New Relic via OpenTelemetry Protocol (OTLP) @@ -38,7 +38,7 @@ If you have general questions about using collectors with New Relic, see our [In To set this up, you need to add a custom snippet to the `env` stanza of your Kubernetes YAML file. We have an example below that shows the snippet for a sample frontend microservice (`Frontend.yaml`). The snippet includes two sections that do the following: * **Section 1:** Ensure that the telemetry data is sent to the collector. This sets the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` with the host IP. It does this by calling the downward API to pull the host IP. - * **Section 2:** Attach infrastructure-specific metadata. To do this, we capture `metadata.uid` using the downward API and add it to the OTEL_RESOURCE_ATTRIBUTES environment variable. This environment variable is used by the OpenTelemetry Collector’s `resourcedetection` and `k8sattributes` processors to add additional infrastructure-specific context to telemetry data. + * **Section 2:** Attach infrastructure-specific metadata. To do this, we capture `metadata.uid` using the downward API and add it to the `OTEL_RESOURCE_ATTRIBUTES` environment variable. This environment variable is used by the OpenTelemetry Collector’s `resourcedetection` and `k8sattributes` processors to add additional infrastructure-specific context to telemetry data. 
For each microservice instrumented with OpenTelemetry, add the highlighted lines below to your manifest’s `env` stanza: @@ -77,9 +77,9 @@ fieldPath: metadata.uid ## Configure and deploy the OpenTelemetry Collector as an agent [#agent] -We recommend you deploy the [collector as an agent](https://opentelemetry.io/docs/collector/getting-started/#agent) on every node within a Kubernetes cluster. The agent can receive telemetry data as well as enrich telemetry data with metadata. For example, the collector can add custom attributes or infrastructure information through processors as well as handle batching, retry, compression and other more advanced features that are handled less efficiently at the client instrumentation level. +We recommend you deploy the [collector as an agent](https://opentelemetry.io/docs/collector/getting-started/#agent) on every node within a Kubernetes cluster. The agent can receive telemetry data, and enrich telemetry data with metadata. For example, the collector can add custom attributes or infrastructure information through processors, as well as handle batching, retry, compression and additional advanced features that are handled less efficiently at the client instrumentation level. -For help configuring the collector, see the sample collector configuration file below, along with sections about setting up these options: +For help configuring the collector, see the sample collector configuration file below, along with the sections about setting up these options: * [OTLP exporter](#otlp-exporter) * [batch processor](#batch) @@ -147,7 +147,7 @@ service: ### Step 1: Configure the OTLP exporter [#otlp-exporter] -First, configure it by adding an OTLP exporter to your [OpenTelemetry Collector configuration YAML file](https://opentelemetry.io/docs/collector/configuration/) along with your New Relic as a header. 
+First, add an OTLP exporter to your [OpenTelemetry Collector configuration YAML file](https://opentelemetry.io/docs/collector/configuration/) along with your New Relic as a header. ```yaml exporters: @@ -158,7 +158,7 @@ exporters: ### Step 2: Configure the batch processor [#batch] -The batch processor accepts spans, metrics, or logs and places them into batches to make it easier to compress the data and reduce the number of outgoing requests from the collector. +The batch processor accepts spans, metrics, or logs, and places them into batches to make it easier to compress the data and reduce the number of outgoing requests from the collector. ``` processors: @@ -187,7 +187,7 @@ Detectors: [ gke, gce ] ### Step 4: Configure the Kubernetes Attributes processor (general) [#attributes-general] -When we run the `k8sattributes` processor as part of the OpenTelemetry Collector running as an agent, it detects IP addresses of pods sending telemetry data to the OpenTelemetry Collector agent, using them to extract pod metadata. Below is a basic Kubernetes manifest example with only a processors section. To deploy the OpenTelemetry Collector as a `DaemonSet`, read this [comprehensive manifest example](https://github.com/newrelic-forks/microservices-demo/tree/main/src/otel-collector-agent). +When we run the `k8sattributes` processor as part of the OpenTelemetry Collector running as an agent, it detects the IP addresses of pods sending telemetry data to the OpenTelemetry Collector agent, using them to extract pod metadata. Below is a basic Kubernetes manifest example with only a processors section. To deploy the OpenTelemetry Collector as a `DaemonSet`, read this [comprehensive manifest example](https://github.com/newrelic-forks/microservices-demo/tree/main/src/otel-collector-agent). ```yaml processors: @@ -212,7 +212,7 @@ processors: ### Step 5: Configure the Kubernetes Attributes processor (RBAC) [#rbac] -You need to add configurations for role based access control (RBAC). 
The `k8sattributes` processor needs `get`, `watch` and `list` permissions for pods and namespaces resources included in the configured filters. See this [example](https://github.com/newrelic-forks/microservices-demo/blob/main/otel-kubernetes-manifests/otel-collector-agent.yaml#L43-L69) of how to configure role based access control (RBAC) for `ClusterRole` to give a `ServiceAccount` the necessary permissions for all pods and namespaces in the cluster. +You need to add configurations for role-based access control (RBAC). The `k8sattributes` processor needs `get`, `watch`, and `list` permissions for pods and namespaces resources included in the configured filters. See this [example](https://github.com/newrelic-forks/microservices-demo/blob/main/otel-kubernetes-manifests/otel-collector-agent.yaml#L43-L69) of how to configure role-based access control (RBAC) for `ClusterRole` to give a `ServiceAccount` the necessary permissions for all pods and namespaces in the cluster. ### Step 6: Configure the Kubernetes Attributes processor (discovery filter) [#discovery-filter] @@ -253,4 +253,6 @@ Click to enlarge the image: ## What's next? [#next] -Now that you've connected your OpenTelemetry-instrumented apps with Kubernetes, check out our [best practices](/docs/integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts/) guide for tips to improve your use of OpenTelemetry and New Relic. +Now that you've connected your OpenTelemetry-instrumented apps with Kubernetes, check out our [best practices](/docs/integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts/) guide for tips to improve your use of OpenTelemetry and New Relic. + +You can also check out this blog post, [Correlate OpenTelemetry traces, metrics, and logs with Kubernetes performance data](https://newrelic.com/blog/how-to-relic/k8s-with-otel) for more information on the steps provided above. 
From 032e59ecc3be40c38ce85993e5c124c84f820a6b Mon Sep 17 00:00:00 2001 From: Reese Lee Date: Fri, 1 Sep 2023 16:34:33 -0700 Subject: [PATCH 06/24] Update data-governance.mdx Modifies some language, punctuation, and formatting for improved clarity --- .../data-governance.mdx | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/data-governance.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/data-governance.mdx index 17b604d8552..7e5b5f81f42 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/data-governance.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/data-governance.mdx @@ -9,7 +9,7 @@ metaDescription: How to manage your data from the Kubernetes integration. ### Change the scrape interval [#scrape-interval] -The Kubernetes Integration v3 and above allows changing the interval at which metrics are gathered from the cluster. This allows choosing a tradeoff between data resolution and usage. We recommend choosing an interval between 15 and 30 seconds for optimal experience. +The [New Relic Kubernetes integration v3](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3/) and above allows changing the interval at which metrics are gathered from the cluster. This allows choosing a tradeoff between data resolution and usage. We recommend choosing an interval between 15 and 30 seconds for optimal experience. In order to change the scrape interval, add the following to your `values-newrelic.yaml`, under the `newrelic-infrastructure` section: @@ -26,11 +26,11 @@ global: licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_ cluster: _K8S_CLUSTER_NAME_ -# ... Other settings as shown above +# ... Other settings # Configuration for newrelic-infrastructure newrelic-infrastructure: - # ... Other settings as shown above + # ... 
Other settings common: config: interval: 25s @@ -42,7 +42,7 @@ newrelic-infrastructure: ### Filtering Namespaces [#filter-namespace] -The Kubernetes Integration v3 and above allows filtering which namespaces are scraped by labelling them. By default all namespaces are scraped. +The Kubernetes integration v3 and above allows filtering on which namespaces are scraped by labelling them. All namespaces are scraped by default. We use the `namespaceSelector` in the same way Kubernetes does. In order to include only namespaces matching a label, change the `namespaceSelector` by adding the following to your `values-newrelic.yaml`, under the `newrelic-infrastructure` section: @@ -61,11 +61,11 @@ global: licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_ cluster: _K8S_CLUSTER_NAME_ -# ... Other settings as shown above +# ... Other settings # Configuration for newrelic-infrastructure newrelic-infrastructure: - # ... Other settings as shown above + # ... Other settings common: config: namespaceSelector: @@ -88,18 +88,18 @@ common: The expressions under `matchExpressions` are concatenated. -In this example namespaces with the label `newrelic.com/scrape` set to `false` will be excluded: +In this example, namespaces with the label `newrelic.com/scrape` set to `false` will be excluded: ```yaml global: licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_ cluster: _K8S_CLUSTER_NAME_ -# ... Other settings as shown above +# ... Other settings # Configuration for newrelic-infrastructure newrelic-infrastructure: - # ... Other settings as shown above + # ... Other settings common: config: namespaceSelector: @@ -107,13 +107,13 @@ newrelic-infrastructure: - {key: newrelic.com/scrape, operator: NotIn, values: ["false"]} ``` -See a full list of the settings that can be modified in the [chart's README file](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure). 
+See a full list of settings that can be modified in the [chart's README file](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure). -#### How can I know which namespaces are excluded? [#excluded-namespaces] +#### How can I find out which namespaces are excluded? [#excluded-namespaces] All the namespaces within the cluster are listed thanks to the `K8sNamespace` sample. The `nrFiltered` attribute determines whether the data related to the namespace is going to be scraped. -Use this query to know which namespaces are being monitored: +Use this query to find out which namespaces are being monitored: ```sql FROM K8sNamespaceSample SELECT displayName, nrFiltered WHERE clusterName = SINCE 2 MINUTES AGO From 211049ab0ce012d394156bca355e7cb782c8af2b Mon Sep 17 00:00:00 2001 From: Reese Lee Date: Fri, 1 Sep 2023 16:44:13 -0700 Subject: [PATCH 07/24] Update pixie-data-security-overview.mdx * Adds hyperlink to the referenced Community Cloud with Pixie * Adds hyperlink to anomaly detection documentation --- .../auto-telemetry-pixie/pixie-data-security-overview.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/pixie-data-security-overview.mdx b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/pixie-data-security-overview.mdx index d1b49bdca57..dd8bfc9577e 100644 --- a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/pixie-data-security-overview.mdx +++ b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/pixie-data-security-overview.mdx @@ -11,7 +11,7 @@ redirects: - /docs/auto-telemetry-pixie/pixie-data-security-overview --- -Auto-telemetry with Pixie is our integration of Community Cloud for Pixie, a managed version of Pixie open source software. Auto-telemetry with Pixie therefore benefits from Pixie's approach to keeping data secure. The data that Pixie collects is stored entirely within your Kubernetes cluster. 
This data does not persist outside of your environment, and will never be stored by Community Cloud for Pixie. This means that your sensitive data remains within your environment and control.
+Auto-telemetry with Pixie is our integration of [Community Cloud for Pixie](https://docs.px.dev/installing-pixie/install-guides/community-cloud-for-pixie/), a managed version of Pixie open source software. Auto-telemetry with Pixie therefore benefits from Pixie's approach to keeping data secure. The data that Pixie collects is stored entirely within your Kubernetes cluster. This data does not persist outside of your environment, and will never be stored by Community Cloud for Pixie. This means that your sensitive data remains within your environment and control.

Community Cloud for Pixie makes queries directly to your Kubernetes cluster to access the data. In order for the query results to be shown in the Community Cloud for Pixie UI, CLI, and API, the data is sent to the client from your cluster using a reverse proxy.

@@ -20,7 +20,7 @@ Community Cloud for Pixie’s reverse proxy is designed to ensure:
* Data is ephemeral. It only passes through the Community Cloud for Pixie's cloud proxy in transit. This ensures data locality.
* Data is encrypted while in transit. Only you are able to read your data.

-New Relic fetches and stores data that related to an application's performance. With Auto-telemetry with Pixie, a predefined subset of data persists outside of your cluster. This data is stored in our database, in your selected region. This data persists in order to give you long-term storage, alerting, correlation with additional data, and the ability to use advanced New Relic platform capabilities, such as anomaly detection.
+New Relic fetches and stores data related to an application's performance. With Auto-telemetry with Pixie, a predefined subset of data persists outside of your cluster. This data is stored in our database, in your selected region. 
This data persists in order to give you long-term storage, alerting, correlation with additional data, and the ability to use advanced New Relic platform capabilities, such as [anomaly detection](/docs/alerts-applied-intelligence/applied-intelligence/anomaly-detection/anomaly-detection-applied-intelligence/). The persisted performance metrics include, but are not limited to: From cfd1a3d2150d0521f489aea41d036c195ff1f516 Mon Sep 17 00:00:00 2001 From: Reese Lee Date: Fri, 1 Sep 2023 16:50:47 -0700 Subject: [PATCH 08/24] Update manage-pixie-memory.mdx Adds hyperlink to Pixie documentation about `vizier-pem`, which is referenced but not linked. --- .../advanced-configuration/manage-pixie-memory.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx index 058414f3863..160b664dd97 100644 --- a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx +++ b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx @@ -20,10 +20,10 @@ You can configure the amount of memory Pixie uses. During the installation, use The primary focus of the [open source Pixie project](https://github.com/pixie-io/pixie) is to build a real-time debugging platform. Pixie [isn't intended to be a long-term durable storage solution](https://docs.px.dev/about-pixie/faq/#data-collection-how-much-data-does-pixie-store) and is best used in conjunction with New Relic. The New Relic integration queries Pixie every few minutes and persists a subset of Pixie's telemetry data in New Relic. -When you install the New Relic Pixie integration, a `vizier-pem` agent is deployed to each node in your cluster via a DaemonSet. 
The `vizier-pem` agents use memory for two main purposes: +When you install the New Relic Pixie integration, a [`vizier-pem` agent](https://docs.px.dev/reference/architecture/#vizier) is deployed to each node in your cluster via a DaemonSet. The `vizier-pem` agents use memory for two main purposes: -* **Collecting telemetry data**: tracing application traffic or CPU profiles, amongst other. Those values must be stored in memory somewhere, as they're processed. -* **Short-term storage of telemetry data**: to power troubleshooting via the [Live debugging with Pixie tab](/docs/kubernetes-pixie/auto-telemetry-pixie/understand-use-data/live-debugging-with-pixie); and as a temporary storage location for a subset of the telemetry data before it's stored in New Relic. +* **Collecting telemetry data**: tracing application traffic or CPU profiles, amongst others. Those values must be stored in memory somewhere, as they're processed. +* **Short-term storage of telemetry data**: to power troubleshooting via the [Live debugging with Pixie tab](/docs/kubernetes-pixie/auto-telemetry-pixie/understand-use-data/live-debugging-with-pixie), and as a temporary storage location for a subset of the telemetry data before it's stored in New Relic. By default, `vizier-pem` pods have a `2Gi` memory limit, and a `2Gi` memory request. They set aside 60% of their allocated memory for short-term data storage, leaving the other 40% for the data collection. 
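Since the memory limit is set at install time, the adjustment can also be expressed in a Helm values file rather than on the command line. The fragment below is a hypothetical sketch: the `pixie-chart.pemMemoryLimit` key name and chart layout are assumptions, so verify them against your installed chart's documented values before use.

```yaml
# Hypothetical values-newrelic.yaml fragment -- key names are assumptions,
# check your chart's README before applying.
global:
  licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_
  cluster: _K8S_CLUSTER_NAME_

pixie-chart:
  # Lower the per-node vizier-pem memory limit from the 2Gi default.
  # Pixie's docs suggest roughly 1Gi as a practical minimum.
  pemMemoryLimit: "1Gi"
```

With the 60/40 split described above, a 1Gi limit would leave about 614Mi for short-term storage and about 410Mi for data collection on each node.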
From a40945f1d9a76f92a2cf0bb3d626548a46876f3c Mon Sep 17 00:00:00 2001 From: Reese Lee Date: Fri, 1 Sep 2023 17:02:47 -0700 Subject: [PATCH 09/24] Update get-started-kubecost.mdx * Adds hyperlink to the Prometheus Remote Write documentation * Modifies some language, punctuation, and formatting for improved clarity --- .../kubernetes-pixie/kubecost/get-started-kubecost.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx b/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx index 18cffdd1eb5..adcab8c8744 100644 --- a/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx +++ b/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx @@ -19,7 +19,7 @@ Actionable insights: ## Get started -In order to get started with Kubecost and New Relic, you'll need to set up Prometheus Remote Write in New Relic. Then, you will need to install the Kubecost agent. +To get started, first set up [Prometheus Remote Write](/docs/infrastructure/prometheus-integrations/install-configure-remote-write/set-your-prometheus-remote-write-integration/) in New Relic, then install the Kubecost agent. ### Set up Prometheus Remote Write @@ -31,7 +31,7 @@ Go to the [Prometheus remote write setup launcher in the UI](https://one.newreli ### Install the Kubecost agent to your cluster -Now, we are going to install the Kubecost agent via Helm. +Next, install the Kubecost agent via Helm. 1. Download the template YAML file for the Kubecost agent installation. Save it to `kubecost-values.yaml`. @@ -1175,7 +1175,7 @@ extraObjects: [] 2. Open `kubecost-values.yaml` in an editor of your choice. 3. Go to line 671, and update `YOUR_URL_HERE` to contain the value of the URL you generated for the Prometheus Remote Write integration in the earlier step. It should look something like `https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=kubecost`. 4. 
Go to line 672, and update `YOUR_BEARER_TOKEN_HERE` to contain the value of the bearer token you generated for the Prometheus remote write integration in the earlier step.
-5. Run the Helm command to add the Kubecost agent to your cluster. It should start sending data to New Relic.
+5. Run the Helm command below to add the Kubecost agent to your cluster and start sending data to New Relic:

```shell
helm upgrade --install kubecost \
@@ -1184,8 +1184,8 @@ helm upgrade --install kubecost \
  --values kubecost-values.yaml
```

-6. Wait a few minutes. In the previous tab setting up Remote Write, click the "See your data" button to see whether data has been received.
-7. Query your data.
+6. Wait a few minutes. In the previous tab where you set up Remote Write, click the "See your data" button to see whether data has been received.
+7. Query your data:

```sql
SELECT sum(`Total Cost($)`) AS 'Total Monthly Cost' FROM (FROM Metric SELECT (SELECT sum(`total_node_cost`) FROM (FROM Metric SELECT (average(kube_node_status_capacity_cpu_cores) * average(node_cpu_hourly_cost) * 730 + average(node_gpu_hourly_cost) * 730 + average(kube_node_status_capacity_memory_bytes) / 1024 / 1024 / 1024 * average(node_ram_hourly_cost) * 730) AS 'total_node_cost' FACET node)) + (SELECT (sum(acflb) / 1024 / 1024 / 1024 * 0.04) AS 'Container Cost($)' FROM (SELECT (average(container_fs_limit_bytes) * cardinality(container_fs_limit_bytes)) AS 'acflb' FROM Metric WHERE (NOT ((device = 'tmpfs')) AND (id = '/')))) + (SELECT sum(aphc * 730 * akpcb / 1024 / 1024 / 1024) AS 'Total Persistent Volume Cost($)' FROM (FROM Metric SELECT average(pv_hourly_cost) AS 'aphc', average(kube_persistentvolume_capacity_bytes) AS 'akpcb' FACET persistentvolume, instance)) AS 'Total Cost($)')
```

From c8db537f6c9235c626d9d3a20aed8cc79e7167d6 Mon Sep 17 00:00:00 2001
From: nilventosa
Date: Tue, 5 Sep 2023 14:22:46 +0200
Subject: [PATCH 10/24] fix: update the query for suggested success sli

---
.../service-level-management/create-slm.mdx | 40 +++++++++---------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/src/content/docs/service-level-management/create-slm.mdx b/src/content/docs/service-level-management/create-slm.mdx index 4a25d5baa3f..b11cdb0e802 100644 --- a/src/content/docs/service-level-management/create-slm.mdx +++ b/src/content/docs/service-level-management/create-slm.mdx @@ -26,7 +26,7 @@ You can create SLIs and SLOs manually through the [New Relic UI](https://one.new ## Requirements and limitations [#requirements] -To create and manage service levels requires the following: +To create and manage service levels requires the following: * You must be a [full platform user](/docs/accounts/accounts-billing/new-relic-one-user-management/user-type). * You must have the [capability for modifying and deleting events-to-metrics](/docs/accounts/accounts-billing/new-relic-one-user-management/user-permissions#insights). @@ -59,7 +59,7 @@ First of all, identify a "system boundary." This is a part of your system your u Once you have established these top-level service levels, you might find that not all the endpoints of your service behave in the same way, and might want to split it further. 
For example: * Login transactions might need a higher SLO on errors than a browsing one -* Duration of some operations is much higher than the rest +* Duration of some operations is much higher than the rest For example, at a high level, a key user experience at New Relic could be: *a customer sends us telemetry data and that data is later available to be queried in our product API or UI.* @@ -69,7 +69,7 @@ For that user experience, we could create an SLO like: |--------------|--------|----------|---------------------------------------------------------------------| | last 28 days | 99.9% | latency | data ingested by a user is available to query in less than 1 minute | -Note, these kinds of user experiences typically involve more than one service and are spread across multiple team and org boundaries. +Note, these kinds of user experiences typically involve more than one service and are spread across multiple team and org boundaries. Increasing the granularity of underlying user experiences, another key user experience at New Relic could be: *a customer can use a custom dashboard to visualize their telemetry data.* @@ -106,7 +106,7 @@ Request-based SLOs are based on an SLI defined as the ratio of the number of goo ## Suggested SLIs [#suggested-sli] -In this section you’ll find some SLIs that are typically used to measure the performance of services and browser applications. +In this section you’ll find some SLIs that are typically used to measure the performance of services and browser applications. ### SLIs for APM services and key transactions instrumented with the New Relic agent [#sli-apm] @@ -134,7 +134,7 @@ Based on `Transaction` events, these SLIs are the most common for request-driven ```sql FROM: TransactionError - WHERE: entityGuid = '{entityGuid}' AND error.expected IS FALSE + WHERE: entityGuid = '{entityGuid}' AND error.expected != true ``` Where `{entityGuid}` is the service's GUID. 
@@ -378,7 +378,7 @@ The following SLIs are based on Google's Browser Core Web Vitals. To determine a realistic number to select for `{cumulativeLayoutShift}` in your environment, one typical practice is to select the 75th percentile of page loads for the last 7 or 15 days, segmented across mobile and desktop devices. Find it by using the query builder: - ```sql + ```sql SELECT percentile(cumulativeLayoutShift, 95) FROM PageViewTiming WHERE entityGuid = '{entityGuid}' since 7 days ago limit max facet deviceType ``` @@ -399,7 +399,7 @@ The following SLIs are based on Google's Browser Core Web Vitals. ```sql FROM: SyntheticCheck - WHERE: entity.guid = '{entityGuid}' + WHERE: entity.guid = '{entityGuid}' ``` Where `{entityGuid}` is the synthetic check's GUID. @@ -408,7 +408,7 @@ The following SLIs are based on Google's Browser Core Web Vitals. ```sql FROM: SyntheticCheck - WHERE: entity.guid = '{entityGuid}' AND result='SUCCESS' + WHERE: entity.guid = '{entityGuid}' AND result='SUCCESS' ``` Where `{entityGuid}` is the synthetic check's GUID. @@ -421,9 +421,9 @@ You can create SLIs and SLOs from several places on [in our UI](https://one.newr * Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Service levels**. You can associate the SLI with any entity across your accounts, including workloads. * From the **Service levels** page in any Service, key transactions, Browser application, or Synthetic monitor. The SLI will be associated with that specific entity. If you use this starting point, New Relic will automatically create the most common service level indicators for this entity type, based on the latest available data. -* From the **Service levels** tab in any workload. You can associate the SLI with any entity in the workload, or the whole workload. - -Data doesn't appear right away after creating an SLI. Expect a few minutes delay before seeing the first SLI attainment results. The data has 13 month retention by default. 
+* From the **Service levels** tab in any workload. You can associate the SLI with any entity in the workload, or the whole workload. + +Data doesn't appear right away after creating an SLI. Expect a few minutes delay before seeing the first SLI attainment results. The data has 13 month retention by default. Remember that service levels can only be associated with a single account. For details on that, see [the requirements](#requirements). @@ -469,14 +469,14 @@ To create service levels, follow these steps: The account where the data is gathered from matches the account of the entity that the SLI refers to. Please see the section above to know what goes into each field. On the right you'll see the final queries, and at the bottom you'll get a preview of the number of valid and good/bad events in the last days. - + Here’s an example of the percentage-based success rate for a dimensional metric, let’s convert it into the valid/good events for SLI: ```sql FROM Metric - SELECT percentage(sum(scrooge_do_expire_count), + SELECT percentage(sum(scrooge_do_expire_count), WHERE status = 'success') AS 'Success Rate' - WHERE env='production' + WHERE env='production' AND status != 'attempt' ``` @@ -587,7 +587,7 @@ You can also use wildcards in your SLI queries, here's an example: ### Edit SLIs [#edit-sli] -Once you've created an SLI, you can edit it through the service levels list page, by clicking on the **...** menu and then `Edit`, as shown here: +Once you've created an SLI, you can edit it through the service levels list page, by clicking on the **...** menu and then `Edit`, as shown here: -or you can do that same thing through the summary page, by clicking `Edit`: +or you can do that same thing through the summary page, by clicking `Edit`: + - Edit SLIs summary page - +/> + ## Optimize your SLM [#optimize] -For information on how to optimize your SLM implementation, see our [Observability maturity SLM 
guide](/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/optimize-slm-guide). +For information on how to optimize your SLM implementation, see our [Observability maturity SLM guide](/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/optimize-slm-guide). From e9857db7db9898d6999217ec9511bdac8098ccdd Mon Sep 17 00:00:00 2001 From: nilventosa Date: Tue, 5 Sep 2023 14:26:26 +0200 Subject: [PATCH 11/24] Revert "fix: update the query for suggested success sli" This reverts commit c8db537f6c9235c626d9d3a20aed8cc79e7167d6. --- .../service-level-management/create-slm.mdx | 40 +++++++++---------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/src/content/docs/service-level-management/create-slm.mdx b/src/content/docs/service-level-management/create-slm.mdx index b11cdb0e802..4a25d5baa3f 100644 --- a/src/content/docs/service-level-management/create-slm.mdx +++ b/src/content/docs/service-level-management/create-slm.mdx @@ -26,7 +26,7 @@ You can create SLIs and SLOs manually through the [New Relic UI](https://one.new ## Requirements and limitations [#requirements] -To create and manage service levels requires the following: +To create and manage service levels requires the following: * You must be a [full platform user](/docs/accounts/accounts-billing/new-relic-one-user-management/user-type). * You must have the [capability for modifying and deleting events-to-metrics](/docs/accounts/accounts-billing/new-relic-one-user-management/user-permissions#insights). @@ -59,7 +59,7 @@ First of all, identify a "system boundary." This is a part of your system your u Once you have established these top-level service levels, you might find that not all the endpoints of your service behave in the same way, and might want to split it further. 
For example: * Login transactions might need a higher SLO on errors than a browsing one -* Duration of some operations is much higher than the rest +* Duration of some operations is much higher than the rest For example, at a high level, a key user experience at New Relic could be: *a customer sends us telemetry data and that data is later available to be queried in our product API or UI.* @@ -69,7 +69,7 @@ For that user experience, we could create an SLO like: |--------------|--------|----------|---------------------------------------------------------------------| | last 28 days | 99.9% | latency | data ingested by a user is available to query in less than 1 minute | -Note, these kinds of user experiences typically involve more than one service and are spread across multiple team and org boundaries. +Note, these kinds of user experiences typically involve more than one service and are spread across multiple team and org boundaries. Increasing the granularity of underlying user experiences, another key user experience at New Relic could be: *a customer can use a custom dashboard to visualize their telemetry data.* @@ -106,7 +106,7 @@ Request-based SLOs are based on an SLI defined as the ratio of the number of goo ## Suggested SLIs [#suggested-sli] -In this section you’ll find some SLIs that are typically used to measure the performance of services and browser applications. +In this section you’ll find some SLIs that are typically used to measure the performance of services and browser applications. ### SLIs for APM services and key transactions instrumented with the New Relic agent [#sli-apm] @@ -134,7 +134,7 @@ Based on `Transaction` events, these SLIs are the most common for request-driven ```sql FROM: TransactionError - WHERE: entityGuid = '{entityGuid}' AND error.expected != true + WHERE: entityGuid = '{entityGuid}' AND error.expected IS FALSE ``` Where `{entityGuid}` is the service's GUID. 
@@ -378,7 +378,7 @@ The following SLIs are based on Google's Browser Core Web Vitals. To determine a realistic number to select for `{cumulativeLayoutShift}` in your environment, one typical practice is to select the 75th percentile of page loads for the last 7 or 15 days, segmented across mobile and desktop devices. Find it by using the query builder: - ```sql + ```sql SELECT percentile(cumulativeLayoutShift, 95) FROM PageViewTiming WHERE entityGuid = '{entityGuid}' since 7 days ago limit max facet deviceType ``` @@ -399,7 +399,7 @@ The following SLIs are based on Google's Browser Core Web Vitals. ```sql FROM: SyntheticCheck - WHERE: entity.guid = '{entityGuid}' + WHERE: entity.guid = '{entityGuid}' ``` Where `{entityGuid}` is the synthetic check's GUID. @@ -408,7 +408,7 @@ The following SLIs are based on Google's Browser Core Web Vitals. ```sql FROM: SyntheticCheck - WHERE: entity.guid = '{entityGuid}' AND result='SUCCESS' + WHERE: entity.guid = '{entityGuid}' AND result='SUCCESS' ``` Where `{entityGuid}` is the synthetic check's GUID. @@ -421,9 +421,9 @@ You can create SLIs and SLOs from several places on [in our UI](https://one.newr * Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Service levels**. You can associate the SLI with any entity across your accounts, including workloads. * From the **Service levels** page in any Service, key transactions, Browser application, or Synthetic monitor. The SLI will be associated with that specific entity. If you use this starting point, New Relic will automatically create the most common service level indicators for this entity type, based on the latest available data. -* From the **Service levels** tab in any workload. You can associate the SLI with any entity in the workload, or the whole workload. - -Data doesn't appear right away after creating an SLI. Expect a few minutes delay before seeing the first SLI attainment results. The data has 13 month retention by default. 
+* From the **Service levels** tab in any workload. You can associate the SLI with any entity in the workload, or the whole workload. + +Data doesn't appear right away after creating an SLI. Expect a few minutes delay before seeing the first SLI attainment results. The data has 13 month retention by default. Remember that service levels can only be associated with a single account. For details on that, see [the requirements](#requirements). @@ -469,14 +469,14 @@ To create service levels, follow these steps: The account where the data is gathered from matches the account of the entity that the SLI refers to. Please see the section above to know what goes into each field. On the right you'll see the final queries, and at the bottom you'll get a preview of the number of valid and good/bad events in the last days. - + Here’s an example of the percentage-based success rate for a dimensional metric, let’s convert it into the valid/good events for SLI: ```sql FROM Metric - SELECT percentage(sum(scrooge_do_expire_count), + SELECT percentage(sum(scrooge_do_expire_count), WHERE status = 'success') AS 'Success Rate' - WHERE env='production' + WHERE env='production' AND status != 'attempt' ``` @@ -587,7 +587,7 @@ You can also use wildcards in your SLI queries, here's an example: ### Edit SLIs [#edit-sli] -Once you've created an SLI, you can edit it through the service levels list page, by clicking on the **...** menu and then `Edit`, as shown here: +Once you've created an SLI, you can edit it through the service levels list page, by clicking on the **...** menu and then `Edit`, as shown here: -or you can do that same thing through the summary page, by clicking `Edit`: - +or you can do that same thing through the summary page, by clicking `Edit`: + Edit SLIs summary page - +/> + ## Optimize your SLM [#optimize] -For information on how to optimize your SLM implementation, see our [Observability maturity SLM 
guide](/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/optimize-slm-guide). +For information on how to optimize your SLM implementation, see our [Observability maturity SLM guide](/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/optimize-slm-guide). From 87450260f284639bacfc403a95fbf29e5e6d680b Mon Sep 17 00:00:00 2001 From: nilventosa Date: Tue, 5 Sep 2023 14:29:35 +0200 Subject: [PATCH 12/24] fix: update the query for suggested success sli --- src/content/docs/service-level-management/create-slm.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/service-level-management/create-slm.mdx b/src/content/docs/service-level-management/create-slm.mdx index 4a25d5baa3f..1804c10e46c 100644 --- a/src/content/docs/service-level-management/create-slm.mdx +++ b/src/content/docs/service-level-management/create-slm.mdx @@ -134,7 +134,7 @@ Based on `Transaction` events, these SLIs are the most common for request-driven ```sql FROM: TransactionError - WHERE: entityGuid = '{entityGuid}' AND error.expected IS FALSE + WHERE: entityGuid = '{entityGuid}' AND error.expected != true ``` Where `{entityGuid}` is the service's GUID. 
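Before wiring the updated bad-events query into a service level, it can help to preview what attainment the `error.expected != true` filter would yield for a service. The query below is an illustrative sketch, not one of the documented SLI queries: it assumes both event types carry the `entityGuid` attribute, `'{entityGuid}'` is a placeholder, and the one-day window is an example.

```sql
FROM Transaction, TransactionError
SELECT 100 * (1 - filter(count(*), WHERE eventType() = 'TransactionError' AND error.expected != true)
  / filter(count(*), WHERE eventType() = 'Transaction')) AS 'Estimated attainment (%)'
WHERE entityGuid = '{entityGuid}'
SINCE 1 day ago
```

This mirrors the SLI arithmetic (good events = valid events minus unexpected errors, divided by valid events), so the result should roughly match the attainment the service level reports once created.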
From c4b07104d337494134799e48a84c310f3039fe33 Mon Sep 17 00:00:00 2001 From: David Affinito Date: Tue, 5 Sep 2023 08:09:20 -0700 Subject: [PATCH 13/24] feat(diag-cli): Diagnostics CLI 3.1.0 --- .../pass-command-line-options-nrdiag.mdx | 40 ++++++++ .../run-diagnostics-cli-nrdiag.mdx | 91 ++++++++++++++++++- .../diagnostics-cli-251.mdx | 1 - .../diagnostics-cli-261.mdx | 1 - .../diagnostics-cli-262.mdx | 1 - .../diagnostics-cli-310.mdx | 16 ++++ 6 files changed, 142 insertions(+), 8 deletions(-) create mode 100644 src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-310.mdx diff --git a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx index 2bbd9e6211b..06787eab0c4 100644 --- a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx +++ b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx @@ -287,6 +287,46 @@ To use the following command line options with the Diagnostics CLI: + + + `-list-scripts` + + + + List available scripts. + + + + + + `-script STRING` + + + + View the specified script. Use with -run to run the script. + + + + + + `-run` + + + + Use with -script to run the script. + + + + + + `-script-flags` + + + + Use with -run -script to pass command line flags to the script. 
+ + + `-v` diff --git a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx index a3e25de123e..3bace01f791 100644 --- a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx +++ b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx @@ -208,9 +208,90 @@ To run from PowerShell, add `./` to the start of `cmd`. * For ARM64 systems: ``` - nrdiag_arm64 -suites SUITE NAMES + nrdiag_arm64.exe -suites SUITE NAMES ``` +## Scripts [#scripts] + +Scripts provide an additional datasource for information that isn't collected by a task. The catalog of available scripts can be found in [the Diagnostic CLI's github repository](https://github.com/newrelic/newrelic-diagnostics-cli/tree/main/scriptcatalog). + +### Script output + +Script output is printed to the screen, along with being saved in a file based on the name of the script, ie: `name-of-script.out`. This is saved in the directory specified by `-output-path`, defaulting to the current directory. + +Scripts can also output files, either to the current working directory or the directory specified by `-output-path`. All output files are included in the results zip in the `ScriptOutput/` directory. + +### Script results + +The results of running a script can be found in the `nrdiag-output.json` file with the following schema: + +```json +"Script": { + "Name": "example", + "Description": "Example Description", + "Output": "example output", + "OutputFiles": [ + "/path/to/example.out", + "/path/to/another-file.out" + ], + "OutputTruncated": false +} +``` + +The `Output` field contains the stdout output. If it is over 20000 characters, it is truncated and the `OutputTruncated` field is set to `true`. 
Even if truncated, the full output is still available in the `ScriptOutput/` directory in the zip file. + +A list of files the script created can be found in the `OutputFiles` field. + +### List, view, and run a script [#list-view-run-script] + + + + To view a list of the scripts available to run, use `-list-scripts`: + ``` + ./nrdiag -list-scripts + ``` + + + To view a script without running it: + ``` + ./nrdiag -script SCRIPT_NAME + ``` + + + To run a script: + ``` + ./nrdiag -script SCRIPT_NAME -run + ``` + + + To run a script with arguments: + ``` + ./nrdiag -script SCRIPT_NAME -run -script-flags "-foo bar" + ``` + + + To run a script and suites at the same time: + ``` + ./nrdiag -script SCRIPT_NAME -run -s SUITE NAMES + ``` + + + ## Include additional files in the zip [#include-additional-files] If you have additional files that you would like to share with support, you can include them in the `nrdiag-output.zip` file using the `-include` command line flag. This can be used with a single file or a directory. If a directory is provided, all of its subdirectories are included. The total size limit of the files included is 4GB. @@ -237,7 +318,7 @@ To run from PowerShell, add `./` to the start of `cmd`. 
* For 32-bit systems: ``` - nrdiag -include Path\To\File -attach + nrdiag.exe -include Path\To\File -attach ``` * For 64-bit systems: @@ -306,7 +387,7 @@ Uploading your results to an account will automatically upload the contents of t OR ``` - nrdiag -api-key ${API_KEY} + nrdiag.exe -api-key ${API_KEY} ``` * For 64-bit systems: @@ -317,7 +398,7 @@ Uploading your results to an account will automatically upload the contents of t OR ``` - nrdiag_x64 -api-key ${API_KEY} + nrdiag_x64.exe -api-key ${API_KEY} ``` * For ARM64 systems: @@ -328,7 +409,7 @@ Uploading your results to an account will automatically upload the contents of t OR ``` - nrdiag_arm64 -api-key ${API_KEY} + nrdiag_arm64.exe -api-key ${API_KEY} ``` diff --git a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-251.mdx b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-251.mdx index 4070e1184df..966a2752dba 100644 --- a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-251.mdx +++ b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-251.mdx @@ -2,7 +2,6 @@ subject: Diagnostics CLI (nrdiag) releaseDate: '2023-05-24' version: 2.5.1 -downloadLink: 'https://download.newrelic.com/nrdiag/nrdiag_2.5.1.zip' --- ## Changes diff --git a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-261.mdx b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-261.mdx index b5097c23c2e..0f5a7220a2b 100644 --- a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-261.mdx +++ b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-261.mdx @@ -2,7 +2,6 @@ subject: Diagnostics CLI (nrdiag) releaseDate: '2023-07-11' version: 
2.6.1 -downloadLink: 'https://download.newrelic.com/nrdiag/nrdiag_2.6.1.zip' --- ## Changes diff --git a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-262.mdx b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-262.mdx index f5e73bf3751..e1705ff2712 100644 --- a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-262.mdx +++ b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-262.mdx @@ -2,7 +2,6 @@ subject: Diagnostics CLI (nrdiag) releaseDate: '2023-07-17' version: 2.6.2 -downloadLink: 'https://download.newrelic.com/nrdiag/nrdiag_2.6.2.zip' --- ## Changes diff --git a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-310.mdx b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-310.mdx new file mode 100644 index 00000000000..91f9a99bd85 --- /dev/null +++ b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-310.mdx @@ -0,0 +1,16 @@ +--- +subject: Diagnostics CLI (nrdiag) +releaseDate: '2023-09-05' +version: 3.1.0 +downloadLink: 'https://download.newrelic.com/nrdiag/nrdiag_3.1.0.zip' +--- + +## New Feature +- The CLI now supports running scripts to gather additional output that isn't currently collected by a task. For more information, please see the [Run the Diagnostics CLI documentation](https://docs.newrelic.com/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag/#scripts). ([#182](https://github.com/newrelic/newrelic-diagnostics-cli/pull/182), [#185](https://github.com/newrelic/newrelic-diagnostics-cli/pull/185)) + +## Task updates +- Updated Hotspot versions supported by the Java APM agent. 
([#183](https://github.com/newrelic/newrelic-diagnostics-cli/pull/183)) + +## Fixes +- Fixed an issue when using `-output-path` where the `nrdiag-filelist.txt` and `nrdiag-output.json` files were not included in the `nrdiag-output.zip`. ([#187](https://github.com/newrelic/newrelic-diagnostics-cli/pull/187)) +- Fixed an issue that prevented some logs from being included in the zip. ([#188](https://github.com/newrelic/newrelic-diagnostics-cli/pull/188)) From 11af82030c053445207cd7861b4244398207b353 Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Tue, 5 Sep 2023 09:05:55 -0700 Subject: [PATCH 14/24] fix(accounts): Remove latin --- .../query-account-audit-logs-nrauditevent.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx b/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx index 91fa30338f4..6838608e437 100644 --- a/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx +++ b/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx @@ -165,7 +165,7 @@ Note that the query builder in the UI can only query one account at a time. If y title="What target types are in my account?"> - The `targetType` attribute describes the object that changed, for example, account, role, user, alert conditions or notifications, logs, etc. + The `targetType` attribute describes the object that changed, such as account, role, user, alert conditions or notifications, and logs. To generate a list of `targetType` values for your account, run the query below. Note that this query will only show `targetTypes` that have been touched. 
```sql From 618c03e94b56cf8c0881d3a97cd56a1a45545d9c Mon Sep 17 00:00:00 2001 From: zackm Date: Tue, 5 Sep 2023 12:19:15 -0500 Subject: [PATCH 15/24] chore: update to reflect changes to agent --- .../advanced/advanced-config.mdx | 37 +------------------ 1 file changed, 1 insertion(+), 36 deletions(-) diff --git a/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx b/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx index 764d6e828db..e558d7fe0ef 100644 --- a/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx +++ b/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx @@ -134,7 +134,6 @@ devices: ext_only: true meraki_config: api_key: APIKEY123ABC - monitor_clients: true monitor_devices: true monitor_org_changes: true monitor_uplinks: true @@ -1289,34 +1288,6 @@ global: The [Meraki Dashboard API](https://developer.cisco.com/meraki/api-latest/) integration pulls various metrics related to the health of your Meraki environment. The combination of various configuration options allows you to set up different monitoring scenarios for your needs. - * `meraki_config.monitor_clients: true`: Uses the [Get Network Clients](https://developer.cisco.com/meraki/api-latest/get-network-clients/) endpoint to iterate through all target networks and return client data. - - - In large environments, this API call has known issues with timeouts against the Meraki Dashboard API, resulting in missing metrics. 
- - - NRQL to find network client telemetry: - - ```sql - FROM Metric SELECT - latest(status) AS 'Current Client Status', - max(kentik.meraki.clients.RecvTotal) AS 'Total Received Bytes', - max(kentik.meraki.clients.SentTotal) AS 'Total Sent Bytes' - FACET - network AS 'Network Name', - client_id AS 'Client ID', - client_mac_addr AS 'Client MAC', - description AS 'Client Description', - vlan AS 'Client VLAN', - user AS 'Client User', - manufacturer AS 'Client Manufacturer', - device_type AS 'Client Type', - recent_device_name AS 'Latest Device' - WHERE instrumentation.name = 'meraki.clients' - ``` - -
- * `meraki_config.monitor_devices: true && meraki_config.preferences.device_status_only: true`: Uses the [Get Organization Device Statuses](https://developer.cisco.com/meraki/api-latest/get-organization-devices-statuses/) endpoint to list the status of every Meraki device in the organization. NRQL to find device status telemetry: @@ -1436,12 +1407,6 @@ global: API Key (string) [Meraki Dashboard API key](https://documentation.meraki.com/General_Administration/Other_Topics/Cisco_Meraki_Dashboard_API#Enable_API_Access) for authentication. - - meraki_config.monitor_clients - - true | false (Default: false) - Monitor client status and performance per network. *(Not recommended for large environments due to timeout problems)* - meraki_config.monitor_devices @@ -1522,7 +1487,7 @@ global: meraki_config.preferences.device_status_only true | false (Default: false) - Used in combination with `monitor_devices` to restrict polling to only status information. *(This is helpful in large organizations to prevent timeout issues)*. + *Required* when using `monitor_devices: true` to restrict polling to only status information. *(This is used to prevent timeout issues)*. 
meraki_config.preferences.show_vpn_peers From 1ce2215c40bf2c4bc711f1d941beb087def1ceda Mon Sep 17 00:00:00 2001 From: larry Date: Tue, 5 Sep 2023 11:08:24 -0700 Subject: [PATCH 16/24] release note for 5.7.3 --- .../new-relic-android-5703.mdx | 11 +++++++++++ 1 file changed, 11 insertions(+) create mode 100644 src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx diff --git a/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx b/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx new file mode 100644 index 00000000000..67787ba7c24 --- /dev/null +++ b/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx @@ -0,0 +1,11 @@ +--- +subject: Mobile app for Android +releaseDate: '2023-09-05' +version: 5.7.3 +downloadLink: 'https://play.google.com/store/apps/details?id=com.newrelic.rpm' +--- + +### Notes + +Support for new SLA Details screen +Fixes login issues for some users From c1798e8fa09c1059ed23f5f1d82643bea916dd16 Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Tue, 5 Sep 2023 12:24:42 -0700 Subject: [PATCH 17/24] fix(kubernetes): Add button reference formatting --- .../docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx b/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx index adcab8c8744..a7433cabc06 100644 --- a/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx +++ b/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx @@ -1184,7 +1184,7 @@ helm upgrade --install kubecost \ --values kubecost-values.yaml ``` -6. Wait a few minutes. 
In the previous tab where you set up Remote Write, click the "See your data" button to see whether data has been received. +6. Wait a few minutes. In the previous tab where you set up Remote Write, click the **See your data** button to see whether data has been received. 7. Query your data: ```sql From 5421924448138457196070f31d6c73a4a963fe40 Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Tue, 5 Sep 2023 12:28:08 -0700 Subject: [PATCH 18/24] fix(pixie): Remove unnecessary comma --- .../advanced-configuration/manage-pixie-memory.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx index 160b664dd97..10b28517374 100644 --- a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx +++ b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx @@ -23,7 +23,7 @@ The primary focus of the [open source Pixie project](https://github.com/pixie-io When you install the New Relic Pixie integration, a [`vizier-pem` agent](https://docs.px.dev/reference/architecture/#vizier) is deployed to each node in your cluster via a DaemonSet. The `vizier-pem` agents use memory for two main purposes: * **Collecting telemetry data**: tracing application traffic or CPU profiles, amongst others. Those values must be stored in memory somewhere, as they're processed. -* **Short-term storage of telemetry data**: to power troubleshooting via the [Live debugging with Pixie tab](/docs/kubernetes-pixie/auto-telemetry-pixie/understand-use-data/live-debugging-with-pixie), and as a temporary storage location for a subset of the telemetry data before it's stored in New Relic. 
+* **Short-term storage of telemetry data**: to power troubleshooting via the [Live debugging with Pixie tab](/docs/kubernetes-pixie/auto-telemetry-pixie/understand-use-data/live-debugging-with-pixie) and as a temporary storage location for a subset of the telemetry data before it's stored in New Relic. By default, `vizier-pem` pods have a `2Gi` memory limit, and a `2Gi` memory request. They set aside 60% of their allocated memory for short-term data storage, leaving the other 40% for the data collection. From 20d7d8d69248953b7c7bf40f385684c15449bd8f Mon Sep 17 00:00:00 2001 From: ZuluEcho9 Date: Tue, 5 Sep 2023 14:02:21 -0700 Subject: [PATCH 19/24] fix(nav): fix some entries --- src/nav/accounts.yml | 2 +- src/nav/infrastructure.yml | 2 -- 2 files changed, 1 insertion(+), 3 deletions(-) diff --git a/src/nav/accounts.yml b/src/nav/accounts.yml index 05dc0579ee0..ddb5f71ff1f 100644 --- a/src/nav/accounts.yml +++ b/src/nav/accounts.yml @@ -18,7 +18,7 @@ pages: - title: Login troubleshooting path: /docs/accounts/accounts-billing/account-setup/troubleshoot-new-relics-password-email-address-login-problems - title: Users with multiple user records - path: /docs/accounts/accounts-billing/account-setup/multiple-logins-found + path: /docs/accounts/accounts-billing/account-setup/multiple-user-records - title: Email domain capture path: /docs/accounts/accounts-billing/account-setup/domain-capture - title: Account structure diff --git a/src/nav/infrastructure.yml b/src/nav/infrastructure.yml index 5a9c0e8cea9..5721ccd3b04 100644 --- a/src/nav/infrastructure.yml +++ b/src/nav/infrastructure.yml @@ -317,8 +317,6 @@ pages: pages: - title: Infrastructure integration alert threshold path: /docs/infrastructure/amazon-integrations/troubleshooting/cannot-create-alert-condition-infrastructure-integration - - title: No data appears - path: /docs/infrastructure/host-integrations/troubleshooting/not-seeing-host-integration-data - title: Pass infrastructure parameters to integration path: 
/docs/infrastructure/host-integrations/troubleshooting/pass-infrastructure-agent-parameters-host-integration - title: Run integrations manually From c21eb4c70782fd904eabc849be80f280ca3471b9 Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Tue, 5 Sep 2023 14:39:04 -0700 Subject: [PATCH 20/24] fix(nrdiag): Add code formatting --- .../pass-command-line-options-nrdiag.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx index 06787eab0c4..fb5dd1d1f5e 100644 --- a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx +++ b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx @@ -303,7 +303,7 @@ To use the following command line options with the Diagnostics CLI: - View the specified script. Use with -run to run the script. + View the specified script. Use with `-run` to run the script. @@ -313,7 +313,7 @@ To use the following command line options with the Diagnostics CLI: - Use with -script to run the script. + Use with `-script` to run the script. @@ -323,7 +323,7 @@ To use the following command line options with the Diagnostics CLI: - Use with -run -script to pass command line flags to the script. + Use with `-run -script` to pass command line flags to the script. 
From f1c1026b4eae59a5c14559aca4b455a0686d529a Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Tue, 5 Sep 2023 14:46:52 -0700 Subject: [PATCH 21/24] fix(nrdiag): Clarify sentence --- .../diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx index 3bace01f791..5e5686b5659 100644 --- a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx +++ b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx @@ -217,7 +217,7 @@ Scripts provide an additional datasource for information that isn't collected by ### Script output -Script output is printed to the screen, along with being saved in a file based on the name of the script, ie: `name-of-script.out`. This is saved in the directory specified by `-output-path`, defaulting to the current directory. +Script output is printed to the screen and is saved in a file based on the name of the script (for example, `name-of-script.out`). This is saved in the directory specified by `-output-path`, defaulting to the current directory. Scripts can also output files, either to the current working directory or the directory specified by `-output-path`. All output files are included in the results zip in the `ScriptOutput/` directory. 
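The naming convention described in the clarified sentence above (stdout saved as `name-of-script.out` under the `-output-path` directory, defaulting to the current directory) can be sketched roughly as follows. This is a hypothetical illustration, not the Diagnostics CLI's actual source:

```python
# Hypothetical sketch: where a script's stdout would land, given the
# "<script-name>.out" convention and the -output-path flag (default: ".").
from pathlib import Path

def script_output_file(script_name: str, output_path: str = ".") -> Path:
    return Path(output_path) / f"{script_name}.out"
```

For example, running the script `name-of-script` with the default output path yields `name-of-script.out` in the current directory.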
From 638a2c5c766bdd73abec4997aca3e0aff7c6b385 Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Tue, 5 Sep 2023 14:58:16 -0700 Subject: [PATCH 22/24] fix(Android): Add list formatting --- .../new-relic-android-5703.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx b/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx index 67787ba7c24..8be4583aba1 100644 --- a/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx +++ b/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx @@ -7,5 +7,5 @@ downloadLink: 'https://play.google.com/store/apps/details?id=com.newrelic.rpm' ### Notes -Support for new SLA Details screen -Fixes login issues for some users +* Support for new SLA Details screen +* Fixes login issues for some users From f22c109d345458b79525095120aa81504c5915be Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Tue, 5 Sep 2023 15:31:22 -0700 Subject: [PATCH 23/24] fix(network monitoring): Add bold per style guide --- .../network-performance-monitoring/advanced/advanced-config.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx b/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx index e558d7fe0ef..b90c8dfc479 100644 --- a/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx +++ b/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx @@ -1487,7 +1487,7 @@ global: meraki_config.preferences.device_status_only true | false (Default: false) - *Required* when using `monitor_devices: true` to restrict polling to only status information. *(This is used to prevent timeout issues)*. 
+ *Required* when using `monitor_devices: true` to restrict polling to only status information. **(This is used to prevent timeout issues)**. meraki_config.preferences.show_vpn_peers From 545bdcd23fd3a0c29add4d7cf8f239d7dcf1daef Mon Sep 17 00:00:00 2001 From: Rob Siebens Date: Tue, 5 Sep 2023 16:02:58 -0700 Subject: [PATCH 24/24] fix(Network monitoring): Move period inside parentheses. --- .../network-performance-monitoring/advanced/advanced-config.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx b/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx index b90c8dfc479..5a4b29505a8 100644 --- a/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx +++ b/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx @@ -1487,7 +1487,7 @@ global: meraki_config.preferences.device_status_only true | false (Default: false) - *Required* when using `monitor_devices: true` to restrict polling to only status information. **(This is used to prevent timeout issues)**. + *Required* when using `monitor_devices: true` to restrict polling to only status information. **(This is used to prevent timeout issues.)** meraki_config.preferences.show_vpn_peers
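For context on the final wording above, a device entry pairing `monitor_devices: true` with the required `device_status_only: true` preference might look like the following sketch. The entry name is illustrative and the API key is a placeholder; only the option names come from the settings documented in this file:

```yaml
devices:
  meraki_dashboard:           # illustrative device entry name
    meraki_config:
      api_key: APIKEY123ABC   # placeholder key, as in the example above
      monitor_devices: true
      preferences:
        device_status_only: true   # required with monitor_devices, to prevent timeouts
```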