diff --git a/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx b/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx
index 060461a31e1..6838608e437 100644
--- a/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx
+++ b/src/content/docs/accounts/accounts/account-maintenance/query-account-audit-logs-nrauditevent.mdx
@@ -17,11 +17,11 @@ redirects:
  - /docs/data-apis/understand-data/event-data/query-account-audit-logs-nrauditevent
 ---
 
-As an additional security measure for using and managing New Relic, you can use the `NrAuditEvent` event to view audit logs that show changes in your New Relic organization. 
+As an additional security measure for using and managing New Relic, you can use the `NrAuditEvent` event to view audit logs that show changes in your New Relic organization.
 
 ## What is the `NrAuditEvent`? [#attributes]
 
-The `NrAuditEvent` is created to record some important types of configuration changes you and your users make in your New Relic organization. Data gathered includes the type of account change, what actor made the change, a human-readable description of the action taken, and a timestamp for the change. Reported information includes: 
+The `NrAuditEvent` is created to record some important types of configuration changes you and your users make in your New Relic organization. Data gathered includes the type of account change, what actor made the change, a human-readable description of the action taken, and a timestamp for the change. Reported information includes:
 
 * Users added or deleted
 * User permission changes
@@ -57,8 +57,10 @@ Note that the query builder in the UI can only query one account at a time. If y
>
    To view all changes to your New Relic account for a specific time frame, run this basic NRQL query:

-    ```
-    SELECT * from NrAuditEvent SINCE 1 day ago
+    ```sql
+    SELECT *
+    FROM NrAuditEvent
+    SINCE 1 day ago
     ```


@@ -68,9 +70,11 @@ Note that the query builder in the UI can only query one account at a time. If y
>
    To query what type of change to the account users was made the most frequently during a specific time frame, include the [`actionIdentifier` attribute](#actorIdentifier) in your query. For example:

-    ```
-    SELECT count(*) AS Actions FROM NrAuditEvent
-    FACET actionIdentifier SINCE 1 week ago
+    ```sql
+    SELECT count(*) AS Actions
+    FROM NrAuditEvent
+    FACET actionIdentifier
+    SINCE 1 week ago
     ```


@@ -80,8 +84,11 @@ Note that the query builder in the UI can only query one account at a time. If y
>
    To query for information about created accounts and who created them, you can use something like:

-    ```
-    SELECT actorEmail, actorId, targetId FROM NrAuditEvent WHERE actionIdentifier = 'account.create' SINCE 1 month ago
+    ```sql
+    SELECT actorEmail, actorId, targetId
+    FROM NrAuditEvent
+    WHERE actionIdentifier = 'account.create'
+    SINCE 1 month ago
     ```


@@ -91,8 +98,10 @@ Note that the query builder in the UI can only query one account at a time. If y
>
    When you include `TIMESERIES` in a NRQL query, the results are shown as a line graph. For example:

-    ```
-    SELECT count(*) from NrAuditEvent TIMESERIES facet actionIdentifier since 1 week ago
+    ```sql
+    SELECT count(*)
+    FROM NrAuditEvent
+    TIMESERIES FACET actionIdentifier SINCE 1 week ago
     ```


@@ -104,17 +113,20 @@ Note that the query builder in the UI can only query one account at a time. 
If y To see all the changes made to users, you could use: - ``` - SELECT * FROM NrAuditEvent WHERE targetType = 'user' - SINCE this month + ```sql + SELECT * + FROM NrAuditEvent + WHERE targetType = 'user' + SINCE this month ``` If you wanted to narrow that down to see changes to [user type](/docs/accounts/accounts-billing/new-relic-one-user-management/user-type), you could use: - ``` - SELECT * FROM NrAuditEvent WHERE targetType = 'user' + ```sql + SELECT * FROM NrAuditEvent + WHERE targetType = 'user' AND actionIdentifier IN ('user.self_upgrade', 'user.change_type') - SINCE this month + SINCE this month ``` @@ -124,7 +136,7 @@ Note that the query builder in the UI can only query one account at a time. If y > To query updates for your synthetic monitors during a specific time frame, include the [`actionIdentifier`](/attribute-dictionary/nrauditevent/actionidentifier) attribute in your query. For example: - ``` + ```sql SELECT count(*) FROM NrAuditEvent WHERE actionIdentifier = 'synthetics_monitor.update_script' FACET actionIdentifier, description, actorEmail @@ -140,12 +152,28 @@ Note that the query builder in the UI can only query one account at a time. If y > To query what configuration changes were made to any workload, use the query below. The `targetId` attribute contains the GUID of the workload that was modified, which you can use for searches. Since changes on workloads are often automated, you might want to include the `actorType` attribute to know if the change was done directly by a user through the UI or through the API. - ``` + ```sql SELECT timestamp, actorEmail, actorType, description, targetId - FROM NrAuditEvent WHERE targetType = 'workload' + FROM NrAuditEvent + WHERE targetType = 'workload' SINCE 1 week ago LIMIT MAX ``` + + + + + The `targetType` attribute describes the object that changed, such as account, role, user, alert conditions or notifications, and logs. + To generate a list of `targetType` values for your account, run the query below. Note that this query will only show `targetTypes` that have been touched. + + ```sql + SELECT uniques(targetType) + FROM NrAuditEvent + SINCE 90 days ago + ``` + ### Changes made by specific users [#examples-who] @@ -157,9 +185,10 @@ Note that the query builder in the UI can only query one account at a time. If y > To see detailed information about any user who made changes to the account during a specific time frame, include [`actorType = 'user'`](#actorType) in the query. For example: - ``` + ```sql SELECT actionIdentifier, description, actorEmail, actorId, targetType, targetId - FROM NrAuditEvent WHERE actorType = 'user' + FROM NrAuditEvent + WHERE actorType = 'user' SINCE 1 week ago ``` @@ -170,8 +199,9 @@ Note that the query builder in the UI can only query one account at a time. If y > To query account activities made by a specific person during the selected time frame, you must know their [`actorId`](#actorId). For example: - ``` - SELECT actionIdentifier FROM NrAuditEvent + ```sql + SELECT actionIdentifier + FROM NrAuditEvent WHERE actorId = 829034 SINCE 1 week ago ``` @@ -182,8 +212,9 @@ Note that the query builder in the UI can only query one account at a time. If y > To identify who ([`actorType`](#actorType)) has made the most changes to the account, include the [`actorEmail` attribute](#actorEmail) in your query. 
For example: - ``` - SELECT count(*) as Users FROM NrAuditEvent + ```sql + SELECT count(*) as Users + FROM NrAuditEvent WHERE actorType = 'user' FACET actorEmail SINCE 1 week ago ``` @@ -195,7 +226,7 @@ Note that the query builder in the UI can only query one account at a time. If y > To query updates from your synthetic monitors made by a specific user, include the [`actionIdentifier`](/attribute-dictionary/nrauditevent/actionidentifier) and [`actorEmail`](/attribute-dictionary/nrauditevent/actoremail) attribute in your query. For example: - ``` + ```sql SELECT count(*) FROM NrAuditEvent WHERE actionIdentifier = 'synthetics_monitor.update_script' FACET actorEmail, actionIdentifier, description @@ -213,9 +244,11 @@ Note that the query builder in the UI can only query one account at a time. If y > To see detailed information about changes to the account that were made using an API key during a specific time frame, include [`actorType = 'api_key'`](#actorType) in the query. For example: - ``` + ```sql SELECT actionIdentifier, description, targetType, targetId, actorAPIKey, actorId, actorEmail - FROM NrAuditEvent WHERE actorType = 'api_key' SINCE 1 week ago + FROM NrAuditEvent + WHERE actorType = 'api_key' + SINCE 1 week ago ``` diff --git a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx index 058414f3863..10b28517374 100644 --- a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx +++ b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/advanced-configuration/manage-pixie-memory.mdx @@ -20,10 +20,10 @@ You can configure the amount of memory Pixie uses. During the installation, use The primary focus of the [open source Pixie project](https://github.com/pixie-io/pixie) is to build a real-time debugging platform. Pixie [isn't intended to be a long-term durable storage solution](https://docs.px.dev/about-pixie/faq/#data-collection-how-much-data-does-pixie-store) and is best used in conjunction with New Relic. The New Relic integration queries Pixie every few minutes and persists a subset of Pixie's telemetry data in New Relic. -When you install the New Relic Pixie integration, a `vizier-pem` agent is deployed to each node in your cluster via a DaemonSet. The `vizier-pem` agents use memory for two main purposes: +When you install the New Relic Pixie integration, a [`vizier-pem` agent](https://docs.px.dev/reference/architecture/#vizier) is deployed to each node in your cluster via a DaemonSet. The `vizier-pem` agents use memory for two main purposes: -* **Collecting telemetry data**: tracing application traffic or CPU profiles, amongst other. Those values must be stored in memory somewhere, as they're processed. -* **Short-term storage of telemetry data**: to power troubleshooting via the [Live debugging with Pixie tab](/docs/kubernetes-pixie/auto-telemetry-pixie/understand-use-data/live-debugging-with-pixie); and as a temporary storage location for a subset of the telemetry data before it's stored in New Relic. +* **Collecting telemetry data**: tracing application traffic or CPU profiles, amongst others. Those values must be stored in memory somewhere, as they're processed. 
+* **Short-term storage of telemetry data**: to power troubleshooting via the [Live debugging with Pixie tab](/docs/kubernetes-pixie/auto-telemetry-pixie/understand-use-data/live-debugging-with-pixie) and as a temporary storage location for a subset of the telemetry data before it's stored in New Relic. By default, `vizier-pem` pods have a `2Gi` memory limit, and a `2Gi` memory request. They set aside 60% of their allocated memory for short-term data storage, leaving the other 40% for the data collection. diff --git a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/pixie-data-security-overview.mdx b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/pixie-data-security-overview.mdx index d1b49bdca57..dd8bfc9577e 100644 --- a/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/pixie-data-security-overview.mdx +++ b/src/content/docs/kubernetes-pixie/auto-telemetry-pixie/pixie-data-security-overview.mdx @@ -11,7 +11,7 @@ redirects: - /docs/auto-telemetry-pixie/pixie-data-security-overview --- -Auto-telemetry with Pixie is our integration of Community Cloud for Pixie, a managed version of Pixie open source software. Auto-telemetry with Pixie therefore benefits from Pixie's approach to keeping data secure. The data that Pixie collects is stored entirely within your Kubernetes cluster. This data does not persist outside of your environment, and will never be stored by Community Cloud for Pixie. This means that your sensitive data remains within your environment and control. +Auto-telemetry with Pixie is our integration of [Community Cloud for Pixie](https://docs.px.dev/installing-pixie/install-guides/community-cloud-for-pixie/), a managed version of Pixie open source software. Auto-telemetry with Pixie therefore benefits from Pixie's approach to keeping data secure. The data that Pixie collects is stored entirely within your Kubernetes cluster. This data does not persist outside of your environment, and will never be stored by Community Cloud for Pixie. This means that your sensitive data remains within your environment and control. Community Cloud for Pixie makes queries directly to your Kubernetes cluster to access the data. In order for the query results to be shown in the Community Cloud for Pixie UI, CLI, and API, the data is sent to the client from your cluster using a reverse proxy. @@ -20,7 +20,7 @@ Community Cloud for Pixie’s reverse proxy is designed to ensure: * Data is ephemeral. It only passes through the Community Cloud for Pixie's cloud proxy in transit. This ensures data locality. * Data is encrypted while in transit. Only you are able to read your data. -New Relic fetches and stores data that related to an application's performance. With Auto-telemetry with Pixie, a predefined subset of data persists outside of your cluster. This data is stored in our database, in your selected region. This data persists in order to give you long-term storage, alerting, correlation with additional data, and the ability to use advanced New Relic platform capabilities, such as anomaly detection. +New Relic fetches and stores data that related to an application's performance. With Auto-telemetry with Pixie, a predefined subset of data persists outside of your cluster. This data is stored in our database, in your selected region. 
This data persists in order to give you long-term storage, alerting, correlation with additional data, and the ability to use advanced New Relic platform capabilities, such as [anomaly detection](/docs/alerts-applied-intelligence/applied-intelligence/anomaly-detection/anomaly-detection-applied-intelligence/). The persisted performance metrics include, but are not limited to: diff --git a/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx b/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx index 18cffdd1eb5..a7433cabc06 100644 --- a/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx +++ b/src/content/docs/kubernetes-pixie/kubecost/get-started-kubecost.mdx @@ -19,7 +19,7 @@ Actionable insights: ## Get started -In order to get started with Kubecost and New Relic, you'll need to set up Prometheus Remote Write in New Relic. Then, you will need to install the Kubecost agent. +To get started, first set up [Prometheus Remote Write](/docs/infrastructure/prometheus-integrations/install-configure-remote-write/set-your-prometheus-remote-write-integration/) in New Relic, then install the Kubecost agent. ### Set up Prometheus Remote Write @@ -31,7 +31,7 @@ Go to the [Prometheus remote write setup launcher in the UI](https://one.newreli ### Install the Kubecost agent to your cluster -Now, we are going to install the Kubecost agent via Helm. +Next, install the Kubecost agent via Helm. 1. Download the template YAML file for the Kubecost agent installation. Save it to `kubecost-values.yaml`. @@ -1175,7 +1175,7 @@ extraObjects: [] 2. Open `kubecost-values.yaml` in an editor of your choice. 3. Go to line 671, and update `YOUR_URL_HERE` to contain the value of the URL you generated for the Prometheus Remote Write integration in the earlier step. It should look something like `https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=kubecost`. 4. Go to line 672, and update `YOUR_BEARER_TOKEN_HERE` to contain the value of the bearer token you generated for the Prometheus remote write integration in the earlier step. -5. Run the Helm command to add the Kubecost agent to your cluster. It should start sending data to New Relic. +5. Run the Helm command below to add the Kubecost agent to your cluster and start sending data to New Relic: ```shell helm upgrade --install kubecost \ [11:13:30] @@ -1184,8 +1184,8 @@ helm upgrade --install kubecost \ --values kubecost-values.yaml ``` -6. Wait a few minutes. In the previous tab setting up Remote Write, click the "See your data" button to see whether data has been received. -7. Query your data. +6. Wait a few minutes. In the previous tab where you set up Remote Write, click the **See your data** button to see whether data has been received. +7. 
Query your data: ```sql SELECT sum(`Total Cost($)`) AS 'Total Monthly Cost' FROM (FROM Metric SELECT (SELECT sum(`total_node_cost`) FROM (FROM Metric SELECT (average(kube_node_status_capacity_cpu_cores) * average(node_cpu_hourly_cost) * 730 + average(node_gpu_hourly_cost) * 730 + average(kube_node_status_capacity_memory_bytes) / 1024 / 1024 / 1024 * average(node_ram_hourly_cost) * 730) AS 'total_node_cost' FACET node)) + (SELECT (sum(acflb) / 1024 / 1024 / 1024 * 0.04) AS 'Container Cost($)' FROM (SELECT (average(container_fs_limit_bytes) * cardinality(container_fs_limit_bytes)) AS 'acflb' FROM Metric WHERE (NOT ((device = 'tmpfs')) AND (id = '/')))) + (SELECT sum(aphc * 730 * akpcb / 1024 / 1024 / 1024) AS 'Total Persistent Volume Cost($)' FROM (FROM Metric SELECT average(pv_hourly_cost) AS 'aphc', average(kube_persistentvolume_capacity_bytes) AS 'akpcb' FACET persistentvolume, instance)) AS 'Total Cost($)') diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx index 046feae107b..c9696113607 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3.mdx @@ -9,9 +9,9 @@ redirects: - /docs/kubernetes-pixie/kubernetes-integration/get-started/changes-since-v3 --- -From version 3 onwards, the Kubernetes solution of New Relic features an [architecture](/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-components/#architecture) which aims to be more modular and configurable, giving you more power to choose how the solution is deployed and making it compatible with more environments. +As of version 3, the New Relic Kubernetes integration features an [architecture](/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-components/#architecture) that aims to be more modular and configurable, giving you more power to choose how it is deployed and making it compatible with more environments. -Data reported by the Kubernetes Integration version 3 hasn't changed since version 2. For version 3, we focused on configurability, stability, and user experience. +Data reported by the Kubernetes integration version 3 hasn't changed since version 2. For version 3, we focused on configurability, stability, and user experience. See the latest release notes for the integration [here](/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/). The Kubernetes integration version 3 (`appVersion`) is included on the `nri-bundle` chart `version` 4. @@ -19,12 +19,12 @@ Data reported by the Kubernetes Integration version 3 hasn't changed since versi ## Migration Guide [#migration-guide] -To make migration from earlier versions as easy as possible, we have developed a compatibility layer that translates most of the options that could be specified in the old newrelic-infrastructure chart to their new counterparts. This compatibility layer is temporary and will be removed in the future. Therefore, we encourage you to read this guide carefully and migrate the configuration with human supervision. +To make migrating from earlier versions as easy as possible, we have developed a compatibility layer that translates most of the configurable options in the old `newrelic-infrastructure` chart to their new counterparts. 
This compatibility layer is temporary and will be removed in the future. We encourage you to read this guide carefully and migrate the configuration with human supervision. You can read more about the updated `newrelic-infrastructure` chart [here](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure#newrelic-infrastructure). ### Kube State Metrics (KSM) configuration [#ksm-config] - KSM monitoring works out of the box for most configurations, most users will not need to change this config. + KSM monitoring works out of the box for most configurations; most users will not need to change this config. * `disableKubeStateMetrics` has been replaced by `ksm.enabled`. The default is still the same (KSM scraping enabled). @@ -50,7 +50,7 @@ ksm: ### Control plane configuration [#controlplane-configuration] -Control plane configuration has changed substantially. If you previously had control plane monitoring enabled, we encourage you to take a look at the [Configure control plane monitoring](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring) dedicated page. +Control plane configuration has changed substantially. If you previously enabled control plane monitoring, we encourage you to take a look at our [Configure control plane monitoring](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring) documentation. The following options have been replaced by more comprehensive configuration, covered in the section linked above: @@ -61,15 +61,15 @@ The following options have been replaced by more comprehensive configuration, co ### Agent configuration [#agent-configuration] -Agent config file, previously specified in `config` has been moved to `common.agentConfig`. Format of the file has not changed, and the full range of options that can be configured can be found [here](/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/). +The agent config file, previously specified in `config`, has been moved to `common.agentConfig`. The format of the file has not changed, and the full range of options that can be configured can be found [here](/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/). -The following agent options were previously "aliased" in the root of the `values.yml` file, and are no longer available: +The following agent options were previously "aliased" in the root of the `values.yml` file, and are **no longer available**: * `logFile` has been replaced by `common.agentConfig.log_file`. * `eventQueueDepth` has been replaced by `common.agentConfig.event_queue_depth`. -* `customAttributes` has changed in format to a yaml object. The previous format, a manually json-encoded string e.g. `{"team": "devops"}` is deprecated. -* Previously, `customAttributes` had a default `clusterName` entry which might have unwanted consequences if removed. This is no longer the case, users may now safely override `customAttributes` on its entirety. -* `discoveryCacheTTL` has been completely removed, as the discovery is now performed using kubernetes informers which have a built-in cache. +* `customAttributes` has changed in format to a yaml object. The previous format, a manually JSON-encoded string e.g. `{"team": "devops"}`, is deprecated. +* Previously, `customAttributes` had a default `clusterName` entry that might have unwanted consequences if removed. 
This is no longer the case; users may now safely override `customAttributes` in its entirety. +* `discoveryCacheTTL` has been completely removed, as the discovery is now performed using Kubernetes informers, which have a built-in cache. ### Integrations configuration [#integrations-configuration] @@ -92,7 +92,7 @@ integrations: integrations: # ... ``` -Moreover, now the `--port` and `--tls` flags are mandatory on the discovery command. In the past, the following would work: +Moreover, the `--port` and `--tls` flags are now mandatory in the discovery command. In the past, the following would work: ```yaml integrations: @@ -112,36 +112,31 @@ integrations: exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250 ``` -This change is required because in v2 and below, the `nrk8s-kubelet` component (or its equivalent) ran with `hostNetwork: true`, so `nri-discovery-kubernetes` could connect to the kubelet using `localhost` and plain http. For security reasons, this is no longer the case, hence the need to specify both flags from now on. +This change is required because in v2 and below, the `nrk8s-kubelet` component (or its equivalent) ran with `hostNetwork: true`, so `nri-discovery-kubernetes` could connect to the kubelet using `localhost` and plain http. For security reasons, this is no longer the case; hence, the need to specify both flags from now on. -For more details on how to configure on-host integrations in Kubernetes please check the [Monitor services in Kubernetes](/docs/kubernetes-pixie/kubernetes-integration/link-apps-services/monitor-services-running-kubernetes) page. +For more details on how to configure on-host integrations in Kubernetes, please check our [Monitor services in Kubernetes](/docs/kubernetes-pixie/kubernetes-integration/link-apps-services/monitor-services-running-kubernetes) documentation. ### Miscellaneous chart values [#misc-chart-values] -While not related to the integration configuration, the following miscellaneous options for the helm chart have also changed: +While not related to the integration configuration, the following miscellaneous options for the Helm chart have also changed: * `runAsUser` has been replaced by `securityContext`, which is templated directly into the pods and more configurable. * `resources` has been removed, as now we deploy three different workloads. Resources for each one can be configured individually under: -* `ksm.resources` -* `kubelet.resources` -* `controlPlane.resources` -* Similarly, `tolerations` has been split into three and the previous one is no longer valid: -* `ksm.tolerations` -* `kubelet.tolerations` -* `controlPlane.tolerations` - - -* All three default to tolerate any value for `NoSchedule` and `NoExecute` - - + * `ksm.resources` + * `kubelet.resources` + * `controlPlane.resources` +* `tolerations` has been split into three and the previous one is no longer valid. All three default to tolerate any value for `NoSchedule` and `NoExecute`: + * `ksm.tolerations` + * `kubelet.tolerations` + * `controlPlane.tolerations` * `image` and all its subkeys have been replaced by individual sections for each of the three images that are now deployed: -* `images.forwarder.*` to configure the infrastructure-agent forwarder. -* `images.agent.*` to configure the image bundling the infrastructure-agent and on-host integrations. -* `images.integration.*` to configure the image in charge of scraping k8s data. + * `images.forwarder.*` to configure the infrastructure-agent forwarder. 
+  * `images.agent.*` to configure the image bundling the infrastructure-agent and on-host integrations.
+  * `images.integration.*` to configure the image in charge of scraping k8s data.
 
 ### Upgrade from v2 [#upgrade-from-v2]
 
-In order to upgrade from the Kubernetes integration version 2 (included in [nri-bundle chart](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) versions 3.x), we strongly encourage you to create a `values-newrelic.yaml` file with your desired and configuration. If you had previously installed our chart from the CLI directly, for example using a command like the following:
+In order to upgrade the Kubernetes integration from version 2 (included in [nri-bundle chart](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) versions 3.x), we strongly encourage you to create a `values-newrelic.yaml` file with your desired configuration. If you had previously installed our chart from the CLI directly, for example using a command like the following:
 
 ```shell
 helm install newrelic/nri-bundle \
@@ -176,7 +171,7 @@ logging:
   enabled: true
 ```
 
-After doing this, and adapting any other setting you might have changed according to the [section above](#migration-guide), you can upgrade by running the following command:
+After doing this, and adapting any other setting you might have changed according to the [migration guide above](#migration-guide), you can upgrade your `nri-bundle` by running the following command:
 
 ```shell
 helm upgrade newrelic newrelic/nri-bundle \
diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/data-governance.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/data-governance.mdx
index 17b604d8552..7e5b5f81f42 100644
--- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/data-governance.mdx
+++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/data-governance.mdx
@@ -9,7 +9,7 @@ metaDescription: How to manage your data from the Kubernetes integration.
 
 ### Change the scrape interval [#scrape-interval]
 
-The Kubernetes Integration v3 and above allows changing the interval at which metrics are gathered from the cluster. This allows choosing a tradeoff between data resolution and usage. We recommend choosing an interval between 15 and 30 seconds for optimal experience.
+The [New Relic Kubernetes integration v3](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/changes-since-v3/) and above allows changing the interval at which metrics are gathered from the cluster. This allows choosing a tradeoff between data resolution and usage. We recommend choosing an interval between 15 and 30 seconds for optimal experience.
 
 In order to change the scrape interval, add the following to your `values-newrelic.yaml`, under the `newrelic-infrastructure` section:
 
@@ -26,11 +26,11 @@ global:
   licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_
   cluster: _K8S_CLUSTER_NAME_
 
-# ... Other settings as shown above
+# ... Other settings
 
 # Configuration for newrelic-infrastructure
 newrelic-infrastructure:
-  # ... Other settings as shown above
+  # ... Other settings
   common:
     config:
       interval: 25s
@@ -42,7 +42,7 @@ newrelic-infrastructure:
 
 ### Filtering Namespaces [#filter-namespace]
 
-The Kubernetes Integration v3 and above allows filtering which namespaces are scraped by labelling them. By default all namespaces are scraped. 
+The Kubernetes integration v3 and above allows filtering on which namespaces are scraped by labelling them. All namespaces are scraped by default. We use the `namespaceSelector` in the same way Kubernetes does. In order to include only namespaces matching a label, change the `namespaceSelector` by adding the following to your `values-newrelic.yaml`, under the `newrelic-infrastructure` section: @@ -61,11 +61,11 @@ global: licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_ cluster: _K8S_CLUSTER_NAME_ -# ... Other settings as shown above +# ... Other settings # Configuration for newrelic-infrastructure newrelic-infrastructure: - # ... Other settings as shown above + # ... Other settings common: config: namespaceSelector: @@ -88,18 +88,18 @@ common: The expressions under `matchExpressions` are concatenated. -In this example namespaces with the label `newrelic.com/scrape` set to `false` will be excluded: +In this example, namespaces with the label `newrelic.com/scrape` set to `false` will be excluded: ```yaml global: licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_ cluster: _K8S_CLUSTER_NAME_ -# ... Other settings as shown above +# ... Other settings # Configuration for newrelic-infrastructure newrelic-infrastructure: - # ... Other settings as shown above + # ... Other settings common: config: namespaceSelector: @@ -107,13 +107,13 @@ newrelic-infrastructure: - {key: newrelic.com/scrape, operator: NotIn, values: ["false"]} ``` -See a full list of the settings that can be modified in the [chart's README file](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure). +See a full list of settings that can be modified in the [chart's README file](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure). -#### How can I know which namespaces are excluded? [#excluded-namespaces] +#### How can I find out which namespaces are excluded? [#excluded-namespaces] All the namespaces within the cluster are listed thanks to the `K8sNamespace` sample. The `nrFiltered` attribute determines whether the data related to the namespace is going to be scraped. 
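+If you only want to list the namespaces that are being excluded, you can filter on the same attribute. The following is a minimal sketch: it assumes `nrFiltered` is reported as a boolean, and it reuses the `_K8S_CLUSTER_NAME_` placeholder from the examples above:
+
+```sql
+FROM K8sNamespaceSample SELECT uniques(displayName)
+WHERE nrFiltered IS TRUE AND clusterName = '_K8S_CLUSTER_NAME_'
+SINCE 2 MINUTES AGO
+```
+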
-Use this query to know which namespaces are being monitored: +Use this query to find out which namespaces are being monitored: ```sql FROM K8sNamespaceSample SELECT displayName, nrFiltered WHERE clusterName = SINCE 2 MINUTES AGO diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes.mdx index c0b8fc570e6..a8662f9d72e 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes.mdx @@ -23,10 +23,10 @@ The steps in this guide enable your application to inject infrastructure-specifi ## Prerequisites [#prereqs] -To be successful with the steps below, you should already be familiar with OpenTelemetry and Kubernetes and have done the following: +To be successful with the steps below, you should already be familiar with OpenTelemetry and Kubernetes, and have done the following: * Created the following environment variables: - * `OTEL_EXPORTER_OTLP_ENDPOINT` ([New Relic endpoint for your region or purpose](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/#review-settings)) + * `OTEL_EXPORTER_OTLP_ENDPOINT` ([New Relic endpoint for your region or purpose](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/get-started/opentelemetry-set-up-your-app/#review-settings)) * `NEW_RELIC_API_KEY` () * Installed the [New Relic Kubernetes integration](/docs/kubernetes-pixie/kubernetes-integration/installation/kubernetes-integration-install-configure) in your cluster * Instrumented your applications with [OpenTelemetry](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/), and successfully sent data to New Relic via OpenTelemetry Protocol (OTLP) @@ -38,7 +38,7 @@ If you have general questions about using collectors with New Relic, see our [In To set this up, you need to add a custom snippet to the `env` stanza of your Kubernetes YAML file. We have an example below that shows the snippet for a sample frontend microservice (`Frontend.yaml`). The snippet includes two sections that do the following: * **Section 1:** Ensure that the telemetry data is sent to the collector. This sets the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` with the host IP. It does this by calling the downward API to pull the host IP. - * **Section 2:** Attach infrastructure-specific metadata. To do this, we capture `metadata.uid` using the downward API and add it to the OTEL_RESOURCE_ATTRIBUTES environment variable. This environment variable is used by the OpenTelemetry Collector’s `resourcedetection` and `k8sattributes` processors to add additional infrastructure-specific context to telemetry data. + * **Section 2:** Attach infrastructure-specific metadata. To do this, we capture `metadata.uid` using the downward API and add it to the `OTEL_RESOURCE_ATTRIBUTES` environment variable. This environment variable is used by the OpenTelemetry Collector’s `resourcedetection` and `k8sattributes` processors to add additional infrastructure-specific context to telemetry data. 
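+For instance, the two sections typically read the downward API and compose the environment variables like this. This is a sketch, not the doc's exact snippet: it assumes the collector agent's OTLP/gRPC receiver listens on its default port `4317`, and it uses the `k8s.pod.uid` resource-attribute key expected by the `k8sattributes` processor. The sample `Frontend.yaml` snippet that follows is the authoritative version:
+
+```yaml
+env:
+  # Section 1: resolve the node's IP via the downward API...
+  - name: HOST_IP
+    valueFrom:
+      fieldRef:
+        fieldPath: status.hostIP
+  # ...and point the OTLP exporter at the collector agent on that node.
+  - name: OTEL_EXPORTER_OTLP_ENDPOINT
+    value: "http://$(HOST_IP):4317"
+  # Section 2: capture the pod UID and attach it as a resource attribute.
+  - name: POD_UID
+    valueFrom:
+      fieldRef:
+        fieldPath: metadata.uid
+  - name: OTEL_RESOURCE_ATTRIBUTES
+    value: "k8s.pod.uid=$(POD_UID)"
+```
+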
For each microservice instrumented with OpenTelemetry, add the highlighted lines below to your manifest’s `env` stanza: @@ -77,9 +77,9 @@ fieldPath: metadata.uid ## Configure and deploy the OpenTelemetry Collector as an agent [#agent] -We recommend you deploy the [collector as an agent](https://opentelemetry.io/docs/collector/getting-started/#agent) on every node within a Kubernetes cluster. The agent can receive telemetry data as well as enrich telemetry data with metadata. For example, the collector can add custom attributes or infrastructure information through processors as well as handle batching, retry, compression and other more advanced features that are handled less efficiently at the client instrumentation level. +We recommend you deploy the [collector as an agent](https://opentelemetry.io/docs/collector/getting-started/#agent) on every node within a Kubernetes cluster. The agent can receive telemetry data, and enrich telemetry data with metadata. For example, the collector can add custom attributes or infrastructure information through processors, as well as handle batching, retry, compression and additional advanced features that are handled less efficiently at the client instrumentation level. -For help configuring the collector, see the sample collector configuration file below, along with sections about setting up these options: +For help configuring the collector, see the sample collector configuration file below, along with the sections about setting up these options: * [OTLP exporter](#otlp-exporter) * [batch processor](#batch) @@ -147,7 +147,7 @@ service: ### Step 1: Configure the OTLP exporter [#otlp-exporter] -First, configure it by adding an OTLP exporter to your [OpenTelemetry Collector configuration YAML file](https://opentelemetry.io/docs/collector/configuration/) along with your New Relic as a header. +First, add an OTLP exporter to your [OpenTelemetry Collector configuration YAML file](https://opentelemetry.io/docs/collector/configuration/) along with your New Relic as a header. ```yaml exporters: @@ -158,7 +158,7 @@ exporters: ### Step 2: Configure the batch processor [#batch] -The batch processor accepts spans, metrics, or logs and places them into batches to make it easier to compress the data and reduce the number of outgoing requests from the collector. +The batch processor accepts spans, metrics, or logs, and places them into batches to make it easier to compress the data and reduce the number of outgoing requests from the collector. ``` processors: @@ -187,7 +187,7 @@ Detectors: [ gke, gce ] ### Step 4: Configure the Kubernetes Attributes processor (general) [#attributes-general] -When we run the `k8sattributes` processor as part of the OpenTelemetry Collector running as an agent, it detects IP addresses of pods sending telemetry data to the OpenTelemetry Collector agent, using them to extract pod metadata. Below is a basic Kubernetes manifest example with only a processors section. To deploy the OpenTelemetry Collector as a `DaemonSet`, read this [comprehensive manifest example](https://github.com/newrelic-forks/microservices-demo/tree/main/src/otel-collector-agent). +When we run the `k8sattributes` processor as part of the OpenTelemetry Collector running as an agent, it detects the IP addresses of pods sending telemetry data to the OpenTelemetry Collector agent, using them to extract pod metadata. Below is a basic Kubernetes manifest example with only a processors section. 
To deploy the OpenTelemetry Collector as a `DaemonSet`, read this [comprehensive manifest example](https://github.com/newrelic-forks/microservices-demo/tree/main/src/otel-collector-agent). ```yaml processors: @@ -212,7 +212,7 @@ processors: ### Step 5: Configure the Kubernetes Attributes processor (RBAC) [#rbac] -You need to add configurations for role based access control (RBAC). The `k8sattributes` processor needs `get`, `watch` and `list` permissions for pods and namespaces resources included in the configured filters. See this [example](https://github.com/newrelic-forks/microservices-demo/blob/main/otel-kubernetes-manifests/otel-collector-agent.yaml#L43-L69) of how to configure role based access control (RBAC) for `ClusterRole` to give a `ServiceAccount` the necessary permissions for all pods and namespaces in the cluster. +You need to add configurations for role-based access control (RBAC). The `k8sattributes` processor needs `get`, `watch`, and `list` permissions for pods and namespaces resources included in the configured filters. See this [example](https://github.com/newrelic-forks/microservices-demo/blob/main/otel-kubernetes-manifests/otel-collector-agent.yaml#L43-L69) of how to configure role-based access control (RBAC) for `ClusterRole` to give a `ServiceAccount` the necessary permissions for all pods and namespaces in the cluster. ### Step 6: Configure the Kubernetes Attributes processor (discovery filter) [#discovery-filter] @@ -253,4 +253,6 @@ Click to enlarge the image: ## What's next? [#next] -Now that you've connected your OpenTelemetry-instrumented apps with Kubernetes, check out our [best practices](/docs/integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts/) guide for tips to improve your use of OpenTelemetry and New Relic. +Now that you've connected your OpenTelemetry-instrumented apps with Kubernetes, check out our [best practices](/docs/integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts/) guide for tips to improve your use of OpenTelemetry and New Relic. + +You can also check out this blog post, [Correlate OpenTelemetry traces, metrics, and logs with Kubernetes performance data](https://newrelic.com/blog/how-to-relic/k8s-with-otel) for more information on the steps provided above. diff --git a/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx b/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx index 764d6e828db..5a4b29505a8 100644 --- a/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx +++ b/src/content/docs/network-performance-monitoring/advanced/advanced-config.mdx @@ -134,7 +134,6 @@ devices: ext_only: true meraki_config: api_key: APIKEY123ABC - monitor_clients: true monitor_devices: true monitor_org_changes: true monitor_uplinks: true @@ -1289,34 +1288,6 @@ global: The [Meraki Dashboard API](https://developer.cisco.com/meraki/api-latest/) integration pulls various metrics related to the health of your Meraki environment. The combination of various configuration options allows you to set up different monitoring scenarios for your needs. - * `meraki_config.monitor_clients: true`: Uses the [Get Network Clients](https://developer.cisco.com/meraki/api-latest/get-network-clients/) endpoint to iterate through all target networks and return client data. - - - In large environments, this API call has known issues with timeouts against the Meraki Dashboard API, resulting in missing metrics. 
- - - NRQL to find network client telemetry: - - ```sql - FROM Metric SELECT - latest(status) AS 'Current Client Status', - max(kentik.meraki.clients.RecvTotal) AS 'Total Received Bytes', - max(kentik.meraki.clients.SentTotal) AS 'Total Sent Bytes' - FACET - network AS 'Network Name', - client_id AS 'Client ID', - client_mac_addr AS 'Client MAC', - description AS 'Client Description', - vlan AS 'Client VLAN', - user AS 'Client User', - manufacturer AS 'Client Manufacturer', - device_type AS 'Client Type', - recent_device_name AS 'Latest Device' - WHERE instrumentation.name = 'meraki.clients' - ``` - -
- * `meraki_config.monitor_devices: true && meraki_config.preferences.device_status_only: true`: Uses the [Get Organization Device Statuses](https://developer.cisco.com/meraki/api-latest/get-organization-devices-statuses/) endpoint to list the status of every Meraki device in the organization. NRQL to find device status telemetry: @@ -1436,12 +1407,6 @@ global: API Key (string) [Meraki Dashboard API key](https://documentation.meraki.com/General_Administration/Other_Topics/Cisco_Meraki_Dashboard_API#Enable_API_Access) for authentication. - - meraki_config.monitor_clients - - true | false (Default: false) - Monitor client status and performance per network. *(Not recommended for large environments due to timeout problems)* - meraki_config.monitor_devices @@ -1522,7 +1487,7 @@ global: meraki_config.preferences.device_status_only true | false (Default: false) - Used in combination with `monitor_devices` to restrict polling to only status information. *(This is helpful in large organizations to prevent timeout issues)*. + *Required* when using `monitor_devices: true` to restrict polling to only status information. **(This is used to prevent timeout issues.)** meraki_config.preferences.show_vpn_peers diff --git a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx index 2bbd9e6211b..fb5dd1d1f5e 100644 --- a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx +++ b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/pass-command-line-options-nrdiag.mdx @@ -287,6 +287,46 @@ To use the following command line options with the Diagnostics CLI: + + + `-list-scripts` + + + + List available scripts. + + + + + + `-script STRING` + + + + View the specified script. Use with `-run` to run the script. + + + + + + `-run` + + + + Use with `-script` to run the script. + + + + + + `-script-flags` + + + + Use with `-run -script` to pass command line flags to the script. + + + `-v` diff --git a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx index a3e25de123e..5e5686b5659 100644 --- a/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx +++ b/src/content/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag.mdx @@ -208,9 +208,90 @@ To run from PowerShell, add `./` to the start of `cmd`. * For ARM64 systems: ``` - nrdiag_arm64 -suites SUITE NAMES + nrdiag_arm64.exe -suites SUITE NAMES ``` +## Scripts [#scripts] + +Scripts provide an additional datasource for information that isn't collected by a task. The catalog of available scripts can be found in [the Diagnostic CLI's github repository](https://github.com/newrelic/newrelic-diagnostics-cli/tree/main/scriptcatalog). + +### Script output + +Script output is printed to the screen and is saved in a file based on the name of the script (for example, `name-of-script.out`). This is saved in the directory specified by `-output-path`, defaulting to the current directory. + +Scripts can also output files, either to the current working directory or the directory specified by `-output-path`. 
All output files are included in the results zip in the `ScriptOutput/` directory.
+
+### Script results
+
+The results of running a script can be found in the `nrdiag-output.json` file with the following schema:
+
+```json
+"Script": {
+    "Name": "example",
+    "Description": "Example Description",
+    "Output": "example output",
+    "OutputFiles": [
+        "/path/to/example.out",
+        "/path/to/another-file.out"
+    ],
+    "OutputTruncated": false
+}
+```
+
+The `Output` field contains the stdout output. If it is over 20000 characters, it is truncated and the `OutputTruncated` field is set to `true`. Even if truncated, the full output is still available in the `ScriptOutput/` directory in the zip file.
+
+A list of files the script created can be found in the `OutputFiles` field.
+
+### List, view, and run a script [#list-view-run-script]
+
+
+
+    To view a list of the scripts available to run, use `-list-scripts`:
+    ```
+    ./nrdiag -list-scripts
+    ```
+
+
+    To view a script without running it:
+    ```
+    ./nrdiag -script SCRIPT_NAME
+    ```
+
+
+    To run a script:
+    ```
+    ./nrdiag -script SCRIPT_NAME -run
+    ```
+
+
+    To run a script with arguments:
+    ```
+    ./nrdiag -script SCRIPT_NAME -run -script-flags "-foo bar"
+    ```
+
+
+    To run a script and suites at the same time:
+    ```
+    ./nrdiag -script SCRIPT_NAME -run -s SUITE NAMES
+    ```
+
+
+
 ## Include additional files in the zip [#include-additional-files]
 
 If you have additional files that you would like to share with support, you can include them in the `nrdiag-output.zip` file using the `-include` command line flag. This can be used with a single file or a directory. If a directory is provided, all of its subdirectories are included. The total size limit of the files included is 4GB.
 
@@ -237,7 +318,7 @@ To run from PowerShell, add `./` to the start of `cmd`. 
* For 32-bit systems: ``` - nrdiag -include Path\To\File -attach + nrdiag.exe -include Path\To\File -attach ``` * For 64-bit systems: @@ -306,7 +387,7 @@ Uploading your results to an account will automatically upload the contents of t OR ``` - nrdiag -api-key ${API_KEY} + nrdiag.exe -api-key ${API_KEY} ``` * For 64-bit systems: @@ -317,7 +398,7 @@ Uploading your results to an account will automatically upload the contents of t OR ``` - nrdiag_x64 -api-key ${API_KEY} + nrdiag_x64.exe -api-key ${API_KEY} ``` * For ARM64 systems: @@ -328,7 +409,7 @@ Uploading your results to an account will automatically upload the contents of t OR ``` - nrdiag_arm64 -api-key ${API_KEY} + nrdiag_arm64.exe -api-key ${API_KEY} ``` diff --git a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-251.mdx b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-251.mdx index 4070e1184df..966a2752dba 100644 --- a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-251.mdx +++ b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-251.mdx @@ -2,7 +2,6 @@ subject: Diagnostics CLI (nrdiag) releaseDate: '2023-05-24' version: 2.5.1 -downloadLink: 'https://download.newrelic.com/nrdiag/nrdiag_2.5.1.zip' --- ## Changes diff --git a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-261.mdx b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-261.mdx index b5097c23c2e..0f5a7220a2b 100644 --- a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-261.mdx +++ b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-261.mdx @@ -2,7 +2,6 @@ subject: Diagnostics CLI (nrdiag) releaseDate: '2023-07-11' version: 2.6.1 -downloadLink: 'https://download.newrelic.com/nrdiag/nrdiag_2.6.1.zip' --- ## Changes diff --git a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-262.mdx b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-262.mdx index f5e73bf3751..e1705ff2712 100644 --- a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-262.mdx +++ b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-262.mdx @@ -2,7 +2,6 @@ subject: Diagnostics CLI (nrdiag) releaseDate: '2023-07-17' version: 2.6.2 -downloadLink: 'https://download.newrelic.com/nrdiag/nrdiag_2.6.2.zip' --- ## Changes diff --git a/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-310.mdx b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-310.mdx new file mode 100644 index 00000000000..91f9a99bd85 --- /dev/null +++ b/src/content/docs/release-notes/diagnostics-release-notes/diagnostics-cli-release-notes/diagnostics-cli-310.mdx @@ -0,0 +1,16 @@ +--- +subject: Diagnostics CLI (nrdiag) +releaseDate: '2023-09-05' +version: 3.1.0 +downloadLink: 'https://download.newrelic.com/nrdiag/nrdiag_3.1.0.zip' +--- + +## New Feature +- The CLI now supports running scripts to gather additional output that isn't currently collected by a task. 
For more information, please see the [Run the Diagnostics CLI documentation](https://docs.newrelic.com/docs/new-relic-solutions/solve-common-issues/diagnostics-cli-nrdiag/run-diagnostics-cli-nrdiag/#scripts). ([#182](https://github.com/newrelic/newrelic-diagnostics-cli/pull/182), [#185](https://github.com/newrelic/newrelic-diagnostics-cli/pull/185)) + +## Task updates +- Updated Hotspot versions supported by the Java APM agent. ([#183](https://github.com/newrelic/newrelic-diagnostics-cli/pull/183)) + +## Fixes +- Fixed an issue when using `-output-path` where the `nrdiag-filelist.txt` and `nrdiag-output.json` files were not included in the `nrdiag-output.zip`. ([#187](https://github.com/newrelic/newrelic-diagnostics-cli/pull/187)) +- Fixed an issue that prevented some logs from being included in the zip. ([#188](https://github.com/newrelic/newrelic-diagnostics-cli/pull/188)) diff --git a/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx b/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx new file mode 100644 index 00000000000..8be4583aba1 --- /dev/null +++ b/src/content/docs/release-notes/mobile-apps-release-notes/new-relic-android-release-notes/new-relic-android-5703.mdx @@ -0,0 +1,11 @@ +--- +subject: Mobile app for Android +releaseDate: '2023-09-05' +version: 5.7.3 +downloadLink: 'https://play.google.com/store/apps/details?id=com.newrelic.rpm' +--- + +### Notes + +* Support for new SLA Details screen +* Fixes login issues for some users diff --git a/src/content/docs/service-level-management/create-slm.mdx b/src/content/docs/service-level-management/create-slm.mdx index 4a25d5baa3f..1804c10e46c 100644 --- a/src/content/docs/service-level-management/create-slm.mdx +++ b/src/content/docs/service-level-management/create-slm.mdx @@ -134,7 +134,7 @@ Based on `Transaction` events, these SLIs are the most common for request-driven ```sql FROM: TransactionError - WHERE: entityGuid = '{entityGuid}' AND error.expected IS FALSE + WHERE: entityGuid = '{entityGuid}' AND error.expected != true ``` Where `{entityGuid}` is the service's GUID. diff --git a/src/nav/accounts.yml b/src/nav/accounts.yml index 05dc0579ee0..ddb5f71ff1f 100644 --- a/src/nav/accounts.yml +++ b/src/nav/accounts.yml @@ -18,7 +18,7 @@ pages: - title: Login troubleshooting path: /docs/accounts/accounts-billing/account-setup/troubleshoot-new-relics-password-email-address-login-problems - title: Users with multiple user records - path: /docs/accounts/accounts-billing/account-setup/multiple-logins-found + path: /docs/accounts/accounts-billing/account-setup/multiple-user-records - title: Email domain capture path: /docs/accounts/accounts-billing/account-setup/domain-capture - title: Account structure diff --git a/src/nav/infrastructure.yml b/src/nav/infrastructure.yml index 5a9c0e8cea9..5721ccd3b04 100644 --- a/src/nav/infrastructure.yml +++ b/src/nav/infrastructure.yml @@ -317,8 +317,6 @@ pages: pages: - title: Infrastructure integration alert threshold path: /docs/infrastructure/amazon-integrations/troubleshooting/cannot-create-alert-condition-infrastructure-integration - - title: No data appears - path: /docs/infrastructure/host-integrations/troubleshooting/not-seeing-host-integration-data - title: Pass infrastructure parameters to integration path: /docs/infrastructure/host-integrations/troubleshooting/pass-infrastructure-agent-parameters-host-integration - title: Run integrations manually