Daily release/09/05/2023/evening #14488

Merged
36 commits merged
Sep 6, 2023
Commits
e34816b
fix(Accounts): Add target type query
rhetoric101 Aug 28, 2023
fa82bca
fix(accounts): Add sql formatting and reformat some queries
rhetoric101 Aug 28, 2023
3c6b3e4
Update changes-since-v3.mdx
reese-lee Sep 1, 2023
7797799
Update changes-since-v3.mdx
reese-lee Sep 1, 2023
1866704
Update link-otel-applications-kubernetes.mdx
reese-lee Sep 1, 2023
032e59e
Update data-governance.mdx
reese-lee Sep 1, 2023
211049a
Update pixie-data-security-overview.mdx
reese-lee Sep 1, 2023
cfd1a3d
Update manage-pixie-memory.mdx
reese-lee Sep 1, 2023
a40945f
Update get-started-kubecost.mdx
reese-lee Sep 2, 2023
c8db537
fix: update the query for suggested success sli
NilVentosa Sep 5, 2023
e9857db
Revert "fix: update the query for suggested success sli"
NilVentosa Sep 5, 2023
8745026
fix: update the query for suggested success sli
NilVentosa Sep 5, 2023
c4b0710
feat(diag-cli): Diagnostics CLI 3.1.0
daffinito Sep 5, 2023
11af820
fix(accounts): Remove latin
rhetoric101 Sep 5, 2023
618c03e
chore: update to reflect changes to agent
thezackm Sep 5, 2023
1ce2215
release note for 5.7.3
lwegener Sep 5, 2023
c1798e8
fix(kubernetes): Add button reference formatting
rhetoric101 Sep 5, 2023
5421924
fix(pixie): Remove unnecessary comma
rhetoric101 Sep 5, 2023
d26cb6e
Merge pull request #14468 from reese-lee/patch-15
rhetoric101 Sep 5, 2023
458af71
Merge pull request #14467 from reese-lee/patch-14
rhetoric101 Sep 5, 2023
1602cf0
Merge pull request #14466 from reese-lee/patch-13
rhetoric101 Sep 5, 2023
20d7d8d
fix(nav): fix some entries
zuluecho9 Sep 5, 2023
7aa0ce1
Merge pull request #14464 from reese-lee/patch-12
rhetoric101 Sep 5, 2023
16bdf88
Merge pull request #14470 from reese-lee/patch-17
rhetoric101 Sep 5, 2023
4c2593f
Merge pull request #14473 from NilVentosa/nilventosa/fix-sli-success-…
rhetoric101 Sep 5, 2023
c21eb4c
fix(nrdiag): Add code formatting
rhetoric101 Sep 5, 2023
f1c1026
fix(nrdiag): Clarify sentence
rhetoric101 Sep 5, 2023
638a2c5
fix(Android): Add list formatting
rhetoric101 Sep 5, 2023
99b3ea4
Merge pull request #14482 from lwegener/develop
rhetoric101 Sep 5, 2023
ec0caf5
Merge pull request #14485 from newrelic/Fixing-some-nav-issues
rhetoric101 Sep 5, 2023
f22c109
fix(network monitoring): Add bold per style guide
rhetoric101 Sep 5, 2023
8307d70
Merge pull request #14478 from daffinito/daffi/nrdiag-310
rhetoric101 Sep 5, 2023
545bdcd
fix(Network monitoring): Move period inside parentheses.
rhetoric101 Sep 5, 2023
d4853c9
Merge pull request #14481 from thezackm/chore/meraki-updates
rhetoric101 Sep 5, 2023
504d6f6
Merge pull request #14469 from reese-lee/patch-16
rhetoric101 Sep 5, 2023
0360a1d
Merge pull request #14394 from newrelic/rhs-add-query-for-target-type
rhetoric101 Sep 5, 2023
@@ -17,11 +17,11 @@ redirects:
- /docs/data-apis/understand-data/event-data/query-account-audit-logs-nrauditevent
---

As an additional security measure for using and managing New Relic, you can use the `NrAuditEvent` event to view audit logs that show changes in your New Relic organization.

## What is the `NrAuditEvent`? [#attributes]

The `NrAuditEvent` is created to record some important types of configuration changes you and your users make in your New Relic organization. Data gathered includes the type of account change, what actor made the change, a human-readable description of the action taken, and a timestamp for the change. Reported information includes:

* Users added or deleted
* User permission changes
@@ -57,8 +57,10 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To view all changes to your New Relic account for a specific time frame, run this basic NRQL query:

```
SELECT * from NrAuditEvent SINCE 1 day ago
```sql
SELECT *
FROM NrAuditEvent
SINCE 1 day ago
```
</Collapser>

@@ -68,9 +70,11 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To query what type of change to the account users was made the most frequently during a specific time frame, include the [`actionIdentifier` attribute](#actorIdentifier) in your query. For example:

```
SELECT count(*) AS Actions FROM NrAuditEvent
FACET actionIdentifier SINCE 1 week ago
```sql
SELECT count(*) AS Actions
FROM NrAuditEvent
FACET actionIdentifier
SINCE 1 week ago
```
</Collapser>

@@ -80,8 +84,11 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To query for information about created accounts and who created them, you can use something like:

```
SELECT actorEmail, actorId, targetId FROM NrAuditEvent WHERE actionIdentifier = 'account.create' SINCE 1 month ago
```sql
SELECT actorEmail, actorId, targetId
FROM NrAuditEvent
WHERE actionIdentifier = 'account.create'
SINCE 1 month ago
```
</Collapser>

@@ -91,8 +98,10 @@ Note that the query builder in the UI can only query one account at a time. If y
>
When you include `TIMESERIES` in a NRQL query, the results are shown as a line graph. For example:

```
SELECT count(*) from NrAuditEvent TIMESERIES facet actionIdentifier since 1 week ago
```sql
SELECT count(*)
FROM NrAuditEvent
TIMESERIES FACET actionIdentifier SINCE 1 week ago
```
</Collapser>

@@ -104,17 +113,20 @@ Note that the query builder in the UI can only query one account at a time. If y

To see all the changes made to users, you could use:

```
SELECT * FROM NrAuditEvent WHERE targetType = 'user'
SINCE this month
```sql
SELECT *
FROM NrAuditEvent
WHERE targetType = 'user'
SINCE this month
```

If you wanted to narrow that down to see changes to [user type](/docs/accounts/accounts-billing/new-relic-one-user-management/user-type), you could use:

```
SELECT * FROM NrAuditEvent WHERE targetType = 'user'
```sql
SELECT * FROM NrAuditEvent
WHERE targetType = 'user'
AND actionIdentifier IN ('user.self_upgrade', 'user.change_type')
SINCE this month
SINCE this month
```
</Collapser>

@@ -124,7 +136,7 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To query updates for your synthetic monitors during a specific time frame, include the [`actionIdentifier`](/attribute-dictionary/nrauditevent/actionidentifier) attribute in your query. For example:

```
```sql
SELECT count(*) FROM NrAuditEvent
WHERE actionIdentifier = 'synthetics_monitor.update_script'
FACET actionIdentifier, description, actorEmail
@@ -140,12 +152,28 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To query what configuration changes were made to any workload, use the query below. The `targetId` attribute contains the GUID of the workload that was modified, which you can use for searches. Since changes on workloads are often automated, you might want to include the `actorType` attribute to know if the change was done directly by a user through the UI or through the API.

```
```sql
SELECT timestamp, actorEmail, actorType, description, targetId
FROM NrAuditEvent WHERE targetType = 'workload'
FROM NrAuditEvent
WHERE targetType = 'workload'
SINCE 1 week ago LIMIT MAX
```
</Collapser>

<Collapser
id="target-type"
title="What target types are in my account?"
>

The `targetType` attribute describes the object that changed, such as an account, role, user, alert condition or notification, or logs. To generate a list of `targetType` values for your account, run the query below. Note that the query returns only `targetType` values that have actually been modified.

```sql
SELECT uniques(targetType)
FROM NrAuditEvent
SINCE 90 days ago
```
</Collapser>
</CollapserGroup>

### Changes made by specific users [#examples-who]
@@ -157,9 +185,10 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To see detailed information about any user who made changes to the account during a specific time frame, include [`actorType = 'user'`](#actorType) in the query. For example:

```
```sql
SELECT actionIdentifier, description, actorEmail, actorId, targetType, targetId
FROM NrAuditEvent WHERE actorType = 'user'
FROM NrAuditEvent
WHERE actorType = 'user'
SINCE 1 week ago
```
</Collapser>
@@ -170,8 +199,9 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To query account activities made by a specific person during the selected time frame, you must know their [`actorId`](#actorId). For example:

```
SELECT actionIdentifier FROM NrAuditEvent
```sql
SELECT actionIdentifier
FROM NrAuditEvent
WHERE actorId = 829034 SINCE 1 week ago
```
</Collapser>
@@ -182,8 +212,9 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To identify who ([`actorType`](#actorType)) has made the most changes to the account, include the [`actorEmail` attribute](#actorEmail) in your query. For example:

```
SELECT count(*) as Users FROM NrAuditEvent
```sql
SELECT count(*) AS Users
FROM NrAuditEvent
WHERE actorType = 'user'
FACET actorEmail SINCE 1 week ago
```
@@ -195,7 +226,7 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To query updates from your synthetic monitors made by a specific user, include the [`actionIdentifier`](/attribute-dictionary/nrauditevent/actionidentifier) and [`actorEmail`](/attribute-dictionary/nrauditevent/actoremail) attribute in your query. For example:

```
```sql
SELECT count(*) FROM NrAuditEvent
WHERE actionIdentifier = 'synthetics_monitor.update_script'
FACET actorEmail, actionIdentifier, description
@@ -213,9 +244,11 @@ Note that the query builder in the UI can only query one account at a time. If y
>
To see detailed information about changes to the account that were made using an API key during a specific time frame, include [`actorType = 'api_key'`](#actorType) in the query. For example:

```
```sql
SELECT actionIdentifier, description, targetType, targetId, actorAPIKey, actorId, actorEmail
FROM NrAuditEvent WHERE actorType = 'api_key' SINCE 1 week ago
FROM NrAuditEvent
WHERE actorType = 'api_key'
SINCE 1 week ago
```
</Collapser>
</CollapserGroup>
@@ -20,10 +20,10 @@ You can configure the amount of memory Pixie uses. During the installation, use

The primary focus of the [open source Pixie project](https://github.com/pixie-io/pixie) is to build a real-time debugging platform. Pixie [isn't intended to be a long-term durable storage solution](https://docs.px.dev/about-pixie/faq/#data-collection-how-much-data-does-pixie-store) and is best used in conjunction with New Relic. The New Relic integration queries Pixie every few minutes and persists a subset of Pixie's telemetry data in New Relic.

When you install the New Relic Pixie integration, a `vizier-pem` agent is deployed to each node in your cluster via a DaemonSet. The `vizier-pem` agents use memory for two main purposes:
When you install the New Relic Pixie integration, a [`vizier-pem` agent](https://docs.px.dev/reference/architecture/#vizier) is deployed to each node in your cluster via a DaemonSet. The `vizier-pem` agents use memory for two main purposes:

* **Collecting telemetry data**: tracing application traffic or CPU profiles, amongst other. Those values must be stored in memory somewhere, as they're processed.
* **Short-term storage of telemetry data**: to power troubleshooting via the [Live debugging with Pixie tab](/docs/kubernetes-pixie/auto-telemetry-pixie/understand-use-data/live-debugging-with-pixie); and as a temporary storage location for a subset of the telemetry data before it's stored in New Relic.
* **Collecting telemetry data**: tracing application traffic or CPU profiles, amongst others. Those values must be stored in memory somewhere, as they're processed.
* **Short-term storage of telemetry data**: to power troubleshooting via the [Live debugging with Pixie tab](/docs/kubernetes-pixie/auto-telemetry-pixie/understand-use-data/live-debugging-with-pixie) and as a temporary storage location for a subset of the telemetry data before it's stored in New Relic.

By default, `vizier-pem` pods have a `2Gi` memory limit, and a `2Gi` memory request. They set aside 60% of their allocated memory for short-term data storage, leaving the other 40% for the data collection.
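
The section above mentions configuring Pixie's memory during installation. As a hedged sketch of what that override might look like in a Helm values file (the `pixie-chart.pemMemoryLimit` key name is an assumption; verify it against your chart's documented values):

```yaml
# Hypothetical values-file excerpt for the Pixie/New Relic Helm install.
# The pemMemoryLimit key name is an assumption; check your chart's docs.
pixie-chart:
  pemMemoryLimit: "1Gi"  # lowers the default 2Gi vizier-pem memory limit
```

Lowering the limit reduces how much telemetry `vizier-pem` can buffer for short-term storage, so weigh memory savings against debugging needs.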

@@ -11,7 +11,7 @@ redirects:
- /docs/auto-telemetry-pixie/pixie-data-security-overview
---

Auto-telemetry with Pixie is our integration of Community Cloud for Pixie, a managed version of Pixie open source software. Auto-telemetry with Pixie therefore benefits from Pixie's approach to keeping data secure. The data that Pixie collects is stored entirely within your Kubernetes cluster. This data does not persist outside of your environment, and will never be stored by Community Cloud for Pixie. This means that your sensitive data remains within your environment and control.
Auto-telemetry with Pixie is our integration of [Community Cloud for Pixie](https://docs.px.dev/installing-pixie/install-guides/community-cloud-for-pixie/), a managed version of Pixie open source software. Auto-telemetry with Pixie therefore benefits from Pixie's approach to keeping data secure. The data that Pixie collects is stored entirely within your Kubernetes cluster. This data does not persist outside of your environment, and will never be stored by Community Cloud for Pixie. This means that your sensitive data remains within your environment and control.

Community Cloud for Pixie makes queries directly to your Kubernetes cluster to access the data. In order for the query results to be shown in the Community Cloud for Pixie UI, CLI, and API, the data is sent to the client from your cluster using a reverse proxy.

@@ -20,7 +20,7 @@ Community Cloud for Pixie’s reverse proxy is designed to ensure:
* Data is ephemeral. It only passes through the Community Cloud for Pixie's cloud proxy in transit. This ensures data locality.
* Data is encrypted while in transit. Only you are able to read your data.

New Relic fetches and stores data that related to an application's performance. With Auto-telemetry with Pixie, a predefined subset of data persists outside of your cluster. This data is stored in our database, in your selected region. This data persists in order to give you long-term storage, alerting, correlation with additional data, and the ability to use advanced New Relic platform capabilities, such as anomaly detection.
New Relic fetches and stores data related to an application's performance. With Auto-telemetry with Pixie, a predefined subset of data persists outside of your cluster. This data is stored in our database, in your selected region. It persists to give you long-term storage, alerting, correlation with additional data, and the ability to use advanced New Relic platform capabilities, such as [anomaly detection](/docs/alerts-applied-intelligence/applied-intelligence/anomaly-detection/anomaly-detection-applied-intelligence/).

The persisted performance metrics include, but are not limited to:

@@ -19,7 +19,7 @@ Actionable insights:

## Get started

In order to get started with Kubecost and New Relic, you'll need to set up Prometheus Remote Write in New Relic. Then, you will need to install the Kubecost agent.
To get started, first set up [Prometheus Remote Write](/docs/infrastructure/prometheus-integrations/install-configure-remote-write/set-your-prometheus-remote-write-integration/) in New Relic, then install the Kubecost agent.

### Set up Prometheus Remote Write

@@ -31,7 +31,7 @@ Go to the [Prometheus remote write setup launcher in the UI](https://one.newreli

### Install the Kubecost agent to your cluster

Now, we are going to install the Kubecost agent via Helm.
Next, install the Kubecost agent via Helm.

1. Download the template YAML file for the Kubecost agent installation. Save it to `kubecost-values.yaml`.

@@ -1175,7 +1175,7 @@ extraObjects: []
2. Open `kubecost-values.yaml` in an editor of your choice.
3. Go to line 671, and update `YOUR_URL_HERE` to contain the value of the URL you generated for the Prometheus Remote Write integration in the earlier step. It should look something like `https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=kubecost`.
4. Go to line 672, and update `YOUR_BEARER_TOKEN_HERE` to contain the value of the bearer token you generated for the Prometheus remote write integration in the earlier step.
5. Run the Helm command to add the Kubecost agent to your cluster. It should start sending data to New Relic.
5. Run the Helm command below to add the Kubecost agent to your cluster and start sending data to New Relic:

```shell
helm upgrade --install kubecost \
  --values kubecost-values.yaml
```
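
For reference, the values edited in steps 3 and 4 typically sit in a block shaped like the sketch below. The `prometheus.server` key path is an assumption based on the bundled Prometheus chart, and the exact line numbers vary by chart version; only the URL and bearer-token placeholders come from the steps above.

```yaml
# Sketch of the remote write section of kubecost-values.yaml.
# The prometheus.server key path is an assumption; verify it in your copy
# of the template before editing.
prometheus:
  server:
    remoteWrite:
      - url: "https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=kubecost"
        bearer_token: YOUR_BEARER_TOKEN_HERE
```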

6. Wait a few minutes. In the previous tab setting up Remote Write, click the "See your data" button to see whether data has been received.
7. Query your data.
6. Wait a few minutes. In the previous tab where you set up Remote Write, click the **See your data** button to see whether data has been received.
7. Query your data:

```sql
SELECT sum(`Total Cost($)`) AS 'Total Monthly Cost' FROM (FROM Metric SELECT (SELECT sum(`total_node_cost`) FROM (FROM Metric SELECT (average(kube_node_status_capacity_cpu_cores) * average(node_cpu_hourly_cost) * 730 + average(node_gpu_hourly_cost) * 730 + average(kube_node_status_capacity_memory_bytes) / 1024 / 1024 / 1024 * average(node_ram_hourly_cost) * 730) AS 'total_node_cost' FACET node)) + (SELECT (sum(acflb) / 1024 / 1024 / 1024 * 0.04) AS 'Container Cost($)' FROM (SELECT (average(container_fs_limit_bytes) * cardinality(container_fs_limit_bytes)) AS 'acflb' FROM Metric WHERE (NOT ((device = 'tmpfs')) AND (id = '/')))) + (SELECT sum(aphc * 730 * akpcb / 1024 / 1024 / 1024) AS 'Total Persistent Volume Cost($)' FROM (FROM Metric SELECT average(pv_hourly_cost) AS 'aphc', average(kube_persistentvolume_capacity_bytes) AS 'akpcb' FACET persistentvolume, instance)) AS 'Total Cost($)')