Merge pull request #18783 from newrelic/daily-release/09-27-24-morning
Daily release/09 27 24 morning
cbehera-newrelic authored Sep 27, 2024
2 parents 0b683d2 + dee427f commit 3a0beca
Showing 4 changed files with 82 additions and 43 deletions.
This query breaks down [Metric data](/docs/telemetry-data-platform/understand-data/new-relic-data-types/#dimensional-metrics) by the top ten metric names. You could also facet by `appName` or `host` to adjust the analysis.

```sql
FROM Metric SELECT bytecountestimate()/10e8 AS 'GB Estimate'
SINCE 24 hours ago
FACET metricName LIMIT 10 TIMESERIES 1 hour
```
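For example, a variant of the same estimate faceted by application instead of metric name might look like this:

```sql
FROM Metric SELECT bytecountestimate()/10e8 AS 'GB Estimate'
SINCE 24 hours ago
FACET appName LIMIT 10 TIMESERIES 1 hour
```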
Here's a query for getting the current month's cost for your full platform users:
```sql
FROM NrMTDConsumption
SELECT latest(FullPlatformUsersBillable) * YOUR_PER_FULL_PLATFORM_USER_COST
```
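As a rough worked example, assuming a hypothetical per-full-platform-user cost of $100 (substitute the rate from your own order), the query becomes:

```sql
FROM NrMTDConsumption
SELECT latest(FullPlatformUsersBillable) * 100
```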
### User queries for organizations without core users [#queries-non-core]
These queries apply for some older New Relic organizations that have only two user types.
```sql
FROM NrMTDConsumption
SELECT latest(FullUsersBillable)
```
This query shows how many full platform users were counted by hour. This is useful for seeing how the full platform user count changed over time.
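One way to express that hourly view (a sketch; adjust the time range to your needs) is:

```sql
FROM NrMTDConsumption
SELECT latest(FullUsersBillable)
SINCE 1 week ago TIMESERIES 1 hour
```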
### Projected monthly full platform user count
Here's an example of querying the projected end-of-month count of monthly full platform users with 10 days left in that month. Note that this query isn't well suited for use in a dashboard because it relies on values that change with the days remaining in the month and the start of the month.

```sql
FROM NrMTDConsumption
```

To see the count of full platform users and basic users:
```sql
FROM NrUsage SELECT max(usage) SINCE 10 days ago
WHERE productLine='FullStackObservability'
AND metric IN ('FullUsers', 'BasicUsers')
FACET metric
```

To see the count of full platform users and basic users over time:

```sql
FROM NrUsage SELECT max(usage) SINCE 10 days ago
WHERE productLine='FullStackObservability'
AND metric IN ('FullUsers', 'BasicUsers')
FACET metric TIMESERIES 1 hour
```

### Estimated cost

```sql
FROM NrMTDConsumption
SELECT latest(FullPlatformUsersBillable) * YOUR_PER_FULL_PLATFORM_USER_COST
```

Here's an equivalent one for your core users:
```sql
FROM NrMTDConsumption
SELECT latest(CoreUsersBillable) * YOUR_PER_CORE_USER_COST
```
</Collapser>
</CollapserGroup>
Here are some query recommendations for helping you understand the estimated cost of your data ingest:
```sql
FROM NrMTDConsumption
SELECT latest(GigabytesIngestedBillable) * YOUR_PER_GB_COST
```
Here's an example of this query using a [per-GB cost of $0.35](/docs/accounts/accounts-billing/new-relic-one-pricing-billing/data-ingest-billing/#data-prices):

```sql
FROM NrMTDConsumption
SELECT latest(GigabytesIngestedBillable) * .35
```
</Collapser>

Here are some NRQL alert condition examples.
```sql
FROM NrMTDConsumption
SELECT latest(GigabytesIngestedBillable) * YOUR_PER_GB_COST
```
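If you use this as a NRQL alert condition, the value returned is a dollar figure, so set the threshold to the month-to-date spend you don't want to exceed.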
</Collapser>
</CollapserGroup>
---
Here's the basic syntax for creating all NRQL alert conditions.

```sql
SELECT function(attribute)
FROM Event
WHERE attribute [comparison] [AND|OR ...]
```
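A concrete instance of this shape, with illustrative event and attribute names, might be:

```sql
SELECT average(duration)
FROM Transaction
WHERE appName = 'MyApp' AND transactionType = 'Web'
```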

<table>
Some elements of NRQL used in charts don't make sense in the context of streaming alerts.
Example:

```sql
SELECT percentile(largestContentfulPaint, 75)
FROM PageViewTiming
WHERE (appId = 837807) SINCE yesterday
```

NRQL conditions produce a never-ending stream of windowed query results, so the `SINCE` and `UNTIL` keywords to scope the query to a point in time are not compatible. As a convenience, we automatically strip `SINCE` and `UNTIL` from a query when creating a condition from the context of a chart.
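For instance, the chart query above would be evaluated as the following condition query once `SINCE yesterday` is stripped:

```sql
SELECT percentile(largestContentfulPaint, 75)
FROM PageViewTiming
WHERE (appId = 837807)
```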
Original query:
```sql
SELECT count(foo), average(bar), max(baz)
FROM Transaction
```
Decomposed:
```sql
SELECT count(foo) FROM Transaction
SELECT average(bar) FROM Transaction
SELECT max(baz) FROM Transaction
```
</td>
</tr>
For example, to create an alert condition equivalent to
```sql
SELECT count(*)
FROM Transaction
TIMESERIES 1 minute SLIDE BY 5 minutes
```
You would use a data aggregation window duration of 5 minutes, with a sliding window aggregation of 1 minute.
</td>
<td>
In NRQL queries, the `LIMIT` clause is used to control the amount of data a query returns, either the maximum number of facet values returned by `FACET` queries or the maximum number of items returned by `SELECT *` queries.
`LIMIT` is not compatible with NRQL alerting: evaluation is always performed on the full result set.
</td>
Here are some common use cases for NRQL conditions.
Create constrained alerts that target a specific segment of your data, such as a few key customers or a range of data. Use the `WHERE` clause to define those conditions.
```sql
SELECT average(duration)
FROM Transaction
WHERE account_id IN (91290, 102021, 20230)
```
```sql
SELECT percentile(duration, 95)
FROM Transaction
WHERE name LIKE 'Controller/checkout/%'
```
</Collapser>
Create alerts when an Nth percentile of your data hits a specified threshold; for example, maintaining SLA service levels. Since we evaluate the NRQL query based on the aggregation window duration, percentiles will be calculated for each duration separately.
```sql
SELECT percentile(duration, 95)
FROM Transaction
```
```sql
SELECT percentile(databaseDuration, 75)
FROM Transaction
```
</Collapser>
Create alerts when your data hits a certain maximum, minimum, or average; for example, ensuring that a duration or response time does not pass a certain threshold.
```sql
SELECT max(duration)
FROM Transaction
```
```sql
SELECT min(duration)
FROM Transaction
```
```sql
SELECT average(duration)
FROM Transaction
```
</Collapser>
Create alerts when a proportion of your data goes above or below a certain threshold.
```sql
SELECT percentage(count(*), WHERE duration > 2)
FROM Transaction
```
```sql
SELECT percentage(count(*), WHERE http.statusCode = '500')
FROM Transaction
```
</Collapser>
Create alerts on [Apdex](/docs/apm/new-relic-apm/apdex/apdex-measuring-user-satisfaction), applying your own T-value for certain transactions. For example, get an alert notification when your Apdex for a T-value of 500ms on transactions for production apps goes below 0.8.
```sql
SELECT apdex(duration, t:0.5)
FROM Transaction
WHERE appName LIKE '%prod%'
```
</Collapser>
</CollapserGroup>
By default, the aggregation window duration is 1 minute, but you can change the duration.
Let's say this is your alert condition query:
```sql
SELECT count(*)
FROM SyntheticCheck
WHERE monitorName = 'My Cool Monitor' AND result = 'FAILED'
```
If there are no failures for the aggregation window:
If you have a data source delivering legitimate numeric zeroes, the query will return those zero values.
Let's say this is your alert condition query, and that `MyCoolAttribute` is an attribute that can sometimes have a zero value.

```sql
SELECT average(MyCoolAttribute)
FROM MyCoolEvent
```

If, in the aggregation window being evaluated, there's at least one instance of `MyCoolEvent` and if the average value of all `MyCoolAttribute` attributes from that window is equal to zero, then a `0` value will be returned. If there are no `MyCoolEvent` events during that minute, then a `NULL` will be returned due to the order of operations.
You can avoid `NULL` values entirely with a query order of operations shortcut.
Here's an example to alert on `FAILED` results:

```sql
SELECT filter(count(*), WHERE result = 'FAILED')
FROM SyntheticCheck
WHERE monitorName = 'My Favorite Monitor'
```

In this example, a window with a successful result would return a `0`, allowing the condition's threshold to resolve on its own.
For more information, check out our [blog post](https://discuss.newrelic.com/t/r
Without a `FACET`, the inner query produces a single result, giving the outer query nothing to aggregate. If you're using a nested query, make sure your inner query is faceted.

```sql
SELECT max(cpu)
FROM
(
  SELECT min(cpuPercent) AS 'cpu'
  FROM SystemSample
  FACET hostname
)
```
</Collapser>

With an alert aggregation window of 1 minute, the inner query would produce two smaller windows of 30 seconds. In theory, these two windows could be aggregated by the outer query. However, this is not currently supported.

```sql
SELECT max(cpu)
FROM
(
  SELECT min(cpuTime) AS cpu
  FROM Event
  TIMESERIES 30 seconds
)
```
</Collapser>

<Collapser
Here are some tips for creating and using a NRQL condition:
</td>
<td>
In order for a NRQL alert condition [health status display](/docs/alerts-applied-intelligence/new-relic-alerts/alert-conditions/view-entity-health-status-find-entities-without-alert-conditions) to function properly, the query must be scoped to a single entity. To do this, either use a `WHERE` clause (for example, `WHERE appName = 'MyFavoriteApp'`) or use a `FACET` clause to scope each signal to a single entity (for example, `FACET hostname` or `FACET appName`).
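For example, a condition query scoped to a single application (illustrative app name) might look like:

```sql
SELECT average(duration)
FROM Transaction
WHERE appName = 'MyFavoriteApp'
```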
</td>
</tr>
Loss of signal settings include a time duration and a few actions.
**Loss of signal actions**
</DNT>
Once a signal is considered lost, you have a few options:
* Close all current open incidents: This closes all open incidents that are related to a specific signal. It won't necessarily close all incidents for a condition. If you're alerting on an ephemeral service, or on a sporadic signal, you'll want to choose this action to ensure that incidents are closed properly. The GraphQL node name for this is [`closeViolationsOnExpiration`](/docs/apis/nerdgraph/examples/nerdgraph-api-loss-signal-gap-filling/#loss-of-signal).
* Open new incidents: This will open a new incident when the signal is considered lost. These incidents will indicate that they are due to a loss of signal. Based on your incident preferences, this should trigger a notification. The GraphQL node name for this is [`openViolationOnExpiration`](/docs/apis/nerdgraph/examples/nerdgraph-api-loss-signal-gap-filling/#loss-of-signal).
* When you enable both of the above actions, we'll close all open incidents first, and then open a new incident for loss of signal.
* Do not open "lost signal" incidents on expected termination. When a signal is expected to terminate, you can choose not to open a new incident. This is useful when you know that a signal will be lost at a certain time, and you don't want to open a new incident for that signal loss. The GraphQL node name for this is [`ignoreOnExpectedTermination`](/docs/apis/nerdgraph/examples/nerdgraph-api-loss-signal-gap-filling/#loss-of-signal).
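For example, if a nightly batch job intentionally stops reporting once it finishes, enabling the expected-termination option keeps that routine stop from opening a new loss-of-signal incident.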
You can adjust the [delay/timer](/docs/alerts-applied-intelligence/new-relic-ale

For the cadence method, the total supported latency is the sum of the aggregation window duration and the delay.
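For example, with a 5-minute aggregation window and a 3-minute delay, data arriving up to 8 minutes after it was recorded can still be evaluated (illustrative values).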

If the data type comes from an [APM language agent](/docs/apm/new-relic-apm/getting-started/introduction-apm) and is aggregated from many app instances (for example, `Transaction`, `TransactionError`, etc.), we recommend using the event flow method with the default settings.

<Callout variant="important">
When creating NRQL conditions for data collected from [Infrastructure Cloud Integrations](/docs/infrastructure/infrastructure-integrations/get-started/introduction-infrastructure-integrations/#cloud) such as AWS CloudWatch or Azure, we recommend that you use the event timer method.
---
Here are some examples of [NRQL queries](/docs/insights/nrql-new-relic-query-lan
>
Here's an example of a query that groups inventory change events from the last day by the type of change:

```sql
SELECT count(*) FROM InfrastructureEvent
WHERE format = 'inventoryChange'
FACET changeType SINCE 1 day ago
```
</Collapser>
---
To get your Azure account's subscription `id` and `tenantId`, use your local terminal:
1. Open a terminal with access to your Azure account.
2. Type the following:

```sh
az account show
```
3. Copy and save the subscription `id` and `tenantId` from the output response for later use.

The response should look similar to the example below. The subscription `id` and `tenantId` are highlighted.

```json lineHighlight=4,8
@Azure:~$ az account show
{
"environmentName": "AzureCloud",
