(NR-320588) Adding some documentation for a new query limit #19118

Merged · 7 commits · Oct 30, 2024
164 changes: 97 additions & 67 deletions src/content/docs/alerts/admin/rules-limits-alerts.mdx
@@ -12,48 +12,42 @@
- /docs/alerts-applied-intelligence/new-relic-alerts/rules-limits-glossary/rules-limits-new-relic-alerts
- /docs/alerts-applied-intelligence/new-relic-alerts/rules-limits-glossary/rules-limits-alerts/
- /docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/rules-limits-alerts
freshnessValidatedDate: never
freshnessValidatedDate: 2024-10-30
---

Limits and rules pertaining to New Relic <InlinePopover type="alerts"/>:
This page describes limits and rules pertaining to New Relic <InlinePopover type="alerts"/>:

<table>
<thead>
<tr>
<th width={275}>
<DNT>
**Category**

</th>
<th width={275}>
**Limited condition**
</DNT>

</th>

<th>
<DNT>

**Minimum value**
</DNT>

</th>

<th>
<DNT>
**Maximum value**
</DNT>

</th>
</tr>
</thead>

<tbody>
<tr>
<td id="policy">
<DNT>
**Alert policies:**
</DNT>
</td>

<td/>
Alert policies

<td/>
</tr>

<tr>
</td>
<td>
[Alert policy name](/docs/alerts/organize-alerts/create-edit-or-find-alert-policy/)
</td>
@@ -68,46 +62,62 @@
</tr>

<tr>
<td>

</td>
<td>
[Policies per account](/docs/alerts/new-relic-alerts-beta/getting-started/best-practices-alert-policies)
</td>

<td>
n/a
N/A
</td>

<td>
10000 policies
10K policies
</td>
</tr>

<tr>
<td id="condition">
<DNT>
**Alert conditions:**
Alert conditions
</DNT>
</td>
<td>
Matched data points per minute, per account ([learn more](#query-limit))
</td>

<td/>
<td>
N/A
</td>

<td/>
<td>
300M
</td>
</tr>

<tr>
<td>

</td>
<td>
Matched data points per minute, per account ([learn more](#query-limit))
Alert query scan operations per minute, per account ([learn more](#query-scan-limit))
</td>

<td>
N/A
</td>

<td>
300M
2.5B
</td>
</tr>

<tr>
<td>

</td>
<td>
[Condition name](/docs/alerts/new-relic-alerts-beta/configuring-alert-policies/define-alert-conditions)
</td>
@@ -122,6 +132,9 @@
</tr>

<tr>
<td>

</td>
<td>
[Conditions per policy](/docs/alerts/new-relic-alerts-beta/configuring-alert-policies/define-alert-conditions)
</td>
@@ -136,20 +149,26 @@
</tr>

<tr>
<td>

</td>
<td>
[Alert conditions per account](/docs/alerts/create-alert/create-alert-condition/alert-conditions)
</td>

<td>
0 conditions
</td>

<td>
4000 conditions
4K conditions
</td>
</tr>

<tr>
<td>

</td>
<td>
[Targets (product entities)](/docs/new-relic-solutions/get-started/glossary/#alert-target) per condition
</td>
@@ -159,12 +178,15 @@
</td>

<td>
5000 targets for NRQL conditions
1000 targets for non-NRQL conditions
5K targets for NRQL conditions
1K targets for non-NRQL conditions
</td>
</tr>

<tr>
<td>

</td>
<td>
[Thresholds](/docs/alerts/new-relic-alerts-beta/configuring-alert-policies/define-thresholds-trigger-alert) per condition
</td>
@@ -181,29 +203,25 @@
<tr>
<td id="incidents">
<DNT>
**Alert incidents:**
Alert incidents
</DNT>
</td>

<td/>

<td/>
</tr>

<tr>
<td>
[Custom incident descriptions](/docs/alerts/create-alert/condition-details/alert-custom-incident-descriptions)
</td>

n/a
N/A
<td/>

<td>
4000 characters
4K characters
</td>
</tr>

<tr>
<td>

</td>
<td>
[Duration for condition incident](/docs/alerts/new-relic-alerts-beta/configuring-alert-policies/define-thresholds-trigger-alert)
</td>
@@ -218,6 +236,9 @@
</tr>

<tr>
<td>

</td>
<td>
Incidents per issue
</td>
@@ -227,13 +248,16 @@
</td>

<td>
10,000 incidents
10K incidents

Incidents beyond this limit will not be persisted.
</td>
</tr>

<tr>
<td>

</td>
<td>
Incident search API: page size
</td>
@@ -243,7 +267,7 @@
</td>

<td>
1000 pages (25K incidents)
1K pages (25K incidents)

<Callout variant="tip">
Only use the `only-open` parameter to retrieve all open incidents. If you have more than 25K open incidents and need to retrieve them via the REST API, contact support.
@@ -253,31 +277,25 @@

<tr>
<td id="workflows">
<DNT>
**Workflows:**
</DNT>
Workflows
</td>

<td/>

<td/>
</tr>

<tr>
<td>
[Workflows per account](/docs/alerts-applied-intelligence/applied-intelligence/incident-workflows/incident-workflows)
</td>

<td>
n/a
N/A
</td>

<td>
Initial limit 1000
Initial limit: 1K
</td>
</tr>

<tr>
<td>

</td>
<td>
Workflow filter size
</td>
@@ -287,23 +305,14 @@
</td>

<td>
4096 characters per workflow
4,096 characters per workflow
</td>
</tr>

<tr>
<td id="channel">
<DNT>
**Notification channels (Legacy):**
</DNT>
Notification channels (Legacy)
</td>

<td/>

<td/>
</tr>

<tr>
<td>
Channel limitations
</td>
@@ -323,15 +332,15 @@

The alert condition `Matched data points per minute` limit applies to the total rate of matched [data points](/docs/alerts-applied-intelligence/new-relic-alerts/advanced-alerts/understand-technical-concepts/streaming-alerts-key-terms-concepts) for the alerting queries in a New Relic [account](/docs/accounts/accounts-billing/account-structure/new-relic-account-structure).

If this limit is exceeded, you won't be able to create or update conditions for the impacted account until the rate goes below the limit. Existing alert conditions are **not** affected.
If this limit is exceeded, you won't be able to create or update conditions for the impacted account until the rate goes below the limit. Existing alert conditions **aren't** affected.

You can see your matched data points and any limit incidents in the [limits UI](/docs/data-apis/manage-data/view-system-limits).

To see which conditions contribute the most throughput, run a query like this:

```sql
FROM NrAiSignal
SELECT sum(aggregatedDataPointsCount) AS 'alert matched data points'
FACET conditionId
```

@@ -344,3 +353,24 @@
To request a limit increase, talk to your New Relic account representative.

Note that using [sliding windows](/docs/query-your-data/nrql-new-relic-query-language/nrql-query-tutorials/create-smoother-charts-sliding-windows) can significantly increase the number of data points. To reduce the number of data points produced, consider using a longer sliding window aggregation duration.
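
As a rough illustration (a hypothetical chart-style query using `TIMESERIES ... SLIDE BY`; in an alert condition, the window and slide interval are set on the condition itself), a 5-minute window that slides every 1 minute evaluates each data point in roughly five overlapping windows, so it's matched about five times:

```sql
// Hypothetical example: each data point falls into about 5 overlapping
// windows (5-minute window ÷ 1-minute slide). Sliding by the full 5 minutes
// would remove that overlap.
FROM Transaction
SELECT average(duration)
TIMESERIES 5 minutes SLIDE BY 1 minute
```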

## Alert query scan operations per minute [#query-scan-limit]

The alert condition `Alert query scan operations per minute` limit applies to the total rate of query scan operations on ingested events.
A query scan operation is the work performed by the New Relic pipeline to match ingested events to alert queries registered in a New Relic [account](/docs/accounts/accounts-billing/account-structure/new-relic-account-structure).

If this limit is exceeded, you won't be able to create or update conditions for the impacted account until the rate goes below the limit. Existing alert conditions **aren't** affected.

You can see your query scan operations and any limit incidents in the [limits UI](/docs/data-apis/manage-data/view-system-limits).

When matching events to alert queries, every event from the [data type](/docs/nrql/get-started/introduction-nrql-new-relics-query-language/#what-you-can-query) that the query references must be examined. Here are a few common ways to reduce the number of events in a given data type, which in turn decreases alert query scan operations:

* When alerting on logs data, use [log partitions](/docs/tutorial-manage-large-log-volume/organize-large-logs/) to limit which logs are scanned for alert queries (see the sketch after this list).
* When alerting on custom events, break up larger custom event types.
* Use custom events instead of alerting on transaction events.
* [Create metrics](/docs/data-apis/convert-to-metrics/analyze-monitor-data-trends-metrics/) to aggregate data.
* Use [metric timeslice queries](/docs/data-apis/understand-data/metric-data/query-apm-metric-timeslice-data-nrql/) when possible instead of alerting on transaction events.
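
For example, here's a minimal sketch of the first tip, assuming a hypothetical log partition named `Log_alerts` that receives only the logs you alert on (the partition name and `level` attribute are illustrative, not part of the original doc):

```sql
// Scans every Log event in the account
FROM Log SELECT count(*) WHERE level = 'error'

// Scans only events routed to the hypothetical Log_alerts partition
FROM Log_alerts SELECT count(*) WHERE level = 'error'
```

Because the alert query now references only the partition's data type, far fewer events are examined each minute.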

Beyond these tips, cleaning up unused or unneeded alert queries (alert conditions) also decreases the number of query scan operations.

To request a limit increase, talk to your New Relic account representative.