Nr 338923 confluent cloud integration #19474

Draft · wants to merge 4 commits into base: develop
270 additions & 0 deletions src/content/docs/infrastructure/other-infrastructure-integrations/confluent-cloud-integration.mdx

@@ -0,0 +1,270 @@
---
title: Confluent Cloud integration
tags:
- Integrations
- Confluent Cloud integrations
- Apache Kafka

metaDescription: 'New Relic''s Confluent Cloud integration for Kafka: what data it reports, and how to enable it.'
freshnessValidatedDate: never
---

New Relic offers an integration for collecting your [Confluent Cloud managed streaming for Apache Kafka](https://www.confluent.io/resources/white-paper/apache-kafka-confluent-enterprise-reference-architecture/) data. This document explains how to activate the integration and describes the data it reports.

## Prerequisites

* A New Relic account
* An active Confluent Cloud account
* A Confluent Cloud API key and secret
* `MetricsViewer` access on the Confluent Cloud account (the sketch after this list shows one way to verify this access)
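
Before wiring credentials into New Relic, you can confirm they work with a minimal sketch like the following (Python with the `requests` library). It calls the Confluent Cloud Metrics API's metric descriptor route; the key and secret values are placeholders.

```python
import requests

CONFLUENT_API_KEY = "YOUR_CLOUD_API_KEY"        # placeholder
CONFLUENT_API_SECRET = "YOUR_CLOUD_API_SECRET"  # placeholder

# The Metrics API uses HTTP basic auth: the API key is the user name
# and the secret is the password.
resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/descriptors/metrics",
    auth=(CONFLUENT_API_KEY, CONFLUENT_API_SECRET),
    timeout=10,
)

# 200 means the credentials can read metrics metadata; 401 or 403 usually
# points at a bad key or a missing `MetricsViewer` role binding.
print(resp.status_code)
```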

## Activate integration [#activate]

To enable this integration, go to <DNT>**Integrations & Agents**</DNT>, select <DNT>**Confluent Cloud -> API Polling**</DNT>, and follow the instructions.

<Callout variant="important">
If you have IP filtering set up, add the following IP ranges to your filter.
* `162.247.240.0/22`
* `152.38.128.0/19`

For more information about New Relic IP ranges for cloud integrations, refer to [this document](/docs/new-relic-solutions/get-started/networks/#webhooks).
For instructions on managing Confluent Cloud IP filters, refer to [this document](https://docs.confluent.io/cloud/current/security/access-control/ip-filtering/manage-ip-filters.html).
</Callout>
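
If you're unsure whether a given address already falls inside these ranges, a quick standard-library check like the sketch below can confirm it; the sample addresses are only illustrative.

```python
import ipaddress

# The New Relic ranges listed in the callout above.
NEW_RELIC_RANGES = [
    ipaddress.ip_network("162.247.240.0/22"),
    ipaddress.ip_network("152.38.128.0/19"),
]

def is_new_relic(addr: str) -> bool:
    """Return True if addr falls inside one of the published New Relic ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in NEW_RELIC_RANGES)

print(is_new_relic("162.247.241.10"))  # True: inside 162.247.240.0/22
print(is_new_relic("8.8.8.8"))         # False
```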

## Configuration and polling [#polling]

To change the polling frequency and filter data, use [configuration options](/docs/integrations/new-relic-integrations/getting-started/configure-polling-frequency-data-collection-cloud-integrations).

Default [polling](/docs/infrastructure/amazon-integrations/aws-integrations-list/aws-polling-intervals-infrastructure-integrations) information for the Confluent Cloud integration (the sketch after this list illustrates how the two intervals interact):

* New Relic polling interval: 5 minutes
* Confluent Cloud data interval: 1 minute
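
Because Confluent emits 1-minute data points and New Relic polls every 5 minutes, each poll window can cover up to five data points. Here's a hedged sketch of a query over one polling window, shaped after Confluent's Metrics API v2 `query` route; the cluster ID and credentials are placeholders.

```python
import datetime
import requests

now = datetime.datetime.now(datetime.timezone.utc).replace(second=0, microsecond=0)
window_start = now - datetime.timedelta(minutes=5)  # one New Relic polling interval

query = {
    "aggregations": [{"metric": "io.confluent.kafka.server/received_bytes"}],
    "filter": {"field": "resource.kafka.id", "op": "EQ", "value": "lkc-XXXXX"},  # placeholder cluster ID
    "granularity": "PT1M",  # Confluent Cloud's 1-minute data interval
    "intervals": [f"{window_start.isoformat()}/{now.isoformat()}"],
}

resp = requests.post(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query",
    json=query,
    auth=("YOUR_CLOUD_API_KEY", "YOUR_CLOUD_API_SECRET"),  # placeholders
    timeout=10,
)

# Expect up to five 1-minute points per 5-minute poll window.
print(resp.json())
```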

## View and use data [#find-data]

To view your integration data, go to <DNT>**[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Infrastructure**</DNT> and select the Confluent Cloud integration.

You can [query and explore your data](/docs/using-new-relic/data/understand-data/query-new-relic-data) using the following [event type](/docs/data-apis/understand-data/new-relic-data-types/#event-data):

<table>
<thead>
<tr>
<th>
Entity
</th>

<th>
Data type
</th>

<th>
Provider
</th>
</tr>
</thead>

<tbody>
<tr>
<td>
Cluster
</td>

<td>
`Metric`
</td>

<td>
`Confluent`
</td>
</tr>
</tbody>
</table>

For more on how to use your data, see [Understand and use integration data](/docs/infrastructure/integrations/find-use-infrastructure-integration-data).
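
As a starting point, the sketch below runs a NRQL query against the `Metric` data type through New Relic's NerdGraph API. The account ID and user key are placeholders; the `provider` value follows the table above, and the query simply lists reported metric names rather than assuming any particular one.

```python
import requests

NERDGRAPH_URL = "https://api.newrelic.com/graphql"
API_KEY = "NRAK-..."  # placeholder user key
ACCOUNT_ID = 1234567  # placeholder account ID

nrql = "FROM Metric SELECT uniques(metricName) WHERE provider = 'Confluent' SINCE 1 hour ago"

payload = {
    "query": """
      query($accountId: Int!, $nrql: Nrql!) {
        actor { account(id: $accountId) { nrql(query: $nrql) { results } } }
      }
    """,
    "variables": {"accountId": ACCOUNT_ID, "nrql": nrql},
}

resp = requests.post(NERDGRAPH_URL, json=payload, headers={"API-Key": API_KEY}, timeout=10)
print(resp.json()["data"]["actor"]["account"]["nrql"]["results"])
```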

## Metric data [#metrics]

This integration records Confluent Cloud managed Kafka data for cluster, partition, and topic entities.


<table>
<thead>
<tr>
<th style={{ width: "275px" }}>
Metric
</th>

<th style={{ width: "150px" }}>
Unit
</th>

<th>
Description
</th>
</tr>
</thead>

<tbody>
<tr>
<td>
`cluster_load_percent`
</td>

<td>
Percent
</td>

<td>
A measure of the utilization of the cluster. The value is between 0.0 and 1.0. Only dedicated tier clusters report this metric.

</td>
</tr>

<tr>
<td>
`hot_partition_ingress`
</td>

<td>
Percent
</td>

<td>
An indicator of the presence of a hot partition caused by ingress throughput. The value is 1.0 when a hot partition is detected, and empty when there is no hot partition detected.
</td>
</tr>

<tr>
<td>
`hot_partition_egress`
</td>

<td>
Percent
</td>

<td>
An indicator of the presence of a hot partition caused by egress throughput. The value is 1.0 when a hot partition is detected, and empty when there is no hot partition detected.
</td>
</tr>

<tr>
<td>
`request_bytes`
</td>

<td>
BytesPerSecond
</td>

<td>
The delta count of total request bytes from the specified request types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
</td>
</tr>

<tr>
<td>
`response_bytes`
</td>

<td>
BytesPerSecond
</td>

<td>
The delta count of total response bytes from the specified response types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
</td>
</tr>

<tr>
<td>
`received_bytes`
</td>

<td>
BytesPerSecond
</td>

<td>
The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
</td>
</tr>

<tr>
<td>
`sent_bytes`
</td>

<td>
BytesPerSecond
</td>

<td>
The delta count of bytes of the customer's data sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
</td>
</tr>

<tr>
<td>
`received_records`
</td>

<td>
Count
</td>

<td>
The delta count of records received. Each sample is the number of records received since the previous data point. The count is sampled every 60 seconds.
</td>
</tr>

<tr>
<td>
`sent_records`
</td>

<td>
Count
</td>

<td>
The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds.
</td>
</tr>

<tr>
<td>
`partition_count`
</td>

<td>
Count
</td>

<td>
The number of partitions.
</td>
</tr>

<tr>
<td>
`consumer_lag_offsets`
</td>

<td>
Count
</td>

<td>
The lag between a group member's committed offset and the partition's high watermark.
</td>
</tr>
</tbody>
</table>
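
Because the throughput metrics above arrive as delta counts sampled every 60 seconds, a per-second rate is just the delta divided by the sample period. A short illustrative sketch:

```python
SAMPLE_PERIOD_SECONDS = 60  # Confluent samples these counts every 60 seconds

def per_second_rate(delta_count: float) -> float:
    """Convert a 60-second delta sample (for example `sent_bytes`) into a per-second rate."""
    return delta_count / SAMPLE_PERIOD_SECONDS

# Example: a `sent_bytes` sample of 12,000,000 bytes over one 60-second window
print(per_second_rate(12_000_000))  # 200000.0 bytes per second
```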

2 changes: 2 additions & 0 deletions src/nav/infrastructure.yml
@@ -993,6 +993,8 @@ pages:
path: /docs/infrastructure/other-infrastructure-integrations/statsd-monitoring-integration
- title: Stripe integration
path: /docs/infrastructure/other-infrastructure-integrations/stripe-integration
- title: Confluent Cloud integration
path: /docs/infrastructure/other-infrastructure-integrations/confluent-cloud-integration
- title: Troubleshoot infrastructure monitoring
pages:
- title: Troubleshoot infrastructure agent