Commit

mdx syntax fixes
colleenmcginnis committed Oct 31, 2024
1 parent 66a73ef commit cd8c2b8
Showing 32 changed files with 116 additions and 77 deletions.
2 changes: 1 addition & 1 deletion docs/en/serverless/aiops/aiops-analyze-spikes.mdx
@@ -11,7 +11,7 @@ tags: [ 'serverless', 'observability', 'how-to' ]

{/* <DocCallOut template="technical preview" /> */}

-Elastic ((observability)) provides built-in log rate analysis capabilities,
+((observability)) provides built-in log rate analysis capabilities,
based on advanced statistical methods,
to help you find and investigate the causes of unusual spikes or drops in log rates.

10 changes: 6 additions & 4 deletions docs/en/serverless/aiops/aiops-detect-anomalies.mdx
@@ -11,7 +11,7 @@ import Roles from '../partials/roles.mdx'

<Roles role="Editor" goal="create, run, and view ((anomaly-job))s" />

-The anomaly detection feature in Elastic ((observability)) automatically models the normal behavior of your time series data — learning trends,
+The anomaly detection feature in ((observability)) automatically models the normal behavior of your time series data — learning trends,
periodicity, and more — in real time to identify anomalies, streamline root cause analysis, and reduce false positives.

To set up anomaly detection, you create and run anomaly detection jobs.
@@ -47,7 +47,7 @@ To learn more about anomaly detection, refer to the [((ml))](((ml-docs))/ml-ad-o

<div id="create-anomaly-detection-job"></div>

-# Create and run an anomaly detection job
+## Create and run an anomaly detection job

1. In your ((observability)) project, go to **AIOps** → **Anomaly detection**.
1. Click **Create anomaly detection job** (or **Create job** if other jobs exist).
@@ -112,10 +112,10 @@ When the job runs, the ((ml)) features analyze the input stream of data, model i
When an event occurs outside of the baselines of normal behavior, that event is identified as an anomaly.
1. After the job is started, click **View results**.

-# View the results
+## View the results

After the anomaly detection job has processed some data,
-you can view the results in Elastic ((observability)).
+you can view the results in ((observability)).

<DocCallOut title="Tip">
Depending on the capacity of your machine,
@@ -227,7 +227,9 @@ The list includes maximum anomaly scores, which in this case are aggregated for
There is also a total sum of the anomaly scores for each influencer.
Use this list to help you narrow down the contributing factors and focus on the most anomalous entities.
1. Under **Anomaly timeline**, click a section in the swim lanes to obtain more information about the anomalies in that time period.

![Anomaly Explorer showing swim lanes with anomaly selected ](../images/anomaly-explorer.png)

You can see exact times when anomalies occurred.
If there are multiple detectors or metrics in the job, you can see which caught the anomaly.
You can also switch to viewing this time series in the **Single Metric Viewer** by selecting **View series** in the **Actions** menu.
4 changes: 2 additions & 2 deletions docs/en/serverless/aiops/aiops-detect-change-points.mdx
@@ -9,12 +9,12 @@ tags: [ 'serverless', 'observability', 'how-to' ]

{/* <DocCallOut template="technical preview" /> */}

-The change point detection feature in Elastic ((observability)) detects distribution changes,
+The change point detection feature in ((observability)) detects distribution changes,
trend changes, and other statistically significant change points in time series data.
Unlike anomaly detection, change point detection does not require you to configure a job or generate a model.
Instead you select a metric and immediately see a visual representation that splits the time series into two parts, before and after the change point.

-Elastic ((observability)) uses a [change point aggregation](((ref))/search-aggregations-change-point-aggregation.html)
+((observability)) uses a [change point aggregation](((ref))/search-aggregations-change-point-aggregation.html)
to detect change points. This aggregation can detect change points when:

* a significant dip or spike occurs
2 changes: 1 addition & 1 deletion docs/en/serverless/aiops/aiops.mdx
@@ -7,7 +7,7 @@ tags: [ 'serverless', 'observability', 'overview' ]

<p><DocBadge template="technical preview" /></p>

-The AIOps capabilities available in Elastic ((observability)) enable you to consume and process large observability data sets at scale, reducing the time and effort required to detect, understand, investigate, and resolve incidents.
+The AIOps capabilities available in ((observability)) enable you to consume and process large observability data sets at scale, reducing the time and effort required to detect, understand, investigate, and resolve incidents.
Built on predictive analytics and ((ml)), our AIOps capabilities require no prior experience with ((ml)).
DevOps engineers, SREs, and security analysts can get started right away using these AIOps features with little or no advanced configuration:

@@ -50,8 +50,7 @@ For example:
If you use [KQL](((kibana-ref))/kuery-query.html) or [Lucene](((kibana-ref))/lucene-query.html), you must specify a data view then define a text-based query.
For example, `http.request.referrer: "https://example.com"`.

-<DocBadge template="technical preview" />
-If you use [ES|QL](((ref))/esql.html), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|).
+<DocBadge template="technical preview" /> If you use [ES|QL](((ref))/esql.html), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|).
For example:

```sh
@@ -66,6 +65,7 @@

When
: Specify how to calculate the value that is compared to the threshold. The value is calculated by aggregating a numeric field within the time window. The aggregation options are: `count`, `average`, `sum`, `min`, and `max`. When using `count` the document count is used and an aggregation field is not necessary.

Over or Grouped Over
: Specify whether the aggregation is applied over all documents or split into groups using up to four grouping fields.
If you choose to use grouping, it's a [terms](((ref))/search-aggregations-bucket-terms-aggregation.html) or [multi terms aggregation](((ref))/search-aggregations-bucket-multi-terms-aggregation.html); an alert will be created for each unique set of values when it meets the condition.
@@ -176,7 +176,7 @@ You can also specify [variables common to all rules](((kibana-ref))/rule-action-
For example, the message in an email connector action might contain:
-```
+```txt
Elasticsearch query rule '{{rule.name}}' is active:
{{#context.hits}}
@@ -191,7 +191,7 @@ You can also specify [variables common to all rules](((kibana-ref))/rule-action-
For example:

{/* NOTCONSOLE */}
-```
+```txt
{{#context.hits}}
timestamp: {{_source.@timestamp}}
day of the week: {{fields.day_of_week}} [^1]
@@ -203,7 +203,7 @@ You can also specify [variables common to all rules](((kibana-ref))/rule-action-
the [Mustache](https://mustache.github.io/) template array syntax is used to iterate over these values in your actions.
For example:

-```
+```txt
{{#context.hits}}
Labels:
{{#fields.labels}}
@@ -22,7 +22,7 @@ These steps show how to use the **Alerts** UI.
You can also create a latency threshold rule directly from any page within **Applications**. Click the **Alerts and rules** button, and select **Create threshold rule** and then **Latency**. When you create a rule this way, the **Name** and **Tags** fields will be prepopulated but you can still change these.
</DocCallOut>

-To create your latency threshold rule::
+To create your latency threshold rule:

1. In your ((observability)) project, go to **Alerts**.
1. Select **Manage Rules** from the **Alerts** page, and select **Create rule**.
64 changes: 48 additions & 16 deletions docs/en/serverless/alerting/synthetic-monitor-status-alert.mdx
@@ -86,35 +86,67 @@ You can also specify [variables common to all rules](((kibana-ref))/rule-action-v

<DocDefList>
<DocDefTerm>`context.checkedAt`</DocDefTerm>
-<DocDefDescription>Timestamp of the monitor run.</DocDefDescription>
+<DocDefDescription>
+Timestamp of the monitor run.
+</DocDefDescription>
<DocDefTerm>`context.hostName`</DocDefTerm>
-<DocDefDescription>Hostname of the location from which the check is performed.</DocDefDescription>
+<DocDefDescription>
+Hostname of the location from which the check is performed.
+</DocDefDescription>
<DocDefTerm>`context.lastErrorMessage`</DocDefTerm>
-<DocDefDescription>Monitor last error message.</DocDefDescription>
+<DocDefDescription>
+Monitor last error message.
+</DocDefDescription>
<DocDefTerm>`context.locationId`</DocDefTerm>
-<DocDefDescription>Location id from which the check is performed.</DocDefDescription>
+<DocDefDescription>
+Location id from which the check is performed.
+</DocDefDescription>
<DocDefTerm>`context.locationName`</DocDefTerm>
-<DocDefDescription>Location name from which the check is performed.</DocDefDescription>
+<DocDefDescription>
+Location name from which the check is performed.
+</DocDefDescription>
<DocDefTerm>`context.locationNames`</DocDefTerm>
-<DocDefDescription>Location names from which the checks are performed.</DocDefDescription>
+<DocDefDescription>
+Location names from which the checks are performed.
+</DocDefDescription>
<DocDefTerm>`context.message`</DocDefTerm>
-<DocDefDescription>A generated message summarizing the status of monitors currently down.</DocDefDescription>
+<DocDefDescription>
+A generated message summarizing the status of monitors currently down.
+</DocDefDescription>
<DocDefTerm>`context.monitorId`</DocDefTerm>
-<DocDefDescription>ID of the monitor.</DocDefDescription>
+<DocDefDescription>
+ID of the monitor.
+</DocDefDescription>
<DocDefTerm>`context.monitorName`</DocDefTerm>
-<DocDefDescription>Name of the monitor.</DocDefDescription>
+<DocDefDescription>
+Name of the monitor.
+</DocDefDescription>
<DocDefTerm>`context.monitorTags`</DocDefTerm>
-<DocDefDescription>Tags associated with the monitor.</DocDefDescription>
+<DocDefDescription>
+Tags associated with the monitor.
+</DocDefDescription>
<DocDefTerm>`context.monitorType`</DocDefTerm>
-<DocDefDescription>Type (for example, HTTP/TCP) of the monitor.</DocDefDescription>
+<DocDefDescription>
+Type (for example, HTTP/TCP) of the monitor.
+</DocDefDescription>
<DocDefTerm>`context.monitorUrl`</DocDefTerm>
-<DocDefDescription>URL of the monitor.</DocDefDescription>
+<DocDefDescription>
+URL of the monitor.
+</DocDefDescription>
<DocDefTerm>`context.reason`</DocDefTerm>
-<DocDefDescription>A concise description of the reason for the alert.</DocDefDescription>
+<DocDefDescription>
+A concise description of the reason for the alert.
+</DocDefDescription>
<DocDefTerm>`context.recoveryReason`</DocDefTerm>
-<DocDefDescription>A concise description of the reason for the recovery.</DocDefDescription>
+<DocDefDescription>
+A concise description of the reason for the recovery.
+</DocDefDescription>
<DocDefTerm>`context.status`</DocDefTerm>
-<DocDefDescription>Monitor status (for example, "down").</DocDefDescription>
+<DocDefDescription>
+Monitor status (for example, "down").
+</DocDefDescription>
<DocDefTerm>`context.viewInAppUrl`</DocDefTerm>
-<DocDefDescription>Open alert details and context in Synthetics app.</DocDefDescription>
+<DocDefDescription>
+Open alert details and context in Synthetics app.
+</DocDefDescription>
</DocDefList>
2 changes: 1 addition & 1 deletion docs/en/serverless/alerting/view-alerts.mdx
@@ -87,7 +87,7 @@ Use the toolbar buttons in the upper-left of the alerts table to customize the c

For example, click **Fields** and choose the `Maintenance Windows` field.
If an alert was affected by a maintenance window, its identifier appears in the new column.
-For more information about their impact on alert notifications, refer to <DocLink slug="/serverless/maintenance-windows" />.
+For more information about their impact on alert notifications, refer to <DocLink slug="/serverless/maintenance-windows">Maintenance windows</DocLink>.

{/* ![Alerts table with toolbar buttons highlighted](images/view-observability-alerts/-observability-alert-table-toolbar-buttons.png) */}

@@ -21,7 +21,7 @@ be sent directly to Elastic.

## Send data from an upstream OpenTelemetry Collector

-Connect your OpenTelemetry Collector instances to Elastic ((observability)) using the OTLP exporter:
+Connect your OpenTelemetry Collector instances to ((observability)) using the OTLP exporter:

```yaml
receivers: [^1]
@@ -64,7 +64,7 @@ service:
[OTLP receiver](https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver), that forward data emitted by APM agents, or the [host metrics receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver).
[^2]: We recommend using the [Batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md) and the [memory limiter processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md). For more information, see [recommended processors](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#recommended-processors).
[^3]: The [logging exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/loggingexporter) is helpful for troubleshooting and supports various logging levels, like `debug`, `info`, `warn`, and `error`.
-[^4]: Elastic ((observability)) endpoint configuration.
+[^4]: ((observability)) endpoint configuration.
Elastic supports a ProtoBuf payload via both the OTLP protocol over gRPC transport [(OTLP/gRPC)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc)
and the OTLP protocol over HTTP transport [(OTLP/HTTP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp).
To learn more about these exporters, see the OpenTelemetry Collector documentation:
4 changes: 2 additions & 2 deletions docs/en/serverless/apm-agents/apm-agents-opentelemetry.mdx
@@ -82,11 +82,11 @@ You can set up an [OpenTelemetry Collector](https://opentelemetry.io/docs/collec
</DocCallOut>

{/* Why you _would_ choose this approach */}
-This approach works well when you need to instrument a technology that Elastic doesn't provide a solution for. For example, if you want to instrument C or C++ you could use the [OpenTelemetry C++ client](https://github.com/open-telemetry/opentelemetry-cpp).
+This approach works well when you need to instrument a technology that Elastic doesn't provide a solution for. For example, if you want to instrument C or C((plus))((plus)) you could use the [OpenTelemetry C((plus))((plus)) client](https://github.com/open-telemetry/opentelemetry-cpp).
{/* Other languages include erlang, lua, perl. */}

{/* Why you would _not_ choose this approach */}
-However, there are some limitations when using collectors and language SDKs built and maintainedby OpenTelemetry, including:
+However, there are some limitations when using collectors and language SDKs built and maintained by OpenTelemetry, including:

* Elastic can't provide implementation support on how to use upstream OpenTelemetry tools.
* You won't have access to Elastic enterprise APM features.
@@ -12,7 +12,7 @@ import Roles from '../partials/roles.mdx'
<Roles role="Admin" goal="onboard system metrics data" />

In this guide you'll learn how to onboard system metrics data from a machine or server,
-then observe the data in Elastic ((observability)).
+then observe the data in ((observability)).

To onboard system metrics data:

4 changes: 2 additions & 2 deletions docs/en/serverless/infra-monitoring/host-metrics.mdx
@@ -398,15 +398,15 @@ However, any alerts that use the old definition will refer to the metric as "leg
</DocCell>
</DocRow>
<DocRow>
-<DocCell>**Network Inbound (RX) (legacy)** </DocCell>
+<DocCell>**Network Inbound (RX) (legacy)**</DocCell>
<DocCell>
Number of bytes that have been received per second on the public interfaces of the hosts.

**Field Calculation**: `average(host.network.ingress.bytes) * 8 / (max(metricset.period, kql='host.network.ingress.bytes: *') / 1000)`
</DocCell>
</DocRow>
<DocRow>
-<DocCell>**Network Outbound (TX) (legacy)** </DocCell>
+<DocCell>**Network Outbound (TX) (legacy)**</DocCell>
<DocCell>
Number of bytes that have been sent per second on the public interfaces of the hosts.

2 changes: 1 addition & 1 deletion docs/en/serverless/infra-monitoring/infra-monitoring.mdx
@@ -9,7 +9,7 @@ tags: [ 'serverless', 'observability', 'overview' ]

<div id="analyze-metrics"></div>

-Elastic ((observability)) allows you to visualize infrastructure metrics to help diagnose problematic spikes,
+((observability)) allows you to visualize infrastructure metrics to help diagnose problematic spikes,
identify high resource utilization, automatically discover and track pods,
and unify your metrics with logs and APM data.

24 changes: 13 additions & 11 deletions docs/en/serverless/inventory.mdx
@@ -9,7 +9,7 @@ import Roles from './partials/roles.mdx'

<p><DocBadge template="technical preview" /></p>

-Inventory provides a single place to observe the status of your entire ecosystem of hosts, containers, and services at a glance, even just from logs. From there, you can monitor and understand the health of your entities, check what needs attention, and start your investigations. 
+Inventory provides a single place to observe the status of your entire ecosystem of hosts, containers, and services at a glance, even just from logs. From there, you can monitor and understand the health of your entities, check what needs attention, and start your investigations.

<DocCallOut title="Note">
The new Inventory requires the Elastic Entity Model (EEM). To learn more, refer to <DocLink slug="/serverless/observability/elastic-entity-model" />.
@@ -28,7 +28,7 @@ Where `host.name` is set in `metrics-*`, `logs-*`, `filebeat-*`, and `metricbeat
**Services**

Where `service.name` is set in `filebeat*`, `logs-*`, `metrics-apm.service_transaction.1m*`, and `metrics-apm.service_summary.1m*`

**Containers**

Where `container.id` is set in `metrics-*`, `logs-*`, `filebeat-*`, and `metricbeat-*`
@@ -47,9 +47,9 @@ Inventory allows you to:
When you open the Inventory for the first time, you'll be asked to enable the EEM. Once enabled, the Inventory will be accessible to anyone with the appropriate privileges.

<DocCallOut title="Note">
-The Inventory feature can be completely disabled using the `observability:entityCentricExperience` flag in **Stack Management**. 
+The Inventory feature can be completely disabled using the `observability:entityCentricExperience` flag in **Stack Management**.
</DocCallOut>


1. In the search bar, search for your entities by name or type, for example `entity.type:service`.

@@ -77,21 +77,23 @@ Entities are added to the Inventory through one of the following approaches: **A
### Add data
To add entities, select **Add data** from the left-hand navigation and choose one of the following onboarding journeys:

-<DocDefTerm>- Auto-detect logs and metrics</DocDefTerm>
+<DocDefList>
+<DocDefTerm>Auto-detect logs and metrics</DocDefTerm>
<DocDefDescription>
-Detects hosts (with metrics and logs) 
+Detects hosts (with metrics and logs)
</DocDefDescription>

-<DocDefTerm>- Kubernetes</DocDefTerm>
+<DocDefTerm>Kubernetes</DocDefTerm>
<DocDefDescription>
-Detects hosts, containers, and services 
+Detects hosts, containers, and services
</DocDefDescription>

-<DocDefTerm>- Elastic APM / OpenTelemetry / Synthetic Monitor</DocDefTerm>
+<DocDefTerm>Elastic APM / OpenTelemetry / Synthetic Monitor</DocDefTerm>
<DocDefDescription>
-Detects services
-</DocDefDescription>
+Detects services
+</DocDefDescription>
+</DocDefList>

### Associate existing service logs

2 changes: 1 addition & 1 deletion docs/en/serverless/logging/add-logs-service-name.mdx
@@ -46,7 +46,7 @@ Follow these steps to update your mapping:
1. Under **Field path**, select the existing field you want to map to the service name.
1. Select **Add field**.

-For more ways to add a field to your mapping, refer to [add a field to an existing mapping](((ref))/explicit-mapping.html#add-field-mapping.html).
+For more ways to add a field to your mapping, refer to [add a field to an existing mapping](((ref))/explicit-mapping.html#add-field-mapping).

## Additional ways to process data
