diff --git a/docs/en/observability/images/action-dropdown.png b/docs/en/observability/images/action-dropdown.png
deleted file mode 100644
index 028d9587fa..0000000000
Binary files a/docs/en/observability/images/action-dropdown.png and /dev/null differ
diff --git a/docs/en/observability/images/app-link-icon.png b/docs/en/observability/images/app-link-icon.png
deleted file mode 100644
index 39996678ca..0000000000
Binary files a/docs/en/observability/images/app-link-icon.png and /dev/null differ
diff --git a/docs/en/observability/images/icons/boxesHorizontal.svg b/docs/en/observability/images/icons/boxesHorizontal.svg
new file mode 100644
index 0000000000..d845a6b9db
--- /dev/null
+++ b/docs/en/observability/images/icons/boxesHorizontal.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/en/observability/images/icons/boxesVertical.svg b/docs/en/observability/images/icons/boxesVertical.svg
new file mode 100644
index 0000000000..aed10b0d8e
--- /dev/null
+++ b/docs/en/observability/images/icons/boxesVertical.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/en/observability/images/icons/eye.svg b/docs/en/observability/images/icons/eye.svg
new file mode 100644
index 0000000000..0e576f21d5
--- /dev/null
+++ b/docs/en/observability/images/icons/eye.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/en/observability/images/slo-burn-rate-breach.png b/docs/en/observability/images/slo-burn-rate-breach.png
new file mode 100644
index 0000000000..cdedd2d722
Binary files /dev/null and b/docs/en/observability/images/slo-burn-rate-breach.png differ
diff --git a/docs/en/observability/index.asciidoc b/docs/en/observability/index.asciidoc
index dd913ad90e..dc2be31769 100644
--- a/docs/en/observability/index.asciidoc
+++ b/docs/en/observability/index.asciidoc
@@ -171,6 +171,7 @@ include::profiling-self-managed-troubleshooting.asciidoc[leveloffset=+3]
include::create-alerts.asciidoc[leveloffset=+1]
include::aggregation-options.asciidoc[leveloffset=+2]
include::view-observability-alerts.asciidoc[leveloffset=+2]
+include::triage-slo-burn-rate-breaches.asciidoc[leveloffset=+3]
//SLOs
include::slo-overview.asciidoc[leveloffset=+1]
diff --git a/docs/en/observability/slo-burn-rate-alert.asciidoc b/docs/en/observability/slo-burn-rate-alert.asciidoc
index 3bda9e2c5c..c553bc6030 100644
--- a/docs/en/observability/slo-burn-rate-alert.asciidoc
+++ b/docs/en/observability/slo-burn-rate-alert.asciidoc
@@ -79,4 +79,13 @@ You an also specify {kibana-ref}/rule-action-variables.html[variables common to
To receive a notification when the alert recovers, select *Run when Recovered*. Use the default notification message or customize it. You can add more context to the message by clicking the icon above the message text box and selecting from a list of available variables.
[role="screenshot"]
-image::images/duration-anomaly-alert-recovery.png[Default recovery message for Uptime duration anomaly rules with open "Add variable" popup listing available action variables,width=600]
\ No newline at end of file
+image::images/duration-anomaly-alert-recovery.png[Default recovery message for Uptime duration anomaly rules with open "Add variable" popup listing available action variables,width=600]
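+
+For example, a customized recovery notification might look like the following minimal sketch.
+The `rule.name` and `context.alertDetailsUrl` variables used here are assumptions; check the list of available variables in the message text box to confirm which ones your rule offers.
+
+----
+The SLO burn rate alert for rule "{{rule.name}}" has recovered.
+Review the alert details: {{context.alertDetailsUrl}}
+----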
+
+[discrete]
+[[slo-creation-next-steps]]
+== Next steps
+
+Learn how to view alerts and triage SLO burn rate breaches:
+
+* <<view-observability-alerts>>
+* <<triage-slo-burn-rate-breaches>>
diff --git a/docs/en/observability/slo-overview.asciidoc b/docs/en/observability/slo-overview.asciidoc
index 0ac86dcdd2..a9a9832e9f 100644
--- a/docs/en/observability/slo-overview.asciidoc
+++ b/docs/en/observability/slo-overview.asciidoc
@@ -117,7 +117,7 @@ Once an SLO is reset, it will start to regenerate SLIs and summary data.
[%collapsible]
.Remove legacy summary transforms
====
-After migrating to 8.12 or later, you might have some legacy SLO summary transforms running.
+After migrating to 8.12 or later, you might have some legacy SLO summary transforms running.
You can safely delete the following legacy summary transforms:
[source,sh]
@@ -153,8 +153,11 @@ Do not delete any new summary transforms used by your migrated SLOs.
[discrete]
[[slo-overview-next-steps]]
== Next steps
-To get started using SLOs to measure your service performance, see the following pages:
+
+Get started using SLOs to measure your service performance:
+
* <>
* <>
* <>
+* <<view-observability-alerts>>
+* <<triage-slo-burn-rate-breaches>>
diff --git a/docs/en/observability/triage-slo-burn-rate-breaches.asciidoc b/docs/en/observability/triage-slo-burn-rate-breaches.asciidoc
new file mode 100644
index 0000000000..420b0204c6
--- /dev/null
+++ b/docs/en/observability/triage-slo-burn-rate-breaches.asciidoc
@@ -0,0 +1,39 @@
+[[triage-slo-burn-rate-breaches]]
+= Triage SLO burn rate breaches
+++++
+SLO burn rate breaches
+++++
+
+SLO burn rate breaches occur when the percentage of bad events over a specified time period exceeds the threshold set in your <<slo-burn-rate-alert,SLO burn rate rule>>.
+When this happens, you are at risk of exhausting your error budget and violating your SLO.
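+
+As a rough, hypothetical illustration of the arithmetic (the event counts and the 99% target below are made-up numbers, not values from your SLO): a 99% SLO leaves a 1% error budget, so if 150 of 10,000 events in the rule's lookback window are bad, the error rate is 1.5% and the burn rate is 1.5, which would breach a rule threshold of 1.
+
+[source,sh]
+----
+# Hypothetical example: 150 bad events out of 10,000 against a 99% SLO target.
+# Burn rate = observed error rate / error budget.
+awk 'BEGIN { bad = 150; total = 10000; target = 99;
+  printf "burn rate = %.2f\n", (bad / total * 100) / (100 - target) }'
+# Prints: burn rate = 1.50
+----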
+
+To triage issues quickly, go to the alert details page:
+
+. Go to **{observability}** -> **Alerts** (or open the SLO and click **Alerts**).
+. From the Alerts table, click the image:images/icons/boxesHorizontal.svg[More actions] icon next to the alert and select **View alert details**.
+
+The alert details page shows information about the alert, including when it was triggered,
+its duration, the source SLO, and the rule that triggered it.
+You can follow the links to navigate to the source SLO or rule definition.
+
+Explore charts on the page to learn more about the SLO breach:
+
+[role="screenshot"]
+image::images/slo-burn-rate-breach.png[Alert details for SLO burn rate breach]
+
+* The first chart shows the burn rate during the time range when the alert was active.
+The line indicates how close the SLO came to breaching the threshold.
+* The next chart shows the alert history over the last 30 days,
+including the number of alerts that were triggered and the average time it took to recover after a breach.
+* Both timelines are annotated to show when the threshold was breached.
+You can hover over an alert icon to see the timestamp of the alert.
+
+The number, duration, and frequency of these breaches over time give you an indication of how severely the service is degrading, so that you can focus on high-severity issues first.
+
+NOTE: The contents of the alert details page may vary depending on the type of SLI that's defined in the SLO.
+
+After investigating the alert, you may want to:
+
+* Click **Snooze the rule** to snooze notifications for a specific time period or indefinitely.
+* Click the image:images/icons/boxesVertical.svg[Actions] icon and select **Add to case** to add the alert to a new or existing case. To learn more, refer to <<create-cases,Cases>>.
+* Click the image:images/icons/boxesVertical.svg[Actions] icon and select **Mark as untracked**.
diff --git a/docs/en/observability/view-observability-alerts.asciidoc b/docs/en/observability/view-observability-alerts.asciidoc
index 9f21c463d1..86d63a6f21 100644
--- a/docs/en/observability/view-observability-alerts.asciidoc
+++ b/docs/en/observability/view-observability-alerts.asciidoc
@@ -33,7 +33,7 @@ By default, this filter is set to *Show all* alerts, but you can filter to show
An alert is "Active" when the condition defined in the rule currently matches.
An alert has "Recovered" when that condition, which previously matched, is currently no longer matching.
An alert is "Untracked" when its corresponding rule is disabled or you mark the alert as untracked.
-To mark the alert as untracked, go to the Alerts table, click image:images/action-dropdown.png[Three dots used to expand the "More actions" menu,height=22] to expand the "More actions" menu, and click *Mark as untracked*.
+To mark the alert as untracked, go to the Alerts table, click the image:images/icons/boxesHorizontal.svg[More actions] icon to expand the "More actions" menu, and click *Mark as untracked*.
NOTE: There is also a "Flapping" status, which means the alert is switching repeatedly between active and recovered states.
This status is possible only if you have enabled alert flapping detection.
@@ -55,17 +55,17 @@ image::view-alert-details.png[View alert details flyout on the Alerts page]
To further inspect the alert:
* From the alert detail flyout, click *Alert details*.
-* From the Alerts table, use the image:images/action-dropdown.png[Three dots used to expand the "More actions" menu,height=22] and click *View alert details*.
+* From the Alerts table, click the image:images/icons/boxesHorizontal.svg[More actions] icon and select *View alert details*.
To further inspect the rule:
* From the alert detail flyout, click *View rule details*.
-* From the Alerts table, use the image:images/action-dropdown.png[Three dots used to expand the "More actions" menu,height=22] and click *View rule details*.
+* From the Alerts table, click the image:images/icons/boxesHorizontal.svg[More actions] icon and select *View rule details*.
To view the alert in the app that triggered it:
* From the alert detail flyout, click *View in app*.
-* From the Alerts table, click the image:images/app-link-icon.png[Eye icon used to "View in app",height=22].
+* From the Alerts table, click the image:images/icons/eye.svg[View in app] icon.
[discrete]
[[customize-observability-alerts-table]]
@@ -89,8 +89,8 @@ You can also use the toolbar buttons in the upper-right to customize the display
[[cases-observability-alerts]]
== Add alerts to cases
-From the Alerts table, you can add one or more alerts to a case. Select image:images/action-dropdown.png[Three dots used to expand the "More actions" menu,height=22]
-to add the alert to a new case or add it to an existing case. You can add an unlimited amount of alerts from any rule type.
+From the Alerts table, you can add one or more alerts to a case. Click the image:images/icons/boxesHorizontal.svg[More actions] icon
+to add the alert to a new or existing case.
NOTE: Each case can have a maximum of 1,000 alerts.