diff --git a/docs/en/observability/cloud-monitoring/azure/collect-azure-metrics.asciidoc b/docs/en/observability/cloud-monitoring/azure/collect-azure-metrics.asciidoc
new file mode 100644
index 0000000000..b6ea0b1218
--- /dev/null
+++ b/docs/en/observability/cloud-monitoring/azure/collect-azure-metrics.asciidoc
@@ -0,0 +1,135 @@
+[[collect-azure-metrics]]
+= How to collect any logs with Azure Event Hubs
+
+++++
+Collect logs
+++++
+
+**WIP**
+
+This tutorial shows how to collect Azure Application Insights logs with the Elastic Agent. At the time of this writing, there isn't a specialized integration to collect these logs.
+However, you can leverage the generic Azure Event Hub integration to collect Application Insights logs, and any other logs exported through a diagnostic setting.
+
+[discrete]
+== Prerequisites
+
+WIP
+
+[discrete]
+=== Application
+
+Search for a diagnostic setting that exports Azure Application Insights logs.
+
+For this test, I will use an Application Insights app (or component) named `return-of-the-jedi`.
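+
+If you prefer scripting over clicking through the portal, here is a minimal sketch that lists the Application Insights components in a subscription so you can confirm the component name. It assumes the `azure-identity` and `azure-mgmt-applicationinsights` packages and a subscription ID of your own; `return-of-the-jedi` is just the example name used in this tutorial.
+
+[source,python]
+----
+# Minimal sketch: list the Application Insights components in a subscription.
+# Assumes `pip install azure-identity azure-mgmt-applicationinsights` and a
+# credential with read access to the subscription.
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.applicationinsights import ApplicationInsightsManagementClient
+
+subscription_id = "<your-subscription-id>"  # placeholder
+client = ApplicationInsightsManagementClient(DefaultAzureCredential(), subscription_id)
+
+for component in client.components.list():
+    # Look for the component backing your application, for example "return-of-the-jedi".
+    print(component.name, component.app_id)
+----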
+
+
+
+[discrete]
+=== Event Hub
+
+We need a new event hub to collect the logs for this application (a scripted alternative follows the steps below):
+
+. Create a new Event Hubs namespace, or reuse an existing one.
+. Create a new event hub named `insightslogs`.
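+
+The portal works fine here, but if you prefer to script it, here is a minimal sketch using the Azure SDK for Python. It assumes the `azure-mgmt-eventhub` package; the resource group and namespace names are placeholders for your own, and the partition and retention values are only examples.
+
+[source,python]
+----
+# Minimal sketch: create the "insightslogs" event hub in an existing namespace.
+# Assumes `pip install azure-identity azure-mgmt-eventhub`; the resource group and
+# namespace names are placeholders, and the settings below are only examples.
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.eventhub import EventHubManagementClient
+
+subscription_id = "<your-subscription-id>"
+client = EventHubManagementClient(DefaultAzureCredential(), subscription_id)
+
+event_hub = client.event_hubs.create_or_update(
+    resource_group_name="<your-resource-group>",
+    namespace_name="<your-eventhub-namespace>",
+    event_hub_name="insightslogs",
+    parameters={"partition_count": 4, "message_retention_in_days": 1},
+)
+print(event_hub.name)
+----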
+
+
+
+[discrete]
+== Configuration
+
+[discrete]
+[[diagnostic-settings-step-one]]
+=== Step 1: Set up the Diagnostic Settings
+
+Using the `return-of-the-jedi` application (a scripted alternative follows the steps below):
+
+. In the application's menu, go to **Monitoring** > **Diagnostic settings** and click **Add diagnostic setting**.
+. Set a name for the setting.
+. Select the log categories you're interested in.
+. Under **Destination details**, select **Stream to an event hub**.
+. Select the namespace and the event hub name from the drop-down lists.
+. Click **Save**.
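+
+If you'd rather script this step, here is a minimal sketch using the Azure SDK for Python (`azure-mgmt-monitor`). The resource IDs, the authorization rule, and the `AppRequests` category are placeholders and examples to adapt to your own environment.
+
+[source,python]
+----
+# Minimal sketch: create a diagnostic setting that streams Application Insights
+# logs to the "insightslogs" event hub.
+# Assumes `pip install azure-identity azure-mgmt-monitor`; all resource IDs and
+# the category list below are placeholders/examples.
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.monitor import MonitorManagementClient
+
+subscription_id = "<your-subscription-id>"
+client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)
+
+# Full resource ID of the Application Insights component the setting applies to.
+resource_uri = (
+    "/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>"
+    "/providers/microsoft.insights/components/return-of-the-jedi"
+)
+
+client.diagnostic_settings.create_or_update(
+    resource_uri=resource_uri,
+    name="stream-to-insightslogs",
+    parameters={
+        # Authorization rule on the Event Hubs namespace, e.g. RootManageSharedAccessKey.
+        "event_hub_authorization_rule_id": (
+            "/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>"
+            "/providers/Microsoft.EventHub/namespaces/<your-namespace>"
+            "/authorizationRules/RootManageSharedAccessKey"
+        ),
+        "event_hub_name": "insightslogs",
+        # Enable the categories you're interested in; "AppRequests" is just an example.
+        "logs": [{"category": "AppRequests", "enabled": True}],
+    },
+)
+----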
+
+
+
+[discrete]
+[[generate-logs-step-two]]
+=== Step 2: Generate some logs
+
+Use the application connected to the Application Insights resource to generate some test logs. In this example, `return-of-the-jedi` is connected to an Azure Function with an HTTP endpoint.
+
+Send a few requests to the HTTP endpoint to produce a handful of log entries.
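+
+As an example of driving a bit of traffic, here is a minimal sketch that sends a few requests to the function's HTTP endpoint. The URL is a placeholder, and it assumes the `requests` package.
+
+[source,python]
+----
+# Minimal sketch: send a few requests to the HTTP-triggered function so that
+# Application Insights produces request and trace logs. The URL is a placeholder.
+import requests
+
+endpoint = "https://<your-function-app>.azurewebsites.net/api/<your-function>"
+
+for i in range(10):
+    response = requests.get(endpoint, params={"name": f"test-{i}"}, timeout=10)
+    print(i, response.status_code)
+----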
+
+
+
+[discrete]
+[[check-event-hub-step-three]]
+=== Step 3: Check the Event Hub for exported logs
+
+Go back to the `insightslogs` event hub and verify that its charts start reporting incoming data.
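+
+If you want to peek at the raw events without waiting for the charts, here is a minimal sketch using the `azure-eventhub` package to read a few events from the hub. The connection string is a placeholder, and this is only a quick inspection; the Elastic Agent does the real consuming later.
+
+[source,python]
+----
+# Minimal sketch: read a few raw events from the "insightslogs" event hub to
+# confirm that the diagnostic setting is exporting data.
+# Assumes `pip install azure-eventhub`; the connection string is a placeholder.
+from azure.eventhub import EventHubConsumerClient
+
+consumer = EventHubConsumerClient.from_connection_string(
+    conn_str="<your-event-hubs-namespace-connection-string>",
+    consumer_group="$Default",
+    eventhub_name="insightslogs",
+)
+
+def on_event(partition_context, event):
+    # Each event body is a JSON payload with a "records" array of exported logs.
+    print(event.body_as_str()[:200])
+
+with consumer:
+    # Read from the beginning of each partition; stop manually once you see data.
+    consumer.receive(on_event=on_event, starting_position="-1")
+----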
+
+[discrete]
+== Collect the logs
+
+[discrete]
+[[setup-agent-step-one]]
+=== Step 1: Set up the agent
+
+Create a new "Application Insights logs" agent policy for this test
+Install the generic Azure Event Hub input integration
+
+
+
+Set up the integration using the `insightslogs` event hub and the other required options. See https://docs.elastic.co/integrations/azure#setup to learn more.
+
+
+
+In this first iteration:
+
+- Leave "Parse azure message" turned off.
+- Turn "Preserve original event" on.
+
+[discrete]
+[[explore-logs-step-two]]
+=== Step 2: Explore the logs
+
+Assign the agent policy to an agent and start exploring the logs.
+
+Open **Analytics** > **Discover** and filter the documents using `data_stream.dataset : "azure.eventhub"`.
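+
+The same filter can be run against Elasticsearch directly. Here is a minimal sketch with the official Python client; the endpoint and API key are placeholders, and the index pattern follows the default data stream naming for this dataset.
+
+[source,python]
+----
+# Minimal sketch: query the collected documents with the Elasticsearch Python client.
+# Assumes `pip install elasticsearch`; the endpoint and API key are placeholders.
+from elasticsearch import Elasticsearch
+
+es = Elasticsearch("https://<your-elasticsearch-endpoint>:443", api_key="<your-api-key>")
+
+response = es.search(
+    index="logs-azure.eventhub-*",
+    query={"term": {"data_stream.dataset": "azure.eventhub"}},
+    size=5,
+)
+
+for hit in response["hits"]["hits"]:
+    # With parsing disabled, the exported log is still a JSON string in "message".
+    print(hit["_source"]["message"][:200])
+----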
+
+
+
+[discrete]
+[[basic-parsing-step-three]]
+=== Step 3: Basic parsing
+
+With the current configuration, the integration collects the Application Insights logs as a string in the `message` field.
+
+
+
+At this point, we have two options:
+
+- Enable the "Parse azure message" to turn the content of the message field into an object, levering the dynamic mapping.
+- Add a custom pipeline and mapping to fine-tune the documents.
+
+Enable the "Parse azure message"
+
+This is the quickest way to start using the logs. Go back to the agent policy and flip the "Parse azure message" switch.
+
+
+
+Here is an example document with parsing enabled:
+
+
+
+[discrete]
+==== Add a custom pipeline and mapping
+
+The automatic parsing is convenient, but it has downsides:
+
+- It turns the JSON log into an object whose field names can vary a lot, depending on the conventions used by the Azure team responsible for each service.
+- Mapping conflicts may occur; for example, different log categories may use the same field name with different types.
+
+A custom ingest pipeline and mapping give you complete control over the resulting documents, as shown in the sketch below.
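+
+As a starting point, here is a minimal sketch that creates a `logs-azure.eventhub@custom` ingest pipeline (the `@custom` hook that Fleet-managed integrations typically call when it exists) to parse the `message` field into a dedicated object and rename a field. The endpoint, API key, and the rename example are placeholders to adapt to the categories you actually export.
+
+[source,python]
+----
+# Minimal sketch: a custom ingest pipeline that parses the exported log into a
+# dedicated object field instead of relying on "Parse azure message".
+# Assumes `pip install elasticsearch`; endpoint, API key, and the rename example
+# are placeholders.
+from elasticsearch import Elasticsearch
+
+es = Elasticsearch("https://<your-elasticsearch-endpoint>:443", api_key="<your-api-key>")
+
+es.ingest.put_pipeline(
+    id="logs-azure.eventhub@custom",
+    description="Parse Application Insights logs exported through the event hub",
+    processors=[
+        {
+            # Parse the JSON string in "message" into a single, predictable object.
+            "json": {
+                "field": "message",
+                "target_field": "azure.app_insights",
+                "ignore_failure": True,
+            }
+        },
+        {
+            # Example only: lift a well-known field to a stable, typed destination.
+            "rename": {
+                "field": "azure.app_insights.OperationName",
+                "target_field": "event.action",
+                "ignore_missing": True,
+            }
+        },
+    ],
+)
+----
+
+Similarly, you can add explicit field mappings through a `logs-azure.eventhub@custom` component template so the parsed fields get the types you expect.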
+
+[discrete]
+== Conclusions
+
+"Parse azure message" is a great option, but I recommend considering building custom pipelines and mappings to take complete control.
\ No newline at end of file
diff --git a/docs/en/observability/monitor-azure-agent.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-agent.asciidoc
similarity index 100%
rename from docs/en/observability/monitor-azure-agent.asciidoc
rename to docs/en/observability/cloud-monitoring/azure/monitor-azure-agent.asciidoc
diff --git a/docs/en/observability/monitor-azure-beats.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-beats.asciidoc
similarity index 100%
rename from docs/en/observability/monitor-azure-beats.asciidoc
rename to docs/en/observability/cloud-monitoring/azure/monitor-azure-beats.asciidoc
diff --git a/docs/en/observability/cloud-monitoring/azure/monitor-azure-intro.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-intro.asciidoc
new file mode 100644
index 0000000000..4b73b6153d
--- /dev/null
+++ b/docs/en/observability/cloud-monitoring/azure/monitor-azure-intro.asciidoc
@@ -0,0 +1,28 @@
+
+[[monitor-azure-web-services]]
+= Azure monitoring
+
+++++
+Azure monitoring
+++++
+
+Elastic Observability offers powerful monitoring solutions to keep your Azure environments reliable and efficient, providing deep insights into the performance of your applications, services, and infrastructure components.
+
+Learn how to use the Elastic Observability solution to observe and monitor a broad range of Azure resources and applications.
+
+- <>
+- <>
+- <>
+
+
+For a full list of supported Azure integrations, check the {integrations-docs}[Elastic
+Integrations docs].
+
+include::monitor-azure-agent.asciidoc[]
+
+include::collect-azure-metrics.asciidoc[leveloffset=+2]
+
+include::monitor-azure-beats.asciidoc[]
+
+include::monitor-azure-native.asciidoc[]
+
diff --git a/docs/en/observability/monitor-azure-native.asciidoc b/docs/en/observability/cloud-monitoring/azure/monitor-azure-native.asciidoc
similarity index 100%
rename from docs/en/observability/monitor-azure-native.asciidoc
rename to docs/en/observability/cloud-monitoring/azure/monitor-azure-native.asciidoc
diff --git a/docs/en/observability/index.asciidoc b/docs/en/observability/index.asciidoc
index dd913ad90e..c6692f0f51 100644
--- a/docs/en/observability/index.asciidoc
+++ b/docs/en/observability/index.asciidoc
@@ -80,6 +80,8 @@ include::monitor-infra/metrics-reference.asciidoc[leveloffset=+2]
// Will eventually be replaced by cloud monitoring when other providers are covered
include::cloud-monitoring/aws/monitor-amazon-intro.asciidoc[leveloffset=+1]
+include::cloud-monitoring/azure/monitor-azure-intro.asciidoc[leveloffset=+1]
+
// Synthetics
include::synthetics-intro.asciidoc[leveloffset=+1]
diff --git a/docs/en/observability/tutorials.asciidoc b/docs/en/observability/tutorials.asciidoc
index c8f5590a11..a98dd17729 100644
--- a/docs/en/observability/tutorials.asciidoc
+++ b/docs/en/observability/tutorials.asciidoc
@@ -15,31 +15,16 @@ instead.
Not sure which agent to use? Refer to
{fleet-guide}/beats-agent-comparison.html[{beats} and {agent} capabilities].
-* <>
-
-* <>
-
* <>
* <>
* <>
-* <>
-
-* <>
-
-* <>
-
include::monitor-gcp.asciidoc[]
include::monitor-java-app.asciidoc[]
include::monitor-k8s/monitor-k8s.asciidoc[leveloffset=+1]
-include::monitor-azure-agent.asciidoc[]
-
-include::monitor-azure-native.asciidoc[]
-
-include::monitor-azure-beats.asciidoc[]