From d20bd854296ddaf7b77a03aa2af484cc391c166a Mon Sep 17 00:00:00 2001
From: CheetoDa <31571545+Calm-Rock@users.noreply.github.com>
Date: Wed, 25 Dec 2024 17:09:36 +0530
Subject: [PATCH] feat: cleaning up logs docs (#1022)
* feat: cleaning up logs docs
* next set of logs
* updated docker logs docs
* heroku logs
* heroku logs fix
* http logs fixed
* syslogs
* tomcat logs
* aws-lambda-nodejs
* otel sdk java
* otel python sdk
* log file
* existing collectors to signoz
* aws docs
* azure docs
* minor fix
* gcp docs
* gcp docs fix
* fix app logs
* logs docs fix
* docs fix
---
.../docs/aws-monitoring/ec2-infra-metrics.mdx | 25 +-
data/docs/aws-monitoring/ec2-logs.mdx | 77 ++--
data/docs/aws-monitoring/eks.mdx | 17 +-
data/docs/aws-monitoring/elb-logs.mdx | 16 +-
data/docs/aws-monitoring/lambda-logs.mdx | 14 +-
data/docs/aws-monitoring/rds-logs.mdx | 21 +-
data/docs/aws-monitoring/vpc-logs.mdx | 14 +-
data/docs/azure-monitoring/aks.mdx | 26 +-
.../azure-monitoring/app-service/logging.mdx | 11 +-
.../azure-monitoring/app-service/metrics.mdx | 29 +-
.../azure-monitoring/app-service/tracing.mdx | 50 +--
.../az-blob-storage/logging.mdx | 15 +-
.../az-blob-storage/metrics.mdx | 34 +-
.../az-container-apps/logging.mdx | 12 +-
.../az-container-apps/metrics.mdx | 36 +-
.../az-container-apps/tracing.mdx | 48 ++-
data/docs/azure-monitoring/az-fns/logging.mdx | 13 +-
data/docs/azure-monitoring/az-fns/metrics.mdx | 27 +-
data/docs/azure-monitoring/az-fns/tracing.mdx | 49 ++-
data/docs/azure-monitoring/db-metrics.mdx | 26 +-
.../virtual-machines/vm-metrics.mdx | 30 +-
.../gcp-monitoring/app-engine/logging.mdx | 28 +-
.../gcp-monitoring/app-engine/metrics.mdx | 17 +-
.../gcp-monitoring/app-engine/tracing.mdx | 16 +-
.../cloud-monitoring/metrics.mdx | 22 +-
.../docs/gcp-monitoring/cloud-run/logging.mdx | 20 +-
.../docs/gcp-monitoring/cloud-run/metrics.mdx | 14 +-
.../docs/gcp-monitoring/cloud-run/tracing.mdx | 12 +-
.../docs/gcp-monitoring/cloud-sql/logging.mdx | 24 +-
.../docs/gcp-monitoring/cloud-sql/metrics.mdx | 11 +-
.../gcp-monitoring/compute-engine/logging.mdx | 28 +-
.../gcp-monitoring/compute-engine/metrics.mdx | 18 +-
.../gcp-monitoring/compute-engine/tracing.mdx | 2 +-
data/docs/gcp-monitoring/gcp-clb/logging.mdx | 28 +-
data/docs/gcp-monitoring/gcp-clb/metrics.mdx | 18 +-
.../gcp-monitoring/gcp-fns/custom-metrics.mdx | 14 +-
.../gcp-monitoring/gcp-fns/fns-metrics.mdx | 17 +-
data/docs/gcp-monitoring/gcp-fns/logging.mdx | 36 +-
data/docs/gcp-monitoring/gcp-fns/tracing.mdx | 10 +-
data/docs/gcp-monitoring/gcs/logging.mdx | 28 +-
data/docs/gcp-monitoring/gcs/metrics.mdx | 6 +-
.../gke/gke-logging-and-metrics.mdx | 5 +-
data/docs/gcp-monitoring/gke/gke-tracing.mdx | 12 +-
data/docs/gcp-monitoring/vpc/logging.mdx | 30 +-
data/docs/gcp-monitoring/vpc/metrics.mdx | 26 +-
.../vpc/vpc-connector-creation.mdx | 2 +-
.../send-logs/aws-lambda-nodejs.mdx | 110 +++---
...mcat-access-and-garbage-collector-logs.mdx | 303 ++++++---------
.../send-logs/windows-events-log.mdx | 100 +++--
.../docs/messaging-queues/confluent-kafka.mdx | 2 +-
data/docs/userguide/collect_docker_logs.mdx | 204 +++++-----
.../userguide/collect_kubernetes_pod_logs.mdx | 301 +++++++--------
.../docs/userguide/collect_logs_from_file.mdx | 257 ++++++-------
.../collecting-ecs-logs-and-metrics.mdx | 10 +-
.../collecting-ecs-sidecar-infra.mdx | 13 +-
...lecting_application_logs_otel_sdk_java.mdx | 171 +++++----
...cting_application_logs_otel_sdk_python.mdx | 115 ++----
data/docs/userguide/collecting_syslogs.mdx | 350 ++++++++++--------
data/docs/userguide/fluentbit_to_signoz.mdx | 131 ++++---
data/docs/userguide/fluentd_to_signoz.mdx | 322 ++++++++--------
data/docs/userguide/heroku_logs_to_signoz.mdx | 132 ++++---
data/docs/userguide/logstash_to_signoz.mdx | 219 ++++++-----
.../python-logs-auto-instrumentation.mdx | 22 +-
.../send-cloudwatch-logs-to-signoz.mdx | 6 +-
data/docs/userguide/send-logs-http.mdx | 243 ++++++------
data/docs/userguide/vercel_logs_to_signoz.mdx | 71 ++--
66 files changed, 2117 insertions(+), 1999 deletions(-)
diff --git a/data/docs/aws-monitoring/ec2-infra-metrics.mdx b/data/docs/aws-monitoring/ec2-infra-metrics.mdx
index 0341e2df0..43f1537bf 100644
--- a/data/docs/aws-monitoring/ec2-infra-metrics.mdx
+++ b/data/docs/aws-monitoring/ec2-infra-metrics.mdx
@@ -1,21 +1,25 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: ec2-infra-metrics
title: Infrastructure metrics of EC2 instance
---
### Overview
-This documentation guides you through integrating AWS EC2 infrastructure metrics into SigNoz using the Hostmetrics receiver in OpenTelemetry Collector. The Hostmetrics receiver is designed to collect metrics about the host system from various sources. It supports various scrapers for collecting different metrics, including CPU, disk, load, filesystem, memory, network, paging, and process metrics.
+This documentation guides you through integrating AWS EC2 infrastructure metrics into SigNoz using the Hostmetrics receiver in OpenTelemetry Collector.
+The Hostmetrics receiver is designed to collect metrics about the host system from various sources. It supports various scrapers for collecting different metrics,
+including CPU, disk, load, filesystem, memory, network, paging, and process metrics.
+
+
+
### Prerequisites
- An EC2 instance
-- A [SigNoz Cloud](https://signoz.io/teams/) account
### Configuring Hostmetrics Receiver
-To see your infrastructure metrics in SigNoz, you need to configure the hostmetrics receiver and create a HostMetrics Dashboard. Follow [this documentation](https://signoz.io/docs/userguide/hostmetrics/) to configure hostmetrics receiver and creating the Hostmetrics Dashboard.
+To see your infrastructure metrics in SigNoz, you need to configure the hostmetrics receiver. Follow [this documentation](https://signoz.io/docs/userguide/hostmetrics/) to configure the hostmetrics receiver and create the Hostmetrics Dashboard.
### Final Output
@@ -34,16 +38,7 @@ After setting up your Hostmetrics Dashboard, here's what it might look like:
-
-{/* */}
+
+
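For reference, the hostmetrics receiver described above is typically configured along these lines. This is a minimal sketch: the scraper names follow the OpenTelemetry Collector hostmetrics receiver, while the collection interval and the particular set of scrapers enabled here are illustrative choices, not requirements.

```yaml
receivers:
  hostmetrics:
    collection_interval: 60s   # illustrative; adjust to your needs
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
```

Each scraper can be omitted or configured further; see the linked hostmetrics documentation for the full set of options.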
diff --git a/data/docs/aws-monitoring/ec2-logs.mdx b/data/docs/aws-monitoring/ec2-logs.mdx
index a42c3c579..58ae807fc 100644
--- a/data/docs/aws-monitoring/ec2-logs.mdx
+++ b/data/docs/aws-monitoring/ec2-logs.mdx
@@ -1,81 +1,82 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: ec2-logs
-title: Send Application/Server logs from EC2 to SigNoz
+title: Send Application/Server Logs from EC2 to SigNoz
---
## Introduction
-This guide provides detailed instructions on how to send application and server logs from an EC2 instance to **SigNoz Cloud**. By integrating with SigNoz, you can efficiently collect, monitor, and analyze your logs for better insights into your applications and servers.
+
+This documentation provides detailed instructions on how to send application and server logs from an EC2 instance to **SigNoz**. By integrating with SigNoz, you can
+efficiently collect, monitor, and analyze your logs for better insights into your applications and servers.
## Prerequisites
- A Linux-based EC2 instance
-- An active [SigNoz Cloud](http://localhost:3000/teams/) account
-
-Sending your server/application logs to SigNoz Cloud broadly involves these two simple steps:
-- Install OpenTelemetry Collector(OTel collector)
-- Configure filelog receiver
+
-## Install OpenTelemetry Collector
+
-The OpenTelemetry collector provides a vendor-neutral way to collect, process, and export your telemetry data such as logs, metrics, and traces.
+### Install OpenTelemetry Collector
You can install OpenTelemetry collector as an agent on your Virtual Machine by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
+The OpenTelemetry collector provides a vendor-neutral way to collect, process, and export telemetry data such as logs, metrics, and traces.
+### Dummy Log File
-## Dummy log file
-
-As an example, we can use a sample log file called `app.log` with the following dummy data:
+As an example, use a sample log file called `app.log` with the following dummy data:
```
This is log line 1
This is log line 2
This is log line 3
+This is log line 4
+This is log line 5
```
This file represents a log file of your application/server.
-## Configure filelog receiver
+### Configure Filelog Receiver
-Receivers are used to get data into the collector. A filelog receiver collects logs from files.
-Modify the `config.yaml` file that you created while installing OTel collector in the previous step to include the filelog receiver. This involves specifying the path to your `app.log` file (or your log file) and setting the `start_at` parameter. For more fields that are available for filelog receiver please check [this link](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).
+Receivers are used to get data into the collector. A filelog receiver collects logs from files. Modify the `config.yaml` file that you created while installing
+the OTel collector to include the filelog receiver. Specify the path to your `app.log` file (or your actual log file) and set the `start_at` parameter.
-
-```yaml
+```yaml:config.yaml
receivers:
- ...
filelog/app:
- include: [ /tmp/app.log ] #include the full path to your log file
+ include: [ /tmp/app.log ] # include the full path to your log file
start_at: end
-...
```
-
-
-The `start_at: end` configuration ensures that only newly added logs are transmitted. If you wish to include historical logs from the file, remember to modify `start_at` to `beginning`.
-
+
+The `start_at: end` configuration ensures that only newly added logs are transmitted. If you wish to include historical logs from the file, set `start_at` to
+`beginning`.
-## Update pipeline configuration
+### Update Pipeline Configuration
-Receivers must be enabled via pipelines within the service section of the collector config file. In the same `config.yaml` file mentioned above, update the pipeline settings to include the new filelog receiver. This step is crucial for ensuring that the logs are correctly processed and sent to SigNoz.
+Receivers must be enabled via pipelines within the `service` section of the collector config file. Update the pipeline configuration in `config.yaml`:
-```yaml {4}
- service:
- ....
- logs:
- receivers: [otlp, filelog/app]
- processors: [batch]
- exporters: [otlp]
+```yaml:config.yaml
+service:
+ pipelines:
+ logs:
+ receivers: [otlp, filelog/app]
+ processors: [batch]
+ exporters: [otlp]
```
-Now restart the OTel collector so that new changes are applied. The steps to run the OTel collector can be found [here](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/)
+Restart the OpenTelemetry Collector by following the steps outlined in [this guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
+
+### Verifying the Exported Logs
-## Verifying the exported logs
-The logs will be exported to SigNoz UI. If you add more entries to your app.log file they will also be visible in SigNoz UI.
+The logs will be exported to the SigNoz Cloud UI. If you add more entries to your `app.log` file, they will also appear in the SigNoz Logs Explorer.
\ No newline at end of file
+
+
+
+
+
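To exercise the pipeline above end to end, you can append fresh entries to the watched file and confirm they appear in the Logs Explorer. A small Python sketch of this (illustrative only: `append_log_lines` is a hypothetical helper, and a temporary file stands in for `/tmp/app.log`):

```python
import os
import tempfile

def append_log_lines(path, lines):
    # Append entries to the log file watched by the filelog receiver.
    # With `start_at: end`, only lines appended while the collector is
    # running are shipped to SigNoz.
    with open(path, "a") as f:
        for line in lines:
            f.write(line + "\n")

# Demo against a temporary file standing in for /tmp/app.log
log_path = os.path.join(tempfile.mkdtemp(), "app.log")
append_log_lines(log_path, ["This is log line 6", "This is log line 7"])
```

After appending, the new lines should show up in the SigNoz Logs Explorer within the collector's next poll interval.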
diff --git a/data/docs/aws-monitoring/eks.mdx b/data/docs/aws-monitoring/eks.mdx
index 13b78e931..f3191d622 100644
--- a/data/docs/aws-monitoring/eks.mdx
+++ b/data/docs/aws-monitoring/eks.mdx
@@ -11,8 +11,10 @@ using SigNoz.
- An EKS cluster running
- Helm installed on your machine
-- [SigNoz Cloud account](https://signoz.io/teams/)
-
+
+
+
+
## Setup
### Step 1: Add SigNoz Helm repository
@@ -57,8 +59,8 @@ presets:
``- Name of the Kubernetes cluster or a unique identifier of the cluster.
`` - Deployment environment of your application. Example: "staging", "production", etc.
-`{region}` - [Ingestion region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint) of your SigNoz Cloud instance. Can be `us`, `eu` or `in`.
-`` - [Ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/) for your SigNoz Cloud instance.
+Replace `` with your SigNoz Cloud [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/).
+Set the `{region}` to match your [SigNoz Cloud region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint).
### Step 4: Install k8s-infra chart
@@ -73,7 +75,7 @@ helm install my-release signoz/k8s-infra -f override-values.yaml
### Logs
Once you're done with the Setup, you should be able to see your EKS logs in the [Logs explorer](https://signoz.io/docs/product-features/logs-explorer/) under the
-Logs tab of your SigNoz Cloud instance.
+Logs tab of your SigNoz instance.
### Metrics
@@ -115,4 +117,7 @@ You can check a complete list of Dashboards for Kubernetes Monitroing [here](htt
To create your own Dashboard in SigNoz, checkout this [documentation](https://signoz.io/docs/userguide/manage-dashboards/).
-You can find the complete list of availbe Kubernetes Metrics [here](https://signoz.io/docs/tutorial/kubernetes-infra-metrics/#kubernetes-metrics---kubeletstats-and-k8s_cluster).
\ No newline at end of file
+You can find the complete list of available Kubernetes Metrics [here](https://signoz.io/docs/tutorial/kubernetes-infra-metrics/#kubernetes-metrics---kubeletstats-and-k8s_cluster).
+
+
+
\ No newline at end of file
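The `override-values.yaml` referenced in the install step typically looks like the following. This is a sketch, not a definitive file: the key names follow the SigNoz k8s-infra chart's documented values, and the endpoint, cluster name, and ingestion key are placeholders you must replace.

```yaml
global:
  cloud: aws
  clusterName: <CLUSTER_NAME>
  deploymentEnvironment: production
otelCollectorEndpoint: ingest.<region>.signoz.cloud:443
otelInsecure: false
signozApiKey: <SIGNOZ_INGESTION_KEY>
presets:
  otlpExporter:
    enabled: true
  loggingExporter:
    enabled: false
```

Pass it to Helm with `-f override-values.yaml` as shown in the install command above.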
diff --git a/data/docs/aws-monitoring/elb-logs.mdx b/data/docs/aws-monitoring/elb-logs.mdx
index 0be154997..7050a1637 100644
--- a/data/docs/aws-monitoring/elb-logs.mdx
+++ b/data/docs/aws-monitoring/elb-logs.mdx
@@ -1,14 +1,17 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: elb-logs
title: Send your ELB logs to SigNoz
+hide_table_of_contents: true
---
## Overview
-This documentation provides a detailed walkthrough on how to set up an AWS Lambda function to collect Elastic Load Balancer (ELB) logs stored in an AWS S3 bucket and forward them to SigNoz. By the end of this guide, you will have a setup that automatically sends your ELB logs to SigNoz, enabling you to visualize and monitor your application's load balancing performance and health.
+This documentation provides a detailed walkthrough on how to set up an AWS Lambda function to collect Elastic Load Balancer (ELB) logs stored in an AWS S3 bucket
+and forward them to SigNoz. This setup automatically sends your ELB logs to SigNoz, enabling you to visualize and
+monitor your application's load balancing performance and health.
-**Here’s a quick summary of what we’ll be doing in this detailed article.**
+**Here’s a quick summary of what we’ll be doing in this documentation.**
- [Creating / Configuring your S3 bucket](#creating--configuring-your-s3-bucket)
- [Understanding how lambda function work](#understanding-how-lambda-function-work)
@@ -19,9 +22,10 @@ This documentation provides a detailed walkthrough on how to set up an AWS Lambd
## Prerequisites
-- AWS account with administrative privilege.
-- [SigNoz Cloud Account](https://signoz.io/teams/)
+- AWS account with administrative privilege
+
+
## Creating / Configuring your S3 bucket
@@ -518,3 +522,5 @@ Upon accessing the SigNoz logs section, you will notice a considerable influx of
A sample log line of the logs sent from AWS Lambda
+
+
\ No newline at end of file
diff --git a/data/docs/aws-monitoring/lambda-logs.mdx b/data/docs/aws-monitoring/lambda-logs.mdx
index 6c96092c5..9cb222fb1 100644
--- a/data/docs/aws-monitoring/lambda-logs.mdx
+++ b/data/docs/aws-monitoring/lambda-logs.mdx
@@ -1,14 +1,17 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: lambda-logs
title: Send your AWS Lambda logs to SigNoz
+hide_table_of_contents: true
---
## Overview
-This documentation provides a detailed walkthrough on how to set up an AWS Lambda function to collect AWS Lambda logs stored in an AWS S3 bucket and forward them to SigNoz. By the end of this guide, you will have a setup that automatically sends your Lambda logs to SigNoz, enabling you to visualize and monitor your application's load balancing performance and health.
+This documentation provides a detailed walkthrough on how to set up an AWS Lambda function to collect AWS Lambda logs stored in an AWS S3 bucket and forward them to
+SigNoz. This setup automatically sends your Lambda logs to SigNoz, enabling you to visualize and monitor your
+application's performance and health.
-**Here’s a quick summary of what we’ll be doing in this detailed article.**
+**Here’s a quick summary of what we’ll be doing in this documentation.**
- [Creating / Configuring your S3 bucket](#creating--configuring-your-s3-bucket)
- [Understanding how lambda function work](#understanding-how-lambda-function-work)
@@ -20,8 +23,9 @@ This documentation provides a detailed walkthrough on how to set up an AWS Lambd
## Prerequisites
- AWS account with administrative privilege.
-- [SigNoz Cloud Account](https://signoz.io/teams/)
+
+
## Creating / Configuring your S3 bucket
@@ -459,3 +463,5 @@ Upon accessing the SigNoz logs section, you will notice a considerable influx of
A sample log line of the logs sent from AWS Lambda (SAMPLE ONLY)
+
+
\ No newline at end of file
diff --git a/data/docs/aws-monitoring/rds-logs.mdx b/data/docs/aws-monitoring/rds-logs.mdx
index 78c53efd1..d2b7d2c78 100644
--- a/data/docs/aws-monitoring/rds-logs.mdx
+++ b/data/docs/aws-monitoring/rds-logs.mdx
@@ -1,14 +1,17 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: rds-logs
title: Send your RDS logs to SigNoz
+hide_table_of_contents: true
---
## Overview
-This documentation provides a detailed walkthrough on how to set up an AWS Lambda function to collect Relational Database Service (RDS) logs stored in an AWS S3 bucket and forward them to SigNoz. By the end of this guide, you will have a setup that automatically sends your RDS logs to SigNoz, enabling you to visualize and monitor your database performance and health.
+This documentation provides a detailed walkthrough on how to set up an AWS Lambda function to collect Relational Database Service (RDS) logs stored in an AWS S3
+bucket and forward them to SigNoz. This setup automatically sends your RDS logs to SigNoz, enabling you to visualize and
+monitor your database performance and health.
-**Here’s a quick summary of what we’ll be doing in this detailed article.**
+**Here’s a quick summary of what we’ll be doing in this documentation.**
- [Creating / Configuring your S3 bucket](#creating--configuring-your-s3-bucket)
- [Understanding how lambda function work](#understanding-how-lambda-function-work)
@@ -20,10 +23,8 @@ This documentation provides a detailed walkthrough on how to set up an AWS Lambd
## Prerequisites
- AWS account with administrative privilege.
-- [SigNoz Cloud Account](https://signoz.io/teams/)
-
-Before we dive into creating and configuring the S3 bucket and Lambda function, let's talk a little about AWS RDS () supported databases and all the expected types of logs that you will be dealing with.
+{/* Before we dive into creating and configuring the S3 bucket and Lambda function, let's talk a little about AWS RDS () supported databases and all the expected types of logs that you will be dealing with.
## Introduction to Database Logging in AWS RDS
@@ -161,7 +162,10 @@ Until MariaDB 10.1.4, the format only consisted of the date (yymmdd) and time, f
The screenshots shown below were taken to send Elastic Load Balancer logs to SigNoz, rest assured, the steps for for RDS logs to SigNoz cloud endpoint remain same. Please name the appropriate name changes against what is shown below in the screenshots (e.g. - bucket names, name fields, etc)
-
+ */}
+
+
+
## Creating / Configuring your S3 bucket
@@ -630,3 +634,6 @@ Upon accessing the SigNoz logs section, you will notice a considerable influx of
A sample log line of the logs sent from AWS Lambda
+
+
+
diff --git a/data/docs/aws-monitoring/vpc-logs.mdx b/data/docs/aws-monitoring/vpc-logs.mdx
index 6d077944d..4db64e373 100644
--- a/data/docs/aws-monitoring/vpc-logs.mdx
+++ b/data/docs/aws-monitoring/vpc-logs.mdx
@@ -1,14 +1,17 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: vpc-logs
title: Send your VPC logs to SigNoz
+hide_table_of_contents: true
---
## Overview
-This documentation provides a detailed walkthrough on how to set up an AWS Lambda function to collect Virtual Private Cloud (VPC) logs stored in an AWS S3 bucket and forward them to SigNoz. By the end of this guide, you will have a setup that automatically sends your VPC logs to SigNoz, enabling you to visualize and monitor your application's load balancing performance and health.
+This documentation provides a detailed walkthrough on how to set up an AWS Lambda function to collect Virtual Private Cloud (VPC) logs stored in an AWS S3 bucket
+and forward them to SigNoz. This setup automatically sends your VPC logs to SigNoz, enabling you to visualize and monitor
+your VPC's network traffic and health.
-**Here’s a quick summary of what we’ll be doing in this detailed article.**
+**Here’s a quick summary of what we’ll be doing in this documentation.**
- [Creating / Configuring your S3 bucket](#creating--configuring-your-s3-bucket)
- [Understanding how lambda function work](#understanding-how-lambda-function-work)
@@ -20,8 +23,9 @@ This documentation provides a detailed walkthrough on how to set up an AWS Lambd
## Prerequisites
- AWS account with administrative privilege.
-- [SigNoz Cloud Account](https://signoz.io/teams/)
+
+
## Creating / Configuring your S3 bucket
@@ -493,3 +497,5 @@ Upon accessing the SigNoz logs section, you will notice a considerable influx of
A sample log line of the logs sent from AWS Lambda
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/aks.mdx b/data/docs/azure-monitoring/aks.mdx
index cb2f78264..a03e341fa 100644
--- a/data/docs/azure-monitoring/aks.mdx
+++ b/data/docs/azure-monitoring/aks.mdx
@@ -1,12 +1,14 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: aks
title: AKS Metrics & Logging
+hide_table_of_contents: true
---
## Overview
-[AKS (Azure Kubernetes Service)](https://learn.microsoft.com/en-us/azure/aks/what-is-aks) is a managed Kubernetes service provided by Microsoft Azure that simplifies the deployment, management, and operations of Kubernetes clusters.
+[AKS (Azure Kubernetes Service)](https://learn.microsoft.com/en-us/azure/aks/what-is-aks) is a managed Kubernetes service provided by Microsoft Azure that
+simplifies the deployment, management, and operations of Kubernetes clusters.
## Prerequisites
@@ -14,7 +16,10 @@ title: AKS Metrics & Logging
- `kubectl` installed and logged in to the AKS cluster
- Helm
-## Quick Start
+
+
+
+## Setup
This setup is similar to the central collector but with a different function.
@@ -26,7 +31,7 @@ helm install -n signoz --create-namespace kubelet-otel signoz/k8s-infra \\
This should start sending logs and metrics to SigNoz.
-## Tracing
+{/* ## Tracing
### eBPF Tracing
@@ -48,13 +53,15 @@ For example, Pixie can be configured by following the instructions in the respec
These solutions may not be suitable for all use cases, and are still may not be production-ready. It is recommended to evaluate solutions and choose the one that best fits your needs.
-
+ */}
### Application-Level Tracing
-For application-level tracing, you can use the OpenTelemetry SDKs integrated with your application. These SDKs will automatically collect and forward traces to the Central Collector.
+For application-level tracing, you can use the OpenTelemetry SDKs integrated with your application. These SDKs will automatically collect and forward traces to the
+Central Collector.
-Please refer to our [SigNoz Tutorials](../../instrumentation/) or [Blog](https://signoz.io/blog/) to find information on how to instrument your application like Spring, FastAPI, NextJS, Langchain, Node.js, Flask, Django, etc.
+Please refer to our [SigNoz Tutorials](../../instrumentation/) or [Blog](https://signoz.io/blog/) to find information on how to instrument applications built with
+Spring, FastAPI, NextJS, Node.js, Flask, Django, etc.
```bash
# Node.js example
@@ -107,4 +114,7 @@ If you encounter any issues while setting up logging and metrics for your AKS cl
- Ensure that the AKS cluster has network access to the SigNoz ingestion endpoint (`ingest..signoz.cloud:443`).
- Check if there are any network security groups or firewalls blocking the required ports.
5. Double-check the SigNoz API key:
- - Confirm that the provided `signozApiKey` is correct and has the necessary permissions to ingest data.
\ No newline at end of file
+ - Confirm that the provided `signozApiKey` is correct and has the necessary permissions to ingest data.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/app-service/logging.mdx b/data/docs/azure-monitoring/app-service/logging.mdx
index c0f9f8e43..a3e98a7ab 100644
--- a/data/docs/azure-monitoring/app-service/logging.mdx
+++ b/data/docs/azure-monitoring/app-service/logging.mdx
@@ -1,7 +1,8 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: logging
title: App Service Logging
+hide_table_of_contents: true
---
## Overview
@@ -21,6 +22,9 @@ Although, the application logs could be sent directly in the Application Level u
- [EventHub Setup](../../bootstrapping/data-ingestion)
- [Central Collector Setup](../../bootstrapping/collector-setup)
+
+
+
## Setup
1. Navigate to your App Service in the Azure portal
@@ -58,4 +62,7 @@ Although, the application logs could be sent directly in the Application Level u
5. Configure the destination details as "**Stream to an Event Hub**" and select the Event Hub namespace and Event Hub name created during the [EventHub Setup](../../bootstrapping/data-ingestion)
6. Save the diagnostic settings
-That's it! You have successfully set up logging for your Azure App Service.
\ No newline at end of file
+That's it! You have successfully set up logging for your Azure App Service.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/app-service/metrics.mdx b/data/docs/azure-monitoring/app-service/metrics.mdx
index 0d96fb82a..dd6955f64 100644
--- a/data/docs/azure-monitoring/app-service/metrics.mdx
+++ b/data/docs/azure-monitoring/app-service/metrics.mdx
@@ -1,25 +1,31 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: metrics
title: App Service Metrics
+hide_table_of_contents: true
---
-## QuickStart
+{/* ## QuickStart
-To monitor Azure App Service's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz, you just need to set up the OpenTelemetry Collector with the Azure Monitor exporter. No changes are needed to your application code.
+To monitor Azure App Service's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz, you just need to set up the OpenTelemetry Collector with the Azure Monitor exporter. No changes are needed to your application code. */}
## Overview
-In this guide, you will learn how to monitor Azure App Service's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz. By monitoring these metrics, you can keep track of your application's resource utilization and performance.
+In this documentation, you will learn how to monitor Azure App Service's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz.
+By monitoring these metrics, you can keep track of your application's resource utilization and performance.
-For application-level traces and metrics, you can use the DNS name of the OpenTelemetry Collector you set up earlier. Simply configure your application to send traces and metrics to the Central Collector, and they will be forwarded to SigNoz automatically.
+For application-level traces and metrics, you can use the DNS name of the OpenTelemetry Collector you set up earlier. Simply configure your application to send
+traces and metrics to the Central Collector, and they will be forwarded to SigNoz automatically.
+
+
+
## Prerequisites
Before you can monitor your Azure App Service with SigNoz, you need to ensure the following prerequisites are met:
-1. You have an Azure subscription and an Azure App Service instance running.
-2. You have set up the Central Collector with the Azure Monitor exporter. If you haven't set it up yet, follow the instructions in the [Central Collector Setup](../../bootstrapping/collector-setup)
+1. An Azure subscription with a running Azure App Service instance.
+2. [Central Collector Setup](../../bootstrapping/collector-setup)
## Dashboard Example
@@ -47,7 +53,8 @@ Once you have completed the prerequisites, you can start monitoring your Azure A
That's it! You have successfully set up monitoring for your Azure App Service's system metrics with SigNoz.
-You don't need to make any changes to your application code to monitor the system metrics. The OpenTelemetry Collector with the Azure Monitor exporter takes care of collecting and sending the metrics to SigNoz.
+You don't need to make any changes to your application code to monitor the system metrics. The OpenTelemetry Collector with the Azure Monitor exporter takes care of
+collecting and sending the metrics to SigNoz.
## Troubleshooting
@@ -58,4 +65,8 @@ If you encounter any issues while setting up monitoring for your Azure App Servi
2. Verify that your Azure App Service instance is running and accessible.
3. Ensure that you have the necessary permissions to access the metrics in your Azure subscription.
-By following this guide, you should be able to easily monitor your Azure App Service's system metrics with SigNoz and gain valuable insights into your application's performance and resource utilization.
\ No newline at end of file
+By following this guide, you should be able to easily monitor your Azure App Service's system metrics with SigNoz and gain valuable insights into your application's
+performance and resource utilization.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/app-service/tracing.mdx b/data/docs/azure-monitoring/app-service/tracing.mdx
index 1f032d01e..f027a9a63 100644
--- a/data/docs/azure-monitoring/app-service/tracing.mdx
+++ b/data/docs/azure-monitoring/app-service/tracing.mdx
@@ -1,16 +1,34 @@
---
-date: 2024-06-13
+date: 2024-12-19
id: tracing
title: App Service Tracing
+hide_table_of_contents: true
---
-## QuickStart
+## Overview
+
+Unified monitoring of your Azure App Service involves capturing application-level metrics and traces to provide comprehensive insights into your application's
+performance and resource utilization. For more detailed information on the unified monitoring in Azure, please refer to the [Azure Monitoring Strategy](../../bootstrapping/strategy).
+
+
+
-To get started with monitoring your Azure App Service, we recommend using OpenTelemetry (Otel) SDKs to instrument your application. These SDKs will allow you to collect and forward metrics and traces to a Central Collector.
+## Prerequisites
+
+Before you proceed, ensure the following prerequisites are met:
+
+- An active Azure subscription with a running Azure App Service instance
+- [Central Collector Setup](../../bootstrapping/collector-setup)
+
+## Setup
+
+To get started with monitoring your Azure App Service, we recommend using OpenTelemetry (Otel) SDKs to instrument your application. These SDKs will allow you to
+collect and forward metrics and traces to a Central Collector.
### Installing the OpenTelemetry SDK
-Please refer to our [SigNoz Tutorials](../../../instrumentation/) or [Blog](https://signoz.io/blog/) to find information on how to instrument your application like Spring, FastAPI, NextJS, Langchain, Node.js, Flask, Django, etc. with OpenTelemetry.
+Please refer to our [SigNoz Documentation](../../../instrumentation/) to find information on how to instrument applications built with Spring, FastAPI, NextJS,
+Node.js, Flask, Django, etc. using OpenTelemetry.
```bash
@@ -27,22 +45,8 @@ npm install @opentelemetry/exporter-trace-otlp-http
export OTEL_EXPORTER_OTLP_ENDPOINT="http://:4318/"
```
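As an illustration, a minimal `tracing.js` bootstrap for a Node.js app could look like the sketch below. This is a hedged example, not the only supported setup: the package names match the `npm install` step above, `my-app-service` is a placeholder service name, and the exporter reads the `OTEL_EXPORTER_OTLP_ENDPOINT` variable exported earlier.

```javascript
// Minimal sketch of a Node.js tracing bootstrap. Assumes the packages from
// the npm install step above are present; "my-app-service" is a placeholder.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const sdk = new NodeSDK({
  serviceName: 'my-app-service',
  // The exporter picks up OTEL_EXPORTER_OTLP_ENDPOINT from the environment,
  // i.e. the DNS name of your Central Collector.
  traceExporter: new OTLPTraceExporter(),
});

sdk.start();
```

Load it before your application code, for example with `node -r ./tracing.js app.js`.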
-For application-level traces and metrics, configure your application to use the DNS name of the [Central Collector](../../bootstrapping/collector-setup) you set up earlier. This Central Collector will automatically forward the collected data to SigNoz.
-
-
-## Overview
-
-Unified monitoring of your Azure App Service involves capturing application-level metrics and traces to provide comprehensive insights into your application's performance and resource utilization. For more detailed information on the unified monitoring in Azure, please refer to the [Azure Monitoring Strategy](../../bootstrapping/strategy).
-
-
-## Prerequisites
-
-Before you proceed, ensure the following prerequisites are met:
-
-1. **Azure Subscription & App Service**: You need an active Azure subscription with a running Azure App Service instance.
-2. **Central Collector Setup**: Make sure you have set up the Central Collector with the Azure Monitor exporter. If you haven't completed this setup, follow the instructions in the [Central Collector Setup](../../bootstrapping/collector-setup).
-
-
+For application-level traces and metrics, configure your application to use the DNS name of the [Central Collector](../../bootstrapping/collector-setup) you set up
+earlier. This Central Collector will automatically forward the collected data to SigNoz.
## Troubleshooting
@@ -55,4 +59,8 @@ If you encounter any issues while setting up monitoring for your Azure App Servi
2. **Azure App Service Accessibility**:
- Confirm that your Azure App Service instance is up and accessible.
-By following this guide, you should be able to monitor your Azure App Service's traces with SigNoz effectively, gaining valuable insights into your application's performance and resource utilization.
\ No newline at end of file
+By following this guide, you should be able to monitor your Azure App Service's traces with SigNoz effectively, gaining valuable insights into your application's
+performance and resource utilization.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/az-blob-storage/logging.mdx b/data/docs/azure-monitoring/az-blob-storage/logging.mdx
index 4c0f5edd9..97449f490 100644
--- a/data/docs/azure-monitoring/az-blob-storage/logging.mdx
+++ b/data/docs/azure-monitoring/az-blob-storage/logging.mdx
@@ -1,12 +1,15 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: logging
title: Azure Blob Storage Audit Logging
+hide_table_of_contents: true
---
## Overview
-Blob Storage Audit Logging is a feature of Azure Blob Storage that allows you to track and monitor access to your blobs. It provides detailed information about who accessed your blobs, when, and what actions were performed. This feature can help you identify and respond to security incidents or unauthorized access to your data more effectively (SIEM).
+Blob Storage Audit Logging is a feature of Azure Blob Storage that allows you to track and monitor access to your blobs. It provides detailed information about who
+accessed your blobs, when, and what actions were performed. This feature can help you identify and respond to security incidents or unauthorized access to your
+data more effectively (for example, as part of a SIEM workflow).
The following categories of Logs are available to export to Storage Account or EventHub.
@@ -14,6 +17,9 @@ The following categories of Logs are available to export to Storage Account or E
- Storage Write
- Storage Delete
+
+
+
### Prerequisites
- [EventHub Setup](../../bootstrapping/data-ingestion)
@@ -44,4 +50,7 @@ That's it! You have successfully set up logging for your Azure Blob Storage.
Blob Storage Diagnostic Settings
-
\ No newline at end of file
+
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/az-blob-storage/metrics.mdx b/data/docs/azure-monitoring/az-blob-storage/metrics.mdx
index f5a316cc5..eb9222fa1 100644
--- a/data/docs/azure-monitoring/az-blob-storage/metrics.mdx
+++ b/data/docs/azure-monitoring/az-blob-storage/metrics.mdx
@@ -1,37 +1,44 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: metrics
title: Azure Blob Storage Metrics
+hide_table_of_contents: true
---
## QuickStart
-To monitor Azure Blob Storage's system metrics like Total Requests, Total Ingress / Egress, and Total Errors with SigNoz, you just need to set up the OpenTelemetry Collector with the Azure Monitor exporter.
+To monitor Azure Blob Storage's system metrics like Total Requests, Total Ingress / Egress, and Total Errors with SigNoz, you just need to set up the OpenTelemetry
+Collector with the Azure Monitor exporter.
## Overview
-Azure Blob Storage is a cloud storage service that provides scalable, durable, and highly available storage for your data. It is designed to store large amounts of unstructured data, such as files, blobs, and objects, and is optimized for data access and retrieval.
+Azure Blob Storage is a cloud storage service that provides scalable, durable, and highly available storage for your data. It is designed to store large amounts of
+unstructured data, such as files, blobs, and objects, and is optimized for data access and retrieval.
-In this guide, you will learn how to monitor Azure Blob Storage's system metrics like Total Requests, Total Ingress / Egress, and Total Errors with SigNoz. By monitoring these metrics, you can keep track of your application's resource utilization and performance.
+In this document, you will learn how to monitor Azure Blob Storage's system metrics like Total Requests, Total Ingress / Egress, and Total Errors with SigNoz. By
+monitoring these metrics, you can keep track of your application's resource utilization and performance.
+
+
+
## Prerequisites
Before you can monitor your Azure Blob Storage with SigNoz, you need to ensure the following prerequisites are met:
-1. You have an Azure subscription and an Azure Blob Storage instance running.
-2. You have set up the Central Collector with the Azure Monitor exporter. If you haven't set it up yet, follow the instructions in the [Central Collector Setup](../../bootstrapping/collector-setup)
+1. An Azure subscription and an Azure Blob Storage instance running
+2. [Central Collector Setup](../../bootstrapping/collector-setup)
## Dashboard Example
Once you have completed the prerequisites, you can start monitoring your Azure Blob Storage's system metrics with SigNoz.
1. Log in to your SigNoz account.
-2. Navigate to the Dashboards, and add an dashboard
-3. Add a Timeseries Panel
-4. In *Metrics*, select `azure_ingress_total` and *Avg By* select tag `location`
-5. In Filter say `name = `
-6. Hit “Save Changes” You now have Total Ingress of your Azure Blob Storage in a Dashboard for reporting and alerting
+2. Navigate to Dashboards, and add a dashboard.
+3. Add a Timeseries Panel.
+4. In *Metrics*, select `azure_ingress_total`, and in *Avg By*, select the `location` tag.
+5. In *Filter*, specify `name = `.
+6. Hit “Save Changes”. You now have the Total Ingress of your Azure Blob Storage in a dashboard for reporting and alerting.
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/az-container-apps/logging.mdx b/data/docs/azure-monitoring/az-container-apps/logging.mdx
index 4f75b2836..98d55b4b3 100644
--- a/data/docs/azure-monitoring/az-container-apps/logging.mdx
+++ b/data/docs/azure-monitoring/az-container-apps/logging.mdx
@@ -1,6 +1,8 @@
---
+date: 2024-12-19
id: logging
title: Container App Logging
+hide_table_of_contents: true
---
## Overview
@@ -12,7 +14,10 @@ The following categories of Container Apps Logs are available to export to Stora
Although, the application logs could be sent directly in the Application Level using a OpenTelemetry Log Appender, this might not be an ideal solution for legacy software or micro-services model. It’s easier to do centralised logging for both application logs, system logs and SIEM Audit logs.
-### Prerequisites
+
+
+
+## Prerequisites
- [EventHub Setup](../../bootstrapping/data-ingestion)
- [Central Collector Setup](../../bootstrapping/collector-setup)
@@ -41,4 +46,7 @@ Although, the application logs could be sent directly in the Application Level u
5. Configure the destination details as "**Stream to an Event Hub**" and select the Event Hub namespace and Event Hub name created during the [EventHub Setup](../../bootstrapping/data-ingestion)
6. Save the diagnostic settings
-That's it! You have successfully set up logging for your Azure Container App.
\ No newline at end of file
+That's it! You have successfully set up logging for your Azure Container App.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/az-container-apps/metrics.mdx b/data/docs/azure-monitoring/az-container-apps/metrics.mdx
index 4cc85e3e9..3d0345805 100644
--- a/data/docs/azure-monitoring/az-container-apps/metrics.mdx
+++ b/data/docs/azure-monitoring/az-container-apps/metrics.mdx
@@ -1,26 +1,33 @@
---
+date: 2024-12-19
id: metrics
title: Container App Metrics
+hide_table_of_contents: true
---
-# QuickStart
+{/* # QuickStart
-To monitor Azure Container App's system metrics like CPU Percentage, Memory Percentage, Replica Count with SigNoz, you just need to set up the OpenTelemetry Collector with the Azure Monitor exporter. No changes are needed to your application code.
+To monitor Azure Container App's system metrics like CPU Percentage, Memory Percentage, Replica Count with SigNoz, you just need to set up the OpenTelemetry Collector with the Azure Monitor exporter. No changes are needed to your application code. */}
-# Overview
+## Overview
-In this guide, you will learn how to monitor Azure Container App's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz. By monitoring these metrics, you can keep track of your application's resource utilization and performance.
+In this document, you will learn how to monitor Azure Container App's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz. By
+monitoring these metrics, you can keep track of your application's resource utilization and performance.
-For application-level traces and metrics, you can use the DNS name of the OpenTelemetry Collector you set up earlier. Simply configure your application to send traces and metrics to the Central Collector, and they will be forwarded to SigNoz automatically.
+For application-level traces and metrics, you can use the DNS name of the OpenTelemetry Collector you set up earlier. Simply configure your application to send
+traces and metrics to the Central Collector, and they will be forwarded to SigNoz automatically.
-# Prerequisites
+
+
+
+## Prerequisites
Before you can monitor your Azure Container App with SigNoz, you need to ensure the following prerequisites are met:
-1. You have an Azure subscription and an Azure Container App instance running.
-2. You have set up the Central Collector with the Azure Monitor exporter. If you haven't set it up yet, follow the instructions in the [Central Collector Setup](../../bootstrapping/collector-setup)
+1. An Azure subscription and an Azure Container App instance running
+2. [Central Collector Setup](../../bootstrapping/collector-setup)
-# Dashboard Example
+## Dashboard Example
Once you have completed the prerequisites, you can start monitoring your Azure Container App's system metrics with SigNoz. Here's how you can do it:
@@ -45,9 +52,10 @@ Once you have completed the prerequisites, you can start monitoring your Azure C
That's it! You have successfully set up monitoring for your Azure Container App's system metrics with SigNoz.
-Note: You don't need to make any changes to your application code to monitor the system metrics. The OpenTelemetry Collector with the Azure Monitor exporter takes care of collecting and sending the metrics to SigNoz.
+Note: You don't need to make any changes to your application code to monitor the system metrics. The OpenTelemetry Collector with the Azure Monitor exporter takes
+care of collecting and sending the metrics to SigNoz.
-# Troubleshooting
+## Troubleshooting
If you encounter any issues while setting up monitoring for your Azure Container App's system metrics with SigNoz, here are a few troubleshooting steps you can try:
@@ -55,4 +63,8 @@ If you encounter any issues while setting up monitoring for your Azure Container
2. Verify that your Azure Container App instance is running and accessible.
3. Ensure that you have the necessary permissions to access the metrics in your Azure subscription.
-By following this guide, you should be able to easily monitor your Azure Container App's system metrics with SigNoz and gain valuable insights into your application's performance and resource utilization.
\ No newline at end of file
+By following this document, you should be able to easily monitor your Azure Container App's system metrics with SigNoz and gain valuable insights into your
+application's performance and resource utilization.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/az-container-apps/tracing.mdx b/data/docs/azure-monitoring/az-container-apps/tracing.mdx
index bed027c4b..1e803652f 100644
--- a/data/docs/azure-monitoring/az-container-apps/tracing.mdx
+++ b/data/docs/azure-monitoring/az-container-apps/tracing.mdx
@@ -1,15 +1,34 @@
---
+date: 2024-12-19
id: tracing
title: Container Apps Tracing
+hide_table_of_contents: true
---
-## QuickStart
+## Overview
+
+Unified monitoring of your Azure Container App involves capturing application-level metrics and traces to provide comprehensive insights into your application's
+performance and resource utilization. For more detailed information on the unified monitoring in Azure, please refer to the [Azure Monitoring Strategy](../../bootstrapping/strategy).
+
+
+
-To get started with monitoring your Azure Container App, we recommend using OpenTelemetry (Otel) SDKs to instrument your application. These SDKs will allow you to collect and forward metrics and traces to a Central Collector.
+## Prerequisites
+
+Before you proceed, ensure the following prerequisites are met:
+
+1. An active Azure subscription with a running Azure Container App instance
+2. [Central Collector Setup](../../bootstrapping/collector-setup)
+
+## Setup
+
+To get started with monitoring your Azure Container App, we recommend using the OpenTelemetry (OTel) SDKs to instrument your application. These SDKs allow you to
+collect metrics and traces and forward them to a Central Collector.
### Installing the OpenTelemetry SDK
-Please refer to our [SigNoz Tutorials](../../../instrumentation/) or [Blog](https://signoz.io/blog/) to find information on how to instrument your application like Spring, FastAPI, NextJS, Langchain, Node.js, Flask, Django, etc. with OpenTelemetry.
+Please refer to our [SigNoz Tutorials](../../../instrumentation/) or [Blog](https://signoz.io/blog/) for information on how to instrument applications built with
+Spring, FastAPI, NextJS, Node.js, Flask, Django, etc. with OpenTelemetry.
```bash
@@ -26,21 +45,8 @@ npm install @opentelemetry/exporter-trace-otlp-http
export OTEL_EXPORTER_OTLP_ENDPOINT="http://:4318/"
```
-For application-level traces and metrics, configure your application to use the DNS name of the [Central Collector](../../bootstrapping/collector-setup) you set up earlier. This Central Collector will automatically forward the collected data to SigNoz.
-
-
-## Overview
-
-Unified monitoring of your Azure Container App involves capturing application-level metrics and traces to provide comprehensive insights into your application's performance and resource utilization. For more detailed information on the unified monitoring in Azure, please refer to the [Azure Monitoring Strategy](../../bootstrapping/strategy).
-
-
-## Prerequisites
-
-Before you proceed, ensure the following prerequisites are met:
-
-1. **Azure Subscription & Container App**: You need an active Azure subscription with a running Azure Container App instance.
-2. **Central Collector Setup**: Make sure you have set up the Central Collector with the Azure Monitor exporter. If you haven't completed this setup, follow the instructions in the [Central Collector Setup](../../bootstrapping/collector-setup).
-
+For application-level traces and metrics, configure your application to use the DNS name of the [Central Collector](../../bootstrapping/collector-setup) you set up
+earlier. This Central Collector will automatically forward the collected data to SigNoz.
## Troubleshooting
@@ -54,4 +60,8 @@ If you encounter any issues while setting up monitoring for your Azure Container
2. **Azure Container App Accessibility**:
- Confirm that your Azure Container App instance is up and accessible.
-By following this guide, you should be able to monitor your Azure Container App's traces with SigNoz effectively, gaining valuable insights into your application's performance and resource utilization.
\ No newline at end of file
+By following this guide, you should be able to monitor your Azure Container App's traces with SigNoz effectively, gaining valuable insights into your application's
+performance and resource utilization.
+
+
+
diff --git a/data/docs/azure-monitoring/az-fns/logging.mdx b/data/docs/azure-monitoring/az-fns/logging.mdx
index 31956bcda..e6ac47642 100644
--- a/data/docs/azure-monitoring/az-fns/logging.mdx
+++ b/data/docs/azure-monitoring/az-fns/logging.mdx
@@ -1,7 +1,8 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: logging
title: Azure Functions Logging
+hide_table_of_contents: true
---
## Overview
@@ -10,8 +11,11 @@ The following categories of Logs are available to export to Storage Account or E
- Function App logs
- Function Authentication logs (beta)
-
Although, the application logs could be sent directly in the Application Level using a OpenTelemetry Log Appender, this might not an option for managed services.
+
+
+
+
### Prerequisites
- [EventHub Setup](../../bootstrapping/data-ingestion)
@@ -27,4 +31,7 @@ Although, the application logs could be sent directly in the Application Level u
5. Configure the destination details as "**Stream to an Event Hub**" and select the Event Hub namespace and Event Hub name created during the [EventHub Setup](../../bootstrapping/data-ingestion)
6. Save the diagnostic settings
-That's it! You have successfully set up logging for your Azure Function.
\ No newline at end of file
+That's it! You have successfully set up logging for your Azure Function.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/az-fns/metrics.mdx b/data/docs/azure-monitoring/az-fns/metrics.mdx
index 7d0c57e8f..5504edcdc 100644
--- a/data/docs/azure-monitoring/az-fns/metrics.mdx
+++ b/data/docs/azure-monitoring/az-fns/metrics.mdx
@@ -1,25 +1,32 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: metrics
title: Azure Function Metrics
+hide_table_of_contents: true
---
-## QuickStart
+{/* ## Quickstart
-To monitor Azure Function's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz, you just need to set up the OpenTelemetry Collector with the Azure Monitor exporter. No changes are needed to your application code.
+To monitor Azure Function's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz, you just need to set up the OpenTelemetry
+Collector with the Azure Monitor exporter. No changes are needed to your application code. */}
## Overview
-In this guide, you will learn how to monitor Azure Function's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz. By monitoring these metrics, you can keep track of your application's resource utilization and performance.
+In this document, you will learn how to monitor Azure Function's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz.
+By monitoring these metrics, you can keep track of your application's resource utilization and performance.
-For application-level traces and metrics, you can use the DNS name of the OpenTelemetry Collector you set up earlier. Simply configure your application to send traces and metrics to the Central Collector, and they will be forwarded to SigNoz automatically.
+For application-level traces and metrics, you can use the DNS name of the OpenTelemetry Collector you set up earlier. Simply configure your application to send
+traces and metrics to the Central Collector, and they will be forwarded to SigNoz automatically.
+
+
+
## Prerequisites
Before you can monitor your Azure Function with SigNoz, you need to ensure the following prerequisites are met:
-1. You have an Azure subscription and an Azure Function instance running.
-2. You have set up the Central Collector with the Azure Monitor exporter. If you haven't set it up yet, follow the instructions in the [Central Collector Setup](../../bootstrapping/collector-setup)
+1. An Azure subscription and an Azure Function instance running
+2. [Central Collector Setup](../../bootstrapping/collector-setup)
## Dashboard Example
@@ -59,4 +66,8 @@ If you encounter any issues while setting up monitoring for your Azure Function'
3. Ensure that you have the necessary permissions to access the metrics in your Azure subscription.
4. Double-check the configuration of the OpenTelemetry Collector with the Azure Monitor exporter to ensure that a resource group filter is not preventing the metrics from being collected.
-By following this guide, you should be able to easily monitor your Azure Function's system metrics with SigNoz and gain valuable insights into your application's performance and resource utilization.
\ No newline at end of file
+By following this document, you should be able to easily monitor your Azure Function's system metrics with SigNoz and gain valuable insights into your application's
+performance and resource utilization.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/az-fns/tracing.mdx b/data/docs/azure-monitoring/az-fns/tracing.mdx
index f1cba58b4..5f539a1f0 100644
--- a/data/docs/azure-monitoring/az-fns/tracing.mdx
+++ b/data/docs/azure-monitoring/az-fns/tracing.mdx
@@ -1,15 +1,34 @@
---
+date: 2024-12-19
id: tracing
title: Azure Function Tracing
+hide_table_of_contents: true
---
-## QuickStart
+## Overview
+
+Unified monitoring of your Azure Function App involves capturing application-level metrics and traces to provide comprehensive insights into your application's
+performance and resource utilization. For more detailed information on the unified monitoring in Azure, please refer to the [Azure Monitoring Strategy](../../bootstrapping/strategy).
+
+
+
-To get started with monitoring your Azure Function App, we recommend using OpenTelemetry (Otel) SDKs to instrument your application. These SDKs will allow you to collect and forward metrics and traces to a Central Collector.
+## Prerequisites
+
+Before you proceed, ensure the following prerequisites are met:
+
+1. An active Azure subscription with a running Azure Function App instance
+2. [Central Collector Setup](../../bootstrapping/collector-setup)
+
+## Setup
+
+To get started with monitoring your Azure Function App, we recommend using the OpenTelemetry (OTel) SDKs to instrument your application. These SDKs allow you to
+collect metrics and traces and forward them to a Central Collector.
### Installing the OpenTelemetry SDK
-Please refer to our [SigNoz Tutorials](../../../instrumentation/) or [Blog](https://signoz.io/blog/) to find information on how to instrument your application like Spring, FastAPI, NextJS, Langchain, Node.js, Flask, Django, etc. with OpenTelemetry.
+Please refer to our [SigNoz Tutorials](../../../instrumentation/) or [Blog](https://signoz.io/blog/) for information on how to instrument applications built with
+Spring, FastAPI, NextJS, Node.js, Flask, Django, etc. with OpenTelemetry.
```bash
@@ -26,22 +45,8 @@ npm install @opentelemetry/exporter-trace-otlp-http
export OTEL_EXPORTER_OTLP_ENDPOINT="http://:4318/"
```
-For application-level traces and metrics, configure your application to use the DNS name of the [Central Collector](../../bootstrapping/collector-setup) you set up earlier. This Central Collector will automatically forward the collected data to SigNoz.
-
-
-## Overview
-
-Unified monitoring of your Azure Function App involves capturing application-level metrics and traces to provide comprehensive insights into your application's performance and resource utilization. For more detailed information on the unified monitoring in Azure, please refer to the [Azure Monitoring Strategy](../../bootstrapping/strategy).
-
-
-## Prerequisites
-
-Before you proceed, ensure the following prerequisites are met:
-
-1. **Azure Subscription & Container App**: You need an active Azure subscription with a running Azure Function App instance.
-2. **Central Collector Setup**: Make sure you have set up the Central Collector with the Azure Monitor exporter. If you haven't completed this setup, follow the instructions in the [Central Collector Setup](../../bootstrapping/collector-setup).
-
-
+For application-level traces and metrics, configure your application to use the DNS name of the [Central Collector](../../bootstrapping/collector-setup) you set up
+earlier. This Central Collector will automatically forward the collected data to SigNoz.
## Troubleshooting
@@ -54,4 +59,8 @@ If you encounter any issues while setting up monitoring for your Azure Function
2. **Azure Function App Accessibility**:
- Confirm that your Azure Function App instance is up and accessible.
-By following this guide, you should be able to monitor your Azure Function App's traces with SigNoz effectively, gaining valuable insights into your application's performance and resource utilization.
\ No newline at end of file
+By following this guide, you should be able to monitor your Azure Function App's traces with SigNoz effectively, gaining valuable insights into your application's
+performance and resource utilization.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/db-metrics.mdx b/data/docs/azure-monitoring/db-metrics.mdx
index 774170f3c..7c23db107 100644
--- a/data/docs/azure-monitoring/db-metrics.mdx
+++ b/data/docs/azure-monitoring/db-metrics.mdx
@@ -1,26 +1,33 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: db-metrics
title: SQL Database Metrics
+hide_table_of_contents: true
---
## QuickStart
-To monitor Database's system metrics like CPU Percentage, Memory Percentage, Storage Usage with SigNoz, you just need to set up the OpenTelemetry Collector with the Azure Monitor exporter and enable Monitoring for the databases.
+To monitor your SQL Database's system metrics like CPU Percentage, Memory Percentage, and Storage Usage with SigNoz, you just need to set up the OpenTelemetry
+Collector with the Azure Monitor exporter and enable monitoring for the databases.
## Overview
-In this guide, you will learn how to monitor Database's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz. By monitoring these metrics, you can keep track of your application's resource utilization and performance.
+In this document, you will learn how to monitor your SQL Database's system metrics like CPU Percentage, Memory Percentage, Data In, and Data Out with SigNoz. By
+monitoring these metrics, you can keep track of your application's resource utilization and performance.
-For application-level traces and metrics, you can use the DNS name of the OpenTelemetry Collector you set up earlier. Simply configure your application to send traces and metrics to the Central Collector, and they will be forwarded to SigNoz automatically.
+For application-level traces and metrics, you can use the DNS name of the OpenTelemetry Collector you set up earlier. Simply configure your application to send
+traces and metrics to the Central Collector, and they will be forwarded to SigNoz automatically.
+
+
+
## Prerequisites
Before you can monitor your Database with SigNoz, you need to ensure the following prerequisites are met:
-1. You have an Azure subscription and an Database instance running.
-2. You have set up the Central Collector with the Azure Monitor exporter. If you haven't set it up yet, follow the instructions in the [Central Collector Setup](../bootstrapping/collector-setup)
-3. You should have sql monitoring profile created to monitor the databases in Azure Monitor if not, Follow this guide to [Create SQL Monitoring Profile](https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-insights-enable?view=azuresql#create-sql-monitoring-profile)
+1. An Azure subscription and a Database instance running
+2. [Central Collector Setup](../bootstrapping/collector-setup)
+3. [SQL monitoring profile](https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-insights-enable?view=azuresql#create-sql-monitoring-profile) created to monitor the databases in Azure Monitor
## Setup
@@ -62,4 +69,7 @@ If you encounter any issues while setting up monitoring for your Database's syst
3. Ensure that you have the necessary permissions to access the metrics in your Azure subscription.
4. Double-check the configuration of the OpenTelemetry Collector with the Azure Monitor exporter to ensure that a resource group filter is not preventing the metrics from being collected.
-By following this guide, you should be able to easily monitor your Database's system metrics with SigNoz and gain valuable insights into your application's performance and resource utilization.
\ No newline at end of file
+By following this document, you should be able to easily monitor your Database's system metrics with SigNoz and gain valuable insights into your application's performance and resource utilization.
+
+
+
\ No newline at end of file
diff --git a/data/docs/azure-monitoring/virtual-machines/vm-metrics.mdx b/data/docs/azure-monitoring/virtual-machines/vm-metrics.mdx
index 7b4510cc5..5c46d2d13 100644
--- a/data/docs/azure-monitoring/virtual-machines/vm-metrics.mdx
+++ b/data/docs/azure-monitoring/virtual-machines/vm-metrics.mdx
@@ -1,21 +1,25 @@
---
-date: 2024-06-06
+date: 2024-12-19
id: vm-metrics
title: VM Host Metrics & Logging
+hide_table_of_contents: true
---
## Overview
-In this guide, we'll walk you through the process of setting up an Azure Virtual Machine to send logs, traces and metrics to SigNoz, an open-source observability platform. By following these steps, you'll be able to monitor your Azure VM's performance and troubleshoot issues using SigNoz.
+In this document, we'll walk you through the process of setting up an Azure Virtual Machine to send logs, traces, and metrics to SigNoz, an open-source
+observability platform. By following these steps, you'll be able to monitor your Azure VM's performance and troubleshoot issues using SigNoz.
+
+
+
## Prerequisites
Before you begin, ensure that you have the following:
-1. [SigNoz Cloud Account](https://signoz.io/teams/)
-2. An Azure subscription with permissions to create and manage Virtual Machines.
-3. [Central Collector Setup](../../bootstrapping/collector-setup)
-4. Azure Linux VM with SSH access enabled. Follow [SSH Keys Guide](https://learn.microsoft.com/en-us/azure/virtual-machines/ssh-keys-portal) to enable SSH access.
+1. An Azure subscription with permissions to create and manage Virtual Machines.
+2. [Central Collector Setup](../../bootstrapping/collector-setup)
+3. Azure Linux VM with SSH access enabled. Follow [SSH Keys Guide](https://learn.microsoft.com/en-us/azure/virtual-machines/ssh-keys-portal) to enable SSH access.
## Setup
@@ -26,13 +30,15 @@ The [SSH Keys Guide](https://learn.microsoft.com/en-us/azure/virtual-machines/ss
### Install OpenTelemetry Collector
-Follow the [OpenTelemetry SigNoz Guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) to install the OpenTelemetry Collector.
+Follow the [OpenTelemetry SigNoz document](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) to install the OpenTelemetry Collector.
### Configure Collector
-The configuration file for the OpenTelemetry Collector is located at `/etc/otelcol-contrib/config.yaml`. We send the logs, traces and metrics to the central collector instead of SigNoz directly, in order to adopt a scalable architecture pattern. We recommend to our users to use the same pattern in your Azure subscription.
+The configuration file for the OpenTelemetry Collector is located at `/etc/otelcol-contrib/config.yaml`. We send the logs, traces and metrics to the central
+collector instead of SigNoz directly in order to adopt a scalable architecture pattern. We recommend using the same pattern in your Azure
+subscription.
-```bash
+```yaml
cat > /etc/otelcol-contrib/config.yaml << EOF
receivers:
filelog:
@@ -117,7 +123,8 @@ EOF
```
#### OTLP Exporter Configuration
-Make sure to replace `` with the DNS name of your central collector. If you don't have a central collector yet, follow the [Central Collector Setup](../../bootstrapping/collector-setup) guide to set one up.
+Make sure to replace `` with the DNS name of your central collector. If you don't have a central collector yet, follow the
+[Central Collector Setup](../../bootstrapping/collector-setup) document to set one up.
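
As a sketch of what this exporter section might look like once filled in (the endpoint below is a placeholder, not your actual collector's DNS name):

```yaml
exporters:
  otlp:
    # Placeholder: replace with the DNS name or IP of your central collector
    endpoint: "<central-collector-dns-name>:4317"
    tls:
      insecure: true
```

Port 4317 is the default OTLP gRPC port; if your central collector listens on a different port or requires TLS, adjust `endpoint` and the `tls` block accordingly.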
#### File Logs Receiver Configuration
The file logs receiver needs to be configured with the paths to the log files that you want to stream to SigNoz. You can specify multiple paths by listing them as an array.
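
For example, a filelog receiver watching syslog plus an application's log directory might look like this (the paths are illustrative, so substitute your own):

```yaml
receivers:
  filelog:
    include:
      - /var/log/syslog
      - /var/log/myapp/*.log   # glob patterns are supported
    start_at: end              # tail only new lines; use "beginning" to read existing content
```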
@@ -150,3 +157,6 @@ If you encounter any issues during the setup process, here are a few troubleshoo
- Verify that the central collector is running and configured correctly.
That's it! You have now successfully set up your Azure Virtual Machine to send logs and metrics to SigNoz. You can start monitoring your VM's performance and troubleshooting any issues using the SigNoz dashboard.
+
+
+
\ No newline at end of file
diff --git a/data/docs/gcp-monitoring/app-engine/logging.mdx b/data/docs/gcp-monitoring/app-engine/logging.mdx
index adfd23bd3..2ac1ebe56 100644
--- a/data/docs/gcp-monitoring/app-engine/logging.mdx
+++ b/data/docs/gcp-monitoring/app-engine/logging.mdx
@@ -8,11 +8,11 @@ hide_table_of_contents: true
## Overview
-This documentation provides a detailed walkthrough on how to set up Google App Engine to send the logs directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your App Engine logs to SigNoz.
+This documentation provides a detailed walkthrough on how to set up Google App Engine to send the logs directly to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure App Engine
* Create Pub/Sub topic
@@ -20,12 +20,11 @@ This documentation provides a detailed walkthrough on how to set up Google App E
* Create Compute Engine instance
* Create OTel Collector to route logs from Pub/Sub topic to SigNoz Cloud
* Invoke the deployed App Engine service to generate logs
-* Send and Visualize the logs in SigNoz Cloud
+* Send and Visualize the logs in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or App Engine Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
* [Cloud Build API](https://console.cloud.google.com/flows/enableapi?apiid=cloudbuild.googleapis.com) is enabled
@@ -108,11 +107,11 @@ Open the URL in the new browser which will invoke the service and put out the pr
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the App Engine logs, use the following filter conditions:
@@ -122,7 +121,7 @@ resource.type="gae_app"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
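
At its core, the agent's job here is to pull messages from the Pub/Sub subscription and forward them to SigNoz. A minimal sketch of the relevant receiver configuration might look like the following (the project and subscription names are placeholders, and exact field names can vary across collector-contrib versions):

```yaml
receivers:
  googlecloudpubsub:
    # Placeholders: use your own GCP project and the subscription
    # attached to the log router's Pub/Sub topic
    project: my-gcp-project
    subscription: projects/my-gcp-project/subscriptions/my-logs-subscription
```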
@@ -233,8 +232,8 @@ App Engine Logs in SigNoz Cloud
-
-**Here's a quick summary of what we will be doing in this guide**
+
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure App Engine
* Create Pub/Sub topic
@@ -242,7 +241,7 @@ App Engine Logs in SigNoz Cloud
* Self-Host SigNoz
* Create OTel Collector to route logs from Pub/Sub topic to self hosted SigNoz
* Invoke the deployed App Engine service to generate logs
-* Send and Visualize the logs in SigNoz
+* Send and Visualize the logs in SigNoz */}
## Prerequisites
@@ -250,7 +249,6 @@ App Engine Logs in SigNoz Cloud
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or App Engine Admin privilege.
* Access to a project in GCP
* [Cloud Build API](https://console.cloud.google.com/flows/enableapi?apiid=cloudbuild.googleapis.com) is enabled
-* [Self-Hosted SigNoz](https://signoz.io/docs/install/docker/)
For more details on how to configure Self-Hosted SigNoz for logs, check the [official Self-Hosted SigNoz documentation](https://signoz.io/docs/userguide/send-logs-http/#send-logs-to-self-hosted-signoz) and navigate to the "Send Logs to Self-Hosted SigNoz" section.
@@ -335,11 +333,11 @@ Open the URL in the new browser which will invoke the service and put out the pr
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Setup Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the App Engine logs, use the following filter conditions:
@@ -349,7 +347,7 @@ resource.type="gae_app"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
diff --git a/data/docs/gcp-monitoring/app-engine/metrics.mdx b/data/docs/gcp-monitoring/app-engine/metrics.mdx
index 28ea74df6..d0d0cd673 100644
--- a/data/docs/gcp-monitoring/app-engine/metrics.mdx
+++ b/data/docs/gcp-monitoring/app-engine/metrics.mdx
@@ -8,22 +8,21 @@ hide_table_of_contents: true
## Overview
-This document provides a detailed walkthrough on how to send Google App Engine metrics to SigNoz. By the end of this guide, you will have a setup that sends your App Engine metrics to SigNoz.
+This document provides a detailed walkthrough on how to send Google App Engine metrics to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure App Engine service to generate the metrics
* Invoke the App Engine service
* Deploy OpenTelemetry Collector to scrape the metrics from Google Cloud Monitoring
-* Send and Visualize the metrics obtained by OpenTelemetry in SigNoz Cloud
+* Send and Visualize the metrics obtained by OpenTelemetry in SigNoz Cloud */}
## Prerequisites
* A [Google Cloud account](https://console.cloud.google.com/) with administrative privileges or App Engine Admin privileges
-* A [SigNoz Cloud account](https://signoz.io/teams/) (used for this demonstration). You'll need the ingestion details. To obtain your Ingestion Key and Ingestion URL, log in to your SigNoz Cloud account and navigate to Settings >> Ingestion Settings
* Access to a project in Google Cloud Platform (GCP)
* [Cloud Build API](https://console.cloud.google.com/flows/enableapi?apiid=cloudbuild.googleapis.com) is enabled
@@ -167,7 +166,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for App Engine.
@@ -200,13 +199,13 @@ If you run into any problems while setting up monitoring for your App Engine's m
-**Here’s a quick summary of what we will be doing in this guide**
+{/* **Here’s a quick summary of what we will be doing in this guide**
* Create and configure App Engine to generate the metrics
* Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring
* Deploy the self-hosted **SigNoz**
* Invoke the App Engine
-* Visualize the metrics in the **SigNoz** dashboard
+* Visualize the metrics in the **SigNoz** dashboard */}
## Prerequisites
@@ -397,7 +396,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to your Self-Hosted SigNoz UI, and navigate to the Self-Hosted SigNoz dashboard. Click on the **Dashboards** section to view the metrics. Create a new dashboard (if not already present). The default Self-Hosted SigNoz dashboard endpoint would be `http://:3301`; however, the URL can be different based on how you have set up the infrastructure.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for App Engine.
diff --git a/data/docs/gcp-monitoring/app-engine/tracing.mdx b/data/docs/gcp-monitoring/app-engine/tracing.mdx
index 02742ccdb..0e3bd35be 100644
--- a/data/docs/gcp-monitoring/app-engine/tracing.mdx
+++ b/data/docs/gcp-monitoring/app-engine/tracing.mdx
@@ -8,20 +8,19 @@ hide_table_of_contents: true
## Overview
-This documentation provides a detailed walkthrough on how to set up Google App Engine to send the traces directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your App Engine traces to SigNoz.
+This documentation provides a detailed walkthrough on how to set up Google App Engine to send the traces directly to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure App Engine
* Invoke the deployed App Engine service to generate traces
-* Send and Visualize the traces in SigNoz Cloud
+* Send and Visualize the traces in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or App Engine Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
* [Cloud Build API](https://console.cloud.google.com/flows/enableapi?apiid=cloudbuild.googleapis.com) is enabled
@@ -213,19 +212,18 @@ App Engine Traces in SigNoz Cloud
-
-**Here's a quick summary of what we will be doing in this guide**
+
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure App Engine
* Invoke the deployed App Engine service to generate traces
-* Send and Visualize the traces in SigNoz
+* Send and Visualize the traces in SigNoz */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or App Engine Admin privilege
* Access to a project in GCP
* [Cloud Build API](https://console.cloud.google.com/flows/enableapi?apiid=cloudbuild.googleapis.com) is enabled
-* [Self-Hosted SigNoz](https://signoz.io/docs/install/docker/)
For more details on how to configure Self-Hosted SigNoz for logs, check the [official Self-Hosted SigNoz documentation](https://signoz.io/docs/userguide/send-logs-http/#send-logs-to-self-hosted-signoz) and navigate to the "Send Logs to Self-Hosted SigNoz" section.
diff --git a/data/docs/gcp-monitoring/cloud-monitoring/metrics.mdx b/data/docs/gcp-monitoring/cloud-monitoring/metrics.mdx
index 4f112be88..d672dacf8 100644
--- a/data/docs/gcp-monitoring/cloud-monitoring/metrics.mdx
+++ b/data/docs/gcp-monitoring/cloud-monitoring/metrics.mdx
@@ -10,23 +10,22 @@ hide_table_of_contents: true
Google Cloud Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. It collects metrics, events, and metadata from Google Cloud, hosted uptime probes, and application instrumentation.
-This document provides a detailed walkthrough on how to send Google Cloud Monitoring metrics to SigNoz. By the end of this guide, you will have a setup that sends your Cloud Monitoring metrics to SigNoz.
+This document provides a detailed walkthrough on how to send Google Cloud Monitoring metrics to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create uptime check for Cloud Run service
* Create and configure Compute Engine VM instance to deploy OpenTelemetry Collector
* Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring
-* Send and Visualize the metrics in SigNoz Cloud
+* Send and Visualize the metrics in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Monitoring Admin and Compute Engine Admin privilege
* Cloud Run Admin and Artifact Registry Admin in case you want to setup Cloud Run service to create an uptime check
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
## Setup
@@ -115,7 +114,7 @@ The uptime check is created, and from here on, the metrics will start getting em
### Deploy OpenTelemetry to fetch the metrics from Google Cloud Monitoring
-You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
**Step 1:** Install and configure OpenTelemetry for scraping the metrics from Google Cloud Monitoring. Follow the [OpenTelemetry Binary Usage in Virtual Machine](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) guide for detailed instructions.
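
Conceptually, the collector polls the Cloud Monitoring API for the metrics you name and re-exports them to SigNoz. A sketch of such a receiver configuration, assuming the `googlecloudmonitoring` receiver from collector-contrib (the project ID is a placeholder, and field names may differ by collector version):

```yaml
receivers:
  googlecloudmonitoring:
    collection_interval: 120s     # Cloud Monitoring metrics are not real-time; poll conservatively
    project_id: my-gcp-project    # placeholder project ID
    metrics_list:
      - metric_name: "monitoring.googleapis.com/uptime_check/check_passed"
```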
@@ -195,7 +194,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** Select metric for Cloud Monitoring
@@ -228,19 +227,18 @@ If you run into any problems while setting up monitoring for your Cloud Monitori
-**Here’s a quick summary of what we will be doing in this guide**
+{/* **Here’s a quick summary of what we will be doing in this guide**
* Create uptime check for Cloud Run service
* Create and configure Compute Engine VM instance to deploy OpenTelemetry Collector
* Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud monitoring
-* Visualize the metrics in SigNoz dashboard
+* Visualize the metrics in SigNoz dashboard */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Monitoring Admin and Compute Engine Admin privilege
* Cloud Run Admin and Artifact Registry Admin in case you want to setup Cloud Run service to create an uptime check
* Access to a project in GCP
-* Self-hosted SigNoz
## Setup
@@ -328,7 +326,7 @@ The uptime check is created, and from here on, the metrics will start getting em
## Deploy OpenTelemetry to fetch the metrics from Google Cloud Monitoring
-You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
**Step 1:** Install and configure OpenTelemetry for scraping the metrics from Google Cloud Monitoring. Follow the [OpenTelemetry Binary Usage in Virtual Machine](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) guide for detailed instructions.
@@ -398,7 +396,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to SigNoz and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** Select metric for Cloud Monitoring
diff --git a/data/docs/gcp-monitoring/cloud-run/logging.mdx b/data/docs/gcp-monitoring/cloud-run/logging.mdx
index 8f95ccc33..95f19580b 100644
--- a/data/docs/gcp-monitoring/cloud-run/logging.mdx
+++ b/data/docs/gcp-monitoring/cloud-run/logging.mdx
@@ -10,7 +10,7 @@ hide_table_of_contents: true
This documentation provides a detailed walkthrough on how to set up Cloud Run to send the logs directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your Cloud Run logs to SigNoz.
-
+
**Here's a quick summary of what we will be doing in this guide**
@@ -31,15 +31,15 @@ This documentation provides a detailed walkthrough on how to set up Cloud Run to
### Get started with Cloud Run service setup
-Follow the steps mentioned in the [Cloud Run Service Setup](/docs/gcp-monitoring/cloud-run/cloud-run-setup) page to create Cloud Run Service.
+Follow the steps mentioned in the [Cloud Run Service Setup](https://signoz.io/docs/gcp-monitoring/cloud-run/cloud-run-setup/) page to create a Cloud Run service.
### Create Pub/Sub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Cloud Run logs, use the following filter conditions:
@@ -50,7 +50,7 @@ resource.labels.service_name=""
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
@@ -161,7 +161,7 @@ Cloud Run Logs in SigNoz Cloud
-
+
**Here's a quick summary of what we will be doing in this guide**
* Create and configure Cloud Run
@@ -186,15 +186,15 @@ For more details on how to configure Self-Hosted SigNoz for Logs, check [officia
### Get started with Cloud Run service setup
-Follow the steps mentioned in the [Cloud Run Service Setup](/docs/gcp-monitoring/cloud-run/cloud-run-setup) page to create Cloud Run Service.
+Follow the steps mentioned in the [Cloud Run Service Setup](https://signoz.io/docs/gcp-monitoring/cloud-run/cloud-run-setup/) page to create a Cloud Run service.
### Create Pub/Sub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Setup Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Cloud Run logs, use the following filter conditions:
@@ -205,7 +205,7 @@ resource.labels.service_name=""
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
diff --git a/data/docs/gcp-monitoring/cloud-run/metrics.mdx b/data/docs/gcp-monitoring/cloud-run/metrics.mdx
index b5ad42fb2..1c1cf308a 100644
--- a/data/docs/gcp-monitoring/cloud-run/metrics.mdx
+++ b/data/docs/gcp-monitoring/cloud-run/metrics.mdx
@@ -10,7 +10,7 @@ hide_table_of_contents: true
This documentation provides a detailed walkthrough on how to set up Cloud Run to send the metrics directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your Cloud Run metrics to SigNoz.
-
+
**Here's a quick summary of what we will be doing in this guide**
@@ -28,11 +28,11 @@ This documentation provides a detailed walkthrough on how to set up Cloud Run to
### Get started with Cloud Run service setup
-Follow the steps mentioned in the [Cloud Run Service Setup](/docs/gcp-monitoring/cloud-run/cloud-run-setup) page to create Cloud Run Service.
+Follow the steps mentioned in the [Cloud Run Service Setup](https://signoz.io/docs/gcp-monitoring/cloud-run/cloud-run-setup/) page to create a Cloud Run service.
## Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring
-You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
**Step 1:** Install and configure OpenTelemetry for scraping the metrics from Google Cloud Run. Follow [OpenTelemetry Binary Usage in Virtual Machine](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) guide for detailed instructions.
@@ -112,7 +112,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** Select metric for Cloud Run
@@ -165,11 +165,11 @@ For more details on how to configure Self-Hosted SigNoz for Logs, check [officia
### Get started with Cloud Run service setup
-Follow the steps mentioned in the [Cloud Run Service Setup](/docs/gcp-monitoring/cloud-run/cloud-run-setup) page to create Cloud Run Service.
+Follow the steps mentioned in the [Cloud Run Service Setup](https://signoz.io/docs/gcp-monitoring/cloud-run/cloud-run-setup/) page to create a Cloud Run service.
## Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring
-You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
**Step 1:** Install and configure OpenTelemetry for scraping the metrics from Google Cloud Monitoring. Follow the [OpenTelemetry Binary Usage in Virtual Machine](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) guide for detailed instructions.
@@ -239,7 +239,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to SigNoz and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** Select metric for Cloud Run
diff --git a/data/docs/gcp-monitoring/cloud-run/tracing.mdx b/data/docs/gcp-monitoring/cloud-run/tracing.mdx
index 42ec01fd7..c66ffcbd4 100644
--- a/data/docs/gcp-monitoring/cloud-run/tracing.mdx
+++ b/data/docs/gcp-monitoring/cloud-run/tracing.mdx
@@ -10,7 +10,7 @@ hide_table_of_contents: true
This documentation provides a detailed walkthrough on how to set up Google Cloud Run to send the traces directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your Cloud Run traces to SigNoz.
-
+
**Here's a quick summary of what we will be doing in this guide**
@@ -28,7 +28,7 @@ This documentation provides a detailed walkthrough on how to set up Google Cloud
### Get started with Cloud Run service setup
-Follow the steps mentioned in the [Cloud Run Service Setup](/docs/gcp-monitoring/cloud-run/cloud-run-setup) page to create Cloud Run Service.
+Follow the steps mentioned in the [Cloud Run Service Setup](https://signoz.io/docs/gcp-monitoring/cloud-run/cloud-run-setup/) page to create a Cloud Run service.
We will now make slight changes to the application to include tracing.
@@ -145,7 +145,7 @@ EXPOSE 8080
CMD [ "node", "-r", "./tracing.js", "app.js" ]
```
-You can now perform the following steps by referring the [Cloud Run Service Setup](/docs/gcp-monitoring/cloud-run/cloud-run-setup) page:
+You can now perform the following steps by referring to the [Cloud Run Service Setup](https://signoz.io/docs/gcp-monitoring/cloud-run/cloud-run-setup/) page:
- Building the image of the application, and uploading it to Artifact Registry
- Deploying the new image from Artifact Registry to Cloud Run
@@ -186,7 +186,7 @@ Cloud Run Traces in SigNoz Cloud
-
+
**Here's a quick summary of what we will be doing in this guide**
* Create and configure Cloud Run
@@ -207,7 +207,7 @@ For more details on how to configure Self-Hosted SigNoz for Logs, check [officia
### Get started with Cloud Run service setup
-Follow the steps mentioned in the [Cloud Run Service Setup](/docs/gcp-monitoring/cloud-run/cloud-run-setup) page to create Cloud Run Service.
+Follow the steps mentioned in the [Cloud Run Service Setup](https://signoz.io/docs/gcp-monitoring/cloud-run/cloud-run-setup/) page to create a Cloud Run service.
We will now make slight changes to the application to include tracing.
@@ -324,7 +324,7 @@ EXPOSE 8080
CMD [ "node", "-r", "./tracing.js", "app.js" ]
```
-You can now perform the following steps by referring the [Cloud Run Service Setup](/docs/gcp-monitoring/cloud-run/cloud-run-setup) page:
+You can now perform the following steps by referring to the [Cloud Run Service Setup](https://signoz.io/docs/gcp-monitoring/cloud-run/cloud-run-setup/) page:
- Building the image of the application, and uploading it to Artifact Registry
- Deploying the new image from Artifact Registry to Cloud Run
diff --git a/data/docs/gcp-monitoring/cloud-sql/logging.mdx b/data/docs/gcp-monitoring/cloud-sql/logging.mdx
index a55631d51..229709fa8 100644
--- a/data/docs/gcp-monitoring/cloud-sql/logging.mdx
+++ b/data/docs/gcp-monitoring/cloud-sql/logging.mdx
@@ -10,7 +10,7 @@ hide_table_of_contents: true
This documentation provides a detailed walkthrough to send the Google Cloud SQL logs directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your Cloud SQL logs to SigNoz.
-
+
**Here's a quick summary of what we will be doing in this guide**
@@ -31,15 +31,15 @@ This documentation provides a detailed walkthrough to send the Google Cloud SQL
### Get started with Cloud SQL Configuration
-Follow the steps mentioned in the [Creating Cloud SQL](/docs/gcp-monitoring/cloud-sql/cloud-sql-creation) document to create Cloud SQL instance.
+Follow the steps mentioned in the [Creating Cloud SQL](https://signoz.io/docs/gcp-monitoring/cloud-sql/cloud-sql-creation/) document to create a Cloud SQL instance.
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Cloud SQL logs, use the following filter conditions:
@@ -49,7 +49,7 @@ resource.type="cloudsql_database"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create another Compute Engine instance. We will be installing OTel Collector on this instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create another Compute Engine instance. We will be installing OTel Collector on this instance.
#### Install OTel Collector as agent
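The collector configuration that follows in the full document can be sketched roughly as below. This is a hedged illustration, not the guide's literal file: it assumes the `googlecloudpubsub` receiver from the OpenTelemetry Collector Contrib distribution pulls logs from the subscription and the `otlp` exporter forwards them to SigNoz Cloud. The project ID, subscription name, region, and ingestion key are all placeholders to replace with your own values:

```yaml
receivers:
  googlecloudpubsub:
    project: my-gcp-project                      # placeholder GCP project ID
    subscription: projects/my-gcp-project/subscriptions/cloudsql-logs-sub  # placeholder subscription
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443   # your SigNoz Cloud ingestion URL
    tls:
      insecure: false
    headers:
      signoz-access-token: <your-ingestion-key>  # from SigNoz Ingestion Settings
service:
  pipelines:
    logs:
      receivers: [googlecloudpubsub]
      exporters: [otlp]
```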
@@ -162,8 +162,8 @@ Cloud SQL Logs in SigNoz Cloud
-
-**Here's a quick summary of what we will be doing in this guide**
+
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure Cloud SQL
* Create Pub/Sub topic
@@ -171,7 +171,7 @@ Cloud SQL Logs in SigNoz Cloud
* Self-Host SigNoz
* Create Compute Engine instance
* Create OTel Collector to route logs from Pub/Sub topic to SigNoz Cloud
-* Send and Visualize the logs in SigNoz
+* Send and Visualize the logs in SigNoz */}
## Prerequisites
@@ -187,15 +187,15 @@ For more details on how to configure Self-Hosted SigNoz for Logs, check [officia
### Get started with Cloud SQL Configuration
-Follow the steps mentioned in the [Creating Cloud SQL](/docs/gcp-monitoring/cloud-sql/cloud-sql-creation) document to create Cloud SQL instance.
+Follow the steps mentioned in the [Creating Cloud SQL](https://signoz.io/docs/gcp-monitoring/cloud-sql/cloud-sql-creation/) document to create a Cloud SQL instance.
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Cloud SQL logs, use the following filter conditions:
@@ -205,7 +205,7 @@ resource.type="cloudsql_database"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
diff --git a/data/docs/gcp-monitoring/cloud-sql/metrics.mdx b/data/docs/gcp-monitoring/cloud-sql/metrics.mdx
index c89fcb2f5..fc2d4b437 100644
--- a/data/docs/gcp-monitoring/cloud-sql/metrics.mdx
+++ b/data/docs/gcp-monitoring/cloud-sql/metrics.mdx
@@ -10,7 +10,7 @@ hide_table_of_contents: true
This document provides a detailed walkthrough on how to send Google Cloud SQL metrics to SigNoz. By the end of this guide, you will have a setup that sends your Cloud SQL metrics to SigNoz.
-
+
**Here's a quick summary of what we will be doing in this guide**
@@ -30,7 +30,7 @@ This document provides a detailed walkthrough on how to send Google Cloud SQL me
### Get started with Cloud SQL Configuration
-Follow the steps mentioned in the [Creating Cloud SQL](/docs/gcp-monitoring/cloud-sql/cloud-sql-creation) document to create Cloud SQL instance.
+Follow the steps mentioned in the [Creating Cloud SQL](https://signoz.io/docs/gcp-monitoring/cloud-sql/cloud-sql-creation/) document to create a Cloud SQL instance.
## Deploy OpenTelemetry Collector to scrape the metrics from Google Cloud Monitoring
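The scraping step can be sketched as follows. Treat this as a hedged example rather than the guide's exact configuration: it assumes the `googlecloudmonitoring` receiver from the OpenTelemetry Collector Contrib distribution (option names can vary across collector versions), and the project ID, metric name, region, and ingestion key are placeholders:

```yaml
receivers:
  googlecloudmonitoring:
    collection_interval: 2m
    project_id: my-gcp-project       # placeholder GCP project ID
    metrics_list:
      - metric_name: "cloudsql.googleapis.com/database/cpu/utilization"  # example Cloud SQL metric
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443   # your SigNoz Cloud ingestion URL
    headers:
      signoz-access-token: <your-ingestion-key>  # from SigNoz Ingestion Settings
service:
  pipelines:
    metrics:
      receivers: [googlecloudmonitoring]
      exporters: [otlp]
```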
@@ -112,7 +112,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Cloud SQL.
@@ -158,13 +158,12 @@ If you run into any problems while setting up monitoring for your Cloud SQL's me
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud SQL Admin privilege.
* Access to a project in GCP
-* Self-hosted SigNoz
## Setup
### Get started with Cloud SQL Configuration
-Follow the steps mentioned in the [Creating Cloud SQL](/docs/gcp-monitoring/cloud-sql/cloud-sql-creation) document to create Cloud SQL instance.
+Follow the steps mentioned in the [Creating Cloud SQL](https://signoz.io/docs/gcp-monitoring/cloud-sql/cloud-sql-creation/) document to create a Cloud SQL instance.
## Deploy OpenTelemetry Collector to scrape the metrics from Google Cloud Monitoring
@@ -232,7 +231,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to SigNoz and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Cloud SQL.
diff --git a/data/docs/gcp-monitoring/compute-engine/logging.mdx b/data/docs/gcp-monitoring/compute-engine/logging.mdx
index f1d2be827..10672e096 100644
--- a/data/docs/gcp-monitoring/compute-engine/logging.mdx
+++ b/data/docs/gcp-monitoring/compute-engine/logging.mdx
@@ -8,11 +8,11 @@ hide_table_of_contents: true
## Overview
-This documentation provides a detailed walkthrough to send the Google Compute Engine logs directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your Compute Engine logs to SigNoz.
+This documentation provides a detailed walkthrough on how to send Google Compute Engine logs directly to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure Compute Engine
* Create Pub/Sub topic
@@ -20,12 +20,11 @@ This documentation provides a detailed walkthrough to send the Google Compute En
* Create Compute Engine instance
* Create OTel Collector to route logs from Pub/Sub topic to SigNoz Cloud
* Make changes to Compute Engine instance to generate logs
-* Send and Visualize the logs in SigNoz Cloud
+* Send and Visualize the logs in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Compute Instance Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
### Get started with Compute Engine Configuration
@@ -65,11 +64,11 @@ With this, the Compute Engine instance is created.
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Compute Engine logs, use the following filter conditions:
@@ -79,7 +78,7 @@ resource.type="gce_instance"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create another Compute Engine instance. We will be installing OTel Collector on this instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create another Compute Engine instance. We will be installing OTel Collector on this instance.
#### Install OTel Collector as agent
@@ -226,8 +225,8 @@ Compute Engine Logs in SigNoz Cloud
-
-**Here's a quick summary of what we will be doing in this guide**
+
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure Compute Engine
* Create Pub/Sub topic
@@ -235,13 +234,12 @@ Compute Engine Logs in SigNoz Cloud
* Self-Host SigNoz
* Create OTel Collector to route logs from Pub/Sub topic to SigNoz Cloud
* Make changes to Compute Engine instance to generate logs
-* Send and Visualize the logs in SigNoz
+* Send and Visualize the logs in SigNoz */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Compute Instance Admin privilege.
* Access to a project in GCP
-* [Self-Hosted SigNoz](https://signoz.io/docs/install/docker/)
For more details on how to configure Self-Hosted SigNoz for Logs, check [official documentation by Self-Hosted SigNoz](https://signoz.io/docs/userguide/send-logs-http/#send-logs-to-self-hosted-signoz) and navigate to the "Send Logs to Self-Hosted SigNoz" section.
@@ -284,11 +282,11 @@ With this, the Compute Engine instance is created.
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Compute Engine logs, use the following filter conditions:
@@ -298,7 +296,7 @@ resource.type="gce_instance"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
diff --git a/data/docs/gcp-monitoring/compute-engine/metrics.mdx b/data/docs/gcp-monitoring/compute-engine/metrics.mdx
index e8137f74f..3881e2dd7 100644
--- a/data/docs/gcp-monitoring/compute-engine/metrics.mdx
+++ b/data/docs/gcp-monitoring/compute-engine/metrics.mdx
@@ -8,22 +8,21 @@ hide_table_of_contents: true
## Overview
-This document provides a detailed walkthrough on how to send Google Compute Engine metrics to SigNoz. By the end of this guide, you will have a setup that sends your Compute Engine metrics to SigNoz.
+This document provides a detailed walkthrough on how to send Google Compute Engine metrics to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure Compute Engine VM instance(whose metrics you want to observe in SigNoz)
* Create and configure Compute Engine VM instance to deploy OpenTelemetry Collector
* Deploy OpenTelemetry Collector to scrape the metrics from Google Cloud Monitoring
-* Send and Visualize the metrics in SigNoz Cloud
+* Send and Visualize the metrics in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Compute Instance Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
## Setup
@@ -146,7 +145,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Compute Engine.
@@ -180,18 +179,17 @@ If you run into any problems while setting up monitoring for your Compute Engine
-**Here’s a quick summary of what we will be doing in this guide**
+{/* **Here’s a quick summary of what we will be doing in this guide**
* Create and configure Compute Engine VM instance(whose metrics you want to observe in SigNoz)
* Create and configure Compute Engine VM instance to deploy OpenTelemetry Collector
* Deploy OpenTelemetry Collector to scrape the metrics from Google Cloud Monitoring
-* Visualize the metrics in SigNoz dashboard
+* Visualize the metrics in SigNoz dashboard */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Compute Instance Admin privilege.
* Access to a project in GCP
-* Self-hosted SigNoz
## Setup
@@ -302,7 +300,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to SigNoz and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Compute Engine.
diff --git a/data/docs/gcp-monitoring/compute-engine/tracing.mdx b/data/docs/gcp-monitoring/compute-engine/tracing.mdx
index b98cf87ae..1176c8f56 100644
--- a/data/docs/gcp-monitoring/compute-engine/tracing.mdx
+++ b/data/docs/gcp-monitoring/compute-engine/tracing.mdx
@@ -10,7 +10,7 @@ hide_table_of_contents: true
This documentation provides a detailed walkthrough to send the Google Compute Engine traces directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your Compute Engine traces to SigNoz.
-
+
**Here's a quick summary of what we will be doing in this guide**
diff --git a/data/docs/gcp-monitoring/gcp-clb/logging.mdx b/data/docs/gcp-monitoring/gcp-clb/logging.mdx
index ad0d6d98b..92d5e5973 100644
--- a/data/docs/gcp-monitoring/gcp-clb/logging.mdx
+++ b/data/docs/gcp-monitoring/gcp-clb/logging.mdx
@@ -7,35 +7,34 @@ hide_table_of_contents: true
## Overview
-This documentation provides a detailed walkthrough on how to set up a Pub/Sub to collect Cloud Load Balancer (CLB) logs and forward them to SigNoz. By the end of this guide, you will have a setup that automatically sends your CLB logs to SigNoz, enabling you to visualize and monitor your application's load-balancing performance and health.
+This documentation provides a detailed walkthrough on how to set up a Pub/Sub topic to collect Cloud Load Balancer (CLB) logs and forward them to SigNoz.
-Here's a quick summary of what we will be doing in this guide
+{/* Here's a quick summary of what we will be doing in this guide
* Create a Pub/Sub topic.
* Create a Log Router to route the Cloud Load Balancer logs to SigNoz.
* Create OTel Collector to route logs from Pub/Sub topic to SigNoz.
-* Send and Visualize the logs in SigNoz.
+* Send and Visualize the logs in SigNoz. */}
-
+
## Prerequisites
1. [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or load balancer Admin privilege.
2. Cloud Load Balancer (logging should be enabled)
-3. [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your Ingestion Key and Ingestion URL, sign in to your SigNoz Cloud Account and go to Settings >> Ingestion Settings)
-4. Access to a project in GCP
-5. [Google Cloud Monitoring API](https://console.cloud.google.com/apis/api/monitoring.googleapis.com) enabled
+3. Access to a project in GCP
+4. [Google Cloud Monitoring API](https://console.cloud.google.com/apis/api/monitoring.googleapis.com) enabled
## Setup
### Create a Pub/Sub topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Load Balancer logs, use the following filter conditions:
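For orientation, a typical Logs Explorer filter for an HTTP(S) load balancer looks like the sketch below. The resource type shown is an assumption that applies to the external HTTP(S) load balancer variant; other load balancer types use different resource types, so verify against the filter given in the full document:

```
resource.type="http_load_balancer"
severity>=INFO
```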
@@ -114,7 +113,7 @@ Verify log router getting volume upon any trigger of CLB.
After the log router configuration and permission is done, let’s configure the OTel collector to receive these logs.
### OTel Collector Configuration
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
@@ -239,18 +238,17 @@ By following this guide, you should be able to easily send the logs from your Go
1. [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or load balancer Admin privilege.
2. Cloud Load Balancer (logging should be enabled)
-3. [Self-Hosted SigNoz](https://signoz.io/docs/install/docker/)
-5. [Google Cloud Monitoring API](https://console.cloud.google.com/apis/api/monitoring.googleapis.com) enabled
+3. [Google Cloud Monitoring API](https://console.cloud.google.com/apis/api/monitoring.googleapis.com) enabled
## Setup
### Create a Pub/Sub topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Load Balancer logs, use the following filter conditions:
@@ -329,7 +327,7 @@ Verify log router getting volume upon any trigger of CLB.
After the log router configuration and permission, let’s configure the OTel collector to receive these logs.
### OTel Collector Configuration
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
diff --git a/data/docs/gcp-monitoring/gcp-clb/metrics.mdx b/data/docs/gcp-monitoring/gcp-clb/metrics.mdx
index 605896484..62f353844 100644
--- a/data/docs/gcp-monitoring/gcp-clb/metrics.mdx
+++ b/data/docs/gcp-monitoring/gcp-clb/metrics.mdx
@@ -7,21 +7,20 @@ hide_table_of_contents: true
## Overview
-This document provides a detailed walkthrough on how to send Google Cloud Load Balancer metrics to SigNoz. By the end of this guide, you will have a setup that sends your Cloud Load Balancer metrics to SigNoz.
+This document provides a detailed walkthrough on how to send Google Cloud Load Balancer metrics to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Deploy OpenTelemetry to scrape the metrics from Google Cloud Monitoring.
-* Send and Visualize the metrics obtained by OpenTelemetry in SigNoz
+* Send and Visualize the metrics obtained by OpenTelemetry in SigNoz */}
## Prerequisites
* You should have a [Google Cloud account](https://console.cloud.google.com/) with administrative privileges or Cloud Load Balancer Admin privileges both should suffice.
* Cloud Load Balancer
-* A [SigNoz Cloud account](https://signoz.io/teams/). You'll need the ingestion details. To obtain your Ingestion Key and URL, log in to your SigNoz Cloud account and navigate to Settings >> Ingestion Settings.
* Access to a project in Google Cloud Platform (GCP).
* The Google Cloud Load Balancer APIs must be enabled. You can follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to learn how to enable an API in Google Cloud.
@@ -107,7 +106,7 @@ OpenTelemetry logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Cloud Load Balancer.
@@ -141,11 +140,11 @@ If you run into any problems while setting up monitoring for your Cloud Load Bal
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring.
* Deploy the self-hosted SigNoz.
-* Visualize the metrics in the SigNoz dashboard.
+* Visualize the metrics in the SigNoz dashboard. */}
## Prerequisites
@@ -153,7 +152,6 @@ If you run into any problems while setting up monitoring for your Cloud Load Bal
* Cloud Load Balancer
* Access to a project in Google Cloud Platform (GCP).
* The Google Cloud Load Balancer APIs must be enabled. You can follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to learn how to enable an API in Google Cloud.
-* Self-hosted SigNoz
## Setup
@@ -263,7 +261,7 @@ The default Self-Hosted SigNoz dashboard endpoint would be `http://
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** Select the metric for the Cloud Load Balancer
diff --git a/data/docs/gcp-monitoring/gcp-fns/custom-metrics.mdx b/data/docs/gcp-monitoring/gcp-fns/custom-metrics.mdx
index 9b70bae6e..7f6502ba7 100644
--- a/data/docs/gcp-monitoring/gcp-fns/custom-metrics.mdx
+++ b/data/docs/gcp-monitoring/gcp-fns/custom-metrics.mdx
@@ -8,21 +8,20 @@ hide_table_of_contents: true
## Overview
-This documentation provides a detailed walkthrough on how to set up a Google Cloud Function to send the custom metrics to SigNoz. By the end of this guide, you will have a setup that automatically sends your Cloud Function custom metrics to SigNoz.
+This documentation provides a detailed walkthrough on how to set up a Google Cloud Function to send custom metrics to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function
* Invoke the Cloud Function
-* Send and Visualize the custom metrics in SigNoz
+* Send and Visualize the custom metrics in SigNoz */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud Functions Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
* Google Cloud Functions APIs enabled (follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to see how to enable an API in Google Cloud)
@@ -263,16 +262,15 @@ By following this guide. You can easily send the custom metrics of your Google C
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function
* Invoke the Cloud Function
-* Send and Visualize the custom metrics in Self-Hosted SigNoz
+* Send and Visualize the custom metrics in Self-Hosted SigNoz */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud Functions Admin privilege.
-* [Self-Host Self-Hosted SigNoz](https://signoz.io/docs/install/docker/) (For more details on how to configure Self-Hosted SigNoz)
* Access to a project in GCP
* Google Cloud Functions APIs enabled (follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to see how to enable an API in Google Cloud)
diff --git a/data/docs/gcp-monitoring/gcp-fns/fns-metrics.mdx b/data/docs/gcp-monitoring/gcp-fns/fns-metrics.mdx
index e3b23360d..b617330ef 100644
--- a/data/docs/gcp-monitoring/gcp-fns/fns-metrics.mdx
+++ b/data/docs/gcp-monitoring/gcp-fns/fns-metrics.mdx
@@ -6,22 +6,21 @@ hide_table_of_contents: true
---
## Overview
-This document provides a detailed walkthrough on how to send Google Cloud Functions metrics to SigNoz. By the end of this guide, you will have a setup that sends your Cloud Function metrics to SigNoz.
+This document provides a detailed walkthrough on how to send Google Cloud Functions metrics to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function to generate the metrics.
* Invoke the Cloud Function.
* Deploy OpenTelemetry Collector to scrape the metrics from Google Cloud Monitoring.
-* Send and Visualize the metrics obtained by OpenTelemetry in SigNoz.
+* Send and Visualize the metrics obtained by OpenTelemetry in SigNoz. */}
## Prerequisites
* A [Google Cloud account](https://console.cloud.google.com/) with administrative privileges or Cloud Functions Admin privileges.
-* A [SigNoz Cloud account](https://signoz.io/teams/) (used for this demonstration). You'll need the ingestion details. To obtain your Ingestion Key and Ingestion URL, log in to your SigNoz Cloud account and navigate to Settings >> Ingestion Settings.
* Access to a project in Google Cloud Platform (GCP).
* The Google Cloud Functions APIs must be enabled. You can follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to learn how to enable an API in Google Cloud.
@@ -242,7 +241,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Cloud Function.
@@ -278,13 +277,13 @@ If you run into any problems while setting up monitoring for your Cloud Function
-**Here’s a quick summary of what we will be doing in this guide**
+{/* **Here’s a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function to generate the metrics.
* Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring.
* Deploy the self-hosted **SigNoz.**
* Invoke the Cloud Function.
-* Visualize the metrics in the **SigNoz** dashboard.
+* Visualize the metrics in the **SigNoz** dashboard. */}
## Prerequisites
@@ -543,7 +542,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to your Self-Hosted SigNoz UI, and navigate to the Self-Hosted SigNoz dashboard. Click on the **Dashboards** section to view the metrics. Create a new dashboard (If not already present ). The default Self-Hosted SigNoz dashboard endpoint would be `http://:3301`, however, the URL can be different based on how you have set up the infrastructure.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Cloud Function.
diff --git a/data/docs/gcp-monitoring/gcp-fns/logging.mdx b/data/docs/gcp-monitoring/gcp-fns/logging.mdx
index e94ba9b62..f83e775ae 100644
--- a/data/docs/gcp-monitoring/gcp-fns/logging.mdx
+++ b/data/docs/gcp-monitoring/gcp-fns/logging.mdx
@@ -8,15 +8,15 @@ hide_table_of_contents: true
## Overview
-This documentation provides a detailed walkthrough on how to set up a Google Cloud Function to send the logs directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your Cloud Function logs to SigNoz.
+This documentation provides a detailed walkthrough on how to set up a Google Cloud Function to send logs directly to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function
* Create Pub/Sub topic
@@ -24,13 +24,12 @@ This documentation provides a detailed walkthrough on how to set up a Google Clo
* Create Compute Engine instance
* Create OTel Collector to route logs from Pub/Sub topic to SigNoz Cloud
* Invoke the Cloud Function using Trigger
-* Send and Visualize the logs in SigNoz Cloud
+* Send and Visualize the logs in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud Functions Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
* Google Cloud Functions APIs enabled (follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to see how to enable an API in Google Cloud)
@@ -196,11 +195,11 @@ Viewing Cloud Function Logs
### Create Pub/Sub topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Setup Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Cloud Function logs, use the following filter conditions:
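For reference, a minimal Log Router filter for Cloud Functions logs can look like the sketch below. The function name is a placeholder, and the exact `resource.type` depends on your setup (1st gen functions typically log under `cloud_function`, while 2nd gen functions log under `cloud_run_revision`), so verify against your own log entries in Logs Explorer:

```
resource.type="cloud_function"
resource.labels.function_name="my-function"
```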
@@ -230,7 +229,7 @@ Filter Cloud Functions Logs
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
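At a high level, the collector on this instance pulls log entries from the Pub/Sub subscription and forwards them over OTLP. A minimal sketch of such a configuration is shown below, assuming the `googlecloudpubsub` receiver from the OTel Collector Contrib distribution; the project, subscription, endpoint, and ingestion key are all placeholders to be replaced with your own values:

```yaml
receivers:
  googlecloudpubsub:
    project: my-project-id                                               # placeholder: your GCP project
    subscription: projects/my-project-id/subscriptions/my-subscription   # placeholder: your Pub/Sub subscription
    encoding: raw_text
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443   # placeholder: your SigNoz ingestion URL
    tls:
      insecure: false
    headers:
      signoz-access-token: <SIGNOZ_INGESTION_KEY>   # placeholder: your ingestion key
service:
  pipelines:
    logs:
      receivers: [googlecloudpubsub]
      exporters: [otlp]
```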
@@ -344,17 +343,16 @@ Functions Logs in SigNoz Cloud
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function
* Invoke the Cloud Function using Trigger
-* Send and Visualize the logs in SigNoz Cloud using HTTP calls
+* Send and Visualize the logs in SigNoz Cloud using HTTP calls */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud Functions Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
* Google Cloud Functions APIs enabled (follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to see how to enable an API in Google Cloud)
@@ -646,7 +644,7 @@ By following this guide, you should be able to easily send the logs of your Goog
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function
* Create Pub/Sub topic
@@ -654,7 +652,7 @@ By following this guide, you should be able to easily send the logs of your Goog
* Self-Host SigNoz
* Create OTel collector to route logs from Pub/Sub topic to self hosted SigNoz
* Invoke the Cloud Function using Trigger
-* Send and Visualize the logs in SigNoz
+* Send and Visualize the logs in SigNoz */}
## Prerequisites
@@ -662,7 +660,6 @@ By following this guide, you should be able to easily send the logs of your Goog
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud Functions Admin privilege.
* Access to a project in GCP
* Google Cloud Functions APIs enabled (follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to see how to enable an API in Google Cloud)
-* [Self-Hosted SigNoz](https://signoz.io/docs/install/docker/)
For more details on how to configure Self-Hosted SigNoz for logs, check the [official Self-Hosted SigNoz documentation](https://signoz.io/docs/userguide/send-logs-http/#send-logs-to-self-hosted-signoz) and navigate to the "Send Logs to Self-Hosted SigNoz" section.
@@ -830,11 +827,11 @@ Viewing Cloud Function Logs
### Create Pub/Sub topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Setup Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Cloud Function logs, use the following filter conditions:
@@ -864,7 +861,7 @@ Filter Cloud Functions Logs
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
@@ -960,11 +957,11 @@ You can now trigger the Cloud Function a few times, and see the logs from the GC
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function
* Invoke the Cloud Function using Trigger
-* Send and Visualize the logs in Self-Hosted SigNoz
+* Send and Visualize the logs in Self-Hosted SigNoz */}
## Prerequisites
@@ -972,7 +969,6 @@ You can now trigger the Cloud Function a few times, and see the logs from the GC
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud Functions Admin privilege.
* Access to a project in GCP
* Google Cloud Functions APIs enabled (follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to see how to enable an API in Google Cloud)
-* [Self-Hosted SigNoz](https://signoz.io/docs/install/docker/)
For more details on how to configure Self-Hosted SigNoz for logs, check the [official Self-Hosted SigNoz documentation](https://signoz.io/docs/userguide/send-logs-http/#send-logs-to-self-hosted-signoz) and navigate to the "Send Logs to Self-Hosted SigNoz" section.
diff --git a/data/docs/gcp-monitoring/gcp-fns/tracing.mdx b/data/docs/gcp-monitoring/gcp-fns/tracing.mdx
index 782d58810..333608ea2 100644
--- a/data/docs/gcp-monitoring/gcp-fns/tracing.mdx
+++ b/data/docs/gcp-monitoring/gcp-fns/tracing.mdx
@@ -8,23 +8,22 @@ hide_table_of_contents: true
## Overview
-This documentation provides a detailed walkthrough on how to set up a Google Cloud Function to send the traces to SigNoz. By the end of this guide, you will have a setup that automatically sends your Cloud Function traces to SigNoz.
+This documentation provides a detailed walkthrough on how to set up a Google Cloud Function to send the traces to SigNoz.
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure a Cloud Function
* Invoke the Cloud Function
-* Send and Visualize the traces in SigNoz
+* Send and Visualize the traces in SigNoz */}
-
+
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud Functions Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
* Google Cloud Functions APIs enabled (follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to see how to enable an API in Google Cloud)
@@ -366,7 +365,6 @@ By following this guide, you should be able to easily send the traces of your Go
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Cloud Functions Admin privilege.
-* [Self-Host SigNoz](https://signoz.io/docs/install/docker/) (For more details on how to configure Self-Hosted SigNoz)
* Access to a project in GCP
* Google Cloud Functions APIs enabled (follow [this](https://support.google.com/googleapi/answer/6158841?hl=en) guide to see how to enable an API in Google Cloud)
diff --git a/data/docs/gcp-monitoring/gcs/logging.mdx b/data/docs/gcp-monitoring/gcs/logging.mdx
index 5699374f3..08ddc09fe 100644
--- a/data/docs/gcp-monitoring/gcs/logging.mdx
+++ b/data/docs/gcp-monitoring/gcs/logging.mdx
@@ -8,23 +8,22 @@ hide_table_of_contents: true
## Overview
-This documentation provides a detailed walkthrough to send the Google CCloud Storage logs directly to SigNoz. By the end of this guide, you will have a setup that automatically sends your Cloud Storage logs to SigNoz.
+This documentation provides a detailed walkthrough to send the Google Cloud Storage logs directly to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure Cloud Storage Audit Logs
* Create Pub/Sub topic
* Create Log Router to route the Cloud Storage logs to SigNoz
* Create OTel Collector to route logs from Pub/Sub topic to SigNoz Cloud
* Create Cloud Storage bucket and objects
-* Send and Visualize the logs in SigNoz Cloud
+* Send and Visualize the logs in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Logging Admin privilege.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
## Setup
@@ -83,11 +82,11 @@ Enable Data Write Audit Logs
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Cloud Storage logs, use the following filter conditions:
@@ -97,7 +96,7 @@ resource.type="gcs_bucket"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create another Compute Engine instance. We will be installing OTel Collector on this instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create another Compute Engine instance. We will be installing OTel Collector on this instance.
#### Install OTel Collector as agent
@@ -258,8 +257,8 @@ Cloud Storage Logs in SigNoz Cloud
-
-**Here's a quick summary of what we will be doing in this guide**
+
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create and configure Cloud Storage Audit Logs
* Create Pub/Sub topic
@@ -267,13 +266,12 @@ Cloud Storage Logs in SigNoz Cloud
* Self-Host SigNoz
* Create OTel Collector to route logs from Pub/Sub topic to SigNoz Cloud
* Create Cloud Storage bucket and objects
-* Send and Visualize the logs in SigNoz
+* Send and Visualize the logs in SigNoz */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege or Logging Admin privilege.
* Access to a project in GCP
-* [Self-Hosted SigNoz](https://signoz.io/docs/install/docker/)
For more details on how to configure Self-Hosted SigNoz for logs, check the [official Self-Hosted SigNoz documentation](https://signoz.io/docs/userguide/send-logs-http/#send-logs-to-self-hosted-signoz) and navigate to the "Send Logs to Self-Hosted SigNoz" section.
@@ -335,11 +333,11 @@ Enable Data Write Audit Logs
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the Cloud Storage logs, use the following filter conditions:
@@ -349,7 +347,7 @@ resource.type="gcs_bucket"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
diff --git a/data/docs/gcp-monitoring/gcs/metrics.mdx b/data/docs/gcp-monitoring/gcs/metrics.mdx
index 2ce6a6d49..37d954fdc 100644
--- a/data/docs/gcp-monitoring/gcs/metrics.mdx
+++ b/data/docs/gcp-monitoring/gcs/metrics.mdx
@@ -10,7 +10,7 @@ hide_table_of_contents: true
This document provides a detailed walkthrough on how to send Google Cloud Storage metrics to SigNoz. By the end of this guide, you will have a setup that sends your Cloud Storage metrics to SigNoz.
-
+
**Here's a quick summary of what we will be doing in this guide**
@@ -159,7 +159,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Cloud Storage.
@@ -329,7 +329,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to SigNoz and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** While creating the panel, select metric for Cloud Storage.
diff --git a/data/docs/gcp-monitoring/gke/gke-logging-and-metrics.mdx b/data/docs/gcp-monitoring/gke/gke-logging-and-metrics.mdx
index 71d1155d9..929968135 100644
--- a/data/docs/gcp-monitoring/gke/gke-logging-and-metrics.mdx
+++ b/data/docs/gcp-monitoring/gke/gke-logging-and-metrics.mdx
@@ -6,9 +6,9 @@ title: GKE Metrics and Logging
## Overview
-GKE (Google Kubernetes Engine) is a managed Kubernetes service provided by Google that simplifies the deployment, management, and operation of Kubernetes clusters. By using this documentation, you can send the metrics and logs of the GKE cluster to SigNoz.
+GKE (Google Kubernetes Engine) is a managed Kubernetes service provided by Google that simplifies the deployment, management, and operation of Kubernetes clusters.
-
+
## Prerequisites
@@ -16,7 +16,6 @@ GKE (Google Kubernetes Engine) is a managed Kubernetes service provided by Googl
* [GKE cluster](https://cloud.google.com/kubernetes-engine)
* Install [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) to access the GKE cluster.
* [Install Helm](https://helm.sh/docs/intro/install/)
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
## Quick Start
diff --git a/data/docs/gcp-monitoring/gke/gke-tracing.mdx b/data/docs/gcp-monitoring/gke/gke-tracing.mdx
index 8d6767f13..0d25bd3a3 100644
--- a/data/docs/gcp-monitoring/gke/gke-tracing.mdx
+++ b/data/docs/gcp-monitoring/gke/gke-tracing.mdx
@@ -7,9 +7,10 @@ hide_table_of_contents: true
## Overview
-Google Kubernetes Engine (GKE) is a managed Kubernetes service provided by Google that simplifies the deployment, management, and operation of Kubernetes clusters. This guide will help you send traces from your GKE cluster to SigNoz.
+Google Kubernetes Engine (GKE) is a managed Kubernetes service provided by Google that simplifies the deployment, management, and operation of Kubernetes clusters.
+This document will help you send traces from your GKE cluster to SigNoz.
-
+
## Prerequisites
@@ -17,7 +18,6 @@ Google Kubernetes Engine (GKE) is a managed Kubernetes service provided by Googl
* A [GKE cluster](https://cloud.google.com/kubernetes-engine)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) installed to access the GKE cluster
* [Helm](https://helm.sh/docs/intro/install/) installed
-* [A SigNoz Cloud Account](https://signoz.io/teams/) For this demonstration, we'll be using SigNoz Cloud. You'll need your Ingestion Key and Ingestion URL, which can be found by signing in to your SigNoz Cloud account and navigating to **Settings** > **Ingestion Settings**.
## Quick Start
@@ -293,7 +293,7 @@ data:
```
This should start sending signals to SigNoz.
-## eBPF Tracing
+{/* ## eBPF Tracing
There are solution to collect metrics and traces without modifying the application code. These solutions come under the category of eBPF Tracing. These solutions are relatively new and are still in the early stages of development.
@@ -358,13 +358,13 @@ To visualize the traces log into the SigNoz account and navigate to the traces s
Traces in SigNoz Dashboard
-
+ */}
## APM and Distributed Tracing
For application-level tracing, you can use the OpenTelemetry SDKs integrated with your application. These SDKs automatically collect and forward traces to the central collector.
-Please refer to our [SigNoz Tutorials](https://signoz.io/docs/instrumentation/) or [Blog](https://signoz.io/blog/) to find information on how to instrument your application like Spring, FastAPI, NextJS, Langchain, Node.js, Flask, Django, etc.
+Please refer to our [SigNoz Documentation](https://signoz.io/docs/instrumentation/) to find information on how to instrument applications built with Spring, FastAPI, NextJS, Langchain, Node.js, Flask, Django, and more.
## Sample Python Application
diff --git a/data/docs/gcp-monitoring/vpc/logging.mdx b/data/docs/gcp-monitoring/vpc/logging.mdx
index dd954e5d5..c1e693a69 100644
--- a/data/docs/gcp-monitoring/vpc/logging.mdx
+++ b/data/docs/gcp-monitoring/vpc/logging.mdx
@@ -8,31 +8,30 @@ hide_table_of_contents: true
## Overview
-This document provides a detailed walkthrough on how to send Serverless VPC Access Connector logs to SigNoz. By the end of this guide, you will have a setup that sends your VPC access logs to SigNoz.
+This document provides a detailed walkthrough on how to send Serverless VPC Access Connector logs to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create Serverless VPC Access Connector
* Enable Flow Logs
* Create Pub/Sub topic
* Create Log Router to route the Cloud Storage logs to SigNoz
* Create OTel Collector on Compute Engine instance to route logs from Pub/Sub topic to SigNoz Cloud
-* Send and Visualize the logs in SigNoz Cloud
+* Send and Visualize the logs in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege, or Serverless VPC Access Admin and Compute Engine Admin privilege. You might also require access to create a Cloud Function in case you are following the tutorial to create the Serverless VPC Connector.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
## Setup
### Create Serverless VPC Access Connector
-Follow the [Creating Serverless VPC Access Connector](/docs/gcp-monitoring/vpc/vpc-connector-creation) document to create the serverless VPC access connector.
+Follow the [Creating Serverless VPC Access Connector](https://signoz.io/docs/gcp-monitoring/vpc/vpc-connector-creation/) document to create the serverless VPC access connector.
### Enable Flow Logs
@@ -85,11 +84,11 @@ Step 5: Click on **SAVE**. The flow logs are now enabled for the network in the
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the VPC flow logs, use the following filter conditions:
@@ -99,7 +98,7 @@ resource.type="gce_subnetwork"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create another Compute Engine instance. We will be installing OTel Collector on this instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create another Compute Engine instance. We will be installing OTel Collector on this instance.
#### Install OTel Collector as agent
@@ -214,26 +213,25 @@ Network Logs
-**Here’s a quick summary of what we will be doing in this guide**
+{/* **Here’s a quick summary of what we will be doing in this guide**
* Create Serverless VPC Access Connector
* Enable Flow Logs
* Create Pub/Sub topic
* Create Log Router to route the Cloud Storage logs to SigNoz
* Create OTel Collector on Compute Engine instance to route logs from Pub/Sub topic to SigNoz
-* Send and Visualize the logs in SigNoz
+* Send and Visualize the logs in SigNoz */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege, or Serverless VPC Access Admin and Compute Engine Admin privilege. You might also require access to create a Cloud Function in case you are following the tutorial to create the Serverless VPC Connector.
* Access to a project in GCP
-* Self-hosted SigNoz
## Setup
### Create Serverless VPC Access Connector
-Follow the [Creating Serverless VPC Access Connector](/docs/gcp-monitoring/vpc/vpc-connector-creation) document to create the serverless VPC access connector.
+Follow the [Creating Serverless VPC Access Connector](https://signoz.io/docs/gcp-monitoring/vpc/vpc-connector-creation/) document to create the serverless VPC access connector.
### Enable Flow Logs
@@ -286,11 +284,11 @@ Step 5: Click on **SAVE**. The flow logs are now enabled for the network in the
### Create PubSub Topic
-Follow the steps mentioned in the [Creating Pub/Sub Topic](/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation) document to create the Pub/Sub topic.
+Follow the steps mentioned in the [Creating Pub/Sub Topic](https://signoz.io/docs/gcp-monitoring/bootstrapping/pubsub-topic-creation/) document to create the Pub/Sub topic.
### Create Log Router to Pub/Sub Topic
-Follow the steps mentioned in the [Log Router Setup](/docs/gcp-monitoring/bootstrapping/log-router-setup) document to create the Log Router.
+Follow the steps mentioned in the [Log Router Setup](https://signoz.io/docs/gcp-monitoring/bootstrapping/log-router-setup/) document to create the Log Router.
To ensure you filter out only the VPC flow logs, use the following filter conditions:
@@ -300,7 +298,7 @@ resource.type="gce_subnetwork"
### Setup OTel Collector
-Follow the steps mentioned in the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+Follow the steps mentioned in the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
#### Install OTel Collector as agent
diff --git a/data/docs/gcp-monitoring/vpc/metrics.mdx b/data/docs/gcp-monitoring/vpc/metrics.mdx
index a308e1256..288831318 100644
--- a/data/docs/gcp-monitoring/vpc/metrics.mdx
+++ b/data/docs/gcp-monitoring/vpc/metrics.mdx
@@ -8,33 +8,32 @@ hide_table_of_contents: true
## Overview
-This document provides a detailed walkthrough on how to send Serverless VPC Access Connector metrics to SigNoz. By the end of this guide, you will have a setup that sends your VPC access metrics to SigNoz.
+This document provides a detailed walkthrough on how to send Serverless VPC Access Connector metrics to SigNoz.
-
+
-**Here's a quick summary of what we will be doing in this guide**
+{/* **Here's a quick summary of what we will be doing in this guide**
* Create Serverless VPC Access Connector
* Create and configure Compute Engine VM instance to deploy OpenTelemetry Collector
* Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring
-* Send and Visualize the metrics in SigNoz Cloud
+* Send and Visualize the metrics in SigNoz Cloud */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege, or Serverless VPC Access Admin and Compute Engine Admin privilege. You might also require access to create a Cloud Function in case you are following the tutorial to create the Serverless VPC Connector.
-* [SigNoz Cloud Account](https://signoz.io/teams/) (we are using SigNoz Cloud for this demonstration, we will also need ingestion details. To get your **Ingestion Key** and **Ingestion URL,** sign-in to your SigNoz Cloud Account and go to **Settings** >> **Ingestion Settings**)
* Access to a project in GCP
## Setup
### Create Serverless VPC Access Connector
-Follow the [Creating Serverless VPC Access Connector](/docs/gcp-monitoring/vpc/vpc-connector-creation) document to create the serverless VPC access connector.
+Follow the [Creating Serverless VPC Access Connector](https://signoz.io/docs/gcp-monitoring/vpc/vpc-connector-creation/) document to create the serverless VPC access connector.
### Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring
-You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
**Step 1:** Install and configure OpenTelemetry for scraping the metrics from GCP Serverless VPC Access Connector. Follow [OpenTelemetry Binary Usage in Virtual Machine](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) guide for detailed instructions.
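As a rough sketch, the scraping configuration can use the `googlecloudmonitoring` receiver from the OTel Collector Contrib distribution to pull metrics from Cloud Monitoring. All values below are placeholders, the metric name is illustrative, and the field names should be verified against the receiver's README for your collector version:

```yaml
receivers:
  googlecloudmonitoring:
    collection_interval: 2m
    project_id: my-project-id   # placeholder: your GCP project
    metrics_list:
      # illustrative; pick VPC Access connector metrics from the Cloud Monitoring metrics list
      - metric_name: "vpcaccess.googleapis.com/connector/received_bytes_count"
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443   # placeholder: your SigNoz ingestion URL
    headers:
      signoz-access-token: <SIGNOZ_INGESTION_KEY>   # placeholder: your ingestion key
service:
  pipelines:
    metrics:
      receivers: [googlecloudmonitoring]
      exporters: [otlp]
```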
@@ -114,7 +113,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to the SigNoz Cloud URL and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** Select metric for Serverless VPC Access Connector
@@ -147,28 +146,27 @@ If you run into any problems while setting up monitoring for your Serverless VPC
-**Here’s a quick summary of what we will be doing in this guide**
+{/* **Here’s a quick summary of what we will be doing in this guide**
* Create Serverless VPC Access Connector
* Create and configure Compute Engine VM instance to deploy OpenTelemetry Collector
* Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring
-* Send and Visualize the metrics in SigNoz dashboard
+* Send and Visualize the metrics in SigNoz dashboard */}
## Prerequisites
* [Google Cloud account](https://console.cloud.google.com/) with administrative privilege, or Serverless VPC Access Admin and Compute Engine Admin privilege. You might also require access to create a Cloud Function in case you are following the tutorial to create the Serverless VPC Connector.
* Access to a project in GCP
-* Self-hosted SigNoz
## Setup
### Create Serverless VPC Access Connector
-Follow the [Creating Serverless VPC Access Connector](/docs/gcp-monitoring/vpc/vpc-connector-creation) document to create the serverless VPC access connector.
+Follow the [Creating Serverless VPC Access Connector](https://signoz.io/docs/gcp-monitoring/vpc/vpc-connector-creation/) document to create the serverless VPC access connector.
### Deploy OpenTelemetry Collector to fetch the metrics from Google Cloud Monitoring
-You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document to create the Compute Engine instance.
+You will need a Compute Engine instance to install OpenTelemetry Collector. You can follow the instructions on the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document to create the Compute Engine instance.
**Step 1:** Install and configure OpenTelemetry for scraping the metrics from GCP Serverless VPC Access Connector. Follow [OpenTelemetry Binary Usage in Virtual Machine](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) guide for detailed instructions.
@@ -238,7 +236,7 @@ Viewing OTel Collector Logs
**Step 1:** Go to SigNoz and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** Select metric for Serverless VPC Access Connector
diff --git a/data/docs/gcp-monitoring/vpc/vpc-connector-creation.mdx b/data/docs/gcp-monitoring/vpc/vpc-connector-creation.mdx
index c373cadd7..f3bcb6291 100644
--- a/data/docs/gcp-monitoring/vpc/vpc-connector-creation.mdx
+++ b/data/docs/gcp-monitoring/vpc/vpc-connector-creation.mdx
@@ -19,7 +19,7 @@ In order to build a serverless VPC access connector, we will create:
### Create Private Compute Engine Instance
-Step 1: Create a Compute Engine instance by following the [Creating Compute Engine](/docs/gcp-monitoring/bootstrapping/gce-creation) document. Note that we have instantiated an Ubuntu instance. The commands migth slightly differ for other operating systems.
+Step 1: Create a Compute Engine instance by following the [Creating Compute Engine](https://signoz.io/docs/gcp-monitoring/bootstrapping/gce-creation/) document. Note that we have instantiated an Ubuntu instance. The commands might slightly differ for other operating systems.
Step 2: SSH into this instance, and install node and npm onto it using the following commands:
diff --git a/data/docs/logs-management/send-logs/aws-lambda-nodejs.mdx b/data/docs/logs-management/send-logs/aws-lambda-nodejs.mdx
index 99b054efd..8c3276964 100644
--- a/data/docs/logs-management/send-logs/aws-lambda-nodejs.mdx
+++ b/data/docs/logs-management/send-logs/aws-lambda-nodejs.mdx
@@ -1,44 +1,58 @@
---
-date: 2024-09-04
+date: 2024-12-18
title: Send traces and logs from AWS Lambda Node.js functions to SigNoz
id: aws-lambda-nodejs
+hide_table_of_contents: true
---
-OpenTelemetry has autoinstrumentation support for Node.js lambda functions.
-With autoinstrumentation you will be able to send traces and logs easily.
+You can auto-instrument your Node.js Lambda function to send traces and logs to SigNoz by following the steps below.
-In this example we will create a simple Node.js lambda function and deploy it.
-1. Create a new Node.js application using `yarn init -y`
-2. Add the following packages
-```
+
+
+
+
+## Configure OpenTelemetry for AWS Lambda Functions
+
+### Add Required Dependencies
+
+Add the OpenTelemetry API dependency to your Node.js application:
+```bash
yarn add @opentelemetry/api
```
-2. Add the following code to index.js
-```
+### Set Up Your Lambda Function
+
+Use OpenTelemetry APIs to trace spans and log events within your Lambda function.
+
+Below is an `index.js` file of a [sample app](https://github.com/SigNoz/nodejs-lambda). This example demonstrates how to create a trace span, emit a custom log
+entry, and ensure logs are captured by explicitly flushing the LoggerProvider at the end of the execution. It is important to flush to avoid losing any log data during
+the function's lifecycle.
+
+
+```javascript:index.js
const { trace } = require("@opentelemetry/api");
-const logsAPI = require('@opentelemetry/api-logs');
+const logsAPI = require("@opentelemetry/api-logs");
-const provider = logsAPI.logs.getLoggerProvider()
-const logger = provider.getLogger('default', '1.0.0');
-const { flush } = require("./instrumentation")
+const provider = logsAPI.logs.getLoggerProvider();
+const logger = provider.getLogger("default", "1.0.0");
+const { flush } = require("./instrumentation");
const tracer = trace.getTracer("test", "0.1");
exports.handler = async (event) => {
- const parentSpan = tracer.startSpan('main');
- tracer.startActiveSpan('testSpan', (parentSpan) => {
+ const parentSpan = tracer.startSpan("main");
+ tracer.startActiveSpan("testSpan", (span) => {
logger.emit({
- severityText: 'info',
- body: 'this is a log body example',
- attributes: { 'log.type': 'custom' },
+ severityText: "info",
+ body: "this is a log body example",
+ attributes: { "log.type": "custom" },
});
- parentSpan.end();
+ span.end();
});
const response = {
statusCode: 200,
- body: JSON.stringify('Hello from Lambda!'),
+ body: JSON.stringify("Hello from Lambda!"),
};
provider.forceFlush();
@@ -46,40 +60,48 @@ exports.handler = async (event) => {
};
```
-Here we are creating a span, emitting a log line and then flushing the loggerProvider.
-Please note that it is important to flush the loggerProvider to not miss any log lines.
+### Zip the Folder for Deployment
-Here is the [link](https://github.com/SigNoz/nodejs-lambda) to the github repository.
+Run the command below in the root directory of your Node.js project. It recursively (`-r`) compresses all files and folders in the current directory (`./`)
+into a zip file named `deploy.zip`.
-3. Zip the folder using `zip -r deploy.zip ./`
-4. Upload the zip to aws lambda by going to `Code` and selecting Upload from .zip file.
-5. Add the following environment variables by going to `Configuration` and selecting `Environment Variables`
+```bash
+zip -r deploy.zip ./
```
-AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler
-OTEL_EXPORTER_OTLP_ENDPOINT=
-OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=
-OTEL_RESOURCE_ATTRIBUTES=service.name=
-
-```
-- The value of `SIGNOZ_ENDPOINT` will be `https://ingest.{region}.signoz.cloud:443` where depending on the choice of your region for SigNoz cloud, the otlp endpoint will vary according to this table.
-| Region | Endpoint |
-| ------ | -------------------------- |
-| US | ingest.us.signoz.cloud:443 |
-| IN | ingest.in.signoz.cloud:443 |
-| EU | ingest.eu.signoz.cloud:443 |
+### Upload to AWS Lambda
-- The value of `INGESTION_KEY` is your ingestion key.
-- The value of SERVICE_NAME will be the name of the lambda function.
+- Navigate to the **Code** section in AWS Lambda.
+- Select **Upload from .zip file** and upload `deploy.zip`.
+### Configure Environment Variables
-6. In the `Layers` section add a new layer, i.e the otel lambda layer
+Go to **Configuration** > **Environment Variables** and add the following:
+```environment
+AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler
+OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest..signoz.cloud:443
+OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=
+OTEL_RESOURCE_ATTRIBUTES=service.name=
```
+- Set the `` to match your [SigNoz Cloud region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint).
+- Replace `` with your SigNoz Cloud [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/).
+- `` is the name of your Lambda function.
+
+### Add the OpenTelemetry Lambda Layer
+Go to the **Layers** section and add the following ARN:
+
+```environment
arn:aws:lambda::184161586896:layer:opentelemetry-nodejs-0_9_0:4
```
-- replace `` with the region where your function is running.
-You can find the latest version [here](https://github.com/open-telemetry/opentelemetry-lambda/releases)
+Replace `` with your AWS region.
+Check [OpenTelemetry Lambda releases](https://github.com/open-telemetry/opentelemetry-lambda/releases) for the latest version.
+
+### Test the Lambda Function
+
+Run the function and verify traces and logs in SigNoz.
+
+
-7. Now you can test the function and you will be able to see the corresponding logs and traces in SigNoz
\ No newline at end of file
+
\ No newline at end of file
diff --git a/data/docs/logs-management/send-logs/collect-tomcat-access-and-garbage-collector-logs.mdx b/data/docs/logs-management/send-logs/collect-tomcat-access-and-garbage-collector-logs.mdx
index 9ee280b1a..09588d2f0 100644
--- a/data/docs/logs-management/send-logs/collect-tomcat-access-and-garbage-collector-logs.mdx
+++ b/data/docs/logs-management/send-logs/collect-tomcat-access-and-garbage-collector-logs.mdx
@@ -1,234 +1,165 @@
---
-date: 2024-06-12
+date: 2024-12-17
title: Collecting Tomcat Access and Garbage Collector Logs
id: collect-tomcat-access-and-garbage-collector-logs
+hide_table_of_contents: true
---
-## Overview
-
-This documentation provides detailed instructions about configuring the OpenTelemetry Collector to read Tomcat Server Access and Garbage Collector logs and push them to SigNoz, enabling you to analyze them effectively.
-
-## Sample Log
-Here is how the Tomcat Access logs and Garbage Collector logs look like:
-
-### Sample Access Log
-```
-0:0:0:0:0:0:0:1 - - [18/Apr/2024:13:45:29 +0530] "GET /demo1/add?num1=1&num2=2 HTTP/1.1" 200 11
-0:0:0:0:0:0:0:1 - - [18/Apr/2024:13:45:30 +0530] "GET /demo1/add?num1=2&num2=3 HTTP/1.1" 200 11
-```
-### Sample Garbage Collector log
-```
-[0.724s][info][gc] GC(3) Concurrent Mark Cycle 6.218ms
-[0.772s][info][gc] GC(4) Pause Young (Prepare Mixed) (G1 Preventive Collection) 28M->8M(40M) 1.891ms
-[591.215s][info][gc] GC(5) Pause Young (Mixed) (G1 Evacuation Pause) 10M->8M(40M) 8.173ms
-```
-
-
-## Collect Logs in SigNoz Cloud
-
-### Prerequisite
-
-- SigNoz [cloud](https://signoz.io/teams/) account
-
-Sending logs to SigNoz cloud can be achieved by following these simple steps:
-- Installing OpenTelemetry Collector
-- Configuring filelog receiver
-
-### Install OpenTelemetry Collector
+## Overview
+You can configure OpenTelemetry Collector to read Tomcat Server Access and Garbage Collector logs and push them to SigNoz for analysis.
-The OpenTelemetry collector provides a vendor-neutral way to collect, process, and export your telemetry data such as logs, metrics, and traces.
+
+
-You can install OpenTelemetry collector (OTel collector) as an agent by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
-
+### Steps to Collect Logs in SigNoz Cloud
-### Configure Filelog receiver
+#### 1. Install OpenTelemetry Collector
+The OpenTelemetry Collector provides a vendor-neutral way to collect, process, and export telemetry data (logs, metrics, traces).
+Follow this [installation guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
-Modify the `config.yaml` file that you created while installing OTel collector in the previous step to include the filelog receiver. This involves specifying the path to your access and garbage collector logs and setting the `start_at` parameter, which specifies where to start reading logs from the log file. For more fields that are available for filelog receiver please check [this link](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).
+#### 2. Configure Filelog Receiver
+Modify the `config.yaml` file from the installation to include the filelog receiver:
-```yaml
+```yaml:config.yaml
receivers:
- ...
filelog/access_logs:
- include: [ //localhost_access_log.*] #include the path to your access logs
+ include: [//localhost_access_log.*] # Path to access logs
start_at: end
filelog/gc_logs:
- include: [ //garbage-collection.log.*] #include the path to your garbage collector logs
+ include: [//garbage-collection.log.*] # Path to garbage collector logs
start_at: end
-...
```
-
-The `start_at: end` configuration ensures that only newly added logs are transmitted. If you wish to include historical logs from the file, remember to modify `start_at` to `beginning`.
-
-
-
-If you want to change the path of where your access logs are stored you can change it by adding the following in your server arguments
-`-Dcatalina.base=`
-
-If you want to change the path of where your garbage collector logs are stored you can change it by
-`-Xloggc:`
+
+- Use `start_at: end` to transmit only new logs. Change to `beginning` to include historical logs.
+- Update log paths using server arguments:
+ - Access logs: `-Dcatalina.base=`
+ - GC logs: `-Xloggc:`
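Beyond collecting raw lines, you can also parse access-log fields into attributes with a filelog operator. The sketch below assumes the default Tomcat access log pattern (`common` format) — adjust the regex to match your valve's `pattern` setting, and verify operator fields against the filelog receiver README for your Collector version:

```yaml
receivers:
  filelog/access_logs:
    include: [//localhost_access_log.*] # Path to access logs
    start_at: end
    operators:
      - type: regex_parser
        # Matches lines like:
        # 0:0:0:0:0:0:0:1 - - [18/Apr/2024:13:45:29 +0530] "GET /demo1/add HTTP/1.1" 200 11
        regex: '^(?P<client>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d+) (?P<bytes>\S+)'
        timestamp:
          parse_from: attributes.time
          layout: '%d/%b/%Y:%H:%M:%S %z'
```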
+#### 3. Update Pipelines Configuration
+In the same `config.yaml`, add the receivers to the pipeline:
-### Update Pipelines Configuration
-
-In the same `config.yaml` file, update the pipeline settings to include the new filelog receiver. This step is crucial for ensuring that the logs are correctly processed and sent to SigNoz.
-
-```yaml {4}
+```yaml:config.yaml
service:
- ....
- logs:
- receivers: [otlp, filelog/access_logs, filelog/gc_logs]
- processors: [batch]
- exporters: [otlp]
+ logs:
+ receivers: [otlp, filelog/access_logs, filelog/gc_logs]
+ processors: [batch]
+ exporters: [otlp]
```
-Now restart the OTel collector so that new changes are applied. The steps to run the OTel collector can be found [here](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/)
+#### 4. Restart the OTel Collector
+Apply changes by [restarting the OTel Collector](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
-### Verify Export
+#### 5. Verify Export
+Check the SigNoz UI for the exported logs.
-The logs will be exported to SigNoz and will be visible in SigNoz UI.
-
-
-
-
-Add the filelog reciever to `otel-collector-config.yaml` which is present inside `deploy/docker/clickhouse-setup` directory in your self-hosted SigNoz setup. The configuration below tells the collector where to find your log file and how to start processing it.
+
- ```yaml {3-15}
- receivers:
- ...
- filelog/access_logs:
- include: [ //localhost_access_log.*] #include the path to your access logs
- start_at: end
- filelog/gc_logs:
- include: [ //garbage-collection.log.*] #include the path to your garbage collector logs
- start_at: end
- ...
- ```
+### Prerequisites
-
-The `start_at: end` configuration ensures that only newly added logs are transmitted. If you wish to include historical logs from the file, remember to modify `start_at` to `beginning`.
-
+### Steps to Collect Logs in Self-Hosted SigNoz
-
-If you want to change the path of where your access logs are stored you can change it by adding the following in your server arguments
-`-Dcatalina.base=`
+#### Scenario 1: SigNoz Running on the Same Host
-If you want to change the path of where your garbage collector logs are stored you can change it by
-`-Xloggc:`
-
+##### 1. Modify Docker Compose File
+Update [`docker-compose-minimal.yaml`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/docker-compose-minimal.yaml) to mount your log files:
+```yaml:/deploy/docker/clickhouse-setup/docker-compose-minimal.yaml
+otel-collector:
+ image: signoz/signoz-otel-collector:0.88.11
+ command: ["--config=/etc/otel-collector-config.yaml"]
+ volumes:
+ - ~//://
+ - ~//://
+```
+##### 2. Add Filelog Receiver
+Update [`otel-collector-config.yaml`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/otel-collector-config.yaml) to include the filelog receiver:
-For more fields that are available for filelog receiver please check [this link](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).
-
-#### Update Pipeline configuration
-
-Modify the pipeline inside `otel-collector-config.yaml` to include the filelog receiver. This step is crucial for ensuring that the logs are correctly processed and sent to SigNoz.
+```yaml:/deploy/docker/clickhouse-setup/otel-collector-config.yaml
+receivers:
+ filelog/access_logs:
+ include: [//localhost_access_log.*] # Path to access logs
+ start_at: end
+ filelog/gc_logs:
+ include: [//garbage-collection.log.*] # Path to GC logs
+ start_at: end
+```
- ```yaml {4}
- service:
- ....
- logs:
- receivers: [otlp, filelog/access_logs, filelog/gc_logs]
- processors: [batch]
- exporters: [clickhouselogsexporter]
- ```
+##### 3. Update Pipelines Configuration
+Modify the pipeline to include the filelog receiver:
-Now, restart the OTel collector so that new changes are applied. You can find instructions to run OTel collector [here](https://signoz.io/docs/install/docker/)
+```yaml:/deploy/docker/clickhouse-setup/otel-collector-config.yaml
+service:
+ logs:
+ receivers: [otlp, filelog/access_logs, filelog/gc_logs]
+ processors: [batch]
+ exporters: [clickhouselogsexporter]
+```
-#### Verify Export
+##### 4. Restart the OTel Collector
+Restart the OTel Collector to apply changes. [Guide here](https://signoz.io/docs/install/docker/).
-The logs will be exported to SigNoz UI if there are no errors.
+##### 5. Verify Export
+Check the SigNoz UI for the exported logs.
-
-
- Sample tomcat access logs data shown in SigNoz Logs Explorer
+
+
+ Access logs in SigNoz Logs Explorer
-
-
-
- Sample tomcat garbage collector logs data shown in SigNoz Logs Explorer
+
+
+
+ GC logs in SigNoz Logs Explorer
+#### Scenario 2: SigNoz Running on a Different Host
-### Running on a different host
-
-If you have a SigNoz running on a different host then you will have to run a OTel collector to export logs from your host to the host where SigNoz is running.
-
-#### Create OTel collector configuration
-
-You need to create an `otel-collector-config.yaml` file, this file defines how the OTel collector will process and forward logs to your SigNoz instance.
-
- ```yaml
- receivers:
- filelog/access_logs:
- include: [ //localhost_access_log.*] #include the path to your access logs
- start_at: end
- filelog/gc_logs:
- include: [ //garbage-collection.log.*] #include the path to your garbage collector logs
- start_at: end
- processors:
- batch:
- send_batch_size: 10000
- send_batch_max_size: 11000
- timeout: 10s
- exporters:
- otlp/log:
- endpoint: http://:
- tls:
- insecure: true
- service:
- pipelines:
- logs:
- receivers: [filelog/access_logs, filelog/gc_logs]
- processors: [batch]
- exporters: [ otlp/log ]
- ```
-
-
-The parsed logs are batched up using the batch processor and then exported to the host where SigNoz is deployed. For finding the right host and port for your SigNoz cluster please follow the guide [here](../../install/troubleshooting.md#signoz-otel-collector-address-grid).
+##### 1. Create OTel Collector Configuration
+Define `otel-collector-config.yaml`:
+
+```yaml
+receivers:
+ filelog/access_logs:
+ include: [//localhost_access_log.*]
+ start_at: end
+ filelog/gc_logs:
+ include: [//garbage-collection.log.*]
+ start_at: end
+processors:
+ batch:
+ send_batch_size: 10000
+ send_batch_max_size: 11000
+ timeout: 10s
+exporters:
+ otlp/log:
+ endpoint: http://:
+ tls:
+ insecure: true
+service:
+ pipelines:
+ logs:
+ receivers: [filelog/access_logs, filelog/gc_logs]
+ processors: [batch]
+ exporters: [otlp/log]
+```
-The `otlp/log` exporter in the above configuration file uses a `http` endpoint but if you want to use `https` you will have to provide the certificate and the key. You can read more about it [here](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README.md)
+For HTTPS, configure certificates as per [this guide](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README.md).
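For reference, a hedged sketch of what the HTTPS exporter configuration could look like — the endpoint and certificate paths are placeholders, and `ca_file` is only needed for self-signed or private CAs:

```yaml
exporters:
  otlp/log:
    endpoint: https://<signoz-host>:<port>   # placeholder host and port
    tls:
      insecure: false
      ca_file: /etc/otel/certs/ca.pem        # CA that signed the server certificate
      cert_file: /etc/otel/certs/client.pem  # client certificate (for mutual TLS)
      key_file: /etc/otel/certs/client-key.pem
```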
+
+
+
diff --git a/data/docs/logs-management/send-logs/windows-events-log.mdx b/data/docs/logs-management/send-logs/windows-events-log.mdx
index 90f008e6e..1e4a521ad 100644
--- a/data/docs/logs-management/send-logs/windows-events-log.mdx
+++ b/data/docs/logs-management/send-logs/windows-events-log.mdx
@@ -1,14 +1,12 @@
---
-date: 2024-07-03
+date: 2024-12-18
title: Windows Events log to SigNoz
id: windows_events_logs
---
## Overview
-If you are using a Windows environment, you can stream Windows Event Log to SigNoz using OpenTelemetry Collector.
-
-Monitoring specific Event Log sources, known as Channels, can be done using the Windows Event Log receiver which is configured in the OpenTelemetry Collector configuration file.
+If you are using a Windows environment, you can stream Windows Event Logs to SigNoz using OpenTelemetry Collector. Monitoring specific Event Log sources, known as Channels, can be done using the Windows Event Log receiver configured in the OpenTelemetry Collector configuration file.
Key channels typically monitored include:
@@ -16,27 +14,68 @@ Key channels typically monitored include:
- **Security:** Records security-related events such as login attempts and resource access.
- **System:** Captures events related to system components, drivers, and services.
-## Prerequisites
+
+
+
-- SigNoz [cloud account](https://signoz.io/teams/)
-- Microsoft User account with permissions to access EventLog and Services
+## Prerequisites
+- Microsoft User account with permissions to access Event Log and Services.
## Setup
### Step 1: Install OpenTelemetry Collector
-The OpenTelemetry collector provides a vendor-neutral way to collect, process, and export your telemetry data such as logs, metrics, and traces.
-You can install OpenTelemetry collector as an agent on your Virtual Machine by following this [documentation](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
+Install OpenTelemetry Collector as an agent by following the [SigNoz Cloud Installation Documentation](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
+
+### Step 2: Configure `windowseventlog` receiver
+
+#### Add `windowseventlog` Receiver
+
+Modify the `config.yaml` file to include the `windowseventlog` receiver for monitoring application and system logs:
+
+```yaml
+receivers:
+ windowseventlog/application:
+ channel: application
+ windowseventlog/system:
+ channel: system
+```
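The receiver also exposes tuning options beyond `channel`. A sketch of the commonly used ones is below — the values shown are the documented defaults, so verify them against the `windowseventlogreceiver` README for your Collector version:

```yaml
receivers:
  windowseventlog/application:
    channel: application
    start_at: end      # read only new events; use "beginning" for historical ones
    poll_interval: 1s  # how often the channel is polled for new events
    max_reads: 100     # maximum events read per poll
    raw: false         # set true to emit the raw XML event instead of parsed fields
```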
+
+#### Update Pipelines Configuration
+
+In the same `config.yaml` file, update the `pipelines` section to include the `windowseventlog/application` and `windowseventlog/system` receivers:
+
+```yaml
+service:
+ pipelines:
+ logs:
+ receivers: [windowseventlog/application, windowseventlog/system]
+ processors: [batch]
+ exporters: [otlp]
+```
+
+Save the changes and restart the OpenTelemetry Collector.
+
-### Step 2: Add windowseventlog receiver
+
+## Prerequisites
+
+- Microsoft User account with permissions to access Event Log and Services.
-#### Configure windowseventlog receiver
+## Setup
+
+### Step 1: Install OpenTelemetry Collector
-Modify the `config.yaml` file created in the previous step to include the `windowseventlog` receiver in the receiver section. The below codeblock shows how you can
-add the receiver to get the windows application and system logs.
+Install OpenTelemetry Collector as an agent by following the [Self-Hosted SigNoz Installation Documentation](https://signoz.io/docs/self-host/install/).
+
+### Step 2: Configure `windowseventlog` receiver
+
+#### Add `windowseventlog` Receiver
+
+Modify the `config.yaml` file to include the `windowseventlog` receiver for monitoring application and system logs:
```yaml
receivers:
@@ -46,24 +85,24 @@ receivers:
channel: system
```
-There are more configuration options available [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/windowseventlogreceiver/README.md)
+#### Update Pipelines Configuration
-#### Update pipelines configuration
-
-In the same `config.yaml` file, update the pipelines section to include the `windowseventlog/application` and `windowseventlog/system` receivers under `logs`.
+In the same `config.yaml` file, update the `pipelines` section to include the `windowseventlog/application` and `windowseventlog/system` receivers:
```yaml
service:
pipelines:
- ....
logs:
receivers: [windowseventlog/application, windowseventlog/system]
processors: [batch]
exporters: [otlp]
```
-If there are no errors, your Event logs will be visible in SigNoz under the Logs Tab.
+Save the changes and restart the OpenTelemetry Collector.
+
+
+
## Output
@@ -72,18 +111,13 @@ If there are no errors, your Event logs will be visible in SigNoz under the Logs
Windows System Events Logs in SigNoz
-This is what the typical output will look like with the configurations we made above:
+This is what the typical output will look like with the configurations made above:
**Application Log**
```json
{
- "body": "{\"channel\":\"Application\",\"computer\":\"logs-windows\",\"event_data\":{},
- \"event_id\":{\"id\":16384,\"qualifiers\":16384},\"keywords\":[\"Classic\"],\"level\":\"Information\",
- \"message\":\"Successfully scheduled Software Protection service for re-start at 2024-08-10T18:52:44Z.
- Reason: RulesEngine.\",\"opcode\":\"0\",\"provider\":{\"event_source\":\"Software Protection Platform
- Service\",\"guid\":\"{E23B33B0-C8C9-472C-A5F9-F2BDFEA0F156}\",\"name\":\"Microsoft-Windows-Security-SPP\"},
- \"record_id\":750,\"system_time\":\"2024-08-03T19:29:44.9757970Z\",\"task\":\"0\"}",
+ "body": "{\"channel\":\"Application\",\"computer\":\"logs-windows\",\"event_data\":{},\"event_id\":{\"id\":16384,\"qualifiers\":16384},\"keywords\":[\"Classic\"],\"level\":\"Information\",\"message\":\"Successfully scheduled Software Protection service for re-start at 2024-08-10T18:52:44Z. Reason: RulesEngine.\",\"opcode\":\"0\",\"provider\":{\"event_source\":\"Software Protection Platform Service\",\"guid\":\"{E23B33B0-C8C9-472C-A5F9-F2BDFEA0F156}\",\"name\":\"Microsoft-Windows-Security-SPP\"},\"record_id\":750,\"system_time\":\"2024-08-03T19:29:44.9757970Z\",\"task\":\"0\"}",
"id": "2k2Ud5JPPt8hVRQpgF6gXTxl1Yd",
"timestamp": "2024-08-03T19:29:44.975797Z",
"attributes": {},
@@ -100,14 +134,7 @@ This is what the typical output will look like with the configurations we made a
```json
{
- "body": "{\"channel\":\"System\",\"computer\":\"logs-windows\",
- \"event_data\":{\"param1\":\"Background Intelligent Transfer Service\",
- \"param2\":\"auto start\",\"param3\":\"demand start\",\"param4\":\"BITS\"},\"event_id\":{\"id\":7040,
- \"qualifiers\":16384},\"keywords\":[\"Classic\"],\"level\":\"Information\",
- \"message\":\"The start type of the Background Intelligent Transfer Service service was changed from
- auto start to demand start.\",\"opcode\":\"0\",\"provider\":{\"event_source\":\"Service Control Manager\",
- \"guid\":\"{555908d1-a6d7-4695-8e1e-26931d2012f4}\",\"name\":\"Service Control Manager\"},
- \"record_id\":893,\"system_time\":\"2024-08-03T19:32:41.9476831Z\",\"task\":\"0\"}",
+ "body": "{\"channel\":\"System\",\"computer\":\"logs-windows\",\"event_data\":{\"param1\":\"Background Intelligent Transfer Service\",\"param2\":\"auto start\",\"param3\":\"demand start\",\"param4\":\"BITS\"},\"event_id\":{\"id\":7040,\"qualifiers\":16384},\"keywords\":[\"Classic\"],\"level\":\"Information\",\"message\":\"The start type of the Background Intelligent Transfer Service service was changed from auto start to demand start.\",\"opcode\":\"0\",\"provider\":{\"event_source\":\"Service Control Manager\",\"guid\":\"{555908d1-a6d7-4695-8e1e-26931d2012f4}\",\"name\":\"Service Control Manager\"},\"record_id\":893,\"system_time\":\"2024-08-03T19:32:41.9476831Z\",\"task\":\"0\"}",
"id": "2k2Ud5JPPt8hVRQpgF6gXTxl1Yf",
"timestamp": "2024-08-03T19:32:41.9476831Z",
"attributes": {},
@@ -118,8 +145,3 @@ This is what the typical output will look like with the configurations we made a
"trace_flags": 0,
"trace_id": ""
}
-```
-
-
-
-
diff --git a/data/docs/messaging-queues/confluent-kafka.mdx b/data/docs/messaging-queues/confluent-kafka.mdx
index 63482e006..ec8b49c2c 100644
--- a/data/docs/messaging-queues/confluent-kafka.mdx
+++ b/data/docs/messaging-queues/confluent-kafka.mdx
@@ -264,7 +264,7 @@ Viewing OpenTelemetry Collector Logs
**Step 1:** Go to SigNoz and head over to the dashboard.
-**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panel under it by following the instructions [here](/docs/userguide/manage-Dashboards).
+**Step 2:** If not already created, create a new dashboard. You can create the dashboard and multiple panels under it by following the instructions [here](https://signoz.io/docs/userguide/manage-dashboards/).
**Step 3:** Select metric for Confluent Cloud.
diff --git a/data/docs/userguide/collect_docker_logs.mdx b/data/docs/userguide/collect_docker_logs.mdx
index 8d4e35ffb..30a8a394f 100644
--- a/data/docs/userguide/collect_docker_logs.mdx
+++ b/data/docs/userguide/collect_docker_logs.mdx
@@ -1,116 +1,130 @@
---
-date: 2024-06-06
-title: Collecting Docker container logs
+date: 2024-12-17
+title: Collecting Docker Container Logs
id: collect_docker_logs
+hide_table_of_contents: true
---
-With SigNoz you can collect all your docker container logs and perform different queries on top of it.
-Below are the steps to collect docker container logs.
-
-
-## Collect Docker container logs in SigNoz cloud
+You can collect all your Docker container logs with the OpenTelemetry Collector and analyze them efficiently using advanced queries in SigNoz.
+
+
+
+
+
+## Collect Docker Container Logs in SigNoz Cloud
+
+1. **Clone the Repository**
+ ```bash
+ git clone https://github.com/SigNoz/docker-container-logs
+ ```
+   This [repository](https://github.com/SigNoz/docker-container-logs) provides a pre-configured OpenTelemetry Collector setup with `otel-collector-config.yaml` and
+   `docker-compose.yaml`, enabling easy log collection and forwarding to SigNoz Cloud with minimal configuration.
+
+2. **Update Environment Variables**
+
+   Update the environment variables in the `.env` file of the `docker-container-logs` repository:
+ ```.env
+ OTEL_COLLECTOR_ENDPOINT=ingest..signoz.cloud:443
+ SIGNOZ_INGESTION_KEY=
+ ```
+ - Replace `` with your SigNoz Cloud [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/)
+ - Set the `` to match your [SigNoz Cloud region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint)
- * Clone this [repository](https://github.com/SigNoz/docker-container-logs)
- * Update `otel-collector-config.yaml` and set the values of `` and `{region}`.
-
- Depending on the choice of your region for SigNoz cloud, the otlp endpoint will vary according to this table.
+3. **Start Containers**
+   Run the following command inside the `docker-container-logs` directory to start the containers:
+ ```bash
+ docker compose up -d
+ ```
- | Region | Endpoint |
- | ------ | -------------------------- |
- | US | ingest.us.signoz.cloud:443 |
- | IN | ingest.in.signoz.cloud:443 |
- | EU | ingest.eu.signoz.cloud:443 |
-
-* Start the containers `docker compose up -d`
+4. **Verify Log Export**
+ If there are no errors, your logs will be exported and visible on the SigNoz UI.
-* If there are no errors your logs will be exported and will be visible on the SigNoz UI.
+{/*
+For enhanced log collection capabilities, consider using **logspout-signoz**. See [Using logspout-signoz for Enhanced Log Collection](#using-logspout-signoz-for-enhanced-log-collection).
+ */}
-
-
- For enhanced log collection capabilities, you can also use logspout-signoz. See [Using logspout-signoz for Enhanced Log Collection](#using-logspout-signoz-for-enhanced-log-collection).
-
-
+
+
-## Collect Docker container logs in Self-Hosted SigNoz
+## Collect Docker Container Logs in Self-Hosted SigNoz
-### Steps for collecting logs if SigNoz is running on the same host.
-Once you deploy SigNoz in docker, it will automatically start collecting logs of all the docker containers, except for the container logs of SigNoz.
+### Scenario 1: SigNoz Running on the Same Host
-#### Disable automatic container log collection.
-You can disable automatic container logs collection by modifying the `otel-collector-config.yaml` file which is present inside `deploy/docker/clickhouse-setup`
+When SigNoz is deployed via Docker, it automatically starts collecting logs of all Docker containers except its own container logs.
- ```yaml {5}
- ...
- service:
- pipelines:
- logs:
- receivers: [otlp]
- processors: [batch]
- exporters: [clickhouselogsexporter]
- ...
- ```
- Here we have modified the value of receivers from `[otlp, tcplog/docker]` to `[otlp]`.
- Now you can restart SigNoz and the changes will be applied.
+#### 1. Disable Automatic Container Log Collection
+Change `receivers` in `pipelines` from `[otlp, tcplog/docker]` to `[otlp]` in the [`otel-collector-config.yaml`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/otel-collector-config.yaml) file:
-#### Filter/Exclude logs
-If you want to exclude certain logs you can exclude them based the container name or based on pattern.
+```yaml:/deploy/docker/clickhouse-setup/otel-collector-config.yaml {5}
+...
+service:
+ pipelines:
+ logs:
+ receivers: [otlp]
+ processors: [batch]
+ exporters: [clickhouselogsexporter]
+...
+```
+Restart SigNoz to apply changes.
-* **Using container name** : We will modify the `tcplog/docker` reciever in `otel-collector-config.yaml` file which is present inside `deploy/docker/clickhouse-setup` and add a new operator after `signoz_logs_filter`
- ```yaml {2}
- ...
- - type: filter
- expr: 'attributes.container_name matches "^(|)'
- ...
- ```
- Replace `` with the name of the containers that you want to exclude.
+#### 2. Filter or Exclude Logs
- If you want to collect logs of signoz containers you can remove the names of signoz containers from the filter operator with id `signoz_logs_filter` operator.
+- **Using Container Name**
+ Modify the `tcplog/docker` receiver to add a [`filter`](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/filter.md) operator in the [`otel-collector-config.yaml`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/otel-collector-config.yaml) file:
+ ```yaml {2}
+ ...
+ - type: filter
+    expr: 'attributes.container_name matches "^(<container_name_1>|<container_name_2>)"'
+  ...
+  ```
+  Replace `<container_name_1>` and `<container_name_2>` with the names of the containers you want to exclude.
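+  For example, to drop logs from two hypothetical containers named `nginx-proxy` and `redis-cache`, the operator would read:
+  ```yaml
+  - type: filter
+    expr: 'attributes.container_name matches "^(nginx-proxy|redis-cache)"'
+  ```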
-* **Based on pattern** : You can also use the filter operator to filter out logs based on a pattern
- ```yaml {3-6}
- ....
- operators:
- - type: filter
- expr: 'body matches "^LOG: .* END$"'
- drop_ratio: 1.0
- ....
- ```
- Here we are matching logs using an expression and dropping the entire log by setting `drop_ratio: 1.0` . You can read more about the filter operator [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/filter.md)
+- **Based on Pattern**
+ Exclude logs matching a specific pattern using the [`filter`](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/filter.md) operator:
+ ```yaml {3-5}
+ ...
+ operators:
+ - type: filter
+ expr: 'body matches "^LOG: .* END$"'
+ drop_ratio: 1.0
+ ...
+ ```
-* Now we can restart the otel collector container so that new changes are applied and the docker container logs will be dropped for the specified containers.
+Restart the OTEL collector container for changes to take effect.
-### Steps for collecting logs if SigNoz is running on a different host.
+### Scenario 2: SigNoz Running on a Different Host
+If SigNoz is running on a different host, Logspout can be deployed on the host to send logs to the SigNoz cluster.
-If you have a signoz running on a different host then you can run logspout on the host and send logs to SigNoz cluster.
+1. **Expose OTEL Collector Port**
+ Modify `docker-compose.yaml`:
-* Expose port `2255` of otel-collector by modifying the `docker-compose.yaml` file present inside `deploy/docker/clickhouse-setup`
- ```yaml {6}
- ...
- otel-collector:
- image: signoz/signoz-otel-collector:latest
- command: ["--config=/etc/otel-collector-config.yaml"]
- ports:
- - "2255:2255"
- ```
-
-* Run logspout
- ```bash
- docker run --net=host --rm --name="logspout" \
- --volume=/var/run/docker.sock:/var/run/docker.sock \
- gliderlabs/logspout \
- syslog+tcp://:2255
-
- ```
-
- For finding the right host for your SigNoz cluster please follow the guide [here](/docs/install/troubleshooting#signoz-otel-collector-address-grid).
-
-* If there are no errors your logs will be exported and visible on the SigNoz UI.
-
-## Using logspout-signoz for Enhanced Log Collection
-
-To enhance your Docker container log collection with SigNoz, especially when your JSON logs include fields like serviceName and env, consider using the logspout-signoz extension. This tool automatically labels logs with service names, severity levels, and environment details, streamlining your observability setup.
-
-For detailed instructions on configuring logspout-signoz, refer to the [Rich Logs Collector for Docker Compose Services with SigNoz](/blog/logspout-signoz-setup) guide.
-
-By integrating logspout-signoz, you can achieve more organized and insightful log management within your SigNoz environment.
+ ```yaml:/deploy/docker/clickhouse-setup/docker-compose-minimal.yaml {6}
+ ...
+ otel-collector:
+ image: signoz/signoz-otel-collector:latest
+ command: ["--config=/etc/otel-collector-config.yaml"]
+ ports:
+ - "2255:2255"
+ ```
+
+2. **Run Logspout**
+ Deploy `logspout` to forward logs to the SigNoz host:
+ ```bash
+ docker run --net=host --rm --name="logspout" \
+ --volume=/var/run/docker.sock:/var/run/docker.sock \
+ gliderlabs/logspout \
+    syslog+tcp://<signoz-host>:2255
+ ```
+
+Refer to the [troubleshooting guide](https://signoz.io/docs/install/troubleshooting/#signoz-otel-collector-address-grid) for finding the correct SigNoz host address.
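+As an alternative to `docker run`, Logspout can be defined as a service in your application's `docker-compose.yaml`. This is a sketch, assuming the SigNoz OTEL collector is reachable at `<signoz-host>` on port `2255`:
+
+```yaml
+logspout:
+  image: gliderlabs/logspout
+  command: syslog+tcp://<signoz-host>:2255
+  volumes:
+    - /var/run/docker.sock:/var/run/docker.sock
+  restart: unless-stopped
+```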
+
+
+
+
+
+## Using Logspout-SigNoz for Enhanced Log Collection
+
+To improve log organization and insight, especially for JSON logs containing `serviceName` and `env` fields, use the **logspout-signoz** extension. It labels logs with service names, severity levels, and environment details.
+
+For detailed setup instructions, check the [Rich Logs Collector for Docker Compose Services with SigNoz](/blog/logspout-signoz-setup) guide.
diff --git a/data/docs/userguide/collect_kubernetes_pod_logs.mdx b/data/docs/userguide/collect_kubernetes_pod_logs.mdx
index 5672411c6..96f5f9157 100644
--- a/data/docs/userguide/collect_kubernetes_pod_logs.mdx
+++ b/data/docs/userguide/collect_kubernetes_pod_logs.mdx
@@ -1,195 +1,180 @@
---
date: 2024-06-06
-title: Collecting Kubernetes pod logs
+title: Collecting Kubernetes Pod Logs
id: collect_kubernetes_pod_logs
+hide_table_of_contents: true
---
-SigNoz can automatically collect all your pod logs and you can perform various action on top of that data.
+SigNoz enables seamless collection of Kubernetes pod logs, allowing you to perform various actions on the data.
+## Setup
-## Collect Kubernetes Pod Logs in SigNoz Cloud
-To collect logs from your kubernetes cluster, you will need to deploy k8s-infra chart. Please follow the guide [here](/docs/tutorial/kubernetes-infra-metrics/). Log collection of pods from all namespaces is enabled by default except for pods in `kube-system` and `hotrod`. To modify the log collection mechanism, please follow the guides below.
+
+
+To collect logs from your Kubernetes cluster in SigNoz Cloud, deploy the `k8s-infra` chart by following this [guide](https://signoz.io/docs/tutorial/kubernetes-infra-metrics/).
-- [Disable automatic pod logs collection](#steps-to-disable-automatic-pod-logs-collection)
-- [Filter/Exclude logs collection](#steps-to-filterexclude-logs-collection)
+
+Log collection of pods from all namespaces is enabled by default except for **kube-system**, **hotrod** and **locust**.
+
+
+
-## Collect Kubernetes Pod Logs in Self-Hosted SigNoz
-When you deploy SigNoz to your kubernetes cluster it will automatically start collecting all the pod logs. It will automatically parse out different attributes from the logs like name, namespace, container name, uid etc. But if you want to parse specific attributes from certain kind of logs you can use different kinds of operators provided by OpenTelemetry [here](/docs/userguide/logs#operators-for-parsing-and-manipulating-logs).
+To enable pod log collection and parse attributes such as pod name, namespace, container name, and UID, you need to deploy the `k8s-infra` chart in both scenarios:
-If your signoz cluster is hosted in a different cluster then you will have to install k8s-infra chart on your application kubernetes cluster. Please follow the guide [here](/docs/tutorial/kubernetes-infra-metrics/). Log collection of pods from all namespaces is enabled by default except for pods in `kube-system` and `hotrod`. To modify the log collection mechanism, please follow the guides below.
+- **If SigNoz and applications are in the same cluster:** Install the `k8s-infra` chart in the cluster where [SigNoz is deployed](https://signoz.io/docs/install/kubernetes/).
+- **If SigNoz is hosted in a separate cluster:** Install the `k8s-infra` chart in the application clusters to monitor them and in the cluster hosting SigNoz to monitor its infrastructure.
-### Steps to disable automatic pod logs collection
+Follow [this guide](https://signoz.io/docs/tutorial/kubernetes-infra-metrics/) for detailed installation instructions.
-* Modify/Create the `override-values.yaml` file
- ```yaml
- k8s-infra:
- presets:
- logsCollection:
- enabled: false
- ```
+
+Log collection of pods from all namespaces is enabled by default except for **kube-system**, **hotrod** and **locust**.
+
- You can apply this yaml file by running the following command:
+To extract specific attributes from certain types of logs, you can use the various operators provided by OpenTelemetry, available [here](https://signoz.io/docs/userguide/logs/#operators-for-parsing-and-manipulating-logs).
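+For example, if your applications emit JSON logs, the stanza [`json_parser`](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/json_parser.md) operator can promote fields from the log body to attributes. A minimal sketch for `override-values.yaml`:
+
+```yaml
+presets:
+  logsCollection:
+    operators:
+      - type: json_parser
+        parse_from: body
+```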
- ```bash
- helm -n platform upgrade my-release signoz/signoz -f override-values.yaml
- ```
+
+
- In case of external K8s cluster where only k8s-infra chart is installed, users can disable log collections by including the following in `override-values.yaml` :
+## Customizing Kubernetes Pod Logs Collection
- ```yaml
- presets:
- logsCollection:
- enabled: false
- ```
+### Disable Automatic Pod Logs Collection
- You can apply this yaml file by running the following command:
+1. Create or modify the `override-values.yaml` file with the configuration below to customize and override the default settings
+of the [`values.yaml`](https://github.com/SigNoz/charts/blob/main/charts/k8s-infra/values.yaml) file:
- ```bash
- helm -n platform upgrade my-release signoz/k8s-infra -f override-values.yaml
- ```
+```yaml
+presets:
+ logsCollection:
+ enabled: false
+```
- Once the above is applied to your k8s cluster, logs collection will be disabled.
+The `presets` key in the k8s-infra chart includes `logsCollection`, which controls log collection, and setting `enabled: false` disables automatic pod log collection.
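+If `k8s-infra` is installed as part of the SigNoz Helm chart rather than standalone, nest the same settings under the `k8s-infra` key:
+
+```yaml
+k8s-infra:
+  presets:
+    logsCollection:
+      enabled: false
+```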
-### Steps to Filter/Exclude/Include Logs Collection
+2. Apply the configuration:
-* **Exclude certain log files** : If you want to exclude logs of certain namespaces, pods or containers,
- you can append the following config in your Helm override values file.
+Run the command below from the directory containing the `override-values.yaml` file.
- _override-values.yaml_
+```bash
+helm -n platform upgrade --install <release-name> signoz/k8s-infra -f override-values.yaml
+```
-
-
+`<release-name>`: Replace this placeholder with your Helm release name (e.g., `my-monitoring`).
- ```yaml
- k8s-infra:
- presets:
- logsCollection:
- # whether to enable log collection
- enabled: true
- blacklist:
- # whether to enable blacklisting
- enabled: true
- # whether to exclude signoz logs
- signozLogs: false
- # which namespaces to exclude
- namespaces:
- - kube-system
- # which pods to exclude
- pods:
- - hotrod
- - locust
- # which containers to exclude
- containers: []
- # additional exclude rules
- additionalExclude: []
- ```
+### Exclude/Include/Filter Logs Collection
-
-
+
+
- ```yaml
- presets:
- logsCollection:
- # whether to enable log collection
+To exclude logs from specific namespaces, pods, or containers, create or modify the `override-values.yaml` file with the configuration below to customize and
+override the default settings of the [`values.yaml`](https://github.com/SigNoz/charts/blob/main/charts/k8s-infra/values.yaml) file:
+
+```yaml
+presets:
+ logsCollection:
+ # whether to enable log collection
+ enabled: true
+ blacklist:
+ # whether to enable blacklisting
enabled: true
- blacklist:
- # whether to enable blacklisting
- enabled: true
- # whether to exclude signoz logs
- signozLogs: false
- # which namespaces to exclude
- namespaces:
- - kube-system
- # which pods to exclude
- pods:
- - hotrod
- - locust
- # which containers to exclude
- containers: []
- # additional exclude rules
- additionalExclude: []
- ```
-
-
-
-
-* **Include certain log files only** : If you want to only include logs of certain namespaces, pods or containers,
- you can append the following config in your Helm override values file.
-
- _override-values.yaml_
-
-
-
-
- ```yaml
- k8s-infra:
- presets:
- logsCollection:
- # whether to enable log collection
- enabled: true
- whitelist:
- # whether to enable whitelisting
- enabled: true
- # whether to include signoz logs
- signozLogs: false
- # which namespaces to include
- namespaces:
- - platform
- - my-application-namespace
- # which pods to include
- pods:
- - otel # all pods with otel prefix
- - my-application-pod
- # which containers to include
- containers: []
- # additional include rules
- additionalInclude: []
- ```
-
-
-
-
- ```yaml
- presets:
- logsCollection:
- # whether to enable log collection
+ # whether to exclude signoz logs
+ signozLogs: false
+ # which namespaces to exclude
+ namespaces:
+ - kube-system
+ # which pods to exclude
+ pods:
+ - hotrod
+ - locust
+ # which containers to exclude
+ containers: []
+ # additional exclude rules
+ additionalExclude: []
+```
+The `presets` key in the k8s-infra chart includes `logsCollection`, which controls log collection, and enabling `blacklist` allows exclusion of logs from
+specified namespaces, pods, and containers.
+
+
+Run the command below from the directory containing the `override-values.yaml` file to apply the configuration:
+
+```bash
+helm -n platform upgrade --install <release-name> signoz/k8s-infra -f override-values.yaml
+```
+
+`<release-name>`: Replace this placeholder with your Helm release name (e.g., `my-monitoring`).
+
+
+
+
+
+To include only logs from specific namespaces, pods, or containers, create or modify the `override-values.yaml` file with the configuration below to customize and
+override the default settings of the [`values.yaml`](https://github.com/SigNoz/charts/blob/main/charts/k8s-infra/values.yaml) file:
+
+```yaml
+presets:
+ logsCollection:
+ # whether to enable log collection
+ enabled: true
+ whitelist:
+ # whether to enable whitelisting
enabled: true
- whitelist:
- # whether to enable whitelisting
- enabled: true
- # whether to include signoz logs
- signozLogs: false
- # which namespaces to include
- namespaces:
- - platform
- - my-application-namespace
- # which pods to include
- pods:
- - otel # all pods with otel prefix
- - my-application-pod
- # which containers to include
- containers: []
- # additional include rules
- additionalInclude: []
- ```
-
-
-
-
-* **Using filter operator in filelog receiver** : You can also use the filter operator to filter out logs by changing the operators here [charts](https://github.com/SigNoz/charts/blob/main/charts/k8s-infra/values.yaml).
-
- ```yaml {3-6}
- ....
+ # whether to include signoz logs
+ signozLogs: false
+ # which namespaces to include
+ namespaces:
+ - platform
+ - my-application-namespace
+ # which pods to include
+ pods:
+ - otel # all pods with otel prefix
+ - my-application-pod
+ # which containers to include
+ containers: []
+ # additional include rules
+ additionalInclude: []
+```
+The `presets` key in the k8s-infra chart includes `logsCollection`, which controls log collection, and enabling `whitelist` allows inclusion of logs from
+specified namespaces, pods, and containers.
+
+Run the command below from the directory containing the `override-values.yaml` file to apply the configuration:
+
+```bash
+helm -n platform upgrade --install <release-name> signoz/k8s-infra -f override-values.yaml
+```
+
+`<release-name>`: Replace this placeholder with your Helm release name (e.g., `my-monitoring`).
+
+
+
+
+
+To filter logs using an [expression](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/types/expression.md), create or modify the
+`override-values.yaml` file with the configuration below to customize and override the default settings of the [`values.yaml`](https://github.com/SigNoz/charts/blob/main/charts/k8s-infra/values.yaml) file:
+
+
+```yaml
+presets:
+ logsCollection:
+ enabled: true
operators:
- type: filter
expr: 'body matches "^LOG: .* END$"'
drop_ratio: 1.0
- ....
- ```
+```
+
+This filters logs matching the expression and drops them entirely. Learn more about the filter operator [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/filter.md).
+
+Run the command below from the directory containing the `override-values.yaml` file to apply the configuration:
+
+```bash
+helm -n platform upgrade --install <release-name> signoz/k8s-infra -f override-values.yaml
+```
+
+`<release-name>`: Replace this placeholder with your Helm release name (e.g., `my-monitoring`).
- Here we are matching logs using an expression and dropping the entire log by setting `drop_ratio: 1.0` . You can read more about the filter operator [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/filter.md)
+
+
-* Now you can restart the otel collector pod so that new changes are applied.
+After making changes to the configuration, restart the OpenTelemetry Collector pod deployed by the `k8s-infra` chart to apply the updates.
\ No newline at end of file
diff --git a/data/docs/userguide/collect_logs_from_file.mdx b/data/docs/userguide/collect_logs_from_file.mdx
index bc06db3d4..6b12e4b9b 100644
--- a/data/docs/userguide/collect_logs_from_file.mdx
+++ b/data/docs/userguide/collect_logs_from_file.mdx
@@ -1,220 +1,179 @@
---
-date: 2024-06-06
-title: Collecting Application Logs from Log file
+date: 2024-12-18
+title: Collecting Application Logs from Log File
id: collect_logs_from_file
+hide_table_of_contents: true
---
## Overview
-This guide provides detailed instructions on configuring the OpenTelemetry Collector to read logs from a file and push them to SigNoz, enabling you to analyze your application logs effectively.
+This documentation provides detailed instructions on configuring the OpenTelemetry Collector to read logs from a file and push them to SigNoz, enabling you to
+analyze your application logs effectively.
## Sample Log File
-As an example, we can create a sample log file called `app.log` with the following dummy data:
- ```
- This is log line 1
- This is log line 2
- This is log line 3
- ```
-This file represents a log file of your application. You can choose any file which contains your application's log entries.
-## Collect Logs in SigNoz Cloud
+As an example, we can create a sample log file called `app.log` with the following dummy data:
-### Prerequisite
+```plaintext
+This is log line 1
+This is log line 2
+This is log line 3
+```
+
+This file represents a log file of your application. You can choose any file that contains your application's log entries.
+
+
-- SigNoz [cloud](https://signoz.io/teams/) account
+
-Sending logs to SigNoz cloud can be achieved by following these simple steps:
-- Installing OpenTelemetry Collector
-- Configuring filelog receiver
+### Setup
-### Install OpenTelemetry Collector
+#### 1. Install OpenTelemetry Collector
-The OpenTelemetry collector provides a vendor-neutral way to collect, process, and export your telemetry data such as logs, metrics, and traces.
+The OpenTelemetry Collector provides a vendor-neutral way to collect, process, and export your telemetry data such as logs, metrics, and traces.
-You can install OpenTelemetry collector as an agent on your Virtual Machine by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
-
+Follow this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) to install the OpenTelemetry Collector as an agent on your Virtual Machine.
-### Configure filelog receiver
+#### 2. Configure Filelog Receiver
-Modify the `config.yaml` file that you created while installing OTel collector in the previous step to include the filelog receiver. This involves specifying the path to your `app.log` file and setting the `start_at` parameter, which specifies where to start reading logs from the log file. For more fields that are available for filelog receiver please check [this link](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).
+Modify the `config.yaml` file created during the installation of the OpenTelemetry Collector to include the `filelog` receiver. Specify the path to your `app.log` file and set the `start_at` parameter.
-```yaml
+```yaml
receivers:
- ...
filelog/app:
- include: [ /tmp/app.log ] #include the full path to your log file
+ include: [ /tmp/app.log ] # Include the full path to your log file
start_at: end
-...
```
-
-The `start_at: end` configuration ensures that only newly added logs are transmitted. If you wish to include historical logs from the file, remember to modify `start_at` to `beginning`.
-
-Log lines from the file will be visible on the SigNoz UI and you will able to filter them once new logs are added to the file while using `start_at: end`
-
+The `start_at: end` configuration ensures that only newly added logs are transmitted. To include historical logs, set `start_at` to `beginning`.
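+Optionally, the receiver can also parse structured fields from each line using operators. A hedged sketch with the stanza [`regex_parser`](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/regex_parser.md) operator (the regular expression is illustrative and must be adapted to your actual log format):
+
+```yaml
+receivers:
+  filelog/app:
+    include: [ /tmp/app.log ]
+    start_at: end
+    operators:
+      - type: regex_parser
+        regex: '^(?P<severity>\w+): (?P<message>.*)$'
+```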
-{/* */}
+#### 3. Update Pipeline Configuration
-### Update Pipelines Configuration
+Update the pipeline settings in `config.yaml` to include the new filelog receiver:
-In the same `config.yaml` file, update the pipeline settings to include the new filelog receiver. This step is crucial for ensuring that the logs are correctly processed and sent to SigNoz.
-
-```yaml {4}
+```yaml
service:
- ....
+ pipelines:
logs:
- receivers: [otlp, filelog/app]
- processors: [batch]
- exporters: [otlp]
+ receivers: [otlp, filelog/app]
+ processors: [batch]
+ exporters: [otlp]
```
-Now restart the OTel collector so that new changes are applied. The steps to run the OTel collector can be found [here](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/)
+Restart the OpenTelemetry Collector for the changes to take effect. Follow [this guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) for restart instructions.
-### Verify Export
+#### 4. Verify Export
-The logs will be exported to SigNoz UI. If you add more entries to your `app.log` file they will also be visible in SigNoz UI.
+The logs will be visible in the Logs tab of SigNoz. As more entries are added to `app.log`, they will also appear there.
Sample log file data shown in SigNoz Logs Explorer
-## Collecting Logs in self-hosted SigNoz
-
-Collecting logs in Self-Hosted SigNoz can have two scenarios:
-- SigNoz running on the same host
-- SigNoz running on different host
-
-### Running on the same host
+
-If your self-hosted SigNoz is running on the same host, then you can follow these steps to collect your application logs.
+
-#### Install SigNoz
+### Scenarios
-You can install Self-Hosted SigNoz using the instructions [here](https://signoz.io/docs/install/docker/).
+#### Scenario 1: SigNoz on the Same Host
+##### Install SigNoz
-#### Modify Docker Compose file
+Follow [this guide](https://signoz.io/docs/install/docker/) to install self-hosted SigNoz.
-In your self-hosted SigNoz setup, locate and edit the `docker-compose.yaml` file found in the `deploy/docker/clickhouse-setup` directory. You'll need to mount the log file of your application to the `tmp` directory of SigNoz OTel collector.
- ```yaml {6}
- ...
- otel-collector:
- image: signoz/signoz-otel-collector:0.88.11
- command: ["--config=/etc/otel-collector-config.yaml"]
- volumes:
- - ~//app.log:/tmp/app.log
- ....
- ```
+##### Modify Docker Compose File
-Replace `` with the path where your log file is present. Please ensure that the file path is correctly specified.
-
-#### Add filelog receiver
+Edit the `docker-compose.yaml` file in the `deploy/docker/clickhouse-setup` directory to mount your application's log file into the `/tmp` directory of the OpenTelemetry Collector container.
-Add the filelog receiver to `otel-collector-config.yaml` which is present inside `deploy/docker/clickhouse-setup` directory in your self-hosted SigNoz setup. The configuration below tells the collector where to find your log file and how to start processing it.
+```yaml
+volumes:
+  - ~/<path>/app.log:/tmp/app.log
+```
- ```yaml {3-15}
- receivers:
- ...
- filelog:
- include: [ /tmp/app.log ]
- start_at: end
- ...
- ```
+Replace `<path>` with the actual path of your log file.
-
+##### Add Filelog Receiver
-The `start_at: end` configuration ensures that only newly added logs are transmitted. If you wish to include historical logs from the file, remember to modify `start_at` to `beginning`.
+Update `otel-collector-config.yaml` with the following configuration:
-Log lines from the file will be visible on the SigNoz UI and you will able to filter them once new logs are added to the file while using `start_at: end`
+```yaml
+receivers:
+ filelog:
+ include: [ /tmp/app.log ]
+ start_at: end
+```
+
+The `start_at: end` configuration ensures only newly added logs are transmitted. To include historical logs, set `start_at` to `beginning`.
-{/* */}
-
-For more fields that are available for filelog receiver please check [this link](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).
-
-#### Update Pipeline configuration
-
-Modify the pipeline inside `otel-collector-config.yaml` to include the filelog receiver. This step is crucial for ensuring that the logs are correctly processed and sent to SigNoz.
-
- ```yaml {4}
- service:
- ....
- logs:
- receivers: [otlp, filelog]
- processors: [batch]
- exporters: [clickhouselogsexporter]
- ```
-
-Now, restart the OTel collector so that new changes are applied. You can find instructions to run OTel collector [here](https://signoz.io/docs/install/docker/)
-
-#### Verify Export
-
-The logs will be exported to SigNoz UI if there are no errors. If you add more entries to your `app.log` file they will also be visible in SigNoz.
-
-
-
- Sample log file data shown in SigNoz Logs Explorer
-
-
+##### Update Pipeline Configuration
+Modify the pipeline to include the filelog receiver:
-### Running on a different host
+```yaml
+service:
+ pipelines:
+ logs:
+ receivers: [otlp, filelog]
+ processors: [batch]
+ exporters: [clickhouselogsexporter]
+```
-If you have a SigNoz running on a different host then you will have to run a OTel collector to export logs from your host to the host where SigNoz is running.
+Restart the OpenTelemetry Collector for the changes to take effect.
-#### Create OTel collector configuration
+#### Scenario 2: SigNoz on a Different Host
-You need to create an `otel-collector-config.yaml` file, this file defines how the OTel collector will process and forward logs to your SigNoz instance.
+##### Create OTel Collector Configuration
- ```yaml
- receivers:
- filelog:
- include: [ /tmp/app.log ]
- start_at: end
- processors:
- batch:
- send_batch_size: 10000
- send_batch_max_size: 11000
- timeout: 10s
- exporters:
- otlp/log:
- endpoint: http://:
- tls:
- insecure: true
- service:
- pipelines:
- logs:
- receivers: [filelog]
- processors: [batch]
- exporters: [ otlp/log ]
- ```
-
-{/* */}
-
+Create an `otel-collector-config.yaml` file:
-The parsed logs are batched up using the batch processor and then exported to the host where SigNoz is deployed. For finding the right host and port for your SigNoz cluster please follow the guide [here](../install/troubleshooting.md#signoz-otel-collector-address-grid).
+```yaml
+receivers:
+ filelog:
+ include: [ /tmp/app.log ]
+ start_at: end
+processors:
+ batch:
+ send_batch_size: 10000
+ send_batch_max_size: 11000
+ timeout: 10s
+exporters:
+ otlp/log:
+    endpoint: http://<signoz-host>:<port>
+ tls:
+ insecure: true
+service:
+ pipelines:
+ logs:
+ receivers: [filelog]
+ processors: [batch]
+ exporters: [otlp/log]
+```
-
-The `otlp/log` exporter in the above configuration file uses a `http` endpoint but if you want to use `https` you will have to provide the certificate and the key. You can read more about it [here](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README.md)
-
+Set the endpoint to the host and port of the machine where SigNoz is running; see the [troubleshooting guide](https://signoz.io/docs/install/troubleshooting/#signoz-otel-collector-address-grid) to find the right address. To use `https`, provide the certificate and key. Refer to [this guide](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README.md) for details.
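+A hedged sketch of the `https` variant, where the certificate paths are placeholders for your own files:
+
+```yaml
+exporters:
+  otlp/log:
+    endpoint: https://<signoz-host>:<port>
+    tls:
+      ca_file: /path/to/ca.pem
+      cert_file: /path/to/cert.pem
+      key_file: /path/to/key.pem
+```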
-#### Mount the log file
+##### Mount Log File and Run Collector
-Run this docker command
+Run the following command to start the OpenTelemetry Collector in Docker:
-```
+```bash
docker run -d --name signoz-host-otel-collector --user root -v $(pwd)/app.log:/tmp/app.log:ro -v $(pwd)/otel-collector-config.yaml:/etc/otel/config.yaml signoz/signoz-otel-collector:0.88.11
```
-The above command runs an OpenTelemetry collector provided by SigNoz in a Docker container. It runs in the background with root privileges, mounts a log file and a configuration file from the host to the container
-After running the collector, if there are no errors your logs will be exported and will be visible in SigNoz.
+Adding more entries to `app.log` will make them visible in the SigNoz UI.
+
+
+
+ Sample log file data shown in SigNoz Logs Explorer
+
+
+
+
+
diff --git a/data/docs/userguide/collecting-ecs-logs-and-metrics.mdx b/data/docs/userguide/collecting-ecs-logs-and-metrics.mdx
index 4e0e780ed..dbc73b3b6 100644
--- a/data/docs/userguide/collecting-ecs-logs-and-metrics.mdx
+++ b/data/docs/userguide/collecting-ecs-logs-and-metrics.mdx
@@ -25,14 +25,13 @@ send them to SigNoz.
Select the type of SigNoz instance you are running: **SigNoz Cloud** or **Self-Hosted**.
-
+
### Prerequisites
- An ECS cluster running with at least one task definition
- ECS cluster of launch type **EC2** or **External**
-- [SigNoz Cloud account](https://signoz.io/teams/)
If you want to collect metrics and other data for Fargate launch type, checkout [this documentation](https://signoz.io/docs/userguide/collecting-ecs-sidecar-infra/).
@@ -139,12 +138,13 @@ wget https://github.com/SigNoz/benchmark/raw/main/ecs/external/daemon-template.y
- Update `{region}` and `SIGNOZ_INGESTION_KEY` values in your YAML configuration file and copy the updated content of the `otelcol-sidecar.yaml` file and paste it in the value field of the `/ecs/signoz/otelcol-daemon.yaml` parameter that you created.
-You will be able to get `{region}` and `SIGNOZ_INGESTION_KEY` values in your [SigNoz Cloud account](https://signoz.io/teams/) under **Settings --> Ingestion Settings**.
+ - Replace `SIGNOZ_INGESTION_KEY` with your SigNoz Cloud [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/)
+ - Set the `{region}` to match your [SigNoz Cloud region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint)
-
+{/* Ingestion details in SigNoz dashboard
-
+ */}
diff --git a/data/docs/userguide/collecting-ecs-sidecar-infra.mdx b/data/docs/userguide/collecting-ecs-sidecar-infra.mdx
index 74d334f27..ddddc681e 100644
--- a/data/docs/userguide/collecting-ecs-sidecar-infra.mdx
+++ b/data/docs/userguide/collecting-ecs-sidecar-infra.mdx
@@ -25,7 +25,7 @@ The sidecar container will collect metrics and forward any received OTLP data to
Select the type of SigNoz instance you are running: **SigNoz Cloud** or **Self-Hosted**.
-
+
Below are the steps to collect your metrics and logs from ECS infrastructure:
@@ -42,14 +42,10 @@ Below are the steps to collect your metrics and logs from ECS infrastructure:
- An ECS cluster running with at least one task definition
- ECS cluster can be either of the launch types: **Fargate**, **EC2** or **External**
-- [SigNoz Cloud account](https://signoz.io/teams/)
-
## Setting up Sidecar Container
-
-
### Step 1: Create SigNoz OtelCollector Config
@@ -69,12 +65,13 @@ Below are the steps to collect your metrics and logs from ECS infrastructure:
- Update `{region}` and `SIGNOZ_INGESTION_KEY` values in your YAML configuration file and copy the updated content of the `otelcol-sidecar.yaml` file and paste it in the value field of the `/ecs/signoz/otelcol-sidecar.yaml` parameter that you created.
-You will be able to get `{region}` and `SIGNOZ_INGESTION_KEY` values in your [SigNoz Cloud account](https://signoz.io/teams/) under **Settings --> Ingestion Settings**.
+ - Replace `SIGNOZ_INGESTION_KEY` with your SigNoz Cloud [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/)
+ - Set the `{region}` to match your [SigNoz Cloud region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint)
-
+{/* Ingestion details in SigNoz dashboard
-
+ */}
diff --git a/data/docs/userguide/collecting_application_logs_otel_sdk_java.mdx b/data/docs/userguide/collecting_application_logs_otel_sdk_java.mdx
index 8002437c6..7f7fa468d 100644
--- a/data/docs/userguide/collecting_application_logs_otel_sdk_java.mdx
+++ b/data/docs/userguide/collecting_application_logs_otel_sdk_java.mdx
@@ -1,122 +1,147 @@
---
-date: 2024-06-06
+date: 2024-12-18
title: Collecting Application Logs Using OTEL Java Agent
id: collecting_application_logs_otel_sdk_java
+hide_table_of_contents: true
---
-# Collecting Application Logs Using OTEL Java Agent
-
You can directly send your application logs to SigNoz using [Java Agent provided by OpenTelemetry](https://signoz.io/docs/instrumentation/java/).
-In this doc we will run a sample java application with the OpenTelemetry Java agent to send logs to SigNoz.
-
+In this documentation, we will run a sample Java application with the OpenTelemetry Java agent to send logs to SigNoz.
-For collecting logs we will have to download the java agent from [here](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar).
+For collecting logs, download the Java agent from [here](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar).
+To send logs from a Java application, you will need to add the agent and set the appropriate environment variables.
-To sends logs from a Java application you will have to add the agent and add the environment variables for the agent.
+
-## For Sending Logs To SigNoz Cloud
+
```bash
-OTEL_RESOURCE_ATTRIBUTES=service.name= \
-OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=SIGNOZ_INGESTION_KEY" \
-OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.{region}.signoz.cloud:443 \
OTEL_LOGS_EXPORTER=otlp \
-java -javaagent:/path/opentelemetry-javaagent.jar -jar target/*.jar
+OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.{region}.signoz.cloud:443" \
+OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<SIGNOZ_INGESTION_KEY>" \
+OTEL_RESOURCE_ATTRIBUTES="service.name=<service_name>" \
+java -javaagent:/path/opentelemetry-javaagent.jar -jar <my-app>.jar
```
-You will have to add `` and depending on the choice of your region for SigNoz cloud, the otlp endpoint will vary according to this table.
+- Replace `<SIGNOZ_INGESTION_KEY>` with your SigNoz Cloud [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/)
+- Set the `{region}` to match your [SigNoz Cloud region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint)
-| Region | Endpoint |
-| ------ | -------------------------- |
-| US | ingest.us.signoz.cloud:443 |
-| IN | ingest.in.signoz.cloud:443 |
-| EU | ingest.eu.signoz.cloud:443 |
-## For Sending Logs To SigNoz Hosted Locally
+
+
```bash
-OTEL_LOGS_EXPORTER=otlp OTEL_EXPORTER_OTLP_ENDPOINT="http://:4317" OTEL_RESOURCE_ATTRIBUTES=service.name= java -javaagent:/path/opentelemetry-javaagent.jar -jar .jar
-```
-
-```bash
-OTEL_RESOURCE_ATTRIBUTES=service.name= \
-OTEL_EXPORTER_OTLP_ENDPOINT="http://:4317" \
OTEL_LOGS_EXPORTER=otlp \
-java -javaagent:/path/opentelemetry-javaagent.jar -jar target/*.jar
+OTEL_EXPORTER_OTLP_ENDPOINT="http://<host>:4317" \
+OTEL_RESOURCE_ATTRIBUTES="service.name=<service_name>" \
+java -javaagent:/path/opentelemetry-javaagent.jar -jar <my-app>.jar
```
-## Settings for Appender instrumentation based on the logging library
+- Replace `<host>` with the IP address or hostname of your SigNoz backend.
+- For local setups, use `0.0.0.0` if SigNoz runs on the same host.
-You can use appender settings by passing it as an argument in the `-D=` format.
+
-ex:- `-Dotel.instrumentation.logback-appender.experimental-log-attributes=true`
+
-### Logback
+## Settings for Appender Instrumentation
-LINK - [Logback](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/logback/logback-appender-1.0/javaagent)
+You can use appender settings by passing them as arguments in the `-D<property>=<value>` format.
-| System property | Type | Default Value | Description |
-|----------------------------------------------------------------------------------------|---------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------|
-| `otel.instrumentation.logback-appender.experimental-log-attributes` | Boolean | `false` | Enable the capture of experimental log attributes `thread.name` and `thread.id`. |
-| `otel.instrumentation.logback-appender.experimental.capture-code-attributes` | Boolean | `false` | Enable the capture of [source code attributes]. Note that capturing source code attributes at logging sites might add a performance overhead. |
-| `otel.instrumentation.logback-appender.experimental.capture-marker-attribute` | Boolean | `false` | Enable the capture of Logback markers as attributes. |
-| `otel.instrumentation.logback-appender.experimental.capture-key-value-pair-attributes` | Boolean | `false` | Enable the capture of Logback key value pairs as attributes. |
-| `otel.instrumentation.logback-appender.experimental.capture-logger-context-attributes` | Boolean | `false` | Enable the capture of Logback logger context properties as attributes. |
-| `otel.instrumentation.logback-appender.experimental.capture-mdc-attributes` | String | NA | Comma separated list of MDC attributes to capture. Use the wildcard character `*` to capture all attributes.
+Example:
+```bash
+-Dotel.instrumentation.logback-appender.experimental-log-attributes=true
+```
-### Log4j
+### Logback Configuration
-LINK - [Log4j](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/log4j/log4j-appender-2.17/javaagent)
+[Logback Documentation](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/logback/logback-appender-1.0/javaagent)
-| System property | Type | Default | Description |
-|-----------------------------------------------------------------------------------| ------- | ------- |-----------------------------------------------------------------------------------------------------------------------|
-| `otel.instrumentation.log4j-appender.experimental-log-attributes` | Boolean | `false` | Enable the capture of experimental log attributes `thread.name` and `thread.id`. |
-| `otel.instrumentation.log4j-appender.experimental.capture-map-message-attributes` | Boolean | `false` | Enable the capture of `MapMessage` attributes. |
-| `otel.instrumentation.log4j-appender.experimental.capture-marker-attribute` | Boolean | `false` | Enable the capture of Log4j markers as attributes. |
-| `otel.instrumentation.log4j-appender.experimental.capture-mdc-attributes` | String | | Comma separated list of context data attributes to capture. Use the wildcard character `*` to capture all attributes. |
+| System Property | Type | Default | Description |
+|----------------------------------------------------------------------------------------|---------|---------|-------------------------------------------------------------------------------------------------------------------|
+| `otel.instrumentation.logback-appender.experimental-log-attributes` | Boolean | `false` | Capture experimental log attributes like `thread.name` and `thread.id`. |
+| `otel.instrumentation.logback-appender.experimental.capture-code-attributes` | Boolean | `false` | Capture source code attributes. May impact performance. |
+| `otel.instrumentation.logback-appender.experimental.capture-marker-attribute` | Boolean | `false` | Capture Logback markers as attributes. |
+| `otel.instrumentation.logback-appender.experimental.capture-key-value-pair-attributes` | Boolean | `false` | Capture Logback key-value pairs as attributes. |
+| `otel.instrumentation.logback-appender.experimental.capture-logger-context-attributes` | Boolean | `false` | Capture Logback logger context properties as attributes. |
+| `otel.instrumentation.logback-appender.experimental.capture-mdc-attributes` | String | NA | Comma-separated list of MDC attributes to capture. Use `*` to capture all attributes. |
+### Log4j Configuration
-In the below example we will configure a java application to send logs to SigNoz.
+[Log4j Documentation](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/log4j/log4j-appender-2.17/javaagent)
-## [Example] How to Collect Application Logs Using OTEL Java Agent?
+| System Property | Type | Default | Description |
+|-----------------------------------------------------------------------------------|---------|---------|-----------------------------------------------------------------------------------------------------------------------|
+| `otel.instrumentation.log4j-appender.experimental-log-attributes` | Boolean | `false` | Capture experimental log attributes like `thread.name` and `thread.id`. |
+| `otel.instrumentation.log4j-appender.experimental.capture-map-message-attributes` | Boolean | `false` | Capture `MapMessage` attributes. |
+| `otel.instrumentation.log4j-appender.experimental.capture-marker-attribute` | Boolean | `false` | Capture Log4j markers as attributes. |
+| `otel.instrumentation.log4j-appender.experimental.capture-mdc-attributes` | String | NA | Comma-separated list of context data attributes to capture. Use `*` to capture all attributes. |
-- Clone this [repository](https://github.com/SigNoz/spring-petclinic)
-- Build the application using `./mvnw package`
-- Now run the application
+## Example: Collecting Logs with OTEL Java Agent
-### For SigNoz Cloud
-```
-OTEL_LOGS_EXPORTER=otlp OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.{region}.signoz.cloud:443" OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key= OTEL_RESOURCE_ATTRIBUTES=service.name=myapp java -javaagent:/path/opentelemetry-javaagent.jar -jar target/*.jar
-```
+1. Clone the [Spring PetClinic Repository](https://github.com/SigNoz/spring-petclinic):
+ ```bash
+ git clone https://github.com/SigNoz/spring-petclinic
+ ```
+2. Build the application:
+ ```bash
+ ./mvnw package
+ ```
+3. Run the application:
-You will have to replace the value of `{region}` according to region of your cloud account and also add ``
+
-### For SigNoz Hosted Locally
+
+```bash
+OTEL_LOGS_EXPORTER=otlp \
+OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.{region}.signoz.cloud:443" \
+OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<SIGNOZ_INGESTION_KEY>" \
+OTEL_RESOURCE_ATTRIBUTES=service.name=myapp \
+java -javaagent:/path/opentelemetry-javaagent.jar -jar target/*.jar
```
-OTEL_LOGS_EXPORTER=otlp OTEL_EXPORTER_OTLP_ENDPOINT="http://:4317" OTEL_RESOURCE_ATTRIBUTES=service.name=myapp java -javaagent:/path/opentelemetry-javaagent.jar -jar target/*.jar
+
+To enable settings like experimental log attributes, pass additional arguments:
+
+```bash
+OTEL_LOGS_EXPORTER=otlp \
+OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.{region}.signoz.cloud:443" \
+OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<SIGNOZ_INGESTION_KEY>" \
+OTEL_RESOURCE_ATTRIBUTES=service.name=myapp \
+java -javaagent:/path/opentelemetry-javaagent.jar \
+-Dotel.instrumentation.logback-appender.experimental-log-attributes=true \
+-jar target/*.jar
```
-You will have to replace your the value of `host` as `0.0.0.0` if SigNoz is running in the same host, for other configurations please check the [troubleshooting](/docs/install/troubleshooting#signoz-otel-collector-address-grid) guide.
+
-- Visit `http://localhost:8090` to access the application.
-- Once you use the application logs will be visible on SigNoz UI.
-- If you want to enable settings here is how you do it.
-
-Let's say we want to enable `-Dotel.instrumentation.logback-appender.experimental-log-attributes=true`
-
-### For SigNoz Cloud
+
+```bash
+OTEL_LOGS_EXPORTER=otlp \
+OTEL_EXPORTER_OTLP_ENDPOINT="http://<host>:4317" \
+OTEL_RESOURCE_ATTRIBUTES=service.name=myapp \
+java -javaagent:/path/opentelemetry-javaagent.jar -jar target/*.jar
```
-OTEL_LOGS_EXPORTER=otlp OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.{region}.signoz.cloud:443" OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key= OTEL_RESOURCE_ATTRIBUTES=service.name=myapp java -javaagent:/path/opentelemetry-javaagent.jar -Dotel.instrumentation.logback-appender.experimental-log-attributes=true -jar target/*.jar
+
+To enable settings like experimental log attributes, pass additional arguments:
+
+```bash
+OTEL_LOGS_EXPORTER=otlp \
+OTEL_EXPORTER_OTLP_ENDPOINT="http://<host>:4317" \
+OTEL_RESOURCE_ATTRIBUTES=service.name=myapp \
+java -javaagent:/path/opentelemetry-javaagent.jar \
+-Dotel.instrumentation.logback-appender.experimental-log-attributes=true \
+-jar target/*.jar
```
-You will have to replace the value of `{region}` according to the region of your cloud account and also replace `` with your SigNoz Cloud Ingestion key.
+
-## For SigNoz Hosted Locally
+
-```
-OTEL_LOGS_EXPORTER=otlp OTEL_EXPORTER_OTLP_ENDPOINT="http://:4317" OTEL_RESOURCE_ATTRIBUTES=service.name=myapp java -javaagent:/path/opentelemetry-javaagent.jar -Dotel.instrumentation.logback-appender.experimental-log-attributes=true -jar target/*.jar
-```
\ No newline at end of file
+4. Access the application at `http://localhost:8090`.
+5. Use the application to generate logs, which will be visible on the SigNoz UI.
+
+For troubleshooting, check the [troubleshooting guide](/docs/install/troubleshooting#signoz-otel-collector-address-grid).
diff --git a/data/docs/userguide/collecting_application_logs_otel_sdk_python.mdx b/data/docs/userguide/collecting_application_logs_otel_sdk_python.mdx
index e4a1b53ac..7a73963fa 100644
--- a/data/docs/userguide/collecting_application_logs_otel_sdk_python.mdx
+++ b/data/docs/userguide/collecting_application_logs_otel_sdk_python.mdx
@@ -1,118 +1,45 @@
---
-date: 2024-06-06
+date: 2024-12-18
title: Collecting Application Logs Using OTEL Python SDK
id: collecting_application_logs_otel_sdk_python
+hide_table_of_contents: true
---
-You can directly send logs of your application to SigNoz using the Python SDKs provided by opentlemetry. Please find an example [here](https://github.com/open-telemetry/opentelemetry-python/tree/main/docs/examples/logs).
+You can directly send logs of your application to SigNoz using the Python SDKs provided by OpenTelemetry. See [this example](https://github.com/open-telemetry/opentelemetry-python/tree/main/docs/examples/logs) for a working setup.
-The default logging level in Python is WARNING.
+The default logging level in Python is WARNING. To send all logs to SigNoz, change the default log level to DEBUG.
-To send all the logs to SigNoz please change the default log level to DEBUG.
-```
+```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
-## For SigNoz Cloud
-
-For sending logs to SigNoz cloud, while running the above example set the below environment variables
-* The value of `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable will be `https://ingest.{region}.signoz.cloud:443` where depending on the choice of your region for SigNoz cloud, the otlp endpoint will vary according to this table.
-
- | Region | Endpoint |
- | ------ | -------------------------- |
- | US | ingest.us.signoz.cloud:443 |
- | IN | ingest.in.signoz.cloud:443 |
- | EU | ingest.eu.signoz.cloud:443 |
-
-* The value of `OTEL_EXPORTER_OTLP_HEADERS` environment variable will be `signoz-ingestion-key=` where `` is your ingestion key
-* Your run command will look like
- ```bash
- OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.{region}.signoz.cloud:443" OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key= python3 example.py`
- ```
-
-{/* */}
diff --git a/data/docs/userguide/collecting_syslogs.mdx b/data/docs/userguide/collecting_syslogs.mdx
index a5620bc41..db48a9492 100644
--- a/data/docs/userguide/collecting_syslogs.mdx
+++ b/data/docs/userguide/collecting_syslogs.mdx
@@ -1,166 +1,214 @@
---
-date: 2024-06-06
-title: Collecting syslogs
+date: 2024-12-17
+title: Collecting Syslogs
id: collecting_syslogs
+hide_table_of_contents: true
---
# Collecting Syslogs
-With SigNoz you can collect your syslogs logs and perform different queries on top of it.
-We will demonstrate how to configure `rsyslog` to forward system logs to tcp endpoint of otel-collector and use `syslog` receiver in OpenTelemetry Collector to receive and parse the logs.
-Below are the steps to collect syslogs.
+With SigNoz, you can easily collect and analyze system logs. This document shows how to set up `rsyslog` to forward logs to the OpenTelemetry (OTel) Collector using
+the syslog receiver, so you can parse, query, and monitor logs with minimal effort.
-## Collect Syslogs in SigNoz cloud
+## Prerequisite
+- Unix-based operating system
+
+
+
+
+
+## Collect Syslogs in SigNoz Cloud
If you don’t already have a SigNoz cloud account, you can sign up [here](https://signoz.io/teams/).
-* Add otel collector binary to your VM by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
-
-* Add the syslog reciever to `config.yaml` to otel-collector.
- ```
- receivers:
- syslog:
- tcp:
- listen_address: "0.0.0.0:54527"
- protocol: rfc3164
- location: UTC
- operators:
- - type: move
- from: attributes.message
- to: body
+### Step 1: Add OTel Collector Binary
+
+Add the OpenTelemetry Collector binary to your VM by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
+
+### Step 2: Configure Syslog Receiver in OTel Collector
+
+Add the `syslog` receiver to the `config.yaml` of the OTel Collector:
+
+```yaml
+receivers:
+ syslog:
+ tcp:
+ listen_address: "0.0.0.0:54527"
+ protocol: rfc3164
+ location: UTC
+ operators:
+ - type: move
+ from: attributes.message
+ to: body
+...
+```
+
+Here, we collect logs and move messages from `attributes` to `body` using operators. Read more about operators [here](/docs/userguide/logs#operators-for-parsing-and-manipulating-logs).
+
+For additional configurations for the syslog receiver, check [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/syslogreceiver).
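For reference, an RFC 3164 message — the format that the `protocol: rfc3164` setting above tells the receiver to parse — carries facility and severity packed into a leading `<PRI>` field. A small sketch (the sample line is hypothetical) decodes it with stdlib Python:

```python
# Anatomy of an RFC 3164 syslog line; the sample values are hypothetical.
msg = "<13>Dec 25 10:04:01 web-01 myapp: user login succeeded"

# PRI = facility * 8 + severity, so divmod recovers both fields.
pri = int(msg[1:msg.index(">")])
facility, severity = divmod(pri, 8)

print(facility, severity)  # 13 -> facility 1 (user), severity 5 (notice)
```

The receiver performs this parsing for you and exposes the fields as log attributes; the `move` operator then promotes the message text into the log body.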
+
+### Step 3: Update Pipeline in OTel Collector
+
+Modify the pipeline inside `config.yaml` to include the syslog receiver:
+
+```yaml
+service:
...
- ```
- Here we are collecting the logs and moving message from attributes to body using operators that are available.
- You can read more about operators [here](/docs/userguide/logs#operators-for-parsing-and-manipulating-logs).
-
- For more configurations that are available for syslog receiver please check [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/syslogreceiver).
-
-* Next we will modify our pipeline inside `config.yaml` of otel-collector to include the receiver we have created above.
- ```
- service:
- ....
- logs:
- receivers: [otlp, syslog]
- processors: [batch]
- exporters: [otlp]
- ```
-
-* Now we can restart the otel collector so that new changes are applied and we can forward our logs to port `54527`.
-
-* Modify your `rsyslog.conf` file present inside `/etc/` by running the following command:
-
- ```
- sudo vim /etc/rsyslog.conf
- ```
-
- and adding the this line at the end
-
- ```
- template(
- name="UTCTraditionalForwardFormat"
- type="string"
- string="<%PRI%>%TIMESTAMP:::date-utc% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
- )
-
- *.* action(type="omfwd" target="0.0.0.0" port="54527" protocol="tcp" template="UTCTraditionalForwardFormat")
- ```
-
- For production use cases it is recommended to use something like below:
- ```
- template(
- name="UTCTraditionalForwardFormat"
- type="string"
- string="<%PRI%>%TIMESTAMP:::date-utc% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
- )
-
- *.* action(type="omfwd" target="0.0.0.0" port="54527" protocol="tcp"
- action.resumeRetryCount="10"
- queue.type="linkedList" queue.size="10000" template="UTCTraditionalForwardFormat")
- ```
-
- So that you have retries and queue in place to de-couple the sending from the other logging action.
- Also we are assuming that you are running the otel binary on the same host. If not, the value of `target` might change depending on your environment.
-
-* Now restart your rsyslog service by running `sudo systemctl restart rsyslog.service`
-* You can check the status of service by running `sudo systemctl status rsyslog.service`
-* If there are no errors your logs will be visible on SigNoz UI.
+ logs:
+ receivers: [otlp, syslog]
+ processors: [batch]
+ exporters: [otlp]
+```
+
+### Step 4: Restart OTel Collector
+
+Restart the OTel Collector to apply the new changes.
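Before pointing `rsyslog` at the collector, it can help to confirm the TCP listener on port `54527` is actually up. This is an optional convenience check, not part of the official setup:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After restarting the collector, this should report True:
print(port_open("127.0.0.1", 54527))
```

If it reports False, re-check the `listen_address` in the receiver config and that the collector restarted cleanly before touching the rsyslog side.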
+
+### Step 5: Modify `rsyslog.conf`
+
+Run the following command to edit the `rsyslog.conf` file:
+
+```bash
+sudo vim /etc/rsyslog.conf
+```
+
+Add the following lines at the end:
+
+```rsyslog
+template(
+ name="UTCTraditionalForwardFormat"
+ type="string"
+ string="<%PRI%>%TIMESTAMP:::date-utc% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
+)
+
+*.* action(type="omfwd" target="0.0.0.0" port="54527" protocol="tcp" template="UTCTraditionalForwardFormat")
+```
+
+For production use cases, configure retries and queues:
+
+```rsyslog
+*.* action(type="omfwd" target="0.0.0.0" port="54527" protocol="tcp"
+ action.resumeRetryCount="10"
+ queue.type="linkedList" queue.size="10000" template="UTCTraditionalForwardFormat")
+```
+
+### Step 6: Restart rsyslog Service
+
+Restart the `rsyslog` service:
+
+```bash
+sudo systemctl restart rsyslog.service
+```
+Check the status:
+
+```bash
+sudo systemctl status rsyslog.service
+```
+
+If there are no errors, logs will be visible in the SigNoz UI.
+
+
+
+
## Collect Syslogs in Self-Hosted SigNoz
-* Modify the `docker-compose.yaml` file present inside `deploy/docker/clickhouse-setup` to expose a port, in this case `54527` so that we can forward syslogs to this port.
- ```
- ...
- otel-collector:
- image: signoz/signoz-otel-collector:0.88.11
- command: ["--config=/etc/otel-collector-config.yaml"]
- volumes:
- - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ports:
- - "54527:54527"
- ...
- ```
-
-* Add the syslog reciever to `otel-collector-config.yaml` which is present inside `deploy/docker/clickhouse-setup`
- ```
- receivers:
- syslog:
- tcp:
- listen_address: "0.0.0.0:54527"
- protocol: rfc3164
- location: UTC
- operators:
- - type: move
- from: attributes.message
- to: body
+
+### Step 1: Update `docker-compose.yaml`
+
+Modify the `docker-compose.yaml` file in `deploy/docker/clickhouse-setup` to expose port `54527`:
+
+```yaml
+...
+otel-collector:
+ image: signoz/signoz-otel-collector:0.88.11
+ command: ["--config=/etc/otel-collector-config.yaml"]
+ volumes:
+ - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
+ ports:
+ - "54527:54527"
+...
+```
+
+### Step 2: Configure Syslog Receiver in OTel Collector
+
+Add the `syslog` receiver to `otel-collector-config.yaml`:
+
+```yaml
+receivers:
+ syslog:
+ tcp:
+ listen_address: "0.0.0.0:54527"
+ protocol: rfc3164
+ location: UTC
+ operators:
+ - type: move
+ from: attributes.message
+ to: body
+...
+```
+
+### Step 3: Update Pipeline in OTel Collector
+
+Modify the pipeline to include the syslog receiver:
+
+```yaml
+service:
...
- ```
- Here we are collecting the logs and moving message from attributes to body using operators that are available.
- You can read more about operators [here](/docs/userguide/logs#operators-for-parsing-and-manipulating-logs)
-
- For more configurations that are available for syslog receiver please check [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/syslogreceiver).
-
-* Next we will modify our pipeline inside `otel-collector-config.yaml` to include the receiver we have created above.
- ```
- service:
- ....
- logs:
- receivers: [otlp, syslog]
- processors: [batch]
- exporters: [clickhouselogsexporter]
- ```
-
-* Now we can restart the otel collector container so that new changes are applied and we can forward our logs to port `54527`.
-
-* Modify your `rsyslog.conf` file present inside `/etc/` by running `sudo vim /etc/rsyslog.conf` and adding the this line at the end
- ```
- template(
- name="UTCTraditionalForwardFormat"
- type="string"
- string="<%PRI%>%TIMESTAMP:::date-utc% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
- )
-
- *.* action(type="omfwd" target="0.0.0.0" port="54527" protocol="tcp" template="UTCTraditionalForwardFormat")
- ```
-
- For production use cases it is recommended to using something like
- ```
- template(
- name="UTCTraditionalForwardFormat"
- type="string"
- string="<%PRI%>%TIMESTAMP:::date-utc% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
- )
-
- *.* action(type="omfwd" target="0.0.0.0" port="54527" protocol="tcp"
- action.resumeRetryCount="10"
- queue.type="linkedList" queue.size="10000" template="UTCTraditionalForwardFormat")
- ```
-
- So that you have retires and queue in place to de-couple the sending from the other logging action.
-
- The value of `target` might vary depending on where SigNoz is deployed, since it is deployed on the same host I am using `0.0.0.0` for more help you can visit [here](/docs/install/troubleshooting#signoz-otel-collector-address-grid)
-
-* Now restart your rsyslog service by running `sudo systemctl restart rsyslog.service`
-* You can check the status of service by running `sudo systemctl status rsyslog.service`
-* If there are no errors your logs will be visible on SigNoz UI.
-
-
+ logs:
+ receivers: [otlp, syslog]
+ processors: [batch]
+ exporters: [clickhouselogsexporter]
+```
+
+### Step 4: Restart OTel Collector Container
+
+Restart the OTel Collector container to apply the changes.
+
+### Step 5: Modify `rsyslog.conf`
+
+Edit the `rsyslog.conf` file:
+
+```bash
+sudo vim /etc/rsyslog.conf
+```
+
+Add the following lines:
+
+```rsyslog
+template(
+ name="UTCTraditionalForwardFormat"
+ type="string"
+ string="<%PRI%>%TIMESTAMP:::date-utc% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
+)
+
+*.* action(type="omfwd" target="0.0.0.0" port="54527" protocol="tcp" template="UTCTraditionalForwardFormat")
+```
+
+For production use, add retries and queues:
+
+```rsyslog
+*.* action(type="omfwd" target="0.0.0.0" port="54527" protocol="tcp"
+ action.resumeRetryCount="10"
+ queue.type="linkedList" queue.size="10000" template="UTCTraditionalForwardFormat")
+```
+
+### Step 6: Restart rsyslog Service
+
+Restart the `rsyslog` service:
+
+```bash
+sudo systemctl restart rsyslog.service
+```
+
+Check the status:
+
+```bash
+sudo systemctl status rsyslog.service
+```
+
+If there are no errors, logs will be visible in the SigNoz UI.
+
+
+
+
diff --git a/data/docs/userguide/fluentbit_to_signoz.mdx b/data/docs/userguide/fluentbit_to_signoz.mdx
index 5b103ec62..57aa1fde2 100644
--- a/data/docs/userguide/fluentbit_to_signoz.mdx
+++ b/data/docs/userguide/fluentbit_to_signoz.mdx
@@ -1,48 +1,58 @@
---
-date: 2024-06-06
+date: 2024-12-18
title: FluentBit to SigNoz
id: fluentbit_to_signoz
+hide_table_of_contents: true
---
+If you use FluentBit to collect logs in your stack, this tutorial shows you how to send them from FluentBit to SigNoz.
-If you use fluentBit to collect logs in your stack with this tutotrial you will be able to send logs from fluentBit to SigNoz.
+At SigNoz, we use the OpenTelemetry Collector to receive logs, which supports the FluentForward protocol. You can forward logs from your FluentBit agent to the OpenTelemetry Collector using this protocol.
-At SigNoz we use opentelemetry collector to recieve logs which supports the fluentforward protocol. So you can forward your logs from your fluentBit agent to opentelemetry collector using fluentforward protocol.
+
+
+
+### Collect Logs Using FluentBit in SigNoz Cloud
+
+1. **Add OpenTelemetry Collector binary to your VM**: Follow this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
+
+2. **Add FluentForward receiver to your `config.yaml`:**
-### Collect Logs Using FluentBit in SigNoz cloud
- * Add otel collector binary to your VM by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
- * Add fluentforward reciever to your `config.yaml`
```yaml
receivers:
fluentforward:
endpoint: 0.0.0.0:24224
```
- Here we have used port 24224 for listening in fluentforward protocol, but you can change it to a port you want.
- You can read more about fluentforward receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver).
+ Here, port 24224 is used for listening to the FluentForward protocol. You can change it to a port of your choice. Learn more about the FluentForward receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver).
- * Modify your `config.yaml` and add the above receiver
- ```yaml {4}
+3. **Modify your `config.yaml` to include the receiver:**
+
+ ```yaml
service:
- ....
- logs:
- receivers: [otlp, fluentforward]
- processors: [batch]
- exporters: [otlp]
+ ...
+ logs:
+ receivers: [otlp, fluentforward]
+ processors: [batch]
+ exporters: [otlp]
+ ```
+
+4. **Update FluentBit configuration to forward logs to the OpenTelemetry Collector:**
+
```
- * Add the following to your fluentBit config to forward the logs to otel collector.
- ```
- [OUTPUT]
- Name forward
- Match *
- Host localhost
- Port 24224
- ```
- In this config we are forwarding the logs to the otel collector which is listening on port 24224.
- Also we are assuming that you are running the fluentBit binary on the host. If not, the value of `host` might change depending on your environment.
- * Once you make this changes you can restart fluentBit and otel-binary, and you will be able to see the logs in SigNoz.
- * To properly transform your existing log model into opentelemetry [log](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md) model you can use the different processors provided by opentelemetry. [link](/docs/userguide/logs#processors-available-for-processing-logs)
-
- eg:-
+ [OUTPUT]
+ Name forward
+ Match *
+ Host localhost
+ Port 24224
+ ```
+ - This configuration forwards logs to the OpenTelemetry Collector listening on port 24224.
+ - If FluentBit is not running on the same host, replace `localhost` with the appropriate host value.
+
+5. **Restart FluentBit and OpenTelemetry Collector.**
+
+6. **Use processors to transform logs:**
+ To properly transform your existing log model into the OpenTelemetry [log model](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md), use OpenTelemetry processors:
+
```yaml
processors:
logstransform:
@@ -57,38 +67,44 @@ At SigNoz we use opentelemetry collector to recieve logs which supports the flue
- type: remove
field: attributes.span_id
```
- The operations in the above processor will parse the trace_id and span_id from log to opentelemetry log model and remove them from attributes.
+
+ The operators above parse `trace_id` and `span_id` from the log into the OpenTelemetry log model and remove them from `attributes`.
+
+
+
+### Collect Logs Using FluentBit in Self-Hosted SigNoz
+
+1. **Add FluentForward receiver to `otel-collector-config.yaml`:**
-## Collect Logs Using FluentBit in Self-Hosted SigNoz
-* Add fluentforward reciever to your `otel-collector-config.yaml` which is present inside `deploy/docker/clickhouse-setup`
```yaml
receivers:
fluentforward:
endpoint: 0.0.0.0:24224
```
- Here we have used port 24224 for listening in fluentforward protocol, but you can change it to a port you want.
- You can read more about fluentforward receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver).
+ Port 24224 is used for the FluentForward protocol. You can change it to a preferred port. Learn more about the FluentForward receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver).
-* Update the pipleline for logs by making the following change in `otel-collector-config.yaml`
- ```
+2. **Update the logs pipeline in `otel-collector-config.yaml`:**
+
+ ```yaml
service:
...
-
logs:
- receivers: [ otlp, fluentforward ]
- processors: [ batch ]
- exporters: [ clickhouselogsexporter ]
+ receivers: [otlp, fluentforward]
+ processors: [batch]
+ exporters: [clickhouselogsexporter]
+ ```
+
+3. **Expose the port in `docker-compose.yaml`:**
+
+ ```yaml
+ otel-collector:
+ ...
+ ports:
+ - "24224:24224"
```
- Here we are updating the logs pipeline which will collect logs from `fluentforward` and `otlp` receiver, processing it using batch processor and export it to clickhouse.
-* Expose the port in port for otel-collector in `docker-compose.yaml` file present in `deploy/docker/clickhouse-setup`
- ```yaml
- otel-collector:
- ...
- ports:
- - "24224:24224"
- ```
-* Change the fluentBit config to forward the logs to otel collector.
+
+4. **Update FluentBit configuration:**
+
```
[INPUT]
Name dummy
@@ -101,12 +117,13 @@ At SigNoz we use opentelemetry collector to recieve logs which supports the flue
Host <host>
Port 24224
```
- In this example we are generating sample logs and then forwarding them to the otel collector which is listening on port 24224.
- `` has to be replaced by the host where otel-collector is running. For more info check [troubleshooting](/docs/install/troubleshooting#signoz-otel-collector-address-grid).
-* Once you make this changes you can restart fluentBit and SignNoz, and you will be able to see the logs in SigNoz.
-* To properly transform your existing log model into opentelemetry [log](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md) model you can use the different processors provided by opentelemetry. [link](/docs/userguide/logs#processors-available-for-processing-logs)
-
- eg:-
+ Replace `<host>` with the host where the OpenTelemetry Collector is running. For more info, check the [troubleshooting guide](/docs/install/troubleshooting#signoz-otel-collector-address-grid).
+
+5. **Restart FluentBit and SigNoz.**
+
+6. **Use processors to transform logs:**
+ To transform your existing log model into the OpenTelemetry [log model](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md), use processors:
+
```yaml
processors:
logstransform:
@@ -121,4 +138,6 @@ At SigNoz we use opentelemetry collector to recieve logs which supports the flue
- type: remove
field: attributes.span_id
```
- The operations in the above processor will parse the trace_id and span_id from log to opentelemetry log model and remove them from attributes.
\ No newline at end of file
+
+
+
diff --git a/data/docs/userguide/fluentd_to_signoz.mdx b/data/docs/userguide/fluentd_to_signoz.mdx
index f8e153c70..e903a4b81 100644
--- a/data/docs/userguide/fluentd_to_signoz.mdx
+++ b/data/docs/userguide/fluentd_to_signoz.mdx
@@ -2,161 +2,171 @@
date: 2024-06-06
title: FluentD to SigNoz
id: fluentd_to_signoz
+hide_table_of_contents: true
---
-
-
-If you use fluentD to collect logs in your stack with this tutotrial you will be able to send logs from fluentD to SigNoz.
-
-At SigNoz we use opentelemetry collector to recieve logs which supports the fluentforward protocol. So you can forward your logs from your fluentD agent to opentelemetry collector.
-
-### Collect Logs Using FluentD in SigNoz cloud
- * Add otel collector binary to your VM by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
-
- * Add fluentforward reciever to your `config.yaml`
- ```yaml
- receivers:
- fluentforward:
- endpoint: 0.0.0.0:24224
- ```
- Here we have used port 24224 for listening in fluentforward protocol, but you can change it to a port you want.
- You can read more about fluentforward receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver).
-
- * Modify your `config.yaml` and add the above receiver
- ```yaml {4}
- service:
- ....
- logs:
- receivers: [otlp, fluentforward]
- processors: [batch]
- exporters: [otlp]
- ```
-
- * Add the following to your fluentD config to forward the logs to otel collector.
- ```
- >
- @type forward
- send_timeout 60s
- recover_wait 10s
- hard_timeout 60s
-
-
- name myserver1
- host localhost
- port 24224
-
-
- ```
- In this config we are matching a directive and forwarding logs to the otel collector which is listening on port 24224. Replace `` with your directive name.
- Also we are assuming that you are running the fluentD binary on the host. If not, the value of `host` might change depending on your environment.
- * Once you make this changes you can restart fluentD and otel-binary, and you will be able to see the logs in SigNoz.
-
- * To properly transform your existing log model into opentelemetry [log](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md) model you can use the different processors provided by opentelemetry. [link](/docs/userguide/logs#processors-available-for-processing-logs)
-
- eg:-
- ```yaml
- processors:
- logstransform:
- operators:
- - type: trace_parser
- trace_id:
- parse_from: attributes.trace_id
- span_id:
- parse_from: attributes.span_id
- - type: remove
- field: attributes.trace_id
- - type: remove
- field: attributes.span_id
- ```
- The operations in the above processor will parse the trace_id and span_id from log to opentelemetry log model and remove them from attributes.
-
-## Collect Logs Using FluentD in Self-Hosted SigNoz
-### Steps to recieve logs from FluentD:
-* Add fluentforward reciever to your `otel-collector-config.yaml` which is present inside `deploy/docker/clickhouse-setup`
- ```
- receivers:
- fluentforward:
- endpoint: 0.0.0.0:24224
- ```
- Here we have used port 24224 for listening in fluentforward protocol, but you can change it to a port you want.
- You can read more about fluentforward receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver).
-
-* Uncomment the exporter and pipleline for logs and make the following change in `otel-collector-config.yaml`
- ```yaml
- exporters:
- ...
-
- clickhouselogsexporter:
- dsn: tcp://clickhouse:9000/
- timeout: 5s
- sending_queue:
- queue_size: 100
- retry_on_failure:
- enabled: true
- initial_interval: 5s
- max_interval: 30s
- max_elapsed_time: 300s
-
- ...
-
- service:
- ...
-
- logs:
- receivers: [ otlp, fluentforward ]
- processors: [ batch ]
- exporters: [ clickhouselogsexporter ]
- ```
- Here we are adding our clickhouse exporter and creating a pipeline which will collect logs from `fluentforward` receiver, processing it using batch processor and export it to clickhouse.
-
-* Expose the port in port for otel-collector in `docker-compose-core.yaml` file present in `deploy/docker/clickhouse-setup`
- ```yaml
- otel-collector:
- ...
- ports:
- - "24224:24224"
- ```
-
-* Change the fluentD config to forward the logs to otel collector.
- ```
-
-
-
- @type forward
- send_timeout 60s
- recover_wait 10s
- hard_timeout 60s
-
-
- name myserver1
- host
- port 24224
-
-
- ```
- In this example we are generating sample logs and then forwarding them to the otel collector which is listening on port 24224.
- `` has to be replaced by the host where otel-collector is running. For more info check [troubleshooting](/docs/install/troubleshooting#signoz-otel-collector-address-grid).
-* Once you make this changes you can restart fluentD and SignNoz, and you will be able to see the logs in SigNoz.
-* To properly transform your existing log model into opentelemetry [log](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md) model you can use the different processors provided by opentelemetry. [link](/docs/userguide/logs#processors-available-for-processing-logs)
-
- eg:-
- ```yaml
- processors:
- logstransform:
- operators:
- - type: trace_parser
- trace_id:
- parse_from: attributes.trace_id
- span_id:
- parse_from: attributes.span_id
- - type: remove
- field: attributes.trace_id
- - type: remove
- field: attributes.span_id
- ```
- The operations in the above processor will parse the trace_id and span_id from log to opentelemetry log model and remove them from attributes.
+If you use FluentD to collect logs in your stack, this tutorial will help you send logs from FluentD to SigNoz.
+
+SigNoz uses the OpenTelemetry collector to receive logs, which supports the `fluentforward` protocol. You can forward your logs from your FluentD agent to the OpenTelemetry collector.
+
+
+
+
+### Collect Logs Using FluentD in SigNoz Cloud
+
+1. **Add OpenTelemetry Collector Binary**
+ Follow this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/) to add the OpenTelemetry collector binary to your VM.
+
+2. **Configure FluentForward Receiver**
+ Add the following to your `config.yaml`:
+ ```yaml
+ receivers:
+ fluentforward:
+ endpoint: 0.0.0.0:24224
+ ```
+ > You can change the port if needed. Learn more about the `fluentforward` receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver).
+
+3. **Modify the Service Section**
+ Update your `config.yaml`:
+ ```yaml
+ service:
+ ...
+ logs:
+ receivers: [otlp, fluentforward]
+ processors: [batch]
+ exporters: [otlp]
+ ```
+
+4. **Update FluentD Configuration**
+ Add the following to your FluentD configuration to forward logs to the OpenTelemetry collector:
+ ```
+ <match **>
+   @type forward
+   send_timeout 60s
+   recover_wait 10s
+   hard_timeout 60s
+
+   <server>
+     name myserver1
+     host localhost
+     port 24224
+   </server>
+ </match>
+ ```
+ Replace the `**` match pattern with your directive name. If FluentD is not running on the same host, adjust the `host` value accordingly.
+
+5. **Restart Services**
+ Restart FluentD and the OpenTelemetry collector binary. Logs should now appear in SigNoz.
+
+6. **Transform Logs to OpenTelemetry Model**
+ Use processors in OpenTelemetry to transform logs. Example:
+ ```yaml
+ processors:
+ logstransform:
+ operators:
+ - type: trace_parser
+ trace_id:
+ parse_from: attributes.trace_id
+ span_id:
+ parse_from: attributes.span_id
+ - type: remove
+ field: attributes.trace_id
+ - type: remove
+ field: attributes.span_id
+ ```
+
+
+
+
+
+### Collect Logs Using FluentD in Self-Hosted SigNoz
+
+1. **Configure FluentForward Receiver**
+ Add the following to your `otel-collector-config.yaml` (inside `deploy/docker/clickhouse-setup`):
+ ```yaml
+ receivers:
+ fluentforward:
+ endpoint: 0.0.0.0:24224
+ ```
+ > You can change the port if needed. Learn more about the `fluentforward` receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver).
+
+2. **Update Exporter and Pipeline**
+ Uncomment the exporter and pipeline for logs in `otel-collector-config.yaml` and make the following changes:
+ ```yaml
+ exporters:
+ ...
+ clickhouselogsexporter:
+ dsn: tcp://clickhouse:9000/
+ timeout: 5s
+ sending_queue:
+ queue_size: 100
+ retry_on_failure:
+ enabled: true
+ initial_interval: 5s
+ max_interval: 30s
+ max_elapsed_time: 300s
+
+ service:
+ ...
+ logs:
+ receivers: [otlp, fluentforward]
+ processors: [batch]
+ exporters: [clickhouselogsexporter]
+ ```
+
+3. **Expose Collector Port**
+ Modify `docker-compose-core.yaml` to expose the port for the OpenTelemetry collector:
+ ```yaml
+ otel-collector:
+ ...
+ ports:
+ - "24224:24224"
+ ```
+
+4. **Update FluentD Configuration**
+ Update your FluentD configuration:
+ ```
+ <match **>
+   @type forward
+   send_timeout 60s
+   recover_wait 10s
+   hard_timeout 60s
+
+   <server>
+     name myserver1
+     host <host>
+     port 24224
+   </server>
+ </match>
+ ```
+ Replace `<host>` with the host where the OpenTelemetry collector is running. For troubleshooting, check [here](/docs/install/troubleshooting#signoz-otel-collector-address-grid).
+
+5. **Restart Services**
+ Restart FluentD and SigNoz. Logs should now appear in SigNoz.
+
+6. **Transform Logs to OpenTelemetry Model**
+ Use processors in OpenTelemetry to transform logs. Example:
+ ```yaml
+ processors:
+ logstransform:
+ operators:
+ - type: trace_parser
+ trace_id:
+ parse_from: attributes.trace_id
+ span_id:
+ parse_from: attributes.span_id
+ - type: remove
+ field: attributes.trace_id
+ - type: remove
+ field: attributes.span_id
+ ```
+
+
+
diff --git a/data/docs/userguide/heroku_logs_to_signoz.mdx b/data/docs/userguide/heroku_logs_to_signoz.mdx
index fe6803acf..6f861b7a5 100644
--- a/data/docs/userguide/heroku_logs_to_signoz.mdx
+++ b/data/docs/userguide/heroku_logs_to_signoz.mdx
@@ -2,78 +2,98 @@
date: 2024-06-06
title: Stream Logs from Heroku to SigNoz
id: heroku_logs_to_signoz
+hide_table_of_contents: true
---
-If you are running your applications on heroku, you can stream logs from Heroku to SigNoz using [httpsdrain](https://devcenter.heroku.com/articles/log-drains#https-drains).
+If you are running your applications on **Heroku**, you can stream logs to **SigNoz** using [httpsdrain](https://devcenter.heroku.com/articles/log-drains#https-drains).
+
-## Stream Heroku logs to SigNoz in SigNoz cloud
+
-* Use the heroku cli to add a https drain
- ```sh
- heroku drains:add https://:@ingest..signoz.cloud:443/logs/heroku -a
- ```
+## Stream Heroku Logs to SigNoz Cloud
- Set the values of ``, ``, `` and ``.
+### Add an HTTPS Drain
+Use the Heroku CLI to add an HTTPS drain to your SigNoz cloud endpoint:
- `` is name of your instance. Ex:- If the url is `https://cpvo-test.us.signoz.cloud` the `TENANT_NAME` is `cpvo-test`.
+```bash
+heroku drains:add https://<TENANT_NAME>:<SIGNOZ_INGESTION_KEY>@ingest.<REGION>.signoz.cloud:443/logs/heroku -a <YOUR_APP_NAME>
+```
- `` is the ingestion key.
+- **`<TENANT_NAME>`**: The name of your SigNoz Cloud instance
+  - Example: If the URL is `https://cpvo-test.us.signoz.cloud`, the `TENANT_NAME` is `cpvo-test`
+- **`<SIGNOZ_INGESTION_KEY>`**: Your SigNoz Cloud [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/)
+- **`<REGION>`**: Your chosen [region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint) for SigNoz Cloud
+- **`<YOUR_APP_NAME>`**: The name of your Heroku application
- `` is the name of the application where you want to add the drain.
-
- Depending on the choice of your region for SigNoz cloud, the otlp endpoint will vary according to this table.
+Once the drain is added, verify the logs in SigNoz under the Logs tab.
- | Region | Endpoint |
- | ------ | -------------------------- |
- | US | ingest.us.signoz.cloud:443 |
- | IN | ingest.in.signoz.cloud:443 |
- | EU | ingest.eu.signoz.cloud:443 |
+
-* Once added you can verify by going to the SigNoz UI.
+
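As a quick sanity check, the drain URL can be assembled from these pieces; the values below are placeholders, not real credentials:

```python
# Placeholder values -- substitute your own tenant name, ingestion
# key, region and Heroku app name.
tenant_name = "cpvo-test"
ingestion_key = "dummy-ingestion-key"  # not a real key
region = "us"
app_name = "my-heroku-app"

# Same shape as the drain URL in the CLI command above.
drain_url = (
    f"https://{tenant_name}:{ingestion_key}"
    f"@ingest.{region}.signoz.cloud:443/logs/heroku"
)
print(f"heroku drains:add {drain_url} -a {app_name}")
```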
+## Stream Heroku Logs to Self-Hosted SigNoz
-## Stream Heroku logs to SigNoz in Self-Hosted SigNoz
+### 1. Expose a Port in `docker-compose-minimal.yaml`
+Modify the [`docker-compose-minimal.yaml`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/docker-compose-minimal.yaml) file to expose port `8081` for receiving logs:
-* Modify the `docker-compose.yaml` file present inside `deploy/docker/clickhouse-setup` to expose a port, in this case `8081`.
- ```yaml {8}
- ...
- otel-collector:
- image: signoz/signoz-otel-collector:0.88.11
- command: ["--config=/etc/otel-collector-config.yaml"]
- volumes:
- - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
- ports:
- - "8081:8081"
- ...
- ```
+```yaml:/deploy/docker/clickhouse-setup/docker-compose-minimal.yaml {8}
+...
+otel-collector:
+ image: signoz/signoz-otel-collector:0.88.11
+ command: ["--config=/etc/otel-collector-config.yaml"]
+ volumes:
+ - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
+ ports:
+ - "8081:8081"
+...
+```
-* Add the httplogreceiver reciever to `otel-collector-config.yaml` which is present inside `deploy/docker/clickhouse-setup`
- ```yaml {2-10}
- receivers:
- httplogreceiver/heroku:
- endpoint: 0.0.0.0:8081
- source: heroku
- ...
- ```
+### 2. Configure `httplogreceiver`
+Add the **httplogreceiver** to your [`otel-collector-config.yaml`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/otel-collector-config.yaml) file:
-* Next we will modify our pipeline inside `otel-collector-config.yaml` to include the receiver we have created above.
- ```yaml {4}
- service:
- ....
- logs:
- receivers: [otlp, httplogreceiver/heroku]
- processors: [batch]
- exporters: [clickhouselogsexporter]
- ```
+```yaml:/deploy/docker/clickhouse-setup/otel-collector-config.yaml {2-4}
+receivers:
+ httplogreceiver/heroku:
+ endpoint: 0.0.0.0:8081
+ source: heroku
+...
+```
-* Now we can restart the otel collector container so that new changes are applied and we can forward our logs to port `8081`.
+### 3. Update the Pipeline Configuration
+Modify the pipeline in the [`otel-collector-config.yaml`](https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/otel-collector-config.yaml) file to include the new receiver:
-* Use the heroku cli to add a https drain
- ```sh
- heroku drains:add http://:8081 -a
- ```
- Replace IP with IP of the system where your collector is running.
- For more info check [troubleshooting](/docs/install/troubleshooting#signoz-otel-collector-address-grid).
-* Once added you can verify by going to the SigNoz UI.
\ No newline at end of file
+```yaml:/deploy/docker/clickhouse-setup/otel-collector-config.yaml {4}
+service:
+ ...
+ logs:
+ receivers: [otlp, httplogreceiver/heroku]
+ processors: [batch]
+ exporters: [clickhouselogsexporter]
+```
+
+### 4. Restart the OTEL Collector
+Restart the **otel-collector** container to apply the changes:
+
+```bash
+docker-compose restart otel-collector
+```
+
+### 5. Add an HTTPS Drain
+Use the Heroku CLI to add an HTTPS drain pointing to the OTEL Collector:
+
+```bash
+heroku drains:add http://<IP>:8081 -a <YOUR_APP_NAME>
+```
+- Replace `<IP>` with the IP address of the machine running the **OTEL Collector**.
+- Replace `<YOUR_APP_NAME>` with the name of your Heroku application.
+
+Refer to the [troubleshooting guide](https://signoz.io/docs/install/troubleshooting/#signoz-otel-collector-address-grid) for finding the correct SigNoz host address.
+
+### 6. Verify the Logs
+Once the drain is added, verify the logs in SigNoz under the Logs tab.
+
+
+
+
diff --git a/data/docs/userguide/logstash_to_signoz.mdx b/data/docs/userguide/logstash_to_signoz.mdx
index 7135a5b2e..77b54d761 100644
--- a/data/docs/userguide/logstash_to_signoz.mdx
+++ b/data/docs/userguide/logstash_to_signoz.mdx
@@ -1,102 +1,129 @@
---
-date: 2024-06-06
+date: 2024-12-18
title: Logstash to SigNoz
id: logstash_to_signoz
+hide_table_of_contents: true
---
+If you use Logstash to collect logs in your stack, this tutorial will help you send logs from Logstash to SigNoz.
-If you use logstash to collect logs in your stack with this tutotrial you will be able to send logs from logstash to SigNoz.
-
-At SigNoz we use opentelemetry collector to recieve logs which supports the TCP protocol. So you can forward your logs from your logstash agent to opentelemetry collector
-
-### Collect Logs Using Logstash in SigNoz cloud
- * Add otel collector binary to your VM by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
- * Add the reciever to your `config.yaml`
- ```yaml
- receivers:
- tcplog/logstash:
- max_log_size: 1MiB
- listen_address: "0.0.0.0:2255"
- attributes: {}
- resource: {}
- add_attributes: false
- operators: []
- ```
- Here we have used port 2255 for listening in TCP protocol, but you can change it to a port you want.
- You can read more about tcplog reciver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/tcplogreceiver).
- * Modify your `config.yaml` and add the above receiver
- ```yaml {4}
- service:
- ....
- logs:
- receivers: [otlp, tcplog/logstash]
- processors: [batch]
- exporters: [otlp]
- ```
- * Change the logstash config to forward the logs to otel collector.
- ```
- output {
- tcp {
- codec => json_lines # this is required otherwise it will send eveything in a single line
- host => "localhost"
- port => 2255
- }
- }
- ```
- Here we are configuring logstash to send logs to otel-collector that we ran in the previous step, which is listening on port 2255.
- Also we are assuming that you are running the logstash binary on the host. If not, the value of `host` might change depending on your environment.
-
- * Once you make this changes you can otel binary and logstash, and you will be able to see the logs in SigNoz.
- * To properly transform your existing log model into opentelemetry [log](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md) model you can use the different processors provided by opentelemetry [link](/docs/userguide/logs#processors-available-for-processing-logs).
-
-## Collect Logs Using Logstash in Self-Hosted SigNoz
-
-* Add the reciever to your `otel-collector-config.yaml` which is present inside `deploy/docker/clickhouse-setup`
- ```yaml
- receivers:
- tcplog/logstash:
- max_log_size: 1MiB
- listen_address: "0.0.0.0:2255"
- attributes: {}
- resource: {}
- add_attributes: false
- operators: []
- ```
- Here we have used port 2255 for listening in TCP protocol, but you can change it to a port you want.
- You can read more about tcplog reciver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/tcplogreceiver).
-
-* Update the pipleline for logs and make the following change in `otel-collector-config.yaml`
- ```yaml
- service:
- ...
-
- logs:
- receivers: [ otlp, tcplog/logstash ]
- processors: [ batch ]
- exporters: [ clickhouselogsexporter ]
- ```
- Here we are adding our clickhouse exporter and creating a pipeline which will collect logs from `tcplog/logstash` receiver, processing it using batch processor and export it to clickhouse.
-
-* Expose the port in port for otel-collector in `docker-compose.yaml` file present in `deploy/docker/clickhouse-setup`
- ```yaml
- otel-collector:
- ...
- ports:
- - "2255:2255"
- ```
-
-* Change the logstash config to forward the logs to otel collector.
- ```
- output {
- tcp {
- codec => json_lines # this is required otherwise it will send eveything in a single line
- host => ""
- port => 2255
- }
- }
- ```
- In this example we are generating sample logs and then forwarding them to the otel collector which is listening on port 2255.
- `` has to be replaced by the host where otel-collector is running. For more info check [troubleshooting](/docs/install/troubleshooting#signoz-otel-collector-address-grid).
-
-* Once you make this changes you can restart logstash and SignNoz, and you will be able to see the logs in SigNoz.
-* To properly transform your existing log model into opentelemetry [log](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md) model you can use the different processors provided by opentelemetry. [link](/docs/userguide/logs#processors-available-for-processing-logs)
\ No newline at end of file
+At SigNoz, we use the OpenTelemetry collector to receive logs, which supports the TCP protocol. You can forward your logs from your Logstash agent to the OpenTelemetry collector.
+
+
+
+
+
+### Collect Logs Using Logstash in SigNoz Cloud
+
+1. **Add OpenTelemetry Collector Binary**:
+ Add the OpenTelemetry collector binary to your VM by following this [guide](https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/).
+
+2. **Configure the Receiver**:
+ Add the receiver to your `config.yaml`:
+ ```yaml
+ receivers:
+ tcplog/logstash:
+ max_log_size: 1MiB
+ listen_address: "0.0.0.0:2255"
+ attributes: {}
+ resource: {}
+ add_attributes: false
+ operators: []
+ ```
+ - Port `2255` is used here for listening via the TCP protocol, but you can use a port of your choice.
+ - Learn more about the `tcplog` receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/tcplogreceiver).
+
+3. **Modify the `config.yaml`**:
+ Update your service configuration to include the receiver:
+ ```yaml {4}
+ service:
+ ...
+ logs:
+ receivers: [otlp, tcplog/logstash]
+ processors: [batch]
+ exporters: [otlp]
+ ```
+
+4. **Update Logstash Configuration**:
+ Change the Logstash config to forward logs to the OpenTelemetry collector:
+ ```
+ output {
+ tcp {
+ codec => json_lines # Ensures logs are sent in JSON format line-by-line
+ host => "localhost"
+ port => 2255
+ }
+ }
+ ```
+ - This config assumes the Logstash binary is running on the same host. Adjust the `host` value if Logstash is running elsewhere.
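To see what the `json_lines` codec actually puts on the wire, here is a small self-contained sketch: a stand-in TCP listener (playing the role of the collector's `tcplog` receiver, on an ephemeral port instead of 2255) receives one newline-delimited JSON event. The event fields are illustrative.

```python
import json
import socket
import threading

received = []

def listener(server_sock: socket.socket) -> None:
    # Accept one connection and read a newline-delimited JSON event,
    # as the collector's tcplog receiver would.
    conn, _ = server_sock.accept()
    with conn:
        line = conn.makefile().readline()
        received.append(json.loads(line))

# Stand-in for the OpenTelemetry collector's tcplog receiver.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port instead of 2255
server.listen(1)
port = server.getsockname()[1]
thread = threading.Thread(target=listener, args=(server,))
thread.start()

# One event, encoded the way Logstash's tcp output with
# `codec => json_lines` sends it: a JSON object followed by a newline.
event = {"message": "user login", "host": "web-1"}
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall((json.dumps(event) + "\n").encode())

thread.join()
server.close()
print(received[0]["message"])  # → user login
```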
+
+5. **Start the Services**:
+ Once the changes are made, start the OpenTelemetry binary and Logstash. You should see the logs in SigNoz.
+
+6. **Transform Logs**:
+ Use OpenTelemetry processors to transform your log model into the [OpenTelemetry log data model](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md). Learn more about available processors [here](/docs/userguide/logs#processors-available-for-processing-logs).
+
+
+
+
+
+### Collect Logs Using Logstash in Self-Hosted SigNoz
+
+1. **Configure the Receiver**:
+ Add the receiver to your `otel-collector-config.yaml` located in `deploy/docker/clickhouse-setup`:
+ ```yaml
+ receivers:
+ tcplog/logstash:
+ max_log_size: 1MiB
+ listen_address: "0.0.0.0:2255"
+ attributes: {}
+ resource: {}
+ add_attributes: false
+ operators: []
+ ```
+ - Port `2255` is used here for listening via the TCP protocol, but you can use a port of your choice.
+ - Learn more about the `tcplog` receiver [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/tcplogreceiver).
+
+2. **Update the Log Pipeline**:
+ Modify the `otel-collector-config.yaml` to include the receiver in the logs pipeline:
+ ```yaml
+ service:
+ ...
+
+ logs:
+ receivers: [otlp, tcplog/logstash]
+ processors: [batch]
+ exporters: [clickhouselogsexporter]
+ ```
+
+3. **Expose the Port**:
+ Update the `docker-compose.yaml` file in `deploy/docker/clickhouse-setup` to expose the port:
+ ```yaml
+ otel-collector:
+ ...
+ ports:
+ - "2255:2255"
+ ```
+
+4. **Update Logstash Configuration**:
+ Modify the Logstash config to forward logs to the OpenTelemetry collector:
+ ```
+ output {
+ tcp {
+ codec => json_lines # Ensures logs are sent in JSON format line-by-line
+ host => ""
+ port => 2255
+ }
+ }
+ ```
+ - Replace `` with the actual host where the OpenTelemetry collector is running. Refer to the [troubleshooting guide](/docs/install/troubleshooting#signoz-otel-collector-address-grid) for details.
+
+5. **Start the Services**:
+ Restart Logstash and SigNoz. You should see the logs in SigNoz.
+
+6. **Transform Logs**:
+ Use OpenTelemetry processors to transform your log model into the [OpenTelemetry log data model](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md). Learn more about available processors [here](/docs/userguide/logs#processors-available-for-processing-logs).
+
+
+
+
diff --git a/data/docs/userguide/python-logs-auto-instrumentation.mdx b/data/docs/userguide/python-logs-auto-instrumentation.mdx
index 7aa9a0a30..af11afb9a 100644
--- a/data/docs/userguide/python-logs-auto-instrumentation.mdx
+++ b/data/docs/userguide/python-logs-auto-instrumentation.mdx
@@ -16,9 +16,9 @@ OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
```
## Example application
-Here is a sample python application
+Here is a sample Python application.
-1. Create a file named main.py and paste the following code
+1. Create a file named `main.py` and paste the following code:
```python
from flask import Flask
import logging
@@ -62,36 +62,28 @@ Here is a sample python application
You will be able to see the otel logs on the console once you visit `http://localhost:5000`
-If you want to send data to SigNoz cloud or self host SigNoz the run command will change and will be described in the next steps
+Run the command below to start sending your logs and traces to SigNoz.
-
+
-For SigNoz Cloud the run command will be
```bash
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
-OTEL_EXPORTER_OTLP_ENDPOINT= \
+OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.<REGION>.signoz.cloud:443 \
OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=<SIGNOZ_INGESTION_KEY> \
opentelemetry-instrument --traces_exporter otlp --metrics_exporter otlp --logs_exporter otlp python main.py
```
-- The value of `SIGNOZ_ENDPOINT` will be `https://ingest.{region}.signoz.cloud:443` where depending on the choice of your region for SigNoz cloud, the otlp endpoint will vary according to this table.
+- Replace `<SIGNOZ_INGESTION_KEY>` with your SigNoz Cloud [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/)
+- Set `<REGION>` to match your [SigNoz Cloud region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint)
-| Region | Endpoint |
-| ------ | -------------------------- |
-| US | ingest.us.signoz.cloud:443 |
-| IN | ingest.in.signoz.cloud:443 |
-| EU | ingest.eu.signoz.cloud:443 |
-
-- The value of `INGESTION_KEY` is your ingestion key.
-For SigNoz Cloud the run command will be
```bash
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
diff --git a/data/docs/userguide/send-cloudwatch-logs-to-signoz.mdx b/data/docs/userguide/send-cloudwatch-logs-to-signoz.mdx
index 8e2a78eaa..bdb133640 100644
--- a/data/docs/userguide/send-cloudwatch-logs-to-signoz.mdx
+++ b/data/docs/userguide/send-cloudwatch-logs-to-signoz.mdx
@@ -1,5 +1,5 @@
---
-date: 2024-06-06
+date: 2024-12-17
id: send-cloudwatch-logs-to-signoz
title: Send Cloudwatch Logs to SigNoz
description: Send your AWS Cloudwatch logs to SigNoz Cloud/Self-Host
@@ -17,9 +17,7 @@ SigNoz effectively addresses these challenges, and in the following steps, we'll
## Setup
-You can choose from the two options below.
-
-
+
**Step 1** : Setup the OTel Collector
diff --git a/data/docs/userguide/send-logs-http.mdx b/data/docs/userguide/send-logs-http.mdx
index ad7940e3a..684089bc0 100644
--- a/data/docs/userguide/send-logs-http.mdx
+++ b/data/docs/userguide/send-logs-http.mdx
@@ -1,50 +1,53 @@
---
-date: 2024-06-06
+date: 2024-12-17
id: send-logs-http
title: Sending Logs to SigNoz over HTTP
+hide_table_of_contents: true
---
## Overview
-This guide provides detailed instructions on how to send logs to SigNoz using HTTP. Sending logs over HTTP offers flexibility, allowing users to create custom wrappers, directly transmit logs, or integrate existing loggers, making it a versatile choice for diverse use-cases.
+This documentation provides detailed instructions on how to send logs to SigNoz using HTTP. Sending logs over HTTP offers flexibility, allowing you to create
+custom wrappers, directly transmit logs, or integrate existing loggers, making it a versatile choice for diverse use-cases.
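+
+Before looking at the payload fields in detail, here is a minimal sketch of such a request built with the Python standard library. The endpoint and all field values are placeholders; the request is constructed but intentionally not sent.

```python
import json
import urllib.request

# Placeholder endpoint -- the actual host, port and path depend on
# your SigNoz setup (see the payload and endpoint details below).
endpoint = "http://localhost:8082"

# A single illustrative log record; the field names follow the
# payload structure described on this page.
logs = [{
    "timestamp": 1734430000000000000,  # nanoseconds since the epoch
    "severity_text": "INFO",
    "severity_number": 9,
    "attributes": {"service": "auth"},
    "body": "user login succeeded",
}]

req = urllib.request.Request(
    endpoint,
    data=json.dumps(logs).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here
# because no collector is running in this sketch.
print(req.get_method(), len(req.data))
```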
## Payload Structure
The payload is an array of logs in JSON format. It follows a structure similar to [OTEL Logs Data Model](https://opentelemetry.io/docs/specs/otel/logs/data-model/).
Below is how the payload would look like:
-```
+```json
[
{
"timestamp": ,
"trace_id": ,
"span_id": ,
- "trace_flags":
+ "trace_flags": ,
"severity_text": ,
"severity_number": ,
"attributes":