diff --git a/pipeline/outputs/azure.md b/pipeline/outputs/azure.md
index eda87d29d..3e4bf7b04 100644
--- a/pipeline/outputs/azure.md
+++ b/pipeline/outputs/azure.md
@@ -20,6 +20,7 @@ To get more details about how to setup Azure Log Analytics, please refer to the
| Log_Type_Key | If included, the value for this key will be looked upon in the record and if present, will over-write the `log_type`. If not found then the `log_type` value will be used. | |
| Time\_Key | Optional parameter to specify the key name where the timestamp will be stored. | @timestamp |
| Time\_Generated | If enabled, the HTTP request header 'time-generated-field' will be included so Azure can override the timestamp with the key specified by 'time_key' option. | off |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
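+
+For illustration only, a minimal configuration sketch (the `Customer_ID` and `Shared_Key` values below are placeholders) that enables two dedicated workers for this output:
+
+```text
+[OUTPUT]
+    Name         azure
+    Match        *
+    # Placeholder credentials: replace with your real Customer_ID and Shared_Key
+    Customer_ID  abc
+    Shared_Key   def
+    # Two dedicated flush threads; tune this to your throughput needs
+    Workers      2
+```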
## Getting Started
@@ -61,4 +62,3 @@ Another example using the `Log_Type_Key` with [record-accessor](https://docs.flu
Customer_ID abc
Shared_Key def
```
-
diff --git a/pipeline/outputs/azure_blob.md b/pipeline/outputs/azure_blob.md
index c775379aa..1c23806ff 100644
--- a/pipeline/outputs/azure_blob.md
+++ b/pipeline/outputs/azure_blob.md
@@ -31,6 +31,7 @@ We expose different configuration properties. The following table lists all the
| emulator\_mode | If you want to send data to an Azure emulator service like [Azurite](https://github.com/Azure/Azurite), enable this option so the plugin will format the requests to the expected format. | off |
| endpoint | If you are using an emulator, this option allows you to specify the absolute HTTP address of such service. e.g: [http://127.0.0.1:10000](http://127.0.0.1:10000). | |
| tls | Enable or disable TLS encryption. Note that Azure service requires this to be turned on. | off |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
## Getting Started
@@ -128,4 +129,3 @@ Azurite Queue service is successfully listening at http://127.0.0.1:10001
127.0.0.1 - - [03/Sep/2020:17:40:03 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log HTTP/1.1" 201 -
127.0.0.1 - - [03/Sep/2020:17:40:04 +0000] "PUT /devstoreaccount1/logs/kubernetes/var.log.containers.app-default-96cbdef2340.log?comp=appendblock HTTP/1.1" 201 -
```
-
diff --git a/pipeline/outputs/azure_kusto.md b/pipeline/outputs/azure_kusto.md
index 5fd4075fc..19cf72157 100644
--- a/pipeline/outputs/azure_kusto.md
+++ b/pipeline/outputs/azure_kusto.md
@@ -63,6 +63,7 @@ By default, Kusto will insert incoming ingestions into a table by inferring the
| tag_key | The key name of tag. If `include_tag_key` is false, This property is ignored. | `tag` |
| include_time_key | If enabled, a timestamp is appended to output. The key name is used `time_key` property. | `On` |
| time_key | The key name of time. If `include_time_key` is false, This property is ignored. | `timestamp` |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### Configuration File
diff --git a/pipeline/outputs/azure_logs_ingestion.md b/pipeline/outputs/azure_logs_ingestion.md
index e008ac4da..dbf7678b9 100644
--- a/pipeline/outputs/azure_logs_ingestion.md
+++ b/pipeline/outputs/azure_logs_ingestion.md
@@ -37,6 +37,7 @@ To get more details about how to setup these components, please refer to the fol
| time\_key | _Optional_ - Specify the key name where the timestamp will be stored. | `@timestamp` |
| time\_generated | _Optional_ - If enabled, will generate a timestamp and append it to JSON. The key name is set by the 'time_key' parameter. | `true` |
| compress | _Optional_ - Enable HTTP payload gzip compression. | `true` |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
## Getting Started
@@ -58,7 +59,7 @@ Use this configuration to quickly get started:
Name tail
Path /path/to/your/sample.log
Tag sample
- Key RawData
+ Key RawData
# Or use other plugins Plugin
# [INPUT]
# Name cpu
diff --git a/pipeline/outputs/bigquery.md b/pipeline/outputs/bigquery.md
index 8ef7a469f..dd2c278a9 100644
--- a/pipeline/outputs/bigquery.md
+++ b/pipeline/outputs/bigquery.md
@@ -59,6 +59,7 @@ You must configure workload identity federation in GCP before using it with Flue
| pool\_id | GCP workload identity pool where the identity provider was created. Used to construct the full resource name of the identity provider. | |
| provider\_id | GCP workload identity provider. Used to construct the full resource name of the identity provider. Currently only AWS accounts are supported. | |
| google\_service\_account | Email address of the Google service account to impersonate. The workload identity provider must have permissions to impersonate this service account, and the service account must have permissions to access Google BigQuery resources (e.g. `write` access to tables) | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
See Google's [official documentation](https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll) for further details.
@@ -77,4 +78,3 @@ If you are using a _Google Cloud Credentials File_, the following configuration
dataset_id my_dataset
table_id dummy_table
```
-
diff --git a/pipeline/outputs/chronicle.md b/pipeline/outputs/chronicle.md
index ddc945c88..5298ec584 100644
--- a/pipeline/outputs/chronicle.md
+++ b/pipeline/outputs/chronicle.md
@@ -34,6 +34,7 @@ Fluent Bit's Chronicle output plugin uses a JSON credentials file for authentica
| log\_type | The log type to parse logs as. Google Chronicle supports parsing for [specific log types only](https://cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers). | |
| region | The GCP region in which to store security logs. Currently, there are several supported regions: `US`, `EU`, `UK`, `ASIA`. Blank is handled as `US`. | |
| log\_key | By default, the whole log record will be sent to Google Chronicle. If you specify a key name with this option, then only the value of that key will be sent to Google Chronicle. | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
See Google's [official documentation](https://cloud.google.com/chronicle/docs/reference/ingestion-api) for further details.
diff --git a/pipeline/outputs/cloudwatch.md b/pipeline/outputs/cloudwatch.md
index 74a17c673..7fc5fee81 100644
--- a/pipeline/outputs/cloudwatch.md
+++ b/pipeline/outputs/cloudwatch.md
@@ -34,6 +34,7 @@ See [here](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b
| profile | Option to specify an AWS Profile for credentials. Defaults to `default` |
| auto\_retry\_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to `true`. |
| external\_id | Specify an external ID for the STS API, can be used with the role\_arn parameter if your role requires an external ID. |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. Default: `1`. |
## Getting Started
@@ -80,28 +81,6 @@ The following AWS IAM permissions are required to use this plugin:
}
```
-### Worker support
-
-Fluent Bit 1.7 adds a new feature called `workers` which enables outputs to have dedicated threads. This `cloudwatch_logs` plugin has partial support for workers in Fluent Bit 2.1.11 and prior. **2.1.11 and prior, the plugin can support a single worker; enabling multiple workers will lead to errors/indeterminate behavior.**
-Starting from Fluent Bit 2.1.12, the `cloudwatch_logs` plugin added full support for workers, meaning that more than one worker can be configured.
-
-Example:
-
-```
-[OUTPUT]
- Name cloudwatch_logs
- Match *
- region us-east-1
- log_group_name fluent-bit-cloudwatch
- log_stream_prefix from-fluent-bit-
- auto_create_group On
- workers 1
-```
-
-If you enable workers, you are enabling one or more dedicated threads for your CloudWatch output.
-We recommend starting with 1 worker, evaluating the performance, and then enabling more workers if needed.
-For most users, the plugin can provide sufficient throughput with 0 or 1 workers.
-
### Log Stream and Group Name templating using record\_accessor syntax
Sometimes, you may want the log group or stream name to be based on the contents of the log record itself. This plugin supports templating log group and stream names using Fluent Bit [record\_accessor](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/record-accessor) syntax.
diff --git a/pipeline/outputs/datadog.md b/pipeline/outputs/datadog.md
index a89649a35..4441ea7bf 100644
--- a/pipeline/outputs/datadog.md
+++ b/pipeline/outputs/datadog.md
@@ -25,6 +25,7 @@ Before you begin, you need a [Datadog account](https://app.datadoghq.com/signup)
| dd_source | _Recommended_ - A human readable name for the underlying technology of your service (e.g. `postgres` or `nginx`). If unset, Datadog will look for the source in the [`ddsource` attribute](https://docs.datadoghq.com/logs/log_configuration/pipelines/?tab=source#source-attribute). | |
| dd_tags | _Optional_ - The [tags](https://docs.datadoghq.com/tagging/) you want to assign to your logs in Datadog. If unset, Datadog will look for the tags in the [`ddtags' attribute](https://docs.datadoghq.com/api/latest/logs/#send-logs). | |
| dd_message_key | By default, the plugin searches for the key 'log' and remap the value to the key 'message'. If the property is set, the plugin will search the property name key. | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### Configuration File
diff --git a/pipeline/outputs/elasticsearch.md b/pipeline/outputs/elasticsearch.md
index 7f6f4a708..8e5288a44 100644
--- a/pipeline/outputs/elasticsearch.md
+++ b/pipeline/outputs/elasticsearch.md
@@ -48,7 +48,7 @@ The **es** output plugin, allows to ingest your records into an [Elasticsearch](
| Trace\_Error | If elasticsearch return an error, print the elasticsearch API request and response \(for diag only\) | Off |
| Current\_Time\_Index | Use current time for index generation instead of message record | Off |
| Suppress\_Type\_Name | When enabled, mapping types is removed and `Type` option is ignored. If using Elasticsearch 8.0.0 or higher - it [no longer supports mapping types](https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html), so it shall be set to On. | Off |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
> The parameters _index_ and _type_ can be confusing if you are new to Elastic, if you have used a common relational database before, they can be compared to the _database_ and _table_ concepts. Also see [the FAQ below](elasticsearch.md#faq)
diff --git a/pipeline/outputs/file.md b/pipeline/outputs/file.md
index 5dde1b862..475609aec 100644
--- a/pipeline/outputs/file.md
+++ b/pipeline/outputs/file.md
@@ -12,7 +12,7 @@ The plugin supports the following configuration parameters:
| File | Set file name to store the records. If not set, the file name will be the _tag_ associated with the records. |
| Format | The format of the file content. See also Format section. Default: out\_file. |
| Mkdir | Recursively create output directory if it does not exist. Permissions set to 0755. |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 1 |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. Default: `1`. |
## Format
@@ -111,4 +111,3 @@ In your main configuration file append the following Input & Output sections:
Match *
Path output_dir
```
-
diff --git a/pipeline/outputs/firehose.md b/pipeline/outputs/firehose.md
index e896610c9..d4a8d831a 100644
--- a/pipeline/outputs/firehose.md
+++ b/pipeline/outputs/firehose.md
@@ -28,6 +28,7 @@ See [here](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b
| auto\_retry\_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to `true`. |
| external\_id | Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID. |
| profile | AWS profile name to use. Defaults to `default`. |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. Default: `1`. |
## Getting Started
@@ -132,4 +133,3 @@ aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
```
For more see [the AWS for Fluent Bit github repo](https://github.com/aws/aws-for-fluent-bit#public-images).
-
diff --git a/pipeline/outputs/flowcounter.md b/pipeline/outputs/flowcounter.md
index 69bc75ebd..a6b12e462 100644
--- a/pipeline/outputs/flowcounter.md
+++ b/pipeline/outputs/flowcounter.md
@@ -9,6 +9,7 @@ The plugin supports the following configuration parameters:
| Key | Description | Default |
| :--- | :--- | :--- |
| Unit | The unit of duration. \(second/minute/hour/day\) | minute |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
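+
+As a sketch only (the input plugin is chosen arbitrarily), the `Unit` parameter can be changed to report totals per hour instead of the default per minute:
+
+```text
+[INPUT]
+    Name   cpu
+
+[OUTPUT]
+    Name   flowcounter
+    Match  *
+    # Report counts and bytes per hour instead of per minute
+    Unit   hour
+```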
## Getting Started
@@ -42,7 +43,7 @@ In your main configuration file append the following Input & Output sections:
Once Fluent Bit is running, you will see the reports in the output interface similar to this:
```bash
-$ fluent-bit -i cpu -o flowcounter
+$ fluent-bit -i cpu -o flowcounter
Fluent Bit v1.x.x
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
@@ -52,4 +53,3 @@ Fluent Bit v1.x.x
[2016/12/23 11:01:20] [ info] [engine] started
[out_flowcounter] cpu.0:[1482458540, {"counts":60, "bytes":7560, "counts/minute":1, "bytes/minute":126 }]
```
-
diff --git a/pipeline/outputs/forward.md b/pipeline/outputs/forward.md
index 1c9a8feff..df861c52a 100644
--- a/pipeline/outputs/forward.md
+++ b/pipeline/outputs/forward.md
@@ -23,7 +23,7 @@ The following parameters are mandatory for either Forward for Secure Forward mod
| Send_options | Always send options (with "size"=count of messages) | False |
| Require_ack_response | Send "chunk"-option and wait for "ack" response from server. Enables at-least-once and receiving server can control rate of traffic. (Requires Fluentd v0.14.0+ server) | False |
| Compress | Set to 'gzip' to enable gzip compression. Incompatible with `Time_as_Integer=True` and tags set dynamically using the [Rewrite Tag](../filters/rewrite-tag.md) filter. Requires Fluentd server v0.14.7 or later. | _none_ |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
## Secure Forward Mode Configuration Parameters
diff --git a/pipeline/outputs/gelf.md b/pipeline/outputs/gelf.md
index d5a4c848e..0aad41bff 100644
--- a/pipeline/outputs/gelf.md
+++ b/pipeline/outputs/gelf.md
@@ -22,6 +22,7 @@ According to [GELF Payload Specification](https://go2docs.graylog.org/5-0/gettin
| Gelf_Level_Key | Key to be used as the log level. Its value must be in [standard syslog levels](https://en.wikipedia.org/wiki/Syslog#Severity_level) (between 0 and 7). (_Optional in GELF_) | level |
| Packet_Size | If transport protocol is `udp`, you can set the size of packets to be sent. | 1420 |
| Compress | If transport protocol is `udp`, you can set this if you want your UDP packets to be compressed. | true |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### TLS / SSL
diff --git a/pipeline/outputs/http.md b/pipeline/outputs/http.md
index 59ed7b5f5..bbbdd8e79 100644
--- a/pipeline/outputs/http.md
+++ b/pipeline/outputs/http.md
@@ -33,7 +33,7 @@ The **http** output plugin allows to flush your records into a HTTP endpoint. Fo
| gelf\_level\_key | Specify the key to use for the `level` in _gelf_ format | |
| body\_key | Specify the key to use as the body of the request (must prefix with "$"). The key must contain either a binary or raw string, and the content type can be specified using headers\_key (which must be passed whenever body\_key is present). When this option is present, each msgpack record will create a separate request. | |
| headers\_key | Specify the key to use as the headers of the request (must prefix with "$"). The key must contain a map, which will have the contents merged on the request headers. This can be used for many purposes, such as specifying the content-type of the data contained in body\_key. | |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
### TLS / SSL
diff --git a/pipeline/outputs/influxdb.md b/pipeline/outputs/influxdb.md
index 53a8fe41b..2b59703f4 100644
--- a/pipeline/outputs/influxdb.md
+++ b/pipeline/outputs/influxdb.md
@@ -19,6 +19,7 @@ The **influxdb** output plugin, allows to flush your records into a [InfluxDB](h
| Tag\_Keys | Space separated list of keys that needs to be tagged | |
| Auto\_Tags | Automatically tag keys where value is _string_. This option takes a boolean value: True/False, On/Off. | Off |
| Uri | Custom URI endpoint | |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### TLS / SSL
@@ -207,4 +208,3 @@ key value
method "MATCH"
method "POST"
```
-
diff --git a/pipeline/outputs/kafka-rest-proxy.md b/pipeline/outputs/kafka-rest-proxy.md
index 399d57108..b03d49e9d 100644
--- a/pipeline/outputs/kafka-rest-proxy.md
+++ b/pipeline/outputs/kafka-rest-proxy.md
@@ -15,6 +15,7 @@ The **kafka-rest** output plugin, allows to flush your records into a [Kafka RES
| Time\_Key\_Format | Defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
| Include\_Tag\_Key | Append the Tag name to the final record. | Off |
| Tag\_Key | If Include\_Tag\_Key is enabled, this property defines the key name for the tag. | \_flb-key |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### TLS / SSL
@@ -49,4 +50,3 @@ In your main configuration file append the following _Input_ & _Output_ sections
Topic fluent-bit
Message_Key my_key
```
-
diff --git a/pipeline/outputs/kafka.md b/pipeline/outputs/kafka.md
index 77725215d..4599b62da 100644
--- a/pipeline/outputs/kafka.md
+++ b/pipeline/outputs/kafka.md
@@ -18,7 +18,7 @@ Kafka output plugin allows to ingest your records into an [Apache Kafka](https:/
| queue\_full\_retries | Fluent Bit queues data into rdkafka library, if for some reason the underlying library cannot flush the records the queue might fills up blocking new addition of records. The `queue_full_retries` option set the number of local retries to enqueue the data. The default value is 10 times, the interval between each retry is 1 second. Setting the `queue_full_retries` value to `0` set's an unlimited number of retries. | 10 |
| rdkafka.{property} | `{property}` can be any [librdkafka properties](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md) | |
| raw\_log\_key | When using the raw format and set, the value of raw\_log\_key in the record will be send to kafka as the payload. | |
-| workers | This setting improves the throughput and performance of data forwarding by enabling concurrent data processing and transmission to the kafka output broker destination. | 0 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
> Setting `rdkafka.log.connection.close` to `false` and `rdkafka.request.required.acks` to 1 are examples of recommended settings of librdfkafka properties.
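+
+For illustration, a minimal sketch (the broker address and topic are placeholders) that applies the recommended `rdkafka.*` settings from the note above:
+
+```text
+[OUTPUT]
+    Name                           kafka
+    Match                          *
+    # Placeholder broker address and topic
+    Brokers                        kafka-broker:9092
+    Topics                         my-topic
+    # Recommended librdkafka settings mentioned above
+    rdkafka.log.connection.close   false
+    rdkafka.request.required.acks  1
+```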
diff --git a/pipeline/outputs/kinesis.md b/pipeline/outputs/kinesis.md
index b21766678..14c8d0aa7 100644
--- a/pipeline/outputs/kinesis.md
+++ b/pipeline/outputs/kinesis.md
@@ -29,6 +29,7 @@ See [here](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b
| auto\_retry\_requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to `true`. |
| external\_id | Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID. |
| profile | AWS profile name to use. Defaults to `default`. |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. Default: `1`. |
## Getting Started
@@ -71,23 +72,6 @@ The following AWS IAM permissions are required to use this plugin:
}
```
-### Worker support
-
-Fluent Bit 1.7 adds a new feature called `workers` which enables outputs to have dedicated threads. This `kinesis_streams` plugin fully supports workers.
-
-Example:
-
-```text
-[OUTPUT]
- Name kinesis_streams
- Match *
- region us-east-1
- stream my-stream
- workers 2
-```
-
-If you enable a single worker, you are enabling a dedicated thread for your Kinesis output. We recommend starting with without workers, evaluating the performance, and then adding workers one at a time until you reach your desired/needed throughput. For most users, no workers or a single worker will be sufficient.
-
### AWS for Fluent Bit
Amazon distributes a container image with Fluent Bit and these plugins.
@@ -133,4 +117,3 @@ aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
```
For more see [the AWS for Fluent Bit github repo](https://github.com/aws/aws-for-fluent-bit#public-images).
-
diff --git a/pipeline/outputs/logdna.md b/pipeline/outputs/logdna.md
index 3416dff2a..96026d7c7 100644
--- a/pipeline/outputs/logdna.md
+++ b/pipeline/outputs/logdna.md
@@ -78,6 +78,11 @@ Before to get started with the plugin configuration, make sure to obtain the pro
if not found, the default value is used.
Fluent Bit |
+    <tr>
+      <td>workers</td>
+      <td>The number of <a href="../../administration/multithreading.md#outputs">workers</a> to perform flush operations for this output.</td>
+      <td><code>0</code></td>
+    </tr>
@@ -150,4 +155,3 @@ Your record will be available and visible in your LogDNA dashboard after a few s
In your LogDNA dashboard, go to the top filters and mark the Tags `aa` and `bb`, then you will be able to see your records as the example below:
![](../../.gitbook/assets/logdna.png)
-
diff --git a/pipeline/outputs/loki.md b/pipeline/outputs/loki.md
index cadb70b6e..646f480ae 100644
--- a/pipeline/outputs/loki.md
+++ b/pipeline/outputs/loki.md
@@ -31,6 +31,7 @@ Be aware there is a separate Golang output plugin provided by [Grafana](https://
| auto\_kubernetes\_labels | If set to true, it will add all Kubernetes labels to the Stream labels | off |
| tenant\_id\_key | Specify the name of the key from the original record that contains the Tenant ID. The value of the key is set as `X-Scope-OrgID` of HTTP header. It is useful to set Tenant ID dynamically. ||
| compress | Set payload compression mechanism. The only available option is gzip. Default = "", which means no compression. ||
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
## Labels
@@ -284,4 +285,3 @@ Fluent Bit v1.7.0
[2020/10/14 20:57:46] [debug] [http] request payload (272 bytes)
[2020/10/14 20:57:46] [ info] [output:loki:loki.0] 127.0.0.1:3100, HTTP status=204
```
-
diff --git a/pipeline/outputs/nats.md b/pipeline/outputs/nats.md
index c2586e45a..10d17a004 100644
--- a/pipeline/outputs/nats.md
+++ b/pipeline/outputs/nats.md
@@ -2,12 +2,13 @@
The **nats** output plugin, allows to flush your records into a [NATS Server](https://docs.nats.io/nats-concepts/intro) end point. The following instructions assumes that you have a fully operational NATS Server in place.
-In order to flush records, the **nats** plugin requires to know two parameters:
+## Configuration parameters
| parameter | description | default |
| :--- | :--- | :--- |
| host | IP address or hostname of the NATS Server | 127.0.0.1 |
| port | TCP port of the target NATS Server | 4222 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
In order to override the default configuration values, the plugin uses the optional Fluent Bit network address format, e.g:
@@ -64,4 +65,3 @@ Each record is an individual entity represented in a JSON array that contains a
[1457108506,{"tag":"fluentbit","cpu_p":6.500000,"user_p":4.500000,"system_p":2}]
]
```
-
diff --git a/pipeline/outputs/new-relic.md b/pipeline/outputs/new-relic.md
index 29219f6c8..074acce00 100644
--- a/pipeline/outputs/new-relic.md
+++ b/pipeline/outputs/new-relic.md
@@ -72,6 +72,7 @@ Before to get started with the plugin configuration, make sure to obtain the pro
| compress | Set the compression mechanism for the payload. This option allows two values: `gzip` \(enabled by default\) or `false` to disable compression. | gzip |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
| :--- | :--- | :--- |
@@ -117,4 +118,3 @@ Fluent Bit v1.5.0
[2020/04/10 10:58:35] [ info] [output:nrlogs:nrlogs.0] log-api.newrelic.com:443, HTTP status=202
{"requestId":"feb312fe-004e-b000-0000-0171650764ac"}
```
-
diff --git a/pipeline/outputs/observe.md b/pipeline/outputs/observe.md
index 2e722422e..47be2503f 100644
--- a/pipeline/outputs/observe.md
+++ b/pipeline/outputs/observe.md
@@ -2,7 +2,7 @@
Observe employs the **http** output plugin, allowing you to flush your records [into Observe](https://docs.observeinc.com/en/latest/content/data-ingestion/forwarders/fluentbit.html).
-For now the functionality is pretty basic and it issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
+For now the functionality is pretty basic and it issues a POST request with the data records in [MessagePack](http://msgpack.org) (or JSON) format.
The following are the specific HTTP parameters to employ:
@@ -19,6 +19,7 @@ The following are the specific HTTP parameters to employ:
| header | The specific header to instructs Observe how to decode incoming payloads | X-Observe-Decoder fluent |
| compress | Set payload compression mechanism. Option available is 'gzip' | gzip |
| tls.ca_file | **For use with Windows**: provide path to root cert | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### Configuration File
@@ -41,5 +42,5 @@ In your main configuration file, append the following _Input_ & _Output_ section
# For Windows: provide path to root cert
#tls.ca_file C:\fluent-bit\isrgrootx1.pem
-
+
```
diff --git a/pipeline/outputs/oci-logging-analytics.md b/pipeline/outputs/oci-logging-analytics.md
index 36475c870..4f8246ceb 100644
--- a/pipeline/outputs/oci-logging-analytics.md
+++ b/pipeline/outputs/oci-logging-analytics.md
@@ -20,6 +20,7 @@ Following are the top level configuration properties of the plugin:
| profile_name | OCI Config Profile Name to be used from the configuration file | DEFAULT |
| namespace | OCI Tenancy Namespace in which the collected log data is to be uploaded | |
| proxy | define proxy if required, in http://host:port format, supports only http protocol | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
The following parameters are to set the Logging Analytics resources that must be used to process your logs by OCI Logging Analytics.
@@ -28,7 +29,7 @@ The following parameters are to set the Logging Analytics resources that must be
| oci_config_in_record | If set to true, the following oci_la_* will be read from the record itself instead of the output plugin configuration. | false |
| oci_la_log_group_id | The OCID of the Logging Analytics Log Group where the logs must be stored. This is a mandatory parameter | |
| oci_la_log_source_name | The Logging Analytics Source that must be used to process the log records. This is a mandatory parameter | |
-| oci_la_entity_id | The OCID of the Logging Analytics Entity | |
+| oci_la_entity_id | The OCID of the Logging Analytics Entity | |
| oci_la_entity_type | The entity type of the Logging Analytics Entity | |
| oci_la_log_path | Specify the original location of the log files | |
| oci_la_global_metadata | Use this parameter to specify additional global metadata along with original log content to Logging Analytics. The format is 'key_name value'. This option can be set multiple times | |
@@ -191,4 +192,4 @@ With oci_config_in_record option set to true, the metadata key-value pairs will
tls.verify Off
```
-The above configuration first injects the necessary metadata keys and values in the record directly, with a prefix olgm. attached to the keys in order to segregate the metadata keys from rest of the record keys. Then, using a nest filter only the metadata keys are selected by the filter and nested under oci_la_global_metadata key in the record, and the prefix olgm. is removed from the metadata keys.
\ No newline at end of file
+The above configuration first injects the necessary metadata keys and values into the record directly, with a prefix olgm. attached to the keys in order to segregate the metadata keys from the rest of the record keys. Then, using a nest filter, only the metadata keys are selected and nested under the oci_la_global_metadata key in the record, and the prefix olgm. is removed from the metadata keys.
diff --git a/pipeline/outputs/opensearch.md b/pipeline/outputs/opensearch.md
index e238486e0..0b0142d3d 100644
--- a/pipeline/outputs/opensearch.md
+++ b/pipeline/outputs/opensearch.md
@@ -45,7 +45,7 @@ The following instructions assumes that you have a fully operational OpenSearch
| Trace\_Error | When enabled print the OpenSearch API calls to stdout when OpenSearch returns an error \(for diag only\) | Off |
| Current\_Time\_Index | Use current time for index generation instead of message record | Off |
| Suppress\_Type\_Name | When enabled, mapping types is removed and `Type` option is ignored. | Off |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
| Compress | Set payload compression mechanism. The only available option is `gzip`. Default = "", which means no compression. | |
> The parameters _index_ and _type_ can be confusing if you are new to OpenSearch, if you have used a common relational database before, they can be compared to the _database_ and _table_ concepts. Also see [the FAQ below](opensearch.md#faq)
@@ -199,7 +199,7 @@ With data access permissions, IAM policies are not needed to access the collecti
### Issues with the OpenSearch cluster
-Occasionally the Fluent Bit service may generate errors without any additional detail in the logs to explain the source of the issue, even with the service's log_level attribute set to [Debug](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file).
+Occasionally the Fluent Bit service may generate errors without any additional detail in the logs to explain the source of the issue, even with the service's log_level attribute set to [Debug](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file).
For example, in this scenario the logs show that a connection was successfully established with the OpenSearch domain, and yet an error is still returned:
```
@@ -218,9 +218,9 @@ This behavior could be indicative of a hard-to-detect issue with index shard usa
While OpenSearch index shards and disk space are related, they are not directly tied to one another.
-OpenSearch domains are limited to 1000 index shards per data node, regardless of the size of the nodes. And, importantly, shard usage is not proportional to disk usage: an individual index shard can hold anywhere from a few kilobytes to dozens of gigabytes of data.
+OpenSearch domains are limited to 1000 index shards per data node, regardless of the size of the nodes. And, importantly, shard usage is not proportional to disk usage: an individual index shard can hold anywhere from a few kilobytes to dozens of gigabytes of data.
-In other words, depending on the way index creation and shard allocation are configured in the OpenSearch domain, all of the available index shards could be used long before the data nodes run out of disk space and begin exhibiting disk-related performance issues (e.g. nodes crashing, data corruption, or the dashboard going offline).
+In other words, depending on the way index creation and shard allocation are configured in the OpenSearch domain, all of the available index shards could be used long before the data nodes run out of disk space and begin exhibiting disk-related performance issues (e.g. nodes crashing, data corruption, or the dashboard going offline).
The primary issue that arises when a domain is out of available index shards is that new indexes can no longer be created (though logs can still be added to existing indexes).
@@ -231,7 +231,7 @@ When that happens, the Fluent Bit OpenSearch output may begin showing confusing
If any of those symptoms are present, consider using the OpenSearch domain's API endpoints to troubleshoot possible shard issues.
-Running this command will show both the shard count and disk usage on all of the nodes in the domain.
+Running this command will show both the shard count and disk usage on all of the nodes in the domain.
```
GET _cat/allocation?v
```
diff --git a/pipeline/outputs/opentelemetry.md b/pipeline/outputs/opentelemetry.md
index a70d84396..c41fe95dd 100644
--- a/pipeline/outputs/opentelemetry.md
+++ b/pipeline/outputs/opentelemetry.md
@@ -35,6 +35,7 @@ Important Note: At the moment only HTTP endpoints are supported.
| logs_span_id_metadata_key |Specify a SpanId key to look up in the metadata.| $SpanId |
| logs_trace_id_metadata_key |Specify a TraceId key to look up in the metadata.| $TraceId |
| logs_attributes_metadata_key |Specify an Attributes key to look up in the metadata.| $Attributes |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
## Getting Started
diff --git a/pipeline/outputs/postgresql.md b/pipeline/outputs/postgresql.md
index 6bb581ed8..16eac7ffc 100644
--- a/pipeline/outputs/postgresql.md
+++ b/pipeline/outputs/postgresql.md
@@ -62,6 +62,7 @@ Make sure that the `fluentbit` user can connect to the `fluentbit` database on t
| `min_pool_size` | Minimum number of connection in async mode | 1 |
| `max_pool_size` | Maximum amount of connections in async mode | 4 |
| `cockroachdb` | Set to `true` if you will connect the plugin with a CockroachDB | false |
+| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### Libpq
@@ -129,4 +130,3 @@ Here follows a list of useful resources from the PostgreSQL documentation:
* [libpq - Environment variables](https://www.postgresql.org/docs/current/libpq-envars.html)
* [libpq - password file](https://www.postgresql.org/docs/current/libpq-pgpass.html)
* [Trigger functions](https://www.postgresql.org/docs/current/plpgsql-trigger.html)
-
diff --git a/pipeline/outputs/prometheus-exporter.md b/pipeline/outputs/prometheus-exporter.md
index 7db7c6d2d..feac59d76 100644
--- a/pipeline/outputs/prometheus-exporter.md
+++ b/pipeline/outputs/prometheus-exporter.md
@@ -4,7 +4,7 @@ description: An output plugin to expose Prometheus Metrics
# Prometheus Exporter
-The prometheus exporter allows you to take metrics from Fluent Bit and expose them such that a Prometheus instance can scrape them.
+The prometheus exporter allows you to take metrics from Fluent Bit and expose them such that a Prometheus instance can scrape them.
Important Note: The prometheus exporter only works with metric plugins, such as Node Exporter Metrics
@@ -13,6 +13,7 @@ Important Note: The prometheus exporter only works with metric plugins, such as
| host | This is address Fluent Bit will bind to when hosting prometheus metrics. Note: `listen` parameter is deprecated from v1.9.0. | 0.0.0.0 |
| port | This is the port Fluent Bit will bind to when hosting prometheus metrics | 2021 |
| add\_label | This allows you to add custom labels to all metrics exposed through the prometheus exporter. You may have multiple of these fields | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
## Getting Started
diff --git a/pipeline/outputs/prometheus-remote-write.md b/pipeline/outputs/prometheus-remote-write.md
index ae7c1007b..b866f7193 100644
--- a/pipeline/outputs/prometheus-remote-write.md
+++ b/pipeline/outputs/prometheus-remote-write.md
@@ -25,7 +25,7 @@ Important Note: The prometheus exporter only works with metric plugins, such as
| header | Add a HTTP header key/value pair. Multiple headers can be set. | |
| log_response_payload | Log the response payload within the Fluent Bit log | false |
| add_label | This allows you to add custom labels to all metrics exposed through the prometheus exporter. You may have multiple of these fields | |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
## Getting Started
@@ -93,7 +93,7 @@ With Logz.io [hosted prometheus](https://logz.io/solutions/infrastructure-monito
[OUTPUT]
name prometheus_remote_write
host listener.logz.io
- port 8053
+ port 8053
match *
header Authorization Bearer
tls on
@@ -109,7 +109,7 @@ With [Coralogix Metrics](https://coralogix.com/platform/metrics/) you may need t
[OUTPUT]
name prometheus_remote_write
host metrics-api.coralogix.com
- uri prometheus/api/v1/write?appLabelName=path&subSystemLabelName=path&severityLabelName=severity
+ uri prometheus/api/v1/write?appLabelName=path&subSystemLabelName=path&severityLabelName=severity
match *
port 443
tls on
diff --git a/pipeline/outputs/s3.md b/pipeline/outputs/s3.md
index 5f2df5f38..a51752c7e 100644
--- a/pipeline/outputs/s3.md
+++ b/pipeline/outputs/s3.md
@@ -49,6 +49,7 @@ See [here](https://github.com/fluent/fluent-bit-docs/tree/43c4fe134611da471e706b
| storage\_class | Specify the [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API\_PutObject.html#AmazonS3-PutObject-request-header-StorageClass) for S3 objects. If this option is not specified, objects will be stored with the default 'STANDARD' storage class. | None |
| retry\_limit | Integer value to set the maximum number of retries allowed. Note: this configuration is released since version 1.9.10 and 2.0.1. For previous version, the number of retries is 5 and is not configurable. | 1 |
| external\_id | Specify an external ID for the STS API, can be used with the role\_arn parameter if your role requires an external ID. | None |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
## TLS / SSL
@@ -209,17 +210,17 @@ The following settings are recommended for this use case:
## S3 Multipart Uploads
-With `use_put_object Off` (default), S3 will attempt to send files using multipart uploads. For each file, S3 first calls [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html), then a series of calls to [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) for each fragment (targeted to be `upload_chunk_size` bytes), and finally [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) to create the final file in S3.
+With `use_put_object Off` (default), S3 will attempt to send files using multipart uploads. For each file, S3 first calls [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html), then a series of calls to [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) for each fragment (targeted to be `upload_chunk_size` bytes), and finally [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) to create the final file in S3.
### Fallback to PutObject
-S3 [requires](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html) each [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) fragment to be at least 5,242,880 bytes, otherwise the upload is rejected.
+S3 [requires](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html) each [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) fragment to be at least 5,242,880 bytes, otherwise the upload is rejected.
-Consequently, the S3 output must sometimes fallback to the [PutObject API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html).
+Consequently, the S3 output must sometimes fall back to the [PutObject API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html).
Uploads are triggered by three settings:
-1. `total_file_size` and `upload_chunk_size`: When S3 has buffered data in the `store_dir` that meets the desired `total_file_size` (for `use_put_object On`) or the `upload_chunk_size` (for Multipart), it will trigger an upload operation.
-2. `upload_timeout`: Whenever locally buffered data has been present on the filesystem in the `store_dir` longer than the configured `upload_timeout`, it will be sent. This happens regardless of whether or not the desired byte size has been reached. Consequently, if you configure a small `upload_timeout`, your files may be smaller than the `total_file_size`. The timeout is evaluated against the time at which S3 started buffering data for each unqiue tag (that is, the time when new data was buffered for the unique tag after the last upload). The timeout is also evaluated against the [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) time, so a multipart upload will be completed after `upload_timeout` has elapsed, even if the desired size has not yet been reached.
+1. `total_file_size` and `upload_chunk_size`: When S3 has buffered data in the `store_dir` that meets the desired `total_file_size` (for `use_put_object On`) or the `upload_chunk_size` (for Multipart), it will trigger an upload operation.
+2. `upload_timeout`: Whenever locally buffered data has been present on the filesystem in the `store_dir` longer than the configured `upload_timeout`, it will be sent. This happens regardless of whether or not the desired byte size has been reached. Consequently, if you configure a small `upload_timeout`, your files may be smaller than the `total_file_size`. The timeout is evaluated against the time at which S3 started buffering data for each unique tag (that is, the time when new data was buffered for the unique tag after the last upload). The timeout is also evaluated against the [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) time, so a multipart upload will be completed after `upload_timeout` has elapsed, even if the desired size has not yet been reached.
If your `upload_timeout` triggers an upload before the pending buffered data reaches the `upload_chunk_size`, it may be too small for a multipart upload. S3 will consequently fallback to use the [PutObject API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html).
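+
+As a sketch only (the bucket, region, and sizes are placeholders to tune for your workload), the three settings that control when uploads are triggered might be set like this:
+
+```text
+[OUTPUT]
+    Name               s3
+    Match              *
+    # Placeholder bucket and region
+    bucket             your-bucket
+    region             us-east-1
+    # Upload triggers: files complete at ~50 MiB, multipart parts are sent at ~10 MiB,
+    # or data is flushed after 10 minutes of buffering, whichever comes first
+    total_file_size    50M
+    upload_chunk_size  10M
+    upload_timeout     10m
+```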
@@ -232,37 +233,17 @@ When you enable compression, S3 applies the compression algorithm at send time.
If you encounter this frequently, use the numbers in the messages to guess your compression factor. For example, in this case, the buffered data was reduced from 5,630,650 bytes to 1,063,320 bytes. The compressed size is 1/5 the actual data size, so configuring `upload_chunk_size 30M` should ensure each part is large enough after compression to be over the min required part size of 5,242,880 bytes.
-The S3 API allows the last part in an upload to be less than the 5,242,880 byte minimum. Therefore, if a part is too small for an existing upload, the S3 output will upload that part and then complete the upload.
+The S3 API allows the last part in an upload to be less than the 5,242,880 byte minimum. Therefore, if a part is too small for an existing upload, the S3 output will upload that part and then complete the upload.
### upload_timeout constrains total multipart upload time for a single file
-The `upload_timeout` is evaluated against the [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) time. So a multipart upload will be completed after `upload_timeout` has elapsed, even if the desired size has not yet been reached.
+The `upload_timeout` is evaluated against the [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) time. So a multipart upload will be completed after `upload_timeout` has elapsed, even if the desired size has not yet been reached.
### Completing uploads
-When [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) is called, an `UploadID` is returned. S3 stores these IDs for active uploads in the `store_dir`. Until [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) is called, the uploaded data will not be visible in S3.
+When [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) is called, an `UploadID` is returned. S3 stores these IDs for active uploads in the `store_dir`. Until [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) is called, the uploaded data will not be visible in S3.
-On shutdown, S3 output will attempt to complete all pending uploads. If it fails to complete an upload, the ID will remain buffered in the `store_dir` in a directory called `multipart_upload_metadata`. If you restart the S3 output with the same `store_dir` it will discover the old UploadIDs and complete the pending uploads. The [S3 documentation](https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/) also has suggestions on discovering and deleting/completing dangling uploads in your buckets.
-
-## Worker support
-
-Fluent Bit 1.7 adds a new feature called `workers` which enables outputs to have dedicated threads. This `s3` plugin has partial support for workers. **The plugin can only support a single worker; enabling multiple workers will lead to errors/indeterminate behavior.**
-
-Example:
-
-```
-[OUTPUT]
- Name s3
- Match *
- bucket your-bucket
- region us-east-1
- total_file_size 1M
- upload_timeout 1m
- use_put_object On
- workers 1
-```
-
-If you enable a single worker, you are enabling a dedicated thread for your S3 output. We recommend starting without workers, evaluating the performance, and then enabling a worker if needed. For most users, the plugin can provide sufficient throughput without workers.
+On shutdown, S3 output will attempt to complete all pending uploads. If it fails to complete an upload, the ID will remain buffered in the `store_dir` in a directory called `multipart_upload_metadata`. If you restart the S3 output with the same `store_dir` it will discover the old UploadIDs and complete the pending uploads. The [S3 documentation](https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/) also has suggestions on discovering and deleting/completing dangling uploads in your buckets.
## Usage with MinIO
diff --git a/pipeline/outputs/skywalking.md b/pipeline/outputs/skywalking.md
index 9919567a5..1d6206bf1 100644
--- a/pipeline/outputs/skywalking.md
+++ b/pipeline/outputs/skywalking.md
@@ -11,6 +11,7 @@ The **Apache SkyWalking** output plugin, allows to flush your records to a [Apac
| auth_token | Authentication token if needed for Apache SkyWalking OAP | None |
| svc_name | Service name that fluent-bit belongs to | sw-service |
| svc_inst_name | Service instance name of fluent-bit | fluent-bit |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### TLS / SSL
@@ -57,6 +58,6 @@ This message is packed into the following protocol format and written to the OAP
"json": {
"json": "{\"log\": \"This is the original log message\"}"
}
- }
+ }
}]
```
diff --git a/pipeline/outputs/slack.md b/pipeline/outputs/slack.md
index 0ef7d9d9d..5cbee7f03 100644
--- a/pipeline/outputs/slack.md
+++ b/pipeline/outputs/slack.md
@@ -17,6 +17,7 @@ Once you have obtained the Webhook address you can place it in the configuration
| Key | Description | Default |
| :--- | :--- | :--- |
| webhook | Absolute address of the Webhook provided by Slack | |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### Configuration File
@@ -28,4 +29,3 @@ Get started quickly with this configuration file:
match *
webhook https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
```
-
diff --git a/pipeline/outputs/splunk.md b/pipeline/outputs/splunk.md
index f038909fc..545e85d4c 100644
--- a/pipeline/outputs/splunk.md
+++ b/pipeline/outputs/splunk.md
@@ -23,7 +23,7 @@ Connectivity, transport and authentication configuration properties:
| compress | Set payload compression mechanism. The only available option is `gzip`. | |
| channel | Specify X-Splunk-Request-Channel Header for the HTTP Event Collector interface. | |
| http_debug_bad_request | If the HTTP server response code is 400 (bad request) and this flag is enabled, it will print the full HTTP request and response to the stdout interface. This feature is available for debugging purposes. | |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
Content and Splunk metadata \(fields\) handling configuration properties:
@@ -168,9 +168,9 @@ The following configuration gathers CPU metrics, nests the appropriate field, ad
name cpu
tag cpu
-# Move CPU metrics to be nested under "fields" and
+# Move CPU metrics to be nested under "fields" and
# add the prefix "metric_name:" to all metrics
-# NOTE: you can change Wildcard field to only select metric fields
+# NOTE: you can change Wildcard field to only select metric fields
[FILTER]
Name nest
Match cpu
@@ -183,18 +183,18 @@ The following configuration gathers CPU metrics, nests the appropriate field, ad
[FILTER]
Name modify
Match cpu
- Set index cpu-metrics
+ Set index cpu-metrics
Set source fluent-bit
Set sourcetype custom
# ensure splunk_send_raw is on
[OUTPUT]
- name splunk
+ name splunk
match *
host
port 8088
splunk_send_raw on
- splunk_token f9bd5bdb-c0b2-4a83-bcff-9625e5e908db
+ splunk_token f9bd5bdb-c0b2-4a83-bcff-9625e5e908db
tls on
tls.verify off
```
diff --git a/pipeline/outputs/stackdriver.md b/pipeline/outputs/stackdriver.md
index 759e629d8..54fe89a38 100644
--- a/pipeline/outputs/stackdriver.md
+++ b/pipeline/outputs/stackdriver.md
@@ -32,7 +32,7 @@ Before to get started with the plugin configuration, make sure to obtain the pro
| severity\_key | Specify the name of the key from the original record that contains the severity information. | `logging.googleapis.com/severity`. See [Stackdriver Special Fields][StackdriverSpecialFields] for more info. |
| project_id_key | The value of this field is used by the Stackdriver output plugin to find the gcp project id from jsonPayload and then extract the value of it to set the PROJECT_ID within LogEntry logName, which controls the gcp project that should receive these logs. | `logging.googleapis.com/projectId`. See [Stackdriver Special Fields][StackdriverSpecialFields] for more info. |
| autoformat\_stackdriver\_trace | Rewrite the _trace_ field to include the projectID and format it for use with Cloud Trace. When this flag is enabled, the user can get the correct result by printing only the traceID (usually 32 characters). | false |
-| Workers | Enables dedicated thread(s) for this output. | 1 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
| custom\_k8s\_regex | Set a custom regex to extract field like pod\_name, namespace\_name, container\_name and docker\_id from the local\_resource\_id in logs. This is helpful if the value of pod or node name contains dots. | `(?[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?[^_]+)_(?.+)-(?[a-z0-9]{64})\.log$` |
| resource_labels | An optional list of comma separated strings specifying resource labels plaintext assignments (`new=value`) and/or mappings from an original field in the log entry to a destination field (`destination=$original`). Nested fields and environment variables are also supported using the [record accessor syntax](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/record-accessor). If configured, *all* resource labels will be assigned using this API only, with the exception of `project_id`. See [Resource Labels](#resource-labels) for more details. | |
| compress | Set payload compression mechanism. The only available option is `gzip`. Default = "", which means no compression.| |
diff --git a/pipeline/outputs/standard-output.md b/pipeline/outputs/standard-output.md
index 44ddf0580..69e3e44f2 100644
--- a/pipeline/outputs/standard-output.md
+++ b/pipeline/outputs/standard-output.md
@@ -9,7 +9,7 @@ The **stdout** output plugin allows to print to the standard output the data rec
| Format | Specify the data format to be printed. Supported formats are _msgpack_, _json_, _json\_lines_ and _json\_stream_. | msgpack |
| json\_date\_key | Specify the name of the time key in the output record. To disable the time key just set the value to `false`. | date |
| json\_date\_format | Specify the format of the date. Supported formats are _double_, _epoch_, _iso8601_ (eg: _2018-05-30T09:39:52.000681Z_) and _java_sql_timestamp_ (eg: _2018-05-30 09:39:52.000681_) | double |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 1 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
### Command Line
@@ -35,4 +35,3 @@ Fluent Bit v1.x.x
```
No more, no less, it just works.
-
diff --git a/pipeline/outputs/syslog.md b/pipeline/outputs/syslog.md
index 8c5b56c91..9c6f17e23 100644
--- a/pipeline/outputs/syslog.md
+++ b/pipeline/outputs/syslog.md
@@ -31,6 +31,7 @@ You must be aware of the structure of your original record so you can configure
| syslog\_sd\_key | The key name from the original record that contains a map of key/value pairs to use as Structured Data \(SD\) content. The key name is included in the resulting SD field as shown in examples below. This configuration is optional. | |
| syslog\_message\_key | The key name from the original record that contains the message to deliver. Note that this property is **mandatory**, otherwise the message will be empty. | |
| allow\_longer\_sd\_id| If true, Fluent-bit allows SD-ID that is longer than 32 characters. Such long SD-ID violates RFC 5424.| false |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
### TLS / SSL
@@ -123,7 +124,7 @@ Example configuration file:
syslog_hostname_key hostname
syslog_appname_key appname
syslog_procid_key procid
- syslog_msgid_key msgid
+ syslog_msgid_key msgid
syslog_sd_key uls@0
syslog_message_key log
```
@@ -156,19 +157,19 @@ Example output:
### Adding Structured Data Authentication Token
-Some services use the structured data field to pass authentication tokens (e.g. `[@41018]`), which would need to be added to each log message dynamically.
-However, this requires setting the token as a key rather than as a value.
+Some services use the structured data field to pass authentication tokens (e.g. `[@41018]`), which would need to be added to each log message dynamically.
+However, this requires setting the token as a key rather than as a value.
Here's an example of how that might be achieved, using `AUTH_TOKEN` as a [variable](../../administration/configuring-fluent-bit/classic-mode/variables.md):
{% tabs %}
{% tab title="fluent-bit.conf" %}
```text
-[FILTER]
+[FILTER]
name lua
match *
call append_token
code function append_token(tag, timestamp, record) record["${AUTH_TOKEN}"] = {} return 2, timestamp, record end
-
+
[OUTPUT]
name syslog
match *
@@ -213,4 +214,4 @@ Here's an example of how that might be achieved, using `AUTH_TOKEN` as a [variab
tls.crt_file: /path/to/my.crt
```
{% endtab %}
-{% endtabs %}
\ No newline at end of file
+{% endtabs %}
diff --git a/pipeline/outputs/tcp-and-tls.md b/pipeline/outputs/tcp-and-tls.md
index 545063593..55de1b07c 100644
--- a/pipeline/outputs/tcp-and-tls.md
+++ b/pipeline/outputs/tcp-and-tls.md
@@ -11,7 +11,7 @@ The **tcp** output plugin allows to send records to a remote TCP server. The pay
| Format | Specify the data format to be printed. Supported formats are _msgpack_ _json_, _json\_lines_ and _json\_stream_. | msgpack |
| json\_date\_key | Specify the name of the time key in the output record. To disable the time key just set the value to `false`. | date |
| json\_date\_format | Specify the format of the date. Supported formats are _double_, _epoch_, _iso8601_ (eg: _2018-05-30T09:39:52.000681Z_) and _java_sql_timestamp_ (eg: _2018-05-30 09:39:52.000681_) | double |
-| Workers | Enables dedicated thread(s) for this output. Default value is set since version 1.8.13. For previous versions is 0. | 2 |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
## TLS Configuration Parameters
diff --git a/pipeline/outputs/treasure-data.md b/pipeline/outputs/treasure-data.md
index ff2a070bf..22991f239 100644
--- a/pipeline/outputs/treasure-data.md
+++ b/pipeline/outputs/treasure-data.md
@@ -12,6 +12,7 @@ The plugin supports the following configuration parameters:
| Database | Specify the name of your target database. | |
| Table | Specify the name of your target table where the records will be stored. | |
| Region | Set the service region, available values: US and JP | US |
+| Workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
## Getting Started
@@ -41,4 +42,3 @@ In your main configuration file append the following _Input_ & _Output_ sections
Database fluentbit
Table cpu_samples
```
-
diff --git a/pipeline/outputs/vivo-exporter.md b/pipeline/outputs/vivo-exporter.md
index ba1afd7bb..156ae257a 100644
--- a/pipeline/outputs/vivo-exporter.md
+++ b/pipeline/outputs/vivo-exporter.md
@@ -9,6 +9,8 @@ Vivo Exporter is an output plugin that exposes logs, metrics, and traces through
| `empty_stream_on_read` | If enabled, when an HTTP client consumes the data from a stream, the stream content will be removed. | Off |
| `stream_queue_size` | Specify the maximum queue size per stream. Each specific stream for logs, metrics and traces can hold up to `stream_queue_size` bytes. | 20M |
| `http_cors_allow_origin` | Specify the value for the HTTP Access-Control-Allow-Origin header (CORS). | |
+| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
+
### Getting Started
diff --git a/pipeline/outputs/websocket.md b/pipeline/outputs/websocket.md
index bb96cd674..a5a049df1 100644
--- a/pipeline/outputs/websocket.md
+++ b/pipeline/outputs/websocket.md
@@ -13,6 +13,7 @@ The **websocket** output plugin allows to flush your records into a WebSocket en
| Format | Specify the data format to be used in the HTTP request body, by default it uses _msgpack_. Other supported formats are _json_, _json\_stream_ and _json\_lines_ and _gelf_. | msgpack |
| json\_date\_key | Specify the name of the date field in output | date |
| json\_date\_format | Specify the format of the date. Supported formats are _double_, _epoch_, _iso8601_ (eg: _2018-05-30T09:39:52.000681Z_) and _java_sql_timestamp_ (eg: _2018-05-30 09:39:52.000681_) | double |
+| workers | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
## Getting Started