From a38965366989847c707ed4bf21ee23aa92fdb83f Mon Sep 17 00:00:00 2001
From: hellolilyliuyi <96421222+hellolilyliuyi@users.noreply.github.com>
Date: Thu, 24 Aug 2023 14:18:38 +0800
Subject: [PATCH] Update connector-sink.md

---
 docs/content/connector-sink.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/connector-sink.md b/docs/content/connector-sink.md
index 80b2bce8..6e5bb50f 100644
--- a/docs/content/connector-sink.md
+++ b/docs/content/connector-sink.md
@@ -99,7 +99,7 @@ In your Maven project's `pom.xml` file, add the Flink connector as a dependency
 | sink.properties.row_delimiter | No | \n | The row delimiter for CSV-formatted data. |
 | sink.properties.column_separator | No | \t | The column separator for CSV-formatted data. |
 | sink.properties.max_filter_ratio | No | 0 | The maximum error tolerance of the stream load. It's the maximum percentage of data records that can be filtered out due to inadequate data quality. Valid values: 0 to 1. Default value: 0. See [Stream Load](https://docs.starrocks.io/en-us/latest/sql-reference/sql-statements/data-manipulation/STREAM%20LOAD) for details. |
-| sink.parallelism | No | NONE | The parallelism of the connector. Only available for Flink SQL. If not set, Flink planner will decide the parallelism. |
+| sink.parallelism | No | NONE | The parallelism of the connector. Only available for Flink SQL. If not set, Flink planner will decide the parallelism. When the parallelism is greater than 1, users must ensure that data is written in the correct order. |
 
 ## Data type mapping between Flink and StarRocks
 
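Reviewer note: a minimal Flink SQL sketch of the option this patch documents. The table schema, connection values, and database/table names are hypothetical; the option keys follow the connector docs referenced by the patched file.

```sql
-- Hypothetical sink table; only the connector options matter here.
CREATE TABLE starrocks_sink (
    id   BIGINT,
    name STRING
) WITH (
    'connector' = 'starrocks',
    'jdbc-url' = 'jdbc:mysql://127.0.0.1:9030',   -- placeholder FE address
    'load-url' = '127.0.0.1:8030',                -- placeholder FE HTTP address
    'database-name' = 'test_db',
    'table-name' = 'test_table',
    'username' = 'root',
    'password' = '',
    -- With parallelism > 1, write order across subtasks is not guaranteed
    -- by the connector; the user must ensure ordering where it matters.
    'sink.parallelism' = '4'
);
```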