From 7a22576434e3c1b1b3ba058368f5e0e648c86292 Mon Sep 17 00:00:00 2001 From: WSO2 Builder Date: Thu, 5 Dec 2019 09:24:09 +0000 Subject: [PATCH] [WSO2-Release] [Release 5.0.7] update documentation for release 5.0.7 --- README.md | 10 +- docs/api/5.0.7.md | 432 +++++++++++++++++++++++++++++++++++++++++++++ docs/api/latest.md | 2 +- docs/index.md | 10 +- mkdocs.yml | 1 + 5 files changed, 444 insertions(+), 11 deletions(-) create mode 100644 docs/api/5.0.7.md diff --git a/README.md b/README.md index 11507b87..a83086f0 100644 --- a/README.md +++ b/README.md @@ -19,14 +19,14 @@ For information on Siddhi and i ## Latest API Docs -Latest API Docs is 5.0.6. +Latest API Docs is 5.0.7. ## Features -* kafka *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT XML JSON or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be a dynamic value taken from the Siddhi event.
To configure a sink to use the Kafka transport, the type parameter should have kafka as its value.

-* kafkaMultiDC *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT XML JSON or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be a dynamic value taken from the Siddhi event.
To configure a sink to publish events via the Kafka transport, and using two Kafka brokers to publish events to the same topic, the type parameter must have kafkaMultiDC as its value.

-* kafka *(Source)*

A Kafka source receives events to be processed by WSO2 SP from a topic with a partition for a Kafka cluster. The events received can be in the TEXT XML JSON or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic.

-* kafkaMultiDC *(Source)*

The Kafka Multi-Datacenter(DC) source receives records from the same topic in brokers deployed in two different kafka clusters. It filters out all the duplicate messages and ensuresthat the events are received in the correct order using sequential numbering. It receives events in formats such as TEXT, XML JSON and Binary`.The Kafka Source creates the default partition '0' for a given topic, if the topic has not yet been created in the Kafka cluster.

+* kafka *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.<br>
To configure a sink to use the Kafka transport, the type parameter should have kafka as its value.

+* kafkaMultiDC *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.<br>
To configure a sink to publish events via the Kafka transport, using two Kafka brokers to publish events to the same topic, the type parameter must have kafkaMultiDC as its value.<br>

+* kafka *(Source)*

A Kafka source receives events to be processed by WSO2 SP from a topic with a partition for a Kafka cluster. The events received can be in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka source creates the default partition for the given topic.<br>

+* kafkaMultiDC *(Source)*

The Kafka Multi-Datacenter (DC) source receives records from the same topic in brokers deployed in two different Kafka clusters. It filters out all the duplicate messages and ensures that the events are received in the correct order using sequential numbering. It receives events in formats such as TEXT, XML, JSON, and Binary. The Kafka source creates the default partition '0' for a given topic if the topic has not yet been created in the Kafka cluster.<br>

## Dependencies diff --git a/docs/api/5.0.7.md b/docs/api/5.0.7.md new file mode 100644 index 00000000..ebb7f1c7 --- /dev/null +++ b/docs/api/5.0.7.md @@ -0,0 +1,432 @@ +# API Docs - v5.0.7 + +!!! Info "Tested Siddhi Core version: *5.1.2*" + It could also support other Siddhi Core minor versions. + +## Sink + +### kafka *(Sink)* +

+

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.<br>
To configure a sink to use the Kafka transport, the type parameter should have kafka as its value.

+

+Syntax + +``` +@sink(type="kafka", bootstrap.servers="", topic="", partition.no="", sequence.id="", key="", is.binary.message="", optional.configuration="", @map(...))) +``` + +QUERY PARAMETERS + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameDescriptionDefault ValuePossible Data TypesOptionalDynamic
bootstrap.servers

This parameter specifies the list of Kafka servers to which the Kafka sink must publish events. This list should be provided as a set of comma-separated values, e.g., localhost:9092,localhost:9093.<br>

STRINGNoNo
topic

The topic to which the Kafka sink needs to publish events. Only a single topic can be specified.<br>

STRINGNoNo
partition.no

The partition number for the given topic. Only one partition ID can be defined. If no value is specified for this parameter, the Kafka sink publishes to the default partition of the topic (i.e., 0)

0INTYesNo
sequence.id

A unique identifier to identify the messages published by this sink. This ID allows receivers to identify the sink that published a specific message.

nullSTRINGYesNo
key

The key contains the values that are used to maintain ordering in a Kafka partition.

nullSTRINGYesNo
is.binary.message

To send binary events via the Kafka sink, set this parameter to 'true'.<br>

nullBOOLNoNo
optional.configuration

This parameter contains all the other possible configurations that the producer is created with.
e.g., producer.type:async,batch.size:200

nullSTRINGYesNo
+ +Examples +EXAMPLE 1 +``` +@App:name('TestExecutionPlan') +define stream FooStream (symbol string, price float, volume long); +@info(name = 'query1') +@sink( +type='kafka', +topic='topic_with_partitions', +partition.no='0', +bootstrap.servers='localhost:9092', +@map(type='xml')) +Define stream BarStream (symbol string, price float, volume long); +from FooStream select symbol, price, volume insert into BarStream; + +``` +

+

This Kafka sink configuration publishes to the 0th partition of the topic named topic_with_partitions.<br>

+

+EXAMPLE 2 +``` +@App:name('TestExecutionPlan') +define stream FooStream (symbol string, price float, volume long); +@info(name = 'query1') +@sink( +type='kafka', +topic='{{symbol}}', +partition.no='{{volume}}', +bootstrap.servers='localhost:9092', +@map(type='xml')) +Define stream BarStream (symbol string, price float, volume long); +from FooStream select symbol, price, volume insert into BarStream; +``` +

+

This query publishes to a dynamic topic and partition taken from the Siddhi event. The value for partition.no is taken from the volume attribute, and the topic is taken from the symbol attribute.<br>
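The following sketch combines the sequence.id and key parameters described in the table above with a JSON mapping; the topic name, sequence ID, key, and broker address are placeholder values, and the JSON mapper extension is assumed to be available.
```
@App:name('TestExecutionPlan')
define stream FooStream (symbol string, price float, volume long);
@info(name = 'query1')
@sink(
type='kafka',
topic='kafka_result_topic',
partition.no='0',
sequence.id='kafka_sink_1',
key='stock_events',
bootstrap.servers='localhost:9092',
@map(type='json'))
define stream BarStream (symbol string, price float, volume long);
-- Every message published to kafka_result_topic is tagged with the sequence ID 'kafka_sink_1', so receivers can identify this sink.
from FooStream select symbol, price, volume insert into BarStream;
```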

+

+### kafkaMultiDC *(Sink)* +

+

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.<br>
To configure a sink to publish events via the Kafka transport, using two Kafka brokers to publish events to the same topic, the type parameter must have kafkaMultiDC as its value.<br>

+

+Syntax + +``` +@sink(type="kafkaMultiDC", bootstrap.servers="", topic="", sequence.id="", key="", partition.no="", is.binary.message="", optional.configuration="", @map(...))) +``` + +QUERY PARAMETERS + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameDescriptionDefault ValuePossible Data TypesOptionalDynamic
bootstrap.servers

This parameter specifies the list of Kafka servers to which the Kafka sink must publish events. This list should be provided as a set of comma-separated values. There must be at least two servers in this list, e.g., localhost:9092,localhost:9093.<br>

STRINGNoNo
topic

The topic to which the Kafka sink needs to publish events. Only a single topic can be specified.<br>

STRINGNoNo
sequence.id

A unique identifier to identify the messages published by this sink. This ID allows receivers to identify the sink that published a specific message.

nullSTRINGYesNo
key

The key contains the values that are used to maintain ordering in a Kafka partition.

nullSTRINGYesNo
partition.no

The partition number for the given topic. Only one partition ID can be defined. If no value is specified for this parameter, the Kafka sink publishes to the default partition of the topic (i.e., 0)

0INTYesNo
is.binary.message

To send binary events via the kafkaMultiDC sink, set this parameter to 'true'.<br>

nullBOOLNoNo
optional.configuration

This parameter contains all the other possible configurations that the producer is created with.
e.g., producer.type:async,batch.size:200

nullSTRINGYesNo
+ +Examples +EXAMPLE 1 +``` +@App:name('TestExecutionPlan') +define stream FooStream (symbol string, price float, volume long); +@info(name = 'query1') +@sink(type='kafkaMultiDC', topic='myTopic', partition.no='0',bootstrap.servers='host1:9092, host2:9092', @map(type='xml'))Define stream BarStream (symbol string, price float, volume long); +from FooStream select symbol, price, volume insert into BarStream; + +``` +

+

This query publishes to the default (i.e., 0th) partition of the brokers in both data centers.<br>
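The producer created by this sink can also be tuned through the optional.configuration parameter; the sketch below reuses the documented producer.type:async,batch.size:200 example, and the topic name and broker addresses are placeholder values.
```
@App:name('TestExecutionPlan')
define stream FooStream (symbol string, price float, volume long);
@info(name = 'query1')
@sink(
type='kafkaMultiDC',
topic='myTopic',
partition.no='0',
bootstrap.servers='host1:9092, host2:9092',
optional.configuration='producer.type:async,batch.size:200',
@map(type='xml'))
define stream BarStream (symbol string, price float, volume long);
-- The same events are published to 'myTopic' on both brokers listed in bootstrap.servers.
from FooStream select symbol, price, volume insert into BarStream;
```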

+

+## Source + +### kafka *(Source)* +

+

A Kafka source receives events to be processed by WSO2 SP from a topic with a partition for a Kafka cluster. The events received can be in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka source creates the default partition for the given topic.<br>

+

+Syntax + +``` +@source(type="kafka", bootstrap.servers="", topic.list="", group.id="", threading.option="", partition.no.list="", seq.enabled="", is.binary.message="", topic.offsets.map="", enable.offsets.commit="", optional.configuration="", @map(...))) +``` + +QUERY PARAMETERS + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameDescriptionDefault ValuePossible Data TypesOptionalDynamic
bootstrap.servers

This specifies the list of Kafka servers to which the Kafka source must listen. This list can be provided as a set of comma-separated values.
e.g., localhost:9092,localhost:9093

STRINGNoNo
topic.list

This specifies the list of topics to which the source must listen. This list can be provided as a set of comma-separated values.
e.g., topic_one,topic_two

STRINGNoNo
group.id

This is an ID to identify the Kafka source group. The group ID ensures that sources with the same topic and partition that are in the same group do not receive the same event.

STRINGNoNo
threading.option

This specifies whether the Kafka source is to be run on a single thread, or in multiple threads based on a condition. Possible values are as follows:
single.thread: To run the Kafka source on a single thread.
topic.wise: To use a separate thread per topic.
partition.wise: To use a separate thread per partition.

STRINGNoNo
partition.no.list

The partition number list for the given topic. This is provided as a list of comma-separated values, e.g., 0,1,2.<br>

0STRINGYesNo
seq.enabled

If this parameter is set to true, the sequence of the events received via the source is taken into account. Therefore, each event should contain a sequence number as an attribute value to indicate the sequence.

falseBOOLYesNo
is.binary.message

To receive binary events via the Kafka source, set this parameter to 'true'.<br>

falseBOOLYesNo
topic.offsets.map

This parameter specifies reading offsets for each topic and partition. The value for this parameter is specified in the following format:
 <topic>=<offset>,<topic>=<offset>,
  When an offset is defined for a topic, the Kafka source skips reading the message with the number specified as the offset as well as all the messages sent previous to that message. If the offset is not defined for a specific topic it reads messages from the beginning.
e.g., stocks=100,trades=50 reads from the 101st message of the stocks topic, and from the 51st message of the trades topic. A source configuration using this parameter is sketched below, after the first example.<br>

nullSTRINGYesNo
enable.offsets.commit

This parameter specifies whether to commit offsets.
If manual asynchronous offset committing is needed, enable.offsets.commit should be true and enable.auto.commit should be false.<br>
If periodic committing is needed, enable.offsets.commit should be true and enable.auto.commit should be true.<br>
If committing is not needed, enable.offsets.commit should be false.

Note: enable.auto.commit is an optional.configuration property. If it is set to true, the source periodically (every 1000ms by default; configurable with the auto.commit.interval.ms property in optional.configuration) commits its current offset (defined as the offset of the next message to be read) for the partitions it is reading from back to Kafka.<br>
To guarantee at-least-once processing, we recommend enabling Siddhi Periodic State Persistence when the enable.auto.commit property is set to true.<br>
Manual committing might introduce latency during consumption; a manual-commit configuration is sketched below, after the second example.<br>

trueBOOLYesNo
optional.configuration

This parameter contains all the other possible configurations that the consumer is created with.
e.g., ssl.keystore.type:JKS,batch.size:200.

nullSTRINGYesNo
+ +Examples +EXAMPLE 1 +``` +@App:name('TestExecutionPlan') +define stream BarStream (symbol string, price float, volume long); +@info(name = 'query1') +@source( +type='kafka', +topic.list='kafka_topic,kafka_topic2', +group.id='test', +threading.option='partition.wise', +bootstrap.servers='localhost:9092', +partition.no.list='0,1', +@map(type='xml')) +Define stream FooStream (symbol string, price float, volume long); +from FooStream select symbol, price, volume insert into BarStream; + +``` +

+

This Kafka source configuration listens to the kafka_topic and kafka_topic2 topics, with partitions 0 and 1. A thread is created for each topic-partition combination. The events are received in the XML format, mapped to Siddhi events, and sent to a stream named FooStream.<br>
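The topic.offsets.map parameter described above can be added to such a configuration to control where reading starts; in the sketch below the topic names, group ID, and offsets are placeholder values.
```
@App:name('TestExecutionPlan')
define stream BarStream (symbol string, price float, volume long);
@info(name = 'query1')
@source(
type='kafka',
topic.list='stocks,trades',
group.id='offset_group',
threading.option='topic.wise',
bootstrap.servers='localhost:9092',
topic.offsets.map='stocks=100,trades=50',
@map(type='xml'))
define stream FooStream (symbol string, price float, volume long);
-- As described for topic.offsets.map, reading resumes after offset 100 for the stocks topic and after offset 50 for the trades topic; one thread is created per topic.
from FooStream select symbol, price, volume insert into BarStream;
```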

+

+EXAMPLE 2 +``` +@App:name('TestExecutionPlan') +define stream BarStream (symbol string, price float, volume long); +@info(name = 'query1') +@source( +type='kafka', +topic.list='kafka_topic', +group.id='test', +threading.option='single.thread', +bootstrap.servers='localhost:9092', +@map(type='xml')) +Define stream FooStream (symbol string, price float, volume long); +from FooStream select symbol, price, volume insert into BarStream; + +``` +

+

This Kafka source configuration listens to the kafka_topic topic for the default partition because no partition.no.list is defined. Only one thread is created for the topic. The events are received in the XML format, mapped to a Siddhi event, and sent to a stream named FooStream.
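For the manual asynchronous committing behaviour described under enable.offsets.commit, the consumer-level enable.auto.commit property is passed through optional.configuration; the group ID below is a placeholder value.
```
@App:name('TestExecutionPlan')
define stream BarStream (symbol string, price float, volume long);
@info(name = 'query1')
@source(
type='kafka',
topic.list='kafka_topic',
group.id='manual_commit_group',
threading.option='single.thread',
bootstrap.servers='localhost:9092',
enable.offsets.commit='true',
optional.configuration='enable.auto.commit:false',
@map(type='xml'))
define stream FooStream (symbol string, price float, volume long);
-- enable.offsets.commit='true' combined with enable.auto.commit:false selects manual asynchronous offset committing, as described in the parameter table.
from FooStream select symbol, price, volume insert into BarStream;
```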

+

+### kafkaMultiDC *(Source)* +

+

The Kafka Multi-Datacenter (DC) source receives records from the same topic in brokers deployed in two different Kafka clusters. It filters out all the duplicate messages and ensures that the events are received in the correct order using sequential numbering. It receives events in formats such as TEXT, XML, JSON, and Binary. The Kafka source creates the default partition '0' for a given topic if the topic has not yet been created in the Kafka cluster.<br>

+

+Syntax + +``` +@source(type="kafkaMultiDC", bootstrap.servers="", topic="", partition.no="", is.binary.message="", optional.configuration="", @map(...))) +``` + +QUERY PARAMETERS + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameDescriptionDefault ValuePossible Data TypesOptionalDynamic
bootstrap.servers

This contains the list of Kafka servers to which the Kafka source listens. This is given as comma-separated values, e.g., 'localhost:9092,localhost:9093'.<br>

STRINGNoNo
topic

This is the topic that the source listens to, e.g., 'topic_one'.<br>

STRINGNoNo
partition.no

This is the partition number of the given topic.

0INTYesNo
is.binary.message

To receive binary events via the Kafka Multi-DC source, set this parameter to 'true'.<br>

falseBOOLYesNo
optional.configuration

This contains all the other possible configurations with which the consumer can be created, e.g., producer.type:async,batch.size:200.<br>

nullSTRINGYesNo
+ +Examples +EXAMPLE 1 +``` +@App:name('TestExecutionPlan') +define stream BarStream (symbol string, price float, volume long); +@info(name = 'query1') +@source(type='kafkaMultiDC', topic='kafka_topic', bootstrap.servers='host1:9092,host1:9093', partition.no='1', @map(type='xml')) +Define stream FooStream (symbol string, price float, volume long); +from FooStream select symbol, price, volume insert into BarStream; + +``` +

+

This query listens to the 'kafka_topic' topic on partition 1 of the brokers host1:9092 and host1:9093. A thread is created for each broker. The received XML events are mapped to Siddhi events and sent to the FooStream stream.<br>
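Consumer-level properties can also be passed to the Multi-DC source through optional.configuration; the sketch below reuses the ssl.keystore.type:JKS example documented for the single-DC source, and the topic name and broker addresses are placeholder values.
```
@App:name('TestExecutionPlan')
define stream BarStream (symbol string, price float, volume long);
@info(name = 'query1')
@source(
type='kafkaMultiDC',
topic='kafka_topic',
bootstrap.servers='host1:9092,host1:9093',
partition.no='1',
optional.configuration='ssl.keystore.type:JKS',
@map(type='xml'))
define stream FooStream (symbol string, price float, volume long);
-- Duplicate records received from the two brokers are filtered out before the events reach FooStream.
from FooStream select symbol, price, volume insert into BarStream;
```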

+

diff --git a/docs/api/latest.md b/docs/api/latest.md index 79beedd4..ebb7f1c7 100644 --- a/docs/api/latest.md +++ b/docs/api/latest.md @@ -1,4 +1,4 @@ -# API Docs - v5.0.6 +# API Docs - v5.0.7 !!! Info "Tested Siddhi Core version: *5.1.2*" It could also support other Siddhi Core minor versions. diff --git a/docs/index.md b/docs/index.md index 11507b87..a83086f0 100644 --- a/docs/index.md +++ b/docs/index.md @@ -19,14 +19,14 @@ For information on Siddhi and i ## Latest API Docs -Latest API Docs is 5.0.6. +Latest API Docs is 5.0.7. ## Features -* kafka *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT XML JSON or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be a dynamic value taken from the Siddhi event.
To configure a sink to use the Kafka transport, the type parameter should have kafka as its value.

-* kafkaMultiDC *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT XML JSON or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be a dynamic value taken from the Siddhi event.
To configure a sink to publish events via the Kafka transport, and using two Kafka brokers to publish events to the same topic, the type parameter must have kafkaMultiDC as its value.

-* kafka *(Source)*

A Kafka source receives events to be processed by WSO2 SP from a topic with a partition for a Kafka cluster. The events received can be in the TEXT XML JSON or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic.

-* kafkaMultiDC *(Source)*

The Kafka Multi-Datacenter(DC) source receives records from the same topic in brokers deployed in two different kafka clusters. It filters out all the duplicate messages and ensuresthat the events are received in the correct order using sequential numbering. It receives events in formats such as TEXT, XML JSON and Binary`.The Kafka Source creates the default partition '0' for a given topic, if the topic has not yet been created in the Kafka cluster.

+* kafka *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.<br>
To configure a sink to use the Kafka transport, the type parameter should have kafka as its value.

+* kafkaMultiDC *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.<br>
To configure a sink to publish events via the Kafka transport, using two Kafka brokers to publish events to the same topic, the type parameter must have kafkaMultiDC as its value.<br>

+* kafka *(Source)*

A Kafka source receives events to be processed by WSO2 SP from a topic with a partition for a Kafka cluster. The events received can be in the TEXT, XML, JSON, or Binary format.<br>
If the topic is not already created in the Kafka cluster, the Kafka source creates the default partition for the given topic.<br>

+* kafkaMultiDC *(Source)*

The Kafka Multi-Datacenter (DC) source receives records from the same topic in brokers deployed in two different Kafka clusters. It filters out all the duplicate messages and ensures that the events are received in the correct order using sequential numbering. It receives events in formats such as TEXT, XML, JSON, and Binary. The Kafka source creates the default partition '0' for a given topic if the topic has not yet been created in the Kafka cluster.<br>

## Dependencies diff --git a/mkdocs.yml b/mkdocs.yml index e7eca95d..aebc61ac 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -34,6 +34,7 @@ pages: - Information: index.md - API Docs: - latest: api/latest.md + - 5.0.7: api/5.0.7.md - 5.0.6: api/5.0.6.md - 5.0.5: api/5.0.5.md - 5.0.4: api/5.0.4.md