[ISSUE #166] Provide updated Runtime/Docker deployment and compilation docs (#170)

* Format updated connectors doc

* Remove zh incubator desc

* fix zh next 1.10.0 release notes

* Sync connectors change

* migrate connector status

* Split connectors table

* Sync connector doc, order by roadmap

* add link to doc in roadmap

* fix minor issues

* use new http connector doc

* provide runtime docker deployment and compilation doc

* provide runtime deployment and compilation doc

* minor optimization

* fix broken links

* fix broken anchors and some minor issue

* Make less important Eclipse documentation less prominent

* copy Next docs to v1.10.0 & add sidebar & fix relative path

* Add the Event Store Implementation Status

* some minor change

* link to Event Store instead of Github

* add non-standalone notice

* add code block highlight

* upgrade docusaurus from 2.4.1 to 2.4.3 and node from 16 to 18

* Update outdated info of Runtime docs
Pil0tXia authored Jan 10, 2024
1 parent 81ac48f commit fb16c27
Showing 179 changed files with 3,894 additions and 2,538 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/publish.yml
@@ -14,7 +14,7 @@ jobs:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: "18"

      - name: Install Dependencies
        run: |
6 changes: 3 additions & 3 deletions docs/design-document/02-observability/04-zipkin.md
@@ -10,15 +10,15 @@ Distributed tracing is a method used to profile and monitor applications built w

To enable the trace exporter of EventMesh Runtime, set the `eventMesh.server.trace.enabled` field in the `conf/eventmesh.properties` file to `true`.

```properties
# Trace plugin
eventMesh.server.trace.enabled=true
eventMesh.trace.plugin=zipkin
```

To customize the behavior of the trace exporter such as timeout or export interval, edit the `exporter.properties` file.

```properties
# Set the maximum batch size to use
eventmesh.trace.max.export.size=512
# Set the queue size. This must be >= the export batch size
@@ -31,7 +31,7 @@ eventmesh.trace.export.interval=5

To send the exported trace data to Zipkin, edit the `eventmesh.trace.zipkin.ip` and `eventmesh.trace.zipkin.port` fields in the `conf/zipkin.properties` file to match the configuration of the Zipkin server.

```properties
# Zipkin's IP and Port
eventmesh.trace.zipkin.ip=localhost
eventmesh.trace.zipkin.port=9411
6 changes: 3 additions & 3 deletions docs/design-document/02-observability/05-jaeger.md
@@ -10,15 +10,15 @@ For the installation of Jaeger, you can refer to the [official documentation](ht

To enable the trace exporter of EventMesh Runtime, set the `eventMesh.server.trace.enabled` field in the `conf/eventmesh.properties` file to `true`.

```properties
# Trace plugin
eventMesh.server.trace.enabled=true
eventMesh.trace.plugin=jaeger
```

To customize the behavior of the trace exporter such as timeout or export interval, edit the `exporter.properties` file.

```properties
# Set the maximum batch size to use
eventmesh.trace.max.export.size=512
# Set the queue size. This must be >= the export batch size
@@ -31,7 +31,7 @@ eventmesh.trace.export.interval=5

To send the exported trace data to Jaeger, edit the `eventmesh.trace.jaeger.ip` and `eventmesh.trace.jaeger.port` fields in the `conf/jaeger.properties` file to match the configuration of the Jaeger server.

```properties
# Jaeger's IP and Port
eventmesh.trace.jaeger.ip=localhost
eventmesh.trace.jaeger.port=14250
40 changes: 7 additions & 33 deletions docs/design-document/03-connect/00-connectors.md
@@ -4,6 +4,8 @@

A connector is an image or instance that interacts with a specific external service or underlying data source (e.g., Databases) on behalf of user applications. A connector is either a Source or a Sink.

A connector runs as a standalone service via its `main()` method.

## Source

A source connector obtains data from an underlying data producer and delivers it to targets after the original data has been transformed into CloudEvents. The way a source retrieves data is not restricted; for example, a source may pull data from a message queue or act as an HTTP server waiting for data to be sent to it.
@@ -23,55 +25,27 @@ Add a new connector by implementing the source/sink interface using [eventmesh-o
## Technical Solution

### Structure and process

![source-sink connector architecture](../../../static/images/design-document/connector-architecture.png)

### Design Detail

![eventmesh-connect-detail](../../../static/images/design-document/connector-design-detail.png)

### Description

#### Worker

Worker is divided into Source Worker and Sink Worker, which are triggered by the `Application` class and implement the methods of the `ConnectorWorker` interface, including the worker's running life cycle; the worker carries the running of the connector. Workers can run lightweight and independently as images; the eventmesh-sdk-java module is integrated internally, and the CloudEvents protocol is used to interact with EventMesh. Currently, the TCP client is used by default; support for dynamic configuration may be considered in the future.
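The life cycle described above can be sketched as a minimal interface. This is an illustrative sketch only: apart from the `ConnectorWorker` name taken from the text, the method names and the `SinkWorker` class are assumptions, not the actual EventMesh API.

```java
// Illustrative sketch of a worker life cycle; method names are assumptions.
interface ConnectorWorker {
    void init();   // load connector config, create the EventMesh TCP client
    void start();  // begin moving data between the connector and EventMesh
    void stop();   // release connector and client resources
}

class SinkWorker implements ConnectorWorker {
    private boolean running;

    public void init() { /* reflectively load and configure the sink connector */ }

    public void start() {
        // in the real worker: subscribe via eventmesh-sdk-java and hand events to the connector
        running = true;
    }

    public void stop() { running = false; }

    public boolean isRunning() { return running; }
}
```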

#### Connector

Connectors are divided into Source Connector and Sink Connector. Connectors have their own configuration files and run independently; workers perform reflective loading and configuration parsing to complete connector initialization and subsequent operation. A Source Connector implements the poll method and a Sink Connector implements the put method, both uniformly using `ConnectorRecord` to carry the data. Source Connector and Sink Connector can each operate independently.
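The poll/put contract can be sketched as follows; the signatures and the in-memory classes are illustrative assumptions, and the real `ConnectorRecord` carries additional metadata:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative connector-layer record; the real ConnectorRecord carries more metadata.
class ConnectorRecord {
    final Object data;
    ConnectorRecord(Object data) { this.data = data; }
}

interface SourceConnector {
    List<ConnectorRecord> poll(); // fetch a batch from the external system
}

interface SinkConnector {
    void put(List<ConnectorRecord> records); // write a batch to the external system
}

// A trivial in-memory pair showing how a worker would wire the two sides together.
class MemorySource implements SourceConnector {
    public List<ConnectorRecord> poll() {
        List<ConnectorRecord> batch = new ArrayList<>();
        batch.add(new ConnectorRecord("hello"));
        return batch;
    }
}

class MemorySink implements SinkConnector {
    final List<Object> stored = new ArrayList<>();
    public void put(List<ConnectorRecord> records) {
        for (ConnectorRecord record : records) stored.add(record.data);
    }
}
```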

#### ConnectorRecord with CloudEvents

`ConnectorRecord` is a connector-layer data protocol. When workers interact with EventMesh, a protocol adapter needs to be developed to convert `ConnectorRecord` to the CloudEvents protocol.
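A protocol adapter of the kind described might look like this sketch, which maps a record payload onto the required CloudEvents context attributes (`specversion`, `id`, `source`, `type`). The real adapter builds events with the CloudEvents Java SDK; the attribute values here are placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

class RecordToCloudEventAdapter {
    // Map a connector-layer record onto the required CloudEvents context attributes.
    static Map<String, Object> toCloudEvent(Object recordData, String topic) {
        Map<String, Object> event = new HashMap<>();
        event.put("specversion", "1.0");
        event.put("id", UUID.randomUUID().toString());
        event.put("source", "eventmesh-connector"); // placeholder source URI
        event.put("type", "eventmesh.connector.record"); // placeholder event type
        event.put("subject", topic); // the EventMesh topic the worker publishes to
        event.put("data", recordData);
        return event;
    }
}
```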

#### Registry

The Registry module stores the data synchronization progress of the different Connector instances, ensuring high availability across multiple Connector images or instances.
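As a sketch of the idea, an in-memory stand-in for such a progress store might look like this; the real Registry persists progress externally so that another instance can take over:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the progress store the Registry provides; names are illustrative.
class OffsetRegistry {
    private final Map<String, Long> offsets = new ConcurrentHashMap<>();

    void commit(String connectorInstance, long position) {
        offsets.merge(connectorInstance, position, Math::max); // keep the furthest progress
    }

    long recover(String connectorInstance) {
        return offsets.getOrDefault(connectorInstance, 0L); // resume point for a restarted instance
    }
}
```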

## Connector Status

| Connector Name | Source | Sink |
|:------------------------------------------------:|:-----------:|:-------:|
| [RocketMQ](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-rocketmq) |||
| ChatGPT |||
| ClickHouse |||
| [DingTalk](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-dingtalk) |||
| Email |||
| [Feishu/Lark](./lark-connector) |||
| [File](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-file) |||
| GitHub |||
| [HTTP](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-http) |||
| [Jdbc](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-jdbc) |||
| [Kafka](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-kafka) |||
| [Knative](./knative-connector) |||
| [MongoDB](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-mongodb) |||
| [OpenFunction](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-openfunction) |||
| [Pravega](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-pravega) |||
| [Prometheus](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-prometheus) |||
| [Pulsar](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-pulsar) |||
| [RabbitMQ](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-rabbitmq) |||
| [Redis](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-redis) |||
| [S3 File](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-s3) |||
| [Slack](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-slack) |||
| [Spring](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-spring) |||
| [WeCom](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-wecom) |||
| [WeChat](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors/eventmesh-connector-wechat) |||
| More connectors will be added... | N/A | N/A |
@@ -1,13 +1,15 @@
# RabbitMQ

## RabbitMQSinkConnector: From EventMesh to RabbitMQ

1. Launch your RabbitMQ server and EventMesh Runtime.
2. Enable sinkConnector and check `sink-config.yml`.
3. Start your `RabbitMQConnectorServer`; it will subscribe to the topic defined in `pubSubConfig.subject` of EventMesh Runtime and send data to `connectorConfig.queueName` in your RabbitMQ.
4. Send a message to EventMesh with the topic defined in `pubSubConfig.subject`, and you will receive the message in RabbitMQ.

```yaml
pubSubConfig:
  # default port 10000
  meshAddress: your.eventmesh.server:10000
  subject: TopicTest
  idc: FT
@@ -32,8 +34,9 @@ connectorConfig:
  autoAck: true
```
## RabbitMQSourceConnector: From RabbitMQ to EventMesh

1. Launch your RabbitMQ server and EventMesh Runtime.
2. Enable sourceConnector and check `source-config.yml` (basically the same as `sink-config.yml`).
3. Start your `RabbitMQConnectorServer`; it will subscribe to the queue defined in `connectorConfig.queueName` in your RabbitMQ and send data to `pubSubConfig.subject` of EventMesh Runtime.
4. Send a CloudEvent message to the queue, and you will receive the message in EventMesh.
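For the last step, the queue needs a CloudEvents-formatted body. A minimal structured-mode JSON payload can be assembled by hand as a sketch (field values are placeholders); publish the resulting bytes to the queue with your usual RabbitMQ client:

```java
class CloudEventPayload {
    // Build a minimal CloudEvents 1.0 structured-mode JSON body by hand.
    // In real code, prefer the CloudEvents Java SDK; values here are placeholders.
    static String structuredJson(String id, String topic, String jsonData) {
        return "{"
                + "\"specversion\":\"1.0\","
                + "\"id\":\"" + id + "\","
                + "\"source\":\"/rabbitmq-source-demo\","
                + "\"type\":\"cloudevents.demo\","
                + "\"subject\":\"" + topic + "\","
                + "\"datacontenttype\":\"application/json\","
                + "\"data\":" + jsonData
                + "}";
    }
}
```

With the RabbitMQ Java client, the bytes can then be published via `channel.basicPublish("", queueName, null, payload.getBytes(StandardCharsets.UTF_8))`.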
33 changes: 33 additions & 0 deletions docs/design-document/03-connect/03-redis-connector.md
@@ -0,0 +1,33 @@
# Redis

## RedisSinkConnector: From EventMesh to Redis topic queue

1. Start your Redis instance if needed, along with EventMesh Runtime.
2. Enable sinkConnector and check `sink-config.yml`.
3. Start your `RedisConnectServer`; it will subscribe to the topic defined in `pubSubConfig.subject` of EventMesh Runtime and send data to `connectorConfig.topic` in your Redis.
4. Send a message to EventMesh with the topic defined in `pubSubConfig.subject`, and you will receive the message in Redis.

```yaml
pubSubConfig:
  # default port 10000
  meshAddress: your.eventmesh.server:10000
  subject: TopicTest
  idc: FT
  env: PRD
  group: redisSink
  appId: 5031
  userName: redisSinkUser
  passWord: redisPassWord
connectorConfig:
  connectorName: redisSink
  server: redis://127.0.0.1:6379
  # the topic in redis
  topic: SinkTopic
```
## RedisSourceConnector: From Redis topic queue to EventMesh

1. Start your Redis instance if needed, along with EventMesh Runtime.
2. Enable sourceConnector and check `source-config.yml` (basically the same as `sink-config.yml`).
3. Start your `RedisConnectServer`; it will subscribe to the topic defined in `connectorConfig.topic` in your Redis and send data to `pubSubConfig.subject` of EventMesh Runtime.
4. Send a CloudEvent message to the topic in Redis, and you will receive the message in EventMesh.
36 changes: 36 additions & 0 deletions docs/design-document/03-connect/04-mongodb-connector.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# MongoDB

## MongoDBSinkConnector: From EventMesh to MongoDB

1. Launch your MongoDB server and EventMesh Runtime.
2. Enable sinkConnector and check `sink-config.yml`.
3. Start your `MongoDBConnectorServer`; it will subscribe to the topic defined in `pubSubConfig.subject` of EventMesh Runtime and send data to `connectorConfig.collection` in your MongoDB.
4. Send a message to EventMesh with the topic defined in `pubSubConfig.subject`, and you will receive the message in MongoDB.

```yaml
pubSubConfig:
  # default port 10000
  meshAddress: your.eventmesh.server:10000
  subject: TopicTest
  idc: FT
  env: PRD
  group: mongodbSink
  appId: 5031
  userName: mongodbSinkUser
  passWord: mongodbPassWord
connectorConfig:
  connectorName: mongodbSink
  # REPLICA_SET or STANDALONE is supported
  connectorType: STANDALONE
  # mongodb://root:[email protected]:27018,127.0.0.1:27019
  url: mongodb://127.0.0.1:27018
  database: yourDB
  collection: yourCol
```
## MongoDBSourceConnector: From MongoDB to EventMesh

1. Launch your MongoDB server and EventMesh Runtime.
2. Enable sourceConnector and check `source-config.yml` (basically the same as `sink-config.yml`).
3. Start your `MongoDBSourceConnector`; it will subscribe to the collection defined in `connectorConfig.collection` in your MongoDB and send data to `pubSubConfig.subject` of EventMesh Runtime.
4. Write a CloudEvent message to `yourCol` at `yourDB` in your MongoDB, and you will receive the message in EventMesh.
34 changes: 34 additions & 0 deletions docs/design-document/03-connect/07-dingtalk-connector.md
@@ -0,0 +1,34 @@
# DingTalk

## DingtalkSinkConnector: From EventMesh to DingTalk

1. Launch your EventMesh Runtime.
2. Enable sinkConnector and check `sink-config.yml`.
3. Send a message to EventMesh with the topic defined in `pubSubConfig.subject`.

```yaml
pubSubConfig:
  # default port 10000
  meshAddress: your.eventmesh.server:10000
  subject: TEST-TOPIC-DINGTALK
  idc: FT
  env: PRD
  group: dingTalkSink
  appId: 5034
  userName: dingTalkSinkUser
  passWord: dingTalkPassWord
sinkConnectorConfig:
  connectorName: dingTalkSink
  # Please refer to: https://open.dingtalk.com/document/orgapp/the-robot-sends-a-group-message
  appKey: dingTalkAppKey
  appSecret: dingTalkAppSecret
  openConversationId: dingTalkOpenConversationId
  robotCode: dingTalkRobotCode
```
### CloudEvent Attributes

When using eventmesh-connector-dingtalk to sink an event, you need to add the corresponding extension fields to the CloudEvent:

- When key=`dingtalktemplatetype`, value=`text`/`markdown`, indicating the text type of the event.
- When the text type is markdown, you can add the extension key=`dingtalkmarkdownmessagetitle`; its value indicates the title of the event.
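As a sketch, these extensions can be prepared as a map before the event is handed to EventMesh; with the CloudEvents Java SDK you would attach each entry via `CloudEventBuilder.withExtension(...)` instead:

```java
import java.util.HashMap;
import java.util.Map;

class DingTalkExtensions {
    // Extensions for a markdown-type DingTalk sink event.
    static Map<String, String> markdown(String title) {
        Map<String, String> ext = new HashMap<>();
        ext.put("dingtalktemplatetype", "markdown");
        ext.put("dingtalkmarkdownmessagetitle", title);
        return ext;
    }

    // Extensions for a plain-text DingTalk sink event.
    static Map<String, String> text() {
        Map<String, String> ext = new HashMap<>();
        ext.put("dingtalktemplatetype", "text");
        return ext;
    }
}
```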
30 changes: 30 additions & 0 deletions docs/design-document/03-connect/08-wecom-connector.md
@@ -0,0 +1,30 @@
# WeCom

## WecomSinkConnector: From EventMesh to WeCom

1. Launch your EventMesh Runtime.
2. Enable sinkConnector and check `sink-config.yml`.
3. Send a message to EventMesh with the topic defined in `pubSubConfig.subject`.

```yaml
pubSubConfig:
  # default port 10000
  meshAddress: your.eventmesh.server:10000
  subject: TEST-TOPIC-WECOM
  idc: FT
  env: PRD
  group: weComSink
  appId: 5034
  userName: weComSinkUser
  passWord: weComPassWord
sinkConnectorConfig:
  connectorName: weComSink
  # Please refer to: https://developer.work.weixin.qq.com/document/path/90236
  robotWebhookKey: weComRobotWebhookKey
```
### CloudEvent Attributes

When using eventmesh-connector-wecom to sink an event, you need to add the corresponding extension field to the CloudEvent:

- When key=`wecomtemplatetype`, value=`text`/`markdown`, indicating the text type of the event.
25 changes: 25 additions & 0 deletions docs/design-document/03-connect/09-slack-connector.md
@@ -0,0 +1,25 @@
# Slack

## SlackSinkConnector: From EventMesh to Slack

1. Launch your EventMesh Runtime.
2. Enable sinkConnector and check `sink-config.yml`.
3. Send a message to EventMesh with the topic defined in `pubSubConfig.subject`.

```yaml
pubSubConfig:
  # default port 10000
  meshAddress: your.eventmesh.server:10000
  subject: TEST-TOPIC-SLACK
  idc: FT
  env: PRD
  group: slackSink
  appId: 5034
  userName: slackSinkUser
  passWord: slackPassWord
sinkConnectorConfig:
  connectorName: slackSink
  # Please refer to: https://api.slack.com/messaging/sending
  appToken: slackAppToken
  channelId: slackChannelId
```
42 changes: 0 additions & 42 deletions docs/instruction/00-eclipse.md

This file was deleted.
