Cluster Deployment on Linux
This article will introduce how to quickly deploy and start an AutoMQ cluster with 3 CONTROLLER nodes and 2 BROKER nodes in a Public Cloud environment and test the core features of AutoMQ.
AutoMQ supports deployment in a Private Cloud. You can choose to build your own storage system compatible with AWS EBS and AWS S3, such as Ceph, CubeFS, or MinIO.
- Prepare 5 hosts for deploying the AutoMQ cluster. In a Public Cloud environment, it is recommended to choose network-optimized Linux amd64 hosts with 2 CPUs and 16GB of memory, ensuring that the system disk has at least 10GB of storage and the data volume at least 10GB. Configuration can be reduced appropriately for test environments. Example:
| Role | IP | Node ID | System Volume | Data Volume |
|------|----|---------|---------------|-------------|
| CONTROLLER | 192.168.0.1 | 0 | EBS 20GB | EBS 20GB |
| CONTROLLER | 192.168.0.2 | 1 | EBS 20GB | EBS 20GB |
| CONTROLLER | 192.168.0.3 | 2 | EBS 20GB | EBS 20GB |
| BROKER | 192.168.0.4 | 1000 | EBS 20GB | EBS 20GB |
| BROKER | 192.168.0.5 | 1001 | EBS 20GB | EBS 20GB |
It is recommended to specify the same subnet and IP addresses as in this example when purchasing computing resources, making it convenient to directly copy operation commands.
- Download the binary installation package to install AutoMQ. Refer to: Software Artifact.
- Create two custom-named object storage buckets, such as automq-data and automq-ops.
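On AWS, the buckets can be created with the AWS CLI; this is a sketch assuming credentials are already configured and that the example bucket names are still available (bucket names must be globally unique, so adjust as needed):

```shell
# Create the data and ops buckets in us-east-1
# (us-east-1 needs no LocationConstraint; other regions do)
aws s3api create-bucket --bucket automq-data --region us-east-1
aws s3api create-bucket --bucket automq-ops --region us-east-1
```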
- Create an IAM user and generate an Access Key and Secret Key for this user. Then grant the IAM user full read and write permissions on the previously created object storage buckets.
For AWS, please refer to the official website for more detailed information.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:AbortMultipartUpload",
"s3:DeleteObject"
],
"Resource": [
"arn:aws-cn:s3:::automq-data/*",
"arn:aws-cn:s3:::automq-ops/*"
]
}
]
}
Because Azure's object storage is not protocol-compatible with AWS S3, AutoMQ currently cannot run on Azure. The AutoMQ Team is developing a compatibility solution for Azure and plans to release it soon.
For Google Cloud, please refer to the official website for more detailed information.
{
"title": "AutomqStorageRole",
"description": "Custom Roles for AutoMQ Store Operations",
"stage": "GA",
"includedPermissions": [
"storage.multipartUploads.create",
"storage.objects.create",
"storage.objects.delete",
"storage.objects.get"
]
}
Please refer to the official website for more detailed information.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:AbortMultipartUpload",
"s3:DeleteObject"
],
"Resource": [
"arn:aws-cn:s3:::automq-data",
"arn:aws-cn:s3:::automq-ops"
]
}
]
}
For Alibaba Cloud, please refer to the official website for more detailed information.
{
"Version": "1",
"Statement": [
{
"Effect": "Allow",
"Action": [
"oss:PutObject",
"oss:AbortMultipartUpload",
"oss:GetObject",
"oss:DeleteObject"
],
"Resource": [
"acs:oss:*:*:automq-data",
"acs:oss:*:*:automq-ops"
]
}
]
}
For Tencent Cloud, please refer to the official website for more detailed information.
{
"statement": [
{
"action": [
"cos:AbortMultipartUpload",
"cos:GetObject",
"cos:CompleteMultipartUpload",
"cos:InitiateMultipartUpload",
"cos:DeleteObject",
"cos:PutObject",
"cos:UploadPart"
],
"effect": "allow",
"resource": [
"qcs::cos:ap-nanjing:uid/1258965391:automq-data-1258965391/*",
"qcs::cos:ap-nanjing:uid/1258965391:automq-ops-1258965391/*"
]
}
],
"version": "2.0"
}
For Huawei Cloud, please refer to the official website for more detailed information.
{
"Version": "1.1",
"Statement": [
{
"Effect": "Allow",
"Action": [
"obs:object:GetObject",
"obs:object:AbortMultipartUpload",
"obs:object:DeleteObject",
"obs:object:PutObject"
],
"Resource": [
"OBS:*:*:object:automq-data/*",
"OBS:*:*:object:automq-ops/*"
]
}
]
}
For Baidu Cloud, please refer to the official website for more detailed information.
{
  "accessControlList": [
    {
      "service": "bce:bos",
      "region": "*",
      "effect": "Allow",
      "permission": [
        "READ",
        "WRITE"
      ],
      "resource": [
        "automq-data/*",
        "automq-ops/*"
      ]
    }
  ]
}
For Oracle Cloud Infrastructure (OCI), please refer to the official website for more detailed information.
Allow group 'AutoMQ PoC' to manage objects in tenancy where any {target.bucket.name='automq-data', target.bucket.name='automq-ops'}
AutoMQ requires EBS and S3 services. As long as a cloud platform supports the standard protocols for these two services, AutoMQ can run on it. The AutoMQ Team will continue to expand compatibility test reports for other cloud platforms.
AutoMQ provides the automq-cli.sh tool for AutoMQ cluster operations and maintenance. Running automq-cli.sh cluster create [project] will automatically create a cluster configuration template at clusters/[project]/topo.yaml in the current directory.
bin/automq-cli.sh cluster create poc
Success create AutoMQ cluster project: poc
========================================================
Please follow the steps to deploy AutoMQ cluster:
1. Modify the cluster topology config clusters/poc/topo.yaml to fit your needs
2. Run ./bin/automq-cli.sh cluster deploy --dry-run clusters/poc , to deploy the AutoMQ cluster
Edit the configuration template generated in Step 1. A sample configuration template is shown below:
global:
  clusterId: ''
  # Bucket URI Pattern: 0@s3://$bucket?region=$region&endpoint=$endpoint
  # Bucket URI Example:
  # AWS   : 0@s3://xxx_bucket?region=us-east-1
  # AWS-CN: 0@s3://xxx_bucket?region=cn-northwest-1&endpoint=https://s3.amazonaws.com.cn
  # ALIYUN: 0@s3://xxx_bucket?region=oss-cn-shanghai&endpoint=https://oss-cn-shanghai.aliyuncs.com
  # TENCENT: 0@s3://xxx_bucket?region=ap-beijing&endpoint=https://cos.ap-beijing.myqcloud.com
  # OCI   : 0@s3://xxx_bucket?region=us-ashburn-1&endpoint=https://xxx_namespace.compat.objectstorage.us-ashburn-1.oraclecloud.com&pathStyle=true
  config: |
    s3.data.buckets=0@s3://xxx_bucket?region=us-east-1
    s3.ops.buckets=1@s3://xxx_bucket?region=us-east-1
  envs:
    - name: KAFKA_S3_ACCESS_KEY
      value: 'xxxxx'
    - name: KAFKA_S3_SECRET_KEY
      value: 'xxxxx'
controllers:
  # The controllers default to combined nodes whose roles are controller and broker.
  # The default controller port is 9093 and the default broker port is 9092.
  - host: 192.168.0.1
    nodeId: 0
  - host: 192.168.0.2
    nodeId: 1
  - host: 192.168.0.3
    nodeId: 2
brokers:
  - host: 192.168.0.4
    nodeId: 1000
  - host: 192.168.0.5
    nodeId: 1001
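The bucket URI follows the pattern shown in the template comments; a quick way to sanity-check a composed URI before pasting it into the template (values here are illustrative):

```shell
# Compose a bucket URI following 0@s3://$bucket?region=$region&endpoint=$endpoint
bucket="automq-data"
region="us-east-1"
endpoint="https://s3.us-east-1.amazonaws.com"
data_uri="0@s3://${bucket}?region=${region}&endpoint=${endpoint}"
echo "s3.data.buckets=${data_uri}"
```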
- global.clusterId: a uniquely generated ID; no modification needed.
- global.config: custom incremental configuration for all nodes in the cluster. Here you must change s3.data.buckets and s3.ops.buckets to their actual values; additional configuration items can be added on new lines.
- global.envs: environment variables for the nodes. Here you must replace the values of KAFKA_S3_ACCESS_KEY and KAFKA_S3_SECRET_KEY with the actual values.
- controllers: controller node list; replace with actual values.
- brokers: broker node list; replace with actual values.
By default, AutoMQ stores metadata and WAL data in the /tmp directory. For production or formal testing environments, it is recommended to add global configuration in the cluster configuration template, setting the metadata directory log.dirs and the WAL data directory s3.wal.path to paths on persistent storage. The configuration reference is as follows:
global:
  ...
  config: |
    s3.data.buckets=0@s3://xxx_bucket?region=us-east-1
    s3.ops.buckets=1@s3://xxx_bucket?region=us-east-1
    log.dirs=/root/kraft-logs
    s3.wal.path=/root/kraft-logs/s3wal
  ...
Execute the cluster deployment command:
bin/automq-cli.sh cluster deploy --dry-run clusters/poc
This command will first check the correctness of the S3 configuration, ensure successful access to S3, and finally output the startup commands for each node. The output example is as follows:
Host: 192.168.0.1
KAFKA_S3_ACCESS_KEY=xxxx KAFKA_S3_SECRET_KEY=xxxx ./bin/kafka-server-start.sh -daemon config/kraft/server.properties --override cluster.id=JN1cUcdPSeGVnzGyNwF1Rg --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092 --override s3.data.buckets='0@s3://xxx_bucket?region=us-east-1' --override s3.ops.buckets='1@s3://xxx_bucket?region=us-east-1'
...
To start the cluster, execute the command list from the previous step sequentially on the pre-specified CONTROLLER or BROKER hosts. For example, to start the first CONTROLLER process on 192.168.0.1, execute the corresponding command from the generated startup command list on that host.
KAFKA_S3_ACCESS_KEY=xxxx KAFKA_S3_SECRET_KEY=xxxx ./bin/kafka-server-start.sh -daemon config/kraft/server.properties --override cluster.id=JN1cUcdPSeGVnzGyNwF1Rg --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092 --override s3.data.buckets='0@s3://xxx_bucket?region=us-east-1' --override s3.ops.buckets='1@s3://xxx_bucket?region=us-east-1'
After starting the AutoMQ cluster, you can run the following demo to verify its functionality.
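A minimal produce/consume check can be run with the standard Kafka CLI tools shipped with AutoMQ; the topic name is illustrative, and the bootstrap address assumes the example broker hosts above:

```shell
# Create a test topic against one of the brokers
bin/kafka-topics.sh --create --topic quickstart-test --partitions 1 \
  --bootstrap-server 192.168.0.4:9092

# Produce a few messages interactively (type messages, then Ctrl-C to exit)
bin/kafka-console-producer.sh --topic quickstart-test \
  --bootstrap-server 192.168.0.4:9092

# Consume them back from the beginning
bin/kafka-console-consumer.sh --topic quickstart-test --from-beginning \
  --bootstrap-server 192.168.0.4:9092
```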
After completing the tests, you can refer to the following steps to stop and uninstall the AutoMQ cluster.
- Execute the following command on each node to stop the process.
bin/kafka-server-stop.sh
- Configure lifecycle rules on the object storage to automatically clear the data in the automq-data and automq-ops buckets, and then delete these buckets.
- Delete the created compute instances along with their corresponding system and data volumes.
- Delete the test user and the associated AccessKey and SecretKey.