
Deploy Direct S3 Cluster


This article describes how to quickly deploy and start a single-node AutoMQ instance in a public cloud environment and test its core features.

Prerequisites

  • Prepare a host for deploying the AutoMQ cluster. In a public cloud environment, it is recommended to choose a network-optimized Linux amd64 host with 2 cores and 16GB of memory. Ensure the system disk has at least 10GB of storage and the data volume has at least 10GB. For a test environment, these requirements can be relaxed.

  • Download the AutoMQ binary package for the Direct S3 version to install AutoMQ.

  • Create a custom-named object storage bucket, for example, automq-data.

  • Create an IAM user and generate an Access Key and Secret Key for it, then ensure the IAM user has full read and write permissions on the previously created object storage bucket (a verification sketch follows this list).
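To sanity-check the bucket and the IAM user's credentials before installing, a minimal sketch using the AWS CLI is shown below. The bucket name automq-data and region us-east-1 are placeholders; S3-compatible services usually accept the same commands with an additional --endpoint-url option.

export AWS_ACCESS_KEY_ID=<your-ak>
export AWS_SECRET_ACCESS_KEY=<your-sk>
# Create the bucket if it does not exist yet.
aws s3 mb s3://automq-data --region us-east-1
# Verify that the IAM user can write, read, and delete objects in the bucket.
echo ping > /tmp/automq-check.txt
aws s3 cp /tmp/automq-check.txt s3://automq-data/automq-check.txt
aws s3 cp s3://automq-data/automq-check.txt -
aws s3 rm s3://automq-data/automq-check.txt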

Install and Start the AutoMQ Cluster

  1. Modify the AutoMQ configuration

The instance configuration file is located at config/kraft/server.properties; the following settings need to be modified:


s3.data.buckets=0@s3://<your-bucket>?region=<your-region>&endpoint=<your-s3-endpoint>
s3.ops.buckets=0@s3://<your-bucket>?region=<your-region>&endpoint=<your-s3-endpoint>
s3.wal.path=0@s3://<your-bucket>?region=<your-region>&endpoint=<your-s3-endpoint>

Fill in the configuration above with the endpoint and region of the S3-compatible service and the bucket you created. The full parameter format is as follows:


s3.data.buckets=0@s3://$bucket?region=$region[&endpoint=$endpoint][&pathStyle=$enablePathStyle]

s3.ops.buckets=0@s3://$bucket?region=$region[&endpoint=$endpoint][&pathStyle=$enablePathStyle]

s3.wal.path=0@s3://$bucket?region=$region[&endpoint=$endpoint][&pathStyle=$enablePathStyle][&batchInterval=$batchInterval][&maxBytesInBatch=$maxBytesInBatch][&maxUnflushedBytes=$maxUnflushedBytes][&maxInflightUploadCount=$maxInflightUploadCount]

Common configurations for s3.data.buckets, s3.ops.buckets, and s3.wal.path:

  • bucket: The object storage bucket of the S3-compatible service

  • region: The region of the S3-compatible service

  • endpoint: The endpoint of the S3-compatible service

  • pathStyle: Path access method for S3-compatible services. Different S3-compatible services have different requirements for this. For example, when using MinIO, this should be set to true.
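For instance, when pointing AutoMQ at a local MinIO instance, a bucket line might look like the following sketch (the endpoint, port, and region here are placeholder assumptions):

s3.data.buckets=0@s3://automq-data?region=us-east-1&endpoint=http://127.0.0.1:9000&pathStyle=true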

Configurations exclusive to s3.wal.path:

  • batchInterval: The interval, in milliseconds, at which buffered data is uploaded.

  • maxBytesInBatch: The amount of buffered data, in bytes, that triggers an upload.

  • maxUnflushedBytes: The maximum amount of data not yet uploaded, in bytes. If the pending data exceeds this value, subsequent writes are rejected.

  • maxInflightUploadCount: The maximum number of concurrent uploads.

  • readAheadObjectCount: The number of objects to read ahead during recovery.
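Putting this together, a concrete configuration for the automq-data bucket from the prerequisites might look like the sketch below (the region, endpoint, and WAL tuning values are placeholder assumptions, not recommended settings):

s3.data.buckets=0@s3://automq-data?region=us-east-1&endpoint=https://s3.us-east-1.amazonaws.com
s3.ops.buckets=0@s3://automq-data?region=us-east-1&endpoint=https://s3.us-east-1.amazonaws.com
s3.wal.path=0@s3://automq-data?region=us-east-1&endpoint=https://s3.us-east-1.amazonaws.com&batchInterval=250&maxInflightUploadCount=50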

  2. Start AutoMQ with the configuration file.

export KAFKA_S3_ACCESS_KEY=<your-ak>
export KAFKA_S3_SECRET_KEY=<your-sk>
bin/kafka-server-start.sh config/kraft/server.properties

Populate the environment variables with the Access Key and Secret Key of the S3-compatible service.
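Once the broker is up, a quick smoke test (assuming the default listener on localhost:9092) is to list topics with the bundled Kafka CLI:

bin/kafka-topics.sh --bootstrap-server localhost:9092 --list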

Run the Demo Program

After starting the AutoMQ cluster, you can run the following demo programs to verify its functionality:

  1. Example: Produce & Consume Message

  2. Example: Simple Benchmark

  3. Example: Partition Reassignment in Seconds

  4. Example: Self-Balancing When Cluster Nodes Change

  5. Example: Continuous Data Self-Balancing
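As a minimal version of the first example, you can produce and consume a few messages with the bundled console tools (assuming the default listener on localhost:9092; the topic name test-topic is a placeholder):

# Create a test topic.
bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092
# Produce messages: type a few lines, then press Ctrl+C to exit.
bin/kafka-console-producer.sh --topic test-topic --bootstrap-server localhost:9092
# Consume the messages back from the beginning.
bin/kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server localhost:9092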

Stop and Uninstall the AutoMQ Cluster

After completing the tests, follow the steps below to stop and uninstall the AutoMQ cluster:

  1. Execute the following command to stop the process

bin/kafka-server-stop.sh

  2. Configure object storage lifecycle rules to automatically clear the data in the data and ops buckets (for example, automq-data), and then delete the buckets; a sketch follows this list.

  3. Delete the created compute instances and their corresponding system and data volumes.

  4. Delete the test user and its associated AccessKey and SecretKey.
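For step 2, a sketch of such a lifecycle rule using the AWS CLI is shown below (the bucket name is the placeholder from the prerequisites; S3-compatible services may expose an equivalent API). Once the bucket is empty, it can be removed.

# Expire all objects in the bucket after one day (placeholder value).
aws s3api put-bucket-lifecycle-configuration --bucket automq-data --lifecycle-configuration '{
  "Rules": [
    {"ID": "expire-automq-data", "Filter": {"Prefix": ""}, "Status": "Enabled", "Expiration": {"Days": 1}}
  ]
}'
# Delete the bucket once it is empty.
aws s3 rb s3://automq-data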
