diff --git a/.config/vale/styles/Vocab/Base/accept.txt b/.config/vale/styles/Vocab/Base/accept.txt index 0118f88d..0d55f696 100644 --- a/.config/vale/styles/Vocab/Base/accept.txt +++ b/.config/vale/styles/Vocab/Base/accept.txt @@ -1,17 +1,18 @@ +(?i)config +(?i)datacenter (?i)http (?i)https (?i)jwt (?i)keystore +(?i)middleware +(?i)otlp (?i)pkcs12 (?i)prometheus +(?i)proxying +(?i)realtime (?i)subnet(s)? (?i)todo (?i)truststore -(?i)proxying -(?i)realtime -(?i)datacenter -(?i)middleware -(?i)config (?i)yaml acks Aiven @@ -23,6 +24,7 @@ artifacthub Bidi bitnami BOMs +CA(s)? cleartext CLIs config @@ -30,6 +32,7 @@ confluentinc CQRS cspell datagram +declaratively docsearch etag fanout @@ -38,6 +41,8 @@ gitea grpc grpcurl GSSAPI +heroImage +heroText hostname http idempotency @@ -88,6 +93,3 @@ yaml Zilla zpm zpmw -heroImage -heroText -declaratively diff --git a/.config/vale/styles/Vocab/Docs/accept.txt b/.config/vale/styles/Vocab/Docs/accept.txt deleted file mode 100644 index 273926f9..00000000 --- a/.config/vale/styles/Vocab/Docs/accept.txt +++ /dev/null @@ -1,68 +0,0 @@ -acks -alpn -amqp -APIs -artifacthub -Bidi -bitnami -BOMs -cleartext -confluentinc -config -CQRS -cspell -datagram -docsearch -etag -fanout -fontawesome -gitea -grpc -grpcurl -GSSAPI -hostname -(?i)http -(?i)https -idempotency -inet -inkey -jceks -jq -(?i)jwt -kafka -kafkacat -kcat -Kerberos -(?i)keystore -keytool -lycheeverse -mqtt -mvnw -netcat -npm -OAUTHBEARER -oneOf -oneway -overprovisioned -(?i)pkcs12 -prometheus -proto -protobuf -Quickstart -quickstarts -repo(s)? 
-routeguide -sasl -simlinks -sse -tcp -tls -(?i)todo -trustcacerts -(?i)truststore -VSCode -vuepress -ws -yaml -zpm -zpmw diff --git a/.github/contributing/writing-guide.md b/.github/contributing/writing-guide.md index 7a3d18d4..520d2ff3 100644 --- a/.github/contributing/writing-guide.md +++ b/.github/contributing/writing-guide.md @@ -19,7 +19,7 @@ The docs will be organized according to the [Diataxis](https://diataxis.fr/) fra The Diataxis framework will dictate what content should be created and and keep the conent scoped to a purpose: Tutorials, How-Tos, Concepts, Reference. -The user navigation can be a collection of any content organized by primary feature. This way a user can find the solution to their problem and see different kinds of content all related to that solution. +The user navigation can be a collection of any content organized by primary feature. This way a user can find the solution to their problem and see different kinds of content all related to that solution. ### File structure @@ -62,6 +62,33 @@ Your users need reference material because they need truth and certainty - firm - **Get Started**: This is where users will start and learn what they need to be successful - **Reference**: This is an echo of the Diataxis definition and should remain as dry and generated as possible. The structure is set up for direct linking to individual components that readers may need more context on. Each component should have some sort of example to give context for it's usage +### Links and References + +Links to other md files should use the relative file path to the target document so that any file renderer can correctly resolve the links. + +- `[grpc-kafka](../../reference/config/bindings/binding-grpc-kafka.md)` + +When referencing specific attributes of the Zilla API, always use and highlight the syntactically correct terms found in the reference docs or config. Add any extra descriptive words before or after.
+ +- `[grpc-kafka](../../reference/config/bindings/binding-grpc-kafka.md) Binding` +- `[jwt](../../reference/config/guards/guard-jwt.md) Guard` +- `[produce capability](../../reference/config/bindings/binding-grpc-kafka.md#produce-capability)` + +[Reference-style links](https://www.markdownguide.org/basic-syntax/#reference-style-links) should only be used when needed to make the raw text document easier to read. This is useful in lists, tables, or complicated paragraphs. When used, the reference definition should be placed in a group with other references, as close to the usage as makes sense. Exact highlighted context should be used unless reuse of the same link is needed in the same section. + +Good examples improve readability without adding extra work for a plain text reader: + +- [vuejs-example] +- [github-example] + +A bad example lacks consistency: it doesn't improve readability, and it makes navigation and maintenance worse: + +- [electron-example] + +[vuejs-example]: https://github.com/vuejs/vue/blob/main/README.md?plain=1 +[github-example]: https://github.com/github/gitignore/blob/main/README.md?plain=1 +[electron-example]: https://github.com/electron/electron/blob/main/docs/faq.md?plain=1 + ## Writing & Grammar ### Style diff --git a/README.md b/README.md index cc4b1dd2..ac58e160 100644 --- a/README.md +++ b/README.md @@ -2,8 +2,8 @@
- - + +

diff --git a/src/.vuepress/public/logo-dark.png b/src/.vuepress/public/logo-dark.png index d391ed84..e2ed4e24 100644 Binary files a/src/.vuepress/public/logo-dark.png and b/src/.vuepress/public/logo-dark.png differ diff --git a/src/.vuepress/public/logo.png b/src/.vuepress/public/logo.png index c4602cac..67da1621 100644 Binary files a/src/.vuepress/public/logo.png and b/src/.vuepress/public/logo.png differ diff --git a/src/.vuepress/public/logo.svg b/src/.vuepress/public/logo.svg deleted file mode 100644 index cce3b98a..00000000 --- a/src/.vuepress/public/logo.svg +++ /dev/null @@ -1,41 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/src/.vuepress/public/zilla-rings.webp b/src/.vuepress/public/zilla-rings.webp new file mode 100644 index 00000000..a195b243 Binary files /dev/null and b/src/.vuepress/public/zilla-rings.webp differ diff --git a/src/.vuepress/public/zilla-rings@2x.png b/src/.vuepress/public/zilla-rings@2x.png deleted file mode 100644 index 9fe7dc7a..00000000 Binary files a/src/.vuepress/public/zilla-rings@2x.png and /dev/null differ diff --git a/src/.vuepress/sidebar/en.ts b/src/.vuepress/sidebar/en.ts index 7d11beaa..98432e4e 100644 --- a/src/.vuepress/sidebar/en.ts +++ b/src/.vuepress/sidebar/en.ts @@ -119,11 +119,11 @@ export const enSidebar = sidebar({ ], }, { - text: "Apache Kafka Proxying", + text: "Kafka Proxying", link: "concepts/kafka-proxies/rest-proxy.md", children: [ { - text: "REST-Kafka Proxy", + text: "REST Kafka Proxy", collapsible: true, link: "concepts/kafka-proxies/rest-proxy.md", children: [ @@ -152,7 +152,7 @@ export const enSidebar = sidebar({ ], }, { - text: "SSE-Kafka Proxy", + text: "SSE Kafka Proxy", collapsible: true, link: "concepts/kafka-proxies/sse-proxy.md", children: [ @@ -181,7 +181,7 @@ export const enSidebar = sidebar({ ], }, { - text: "gRPC-Kafka Proxy", + text: "gRPC Kafka Proxy", collapsible: true, link: "concepts/kafka-proxies/grpc-proxy.md", children: [ @@ -195,6 +195,25 @@ export const 
enSidebar = sidebar({ }, ], }, + { + text: "MQTT Kafka Proxy", + collapsible: true, + link: "concepts/kafka-proxies/mqtt-proxy.md", + children: [ + { + text: "Overview", + link: "concepts/kafka-proxies/mqtt-proxy.md", + }, + { + text: "Run the Taxi Demo", + link: "https://github.com/aklivity/zilla-demos/tree/main/taxi", + }, + { + text: "Create a Simple MQTT Broker", + link: "tutorials/mqtt/mqtt-intro.md", + } + ], + }, { text: "Amazon MSK Pubic Proxy", collapsible: true, diff --git a/src/.vuepress/styles/index.scss b/src/.vuepress/styles/index.scss index 5ca9bf72..35bf2d20 100644 --- a/src/.vuepress/styles/index.scss +++ b/src/.vuepress/styles/index.scss @@ -104,3 +104,25 @@ table.no-head { } } + + +.language-output { + border-radius: 0px !important; + background: transparent !important; + + pre { + border-inline-start: 0.2rem solid #333; + margin: 1rem 0; + padding: 0.25rem 0 0.25rem 1rem; + font-size: 1rem; + overflow-wrap: break-word; + code { + color: #666 !important; + + } + } + + .copy-code-button { + display: none; + } +} diff --git a/src/.vuepress/theme.ts b/src/.vuepress/theme.ts index 9c67b833..cdd6f6ed 100644 --- a/src/.vuepress/theme.ts +++ b/src/.vuepress/theme.ts @@ -5,8 +5,8 @@ import { hostnameSEO, docsRepo, docsBranch } from "./env.js"; export default hopeTheme({ hostname: hostnameSEO, - logo: "/logo-dark.png", - logoDark: "/logo.png", + logo: "/logo.png", + logoDark: "/logo-dark.png", iconAssets: "fontawesome-with-brands", favicon: "favicon.ico", diff --git a/src/README.md b/src/README.md index 26a6e7ce..e05a763d 100644 --- a/src/README.md +++ b/src/README.md @@ -2,10 +2,10 @@ home: true icon: home title: Home -heroImage: /zilla-rings@2x.png -heroImageDark: /zilla-rings@2x.png +heroImage: /zilla-rings.webp +heroImageDark: /zilla-rings.webp heroText: Introduction -tagline: Zilla is an API Gateway for event-driven architectures. It securely interfaces web apps, IoT clients, and microservices to Apache Kafka® via declaratively defined API endpoints. 
+tagline: Zilla is a multi-protocol, edge and service proxy. It abstracts Apache Kafka® for non-native clients, such as browsers and IoT devices, by exposing Kafka topics via user-defined REST, Server-Sent Events (SSE), MQTT, or gRPC API entry points. actions: - text: Quickstart link: /tutorials/quickstart/kafka-proxies.md diff --git a/src/concepts/kafka-proxies/grpc-proxy.md b/src/concepts/kafka-proxies/grpc-proxy.md index e5051428..e2940c07 100644 --- a/src/concepts/kafka-proxies/grpc-proxy.md +++ b/src/concepts/kafka-proxies/grpc-proxy.md @@ -2,7 +2,7 @@ description: This guide will walk through each unique gRPC message request and response design and how Zilla is configured to manage the connection for each. --- -# gRPC Proxy +# gRPC Kafka Proxy This guide will walk through each unique gRPC message request and response design and how Zilla is configured to manage the connection for each. diff --git a/src/concepts/kafka-proxies/mqtt-proxy.md b/src/concepts/kafka-proxies/mqtt-proxy.md new file mode 100644 index 00000000..8e010f64 --- /dev/null +++ b/src/concepts/kafka-proxies/mqtt-proxy.md @@ -0,0 +1,114 @@ +--- +description: This guide will walk through the way Zilla manages MQTT Pub/Sub connections and messages. +--- + +# MQTT Kafka Proxy + +This guide will walk through the way Zilla manages MQTT Pub/Sub connections and messages. + +An MQTT server acts as a broker between publishers and subscribers. This requires a complex protocol to manage the wide range of IoT devices and use cases. By proxying these messages on and off of Kafka with the [mqtt-kafka](../../reference/config/bindings/binding-mqtt-kafka.md) binding, IoT devices can transmit data to a wider range of tech stacks, adapting to more business needs. + +Unlike other proxies, Zilla manages the different MQTT topics instead of blindly passing them down to Kafka. This way the Kafka architecture can be optimized for MQTT Pub/Sub. 
MQTT client subscribers and publishers will communicate with Zilla the same as any broker. + +## Step 1: Declaring the broker + +A Zilla MQTT server can manage client sessions and broker all of the messages sent. + +```yaml +mqtt_server: + type: mqtt + kind: server + exit: mqtt_kafka_proxy + +mqtt_kafka_proxy: + type: mqtt-kafka + kind: proxy + options: + topics: + sessions: mqtt-sessions + messages: mqtt-messages + retained: mqtt-retained +``` + +### Protocol version + +The Zilla MQTT `server` supports the [MQTT v5.0 Specification]. + +::: info Feature Coming Soon +[MQTT v3.1.1 Specification] support is currently on the [Zilla roadmap]. Star and watch the [Zilla repo] for new releases! +::: + +[MQTT v5.0 Specification]:https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html +[MQTT v3.1.1 Specification]:http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html +[Zilla roadmap]:https://github.com/orgs/aklivity/projects/4 +[Zilla repo]:https://github.com/aklivity/zilla/releases + +### QoS + +The Zilla MQTT `server` supports the "At most once (QoS 0)" Quality of Service flag. + +::: info Feature Coming Soon +At least once (QoS 1) and Exactly once (QoS 2) delivery support is currently on the [Zilla roadmap]. Star and watch the [Zilla repo] for new releases! +::: + +## Step 2: Pub/Sub message reflect with Kafka + +Zilla manages MQTT pub/sub using three Kafka topics. The specific topic names can be configured using the [options.topics](../../reference/config/bindings/binding-mqtt-kafka.md#options-topics) property. + +```yaml +topics: + messages: mqtt-messages + retained: mqtt-retained + sessions: mqtt-sessions +``` + +### Messages on Kafka + +All MQTT messages brokered by Zilla are published on the `messages` Kafka topic. The MQTT message topic becomes the Kafka message key. + +### Retaining Messages + +MQTT messages with the `retain` flag set will have a copy published on the `retained` Kafka topic.
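+For example, assuming the default topic names above, a publish to the MQTT topic `sensors/one` with payload `21.5` and the `retain` flag set (hypothetical values for illustration) would produce roughly the following records on Kafka: + +```yaml:no-line-numbers +# Record on the messages topic +topic: mqtt-messages +key: sensors/one # the MQTT topic becomes the Kafka message key +value: "21.5" # the MQTT payload + +# Copy on the retained topic, because the retain flag was set +topic: mqtt-retained +key: sensors/one +value: "21.5" +```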
+ +### Session Management + +MQTT connect and disconnect messages are published on the `sessions` Kafka topic. + +## Step 3: Authorizing clients + +A client connection to the MQTT server can be guarded by the [jwt](../../reference/config/guards/guard-jwt.md) guard. + +```yaml{2,19,25} +guards: + jwt_mqtt_auth: + type: jwt + options: + issuer: https://auth.example.com + audience: https://api.example.com + keys: + - kty: RSA + n: qq...aDQ== + e: AQAB + alg: RS256 + kid: example +bindings: + mqtt_server: + type: mqtt + kind: server + options: + authorization: + jwt_mqtt_auth: + credentials: + connect: + username: Bearer {credentials} + routes: + - guarded: + jwt_mqtt_auth: + - mqtt:stream + exit: mqtt_kafka_proxy +``` + +## Try it out + +Go check out the [MQTT Kafka Reflect example](https://github.com/aklivity/zilla-examples/tree/main/mqtt.kafka.reflect) or the [JWT Auth example](https://github.com/aklivity/zilla-examples/tree/main/mqtt.kafka.reflect.jwt) for a full implementation of an MQTT proxy. diff --git a/src/concepts/kafka-proxies/rest-proxy.md b/src/concepts/kafka-proxies/rest-proxy.md index 86504f2a..62927aa2 100644 --- a/src/concepts/kafka-proxies/rest-proxy.md +++ b/src/concepts/kafka-proxies/rest-proxy.md @@ -1,4 +1,8 @@ -# REST Proxy +--- +description: Zilla lets you configure application-centric REST API endpoints that unlock Kafka event-driven architectures. +--- + +# REST Kafka Proxy @@ -104,7 +108,7 @@ bindings: ### CORS -Zilla supports Cross-Origin Resource Sharing (CORS) and allows you to specify fine-grained access control including specific request origins, methods and headers allowed, and specific response headers exposed.
Since it acts more like a guard and has no dependency on Apache Kafka configuration, you need to define it in the [http binding](../../reference/config/bindings/binding-http.md) +Zilla supports Cross-Origin Resource Sharing (CORS) and allows you to specify fine-grained access control including specific request origins, methods and headers allowed, and specific response headers exposed. Since it acts more like a guard and has no dependency on Apache Kafka configuration, you need to define it in the [http](../../reference/config/bindings/binding-http.md) binding. ### zilla.yaml @@ -132,7 +136,7 @@ http_server: ### Authorization -Since `Zilla` config is very much modular it has the concept of [`guard`](../../reference/config/overview.md#guards) where you define your `guard` configuration and reference that `guard` to authorize a specific endpoint. Currently, `Zilla` supports [`JSON Web Token (JWT)`](../../reference/config/guards/guard-jwt.md) mechanism to authorize the endpoint. +Since the `Zilla` config is modular, it has the concept of a [`guard`](../../reference/config/overview.md#guards), where you define a `guard` configuration and reference that `guard` to authorize a specific endpoint. Currently, `Zilla` supports JSON Web Token (JWT) authorization with the [`jwt`](../../reference/config/guards/guard-jwt.md) Guard. The information about keys and other details such as issuer and audience you can get from `JWT` providers for example in the case of Auth0 you can use the command below. @@ -202,4 +206,4 @@ bindings: ### More -For a more detailed explanation please check out Zilla Runtime Configuration Reference doc for [HTTP Binding](../../reference/config/bindings/binding-http.md), [HTTP-Kafka Binding](../../reference/config/bindings/binding-http-kafka.md), and [Guard(JWT)](../../reference/config/guards/guard-jwt.md).
+For a more detailed explanation, please check out the Zilla Runtime Configuration Reference docs for the [http](../../reference/config/bindings/binding-http.md) Binding, [http-kafka](../../reference/config/bindings/binding-http-kafka.md) Binding, and [jwt](../../reference/config/guards/guard-jwt.md) Guard. diff --git a/src/concepts/kafka-proxies/sse-proxy.md b/src/concepts/kafka-proxies/sse-proxy.md index 279cc702..796e0a45 100644 --- a/src/concepts/kafka-proxies/sse-proxy.md +++ b/src/concepts/kafka-proxies/sse-proxy.md @@ -1,4 +1,4 @@ -# SSE Proxy +# SSE Kafka Proxy There is an increasing rise in integrating the event stream into front ends where companies are starting to adopt Server-sent Events (SSE) standards. `SSE` naturally fits into the event-driven architecture and you will be able to take advantage of all the benefits it provides such as SDK-free and the ability to auto-reconnect in case of an unstable connection(Be resilient to faults). Zilla supports SSE protocol that you can easily configure the frontend SSE with Kafka topic. @@ -9,7 +9,7 @@ A brief explanation of replaceable values from the config examples below: ## Configure Endpoint -Configuring `Zilla` with SSE endpoint and Kafka binding is as simple as it is shown below: +Configuring `Zilla` with an SSE endpoint and Kafka binding is as simple as shown below: ::: code-tabs#yaml @@ -44,4 +44,4 @@ Similar to [REST Proxy](./rest-proxy.md) you can secure the `SSE` endpoints as w ### More -For the full capability of `SSE` configure you can check out Zilla Runtime Configuration Reference: [SSE Binding](../../reference/config/bindings/binding-sse.md), [SSE-Kafka Binding](../../reference/config/bindings/binding-sse-kafka.md). +For the full capabilities of the `SSE` configuration, check out the Zilla Runtime Configuration Reference: [sse](../../reference/config/bindings/binding-sse.md) Binding, [sse-kafka](../../reference/config/bindings/binding-sse-kafka.md) Binding.
diff --git a/src/how-tos/amazon-msk/development.md b/src/how-tos/amazon-msk/development.md index 7e6b0841..b3f9cd2a 100644 --- a/src/how-tos/amazon-msk/development.md +++ b/src/how-tos/amazon-msk/development.md @@ -259,7 +259,7 @@ systemctl status zilla-plus.service Verify that the `msk-proxy` service is active and logging output similar to that shown below. -```text:no-line-numbers +```output:no-line-numbers ● zilla-plus.service - Zilla Plus Loaded: loaded (/etc/systemd/system/zilla-plus.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-08-24 20:56:51 UTC; 1 day 19h ago @@ -432,9 +432,9 @@ bin/kafka-console-producer.sh --topic public-proxy-test --producer.config client A prompt will appear for you to type in the messages: -```text:no-line-numbers ->This is my first event ->This is my second event +```output:no-line-numbers +This is my first event +This is my second event ``` #### Receive messages @@ -447,7 +447,7 @@ bin/kafka-console-consumer.sh --topic public-proxy-test --from-beginning --consu You should see the `This is my first event` and `This is my second event` messages. -```text:no-line-numbers +```output:no-line-numbers This is my first event This is my second event ``` diff --git a/src/how-tos/amazon-msk/private-proxy.md b/src/how-tos/amazon-msk/private-proxy.md index c8c8b9bb..457476b6 100644 --- a/src/how-tos/amazon-msk/private-proxy.md +++ b/src/how-tos/amazon-msk/private-proxy.md @@ -177,7 +177,7 @@ systemctl status zilla-plus.service Verify that the `msk-proxy` service is active and logging output similar to that shown below. 
-```text:no-line-numbers +```output:no-line-numbers ● zilla-plus.service - Zilla Plus Loaded: loaded (/etc/systemd/system/zilla-plus.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-08-24 20:56:51 UTC; 1 day 19h ago @@ -385,9 +385,9 @@ bin/kafka-console-producer.sh --topic vpce-test --producer.config client.propert A prompt will appear for you to type in the messages: -```text:no-line-numbers ->This is my first event ->This is my second event +```output:no-line-numbers +This is my first event +This is my second event ``` #### Receive messages @@ -400,7 +400,7 @@ bin/kafka-console-consumer.sh --topic vpce-test --from-beginning --consumer.conf You should see the `This is my first event` and `This is my second event` messages. -```text:no-line-numbers +```output:no-line-numbers This is my first event This is my second event ``` diff --git a/src/how-tos/amazon-msk/production-mutual-trust.md b/src/how-tos/amazon-msk/production-mutual-trust.md index c22eeeae..a7591cd3 100644 --- a/src/how-tos/amazon-msk/production-mutual-trust.md +++ b/src/how-tos/amazon-msk/production-mutual-trust.md @@ -297,7 +297,7 @@ systemctl status zilla-plus.service Verify that the `msk-proxy` service is active and logging output similar to that shown below. 
-```text:no-line-numbers +```output:no-line-numbers ● zilla-plus.service - Zilla Plus Loaded: loaded (/etc/systemd/system/zilla-plus.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-08-24 20:56:51 UTC; 1 day 19h ago @@ -443,9 +443,9 @@ bin/kafka-console-producer.sh --topic public-proxy-test --producer.config client A prompt will appear for you to type in the messages: -```text:no-line-numbers ->This is my first event ->This is my second event +```output:no-line-numbers +This is my first event +This is my second event ``` #### Receive messages @@ -458,7 +458,7 @@ bin/kafka-console-consumer.sh --topic public-proxy-test --from-beginning --consu You should see the `This is my first event` and `This is my second event` messages. -```text:no-line-numbers +```output:no-line-numbers This is my first event This is my second event ``` diff --git a/src/how-tos/amazon-msk/production.md b/src/how-tos/amazon-msk/production.md index 746dae70..b0fcaa40 100644 --- a/src/how-tos/amazon-msk/production.md +++ b/src/how-tos/amazon-msk/production.md @@ -269,7 +269,7 @@ systemctl status zilla-plus.service Verify that the `msk-proxy` service is active and logging output similar to that shown below. -```text:no-line-numbers +```output:no-line-numbers ● zilla-plus.service - Zilla Plus Loaded: loaded (/etc/systemd/system/zilla-plus.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-08-24 20:56:51 UTC; 1 day 19h ago @@ -402,7 +402,7 @@ bin/kafka-console-producer.sh --topic public-proxy-test --producer.config client A prompt will appear for you to type in the messages: -```text:no-line-numbers +```output:no-line-numbers >This is my first event >This is my second event ``` @@ -417,7 +417,7 @@ bin/kafka-console-consumer.sh --topic public-proxy-test --from-beginning --consu You should see the `This is my first event` and `This is my second event` messages. 
-```text:no-line-numbers +```output:no-line-numbers This is my first event This is my second event ``` diff --git a/src/how-tos/amazon-msk/public-proxy.md b/src/how-tos/amazon-msk/public-proxy.md index ecb7274e..8815872f 100644 --- a/src/how-tos/amazon-msk/public-proxy.md +++ b/src/how-tos/amazon-msk/public-proxy.md @@ -13,8 +13,6 @@ description: Securely access your Amazon MSK cluster via the internet. ::: tip Estimated time to complete 20-30 minutes. ::: -## Overview - The [Zilla Plus (Public MSK Proxy)](https://aws.amazon.com/marketplace/pp/prodview-jshnzslazfm44) lets authorized Kafka clients connect, publish messages and subscribe to topics in your Amazon MSK cluster via the internet. By automating the configuration of an internet-facing network load balancer and auto-scaling group of stateless proxies to access your MSK cluster via the public internet, Kafka clients can connect, publish messages and subscribe to topics in your Amazon MSK cluster from outside AWS. @@ -23,11 +21,11 @@ You will need to choose a wildcard DNS pattern to use for public internet access Both `Development` and `Production` deployment options are available. -### Development +## Development Follow the [Development](./development.md) guide to setup connectivity to your MSK cluster from your local development environment via the internet using a locally trusted TLS server certificate for the example wildcard DNS pattern `*.aklivity.example.com`. -### Production +## Production Follow the [Production](./production.md) guide to setup connectivity to your MSK cluster from anywhere on the internet using a globally trusted TLS server certificate for a wildcard DNS pattern under your control. We use `*.example.aklivity.io` to illustrate the steps. 
diff --git a/src/how-tos/connecting-to-kafka/aiven.md b/src/how-tos/connecting-to-kafka/aiven.md index d5d95137..8fc882b2 100644 --- a/src/how-tos/connecting-to-kafka/aiven.md +++ b/src/how-tos/connecting-to-kafka/aiven.md @@ -23,7 +23,7 @@ A brief explanation of replaceable values from the config examples below: ## Aiven Parameters -The Aiven Kafka requires clients to connect via `TLS mutual authentication` and provides the following files `Client Key`, `Client Certificate`, and `CA Certificate` to achieve that. You can download them by going to `Aiven Console` -> `Kafka Cluster` -> `Overview Tab` as shown below. +Aiven Kafka requires clients to connect via `TLS mutual authentication` and provides the following files to achieve that: `Client Key`, `Client Certificate`, and `CA Certificate`. You can download them by going to `Aiven Console` -> `Kafka Cluster` -> `Overview Tab` as shown below. ![Connection Info](./aivien-connection-information.png) @@ -35,8 +35,8 @@ you should have the following files: The next step is to generate the truststore and keystore. -* `truststore.p12` - contains the trusted server certificates or certificate authorities -* `keystore.p12` - contains the signed client certificates +* `truststore.p12` - contains the trusted server certificates or certificate authorities +* `keystore.p12` - contains the signed client certificates You can use the scripts shown below to generate `truststore.p12` and `keystore.p12` files using certificates and keys downloaded from `Aiven Kafka Console`. Please replace all caps lock words. @@ -62,7 +62,7 @@ openssl pkcs12 -export -in service.cert -inkey service.key \ ## Configure Zilla -And the final step is to configure a `vault` with `truststore` and `keystore`, then reference the vault in the `tls_client` binding. +And the final step is to configure a `vault` with `truststore` and `keystore`, then reference the vault in the `tls_client` binding.
### zilla.yaml diff --git a/src/how-tos/connecting-to-kafka/amazon-msk.md b/src/how-tos/connecting-to-kafka/amazon-msk.md index fbd3ad0a..0e7d091c 100644 --- a/src/how-tos/connecting-to-kafka/amazon-msk.md +++ b/src/how-tos/connecting-to-kafka/amazon-msk.md @@ -42,7 +42,7 @@ aws acm-pca get-certificate --certificate-authority-arn CERTIFICATE_AUTHORITY_AR #### output -```text:no-line-numbers +```output:no-line-numbers ----BEGIN CERTIFICATE----- MIIEdzCCA1+gAwIBAgIQDLtFK9uDUb6VpObjhusyhTANBgkqhkiG9w0BAQsFADAS ...... @@ -58,7 +58,7 @@ Copy first certificate and save it as `client.cert` #### client.cert -```text:no-line-numbers +```output:no-line-numbers ----BEGIN CERTIFICATE----- MIIEdzCCA1+gAwIBAgIQDLtFK9uDUb6VpObjhusyhTANBgkqhkiG9w0BAQsFADAS ...... diff --git a/src/how-tos/connecting-to-kafka/apache-kafka.md b/src/how-tos/connecting-to-kafka/apache-kafka.md index fe7108aa..e0b02b19 100644 --- a/src/how-tos/connecting-to-kafka/apache-kafka.md +++ b/src/how-tos/connecting-to-kafka/apache-kafka.md @@ -60,7 +60,7 @@ As usual, you need to define the host and port and flush the data to the network ## Connect to Kafka over `TLS/SSL` -By default, Kafka communicates in `PLAINTEXT`, which means that all data is sent without encryption. However, Kafka running in production needs to expose only a secure connection that encrypts communication, and you should therefore configure Zilla to use TLS/SSL encrypted communication. +By default, Kafka communicates in `PLAINTEXT`, which means that all data is sent without encryption. However, Kafka running in production needs to expose only a secure connection that encrypts communication, and you should therefore configure Zilla to use TLS/SSL encrypted communication. If the `Kafka` cluster is secured by a `TLS` server certificate that is provided by a public certificate authority, then configure `Zilla` add a `TLS` client binding as shown below with the `trustcacerts` option to set to `true`. 
@@ -156,8 +156,8 @@ Next, you will explore how to connect to `Kafka` cluster over `TLS/SSL` using cl The following items need to be prepared: -* `truststore.p12` - contains the trusted server certificates or certificate authorities -* `keystore.p12` - contains the signed client certificates +* `truststore.p12` - contains the trusted server certificates or certificate authorities +* `keystore.p12` - contains the signed client certificates Kafka clients connecting to Kafka clusters that are configured for `TLS mutual authentication` require three files; a `Client Key`, a `Client Certificate`, and a `CA Certificate`. @@ -182,7 +182,7 @@ openssl pkcs12 -export -in service.cert -inkey service.key ::: -You also need to configure a `vault` with `truststore` and `keystore`, then reference the vault in the `tls_client` binding. +You also need to configure a `vault` with `truststore` and `keystore`, then reference the vault in the `tls_client` binding. ### zilla.yaml diff --git a/src/how-tos/connecting-to-kafka/confluent-cloud.md b/src/how-tos/connecting-to-kafka/confluent-cloud.md index 1db15435..36ed97b8 100644 --- a/src/how-tos/connecting-to-kafka/confluent-cloud.md +++ b/src/how-tos/connecting-to-kafka/confluent-cloud.md @@ -30,7 +30,7 @@ Before we proceed further let's use the below command to verify connectivity to ```bash:no-line-numbers kcat -b BOOTSTRAP_SERVER_HOSTNAME:BOOTSTRAP_SERVER_PORT \ -X security.protocol=sasl_ssl -X sasl.mechanisms=PLAIN \ --X sasl.username=API_KEY_KEY -X sasl.password=API_KEY_SECRET \ +-X sasl.username=API_KEY_KEY -X sasl.password=API_KEY_SECRET \ -L ``` diff --git a/src/how-tos/install.md b/src/how-tos/install.md index 737686de..006bc5ce 100644 --- a/src/how-tos/install.md +++ b/src/how-tos/install.md @@ -10,7 +10,7 @@ docker run ghcr.io/aklivity/zilla:latest start -v The output should display the zilla config and `started` to know zilla is ready for traffic. 
-```text:no-line-numbers +```output:no-line-numbers // default Zilla config { "name": "default" diff --git a/src/reference/amazon-msk/create-client-certificate-acm.md b/src/reference/amazon-msk/create-client-certificate-acm.md index f238e194..13664aaf 100644 --- a/src/reference/amazon-msk/create-client-certificate-acm.md +++ b/src/reference/amazon-msk/create-client-certificate-acm.md @@ -123,7 +123,7 @@ Next we need to create a certificate corresponding to the key, with metadata abo openssl req -new -key client-1.key.pem -out client-1.csr ``` -```text:no-line-numbers +```output:no-line-numbers You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. diff --git a/src/reference/amazon-msk/create-server-certificate-acm.md b/src/reference/amazon-msk/create-server-certificate-acm.md index 42cdbaeb..d0e525c9 100644 --- a/src/reference/amazon-msk/create-server-certificate-acm.md +++ b/src/reference/amazon-msk/create-server-certificate-acm.md @@ -35,7 +35,7 @@ Next we need to create a certificate corresponding to the key, with metadata abo openssl req -new -key wildcard.aklivity.example.com.key.pem -out wildcard.aklivity.example.com.csr ``` -```text:no-line-numbers +```output:no-line-numbers You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. 
diff --git a/src/reference/amazon-msk/create-server-certificate-letsencrypt.md b/src/reference/amazon-msk/create-server-certificate-letsencrypt.md index 43fb7d6f..60eccb0e 100644 --- a/src/reference/amazon-msk/create-server-certificate-letsencrypt.md +++ b/src/reference/amazon-msk/create-server-certificate-letsencrypt.md @@ -25,7 +25,7 @@ This will require you to respond to the challenge by adding a custom DNS record When `certbot` completes, the relevant files for the certificate chain and private key have been generated, called `fullchain.pem and` `privkey.pem`. -```text:no-line-numbers +```output:no-line-numbers - Congratulations! Your certificate and chain have been saved at: /etc/letsencrypt/live/example.aklivity.io/fullchain.pem Your key file has been saved at: diff --git a/src/reference/config/bindings/binding-amqp.md b/src/reference/config/bindings/binding-amqp.md index af6f82eb..e8e37df3 100644 --- a/src/reference/config/bindings/binding-amqp.md +++ b/src/reference/config/bindings/binding-amqp.md @@ -1,22 +1,20 @@ --- -shortTitle: amqp 🔜 -description: Zilla runtime amqp binding (incubator) +shortTitle: amqp +description: Zilla runtime amqp binding category: - Binding tag: - Server --- -# amqp Binding - -Zilla runtime amqp binding. - -::: info Feature Coming Soon - -This is currently in the incubator. Follow the [Zilla repo](https://github.com/aklivity/zilla/releases) to know when it will be released! +# amqp Binding +::: info Feature Coming Soon +This is currently on the [Zilla roadmap](https://github.com/orgs/aklivity/projects/4). Star and watch the [Zilla repo](https://github.com/aklivity/zilla/releases) for new releases! ::: +Zilla runtime amqp binding. + ```yaml {2} amqp_server: type: amqp @@ -124,7 +122,10 @@ Defaults to `"send_and_receive"`. Next binding when following this route. ```yaml -exit: echo_server +routes: + - when: + ... 
+ exit: echo_server ``` --- diff --git a/src/reference/config/bindings/binding-grpc-kafka.md b/src/reference/config/bindings/binding-grpc-kafka.md index 57ab90f1..cc30a900 100644 --- a/src/reference/config/bindings/binding-grpc-kafka.md +++ b/src/reference/config/bindings/binding-grpc-kafka.md @@ -300,7 +300,10 @@ Base64 encoded value for binary metadata header. Next binding when following this route. ```yaml -exit: kafka_cache_client +routes: + - when: + ... + exit: kafka_cache_client ``` ### routes[].with\* diff --git a/src/reference/config/bindings/binding-grpc.md b/src/reference/config/bindings/binding-grpc.md index 733c6c4b..5d26a91e 100644 --- a/src/reference/config/bindings/binding-grpc.md +++ b/src/reference/config/bindings/binding-grpc.md @@ -173,7 +173,10 @@ Base64 encoded value for binary metadata header. Routed exit binding when conditional route matches. ```yaml -exit: echo_server +routes: + - when: + ... + exit: echo_server ``` --- diff --git a/src/reference/config/bindings/binding-http-filesystem.md b/src/reference/config/bindings/binding-http-filesystem.md index 332f10c8..33d788b5 100644 --- a/src/reference/config/bindings/binding-http-filesystem.md +++ b/src/reference/config/bindings/binding-http-filesystem.md @@ -25,7 +25,7 @@ http_filesystem_proxy: ## Summary -Defines a binding with `http-filesystem` support, with `proxy` behavior. +Defines a binding with `http-filesystem` support, with `proxy` behavior. The `proxy` kind `http-filesystem` binding adapts `http` data streams into `filesystem` data streams by mapping the path from an inbound `http` `GET` request into a filesystem relative path. @@ -124,7 +124,10 @@ Path with optional embedded parameter names, such as `/{path}`. Next binding when following this route. ```yaml -exit: filesystem_server +routes: + - when: + ... 
+ exit: filesystem_server ``` ### routes[].with diff --git a/src/reference/config/bindings/binding-http-kafka.md b/src/reference/config/bindings/binding-http-kafka.md index 1b07f295..da99612f 100644 --- a/src/reference/config/bindings/binding-http-kafka.md +++ b/src/reference/config/bindings/binding-http-kafka.md @@ -264,7 +264,10 @@ Path with optional embedded parameter names, such as `/{topic}`. Default exit binding when no conditional routes are viable. ```yaml -exit: kafka_cache_client +routes: + - when: + ... + exit: kafka_cache_client ``` ### routes[].with diff --git a/src/reference/config/bindings/binding-http.md b/src/reference/config/bindings/binding-http.md index 82d1d690..305b077e 100644 --- a/src/reference/config/bindings/binding-http.md +++ b/src/reference/config/bindings/binding-http.md @@ -314,7 +314,10 @@ Header name value pairs (all match). Next binding when following this route. ```yaml -exit: echo_server +routes: + - when: + ... + exit: echo_server ``` --- diff --git a/src/reference/config/bindings/binding-kafka-grpc.md b/src/reference/config/bindings/binding-kafka-grpc.md index 9826bea1..88495728 100644 --- a/src/reference/config/bindings/binding-kafka-grpc.md +++ b/src/reference/config/bindings/binding-kafka-grpc.md @@ -238,7 +238,10 @@ Pattern matching the fully qualified name of a `grpc` service method, in the for Default exit binding when no conditional routes are viable. ```yaml -exit: kafka_cache_client +routes: + - when: + ... 
+  exit: kafka_cache_client ``` ### routes[].with diff --git a/src/reference/config/bindings/binding-mqtt-kafka.md b/src/reference/config/bindings/binding-mqtt-kafka.md index a3a044b6..08ccc3bc 100644 --- a/src/reference/config/bindings/binding-mqtt-kafka.md +++ b/src/reference/config/bindings/binding-mqtt-kafka.md @@ -1,6 +1,6 @@ --- -shortTitle: mqtt-kafka 🔜 -description: Zilla runtime mqtt-kafka binding (incubator) +shortTitle: mqtt-kafka +description: Zilla runtime mqtt-kafka binding category: - Binding tag: @@ -11,28 +11,34 @@ tag: Zilla runtime mqtt-kafka binding. -::: info Feature Coming Soon - -This is currently in the incubator. Follow the [Zilla repo](https://github.com/aklivity/zilla/releases) to know when it will be released! - -::: - ```yaml {2} -mqtt_server: +mqtt_kafka_proxy: type: mqtt-kafka kind: proxy + options: + server: mqtt-1.example.com:1883 + topics: + sessions: mqtt-sessions + messages: mqtt-messages + retained: mqtt-retained exit: kafka_cache_client ``` ## Summary -Defines a binding with `mqtt-kafka` support, with `proxy` behavior. +Defines a binding with `mqtt-kafka` support, with `proxy` behavior. ## Configuration :::: note Properties - [kind\*](#kind) +- [options](#options) + - [options.server](#options-server) + - [options.topics](#options-topics) + - [topics.sessions\*](#topics-sessions) + - [topics.messages\*](#topics-messages) + - [topics.retained\*](#topics-retained) - [exit](#exit) ::: right @@ -47,6 +53,55 @@ Defines a binding with `mqtt-kafka` support, with `proxy` behavior. Behave as a `mqtt-kafka` `proxy`. +### options + +> `object` + +`mqtt-kafka`-specific options for configuring the `kafka` topics the proxy uses to route MQTT messages and session state, and for defining the server reference of the MQTT server in Zilla. + +#### options.server + +> `string` + +The server reference used by the MQTT server in Zilla. 
This config enables scaling of the MQTT server when running multiple Zilla instances, as it uses [server redirection](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901255). + +```yaml +options: + server: mqtt-1.example.com:1883 +``` + +#### options.topics + +> `object` + +The `kafka` topics Zilla needs when routing MQTT messages. + +```yaml +options: + topics: + sessions: mqtt-sessions + messages: mqtt-messages + retained: mqtt-retained +``` + +##### topics.sessions\* + +> `string` + +Compacted Kafka topic for storing MQTT session states. + +##### topics.messages\* + +> `string` + +Kafka topic used for routing MQTT messages. + +##### topics.retained\* + +> `string` + +Compacted Kafka topic for storing MQTT retained messages. + ### exit > `string` diff --git a/src/reference/config/bindings/binding-mqtt.md b/src/reference/config/bindings/binding-mqtt.md index 493d7e31..a9dd46dc 100644 --- a/src/reference/config/bindings/binding-mqtt.md +++ b/src/reference/config/bindings/binding-mqtt.md @@ -1,6 +1,6 @@ --- -shortTitle: mqtt 🔜 -description: Zilla runtime mqtt binding (incubator) +shortTitle: mqtt +description: Zilla runtime mqtt binding category: - Binding tag: @@ -11,28 +11,27 @@ tag: Zilla runtime mqtt binding. -::: info Feature Coming Soon - -This is currently in the incubator. Follow the [Zilla repo](https://github.com/aklivity/zilla/releases) to know when it will be released! - -::: - ```yaml {2} mqtt_server: type: mqtt kind: server routes: - when: - - topic: messages - capabilities: publish_and_subscribe - exit: mqtt_kafka_proxy + - session: + - client-id: "*" + - publish: + - topic: command/one + - topic: command/two + - subscribe: + - topic: reply + exit: mqtt_kafka_proxy ``` ## Summary -Defines a binding with `mqtt 5.0` protocol support, with `server` behavior. +Defines a binding with [MQTT v5.0](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html) protocol support, with `server` behavior. 
-The `server` kind `mqtt` binding decodes `mqtt 5.0` protocol on the inbound network stream, producing higher level application streams for each `publish` or `subscribe` `topic`. The `session` state is also described by a higher level application stream. +The `server` kind `mqtt` binding decodes the MQTT protocol on the inbound network stream, producing higher level application streams for each `publish` or `subscribe` `topic`. The `session` state is also described by a higher level application stream. Conditional routes based on the `topic` `name` are used to route these application streams to an `exit` binding. @@ -45,8 +44,12 @@ Conditional routes based on the `topic` `name` are used to route these applicati - [routes](#routes) - [routes\[\].guarded](#routes-guarded) - [routes\[\].when](#routes-when) - - [when\[\].topic\*](#when-topic) - - [when\[\].capabilities](#when-capabilities) + - [when\[\].session](#when-session) + - [session.client-id](#session-client-id) + - [when\[\].publish](#when-publish) + - [publish.topic](#publish-topic) + - [when\[\].subscribe](#when-subscribe) + - [subscribe.topic](#subscribe-topic) - [routes\[\].exit\*](#routes-exit) ::: right @@ -99,22 +102,55 @@ List of conditions (any match) to match this route. ```yaml routes: - when: - - topic: echo - capabilities: publish_and_subscribe + # any required + - session: + - client-id: "*" + - publish: + - topic: command/one + - subscribe: + - topic: reply + - when: + # all required + - session: + - client-id: "*" + publish: + - topic: command/two + subscribe: + - topic: reply ``` -#### when[].topic\* +#### when[].session + +> `array` of `object` + +Array of MQTT session properties. + +##### session.client-id > `string` -Topic name. +An MQTT client identifier, allowing the use of wildcards. -#### when[].capabilities +#### when[].publish -> `enum` [ "session", "publish_only", "subscribe_only", "publish_and_subscribe" ] + +> `array` of `object` + +Array of MQTT topic names for publish capability. 
+ +##### publish.topic + +> `string` + +#### when[].subscribe + +> `array` of `object` + +Array of MQTT topic names for subscribe capability. + +##### subscribe.topic + +> `string` -Session, publish, subscribe, or both publish and subscribe.\ -Defaults to `"publish_and_subscribe"`. ### routes[].exit\* @@ -123,7 +159,10 @@ Defaults to `"publish_and_subscribe"`. Next binding when following this route. ```yaml -exit: mqtt_kafka_proxy +routes: + - when: + ... + exit: mqtt_kafka_proxy ``` --- diff --git a/src/reference/config/bindings/binding-proxy.md b/src/reference/config/bindings/binding-proxy.md index c5b35526..c20cd689 100644 --- a/src/reference/config/bindings/binding-proxy.md +++ b/src/reference/config/bindings/binding-proxy.md @@ -166,7 +166,10 @@ Port number. Next binding when following this route. ```yaml -exit: echo_server +routes: + - when: + ... + exit: echo_server ``` --- diff --git a/src/reference/config/bindings/binding-sse-kafka.md b/src/reference/config/bindings/binding-sse-kafka.md index ddb31b51..366e2ef4 100644 --- a/src/reference/config/bindings/binding-sse-kafka.md +++ b/src/reference/config/bindings/binding-sse-kafka.md @@ -27,7 +27,7 @@ sse_kafka_proxy: ## Summary -Defines a binding with `sse-kafka` support, with `proxy` behavior. +Defines a binding with `sse-kafka` support, with `proxy` behavior. The `proxy` kind `sse-kafka` binding adapts `sse` data streams into `kafka` data streams, so that `kafka` messages can be delivered to `sse` clients. @@ -136,7 +136,10 @@ Path with optional embedded parameter names, such as `/{topic}`. Next binding when following this route. ```yaml -exit: kafka_cache_client +routes: + - when: + ... + exit: kafka_cache_client ``` ### routes[].with diff --git a/src/reference/config/bindings/binding-sse.md b/src/reference/config/bindings/binding-sse.md index 9e4823d4..351fa810 100644 --- a/src/reference/config/bindings/binding-sse.md +++ b/src/reference/config/bindings/binding-sse.md @@ -134,7 +134,10 @@ Path pattern. 
Next binding when following this route. ```yaml -exit: sse_kafka_proxy +routes: + - when: + ... + exit: sse_kafka_proxy ``` --- diff --git a/src/reference/config/bindings/binding-tcp.md b/src/reference/config/bindings/binding-tcp.md index f6bfa3de..24c771af 100644 --- a/src/reference/config/bindings/binding-tcp.md +++ b/src/reference/config/bindings/binding-tcp.md @@ -136,6 +136,13 @@ Port number(s), including port number ranges. Next binding when following this route, for kind `server` only. +```yaml +routes: + - when: + ... + exit: echo_server +``` + --- ::: right diff --git a/src/reference/config/bindings/binding-tls.md b/src/reference/config/bindings/binding-tls.md index 22d91985..ffe5af45 100644 --- a/src/reference/config/bindings/binding-tls.md +++ b/src/reference/config/bindings/binding-tls.md @@ -228,7 +228,10 @@ Application protocol. Next binding when following this route. ```yaml -exit: echo_server +routes: + - when: + ... + exit: echo_server ``` --- diff --git a/src/reference/config/bindings/binding-ws.md b/src/reference/config/bindings/binding-ws.md index 98b7d26f..2cfb9887 100644 --- a/src/reference/config/bindings/binding-ws.md +++ b/src/reference/config/bindings/binding-ws.md @@ -185,7 +185,10 @@ Path pattern. Next binding when following this route. ```yaml -exit: echo_server +routes: + - when: + ... + exit: echo_server ``` --- diff --git a/src/reference/config/overview.md b/src/reference/config/overview.md index 2370567d..9340dd82 100644 --- a/src/reference/config/overview.md +++ b/src/reference/config/overview.md @@ -66,7 +66,7 @@ See each of the specific `binding` types linked below for more detailed examples Behavioral type supporting either encoding and decoding for a specific protocol or translation between protocols. 
#### routes.exit - + > `string` diff --git a/src/reference/config/zilla-cli.md b/src/reference/config/zilla-cli.md index 6125fd20..a4d9f815 100644 --- a/src/reference/config/zilla-cli.md +++ b/src/reference/config/zilla-cli.md @@ -15,7 +15,7 @@ The Zilla Runtime command line interface uses the [Zilla Runtime Configuration]( - [zilla metrics](#zilla-metrics) - [--namespace ``](#namespace-namespace) - [zilla start](#zilla-start) - - [--verbose](#verbose) + - [-v --verbose](#v-verbose) - [--workers](#workers) - [zilla stop](#zilla-stop) - [zilla tune](#zilla-tune) @@ -62,15 +62,17 @@ Examples: ./zilla metrics echo_server ``` -> namespace binding metric value -> example echo_server stream.opens.received 24 -> example echo_server stream.opens.sent 24 -> example echo_server stream.closes.received 24 -> example echo_server stream.closes.sent 24 -> example echo_server stream.data.received 13 -> example echo_server stream.data.sent 13 -> example echo_server stream.errors.received 0 -> example echo_server stream.errors.sent 0 +```output:no-line-numbers +namespace binding metric value +example echo_server stream.opens.received 24 +example echo_server stream.opens.sent 24 +example echo_server stream.closes.received 24 +example echo_server stream.closes.sent 24 +example echo_server stream.data.received 13 +example echo_server stream.data.sent 13 +example echo_server stream.errors.received 0 +example echo_server stream.errors.sent 0 +``` ### zilla start @@ -80,45 +82,40 @@ The `zilla start` command resolves the [Zilla Runtime Configuration](./) in `zil zilla start ``` -#### --verbose +> started + +#### -v --verbose Show verbose output +```bash:no-line-numbers +zilla start -v +``` + +```output:no-line-numbers +name: example +bindings: + tcp: + type: tcp + kind: server + options: + host: 0.0.0.0 + port: + - 12345 + - 12346 + exit: echo + echo: + type: echo + kind: server +started +``` + #### --workers > Defaults to # CPU cores available Worker count -Examples: - 
-```bash:no-line-numbers -./zilla start --verbose -{ - "name": "example", - "bindings": - { - "tcp": - { - "type" : "tcp", - "kind": "server", - "options": - { - "host": "0.0.0.0", - "port": [ 12345, 12346 ] - }, - "exit": "echo" - }, - "echo": - { - "type" : "echo", - "kind": "server" - } - } -} -started -``` - ### zilla stop The `zilla stop` command signals the runtime engine to stop. @@ -127,26 +124,12 @@ The `zilla stop` command signals the runtime engine to stop. zilla stop ``` -Examples: - -```bash:no-line-numbers -./zilla start -started -... -``` - -```bash:no-line-numbers -./zilla stop -... -stopped -``` +> stopped ### zilla tune -::: info Feature Coming Soon - -This is currently in the incubator. Follow the [Zilla repo](https://github.com/aklivity/zilla/releases) to know when it will be released! - +::: info Feature Coming Soon +This is currently on the [Zilla roadmap](https://github.com/orgs/aklivity/projects/4). Star and watch the [Zilla repo](https://github.com/aklivity/zilla/releases) for new releases! ::: The `zilla tune` command tunes the mapping from runtime engine workers to bindings. @@ -155,30 +138,28 @@ The `zilla tune` command tunes the mapping from runtime engine workers to bindin zilla tune [NAME=VALUE] ``` -Examples: - -```bash:no-line-numbers -./zilla start --workers 4 -``` - -> started - ```bash:no-line-numbers ./zilla tune ``` -> `xxxx example.tcp`\ -> `xxxx example.echo` +```output:no-line-numbers +xxxx example.tcp +xxxx example.echo +``` ```bash:no-line-numbers ./zilla tune example.echo=2 ``` -> `..x. example.echo` +```output:no-line-numbers +..x. example.echo +``` ```bash:no-line-numbers ./zilla tune ``` -> `xxxx example.tcp`\ -> `.x.. example.echo` +```output:no-line-numbers +xxxx example.tcp +.x.. 
example.echo +``` diff --git a/src/reference/manager/zpm-cli.md b/src/reference/manager/zpm-cli.md index 7ccea42f..a4fd6647 100644 --- a/src/reference/manager/zpm-cli.md +++ b/src/reference/manager/zpm-cli.md @@ -138,7 +138,7 @@ Remote Maven repository URL #### --version `` -Require `zpm` wrapper to use `` +Require `zpm` wrapper to use `` Example: diff --git a/src/reference/troubleshooting/amazon-msk.md b/src/reference/troubleshooting/amazon-msk.md index a9c261a0..e64a7afe 100644 --- a/src/reference/troubleshooting/amazon-msk.md +++ b/src/reference/troubleshooting/amazon-msk.md @@ -66,7 +66,7 @@ nc -v 9094 The `nc` output should be as shown below: -```text:no-line-numbers +```output:no-line-numbers Connection to port 9094 [tcp/*] succeeded! ``` @@ -95,7 +95,7 @@ openssl s_client \ The `openssl` output should be as shown below: -```text:no-line-numbers +```output:no-line-numbers ... Verify return code: 0 (ok) --- diff --git a/src/tutorials/grpc/grpc-intro.md b/src/tutorials/grpc/grpc-intro.md index 9bd4c842..5a1ecdcc 100644 --- a/src/tutorials/grpc/grpc-intro.md +++ b/src/tutorials/grpc/grpc-intro.md @@ -70,13 +70,13 @@ bindings: kafka_cache_client: type: kafka kind: cache_client - options: - bootstrap: - - echo-messages exit: kafka_cache_server kafka_cache_server: type: kafka kind: cache_server + options: + bootstrap: + - echo-messages exit: kafka_client # Connect to local Kafka @@ -148,12 +148,6 @@ package example; service EchoService { rpc EchoSimple(EchoMessage) returns (EchoMessage); - - rpc EchoClientStream(stream EchoMessage) returns (EchoMessage); - - rpc EchoServerStream( EchoMessage) returns (stream EchoMessage); - - rpc EchoBidiStream(stream EchoMessage) returns (stream EchoMessage); } message EchoMessage @@ -174,7 +168,7 @@ docker-compose up -d ### Use [grpcurl](https://github.com/fullstorydev/grpcurl) to send a greeting ```bash:no-line-numbers -grpcurl -plaintext -proto echo.proto -d '{"message":"Hello World"}' localhost:8080 
example.EchoService.EchoSimple +grpcurl -plaintext -proto echo.proto -d '{"message":"Hello World"}' localhost:8080 example.EchoService.EchoSimple ``` ::: note Wait for the services to start diff --git a/src/tutorials/mqtt/docker-compose.yaml b/src/tutorials/mqtt/docker-compose.yaml new file mode 100644 index 00000000..2aeb4f27 --- /dev/null +++ b/src/tutorials/mqtt/docker-compose.yaml @@ -0,0 +1,49 @@ +version: '3' +services: + kafka: + image: docker.io/bitnami/kafka:latest + container_name: kafka + ports: + - 9092:9092 + - 29092:9092 + environment: + ALLOW_PLAINTEXT_LISTENER: "yes" + KAFKA_CFG_NODE_ID: "1" + KAFKA_CFG_BROKER_ID: "1" + KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "1@127.0.0.1:9093" + KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CLIENT:PLAINTEXT,INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT" + KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER" + KAFKA_CFG_LOG_DIRS: "/tmp/logs" + KAFKA_CFG_PROCESS_ROLES: "broker,controller" + KAFKA_CFG_LISTENERS: "CLIENT://:9092,INTERNAL://:29092,CONTROLLER://:9093" + KAFKA_CFG_INTER_BROKER_LISTENER_NAME: "INTERNAL" + KAFKA_CFG_ADVERTISED_LISTENERS: "CLIENT://localhost:9092,INTERNAL://kafka:29092" + KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true" + + kafka-init: + image: docker.io/bitnami/kafka:3.2 + command: + - "/bin/bash" + - "-c" + - | + /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --create --if-not-exists --topic mqtt-messages + /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --create --if-not-exists --topic mqtt-sessions --config cleanup.policy=compact + /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --create --if-not-exists --topic mqtt-retained --config cleanup.policy=compact + depends_on: + - kafka + init: true + + zilla: + image: ghcr.io/aklivity/zilla:latest + depends_on: + - kafka + ports: + - 1883:1883 + volumes: + - ./zilla.yaml:/etc/zilla/zilla.yaml + command: start -v -e + +networks: + default: + name: zilla-network + driver: bridge diff --git 
a/src/tutorials/mqtt/mqtt-intro.md b/src/tutorials/mqtt/mqtt-intro.md new file mode 100644 index 00000000..e8fde93e --- /dev/null +++ b/src/tutorials/mqtt/mqtt-intro.md @@ -0,0 +1,181 @@ +--- +description: Running these Zilla samples will introduce some MQTT features. +--- + +# MQTT Intro + +Get started with Zilla by deploying our Docker Compose stack. Before proceeding, you should have [Docker Compose](https://docs.docker.com/compose/gettingstarted/) installed. + +## MQTT Broker onto Kafka event streams + +Running this Zilla sample will create a simple MQTT broker backed by Kafka. All of the MQTT messages, sessions, and retained messages will be stored on Kafka topics. + +### Setup + +Create these files, `zilla.yaml` and `docker-compose.yaml`, in the same directory. + +::: code-tabs#yaml + +@tab zilla.yaml + +```yaml {11,26-28,41-42} +name: MQTT-intro +bindings: + +# Gateway ingress config + tcp_server: + type: tcp + kind: server + options: + host: 0.0.0.0 + port: + - 1883 + exit: mqtt_server + +# MQTT Broker With an exit to Kafka + mqtt_server: + type: mqtt + kind: server + exit: mqtt_kafka_proxy + +# Proxy MQTT messages to Kafka + mqtt_kafka_proxy: + type: mqtt-kafka + kind: proxy + options: + topics: + sessions: mqtt-sessions + messages: mqtt-messages + retained: mqtt-retained + exit: kafka_cache_client + +# Kafka caching layer + kafka_cache_client: + type: kafka + kind: cache_client + exit: kafka_cache_server + kafka_cache_server: + type: kafka + kind: cache_server + options: + bootstrap: + - mqtt-sessions + - mqtt-retained + exit: kafka_client + +# Connect to local Kafka + kafka_client: + type: kafka + kind: client + exit: kafka_tcp_client + kafka_tcp_client: + type: tcp + kind: client + options: + host: kafka + port: 29092 + routes: + - when: + - cidr: 0.0.0.0/0 +``` + +@tab docker-compose.yaml + +```yaml {9,40-42} +version: '3' +services: + + zilla: + image: ghcr.io/aklivity/zilla:latest + depends_on: + - kafka + ports: + - 1883:1883 + volumes: + - ./zilla.yaml:/etc/zilla/zilla.yaml + command: 
start -v -e + + kafka: + image: docker.io/bitnami/kafka:latest + container_name: kafka + ports: + - 9092:9092 + - 29092:9092 + environment: + ALLOW_PLAINTEXT_LISTENER: "yes" + KAFKA_CFG_NODE_ID: "1" + KAFKA_CFG_BROKER_ID: "1" + KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "1@127.0.0.1:9093" + KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CLIENT:PLAINTEXT,INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT" + KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER" + KAFKA_CFG_LOG_DIRS: "/tmp/logs" + KAFKA_CFG_PROCESS_ROLES: "broker,controller" + KAFKA_CFG_LISTENERS: "CLIENT://:9092,INTERNAL://:29092,CONTROLLER://:9093" + KAFKA_CFG_INTER_BROKER_LISTENER_NAME: "INTERNAL" + KAFKA_CFG_ADVERTISED_LISTENERS: "CLIENT://localhost:9092,INTERNAL://kafka:29092" + KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true" + + kafka-init: + image: docker.io/bitnami/kafka:3.2 + command: + - "/bin/bash" + - "-c" + - | + /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --create --if-not-exists --topic mqtt-messages + /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --create --if-not-exists --topic mqtt-sessions --config cleanup.policy=compact + /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --create --if-not-exists --topic mqtt-retained --config cleanup.policy=compact + depends_on: + - kafka + init: true + + +networks: + default: + name: zilla-network + driver: bridge +``` + +::: + +### Run Zilla and Kafka + +```bash:no-line-numbers +docker-compose up -d +``` + +### Use [mosquitto_pub](https://mosquitto.org/download/) to send a greeting + +Subscribe to the `zilla` topic. + +```bash:no-line-numbers +mosquitto_sub -V 'mqttv5' --topic 'zilla' --debug +``` + +In a separate session, publish a message on the `zilla` topic. + +```bash:no-line-numbers +mosquitto_pub -V 'mqttv5' --topic 'zilla' --message 'Hello, world' --debug --insecure +``` + +Your subscribed session should receive the message. + +::: note Wait for the services to start +If you are stuck on `Client null 
sending CONNECT`, the likely cause is Zilla and Kafka are still starting up. +::: + +### Remove the running containers + +```bash:no-line-numbers +docker-compose down +``` + +::: tip See more of what Zilla can do +Go deeper into this concept with the [mqtt.kafka.reflect](https://github.com/aklivity/zilla-examples/tree/main/mqtt.kafka.reflect) example. +::: + +## Going Deeper + +Try out more MQTT examples: + +- [mqtt.kafka.reflect](https://github.com/aklivity/zilla-examples/tree/main/mqtt.kafka.reflect) +- [mqtt.kafka.reflect.jwt](https://github.com/aklivity/zilla-examples/tree/main/mqtt.kafka.reflect.jwt) diff --git a/src/tutorials/mqtt/zilla.yaml b/src/tutorials/mqtt/zilla.yaml new file mode 100644 index 00000000..0a5c97fa --- /dev/null +++ b/src/tutorials/mqtt/zilla.yaml @@ -0,0 +1,58 @@ +name: MQTT-intro +bindings: + +# Gateway ingress config + tcp_server: + type: tcp + kind: server + options: + host: 0.0.0.0 + port: + - 1883 + exit: mqtt_server + +# MQTT Broker With an exit to Kafka + mqtt_server: + type: mqtt + kind: server + exit: mqtt_kafka_proxy + +# Proxy MQTT messages to Kafka + mqtt_kafka_proxy: + type: mqtt-kafka + kind: proxy + options: + topics: + sessions: mqtt-sessions + messages: mqtt-messages + retained: mqtt-retained + exit: kafka_cache_client + +# Kafka caching layer + kafka_cache_client: + type: kafka + kind: cache_client + exit: kafka_cache_server + kafka_cache_server: + type: kafka + kind: cache_server + options: + bootstrap: + - mqtt-sessions + - mqtt-retained + exit: kafka_client + +# Connect to local Kafka + kafka_client: + type: kafka + kind: client + exit: kafka_tcp_client + kafka_tcp_client: + type: tcp + kind: client + options: + host: kafka + port: 29092 + routes: + - when: + - cidr: 0.0.0.0/0 diff --git a/src/tutorials/quickstart/kafka-proxies.md b/src/tutorials/quickstart/kafka-proxies.md index 98b21069..2848963b 100644 --- a/src/tutorials/quickstart/kafka-proxies.md +++ b/src/tutorials/quickstart/kafka-proxies.md @@ -14,24 +14,25 
@@ Get started with Zilla by deploying our Docker Compose stack. Before proceeding, ## Postman Collections -This quickstart is designed to use Aklivity’s public [Postman Workspace](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/overview) to give you fast and easy way to try out Zilla’s multi-protocol Kafka proxying capabilities. +This quickstart uses Aklivity's public [Postman Workspace](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/overview) to give you a fast and easy way to try out Zilla's multi-protocol Kafka proxying capabilities. ::: note App or Desktop Agent -Once the collections are forked you can run them against the local stack if you have either the [Postman App](https://www.postman.com/downloads/) or [Postman Desktop Agent](https://www.postman.com/downloads/postman-agent/) installed. +Once the collections are forked, you can run them against the local stack if you have either the [Postman App](https://www.postman.com/downloads/) or [Postman Desktop Agent](https://www.postman.com/downloads/postman-agent/) installed. ::: -Fork each of these collections into your own workspace. +Fork each of these collections into your personal/team workspace. 
-- [Zilla - REST Kafka proxy](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/collection/28401168-6941d1fa-698c-4da1-9789-2f806acf9fbb?action=share&creator=28401168) -- [Zilla - SSE Kafka proxy](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/collection/28401168-09c165b3-6e68-45c2-aedb-494f130bc354?action=share&creator=28401168) -- [Zilla - gRPC Kafka proxy](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/collection/64a85751808733dd197c599f?action=share&creator=28401168) +- [Zilla - REST Kafka proxy](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/collection/28401168-6941d1fa-698c-4da1-9789-2f806acf9fbb) +- [Zilla - SSE Kafka proxy](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/collection/28401168-09c165b3-6e68-45c2-aedb-494f130bc354) +- [Zilla - gRPC Kafka proxy](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/collection/64a85751808733dd197c599f) +- [Zilla - MQTT Kafka proxy](https://www.postman.com/aklivity-zilla/workspace/aklivity-zilla-quickstart/collection/651072f021cfcbd1388fe5e9) ![Collection header > View more actions > Create a Fork](./create-fork.png) ## Zilla Docker Compose Stack -Download the [zilla.quickstart](https://github.com/aklivity/zilla-examples/tree/main/zilla.quickstart) folder from the zilla-examples repo. The docker compose file will create everything you need for this quickstart. The `setup.sh` script will start and restart the backend. The `teardown.sh` script stops and destroys all of the containers. +Download the Zilla [quickstart](https://github.com/aklivity/zilla-examples/tree/main/quickstart) folder from the zilla-examples repo. The docker compose file will create everything you need for this quickstart. The `setup.sh` script will start and restart the backend. The `teardown.sh` script stops and destroys all containers. 
The key components this script will setup: @@ -39,18 +40,19 @@ The key components this script will setup: - Kafka instance and topics - [Kafka UI](http://localhost/ui/clusters/local/all-topics) for browsing topics & messages - gRPC Route Guide server +- MQTT message simulator ::: code-tabs#bash @tab Start and Restart -```bash +```bash:no-line-numbers ./setup.sh ``` @tab Shutdown -```bash +```bash:no-line-numbers ./teardown.sh ``` @@ -58,29 +60,34 @@ The key components this script will setup: ### Kafka topics -This Zilla quickstart hosts a UI for the Kafka cluster. Go to the [topics page](http://localhost/ui/clusters/local/all-topics) to browse the data. +This Zilla quickstart hosts a UI for the Kafka cluster. To browse the data, go to the [topics page](http://localhost/ui/clusters/local/all-topics). -- **items-crud** - REST CRUD messages -- **events-sse** - SSE event messages -- **echo-service-messages** - gRPC echo messages -- **route-guide-requests** - gRPC RouteGuide requests -- **route-guide-responses** - gRPC RouteGuide responses +- [items-crud](http://localhost/ui/clusters/local/all-topics/items-crud/messages) - REST CRUD messages +- [events-sse](http://localhost/ui/clusters/local/all-topics/events-sse/messages) - SSE event messages +- [echo-service-messages](http://localhost/ui/clusters/local/all-topics/echo-service-messages/messages) - gRPC echo messages +- [route-guide-requests](http://localhost/ui/clusters/local/all-topics/route-guide-requests/messages) - gRPC RouteGuide requests +- [route-guide-responses](http://localhost/ui/clusters/local/all-topics/route-guide-responses/messages) - gRPC RouteGuide responses +- [iot-messages](http://localhost/ui/clusters/local/all-topics/iot-messages/messages) - MQTT messages +- [iot-retained](http://localhost/ui/clusters/local/all-topics/iot-retained/messages) - MQTT messages with the retained flag +- [iot-sessions](http://localhost/ui/clusters/local/all-topics/iot-sessions/messages) - MQTT sessions ## REST 
Kafka proxy

-Zilla can expose common entity CRUD endpoints with the entity data being stored on Kafka topics. Leveraging the `cleanup.policy=compact` feature of Kafka, Zilla enables a standard REST backend architecture with Kafka as the storage layer. Adding a `Idempotency-Key` header during creation will set the message `key` and acts as the `ID` for the record. A UUID is generated if no key is sent.
+Zilla can expose common entity CRUD endpoints with the entity data being stored on Kafka topics. Leveraging Kafka's `cleanup.policy=compact` feature, Zilla enables a standard REST backend architecture with Kafka as the storage layer. Adding an `Idempotency-Key` header during creation will set the message `key` and act as the `ID` for the record. A UUID is generated if no key is sent.

- **GET** - Fetches all items on the topic or fetches one item by its key using `/:key`.
- **POST** - Create a new item with the `Idempotency-Key` header setting the key.
- **PUT** - Update an item based on its key using `/:key`.
- **DELETE** - Delete an item based on its key using `/:key`.

+The [items-crud](http://localhost/ui/clusters/local/all-topics/items-crud/messages) Kafka topic will have all the objects you posted, updated, and deleted.
+
::: note Going Deeper
-Zilla can be configured for request-response over Kafka topics both synchronously and asynchronously, and more that we aren't able to cover in this quickstart. Here are some other resources you will want to check out.
+Zilla can be configured for request-response over Kafka topics both synchronously and asynchronously, and more that we can't cover in this quickstart. Here are some other resources you will want to check out.
- [REST proxy guide](../../concepts/kafka-proxies/rest-proxy.md)
- [HTTP proxy example](https://github.com/aklivity/zilla-examples/tree/main/http.proxy)
-- [JWT example](https://github.com/aklivity/zilla-examples/tree/main/http.echo.jwt)
+- [JWT Auth example](https://github.com/aklivity/zilla-examples/tree/main/http.echo.jwt)
- [Kafka cache example](https://github.com/aklivity/zilla-examples/tree/main/http.kafka.cache)
- [Kafka sync example](https://github.com/aklivity/zilla-examples/tree/main/http.kafka.sync)
- [Kafka async example](https://github.com/aklivity/zilla-examples/tree/main/http.kafka.async)

@@ -93,12 +100,14 @@ Zilla can expose a Kafka topic as a Server-sent Events (SSE) stream, enabling a

- **POST** - Push a new event.
- **GET:SSE** - Stream all of the events published on the `events-sse` Kafka topic.

+The [events-sse](http://localhost/ui/clusters/local/all-topics/events-sse/messages) Kafka topic will have a record of each new event sent over HTTP to the SSE stream.
+
::: note Going Deeper
-Zilla can be configured for more use cases that we aren't able to cover in this quickstart. Here are some other interesting examples you will want to check out.
+Zilla can be configured for more use cases we can't cover in this quickstart. Here are some other interesting examples you will want to check out.

- [SSE proxy guide](../../concepts/kafka-proxies/sse-proxy.md)
- [Kafka fanout example](https://github.com/aklivity/zilla-examples/tree/main/sse.kafka.fanout)
-- [JWT example](https://github.com/aklivity/zilla-examples/tree/main/sse.proxy.jwt)
+- [JWT Auth example](https://github.com/aklivity/zilla-examples/tree/main/sse.proxy.jwt)
:::

## gRPC Kafka proxy

@@ -108,8 +117,14 @@ Zilla maps the service method's request and response messages directly to Kafka

- **RouteGuide** - Proxy messages through Kafka to a running gRPC server.
- **EchoService** - Zilla implements a simple message echo service.
+Check out the Kafka topics:
+
+The [echo-service-messages](http://localhost/ui/clusters/local/all-topics/echo-service-messages/messages) Kafka topic will have both the request and response record for each of the echo messages sent. You can see the records with the same generated UUIDs and `header` values.
+
+The [route-guide-requests](http://localhost/ui/clusters/local/all-topics/route-guide-requests/messages) Kafka topic will have every proto request object, meaning every message that is sent to the `server`. The [route-guide-responses](http://localhost/ui/clusters/local/all-topics/route-guide-responses/messages) Kafka topic will have every proto response object, meaning every message returned from the `server`.
+
::: note Going Deeper
-Zilla can be configured for more use cases that we aren't able to cover in this quickstart. Here are some other interesting examples you will want to check out.
+Zilla can be configured for more use cases we can't cover in this quickstart. Here are some other interesting examples you will want to check out.

- [gRPC proxy guide](../../concepts/kafka-proxies/grpc-proxy.md)
- [gRPC proxy example](https://github.com/aklivity/zilla-examples/tree/main/grpc.proxy)
@@ -117,6 +132,30 @@ Zilla can be configured for more use cases that we aren't able to cover in this
- [Kafka proxy example](https://github.com/aklivity/zilla-examples/tree/main/grpc.kafka.proxy)
:::

+## MQTT Kafka proxy
+
+Zilla provides an MQTT broker by implementing the MQTT v5 specification. Clients can connect and send MQTT messages, which Zilla will store in one of three defined Kafka topics. This quickstart manages all messages, messages marked with the `retained` flag, and sessions on any topic.
+
+::: info Postman MQTT in BETA
+Postman recently released MQTT support into [public BETA](https://blog.postman.com/postman-supports-mqtt-apis/), and we are using it for this quickstart. Be mindful that you may encounter minor issues while using it.
+:::
+
+- **Pub/Sub** - Publish your own messages
+- **Simulated Topics** - Subscribe to simulated traffic
+
+Setting the `retain` flag to true on a published message will send that message to the `retained` Kafka topic. After those messages are published, a new subscription will get the last message sent for that topic.
+
+The [iot-messages](http://localhost/ui/clusters/local/all-topics/iot-messages/messages) Kafka topic will store every message sent to the broker. The [iot-retained](http://localhost/ui/clusters/local/all-topics/iot-retained/messages) Kafka topic will store only messages sent with the `retain` flag set to true. Because this topic is log compacted, it will only return the most recent copy of a message to a newly subscribed client. The [iot-sessions](http://localhost/ui/clusters/local/all-topics/iot-sessions/messages) Kafka topic will have a record for each client connection that Zilla has managed. You can see the `client-id` in the key and the `topic` in the value.
+
+An [mqtt-simulator](https://github.com/DamascenoRafael/mqtt-simulator) is included in the quickstart that will produce mock messages and send them to Zilla. The simulator uses the Python `paho-mqtt` library and the MQTT v5 specification.
+
+::: note Going Deeper
+Zilla can be configured for more use cases we can't cover in this quickstart. Here are some other interesting examples you will want to check out.
+
+- [MQTT Kafka example](https://github.com/aklivity/zilla-examples/tree/main/mqtt.kafka.reflect)
+- [JWT Auth example](https://github.com/aklivity/zilla-examples/tree/main/mqtt.kafka.reflect.jwt)
+:::
+
## Metrics

-This Zilla quickstart collects basic metrics for the [streaming](../../reference/config/telemetry/metrics/metric-stream.md), [HTTP](../../reference/config/telemetry/metrics/metric-http.md), and [gRPC](../../reference/config/telemetry/metrics/metric-grpc.md) services.
Go to [http://localhost:9090/metrics](http://localhost:9090/metrics) to see the the [Prometheus](../../reference/config/telemetry/exporter/exporter-prometheus.md) exported data.
+This Zilla quickstart collects basic metrics for the [streaming](../../reference/config/telemetry/metrics/metric-stream.md), [HTTP](../../reference/config/telemetry/metrics/metric-http.md), and [gRPC](../../reference/config/telemetry/metrics/metric-grpc.md) services. Go to [http://localhost:7190/metrics](http://localhost:7190/metrics) to see the [Prometheus](../../reference/config/telemetry/exporter/exporter-prometheus.md) exported data.

diff --git a/src/tutorials/rest/rest-intro.md b/src/tutorials/rest/rest-intro.md
index d0eed8c0..f3a9a7eb 100644
--- a/src/tutorials/rest/rest-intro.md
+++ b/src/tutorials/rest/rest-intro.md
@@ -66,13 +66,13 @@ bindings:
   kafka_cache_client:
     type: kafka
     kind: cache_client
-    options:
-      bootstrap:
-        - items-snapshots
     exit: kafka_cache_server
   kafka_cache_server:
     type: kafka
     kind: cache_server
+    options:
+      bootstrap:
+        - items-snapshots
     exit: kafka_client

# Connect to local Kafka

diff --git a/src/tutorials/sse/sse-intro.md b/src/tutorials/sse/sse-intro.md
index 395b552e..ddc0fa09 100644
--- a/src/tutorials/sse/sse-intro.md
+++ b/src/tutorials/sse/sse-intro.md
@@ -210,7 +210,7 @@ If the page doesn't load wait for the Zilla and the SSE server to start.

With the location input set to `http://localhost:8080/events` you can click the `Go` button to connect to the SSE server. Messages will stream in as long as you have the `messenger` service running in docker. The stream of messages will render on the page.

-```text:no-line-numbers
+```output:no-line-numbers
...
message: Hello, world

Wed May 10 14:25:45 UTC 2023
@@ -469,7 +469,7 @@ If the page doesn't load wait for the Zilla and the Kafka server to start.

With the location input set to `http://localhost:8080/events` you can click the `Go` button to connect to the SSE server.
Messages will stream in as long as you have the `messenger` service running in docker. The stream of messages will render on the page.

-```text:no-line-numbers
+```output:no-line-numbers
...
message: Hello, world

Wed May 10 14:25:45 UTC 2023
diff --git a/src/tutorials/todo-app/build.md b/src/tutorials/todo-app/build.md
index 6ea839fc..22a352dd 100644
--- a/src/tutorials/todo-app/build.md
+++ b/src/tutorials/todo-app/build.md
@@ -23,7 +23,7 @@ This Todo Application tutorial has the following goals:

* Docker `20.10.14`
* Git `2.32.0`
-* npm `8.3.1` and above
+* npm `8.3.1` and above

### Step 1: Kafka (or Redpanda)

@@ -188,7 +188,7 @@ Make sure you see this output at the end of the `example_init-topics` service lo

@tab Apache Kafka

-```text:no-line-numbers
+```output:no-line-numbers
## Creating the Kafka topics
Created topic task-commands.
Created topic task-replies.
@@ -202,7 +202,7 @@ task-snapshots

@tab Redpanda

-```text:no-line-numbers
+```output:no-line-numbers
CLUSTER
=======
redpanda.initializing
@@ -231,7 +231,7 @@ task-snapshots 1 1

### Step 2: Todo Service

-Next, you will need to build a todo service that is implemented using `Spring boot + Kafka Streams` to process commands and generate relevant output. This `Todo` service can deliver near real-time updates when a `Task` is created, renamed, or deleted, and produces a message to the Kafka `task-snapshots` topic with the updated value.
+Next, you will need to build a todo service that is implemented using `Spring boot + Kafka Streams` to process commands and generate relevant output. This `Todo` service can deliver near real-time updates when a `Task` is created, renamed, or deleted, and produces a message to the Kafka `task-snapshots` topic with the updated value. Combining this with `cleanup-policy: compact` for the `task-snapshots` topic causes the topic to behave more like a table, where only the most recent message for each distinct message key is retained.
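The compaction behavior described above can be sketched with a plain shell pipeline. This is an illustration only — the task keys and values below are made up, not the service's real payloads:

```bash
# Simulate log compaction: given a stream of "key value" records,
# only the most recent value per distinct key survives.
printf '%s\n' \
  "task-1 created" \
  "task-2 created" \
  "task-1 renamed" |
  awk '{latest[$1] = $2} END {for (k in latest) print k, latest[k]}' |
  sort
# task-1 renamed
# task-2 created
```

Note that Kafka performs this cleanup asynchronously on topic segments, so a reader may briefly see older duplicates for a key before compaction catches up.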
@@ -285,20 +285,18 @@ Run the command below to deploy the `todo-service` to your existing stack.

docker stack deploy -c stack.yml example --resolve-image never
```

-output:
-
::: code-tabs#text

@tab Apache Kafka

-```text:no-line-numbers
+```output:no-line-numbers
Creating service example_todo-service
Updating service example_kafka (id: st4hq1bwjsom5r0jxnc6i9rgr)
```

@tab Redpanda

-```text:no-line-numbers
+```output:no-line-numbers
Creating service example_todo-service
Updating service example_redpanda (id: ilmfqpwf35b7ftd6cvzdis8au)
```
@@ -644,7 +642,7 @@ Each new update arrives automatically, even when changes are made by other clien

### Step 4: Web App

-Next, you will build the `Todo` app that's implemented using [VueJs](https://vuejs.org/) framework. Run the commands below in the root directory.
+Next, you will build the `Todo` app that's implemented using the [VueJs](https://vuejs.org/) framework. Run the commands below in the root directory.

```bash:no-line-numbers
git clone https://github.com/aklivity/todo-app && \
@@ -821,7 +819,7 @@ Finally, run

docker stack deploy -c stack.yml example --resolve-image never
```

-Make sure that `zilla.yaml` config changes got applied after restarting the `Zilla` service. Check the `example_zilla` service log.
+Make sure that the `zilla.yaml` config changes were applied after restarting the `Zilla` service. Check the `example_zilla` service log.

### Step 5: Test Drive

diff --git a/src/tutorials/todo-app/secure.md b/src/tutorials/todo-app/secure.md
index 2692d01f..f9a0cda3 100644
--- a/src/tutorials/todo-app/secure.md
+++ b/src/tutorials/todo-app/secure.md
@@ -19,7 +19,7 @@ In this guide, you will use the [JWT guard](../../reference/config/guards/guard-

* Docker `20.10.14`
* Git `2.32.0`
-* npm `8.3.1` and above
+* npm `8.3.1` and above
* jq `1.6` and above
* completed [Build the Todo Application](./build.md) with Docker stack still running

@@ -510,9 +510,7 @@ Let's verify the Tasks API using `curl` as shown below.
curl -v http://localhost:8080/tasks ``` -output: - -```text:no-line-numbers +```output:no-line-numbers > GET /tasks HTTP/1.1 > Host: localhost:8080 > User-Agent: curl/7.79.1