Commit: Moved kafka features config info to new page

Cali0707 committed Jul 14, 2023
1 parent ef97b63 commit 1b29c60
Showing 3 changed files with 211 additions and 171 deletions.
4 changes: 3 additions & 1 deletion config/nav.yml
```diff
@@ -190,7 +190,9 @@ nav:
 - Available Broker types: eventing/brokers/broker-types/README.md
 # add default IMC broker page, page explaining broker types
 - Channel based Broker: eventing/brokers/broker-types/channel-based-broker/README.md
-- Apache Kafka: eventing/brokers/broker-types/kafka-broker/README.md
+- Apache Kafka:
+  - About Apache Kafka Broker: eventing/brokers/broker-types/kafka-broker/README.md
+  - Configuring Kafka features: eventing/brokers/broker-types/kafka-broker/configuring-kafka-features.md
 - RabbitMQ Broker: eventing/brokers/broker-types/rabbitmq-broker/README.md
 - Creating a Broker: eventing/brokers/create-broker.md
 - Developer configuration options: eventing/brokers/broker-developer-config-options.md
```
172 changes: 2 additions & 170 deletions docs/eventing/brokers/broker-types/kafka-broker/README.md
```diff
@@ -6,7 +6,7 @@ Notable features are:
 
 - Control plane High Availability
 - Horizontally scalable data plane
-- [Extensively configurable](#kafka-producer-and-consumer-configurations)
+- [Extensively configurable](./configuring-kafka-features)
 - Ordered delivery of events based on the [CloudEvents partitioning extension](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/extensions/partitioning.md)
 - Supports any Kafka version; see the [compatibility matrix](https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix)
 - Supports two [data plane modes](#data-plane-isolation-vs-shared-data-plane): data plane isolation per-namespace or a shared data plane
```
@@ -257,174 +257,6 @@ spec:
!!! note
    When using an external topic, the Knative Kafka Broker does not own the topic and is not
    responsible for managing it, including the topic lifecycle and its general validity. Other
    restrictions on access to the topic may apply. See the Apache Kafka documentation about using
    [Access Control Lists (ACLs)](https://kafka.apache.org/documentation/#security_authz).
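
Because access to an externally managed topic is typically governed by ACLs, the following is a minimal sketch of granting the data plane read access. The principal `User:knative-kafka-broker`, the topic name, and the bootstrap address are illustrative assumptions, and the exact flags depend on your Kafka version and authorizer configuration.

```bash
# Hypothetical example: allow an assumed Knative data plane principal to read
# from an externally managed topic. Adjust the principal, topic, and bootstrap
# address to match your cluster.
bin/kafka-acls.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --add \
  --allow-principal User:knative-kafka-broker \
  --operation Read \
  --topic my-existing-topic
```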

## Configure Knative Eventing Kafka features

There are various Kafka features and default values that the Knative Kafka Broker uses when interacting with Kafka. You can configure them as follows:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kafka-features
  namespace: knative-eventing
data:
  # Controls whether the dispatcher should use the rate limiter based on the number of virtual replicas.
  # 1. Enabled: The rate limiter is applied.
  # 2. Disabled: The rate limiter is not applied.
  dispatcher.rate-limiter: "disabled"
  # Controls whether the dispatcher should record additional metrics.
  # 1. Enabled: The metrics are recorded.
  # 2. Disabled: The metrics are not recorded.
  dispatcher.ordered-executor-metrics: "disabled"
  # Controls whether the controller should autoscale consumer resources with KEDA.
  # 1. Enabled: KEDA autoscaling of consumers will be set up.
  # 2. Disabled: KEDA autoscaling of consumers will not be set up.
  controller.autoscaler: "disabled"
{% raw %}
  # The Go text/template used to generate the consumer group ID for Triggers.
  # The template can reference the Trigger's Kubernetes metadata only.
  triggers.consumergroup.template: "knative-trigger-{{ .Namespace }}-{{ .Name }}"
  # The Go text/template used to generate topic names for Brokers.
  # The template can reference the Broker's Kubernetes metadata only.
  brokers.topic.template: "knative-broker-{{ .Namespace }}-{{ .Name }}"
  # The Go text/template used to generate topic names for Channels.
  # The template can reference the Channel's Kubernetes metadata only.
  channels.topic.template: "knative-channel-{{ .Namespace }}-{{ .Name }}"
{% endraw %}
```
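
A minimal sketch of applying and verifying this configuration with `kubectl` follows; the file name is illustrative:

```bash
# Apply the feature flags and confirm the resulting ConfigMap contents.
kubectl apply -f config-kafka-features.yaml
kubectl get configmap config-kafka-features -n knative-eventing -o yaml
```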

## Consumer Offsets Commit Interval

Kafka consumers keep track of the last successfully sent events by committing offsets.

The Knative Kafka Broker commits the offset every `auto.commit.interval.ms` milliseconds.

!!! note
    To avoid a negative impact on performance, committing offsets every time an event is
    successfully sent to a subscriber is not recommended.

You can change the interval by modifying the `auto.commit.interval.ms` parameter in the
`config-kafka-broker-data-plane` `ConfigMap` in the `knative-eventing` namespace, as follows:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kafka-broker-data-plane
  namespace: knative-eventing
data:
  # Some configurations omitted ...
  config-kafka-broker-consumer.properties: |
    # Some configurations omitted ...
    # Commit the offset every 5000 milliseconds (5 seconds)
    auto.commit.interval.ms=5000
```
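
A quick way to make this change in place is to edit the `ConfigMap` directly; for example:

```bash
# Opens the data plane ConfigMap in your editor; adjust
# auto.commit.interval.ms under config-kafka-broker-consumer.properties.
kubectl edit configmap config-kafka-broker-data-plane -n knative-eventing
```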

!!! note
    The Knative Kafka Broker guarantees at-least-once delivery, which means that your applications
    may receive duplicate events. A higher commit interval means that there is a higher probability
    of receiving duplicate events, because when a consumer restarts, it restarts from the last
    committed offset.

## Kafka Producer and Consumer configurations

Knative exposes all available Kafka producer and consumer configuration options, which you can modify to suit your workloads.

You can change these configurations by modifying the `config-kafka-broker-data-plane` `ConfigMap` in
the `knative-eventing` namespace.

Documentation for the settings available in this `ConfigMap` is available on the
[Apache Kafka website](https://kafka.apache.org/documentation/),
in particular, [Producer configurations](https://kafka.apache.org/documentation/#producerconfigs)
and [Consumer configurations](https://kafka.apache.org/documentation/#consumerconfigs).
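
As a sketch of such a modification, the following sets one producer option. The `config-kafka-broker-producer.properties` key is assumed to mirror the consumer key shown earlier, and `max.request.size` is a standard Kafka producer setting; the value is illustrative.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kafka-broker-data-plane
  namespace: knative-eventing
data:
  # Some configurations omitted ...
  config-kafka-broker-producer.properties: |
    # Allow larger event payloads, in bytes (illustrative value).
    max.request.size=2097152
```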

## Enable debug logging for data plane components

The following YAML shows the default logging configuration for data plane components, which is
created during the installation step:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-config-logging
  namespace: knative-eventing
data:
  config.xml: |
    <configuration>
      <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
      </appender>
      <root level="INFO">
        <appender-ref ref="jsonConsoleAppender"/>
      </root>
    </configuration>
```

To change the logging level to `DEBUG`, you must:

1. Apply the following `kafka-config-logging` `ConfigMap`, or replace `level="INFO"` with
   `level="DEBUG"` in the existing `kafka-config-logging` `ConfigMap`:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kafka-config-logging
      namespace: knative-eventing
    data:
      config.xml: |
        <configuration>
          <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
            <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
          </appender>
          <root level="DEBUG">
            <appender-ref ref="jsonConsoleAppender"/>
          </root>
        </configuration>
    ```

2. Restart the `kafka-broker-receiver` and the `kafka-broker-dispatcher` deployments by entering the following commands:

    ```bash
    kubectl rollout restart deployment -n knative-eventing kafka-broker-receiver
    kubectl rollout restart deployment -n knative-eventing kafka-broker-dispatcher
    ```
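
To confirm that the new level has taken effect, you can tail the logs; a sketch, assuming the JSON `level` field produced by the Logstash encoder shown above:

```bash
# Watch the receiver logs and look for DEBUG-level entries.
kubectl logs -n knative-eventing deployment/kafka-broker-receiver -f | grep '"level":"DEBUG"'
```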

## Configuring the order of delivered events

When dispatching events, the Kafka broker can be configured to support different delivery ordering guarantees.

You can configure the delivery order of events using the `kafka.eventing.knative.dev/delivery.order` annotation on the `Trigger` object:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger
  annotations:
    kafka.eventing.knative.dev/delivery.order: ordered
spec:
  broker: my-kafka-broker
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
```

The supported consumer delivery guarantees are:

* `unordered`: An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management. This is useful when there is a high demand for parallel consumption and no need for explicit ordering, for example, the processing of click analytics.
* `ordered`: An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the next message of the partition. This is useful when stricter ordering is needed or when there is a relationship or grouping between events, for example, the processing of customer orders.

`unordered` is the default delivery guarantee.

## Data plane isolation vs shared data plane

@@ -480,7 +312,7 @@ Upon the creation of the first `Broker` with `KafkaNamespaced` class, the `kafka

All of the configuration mechanisms that are available for the `Kafka` Broker class are also available for brokers with the `KafkaNamespaced` class, with these exceptions (a sketch of a `KafkaNamespaced` Broker follows the list):

* [Above](#kafka-producer-and-consumer-configurations) it is described how producer and consumer configuration is done by modifying the `config-kafka-broker-data-plane` `ConfigMap` in the `knative-eventing` namespace. Since the Kafka Broker controller propagates this `ConfigMap` into the user namespace, there is currently no way to configure producer and consumer settings per namespace. Any value set in the `config-kafka-broker-data-plane` `ConfigMap` in the `knative-eventing` namespace is also used in the user namespace.
* [This page](./configuring-kafka-features) describes how producer and consumer configuration is done by modifying the `config-kafka-broker-data-plane` `ConfigMap` in the `knative-eventing` namespace. Since the Kafka Broker controller propagates this `ConfigMap` into the user namespace, there is currently no way to configure producer and consumer settings per namespace. Any value set in the `config-kafka-broker-data-plane` `ConfigMap` in the `knative-eventing` namespace is also used in the user namespace.
* Because of the same propagation, it is also not possible to configure the consumer offsets commit interval per namespace.
* A few more `ConfigMap`s are propagated: `config-tracing` and `kafka-config-logging`. This means that tracing and logging are also not configurable per namespace.
* Similarly, the data plane deployments are propagated from the `knative-eventing` namespace to the user namespace. This means that the data plane deployments are not configurable per namespace and are identical to the ones in the `knative-eventing` namespace.
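
For reference, the following is a minimal sketch of a `Broker` using the `KafkaNamespaced` class. The names are illustrative, and the `ConfigMap` referenced in `spec.config` is assumed to live in the same namespace as the `Broker`:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: my-namespaced-broker
  namespace: my-namespace
  annotations:
    # Selects the namespaced data plane mode (illustrative sketch).
    eventing.knative.dev/broker.class: KafkaNamespaced
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: my-broker-config
    namespace: my-namespace
```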