Configuration examples for OIDC/security and metrics (#1384)
* Configuration examples for OIDC/security and metrics

Signed-off-by: Michael Edgar <[email protected]>

* fix: use single console example resource for playwright test

Signed-off-by: Michael Edgar <[email protected]>

* Suggested improvements to readability

Co-authored-by: PaulRMellor <[email protected]>

* Add header comment block for each new example with summaries

Signed-off-by: Michael Edgar <[email protected]>

* Suggested improvements to comments/descriptions

Co-authored-by: PaulRMellor <[email protected]>

* Replace subject names with placeholders

Signed-off-by: Michael Edgar <[email protected]>

* Mention placeholders in example's description

Co-authored-by: PaulRMellor <[email protected]>

---------

Signed-off-by: Michael Edgar <[email protected]>
Co-authored-by: PaulRMellor <[email protected]>
MikeEdgar and PaulRMellor authored Jan 22, 2025
1 parent f54199d commit 320644a
Showing 6 changed files with 218 additions and 7 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/playwright-tests.yml
@@ -133,10 +133,10 @@ jobs:
# Display the resource
export KAFKA_NAMESPACE="${TARGET_NAMESPACE}"
-cat examples/console/* | envsubst && echo
+cat examples/console/010-Console-example.yaml | envsubst && echo
# Apply the resource
-cat examples/console/* | envsubst | kubectl apply -n ${TARGET_NAMESPACE} -f -
+cat examples/console/010-Console-example.yaml | envsubst | kubectl apply -n ${TARGET_NAMESPACE} -f -
kubectl wait console/example --for=condition=Ready --timeout=300s -n $TARGET_NAMESPACE
7 changes: 5 additions & 2 deletions README.md
@@ -69,8 +69,9 @@ export NAMESPACE=default
cat install/operator-olm/*.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -
```

-#### Console Custom Resource Example
+#### Console Custom Resource Examples
Once the operator is ready, you may then create a `Console` resource in the namespace where the console should be deployed. This example `Console` is based on the example Apache Kafka<sup>®</sup> cluster deployed above in the [prerequisites section](#prerequisites). Also see [examples/console/010-Console-example.yaml](examples/console/010-Console-example.yaml).

```yaml
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
@@ -100,6 +101,8 @@ spec:
# This is optional if properties are used to configure the user
```

Additional Console resource examples can be found at [examples/console/](examples/console/), including OpenShift monitoring metrics configuration and OIDC security configuration.

### Deploy the operator directly
Deploying the operator without the use of OLM requires applying the component Kubernetes resources for the operator directly. These resources are bundled and attached to each StreamsHub Console release. The latest release can be found [here](https://github.com/streamshub/console/releases/latest). The resource file is named `streamshub-console-operator.yaml`.

@@ -113,7 +116,7 @@ curl -sL https://github.com/streamshub/console/releases/download/${VERSION}/stre
```
Note: if you are not using the Prometheus operator you may see an error about a missing `ServiceMonitor` custom resource type. This error may be ignored.

-With the operator resources created, you may create a `Console` resource like the one shown in [Console Custom Resource Example](#console-custom-resource-example).
+With the operator resources created, you may create a `Console` resource like the one shown in [Console Custom Resource Examples](#console-custom-resource-examples).

## Running locally

9 changes: 6 additions & 3 deletions examples/console-config.yaml
@@ -52,9 +52,8 @@ security:
roleNames:
- administrators

-# Roles and associate rules for global resources (currently only Kafka clusters) are given here in the `security.roles`
-# section. Rules for Kafka-scoped resources are specified within the cluster configuration section below. That is,
-# at paths `kafka.clusters[].security.rules[].
+# Roles and associated rules for accessing global resources (currently limited to Kafka clusters) are defined in `security.roles`.
+# Rules for individual Kafka clusters are specified under `kafka.clusters[].security.rules[]`.
roles:
# developers may perform any operation with clusters 'a' and 'b'.
- name: developers
@@ -123,6 +122,10 @@ kafka:
- privileges:
- get
- list
- resources:
- consumerGroups
- rebalances
- privileges:
- update

- name: my-kafka2
37 changes: 37 additions & 0 deletions examples/console/console-openshift-metrics.yaml
@@ -0,0 +1,37 @@
#
# This example demonstrates using OpenShift user-workload monitoring as
# a source of Kafka metrics for the console. This configuration uses the
# `openshift-monitoring` metrics source type. The connection between a
# `metricsSources` entry and a `kafkaClusters` entry is established through
# the `metricsSource` specified for the Kafka cluster.
#
# See https://docs.openshift.com/container-platform/4.17/observability/monitoring/enabling-monitoring-for-user-defined-projects.html
# for details on how to enable monitoring of user-defined projects in OpenShift.
#
---
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: example
spec:
  hostname: example-console.${CLUSTER_DOMAIN}

  metricsSources:
    # Example metrics source using OpenShift's built-in monitoring.
    # For `type: openshift-monitoring`, no additional attributes are required,
    # but you can configure a truststore if needed.
    - name: my-ocp-prometheus
      type: openshift-monitoring

  kafkaClusters:
    # Kafka cluster configuration.
    # The example uses the Kafka cluster configuration from `examples/kafka`.
    # Adjust the values to match your environment.
    - name: console-kafka               # Name of the `Kafka` CR representing the cluster
      namespace: ${KAFKA_NAMESPACE}     # Namespace where the `Kafka` CR is deployed
      listener: secure                  # Listener name from the `Kafka` CR to connect the console
      metricsSource: my-ocp-prometheus  # Name of the configured metrics source defined in `metricsSources`
      credentials:
        kafkaUser:
          name: console-kafka-user1     # Name of the `KafkaUser` CR used by the console to connect to the Kafka cluster
                                        # This is optional if properties are used to configure the user
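The `openshift-monitoring` source type reads from OpenShift's user-workload monitoring stack, which must already be enabled on the cluster. As a minimal sketch based on the OpenShift documentation linked above (verify against the docs for your OpenShift version), enabling it amounts to setting `enableUserWorkload: true` in the `cluster-monitoring-config` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Enables the user-workload monitoring stack that the
    # `openshift-monitoring` metrics source reads from.
    enableUserWorkload: true
```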
117 changes: 117 additions & 0 deletions examples/console/console-security-oidc.yaml
@@ -0,0 +1,117 @@
#
# This example demonstrates the use of an OIDC provider at `.spec.security.oidc`
# for user authentication in the console. Any OIDC provider should work, such as
# Keycloak or dex with a suitable backend identity provider.
#
# In addition to the OIDC configuration, this example shows how to configure
# subjects and roles for user authorization. Note that global resources (Kafka clusters)
# are configured within `.spec.security` whereas resources within a specific Kafka
# cluster are configured within `.spec.kafkaClusters[].security`.
# Replace <placeholders> with actual values specific to the environment.
#
---
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: example
spec:
  hostname: example-console.${CLUSTER_DOMAIN}

  security:
    oidc:
      authServerUrl: <OIDC discovery URL>   # URL for OIDC provider discovery
      clientId: <client-id>                 # Client ID for OIDC authentication
      clientSecret:
        # For development use only: provide a secret directly (not recommended for production).
        # value: <literal secret - development only!>
        valueFrom:
          secretKeyRef:
            name: my-oidc-secret
            key: client-secret

    subjects:
      # Subjects and their roles may be specified in terms of JWT claims or their subject name (`<user_1>`, `<user_2>` below).
      # Using claims is only supported when OIDC security is enabled.
      - claim: groups
        include:
          - <team_name_1>
          - <team_name_2>
        roleNames:
          - developers
      - claim: groups
        include:
          - <team_name_3>
        roleNames:
          - administrators
      - include:
          # Match subjects by their name when no claim is specified.
          # For JWT, this is typically the `preferred_username`, `upn`, or `sub` claim.
          # For per-Kafka authentication credentials, this is the user name used to authenticate.
          - <user_1>
          - <user_2>
        roleNames:
          - administrators

    # Roles and associated rules for accessing global resources (currently limited to Kafka clusters) are defined in `security.roles`.
    # Rules for individual Kafka clusters are specified under `kafkaClusters[].security.rules[]`.
    roles:
      # developers may perform any operation with clusters 'a' and 'b'.
      - name: developers
        rules:
          - resources:
              - kafkas
            resourceNames:
              - dev-cluster-a
              - dev-cluster-b
            privileges:
              - '*'
      # administrators may operate on any (unspecified) Kafka clusters
      - name: administrators
        rules:
          - resources:
              - kafkas
            privileges:
              - '*'

  kafkaClusters:
    # Kafka cluster configuration.
    # The example uses the Kafka cluster configuration from `examples/kafka`.
    # Adjust the values to match your environment.
    - name: console-kafka             # Name of the `Kafka` CR representing the cluster
      namespace: ${KAFKA_NAMESPACE}   # Namespace where the `Kafka` CR is deployed
      listener: secure                # Listener name from the `Kafka` CR to connect the console
      credentials:
        kafkaUser:
          name: console-kafka-user1   # Name of the `KafkaUser` CR used by the console to connect to the Kafka cluster
                                      # This is optional if properties are used to configure the user
      security:
        roles:
          # developers may only list and view some resources
          - name: developers
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                privileges:
                  - get
                  - list

          # administrators may list, view, and update an expanded set of resources
          - name: administrators
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                  - nodes/configs
                privileges:
                  - get
                  - list
              - resources:
                  - consumerGroups
                  - rebalances
                privileges:
                  - update
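The `secretKeyRef` under `clientSecret` expects a Secret named `my-oidc-secret` with a `client-secret` key, typically in the same namespace as the `Console` resource. A minimal sketch of such a Secret follows; the value shown is a placeholder for the client secret issued by the OIDC provider:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-oidc-secret   # Referenced by `spec.security.oidc.clientSecret.valueFrom.secretKeyRef`
type: Opaque
stringData:
  client-secret: <client-secret-issued-by-oidc-provider>   # Placeholder value
```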
51 changes: 51 additions & 0 deletions examples/console/console-standalone-prometheus.yaml
@@ -0,0 +1,51 @@
#
# This example demonstrates the use of a user-supplied Prometheus instance as
# a source of Kafka metrics for the console. This configuration uses the
# `standalone` metrics source type. The connection between a
# `metricsSources` and a `kafkaClusters` entry is established through
# the `metricsSource` specified for the Kafka cluster.
#
# See examples/prometheus for sample Prometheus instance resources configured for
# use with a Strimzi Kafka cluster.
#
---
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: example
spec:
  hostname: example-console.${CLUSTER_DOMAIN}

  metricsSources:
    # Use `type: standalone` when providing your own existing Prometheus instance
    - name: my-custom-prometheus
      type: standalone
      url: http://my-custom-prometheus.cloud2.example.com
      authentication: # optional
        # Either username + password or token
        username: my-user
        password: my-password
        #token: my-token
      trustStore: # optional
        type: JKS
        content:
          valueFrom:
            configMapKeyRef: # or secretKeyRef
              name: my-prometheus-configmap
              key: ca.jks
        password:
          # For development use only: if not provided through `valueFrom` properties, provide a password directly (not recommended for production).
          value: changeit

  kafkaClusters:
    # Kafka cluster configuration.
    # The example uses the Kafka cluster configuration from `examples/kafka`.
    # Adjust the values to match your environment.
    - name: console-kafka                 # Name of the `Kafka` CR representing the cluster
      namespace: ${KAFKA_NAMESPACE}       # Namespace where the `Kafka` CR is deployed
      listener: secure                    # Listener name from the `Kafka` CR to connect the console
      metricsSource: my-custom-prometheus # Name of the configured metrics source defined in `metricsSources`
      credentials:
        kafkaUser:
          name: console-kafka-user1       # Name of the `KafkaUser` CR used by the console to connect to the Kafka cluster
                                          # This is optional if properties are used to configure the user
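For production use, the inline `value: changeit` truststore password can be replaced with a reference to a Secret. The sketch below assumes the `password` field accepts the same `valueFrom` forms offered for `content` (`configMapKeyRef`/`secretKeyRef`) and that a Secret named `my-prometheus-truststore` exists; both are assumptions to verify against the Console schema:

```yaml
  metricsSources:
    - name: my-custom-prometheus
      type: standalone
      url: http://my-custom-prometheus.cloud2.example.com
      trustStore:
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-prometheus-configmap
              key: ca.jks
        password:
          valueFrom:
            secretKeyRef:                      # Assumed to mirror the `configMapKeyRef` form used for `content`
              name: my-prometheus-truststore   # Hypothetical Secret holding the truststore password
              key: password
```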
