CMLK-1343 - Enhance the kamelet azure-storage-blob-sink for append operation and rotate daily #304

Merged 2 commits on Nov 6, 2023
96 changes: 96 additions & 0 deletions azure-storage-blob-append-sink.kamelet.yaml
@@ -0,0 +1,96 @@
# ---------------------------------------------------------------------------
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ---------------------------------------------------------------------------

apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: azure-storage-blob-append-sink
  annotations:
    camel.apache.org/catalog.version: "2.0.0"
camel.apache.org/kamelet.icon: "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgOTEgODEiIGZpbGw9IiNmZmYiIGZpbGwtcnVsZT0iZXZlbm9kZCIgc3Ryb2tlPSIjMDAwIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjx1c2UgeGxpbms6aHJlZj0iI0EiIHg9Ii41IiB5PSIuNSIvPjxzeW1ib2wgaWQ9IkEiIG92ZXJmbG93PSJ2aXNpYmxlIj48cGF0aCBkPSJNNjcuNTU4IDBIMjIuNDQxTDAgNDBsMjIuNDQxIDQwaDQ1LjExN0w5MCA0MCA2Ny41NTggMHptLjIzNCA1Ny45NjRjMCAzLjM1My0yLjgwNSA2LjIyNy02LjA3OCA2LjIyN0gyOC41MmMtMy4yNzMgMC02LjA3OC0yLjg3NC02LjA3OC02LjIyN1YyMi4yNzZjMC0zLjM1MyAyLjgwNS02LjIyOCA2LjA3OC02LjIyOGgyOS45MjJsOS4zNTEgOS41ODF2MzIuMzM1ek00MS42MSA0Ni40NjdjMC0uNDc5LS4yMzQtLjcxOC0uMjM0LS45NThzLS4yMzQtLjQ3OS0uNDY3LS43MTgtLjIzNC0uMjQtLjQ2OC0uMjRoLS43MDFjLS40NjggMC0uNzAyIDAtLjkzNS4yNHMtLjQ2OC40NzktLjcwMS45NTgtLjIzNC45NTgtLjIzNCAxLjQzN3Y0LjU1MWMwIC43MTkuMjM0IDEuMTk3LjQ2OCAxLjQzNy4yMzQuNDc5LjQ2OC43MTkuNzAxLjcxOS4yMzQuMjQuNDY4LjI0LjkzNS4yNC4yMzQgMCAuNDY4IDAgLjcwMS0uMjRhLjUxLjUxIDAgMCAwIC40NjgtLjQ3OWMuMjM0LS4yNC4yMzQtLjQ3OS40NjctLjcxOCAwLS4yNC4yMzQtLjQ3OS4yMzQtLjk1OCAwLS4yMzkgMC0uNzE5LjIzMy0xLjE5OHYtMy4xMTRjLS40NjctLjI0LS40NjctLjQ3OS0uNDY3LS45NTh6bTEwLjUyLTE4LjY4M2MwLS40NzktLjIzNC0uNzE4LS4yMzQtLjk1OHMtLjIzNC0uNDc5LS40NjgtLjcxOS0uMjM0LS4yNC0uNDY4LS4yNGgtLjcwMWMtLjQ2NyAwLS43MDEgMC0uOTM1LjI0cy0uNDY4LjQ3OS0uNzAyLjk1OC0uMjMzLjk1OC0uMjMzIDEuNDM3djQuNTUxYzAgLjcxOS4yMzMgMS4xOTguNDY3IDEuNDM3LjIzNC40NzkuNDY4LjcxOS43MDEuNzE5LjIzNC4yNC40NjcuMjQuOTM1LjI0LjIzNCAwIC40NjcgMCAuNzAxLS4yNGEuNTEuNTEgMCAwIDAgLjQ2OC0uNDc5Yy4yMzQtLjI0LjIzNC0uNDc5LjQ2OC0uNzE5IDAtLjIzOS4yMzQtLjQ3OS4yMzQtLjk1OCAwLS4yMzkgMC0uNzE4LjIzNC0xLjE5OHYtMy4xMTRjLS4yMzQgMC0uMjM0LS40NzktLjQ2Ny0uOTU4em00LjY3NS04LjM4M0gyOC41MTljLTEuNjM2IDAtMi44MDUgMS4xOTgtMi44MDUgMi44NzR2MzUuNjg5YzAgMS42NzcgMS4xNjkgMi44NzQgMi44MDUgMi44NzRoMzMuMTk1YTIuODggMi44OCAwIDAgMCAyLjgwNS0yLjg3NFYyNy4zMDVoLTcuNDh2LTcuOTA0ek0zNiAyNi41ODd2LS40NzlsLjIzNC0uMjQgMi44MDUtMS45MTZoMi41NzF2MTEuNDk3aDIuMzM4bC4yMzMuMjRjLjIzNC4yMzkgMCAuMjM5IDAgLjIzOXYxLjE5N3MwIC4yNC0uMjMzLjI0aC03LjcxNGwtLjIzNC0uMjR2LTEuNDM3czAtLjI0LjIzNC0uMjRoMi44MDV2LTguODYybC0yLjEwNCAxLjE5OGMtLjIzNCAwLS4yMzQuMjM5LS40NjcuMjM5aC0uMjM0czAtLjIzOS0uMjM0LS4yMzl2LTEuMTk4em04LjE4MiAyNS42MjljLS4yMzMuOTU4LS40NjcgMS42NzctLjkzNSAyLjE1Ni0uNDY4LjcxOC0uOTM1IDEuMTk4LTEuNDAzIDEuNDM3LS43MDEuMjM5LTEuNDAzLjQ3OS0yLjMzOC40NzlzLTEuNjM2LS4yNC0yLjMzOC0uNDc5YTIuMTMgMi4xMyAwIDAgMS0xLjQwMy0xLjQzN2MtLjIzNC0uNzE4LS43MDEtMS40MzctLjcwMS0yLjE1Ni0uMjM0LS45NTgtLjIzNC0xLjkxNi0uMjM0LTIuODc0IDAtMS4xOTggMC0yLjE1Ni4yMzQtMi44NzQuMjM0LS45NTguNDY3LTEuNjc3LjkzNS0yLjE1NnMuOTM1LTEuMTk4IDEuNDAzLTEuNDM3Yy43MDEtLjI0IDEuNDAzLS40NzkgMi4zMzgtLjQ3OXMxLjYzNi4yMzkgMi4zMzcuNDc5YTIuMTMgMi4xMyAwIDAgMSAxLjQwMyAxLjQzN2MuMjM0LjcxOS43MDEgMS40MzcuNzAxIDIuMTU2LjIzNC45NTguMjM0IDEuOTE2LjIzNCAyLjg3NCAwIDEuMTk4IDAgMi4xNTYtLjIzNCAyLjg3NHptMTAuNTIgMy4zNTN2LjI0czAgLjIzOS0uMjM0LjIzOWgtNy43MTRsLS4yMzQtLjIzOXYtMS40MzdzMC0uMjM5LjIzNC0uMjM5aDIuODA1di04Ljg2MmwtMi4xMDQgMS4xOThjLS4yMzQgMC0uMjM0LjIzOS0uNDY4LjIzOWgtLjIzNHMwLS4yMzktLjIzNC0uMjM5VjQ0Ljc5bC4yMzQtLjI0IDIuODA1LTEuOTE2aDIuNTcydjExLjQ5N2gyLjMzOGwuMjM0LjIzOXYxLjE5OHptLjIzNC0yMi4wMzZjLS4yMzQuOTU4LS40NjggMS42NzctLjkzNSAyLjE1Ni0uNDY4LjcxOC0uOTM1IDEuMTk4LTEuNDAzIDEuNDM3LS43MDEuMjQtMS40MDMuNDc5LTIuMzM4LjQ3OXMtMS42MzYtLjI0LTIuMzM4LS40NzlhMi4xMyAyLjEzIDAgMCAxLTEuNDAzLTEuNDM3Yy0uMjM0LS40NzktLjcwMS0xLjQzNy0uNzAxLTIuMTU2cy0uMjM0LTEuOTE2LS4yMzQtMi44NzRjMC0xLjE5OCAwLTIuMTU2LjIzNC0yLjg3NC4yMzQtLjk1OC40NjgtMS42NzcuOTM1LTIuMTU2LjQ2OC0uNzE4LjkzNS0xLjE5OCAxLjQwMy0xLjQzNy43MDEtLjI0IDEuNDAzLS40NzkgMi4zMzgtLjQ3OXMxLjYzN
i4yNCAyLjMzOC40NzlhMi4xMyAyLjEzIDAgMCAxIDEuNDAzIDEuNDM3Yy4yMzQuNzE4LjcwMSAxLjQzNy43MDEgMi4xNTYuMjM0Ljk1OC4yMzQgMS45MTYuMjM0IDIuODc0IDAgMS4xOTgtLjIzNCAyLjE1Ni0uMjM0IDIuODc0eiIgZmlsbD0iIzAwNzhkNyIgc3Ryb2tlPSJub25lIi8+PC9zeW1ib2w+PC9zdmc+"
    camel.apache.org/provider: "Red Hat"
    camel.apache.org/kamelet.group: "Azure Storage Blob"
  labels:
    camel.apache.org/kamelet.type: "sink"
spec:
  definition:
    title: "Azure Storage Blob Append Sink"
    description: |-
      Upload data in append mode to Azure Storage Blob.

      In the header, you can set the `file` / `ce-file` property to specify the filename to upload. If you do not set the property in the header, the Kamelet uses the exchange ID as the filename.
    required:
      - accountName
      - containerName
    type: object
    properties:
      accountName:
        title: Account Name
        description: The Azure Storage Blob account name.
        type: string
        x-descriptors:
          - urn:camel:group:credentials
      containerName:
        title: Container Name
        description: The Azure Storage Blob container name.
        type: string
      accessKey:
        title: Access Key
        description: The Azure Storage Blob access key.
        type: string
        format: password
        x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:password
          - urn:camel:group:credentials
      credentialType:
        title: Credential Type
        description: Determines the credential strategy to adopt.
        type: string
        enum: ["SHARED_ACCOUNT_KEY", "AZURE_IDENTITY"]
        default: "SHARED_ACCOUNT_KEY"
  dependencies:
    - "camel:core"
    - "camel:azure-storage-blob"
    - "camel:kamelet"
  template:
    from:
      uri: "kamelet:source"
      steps:
        - choice:
            when:
              - simple: "${header[file]}"
                steps:
                  - set-header:
                      name: CamelAzureStorageBlobBlobName
                      simple: "${header[file]}"
              - simple: "${header[ce-file]}"
                steps:
                  - set-header:
                      name: CamelAzureStorageBlobBlobName
                      simple: "${header[ce-file]}"
            otherwise:
              steps:
                - set-header:
                    name: CamelAzureStorageBlobBlobName
                    simple: "${exchangeId}"
        - to:
            uri: "azure-storage-blob://{{accountName}}/{{containerName}}"
            parameters:
              accessKey: "{{?accessKey}}"
              operation: "commitAppendBlob"
              blobType: "appendBlob"
              credentialType: "{{credentialType}}"
1 change: 1 addition & 0 deletions docs/modules/ROOT/nav.adoc
@@ -18,6 +18,7 @@
* xref:ROOT:aws-sqs-source.adoc[image:kamelets/aws-sqs-source.svg[] AWS SQS Source]
* xref:ROOT:azure-servicebus-sink.adoc[image:kamelets/azure-servicebus-sink.svg[] Azure Servicebus Sink]
* xref:ROOT:azure-servicebus-source.adoc[image:kamelets/azure-servicebus-source.svg[] Azure Servicebus Source]
* xref:ROOT:azure-storage-blob-append-sink.adoc[image:kamelets/azure-storage-blob-append-sink.svg[] Azure Storage Blob Append Sink]
* xref:ROOT:azure-storage-blob-sink.adoc[image:kamelets/azure-storage-blob-sink.svg[] Azure Storage Blob Sink]
* xref:ROOT:azure-storage-blob-source.adoc[image:kamelets/azure-storage-blob-source.svg[] Azure Storage Blob Source]
* xref:ROOT:azure-storage-queue-sink.adoc[image:kamelets/azure-storage-queue-sink.svg[] Azure Storage Queue Sink]
151 changes: 151 additions & 0 deletions docs/modules/ROOT/pages/azure-storage-blob-append-sink.adoc
@@ -0,0 +1,151 @@
// THIS FILE IS AUTOMATICALLY GENERATED: DO NOT EDIT

= image:kamelets/azure-storage-blob-append-sink.svg[] Azure Storage Blob Append Sink

*Provided by: "Red Hat"*

Upload data in append mode to Azure Storage Blob.

In the header, you can set the `file` / `ce-file` property to specify the filename to upload. If you do not set the property in the header, the Kamelet uses the exchange ID as the filename.
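
For example, a minimal Camel YAML route (a sketch, not shipped with this Kamelet) can derive the `file` header from the current date so that appended records rotate into a new blob each day. The timer endpoint, blob name pattern, account, container, and access key below are illustrative placeholders.

[source,yaml]
----
# Illustrative sketch only: append a line every minute and rotate the target
# blob daily by computing the `file` header from the current date.
- route:
    id: append-and-rotate-daily
    from:
      uri: "timer:append"
      parameters:
        period: "60000"
      steps:
        - setBody:
            simple: "entry at ${date:now:yyyy-MM-dd HH:mm:ss}\n"
        # The Kamelet copies this header into CamelAzureStorageBlobBlobName.
        - setHeader:
            name: "file"
            simple: "audit-${date:now:yyyy-MM-dd}.log"
        - to:
            uri: "kamelet:azure-storage-blob-append-sink"
            parameters:
              accountName: "myaccount"
              containerName: "mycontainer"
              accessKey: "my-access-key"
----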

== Configuration Options

The following table summarizes the configuration options available for the `azure-storage-blob-append-sink` Kamelet:
[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example
| *accountName {empty}* *| Account Name| The Azure Storage Blob account name.| string| |
| *containerName {empty}* *| Container Name| The Azure Storage Blob container name.| string| |
| accessKey| Access Key| The Azure Storage Blob access key.| string| |
| credentialType| Credential Type| Determines the credential strategy to adopt.| string| `"SHARED_ACCOUNT_KEY"`|
|===

NOTE: Fields marked with an asterisk ({empty}*) are mandatory.


== Dependencies

At runtime, the `azure-storage-blob-append-sink` Kamelet relies upon the presence of the following dependencies:

- camel:core
- camel:azure-storage-blob
- camel:kamelet

== Usage

This section describes how you can use the `azure-storage-blob-append-sink`.

=== Knative Sink

You can use the `azure-storage-blob-append-sink` Kamelet as a Knative sink by binding it to a Knative object.

.azure-storage-blob-append-sink-binding.yaml
[source,yaml]
----
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: azure-storage-blob-append-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: azure-storage-blob-append-sink
    properties:
      accountName: "The Account Name"
      containerName: "The Container Name"

----

==== *Prerequisite*

Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

==== *Procedure for using the cluster CLI*

. Save the `azure-storage-blob-append-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,shell]
----
oc apply -f azure-storage-blob-append-sink-binding.yaml
----

==== *Procedure for using the Kamel CLI*

Configure and run the sink by using the following command:

[source,shell]
----
kamel bind channel:mychannel azure-storage-blob-append-sink -p "sink.accountName=The Account Name" -p "sink.containerName=The Container Name"
----

This command creates the KameletBinding in the current namespace on the cluster.
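
If you want every event from the channel to append to one specific blob rather than one blob per exchange ID, you can set the `file` header inside the binding itself. The sketch below assumes the `insert-header-action` Kamelet is available in your catalog; the blob name is a placeholder.

.azure-storage-blob-append-sink-binding-with-header.yaml
[source,yaml]
----
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: azure-storage-blob-append-sink-binding-with-header
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  steps:
    # Sets the `file` header so the sink appends to a fixed blob name.
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-header-action
      properties:
        name: "file"
        value: "events.log"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: azure-storage-blob-append-sink
    properties:
      accountName: "The Account Name"
      containerName: "The Container Name"
----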

=== Kafka Sink

You can use the `azure-storage-blob-append-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.azure-storage-blob-append-sink-binding.yaml
[source,yaml]
----
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: azure-storage-blob-append-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: azure-storage-blob-append-sink
    properties:
      accountName: "The Account Name"
      containerName: "The Container Name"

----

==== *Prerequisites*

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

==== *Procedure for using the cluster CLI*

. Save the `azure-storage-blob-append-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,shell]
----
oc apply -f azure-storage-blob-append-sink-binding.yaml
----

==== *Procedure for using the Kamel CLI*

Configure and run the sink by using the following command:

[source,shell]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-blob-append-sink -p "sink.accountName=The Account Name" -p "sink.containerName=The Container Name"
----

This command creates the KameletBinding in the current namespace on the cluster.
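
If the integration pod has an Azure identity available (for example through a managed or workload identity), you could set `credentialType` to `AZURE_IDENTITY` and omit `accessKey`, delegating authentication to Azure's default credential chain. This is a sketch under that assumption; the resource and property values are placeholders.

.azure-storage-blob-append-sink-binding-azure-identity.yaml
[source,yaml]
----
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: azure-storage-blob-append-sink-binding-azure-identity
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: azure-storage-blob-append-sink
    properties:
      accountName: "The Account Name"
      containerName: "The Container Name"
      # No accessKey: authentication is expected to come from the Azure
      # identity available to the runtime.
      credentialType: "AZURE_IDENTITY"
----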

== Kamelet source file

https://github.com/openshift-integration/kamelet-catalog/blob/main/azure-storage-blob-append-sink.kamelet.yaml

// THIS FILE IS AUTOMATICALLY GENERATED: DO NOT EDIT
8 changes: 5 additions & 3 deletions docs/modules/ROOT/pages/splunk-source.adoc
@@ -14,6 +14,7 @@ The following table summarizes the configuration options available for the `splu
[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example
| *initEarliestTime {empty}* *| Init Earliest Time| Initial start offset of the first search.| string| | `"05/17/22 08:35:46:456"`
| *password {empty}* *| Password| The password to authenticate to Splunk Server.| string| |
| *query {empty}* *| Query| The Splunk query to run.| string| |
| *serverHostname {empty}* *| Splunk Server Address| The address of your Splunk server.| string| | `"my_server_splunk.com"`
@@ -24,7 +25,6 @@ The following table summarizes the configuration options available for the `splu
| delay| Delay| Milliseconds before the next poll.| integer| |
| earliestTime| Earliest Time| Earliest time of the search time window.| string| | `"05/17/22 08:35:46:456"`
| index| Index| Splunk index to write to.| string| |
| initEarliestTime| Init Earliest Time| Initial start offset of the first search.| string| | `"05/17/22 08:35:46:456"`
| latestTime| Latest Time| Latest time of the search time window.| string| | `"05/17/22 08:35:46:456"`
| protocol| Protocol| Connection Protocol to Splunk server.| string| `"https"`|
| repeat| Repeat| The maximum number of fires.| integer| |
@@ -68,6 +68,7 @@ spec:
      apiVersion: camel.apache.org/v1alpha1
      name: splunk-source
    properties:
      initEarliestTime: "05/17/22 08:35:46:456"
      password: "The Password"
      query: "The Query"
      serverHostname: "my_server_splunk.com"
@@ -101,7 +102,7 @@ Configure and run the source by using the following command:

[source,shell]
----
kamel bind splunk-source -p "source.password=The Password" -p "source.query=The Query" -p "source.serverHostname=my_server_splunk.com" -p "source.username=The Username" channel:mychannel
kamel bind splunk-source -p "source.initEarliestTime=05/17/22 08:35:46:456" -p "source.password=The Password" -p "source.query=The Query" -p "source.serverHostname=my_server_splunk.com" -p "source.username=The Username" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.
@@ -124,6 +125,7 @@ spec:
      apiVersion: camel.apache.org/v1alpha1
      name: splunk-source
    properties:
      initEarliestTime: "05/17/22 08:35:46:456"
      password: "The Password"
      query: "The Query"
      serverHostname: "my_server_splunk.com"
@@ -158,7 +160,7 @@ Configure and run the source by using the following command:

[source,shell]
----
kamel bind splunk-source -p "source.password=The Password" -p "source.query=The Query" -p "source.serverHostname=my_server_splunk.com" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
kamel bind splunk-source -p "source.initEarliestTime=05/17/22 08:35:46:456" -p "source.password=The Password" -p "source.query=The Query" -p "source.serverHostname=my_server_splunk.com" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.