add support for s3 as destination for backups (#37)
* helm: add s3 backup support

* add default configurationOverrides for kafkaStore

* don't bump version

* bump

---------

Co-authored-by: Ravi Singal <[email protected]>
iamsudip and ravisingal authored Nov 29, 2023
1 parent 13cf902 commit 511e714
Showing 3 changed files with 30 additions and 5 deletions.
README.md (1 addition, 1 deletion)

@@ -27,7 +27,7 @@ This chart will do the following:
 * Optionally add an [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resource.
 * Optionally start a JMX Exporter container inside schema registry pods.
 * Optionally create a Prometheus ServiceMonitor for each enabled jmx exporter container.
-* Optionally add a cronjob to take backup the schema registry topic and save it in [Google Cloud Storage](https://cloud.google.com/storage).
+* Optionally add a cronjob to take a backup of the schema registry topic and save it in [Google Cloud Storage](https://cloud.google.com/storage) or [AWS S3](https://aws.amazon.com/pm/serv-s3/).


## Installing the Chart
helm/templates/cronjob.yaml (17 additions, 3 deletions)

@@ -45,30 +45,44 @@ spec:
             - |
               timestamp=$(date +%Y-%m-%d-%H-%M-%S)
               month=${timestamp:0:7}
-              BACKUP_LOCATION=gs://$BUCKET/schema-registry/$CLUSTER_NAME/$month
               unset JMX_PORT KAFKA_OPTS KAFKA_HEAP_OPTS KAFKA_LOG4J_OPTS
               /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server $KAFKA_BOOTSTRAP_SERVERS --topic $KAFKA_TOPIC --from-beginning --property print.key=true --timeout-ms 60000 1> schemas-${timestamp}.log || exit 2
               tar cvfz schemas-${timestamp}.tar.gz schemas-$timestamp.log || exit 2
+              {{- if .Values.backup.gcloud }}
+              BACKUP_LOCATION=gs://$BUCKET/schema-registry/$CLUSTER_NAME/$month
               gcloud auth activate-service-account --key-file $GOOGLE_APPLICATION_CREDENTIALS || exit 2
               gsutil cp schemas-${timestamp}.tar.gz $BACKUP_LOCATION/schemas-${timestamp}.tar.gz || exit 2
+              {{- end }}
+              {{- if .Values.backup.aws }}
+              BACKUP_LOCATION=s3://$S3_BUCKET/${CLUSTER_NAME}-backups/schema-registry/$month
+              aws s3 cp schemas-${timestamp}.tar.gz $BACKUP_LOCATION/schemas-${timestamp}.tar.gz || exit 2
+              {{- end }}
           env:
             - name: CLUSTER_NAME
               value: {{ .Values.backup.cluster }}
+            {{- if .Values.backup.gcloud }}
             - name: BUCKET
               value: {{ .Values.backup.gcloud.bucket }}
+            - name: GOOGLE_APPLICATION_CREDENTIALS
+              value: "/accounts/key.json"
+            {{- end }}
+            {{- if .Values.backup.aws }}
+            - name: S3_BUCKET
+              value: {{ .Values.backup.aws.bucket }}
+            {{- end }}
             - name: KAFKA_BOOTSTRAP_SERVERS
               value: {{ template "schema-registry.kafka.bootstrapServers" . }}
             - name: KAFKA_TOPIC
               value: {{ .Values.backup.topic }}
-            - name: GOOGLE_APPLICATION_CREDENTIALS
-              value: "/accounts/key.json"
+          {{- if .Values.backup.gcloud }}
           volumeMounts:
             - name: gcs-iam-secret
               mountPath: "/accounts"
       volumes:
         - name: gcs-iam-secret
           secret:
             secretName: {{ .Values.backup.gcloud.secretName }}
+      {{- end }}
       {{- if .Values.backup.imagePullSecrets }}
       imagePullSecrets:
         {{- toYaml .Values.imagePullSecrets | nindent 12 }}
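The job names each archive after its run timestamp and groups uploads under a month-level prefix, so every month's backups land in the same "folder" of the bucket. The path logic can be sketched standalone (bash, since `${timestamp:0:7}` is a bash substring expansion); `S3_BUCKET` and `CLUSTER_NAME` below are hypothetical placeholders, not chart defaults:

```shell
#!/usr/bin/env bash
# Hypothetical placeholder values, not taken from the chart.
S3_BUCKET=my-schema-backups
CLUSTER_NAME=test

timestamp=$(date +%Y-%m-%d-%H-%M-%S)  # e.g. 2023-11-29-08-30-00
month=${timestamp:0:7}                # first 7 chars, i.e. YYYY-MM

# One prefix per month groups that month's archives together.
BACKUP_LOCATION=s3://$S3_BUCKET/${CLUSTER_NAME}-backups/schema-registry/$month
echo "$BACKUP_LOCATION"
```

The same derivation feeds both branches; only the bucket scheme (`gs://` vs `s3://`) and layout differ between the GCS and S3 destinations.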
helm/values.yaml (12 additions, 1 deletion)

@@ -12,7 +12,9 @@ image:

 imagePullSecrets: []
 
-configurationOverrides: {}
+configurationOverrides:
+  kafkastore.timeout.ms: 2000
 
 customEnv: {}
 schemaRegistryOpts: {}
 overrideGroupId: ""
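`kafkastore.timeout.ms` sets the timeout for operations on Schema Registry's backing Kafka store. Other Schema Registry settings can be supplied the same way; the second key below is a hypothetical addition to illustrate, not a chart default:

```yaml
configurationOverrides:
  kafkastore.timeout.ms: 2000
  # Hypothetical extra override; any Schema Registry config key can be listed here.
  kafkastore.topic.replication.factor: 3
```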
@@ -143,6 +145,14 @@ backup:
   imagePullSecrets: []
   cluster: "test"
   gcloud: {}
+  # GCS Bucket Configuration
+  # gcloud:
+  #   bucket: bucketName
+  #   secretName: gcs-bucket-secret
+  aws: {}
+  # AWS S3 Bucket Configuration
+  # aws:
+  #   bucket: bucketName/backups
   affinity: {}
   nodeSelector: {}
   securityContext: {}
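Since both `gcloud` and `aws` default to empty maps, which Helm's `if` treats as false, neither upload branch renders unless one is filled in. An S3 destination could be enabled with an override file like the following; the cluster and bucket names are hypothetical examples:

```yaml
# values-prod.yaml (hypothetical override file)
backup:
  cluster: "prod-kafka"
  topic: "_schemas"
  aws:
    bucket: my-schema-backups
```

Because `backup.gcloud` stays empty here, only the `aws s3 cp` branch and the `S3_BUCKET` env var are rendered into the cronjob, and the GCS secret volume is omitted entirely.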
@@ -155,3 +165,4 @@ servicemonitor:
   interval: 15s
   secure: false
   tlsConfig: {}
