[alertmanager] Ability to pass existing persistent volume claim for a single replica statefulset
Is your feature request related to a problem?
It would be great to be able to use an existing persistent volume claim (if one exists) in the statefulset.yaml template. I understand this would mean the StatefulSet should run only a single pod.
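For context, the kind of claim this is about is one provisioned outside the chart, for example something along the lines of the sketch below (the name alertmanager-data, the size, and the azurefile-csi storage class are only illustrative, taken from my test setup):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: alertmanager-data   # hypothetical claim, created outside the chart
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 50Mi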
Describe the solution you'd like.
Add a persistentVolumeClaim block below the volumeClaimTemplates block in statefulset.yaml:
{{- if and (.Values.persistence.enabled) (eq .Values.persistence.existingClaim "") }}
  volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        accessModes:
        {{- toYaml .Values.persistence.accessModes | nindent 10 }}
        resources:
          requests:
            storage: {{ .Values.persistence.size }}
        {{- if .Values.persistence.storageClass }}
        {{- if (eq "-" .Values.persistence.storageClass) }}
        storageClassName: ""
        {{- else }}
        storageClassName: {{ .Values.persistence.storageClass }}
        {{- end }}
        {{- end }}
{{- else if and (.Values.persistence.enabled) (not (eq .Values.persistence.existingClaim "")) (eq .Values.replicaCount 1) }}
        - name: storage
          persistentVolumeClaim:
            claimName: {{ .Values.persistence.existingClaim }}
{{- else }}
        - name: storage
          emptyDir: {}
{{- end }}
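For illustration, with persistence.existingClaim set to a non-empty value (say alertmanager-data, a hypothetical name) and replicaCount set to 1, the else-if branch above would append an entry to the pod spec's existing volumes: list instead of rendering volumeClaimTemplates, roughly:

        - name: storage
          persistentVolumeClaim:
            claimName: alertmanager-data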
Add an existingClaim key under persistence in values.yaml:
persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 50Mi
  existingClaim: ""
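For illustration, a user could then opt into a pre-created claim with an override along these lines (the claim name is hypothetical); leaving existingClaim empty keeps the current volumeClaimTemplates behaviour:

replicaCount: 1
persistence:
  enabled: true
  existingClaim: alertmanager-data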
Describe alternatives you've considered.
Maybe the logic for applying these conditions could be expressed more cleanly; I'm not sure of the best approach.
Additional context.
This is only a suggestion. I have tested it with an existing claim using the azurefile-csi storage class in an AKS cluster running a single Alertmanager pod, and it works as expected.