[QUESTION] Restart pods after changes on any secrets fails #820
Please attach the deployment, secret, and ConfigMap YAMLs for reference.
Oops, I forgot it, sorry. The deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo
  annotations:
    reloader.stakater.com/auto: "true"
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deployment-demo
  strategy: {}
  template:
    metadata:
      labels:
        app: deployment-demo
        reloader: enabled
    spec:
      containers:
      - image: bitnami/nginx:1.24.0
        name: nginx
        resources:
          requests:
            cpu: "0.25"
            memory: '64M'
          limits:
            cpu: "0.5"
            memory: '128M'
        volumeMounts:
        - name: demo-secret
          readOnly: true
          mountPath: "/data/secrets"
        - name: demo2-secret
          readOnly: true
          mountPath: "/data/secrets2"
        - name: demo-configmap
          readOnly: true
          mountPath: "/data/config"
      volumes:
      - name: demo-secret
        secret:
          secretName: demo
      - name: demo2-secret
        secret:
          secretName: demo2
      - name: demo-configmap
        configMap:
          name: demo
```

The secrets:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo
data:
  key: d29ybGQ=
  key2: d29ybGQ=
  key3: d29ybGQ=
---
apiVersion: v1
kind: Secret
metadata:
  name: demo2
data:
  key: d29ybGQ=
```

The ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  file.properties: |
    hello=world
```
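A quick way to exercise the reload path with these manifests is to patch a key in the mounted secret and watch the rollout. A sketch, where the patch value is arbitrary (`bmV3LXZhbHVl` is simply base64 for `new-value`):

```sh
# Change a key in the demo2 secret, then watch whether Reloader rolls the pods.
kubectl patch secret demo2 --type merge -p '{"data":{"key":"bmV3LXZhbHVl"}}'
kubectl rollout status deployment/demo
```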
I have tried it with the mentioned resources and it works as expected. Can you try one more time, please? If it still doesn't work, please share the Reloader deployment YAML file.
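One way to capture that manifest straight from a running cluster (a sketch; the namespace and deployment name are taken from the Helm output in the next comment):

```sh
# Dump the live Reloader deployment as YAML.
kubectl -n reloader get deployment reloader-reloader -o yaml
```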
Hello. I'm deploying it with the Helm chart. The values are:

```yaml
# Generated from deployments/kubernetes/templates/chart/values.yaml.tmpl
global:
  ## Reference to one or more secrets to be used when pulling images
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  imageRegistry: "xxxxx"
  imagePullSecrets: []

kubernetes:
  host: https://kubernetes.default

nameOverride: ""
fullnameOverride: ""

reloader:
  autoReloadAll: false
  isArgoRollouts: false
  isOpenshift: false
  ignoreSecrets: false
  ignoreConfigMaps: false
  reloadOnCreate: false
  reloadOnDelete: false
  syncAfterRestart: false
  reloadStrategy: default # Set to default, env-vars or annotations
  ignoreNamespaces: "kube-system,kube-public" # Comma separated list of namespaces to ignore
  namespaceSelector: "" # Comma separated list of k8s label selectors for namespaces selection
  resourceLabelSelector: "" # Comma separated list of k8s label selectors for configmap/secret selection
  logFormat: "" # json
  logLevel: debug # Log level to use (trace, debug, info, warning, error, fatal and panic)
  watchGlobally: true
  # Set to true to enable leadership election allowing you to run multiple replicas
  enableHA: false
  # Set to true if you have a pod security policy that enforces readOnlyRootFilesystem
  readOnlyRootFileSystem: true
  legacy:
    rbac: false
  matchLabels: {}
  # Set to true to expose a prometheus counter of reloads by namespace (this metric may have high cardinality in clusters with many namespaces)
  enableMetricsByNamespace: false
  deployment:
    # If you wish to run multiple replicas set reloader.enableHA = true
    replicas: 1
    revisionHistoryLimit: 2
    nodeSelector:
      # cloud.google.com/gke-nodepool: default-pool
    # An affinity stanza to be applied to the Deployment.
    # Example:
    # affinity:
    #   nodeAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       nodeSelectorTerms:
    #       - matchExpressions:
    #         - key: "node-role.kubernetes.io/infra-worker"
    #           operator: "Exists"
    affinity: {}
    securityContext:
      runAsNonRoot: true
      runAsUser: 65534
      seccompProfile:
        type: RuntimeDefault
    containerSecurityContext: {}
    #   capabilities:
    #     drop:
    #       - ALL
    #   allowPrivilegeEscalation: false
    #   readOnlyRootFilesystem: true
    # A list of tolerations to be applied to the Deployment.
    # Example:
    # tolerations:
    # - key: "node-role.kubernetes.io/infra-worker"
    #   operator: "Exists"
    #   effect: "NoSchedule"
    tolerations: []
    # Topology spread constraints for pod assignment
    # Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
    # Example:
    # topologySpreadConstraints:
    # - maxSkew: 1
    #   topologyKey: zone
    #   whenUnsatisfiable: DoNotSchedule
    #   labelSelector:
    #     matchLabels:
    #       app: my-app
    topologySpreadConstraints: []
    annotations: {}
    labels:
      provider: stakater
      group: com.stakater.platform
      version: v1.2.0
    image:
      name: ghcr.io/stakater/reloader
      base: stakater/reloader
      tag: v1.2.0
      pullPolicy: IfNotPresent
    # Support for extra environment variables.
    env:
      # Open supports Key value pair as environment variables.
      open:
      # secret supports Key value pair as environment variables. It gets the values based on keys from default reloader secret if any.
      secret:
        # ALERT_ON_RELOAD: <"true"|"false">
        # ALERT_SINK: <"slack"> # By default it will be a raw text based webhook
        # ALERT_WEBHOOK_URL: <"webhook_url">
        # ALERT_ADDITIONAL_INFO: <"Additional Info like Cluster Name if needed">
      # field supports Key value pair as environment variables. It gets the values from other fields of pod.
      field:
      # existing secret, you can specify multiple existing secrets, for each
      # specify the env var name followed by the key in existing secret that
      # will be used to populate the env var
      existing:
        # existing_secret_name:
        #   ALERT_ON_RELOAD: alert_on_reload_key
        #   ALERT_SINK: alert_sink_key
        #   ALERT_WEBHOOK_URL: alert_webhook_key
        #   ALERT_ADDITIONAL_INFO: alert_additional_info_key
    # Liveness and readiness probe timeout values.
    livenessProbe: {}
    #   timeoutSeconds: 5
    #   failureThreshold: 5
    #   periodSeconds: 10
    #   successThreshold: 1
    readinessProbe: {}
    #   timeoutSeconds: 15
    #   failureThreshold: 5
    #   periodSeconds: 10
    #   successThreshold: 1
    # Specify resource requests/limits for the deployment.
    # Example:
    # resources:
    #   limits:
    #     cpu: "100m"
    #     memory: "512Mi"
    #   requests:
    #     cpu: "10m"
    #     memory: "128Mi"
    resources: {}
    pod:
      annotations: {}
      priorityClassName: ""
    # imagePullSecrets:
    # - name: myregistrykey
    # Put "0" in either to have go runtime ignore the set value.
    # Otherwise, see https://pkg.go.dev/runtime#hdr-Environment_Variables for GOMAXPROCS and GOMEMLIMIT
    gomaxprocsOverride: ""
    gomemlimitOverride: ""
  service: {}
  # labels: {}
  # annotations: {}
  # port: 9090
  rbac:
    enabled: true
    labels: {}
  # Service account config for the agent pods
  serviceAccount:
    # Specifies whether a ServiceAccount should be created
    create: true
    labels: {}
    annotations: {}
    # The name of the ServiceAccount to use.
    # If not set and create is true, a name is generated using the fullname template
    name:
  # Optional flags to pass to the Reloader entrypoint
  # Example:
  # custom_annotations:
  #   configmap: "my.company.com/configmap"
  #   secret: "my.company.com/secret"
  custom_annotations: {}
  serviceMonitor:
    # Deprecated: Service monitor will be removed in future releases of reloader in favour of Pod monitor
    # Enabling this requires service to be enabled as well, or no endpoints will be found
    enabled: false
    # Set the namespace the ServiceMonitor should be deployed
    # namespace: monitoring
    # Fallback to the prometheus default unless specified
    # interval: 10s
    ## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.
    # scheme: ""
    ## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS.
    ## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig
    # tlsConfig: {}
    # bearerTokenFile:
    # Fallback to the prometheus default unless specified
    # timeout: 30s
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    labels: {}
    ## Used to pass annotations that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    annotations: {}
    # Retain the job and instance labels of the metrics pushed to the Pushgateway
    # [Scraping Pushgateway](https://github.com/prometheus/pushgateway#configure-the-pushgateway-as-a-target-to-scrape)
    honorLabels: true
    ## Metric relabel configs to apply to samples before ingestion.
    ## [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs)
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]
    ## Relabel configs to apply to samples before ingestion.
    ## [Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config)
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace
    targetLabels: []
  podMonitor:
    enabled: false
    # Set the namespace the podMonitor should be deployed
    # namespace: monitoring
    # Fallback to the prometheus default unless specified
    # interval: 10s
    ## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.
    # scheme: ""
    ## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS.
    ## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig
    # tlsConfig: {}
    # bearerTokenSecret:
    # Fallback to the prometheus default unless specified
    # timeout: 30s
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    labels: {}
    ## Used to pass annotations that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    annotations: {}
    # Retain the job and instance labels of the metrics pushed to the Pushgateway
    # [Scraping Pushgateway](https://github.com/prometheus/pushgateway#configure-the-pushgateway-as-a-target-to-scrape)
    honorLabels: true
    ## Metric relabel configs to apply to samples before ingestion.
    ## [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs)
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]
    ## Relabel configs to apply to samples before ingestion.
    ## [Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config)
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace
    podTargetLabels: []
  podDisruptionBudget:
    enabled: false
    # Set the minimum available replicas
    # minAvailable: 1
  netpol:
    enabled: false
    from: []
    # - podSelector:
    #     matchLabels:
    #       app.kubernetes.io/name: prometheus
    to: []
  # Enable vertical pod autoscaler
  verticalPodAutoscaler:
    enabled: false
    # Recommender responsible for generating recommendation for the object.
    # List should be empty (then the default recommender will generate the recommendation)
    # or contain exactly one recommender.
    # recommenders:
    # - name: custom-recommender-performance
    # List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory
    controlledResources: []
    # Specifies which resource values should be controlled: RequestsOnly or RequestsAndLimits.
    # controlledValues: RequestsAndLimits
    # Define the max allowed resources for the pod
    maxAllowed: {}
    # cpu: 200m
    # memory: 100Mi
    # Define the min allowed resources for the pod
    minAllowed: {}
    # cpu: 200m
    # memory: 100Mi
    updatePolicy:
      # Specifies minimal number of replicas which need to be alive for VPA Updater to attempt pod eviction
      # minReplicas: 1
      # Specifies whether recommended updates are applied when a Pod is started and whether recommended updates
      # are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto".
      updateMode: Auto
  volumeMounts: []
  volumes: []
  webhookUrl: ""
```

The final deployment looks like this:

```yaml
---
# Source: reloader/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-namespace: "reloader"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.2.0"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader
  namespace: reloader
---
# Source: reloader/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-namespace: "reloader"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.2.0"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader-role
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - "batch"
    resources:
      - cronjobs
    verbs:
      - list
      - get
  - apiGroups:
      - "batch"
    resources:
      - jobs
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: reloader/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-namespace: "reloader"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.2.0"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reloader-reloader-role
subjects:
  - kind: ServiceAccount
    name: reloader-reloader
    namespace: reloader
---
# Source: reloader/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    meta.helm.sh/release-namespace: "reloader"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.2.0"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
    group: com.stakater.platform
    provider: stakater
    version: v1.2.0
  name: reloader-reloader
  namespace: reloader
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: reloader-reloader
      release: "reloader"
  template:
    metadata:
      labels:
        app: reloader-reloader
        chart: "reloader-1.2.0"
        release: "reloader"
        heritage: "Helm"
        app.kubernetes.io/managed-by: "Helm"
        group: com.stakater.platform
        provider: stakater
        version: v1.2.0
    spec:
      containers:
        - image: "xxxx/stakater/reloader:v1.2.0"
          imagePullPolicy: IfNotPresent
          name: reloader-reloader
          env:
            - name: GOMAXPROCS
              valueFrom:
                resourceFieldRef:
                  resource: limits.cpu
                  divisor: '1'
            - name: GOMEMLIMIT
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: '1'
          ports:
            - name: http
              containerPort: 9090
          livenessProbe:
            httpGet:
              path: /live
              port: http
            timeoutSeconds: 5
            failureThreshold: 5
            periodSeconds: 10
            successThreshold: 1
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /metrics
              port: http
            timeoutSeconds: 5
            failureThreshold: 5
            periodSeconds: 10
            successThreshold: 1
            initialDelaySeconds: 10
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /tmp/
              name: tmp-volume
          args:
            - "--log-level=debug"
            - "--namespaces-to-ignore=kube-system,kube-public"
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: reloader-reloader
      volumes:
        - emptyDir: {}
          name: tmp-volume
```
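Given those values, debugging usually starts from Reloader's own logs and from the target deployment's pod template. A minimal sketch, assuming the names above (with the default reload strategy, Reloader updates a `STAKATER_*` env var on the workload when it fires):

```sh
# Follow Reloader's logs while the secret is being changed
# (log level is already set to debug in the values above).
kubectl -n reloader logs deploy/reloader-reloader -f

# Then check whether the demo deployment's pod template was touched;
# the exact env var name (e.g. STAKATER_DEMO2_SECRET) is assumed here
# from Reloader's STAKATER_<NAME>_<KIND> naming scheme.
kubectl get deployment demo -o yaml | grep -i stakater
```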
Hello,
I'm just discovering this tool. It is awesome! I have successfully restarted pods after a secret or ConfigMap change.
But it only works if the secret/ConfigMap has the same name as the deployment:
To test, I created a new secret called demo2 and mounted it in a volume.
If I change the demo2 secret, nothing happens. Am I missing something?
If I configure the reload annotation and list all the secrets statically, it works. But my goal is to restart the pods when any of the secrets change; I have deployments with a lot of secrets and ConfigMaps.
Thanks.
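For reference, the two approaches being compared correspond to these documented Reloader annotations (a sketch; the secret names are taken from the manifests above):

```yaml
# Reload whenever any secret or configmap used by this workload changes:
metadata:
  annotations:
    reloader.stakater.com/auto: "true"

# Or list the watched secrets explicitly:
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "demo,demo2"
```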