Argo CD application for wiz-kubernetes-integration becomes OutOfSync after a while #273
Comments
We currently have three separate clusters, with two of the three exhibiting this behavior. After successfully resyncing the two by manually deleting that service account, all three clusters have remained healthy and in sync for 24 hours. I will continue to monitor. We have 4 other clusters that will be deployed to in the near future, so I'll be able to report their status soon.
Another 24 hours and no symptoms. Closing ticket.
4 out of our 8 clusters are reporting OutOfSync in Argo CD this morning. Will research and post relevant logs.
wiz-kubernetes-integration-wiz-admission-controller logs:
Looking into any additional network policy modifications needed based on these entries.
We run an alternate pod IP scheme (Calico) and found a comment in the admission controller values template (https://github.com/wiz-sec/charts/blob/master/wiz-admission-controller/values.yaml) about the webhook and the host network flag. I've set it to true and selected a port other than 10250. I guess I need to wait for the webhookCert to renew to see if this works.
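For reference, a hypothetical values override along these lines; the exact key names are an assumption inferred from the values.yaml comment linked above, not verified against the chart:

```yaml
# Hypothetical override for the wiz-admission-controller subchart.
# Key names are assumptions based on the values.yaml comment; check the
# chart before applying.
wiz-admission-controller:
  webhook:
    hostNetwork: true  # run the webhook pod on the node network so the
                       # API server can reach it despite the Calico pod CIDR
    port: 9443         # any free host port other than the kubelet's 10250
```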
I am experiencing this on all my clusters as well. Any chance we can get the Wiz team to look at this?
Submitted support ticket https://support.wiz.io/hc/en-us/requests/24387
We are still experiencing these issues on several clusters. On most of our nonprod EKS clusters, we scale to 0 nodes overnight and on weekends. These specific clusters cannot successfully hydrate pods in the morning and report an unhealthy state.
Original issue description:
Some time after a successful install, Argo CD reports that the application (the wiz-kubernetes-integration Helm chart) is out of sync and unable to self-heal.
wiz-kubernetes-integration-wiz-admission-controller:
Reported manifest diff that fails to resolve/self-heal:
rollme.webhookCert
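This kind of perpetually drifting field typically comes from Helm's "automatically roll deployments" trick, where a random annotation value is rendered on every `helm template` run so pods restart when the webhook certificate rotates. A minimal sketch of that pattern, assuming the Wiz chart uses the standard approach (the actual template may differ):

```yaml
# Common Helm pattern: a fresh random value on every render, which in a
# GitOps setup means a diff that never converges.
spec:
  template:
    metadata:
      annotations:
        rollme.webhookCert: {{ randAlphaNum 5 | quote }}
```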
Argo CD sync logs:
deleting wiz-auto-modify-connector service account
Workaround:
Manually deleting the service account resumes sync successfully. This step seems to kick off the integration job, which then properly reinstalls all the respective resources.
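Since deleting the service account (e.g. `kubectl delete serviceaccount wiz-auto-modify-connector -n <namespace>`) is only a temporary fix, a standard Argo CD mitigation for a randomly regenerated field is to exclude it from diffing. A sketch, assuming the annotation sits on the admission controller Deployment's pod template; adjust the kind and path to wherever `rollme.webhookCert` actually appears in the diff:

```yaml
# Sketch of an Argo CD Application snippet that ignores the churning
# annotation. Resource kind and jq path are assumptions.
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jqPathExpressions:
        - .spec.template.metadata.annotations["rollme.webhookCert"]
  syncPolicy:
    syncOptions:
      - RespectIgnoreDifferences=true  # so automated sync/self-heal honors the ignore rule
```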
Environment:
app.kubernetes.io/chartName: wiz-admission-controller
app.kubernetes.io/instance: wiz-kubernetes-integration
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: wiz-admission-controller
app.kubernetes.io/version: '2.4'
helm.sh/chart: wiz-admission-controller-3.4.13
Wiz Helm chart 0.1.85
AWS EKS 1.27