What determines when a new scan job should be run? #365
-
Hey,

We've deployed starboard-operator v0.9.0 in our Kubernetes cluster, running with default settings except for one setting that I tried to edit. Is this expected?

I also have a question about when the next scan should occur; there is no point if we only scan once. Is there any kind of setting for this? Hope someone can help me. If I solve this I will make a PR to update the documentation.

Best regards

Edit: It seems like starboard-operator never deletes the completed jobs, and is therefore unable to initiate new jobs. Any suggestions?
-
👋 @mvahlberg To help you out I have a couple of questions:
The operator scans existing workloads that do not have vulnerability reports. It also rescans a workload whenever its deployment descriptor changes. We haven't implemented rescanning at regular intervals yet, but it is on our roadmap. The operator should delete successfully completed scan jobs, but it may not delete failed ones, which would be a bug. I'm double-checking that, but please confirm on your end.
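If it helps to verify this on your end, here is a minimal sketch of the commands I would use, assuming the operator runs in the default starboard-operator namespace and the Starboard CRDs are installed (adjust the namespace to your deployment):

# Vulnerability reports the operator has already generated
$ kubectl get vulnerabilityreports --all-namespaces

# Scan jobs still present in the operator's namespace; successfully
# completed ones should have been cleaned up automatically
$ kubectl get jobs -n starboard-operator

# If a failed job is stuck and blocking new scans, it can be removed manually
$ kubectl delete job <job-name> -n starboard-operator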
-
I played a bit and I was able to reproduce a similar scenario when Starboard scans so-called static pods, i.e. pods created by the kubelet, which are owned by Kubernetes Nodes rather than by a replication controller (ReplicaSet, ReplicationController), DaemonSet, or StatefulSet. For example, in the listing below you can see the scan job that failed due to #234:

$ kubectl get job -A --show-labels
NAMESPACE            NAME                                  COMPLETIONS   DURATION   AGE   LABELS
starboard-operator   scan-vulnerabilityreport-5f8cc7d8d8   1/1           11s        26m   app.kubernetes.io/managed-by=starboard-operator,pod-spec-hash=6779cdb7c7,starboard.resource.kind=DaemonSet,starboard.resource.name=kindnet,starboard.resource.namespace=kube-system,vulnerabilityReport.scanner=true
starboard-operator   scan-vulnerabilityreport-78fcf7d8b    1/1           6s         26m   app.kubernetes.io/managed-by=starboard-operator,pod-spec-hash=7c57456746,starboard.resource.kind=Node,starboard.resource.name=kind-control-plane,starboard.resource.namespace=kube-system,vulnerabilityReport.scanner=true

I'm currently working on the fix for #234, but in the meantime could you check the kind of resources for which you saw failed scan jobs? You can do that by displaying the labels we add to each scan job, e.g.:

$ kubectl get job -n starboard-operator -L starboard.resource.kind -L starboard.resource.namespace
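To narrow things down, a label selector on the same labels should also work; a small sketch (the kind value Node is just an example taken from the listing above and may differ in your cluster):

# Only scan jobs created for Node-owned (static) pods
$ kubectl get job -n starboard-operator -l starboard.resource.kind=Node -L starboard.resource.name

# Failed jobs show up with 0/1 in the COMPLETIONS column
$ kubectl get job -n starboard-operator --show-labels | grep '0/1'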