Add Initial support for CAPG Provider #243

Open

wants to merge 1 commit into base: main
Conversation

@upodroid commented Dec 21, 2022

/cc @jayunit100

I have tested this feature and it works.

I noticed some other gaps in both crashd and CAPG.

crashd potential improvements:

  • SSH handling in general needs a rethink (see the sketch after this list):

    • jump_user should default to username when not provided; in GCP that is almost always the right assumption.
    • Accepting a user-supplied SSH config file would be great for running crashd in regulated networks. I wouldn't be able to run this at work, where I have to hop through several bastions to egress the corporate network.
    • private_key_path defaults to ~/.ssh/id_rsa, which isn't great; by default it should be null.
    • Offer an option to use the ssh-agent in ssh_config.go.
    • Please don't manipulate the ssh-agent keys through crashd.
  • kubeconfig loading is a bit sketchy

    Error: failed to unpack input arguments: capg_provider: missing argument for mgmt_kube_config

    Ideally, we would assume the default context in $KUBECONFIG is the management cluster, and the workload cluster kubeconfig would be pulled out by effectively running clusterctl get kubeconfig NAME and then used to talk to the workload cluster. Unfortunately that is not the case today, and I have to specify a kubeconfig path explicitly.
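
To make the SSH and kubeconfig points concrete, here is a rough sketch of what a capg_provider script could look like if those defaults existed. Only ssh_config, kube_config, resources, capture and the mgmt_kube_config argument name are taken from the output above; the other argument names (notably workload_cluster) are assumptions for illustration, not the current crashd API:

# sketch only: jump_user and private_key_path are omitted, illustrating the
# proposed defaults (jump_user falls back to username, private_key_path is
# null so the ssh-agent or the user's own SSH config can be used instead)
ssh = ssh_config(
    username=os.username,
    jump_host="35.204.32.179",
)

# mgmt_kube_config stays explicit, since today the default context in
# $KUBECONFIG is not assumed to be the management cluster
provider = capg_provider(
    ssh_config=ssh,
    mgmt_kube_config=kube_config(path=args.mgmt_kube_config),  # assumed arg name
    workload_cluster="dev",                                     # assumed arg name
)

nodes = resources(provider=provider)
capture(cmd="sudo df -i", resources=nodes)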

CAPG comments
CAPG manifest
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: dev
  namespace: capg-system
  labels:
    cni: cilium 
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: dev-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: GCPCluster
    name: dev
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPCluster
metadata:
  name: dev
  namespace: capg-system
spec:
  network:
    name: capg
  project: coen-mahamed-ali
  region: europe-west4
  failureDomains:
    - europe-west4-b
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: dev-control-plane
  namespace: capg-system
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          cloud-provider: gce
        timeoutForControlPlane: 20m
      controllerManager:
        extraArgs:
          allocate-node-cidrs: "false"
          cloud-provider: gce
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: gce
        name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: gce
        name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: GCPMachineTemplate
      name: dev-control-plane
  replicas: 1
  version: v1.25.4
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: dev-control-plane
  namespace: capg-system
spec:
  template:
    spec:
      image: projects/k8s-staging-cluster-api-gcp/global/images/cluster-api-ubuntu-2004-v1-25-4-nightly
      instanceType: e2-standard-2
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: dev-md-0
  namespace: capg-system
spec:
  clusterName: dev
  replicas: 1
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: dev-md-0
      clusterName: dev
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: GCPMachineTemplate
        name: dev-md-0
      version: v1.25.4
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: dev-md-0
  namespace: capg-system
spec:
  template:
    spec:
      image: projects/k8s-staging-cluster-api-gcp/global/images/cluster-api-ubuntu-2004-v1-25-4-nightly
      instanceType: e2-standard-4
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: dev-md-0
  namespace: capg-system
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: gce
          name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
crashd run with CAPG cluster
 REDACTED  MCW0CDP3YY  ~  Desktop  Git  crash-diagnostics   main  7✎  7+  ERROR  $   ./crashd --debug run examples/capg_provider.crsh --args-file args.txt 
INFO[0000] Detailed logs being written to: /Users/REDACTED/.crashd/crashd_2022-12-21T23-39-48.log 
DEBU[0000] creating working directory /tmp/crashd       
DEBU[0000] Search filters groups:[core]; categories:[]; kinds:[nodes]; namespaces:[]; versions:[]; names:[]; labels:[] containers:[] 
DEBU[0000] searching through 1 groups                   
DEBU[0000] searching for nodes objects in [group=v1; non-namespced; labels=] 
DEBU[0000] found 2 nodes in [group=v1; non-namespaced; labels=] 
DEBU[0000] applying filters on 1 results                
BEFORE IT BREAKS
"capg_provider"(hosts = ["10.164.0.2", "10.164.0.3"], kind = "capg_provider", kube_config = "/var/folders/b1/dthn83bs2qbcrg38qszm22440000gn/T/dev-workload-config3862550388", ssh_config = "ssh_config"(conn_timeout = 30, jump_host = "35.204.32.179", jump_user = "REDACTED", max_retries = 30, port = "22", private_key_path = "/Users/REDACTED/.ssh/google_compute_engine", username = "REDACTED"), transport = "ssh")
THIS IS BROKEN
[]
THIS IS NOT BROKEN
["host_resource"(host = "10.164.0.2", kind = "host_resource", provider = "host_list_provider", ssh_config = "ssh_config"(conn_timeout = 30, jump_host = "35.204.32.179", jump_user = "REDACTED", max_retries = 30, port = "22", private_key_path = "/Users/REDACTED/.ssh/google_compute_engine", username = "REDACTED"), transport = "ssh"), "host_resource"(host = "10.164.0.3", kind = "host_resource", provider = "host_list_provider", ssh_config = "ssh_config"(conn_timeout = 30, jump_host = "35.204.32.179", jump_user = "REDACTED", max_retries = 30, port = "22", private_key_path = "/Users/REDACTED/.ssh/google_compute_engine", username = "REDACTED"), transport = "ssh")]
DEBU[0000] capture: executing command on 2 resources    
DEBU[0000] capture: created capture dir: /tmp/crashd/10_164_0_2 
DEBU[0000] capture: capturing output of [cmd=sudo df -i] => [/tmp/crashd/10_164_0_2/sudo_df__i.txt] from 10.164.0.2 using ssh 
DEBU[0000] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo df -i" 
DEBU[0003] capture: created capture dir: /tmp/crashd/10_164_0_3 
DEBU[0003] capture: capturing output of [cmd=sudo df -i] => [/tmp/crashd/10_164_0_3/sudo_df__i.txt] from 10.164.0.3 using ssh 
DEBU[0003] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo df -i" 
DEBU[0005] capture: executing command on 2 resources    
DEBU[0005] capture: created capture dir: /tmp/crashd/10_164_0_2 
DEBU[0005] capture: capturing output of [cmd=sudo crictl info] => [/tmp/crashd/10_164_0_2/sudo_crictl_info.txt] from 10.164.0.2 using ssh 
DEBU[0005] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo crictl info" 
DEBU[0008] capture: created capture dir: /tmp/crashd/10_164_0_3 
DEBU[0008] capture: capturing output of [cmd=sudo crictl info] => [/tmp/crashd/10_164_0_3/sudo_crictl_info.txt] from 10.164.0.3 using ssh 
DEBU[0008] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo crictl info" 
DEBU[0010] capture: executing command on 2 resources    
DEBU[0010] capture: created capture dir: /tmp/crashd/10_164_0_2 
DEBU[0010] capture: capturing output of [cmd=df -h /var/lib/containerd] => [/tmp/crashd/10_164_0_2/df__h__var_lib_containerd.txt] from 10.164.0.2 using ssh 
DEBU[0010] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "df -h /var/lib/containerd" 
DEBU[0013] capture: created capture dir: /tmp/crashd/10_164_0_3 
DEBU[0013] capture: capturing output of [cmd=df -h /var/lib/containerd] => [/tmp/crashd/10_164_0_3/df__h__var_lib_containerd.txt] from 10.164.0.3 using ssh 
DEBU[0013] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "df -h /var/lib/containerd" 
DEBU[0016] capture: executing command on 2 resources    
DEBU[0016] capture: created capture dir: /tmp/crashd/10_164_0_2 
DEBU[0016] capture: capturing output of [cmd=sudo systemctl status kubelet] => [/tmp/crashd/10_164_0_2/sudo_systemctl_status_kubelet.txt] from 10.164.0.2 using ssh 
DEBU[0016] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo systemctl status kubelet" 
DEBU[0018] capture: created capture dir: /tmp/crashd/10_164_0_3 
DEBU[0018] capture: capturing output of [cmd=sudo systemctl status kubelet] => [/tmp/crashd/10_164_0_3/sudo_systemctl_status_kubelet.txt] from 10.164.0.3 using ssh 
DEBU[0018] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo systemctl status kubelet" 
DEBU[0020] capture: executing command on 2 resources    
DEBU[0020] capture: created capture dir: /tmp/crashd/10_164_0_2 
DEBU[0020] capture: capturing output of [cmd=sudo systemctl status containerd] => [/tmp/crashd/10_164_0_2/sudo_systemctl_status_containerd.txt] from 10.164.0.2 using ssh 
DEBU[0020] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo systemctl status containerd" 
DEBU[0022] capture: created capture dir: /tmp/crashd/10_164_0_3 
DEBU[0022] capture: capturing output of [cmd=sudo systemctl status containerd] => [/tmp/crashd/10_164_0_3/sudo_systemctl_status_containerd.txt] from 10.164.0.3 using ssh 
DEBU[0022] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo systemctl status containerd" 
DEBU[0024] capture: executing command on 2 resources    
DEBU[0024] capture: created capture dir: /tmp/crashd/10_164_0_2 
DEBU[0024] capture: capturing output of [cmd=sudo journalctl -xeu kubelet] => [/tmp/crashd/10_164_0_2/sudo_journalctl__xeu_kubelet.txt] from 10.164.0.2 using ssh 
DEBU[0024] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo journalctl -xeu kubelet" 
DEBU[0026] capture: created capture dir: /tmp/crashd/10_164_0_3 
DEBU[0026] capture: capturing output of [cmd=sudo journalctl -xeu kubelet] => [/tmp/crashd/10_164_0_3/sudo_journalctl__xeu_kubelet.txt] from 10.164.0.3 using ssh 
DEBU[0026] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo journalctl -xeu kubelet" 
DEBU[0028] capture: executing command on 2 resources    
DEBU[0028] capture: created capture dir: /tmp/crashd/10_164_0_2 
DEBU[0028] capture: capturing output of [cmd=sudo cat /var/log/cloud-init-output.log] => [/tmp/crashd/10_164_0_2/sudo_cat__var_log_cloud_init_output_log.txt] from 10.164.0.2 using ssh 
DEBU[0028] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo cat /var/log/cloud-init-output.log" 
DEBU[0030] capture: created capture dir: /tmp/crashd/10_164_0_3 
DEBU[0030] capture: capturing output of [cmd=sudo cat /var/log/cloud-init-output.log] => [/tmp/crashd/10_164_0_3/sudo_cat__var_log_cloud_init_output_log.txt] from 10.164.0.3 using ssh 
DEBU[0030] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo cat /var/log/cloud-init-output.log" 
DEBU[0032] capture: executing command on 2 resources    
DEBU[0032] capture: created capture dir: /tmp/crashd/10_164_0_2 
DEBU[0032] capture: capturing output of [cmd=sudo cat /var/log/cloud-init.log] => [/tmp/crashd/10_164_0_2/sudo_cat__var_log_cloud_init_log.txt] from 10.164.0.2 using ssh 
DEBU[0032] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo cat /var/log/cloud-init.log" 
DEBU[0035] capture: created capture dir: /tmp/crashd/10_164_0_3 
DEBU[0035] capture: capturing output of [cmd=sudo cat /var/log/cloud-init.log] => [/tmp/crashd/10_164_0_3/sudo_cat__var_log_cloud_init_log.txt] from 10.164.0.3 using ssh 
DEBU[0035] ssh.run: /usr/bin/ssh -q -o StrictHostKeyChecking=no -i /Users/REDACTED/.ssh/google_compute_engine -p 22 [email protected] -o "ProxyCommand ssh -o StrictHostKeyChecking=no -W %h:%p -i /Users/REDACTED/.ssh/google_compute_engine [email protected]" "sudo cat /var/log/cloud-init.log" 
DEBU[0037] kube_capture(what=logs)                      
DEBU[0037] Search filters groups:[core]; categories:[]; kinds:[pods]; namespaces:[default kube-system]; versions:[]; names:[]; labels:[] containers:[] 
DEBU[0037] searching through 1 groups                   
DEBU[0037] searching for pods objects in [group=v1; namespace=default; labels=] 
DEBU[0037] WARN: found 0 pods in [group=v1; namespace=default; labels=] 
DEBU[0037] searching for pods objects in [group=v1; namespace=kube-system; labels=] 
DEBU[0037] found 11 pods in [group=v1; namespace=kube-system; labels=] 
DEBU[0037] applying filters on 1 results                
DEBU[0037] objectWriter: saving pods search results to: /tmp/crashd/kubecapture/core_v1/kube-system/pods-202212212340.8547.json 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-ckrdq/mount-cgroup/mount-cgroup.log 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-ckrdq/apply-sysctl-overwrites/apply-sysctl-overwrites.log 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-ckrdq/mount-bpf-fs/mount-bpf-fs.log 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-ckrdq/clean-cilium-state/clean-cilium-state.log 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-ckrdq/cilium-agent/cilium-agent.log 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-operator-69b677f97c-7b6sc/cilium-operator/cilium-operator.log 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-psrzp/mount-cgroup/mount-cgroup.log 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-psrzp/apply-sysctl-overwrites/apply-sysctl-overwrites.log 
DEBU[0037] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-psrzp/mount-bpf-fs/mount-bpf-fs.log 
DEBU[0038] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-psrzp/clean-cilium-state/clean-cilium-state.log 
DEBU[0038] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/cilium-psrzp/cilium-agent/cilium-agent.log 
DEBU[0038] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/coredns-565d847f94-6mmvf/coredns/coredns.log 
DEBU[0038] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/coredns-565d847f94-wszzm/coredns/coredns.log 
DEBU[0038] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/etcd-dev-control-plane-7r92k/etcd/etcd.log 
DEBU[0038] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/kube-apiserver-dev-control-plane-7r92k/kube-apiserver/kube-apiserver.log 
DEBU[0038] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/kube-controller-manager-dev-control-plane-7r92k/kube-controller-manager/kube-controller-manager.log 
DEBU[0039] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/kube-proxy-5c27d/kube-proxy/kube-proxy.log 
DEBU[0039] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/kube-proxy-h48wt/kube-proxy/kube-proxy.log 
DEBU[0039] Writing pod container log /tmp/crashd/kubecapture/core_v1/kube-system/kube-scheduler-dev-control-plane-7r92k/kube-scheduler/kube-scheduler.log 
DEBU[0039] kube_capture(what=objects)                   
DEBU[0039] Search filters groups:[]; categories:[]; kinds:[pods services]; namespaces:[default kube-system]; versions:[]; names:[]; labels:[] containers:[] 
DEBU[0040] searching through 25 groups                  
DEBU[0040] searching for pods objects in [group=v1; namespace=default; labels=] 
DEBU[0040] WARN: found 0 pods in [group=v1; namespace=default; labels=] 
DEBU[0040] searching for pods objects in [group=v1; namespace=kube-system; labels=] 
DEBU[0040] found 11 pods in [group=v1; namespace=kube-system; labels=] 
DEBU[0040] applying filters on 1 results                
DEBU[0040] searching for services objects in [group=v1; namespace=default; labels=] 
DEBU[0040] found 1 services in [group=v1; namespace=default; labels=] 
DEBU[0040] searching for services objects in [group=v1; namespace=kube-system; labels=] 
DEBU[0040] found 1 services in [group=v1; namespace=kube-system; labels=] 
DEBU[0040] applying filters on 2 results                
DEBU[0040] objectWriter: saving pods search results to: /tmp/crashd/kubecapture/core_v1/kube-system/pods-202212212340.6329.json 
DEBU[0040] objectWriter: saving services search results to: /tmp/crashd/kubecapture/core_v1/default/services-202212212340.6398.json 
DEBU[0040] objectWriter: saving services search results to: /tmp/crashd/kubecapture/core_v1/kube-system/services-202212212340.6405.json 
DEBU[0040] kube_capture(what=objects)                   
DEBU[0040] Search filters groups:[apps]; categories:[]; kinds:[deployments replicasets]; namespaces:[default kube-system]; versions:[]; names:[]; labels:[] containers:[] 
DEBU[0040] searching through 1 groups                   
DEBU[0040] searching for deployments objects in [group=apps/v1; namespace=default; labels=] 
DEBU[0040] WARN: found 0 deployments in [group=apps/v1; namespace=default; labels=] 
DEBU[0040] searching for deployments objects in [group=apps/v1; namespace=kube-system; labels=] 
DEBU[0040] found 2 deployments in [group=apps/v1; namespace=kube-system; labels=] 
DEBU[0040] applying filters on 1 results                
DEBU[0040] searching for replicasets objects in [group=apps/v1; namespace=default; labels=] 
DEBU[0040] WARN: found 0 replicasets in [group=apps/v1; namespace=default; labels=] 
DEBU[0040] searching for replicasets objects in [group=apps/v1; namespace=kube-system; labels=] 
DEBU[0040] found 2 replicasets in [group=apps/v1; namespace=kube-system; labels=] 
DEBU[0040] applying filters on 1 results                
DEBU[0040] objectWriter: saving deployments search results to: /tmp/crashd/kubecapture/apps_v1/kube-system/deployments-202212212340.9168.json 
DEBU[0040] objectWriter: saving replicasets search results to: /tmp/crashd/kubecapture/apps_v1/kube-system/replicasets-202212212340.9180.json 
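
For reference, the ssh.run lines above spell out the bastion hop as an inline ProxyCommand with an explicit key on every call. The same hop expressed as a user-supplied OpenSSH config stanza (the regulated-network case mentioned earlier) would look roughly like this, reusing the redacted values from the log:

# equivalent of the logged ProxyCommand hop via the jump host
Host 10.164.0.*
    User REDACTED
    IdentityFile ~/.ssh/google_compute_engine
    ProxyJump REDACTED@35.204.32.179
    StrictHostKeyChecking no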

@vmwclabot

@upodroid, you must sign our contributor license agreement before your changes are merged. Click here to sign the agreement. If you are a VMware employee, read this for further instruction.

@vmwclabot

@upodroid, we have received your signed contributor license agreement. It will be reviewed by VMware shortly. Another comment will be added to the pull request to notify you when the merge can proceed.

@vmwclabot

@upodroid, VMware has rejected your signed contributor license agreement. The merge can not proceed until the agreement has been resigned. Click here to resign the agreement. Reject reason:

Please provide an address.

@vmwclabot

@upodroid, we have received your signed contributor license agreement. It will be reviewed by VMware shortly. Another comment will be added to the pull request to notify you when the merge can proceed.

@vmwclabot

@upodroid, VMware has approved your signed contributor license agreement.
