perf: kubectl delete recipe gets stuck for a couple of minutes #139

Open
harshshekhar15 opened this issue Aug 24, 2020 · 0 comments

Issue

kubectl delete recipe gets stuck for a couple of minutes. In the run captured below the Recipe was marked for deletion at 12:05:16 (see its deletionTimestamp) while the dope controller was still trying to reconcile it at 12:08:34, roughly three minutes later.
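
Rough steps to reproduce (a sketch; recipe.yaml stands for whatever file holds the Recipe shown below, the name and namespace are taken from it):

# apply the Recipe and let it run to its terminal phase (Failed here)
kubectl apply -f recipe.yaml

# the delete blocks for a couple of minutes instead of returning promptly;
# kubectl delete waits for the object to actually go away (--wait defaults to true),
# and the Recipe still carries the protect.gctl.metac.openebs.io/dope-finalize-recipe finalizer
time kubectl delete recipe assert-kubera-pod-running -n d-testing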

Details

  • Recipe YAML (note the set deletionTimestamp and the pending protect.gctl.metac.openebs.io/dope-finalize-recipe finalizer)
apiVersion: v1
items:
- apiVersion: dope.mayadata.io/v1
  kind: Recipe
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"dope.mayadata.io/v1","kind":"Recipe","metadata":{"annotations":{},"labels":{"d-testing.dope.mayadata.io/inference":"true"},"name":"assert-kubera-pod-running","namespace":"d-testing"},"spec":{"tasks":[{"assert":{"state":{"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"name":"alertmanager"},"namespace":"default"},"status":{"phase":"Running"}},"stateCheck":{"count":1,"stateCheckOperator":"ListCountEquals"}},"name":"assert-running-of-alertmanager"}]}}
    creationTimestamp: "2020-08-24T11:48:53Z"
    deletionGracePeriodSeconds: 0
    deletionTimestamp: "2020-08-24T12:05:16Z"
    finalizers:
    - protect.gctl.metac.openebs.io/dope-finalize-recipe
    generation: 2
    labels:
      d-testing.dope.mayadata.io/inference: "true"
      recipe.dope.mayadata.io/phase: Failed
    name: assert-kubera-pod-running
    namespace: d-testing
    resourceVersion: "373003"
    selfLink: /apis/dope.mayadata.io/v1/namespaces/d-testing/recipes/assert-kubera-pod-running
    uid: 10a252be-b5f7-47d7-93ae-73dccbae71a3
  spec:
    tasks:
    - assert:
        state:
          apiVersion: v1
          kind: Pod
          metadata:
            labels:
              name: alertmanager
            namespace: default
          status:
            phase: Running
        stateCheck:
          count: 1
          stateCheckOperator: ListCountEquals
      name: assert-running-of-alertmanager
  status:
    executionTime:
      readableValue: 1m0.007s
      valueInSeconds: 60.007472033
    phase: Failed
    taskCount:
      failed: 1
      skipped: 0
      total: 1
      warning: 0
    tasks:
      assert-running-of-alertmanager:
        message: 'StateCheckEquals: Resource default : GVK /v1, Kind=Pod: TaskName
          assert-running-of-alertmanager'
        phase: Failed
        step: 1
        timeout: 'Retryable condition timed out after 1m0s: StateCheckEquals: Resource
          default : GVK /v1, Kind=Pod: TaskName assert-running-of-alertmanager: name
          is required'
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
  • Dope controller logs
I0824 11:36:26.865420       1 start.go:117] Discovery cache refresh interval: 30s
I0824 11:36:26.865473       1 start.go:118] API server relist interval i.e. cache flush interval: 4m0s
I0824 11:36:26.865480       1 start.go:119] Debug http server address: :9999
I0824 11:36:26.865485       1 start.go:120] Run metac locally: true
I0824 11:36:26.865492       1 start.go:131] Using in-cluster kubeconfig
I0824 11:36:26.870065       1 metacontroller.go:253] Starting Local GenericController
I0824 11:36:34.880465       1 controller.go:293] Starting GenericController "dope" / "sync-http"
I0824 11:36:34.881500       1 controller.go:293] Starting GenericController "dope" / "sync-recipe"
I0824 11:36:34.882108       1 controller.go:293] Starting GenericController "dope" / "finalize-recipe"
I0824 11:45:09.294279       1 controller.go:957] Explicitly deleted v1:ConfigMap:d-testing:assert-kubera-pod-running-lock: ResourceStatesController Watch dope.mayadata.io/v1:Recipe:d-testing:assert-kubera-pod-running
W0824 11:48:34.917058       1 controller.go:494] Can't sync: recipes.dope.mayadata.io "assert-kubera-pod-running" not found
I0824 12:05:16.579404       1 controller.go:957] Explicitly deleted v1:ConfigMap:d-testing:assert-kubera-pod-running-lock: ResourceStatesController Watch dope.mayadata.io/v1:Recipe:d-testing:assert-kubera-pod-running
W0824 12:08:34.913054       1 controller.go:494] Can't sync: recipes.dope.mayadata.io "assert-kubera-pod-running" not found
E0824 12:08:34.923704       1 reconciler.go:118] Reconcile failed: Controller "sync-recipe": Name "d-testing" "assert-kubera-pod-running": Error Update failed: Runtime error: Recipe: "d-testing" "assert-kubera-pod-running": Get instance failed: Recipe "d-testing" "assert-kubera-pod-running": recipes.dope.mayadata.io "assert-kubera-pod-running" not found
W0824 12:08:34.925462       1 controller.go:494] Can't sync: Failed to update status for watch dope.mayadata.io/v1:Recipe:d-testing:assert-kubera-pod-running: GenericController "dope" / "sync-recipe": Operation cannot be fulfilled on recipes.dope.mayadata.io "assert-kubera-pod-running": StorageError: invalid object, Code: 4, Key: /registry/dope.mayadata.io/recipes/d-testing/assert-kubera-pod-running, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 10a252be-b5f7-47d7-93ae-73dccbae71a3, UID in object meta: 
E0824 12:08:34.925491       1 controller.go:388] Failed to sync "dope.mayadata.io/v1:Recipe:d-testing:assert-kubera-pod-running": GenericController "dope" / "sync-recipe": Failed to update status for watch dope.mayadata.io/v1:Recipe:d-testing:assert-kubera-pod-running: GenericController "dope" / "sync-recipe": Operation cannot be fulfilled on recipes.dope.mayadata.io "assert-kubera-pod-running": StorageError: invalid object, Code: 4, Key: /registry/dope.mayadata.io/recipes/d-testing/assert-kubera-pod-running, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 10a252be-b5f7-47d7-93ae-73dccbae71a3, UID in object meta: 
W0824 12:08:34.925517       1 controller.go:494] Can't sync: recipes.dope.mayadata.io "assert-kubera-pod-running" not found
W0824 12:08:34.930759       1 controller.go:494] Can't sync: recipes.dope.mayadata.io "assert-kubera-pod-running" not found
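
While the delete is hanging, the pending deletion can be observed directly on the object (the jsonpath expression below is mine; the field names come from the YAML above):

kubectl get recipe assert-kubera-pod-running -n d-testing \
  -o jsonpath='{.metadata.deletionTimestamp}{"  "}{.metadata.finalizers}{"\n"}'

The StorageError lines ("Precondition failed: UID in precondition: …, UID in object meta:") are the API server rejecting a status update whose UID precondition refers to an object that no longer exists, i.e. the sync-recipe controller keeps retrying against a Recipe that has already gone away.
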
@AmitKumarDas AmitKumarDas changed the title kubectl delete getting stuck for a couple of minutes perf: kubectl delete recipe gets stuck for a couple of minutes Aug 24, 2020