After a while, when deleting a Milvus cluster resource, the operation hangs and I see lines like these in the operator logs. The only thing that fixes it is deleting the operator pod. When it comes back, reconciliation starts working again, and the delete and other operations continue where they left off. Any idea what might be going wrong? This is running in an AKS (Azure) cluster.
{"level":"error","ts":"2025-02-07T12:54:35Z","msg":"Reconciler error","controller":"milvus","controllerGroup":"milvus.io","controllerKind":"Milvus","Milvus":{"name":"milvus-cluster","namespace":"ds-genai-test1"},"namespace":"ds-genai-test1","name":"milvus-cluster","reconcileID":"616e1605-7cde-4a20-993a-199f512385cf","error":"Kubernetes cluster unreachable: the server has asked for the client to provide credentials; Kubernetes cluster unreachable: the server has asked for the client to provide credentials","errorVerbose":"Kubernetes cluster unreachable: the server has asked for the client to provide credentials; Kubernetes cluster unreachable: the server has asked for the client to provide credentials\ngithub.com/milvus-io/milvus-operator/pkg/controllers.glob..func10\n\t/workspace/pkg/controllers/milvus.go:153\ngithub.com/milvus-io/milvus-operator/pkg/controllers.(*MilvusReconciler).Reconcile\n\t/workspace/pkg/controllers/milvus_controller.go:142\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:122\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:323\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235"}
"error":"Kubernetes cluster unreachable: the server has asked for the client to provide credentials; Kubernetes cluster unreachable: the server has asked for the client to provide credentials","errorVerbose":"Kubernetes cluster unreachable: the server has asked for the client to provide credentials
As the error suggests, it seems the milvus-operator pod was not provided with valid credentials to access the Kubernetes API server.
The Milvus Operator pod uses the token provided by its ServiceAccount to access the Kubernetes API. The token file is mounted into the pod at a well-known path by Kubernetes, and the kubelet is responsible for refreshing it periodically. I suspect something went wrong with the kubelet's token rotation during that period.
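For reference, here is a minimal sketch (not the operator's actual code) of how an in-cluster client-go client authenticates with that ServiceAccount token. Kubernetes mounts the token at /var/run/secrets/kubernetes.io/serviceaccount/token, and rest.InClusterConfig() reads it from there; if the API server stops accepting that token, calls fail with exactly this "the server has asked for the client to provide credentials" error:

```go
// Minimal sketch, assuming a standard in-cluster client-go setup; this is
// illustrative only, not the milvus-operator's own client code.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Reads the ServiceAccount token and CA bundle that kubelet mounts into
	// the pod at /var/run/secrets/kubernetes.io/serviceaccount/.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerVersion needs no special RBAC and will surface an authentication
	// error if the API server no longer accepts the mounted token.
	v, err := clientset.Discovery().ServerVersion()
	fmt.Println("server version:", v, "err:", err)
}
```

If a probe like this succeeds from inside the operator pod while the operator itself keeps failing, the operator process may be holding on to a stale token or connection internally, which would also explain why restarting the pod clears the problem.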