-
Edit: it seems all it needed was a restart of cloudstack-management; after this I can deploy clusters without issue. I would still like to know why it behaves like this, since the logs don't really suggest a restart being needed :)
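For reference, a minimal sketch of the restart that resolved it, assuming a systemd-based install and the default log location:

```bash
# Restart the management server and watch the log while it comes back up
sudo systemctl restart cloudstack-management
sudo tail -f /var/log/cloudstack/management/management-server.log
```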
-
Usually a management restart is only needed when non-dynamic settings are changed. Enabling Kubernetes involves some of those, but from your description it seems you already had k8s enabled in your env, did you?
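As an illustration, assuming the CloudMonkey CLI (cmk) is configured against the management server, the Kubernetes service flag can be checked and toggled like this; enabling it is one of the changes that needs a management-server restart:

```bash
# Check whether the Kubernetes service is currently enabled
cmk list configurations name=cloud.kubernetes.service.enabled

# Enable it (non-dynamic: restart cloudstack-management afterwards)
cmk update configuration name=cloud.kubernetes.service.enabled value=true
sudo systemctl restart cloudstack-management
```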
-
Yes, it was enabled and working in other zones. The weirdest thing was that it didn't always fail instantly after hitting the create button: sometimes it would bring up one or two nodes and then fail, other times the control node was up before the error showed up, etc.
-
There should be some logs that tell why the network cannot be started.
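For example, a quick way to pull the relevant lines out of the management log, assuming the default log path and using the cluster name and the exception already mentioned in this thread:

```bash
# Show lines mentioning the cluster or the capacity exception seen above
grep -iE 'vvvvv|InsufficientServerCapacity' \
    /var/log/cloudstack/management/management-server.log | tail -n 50
```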
-
This is the significant fragment, and it shows that the only host is not selected because of not enough capacity;
therefore the router is not deployed and hence the network is not started.
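To double-check the allocator's view of that host, something like the following should work (CloudMonkey sketch; the zone UUID is a placeholder):

```bash
# Zone-wide capacity per resource type (CPU, memory, storage, ...)
cmk list capacity zoneid=<zone-uuid>

# Per-host allocation figures for hypervisor hosts in the zone
cmk list hosts type=Routing zoneid=<zone-uuid> \
    filter=name,state,resourcestate,cpuallocated,memoryallocated,memorytotal
```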
-
Yep, this:
-
Yes.
Looks like an issue with the systemvm template.
-
That host was totally empty, so there's no way it was a capacity issue. As for secondary storage, there is one per zone.
-
@tdtmusic2 you can register the systemvm template again and update the global setting.
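A rough sketch of what that could look like with CloudMonkey; the template name, URL, version and the KVM hypervisor are assumptions, so adjust them for your release and hypervisor:

```bash
# Re-register the systemvm template for the zone (KVM example; name/URL are placeholders)
cmk register template name=systemvm-kvm-4.18.1 displaytext=systemvm-kvm-4.18.1 \
    url=http://download.cloudstack.org/systemvm/4.18/systemvmtemplate-4.18.1-kvm.qcow2.bz2 \
    hypervisor=KVM format=QCOW2 zoneid=<zone-uuid> ostypeid=<other-linux-64bit-uuid> \
    isrouting=true requireshvm=true

# Point the router template setting at it (KVM variant of the setting)
cmk update configuration name=router.template.kvm value=systemvm-kvm-4.18.1
```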
-
Hi all. I have a new zone in ACS and for some reason Kubernetes deployment fails on it; I cannot understand why from the logs alone. The difference between this zone and the others is that it uses local storage instead of shared, so I'm deploying the cluster with a local storage offering. Sometimes the control VM gets deployed and then I get the error about failing to provision the cluster; other times I get "Failed to start Kubernetes cluster : vvvvv as its network cannot be started". In the management logs I have this:
Could this be related to some offering that needs to be modified? From all this log output the thing that stands out is com.cloud.exception.InsufficientServerCapacityException, but I don't know what it refers to.
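In case it helps with the offering angle: a quick way to confirm which offerings are local-storage backed (CloudMonkey sketch; the offering name is a placeholder). The virtual router is deployed from a system offering, so its storage type matters here as well:

```bash
# Check whether the compute offering used for the cluster is local-storage backed
cmk list serviceofferings keyword=<offering-name> filter=name,storagetype,cpunumber,memory

# Check the system offering used for the virtual router
cmk list serviceofferings issystem=true systemvmtype=domainrouter filter=name,storagetype
```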