kubernetes pods stuck at containercreating #722

Closed
bilalAchahbar opened this issue Mar 6, 2018 · 9 comments
@bilalAchahbar

I have a Raspberry Pi cluster (one master, 3 nodes).

My base image is Raspbian Stretch Lite.

I already have a basic Kubernetes setup where the master can see all its nodes (kubectl get nodes) and they're all running.
I used the Weave network plugin for the network communication.

When everything was set up I tried to run an nginx pod (first with some replicas, but now just 1 pod) on my cluster as follows:
kubectl run my-nginx --image=nginx

But somehow the pod gets stuck in the status "ContainerCreating". When I run docker images I can't see the nginx image being pulled, and normally an nginx image is not that large, so it should have been pulled by now (15 minutes).
kubectl describe pods gives the error that the pod sandbox failed to create and Kubernetes will re-create it.

I searched everything about this issue and tried the solutions from Stack Overflow (rebooting to restart the cluster, reading the describe pods output, trying a new network plugin, Flannel), but I can't see what the actual problem is.
I did the exact same thing in VirtualBox (just Ubuntu, not ARM) and everything worked.

First I thought it was a permission issue because I run everything as a normal user, but in the VM I did the same thing and nothing changed.
Then I checked kubectl get pods --all-namespaces to verify that the pods for the Weave network and kube-dns are running, and there's nothing wrong there either (the checks I ran are sketched below).
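For reference, a quick sketch of the checks described above (the pod name is from my cluster; the node-side commands run on the worker):

# on the master: cluster and system-pod health
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide

# events for the stuck pod
kubectl describe pod my-nginx-9d5677d94-g44l6

# on the worker node: kubelet logs and locally pulled images
sudo journalctl -u kubelet --no-pager | tail -n 50
sudo docker images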

Is this a firewall issue on the Raspberry Pi?
Is the Weave network plugin not compatible with ARM devices (even though the Kubernetes website says it is)?
I'm guessing there is an API network problem and that's why I can't get my pod running on a node.

[EDIT]
Log files

kubectl describe pod <podName>

Name:           my-nginx-9d5677d94-g44l6
Namespace:      default
Node:           kubenode1/10.1.88.22
Start Time:     Tue, 06 Mar 2018 08:24:13 +0000
Labels:         pod-template-hash=581233850
                run=my-nginx
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/my-nginx-9d5677d94
Containers:
  my-nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           80/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-phdv5 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-phdv5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-phdv5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age   From                Message
  ----     ------                  ----  ----                -------
  Normal   Scheduled               5m    default-scheduler   Successfully assigned my-nginx-9d5677d94-g44l6 to kubenode1
  Normal   SuccessfulMountVolume   5m    kubelet, kubenode1  MountVolume.SetUp succeeded for volume "default-token-phdv5"
  Warning  FailedCreatePodSandBox  1m    kubelet, kubenode1  Failed create pod sandbox.
  Normal   SandboxChanged          1m    kubelet, kubenode1  Pod sandbox changed, it will be killed and re-created.

kubectl logs podName

Error from server (BadRequest): container "my-nginx" in pod "my-nginx-9d5677d94-g44l6" is waiting to start: ContainerCreating
@ageekymonk

ageekymonk commented Mar 11, 2018

Check your firewall. I had the same issue in AWS; once I allowed all incoming traffic, everything worked fine.
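If it is a firewall, this is roughly what I would check and open on each node (a sketch assuming plain iptables; 6783/6784 are the ports Weave Net uses, 10250 is the kubelet):

# see whether any rules are dropping node-to-node traffic
sudo iptables -L -n

# allow the Weave Net and kubelet ports between nodes
sudo iptables -A INPUT -p tcp --dport 6783 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 6783:6784 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 10250 -j ACCEPT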

@bilalAchahbar
Author

The thing is, and that's the weird part:
It all works fine when I use a fresh Raspbian Lite image with the same installs and the same settings. But the moment I read the image with win32image on my laptop and burn it onto new nodes, it doesn't work anymore.
No problem for now, but I need a copy of my image for backup reasons, so somewhere down the line the image gets broken.

@tangleloop

I ran into a similar situation using Armbian with cloned images, where /etc/machine-id was the same on all nodes. If that is the case, just delete the file on each node and reboot; it recreates the file during the reboot.
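For example, a rough sketch of that fix (assuming systemd regenerates the file on the next boot, as it did for me):

# check whether the cloned nodes all report the same ID
cat /etc/machine-id

# on each affected node: remove it and reboot to regenerate it
sudo rm /etc/machine-id
sudo reboot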

More info on my situation:
https://groups.google.com/a/weave.works/forum/#!topic/weave-users/yyfmkMZYFhM

@timothysc
Member

I really don't think this is a kubeadm issue; it sounds like an image compatibility problem.

If you have specific data pointing to kubeadm being the problem, please feel free to reopen.

@pouyaesm

In my case, Docker could not access the internet (my solution).
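A quick way to check for that (a sketch; run it directly on the affected node):

# can the node resolve and reach Docker Hub?
ping -c 3 registry-1.docker.io

# can Docker pull the image outside of Kubernetes?
sudo docker pull nginx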

@PREngineer

PREngineer commented May 11, 2023

Thank you, sir (@tangleloop)! This was precisely my issue.

@vishlvirus

[screenshot attached]

@chendave
Member

chendave commented Jul 3, 2023

@vishlvirus you can inspect the Flannel logs and see what is happening there; this does not sound like an issue with kubeadm itself.

Make sure the CNI is running first.
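For example (a sketch; the namespace assumes a default Flannel install, as in the log command below):

# the flannel DaemonSet should have one Ready pod per node
kubectl get daemonset -n kube-flannel
kubectl get pods -n kube-flannel -o wide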

@chendave
Member

chendave commented Jul 3, 2023

FYI,

kubectl logs kube-flannel-ds-kv2cm -n kube-flannel
