so I have Kubernetes 1.28 running on CentOS 7 with ZFS 2.0.4 - yes, all of that is old, but it was working fine before the Kubernetes 1.28 upgrade, so I'm aware the issue is probably caused by the upgrade.
My question is: does anyone know a set of commands that could figure out what is keeping the ZFS pool busy? I tried lsof with various PIDs and searched for the pool name, but that comes up empty.
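One common reason lsof comes up empty in this situation is that lsof only inspects the caller's mount namespace, while kubelet and container runtimes often pin the mount in a container's private mount namespace. A sketch (not tested against your setup; the pool name passed in is a placeholder) that checks every process's mountinfo for the pool instead:

```shell
# find_pool_holders POOL
# Print "PID<TAB>comm" for every process whose mount namespace still
# references POOL anywhere in its /proc/PID/mountinfo.
find_pool_holders() {
    POOL="$1"
    for mi in /proc/[0-9]*/mountinfo; do
        pid="${mi#/proc/}"
        pid="${pid%/mountinfo}"
        # Processes in foreign namespaces may be unreadable; ignore errors.
        grep -q "$POOL" "$mi" 2>/dev/null &&
            printf '%s\t%s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)"
    done
    return 0
}

# Example: find_pool_holders tank
```

Any PID this prints is holding a reference via its mount namespace; `nsenter -t <pid> -m umount <mountpoint>` (or killing that process) is then the usual next step.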
I figured out that I can make the pool not busy by deleting the pod directory and restarting kubelet, but that interrupts the unmount process.
The dataset is already unmounted.
What magic commands are there to figure out who's holding that final fd?
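For the final-fd question, `fuser -vm <mountpoint>` is the classic one-liner, but it has the same namespace blind spot as lsof. A brute-force alternative (a sketch; the mountpoint argument is a placeholder) is to walk every process's fd table yourself and match the link targets:

```shell
# scan_open_fds MNT
# Print "FD-path -> target" for every open file descriptor on the host
# whose target path lies under MNT (the dataset's former mountpoint).
scan_open_fds() {
    MNT="$1"
    for fd in /proc/[0-9]*/fd/*; do
        # fds of other users' processes are unreadable without root.
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            "$MNT"*) printf '%s -> %s\n' "$fd" "$target" ;;
        esac
    done
    return 0
}

# Example: scan_open_fds /var/lib/kubelet
```

If both this and the mountinfo scan come up empty, the "busy" reference may be held by the kernel itself (e.g. a lingering NFS export or a loop device on the dataset) rather than by a process fd.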