too many open files #58
interesting read, possibly/probably related, here
the odd thing is that from within a container we can see a very decent limit, in the 10**6 range, but on the host it shows only 1024; besides, docker seems to have a
as for the global kernel setting, I'm getting the same numbers from within a container, so again the settings seem fine
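For the record, these checks can be done along the following lines (just a sketch, assuming a stock Linux host and the docker CLI; not necessarily the exact commands used above, and `ubuntu` is only an example image):

```bash
# per-process (soft) limit on open files, as seen on the host
ulimit -n

# system-wide ceiling on open file handles (kernel setting)
cat /proc/sys/fs/file-max

# the same two checks from inside a container, for comparison
docker run --rm ubuntu bash -c 'ulimit -n; cat /proc/sys/fs/file-max'
```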
all this boils down to: the culprit seems to be the per-process limit in the root context. Could it affect dockerd?
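One way to answer that question is to look at the limits of the running daemon itself (a sketch; depending on the packaging the process may be named dockerd-current, as in the log excerpt further down):

```bash
# the per-process limits that the running docker daemon actually has
grep 'Max open files' /proc/"$(pidof dockerd)"/limits
```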
indeed dockerd does complain; as an experiment I did a live change of its limit, which did not seem to fix the problem, at least not without restarting anything. EDIT:
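For reference, a live change of that kind can be done with prlimit from util-linux (a sketch with an illustrative target value; not necessarily the exact command used in the experiment above):

```bash
# raise both the soft and hard open-files limits of the running daemon
prlimit --pid "$(pidof dockerd)" --nofile=1048576:1048576

# verify the change is visible on the process
grep 'Max open files' /proc/"$(pidof dockerd)"/limits
```

One caveat worth keeping in mind: already-running children keep the limits they inherited when they were started, which would be consistent with the change not fixing anything on its own.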
also maybe worth investigating, as it could be related I guess, the absence of
made an attempt at creating an empty
useful commands to monitor the issue:
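For instance (a sketch, run as root; these may not be the exact commands meant above):

```bash
# system-wide: allocated file handles, unused handles, and the global maximum
cat /proc/sys/fs/file-nr

# how many descriptors the docker daemon currently holds
ls /proc/"$(pidof dockerd)"/fd | wc -l

# rough count of open files across all processes
lsof 2>/dev/null | wc -l
```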
don't get it...
as of Jan 22 2018, also:
possible breakthrough; when inside a container:
BUT
In the commit below, an attempt to get this right is made in the Dockerfile for the MOOC in production: parmentelat/flotpython-archive@f141802

The image we use is based on ubuntu; when run as root, processes in the container have a decent limit of 1024*1024 open files. The proposed patch is to create a limit file in the image (sketched below); the users who need an upper limit are probably the ones who play a lot with pdflatex when they download the notebooks as PDF. What is truly amazing though is that some containers have been observed with as many as several hundreds of thousands of entries in

Putting this deployment under observation; rolling out the new image is done lazily, so it takes a good day before it is really effective.
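Purely as an illustration of that idea, the patch amounts to something like the following; the actual change is the one in parmentelat/flotpython-archive@f141802, and the file location (/etc/security/limits.d/), file name and numbers here are assumptions, not the committed content:

```bash
# sketch: cap the open-files limit for regular users inside the image,
# so that heavy pdflatex sessions cannot exhaust the global pool
cat > /etc/security/limits.d/90-nofile.conf <<'EOF'
*   soft    nofile  65536
*   hard    nofile  65536
EOF
```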
unfortunately, I could see the bad symptom again today. It actually pops out immediately when you look at the CPU load; my conjecture is that when the issue triggers, the CPU load increases steadily. See the load chart on

There were only 2 active containers when I ran into this issue just now, Feb 6 circa 10:20; surprisingly one had been up for 2 days, which should not happen in normal circumstances. I restarted the docker daemon - which also caused the containers to shut down; this was - of course - enough to bring the box back to normal: the load is back to 0 and I can't see a 'Too many open files' message in the journal.
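For the record, the recovery boils down to something like this (a sketch, assuming a systemd-managed docker service; the unit name may differ with the docker-current packaging):

```bash
# restart the daemon; as noted above this also shuts down the running containers
systemctl restart docker

# then check that the symptom no longer shows up in the logs
journalctl --since "1 hour ago" | grep -i 'too many open files'
```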
Today, and not for the first time, I could spot occurrences of this message in the logs:
Jan 14 16:07:45 thurst dockerd-current[1574]: OSError: [Errno 24] Too many open files
This clearly happens after a few weeks of operation. For the time being this is another reason to reboot the nbhosting box every once in a while - a couple of weeks, or maybe even just one, feels about right.
It is not very clear at this point how to handle this properly; the Django app does not seem to be the culprit here, although even that is unclear.
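To figure out which process actually leaks file descriptors, one option would be to rank processes by their number of open fds (a sketch, to be run as root):

```bash
# rank processes by how many file descriptors they currently hold
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)
  printf '%6d  %-7s %s\n' "$n" "${pid#/proc/}" "$(cat "$pid/comm" 2>/dev/null)"
done | sort -rn | head
```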