Use nodemon for reloading on all server processes #160
Conversation
This expands nodemon from the celery entrypoint script to the web and proxito service entrypoint scripts. The Django runserver --reload option doesn't seem to be configurable, and seems to watch a lot of files that are not needed for reloading. I can't tell what exactly is being watched by runserver, but I am getting frequent `too many open files` errors from processes in Docker. Running with `--noreload` mostly solves this issue, but requires manually restarting services.

Someone else should spend some time with this; I'm not confident that this will:

1. Fix the issue for others
2. Not cause irreparable damage to your local installation
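For context, a minimal sketch of what a nodemon-wrapped entrypoint looks like; the watched path, extensions, and port below are illustrative assumptions rather than the PR's exact contents:

```bash
#!/bin/sh
# Sketch of a nodemon-based entrypoint for the web/proxito services:
# nodemon watches the source tree and restarts the Django dev server on
# changes, instead of relying on runserver's built-in autoreloader.
# The watched directory and extension list are assumptions for illustration;
# the real entrypoint scripts in the PR may differ.
exec nodemon \
  --watch readthedocs/ \
  --ext py \
  --exec "python3 manage.py runserver 0.0.0.0:8000"
```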
I'm not facing any problem with the current Docker setup. I don't know why you are receiving this error 🤷🏼 Also, I'm not sure about the differences between using nodemon and Django's own reloader. Mine are:
He he, irreparable damage, I'm scared about pulling down these changes now 😄
I'm not sure I understand how this solves your problem, since with these changes you are using two watchers: nodemon and Django's default one (since `--noreload` is not passed to the command).
I see

```
web_1 | [info ] Watching for file changes with StatReloader [django.utils.autoreload]
```

and

```
web_1 | [nodemon] starting `python3 manage.py runserver 0.0.0.0:8000`
```
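For illustration, the duplication goes away if only one watcher is active; a minimal sketch, assuming nodemon wraps the runserver command and Django's reloader is disabled (the PR's actual command may differ):

```bash
# Single watcher: nodemon restarts the process, and Django's StatReloader
# is turned off with --noreload, so a file change triggers exactly one restart.
nodemon --ext py --exec "python3 manage.py runserver 0.0.0.0:8000 --noreload"
```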
Now I'm not able to reproduce this either. I can't get the original error messages from the containers anymore when using the Django reloader. These errors were crashing the services, so my development environment was not usable at all without some fix. I'm guessing there were more open files at the kernel level, or something changed in Docker, as I did some cleanup of old images/containers at some point too.
My count is similar, 200k files.
My open file count is around 375k.
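For anyone comparing numbers, a few standard Linux commands for inspecting open-file usage and the relevant limits on the host (generic tooling, not specific to this repo):

```bash
# Rough count of open file handles across the whole host (what the numbers
# above appear to be measuring). lsof double-counts shared handles, so
# treat this as an upper bound.
lsof | wc -l

# Kernel-wide limit and current allocation of file handles.
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr

# Per-process soft limit for open file descriptors in the current shell.
ulimit -n

# inotify watch limit, which matters for file watchers like nodemon.
cat /proc/sys/fs/inotify/max_user_watches
```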
Annnd I'm back to broken. This seems related to me working on the new templates, though the open file count still seems low.
This branch still resolves this error for me 🤯
Actually, it doesn't seem related just to the new templates either.
I'm hitting "too many open files" rather consistently (see #160 for a first try at resolving this). I am getting some inconsistent results still, so not sure if this solves the problem or not. I was just able to start the development env without altering the reload mechanism though, so perhaps this just needs to be bumped up a bit. I'm also not opposed to changing the reload mechanism too though.
I commented on the other PR that increases the limits as well, but it seems you will need to experiment a little more here by working with these different setups for a few days/weeks and confirm that this solves your issue. Currently, I'm confused about why the problem was "solved" originally when watching the files with both monitors. I'm in doubt about what the right solution is, and also whether either of them is a solution at all 😄
Yeah, I'm still experimenting here. It's not even clear what the problem is, but I don't yet feel this is just a host level issue; I think the solution can be in our container configuration. I can't describe why I'm hitting this, other than maybe because I'm working with extra containers too. I would be curious if this PR solves other problems with reload. I know several others can't/couldn't use reload previously -- perhaps @ericholscher still can't? It would be worth checking whether using this branch helps in those cases. I don't have any strong opinions about this change otherwise, but if it solves one or two problems with local dev, being on a single reload daemon (nodemon) seems worthwhile.
Yea, I still don't use autoreloading, but that's a Docker issue with open file descriptors. If we lower it, maybe that will help? 🤷
Aye, that's what I'm wondering. This fix, for some reason, seemed to make Docker less grumpy with me over the number of open files. I'm not sure how to verify that the Docker containers actually see fewer open files; it didn't look different at the host level. But Docker did stop complaining about open files. Anyways, this might be worth testing more.
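One way to check whether the containers themselves see fewer open files is to count file descriptors from inside a container; a sketch, assuming a docker-compose service named `web` (the service name is an assumption for illustration):

```bash
# Count open file descriptors for the main process (PID 1) in the container.
docker-compose exec web sh -c 'ls /proc/1/fd | wc -l'

# Or across every process in the container, not just PID 1.
docker-compose exec web sh -c 'for p in /proc/[0-9]*; do ls "$p/fd" 2>/dev/null; done | wc -l'
```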
I updated this PR to work properly with nodemon for the web/proxito processes 👍🏼

BTW, right now it's 5k files per container. We have 4 containers, which gives us 20k files in total. We may be able to tweak this a little more.
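If the watched-file count needs to come down further, nodemon's ignore patterns are the obvious knob; a sketch with illustrative paths and globs, not the PR's actual configuration:

```bash
# Rough estimate of how many .py files a `--watch readthedocs/ --ext py`
# setup would track (the path is an assumption for illustration).
find readthedocs/ -name '*.py' | wc -l

# Trim the watch list with ignore patterns; the globs here are examples.
nodemon \
  --watch readthedocs/ \
  --ext py \
  --ignore '*/migrations/*' \
  --ignore '*/locale/*' \
  --exec "python3 manage.py runserver 0.0.0.0:8000 --noreload"
```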
I opened #162 to track the amount of files watched. I think we are close to figuring it out.
I'm hitting "too many open files" rather consistently (see #160 for a first try at resolving this). I am getting some inconsistent results still, so not sure if this solves the problem or not. I was just able to start the development env without altering the reload mechanism though, so perhaps this just needs to be bumped up a bit. I'm also not opposed to changing the reload mechanism too though.
This examples nodemon from the celery entrypoint script to the web and proxito service entrypoint scripts.
The Django runserver --reload option doesn't see to be configurable, and seems to watch a lot of files that are not needed for reloading. I can't tell what exactly is being watched by runserver, but I am getting frequent
too many open files
errors from processes in Docker. Running with --no-reload mostly solves this issue, but requires manually restarting services.Someone else should spend some time with this, I'm not confident that this will: