Persistent files in file_cache_path #118
As described in test-kitchen/kitchen-dokken#118, the contents of `Chef::Config[:file_cache_path]` are persistent on the host's file system. This leads to the unexpected effect that Jenkins is not reliably restarted after plugin installation if the previous run of kitchen-dokken had installed the same plugins. Therefore, put the file into a directory inside the container, which is reliably destroyed together with the container.
Fixing that one issue surfaced the next one... |
As described in test-kitchen/kitchen-dokken#118, the contents of `Chef::Config[:file_cache_path]` are persistent on the host's file system. This leads to the unexpected effect that the Jenkins job is not reliably created if the previous run of kitchen-dokken had left over an identical template file. Therefore, put the file into a directory inside the container, which is reliably destroyed together with the container.
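To illustrate the idea behind these fixes (this is a hedged sketch, not the actual commit), a recipe can write such run-to-run state files to a container-local directory instead of `Chef::Config[:file_cache_path]`; the directory path and attribute names below are assumptions:

```ruby
# Sketch: keep state that must not outlive the container inside the container,
# where it is destroyed together with it, instead of in the bind-mounted cache.
state_dir = '/var/chef/container_state' # assumed path, not from the commit

directory state_dir do
  recursive true
end

# Assumed attribute shape: { 'git' => '4.11.0', ... }
file ::File.join(state_dir, 'jenkins-plugin-state') do
  content node['jenkins']['plugins'].map { |name, ver| "#{name}=#{ver}" }.join("\n")
end
```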
This is a consequence of bind mounting the sandbox... I'll investigate the best way to handle this. |
Thanks for your feedback. My CI system runs |
Is this still an issue for you in 2.6.5? |
Hi, I am still seeing this in 2.6.5. I'm pretty sure issue #120 is related to this.
Edit: I am pretty sure this line is the problem: https://github.com/someara/kitchen-dokken/blob/2ed012ecceb114db1da71b29622bc996a77ab1f6/lib/kitchen/helpers.rb#L135 |
@someara I was going over the code during the weekend, thinking about this issue; fixing it would not be too much effort, but I fail to understand why the sandbox is a bind mount from the host. Removing the bind mount and keeping the sandbox inside the container would remove a lot of complexity and manual cleanup work. Is there a reason why the sandbox is a bind mount? |
I see exactly the opposite case: each time I run kitchen converge a few times without destroying, all my cached files are re-downloaded (it looks like file_cache_path is destroyed on each converge). |
@jsirex Yes, that's why I propose keeping the cache inside the container and not taking care of it at all. Removing the docker container will automatically remove the cache, and no cleanup or anything can intervene in a negative way. |
I've been digging deeper into test-kitchen and dokken and found that the central difference here is that for test-kitchen (vagrant), the kitchen-sandbox and verifier-sandbox are created locally with user permissions and then uploaded (usually via SCP or rsync) to the VM, while dokken mounts these sandboxes directly into the containers. This directly leads to two issues:
So even if we change the behavior from using the mounted sandbox to transferring the sandbox files via "docker cp" [1] or some similar mechanism, there is still this upload to the remote docker host which has to be solved. @someara Any thoughts on this? I can prepare a pull request, but I would like to know which path to follow before I invest time in this. [1] https://docs.docker.com/engine/reference/commandline/cp/#extended-description |
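To make the alternative concrete, here is a minimal sketch of what an explicit upload could look like, assuming a plain `docker cp` invocation from Ruby; the method name, container ID handling, and remote path are all assumptions for illustration, not kitchen-dokken's actual API:

```ruby
# Hypothetical sketch of replacing the bind mount with an explicit upload via
# `docker cp` (see [1] above). Error handling is minimal on purpose.
def upload_sandbox(container_id, local_sandbox, remote_path = '/opt/kitchen')
  # `docker cp SRC/. CONTAINER:DEST` copies the directory's contents into the
  # container; it also works against a remote docker host when the docker CLI
  # is pointed at it via DOCKER_HOST.
  ok = system('docker', 'cp', "#{local_sandbox}/.", "#{container_id}:#{remote_path}")
  raise "docker cp failed for #{container_id}" unless ok
end
```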
Any "call" ends up with sandbox_cleanup. This is bad idea. Whenever I developing cookbook I want to run converge multiple times. And each time it re-downloads all huge stuff. Probably this should only call cleanup_dokken_sandbox at destroy time (quick fix) or removed at all. def cleanup_dokken_sandbox
return if sandbox_path.nil? || ENV['KITCHEN_DONT_CLEANUP']
debug("Cleaning up local sandbox in #{sandbox_path}")
FileUtils.rmtree(Dir.glob("#{sandbox_path}/*"))
end |
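A minimal sketch of the proposed "only at destroy time" quick fix; the `@action` check is an assumption about the surrounding driver code, not the actual implementation:

```ruby
# Hypothetical quick fix: skip sandbox cleanup unless the current action is
# :destroy, so repeated converges keep their downloaded artifacts.
def cleanup_dokken_sandbox
  return if sandbox_path.nil? || ENV['KITCHEN_DONT_CLEANUP']
  return unless @action == :destroy # assumed way to detect the current action
  debug("Cleaning up local sandbox in #{sandbox_path}")
  FileUtils.rmtree(Dir.glob("#{sandbox_path}/*"))
end
```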
I found this problem on my CI too. When the CI server calls kitchen destroy at the end of a build, the files from kitchen do not get deleted properly. In my case the next build is affected, because its "starting point" has changed. I can also confirm what @jsirex and @joerg said by manual testing: multiple runs re-download all the huge artifacts, so it is exactly the opposite. @someara Any thoughts on this? |
@jsirex a workaround for your issue is to use a named volume, and download all your huge stuff to that volume's mountpoint, and use chef features like
although if you're using multiple suites you'll have to watch out for this bug: #152 |
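For illustration, such a named volume could be declared in `.kitchen.yml` roughly like this (the volume and mount names are assumptions; kitchen-dokken's `volumes` option follows docker's `-v` syntax):

```yaml
driver:
  name: dokken
  volumes:
    # A named volume persists across `kitchen destroy`, so large downloads
    # placed under /artifacts survive between runs.
    - artifact_cache:/artifacts
```

Recipes would then download the large artifacts to /artifacts instead of `Chef::Config[:file_cache_path]`.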
Is there any progress on this? Does the problem persist with the newest version of kitchen-dokken? |
I just figured out why one of my cookbooks didn't work well in CI (it detected that a `cookbook_file` wasn't changed). This file was written to `Chef::Config[:file_cache_path]`, which ends up on the host system in `~/.dokken/kitchen_sandbox/<unique_prefix>-<suitename>/cache` and stays there between Chef runs - which my cookbook's execution did not expect. Was it a bad idea to put a file there which should not reflect the state of the previous Chef run, and should I put it somewhere else instead? Or is this the result of a non-ideal implementation in kitchen-dokken? I understand that caching temp files there has a positive effect.
(To give you a bit of context: I'm writing the Jenkins plugin names + versions into that file, which allows me to decide whether Jenkins needs to restart (code) - the result is that Jenkins won't be restarted after plugin installations.)
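To illustrate the failure mode, here is a hypothetical reconstruction of such a restart check (not the reporter's actual code): a marker file records the plugin state, and the restart is driven by the file resource noticing a content change - which never happens when a stale copy survives in the host-side cache:

```ruby
# Assumed attribute shape: { 'git' => '4.11.0', ... }
current_state = node['jenkins']['plugins']
                  .map { |name, ver| "#{name}=#{ver}" }
                  .sort.join("\n")

# With the marker in file_cache_path, a stale file left over from a previous
# kitchen run makes the content comparison succeed, so the notification never
# fires and the restart is silently skipped.
marker = ::File.join(Chef::Config[:file_cache_path], 'jenkins-plugin-state')

file marker do
  content current_state
  notifies :restart, 'service[jenkins]', :delayed
end
```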