Untag instead of force remove image for podman #1342
Conversation
This is a behavioral change (and hopefully for the better): `cleanup_images` will now behave the same for podman and docker. NOTE: this PR also changes the behavior of the `--remove-images` param that runs this code.
I get why people want builder to do …
@nitzmahone I was asking myself the same thing. I think it was added to ansible-runner because that's the only thing other than receptor we install on execution nodes, which is where we need to run this.
My 2 cents: runner is effectively acting as a command allow-list here. If we could ship an arbitrary Python file or bash script, it could be done that way, but perhaps less securely. In the receptor mesh, the control nodes are only allowed to run …
Pairs with ansible/ansible-runner#1342. This fixes the problem of us forcefully removing images when the EE image setting is changed while that image is being used by a job, causing the job to fail.
if runtime == 'podman':
    try:
        stdout = run_command([runtime, 'untag', image_tag])
        if not stdout:
I don't get it. Why are we incrementing the count if there is no output?
That's what podman does... it doesn't output anything when you run `podman untag`.
It will, however, output an error if it fails to untag.
Why do the condition at all, then? Rely on the logic getting derailed by the exception.
Agree with @jbradberry, let's keep it simple.
It shouldn't be needed, but it is possible. Since this is best effort and not "critical", I'd rather the behavior be to move on instead of bail.
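(For reference, a minimal sketch of the two variants being discussed. The `run_command` wrapper and the success bookkeeping below are assumptions for illustration, not the exact ansible-runner internals.)

```python
import subprocess

def run_command(args):
    # Hypothetical stand-in for the real helper: raises CalledProcessError
    # on a non-zero exit and returns whatever the command printed to stdout.
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

def untag_checking_stdout(runtime, image_tag):
    # Variant in the PR: `podman untag` prints nothing on success,
    # so empty stdout is treated as "untagged" and counted.
    try:
        stdout = run_command([runtime, 'untag', image_tag])
        return not stdout
    except subprocess.CalledProcessError:
        return False  # best effort: keep going instead of bailing out

def untag_relying_on_exception(runtime, image_tag):
    # Variant suggested in review: drop the stdout check and rely on the
    # exception alone to signal failure.
    try:
        run_command([runtime, 'untag', image_tag])
        return True
    except subprocess.CalledProcessError:
        return False
```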
How about we get this merged as-is and do an ansible-runner release? Then we can create a tech-debt task to follow up on this.
@gundalow going to have to push back against that. It's basically a one-line change, and if it's worth doing at all it should be done up-front. If it merges as-is, it more or less means that we don't care about the possibility of podman's behavior changing and a follow-up task would just sink out of sight.
(I'm not opposed to merging as-is, but we shouldn't fool ourselves that we'll ever address such a minor concern unless it bites us in the future.)
LGTM!
now cleanup_images will behave the same for podman and docker (cherry picked from commit d51a2f3)
now cleanup_images will behave the same for podman and docker (cherry picked from commit d51a2f3) Co-authored-by: Hao Liu <[email protected]>
Prune dangling images periodically. Pairs with ansible/ansible-runner#1342; this fixes the problem of us forcefully removing images when the EE image setting is changed while that image is being used by a job, causing the job to fail.
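(The periodic prune on the AWX side boils down to something like the sketch below; the function name and the exact command wrapped here are assumptions, not the actual AWX task code.)

```python
import subprocess

def prune_dangling_images(runtime='podman'):
    # `image prune -f` removes only dangling (untagged) images that are not
    # backing any container, so tagged or in-use images are left alone.
    subprocess.run([runtime, 'image', 'prune', '-f'], check=False)
```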
`docker rmi` will just untag, while `podman rmi` will untag and remove layers and cause running containers to be killed. For podman we use `untag` to achieve the same behavior. This only untags the image and does not delete it; `prune_images` needs to be called to delete the image.
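(Putting the description together, a hedged sketch of the runtime-dependent cleanup it describes; the function name and exact subprocess calls are illustrative, not the actual ansible-runner API.)

```python
import subprocess

def cleanup_image(runtime, image_tag):
    """Drop the image tag without killing containers that still use it."""
    if runtime == 'podman':
        # `podman rmi` would also delete the layers and kill running
        # containers started from this image, so only untag here; the
        # dangling layers are reclaimed later by a separate prune step
        # (see the prune sketch above).
        subprocess.run([runtime, 'untag', image_tag], check=True)
    else:
        # Per the description, `docker rmi` just untags in this situation,
        # so the docker path keeps its existing behavior.
        subprocess.run([runtime, 'rmi', image_tag], check=True)
```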