cmd-cloud-prune: merge container-prune into cloud-prune #3940
Conversation
Force-pushed b6935e3 to 478cf87
Force-pushed 478cf87 to 5b2eac5
Force-pushed 5b2eac5 to 954ccb3
Initial round of review
try:
    subprocess.check_output(skopeo_args, stderr=subprocess.STDOUT)
    print("Image deleted successfully.")
except subprocess.CalledProcessError as e:
    raise Exception("An error occurred during deletion:", e.output.decode("utf-8"))
I wonder if there is a way for us to gracefully exit here i.e. there was a lot of work done before this that would be nice to save (via updating builds.json). But maybe that's not too important.
Agreed, and this applies to other actions as well. Initially, we discussed this and concluded that updating the builds.json file on every run didn't make much sense; the focus was on minimizing updates to the file when any kind of failure occurs. Ideally, subsequent runs shouldn't take more than a few minutes, apart from the first iteration for each stream, so it didn't seem like a critical concern.
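One way to exit more gracefully, as suggested above, would be to persist the work already recorded before re-raising. A minimal sketch (the `builds_data`/`builds_json_path` parameter names and the exact builds.json layout are assumptions, not the project's actual API):

```python
import json
import subprocess


def delete_image(skopeo_args, builds_data, builds_json_path):
    """Delete an image via skopeo; on failure, save progress before exiting.

    `builds_data` is a hypothetical in-memory representation of builds.json
    and `builds_json_path` is where it should be written.
    """
    try:
        subprocess.check_output(skopeo_args, stderr=subprocess.STDOUT)
        print("Image deleted successfully.")
    except subprocess.CalledProcessError as e:
        # Persist the pruning progress made so far so that a re-run
        # can pick up where this one left off.
        with open(builds_json_path, "w") as f:
            json.dump(builds_data, f, indent=4)
        raise Exception("An error occurred during deletion:",
                        e.output.decode("utf-8"))
```

The trade-off discussed below still applies: since subsequent runs are expected to be fast, the extra bookkeeping may not be worth it.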
Maybe we should consider renaming this to coreos-prune or something, since it's doing it all now.
Let's only do that in a separate PR (or maybe one final commit once we're at the end of code review).
Will open another PR for this once this gets merged in.
Force-pushed 954ccb3 to 8ac5e2c
Looking pretty good. A few more comments
Force-pushed 8ac5e2c to 9417b02
LGTM
Feel free to merge once testing looks good.
Merged the code of the container GC into the cloud prune command, and updated builds.json. It goes through the tags in the base-oscontainer data in meta.json and prunes every tag except the stream name itself, which is a moving tag.
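The tag-selection step described above could be sketched roughly as follows (the exact meta.json key layout and helper name are assumptions for illustration, not the project's actual code):

```python
def tags_to_prune(meta, stream):
    """Return the base-oscontainer tags that are safe to delete.

    `meta` is the parsed meta.json dict; any tag equal to the stream
    name is a moving tag and must be kept.
    """
    tags = meta.get("base-oscontainer", {}).get("tags", [])
    return [tag for tag in tags if tag != stream]
```

Each returned tag would then be deleted via skopeo, while the moving stream tag is left untouched.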
Force-pushed 9417b02 to a520781
LGTM