Optimising our GitHub workflows #576
-
Occasionally our 5GB repository cache will fill before our weekly keys rotate. This blocks every PR from getting a successful build until the end of the week. In my humble opinion, we are not building and testing efficiently in our workflows. What currently happens when a PR is opened:
Please post and discuss proposals below.
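For context on the weekly rotation: it comes from a date-stamped cache key. The sketch below is hypothetical (path and key format are illustrative, the real workflow may differ), but it shows how per-run entries pile up under a shared weekly prefix:

```yaml
- id: stamp
  run: echo "week=$(date +%G-%V)" >> "$GITHUB_OUTPUT"   # ISO year-week, changes once a week
- uses: actions/cache@v4
  with:
    path: /tmp/docker-cache                             # illustrative path
    key: docker-layers-${{ steps.stamp.outputs.week }}-${{ github.run_id }}  # new entry per run
    restore-keys: |
      docker-layers-${{ steps.stamp.outputs.week }}-
```

Each run saves a fresh entry under the weekly prefix, so entries accumulate until the key rotates, which is how the quota gets exhausted before the week is out.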
-
Preferably, we stop building images on PRs; it takes far too long in the first place! The reason we are building images is so we can more closely replicate the environment the smoke tests would run on. I don't think this is suitable.
Solution 1 on PRs:
then on merge:
then on successful publish to GHCR (there is an event we can hook into for GitHub Actions):
OR
If there are any concerns about using "bad" images, we can filter tags/notifications depending on whether you care about -rc builds.
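To make the shape of this concrete, here's a rough sketch of Solution 1 as two workflows. Names, tags, and paths are illustrative, and the -rc filter relies on my reading of the registry_package webhook payload, so double-check it before relying on it:

```yaml
# publish.yml -- nothing image-related runs on PRs; build and push only on merge to main
name: Publish image
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write      # lets GITHUB_TOKEN push to GHCR
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}   # repo name must be lowercase
```

And the smoke tests hook into the publish event:

```yaml
# smoke.yml -- triggered by the GHCR publish event mentioned above
name: Smoke tests
on:
  registry_package:
    types: [published]
jobs:
  smoke:
    runs-on: ubuntu-latest
    # Optional -rc filter; this payload path is an assumption based on the
    # registry_package webhook docs, so verify it against a real payload.
    if: ${{ !contains(github.event.registry_package.package_version.version, '-rc') }}
    steps:
      - run: echo "pull the freshly published image and run smoke tests against it"
```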
-
Solution 1.5? on PRs:
then on merge to main:
-
Solution 2: If we still want to run tests against Docker images, we can also try this action (instead of docker-layer-caching). From what I understand, this action will
https://github.com/marketplace/actions/build-docker-images-using-cache
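I haven't used that particular action, but for comparison, the same idea (proper layer caching instead of docker-layer-caching) can be sketched with docker/build-push-action and BuildKit's GitHub Actions cache backend; the image name here is hypothetical:

```yaml
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v5
  with:
    context: .
    push: false
    tags: forms-runner:test          # hypothetical image name
    cache-from: type=gha
    cache-to: type=gha,mode=max      # mode=max also caches intermediate build stages
```

As I understand it, the type=gha backend stores layers in the same repository cache, so the size limit still applies, but eviction is least-recently-used rather than everything piling up behind a weekly key.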
-
I've had further conversations with @gautampachnandahmo, who has kindly assisted me in investigating this issue. The problem lies with the GitHub Action we're using for the caching: due to limitations in how cache hits are monitored, it must pull through all previously restored images on every run, meaning the cache will always fill up after a certain number of runs.

@gautampachnandahmo believes we should remove the caching altogether, as there's very little return on investment in using it, given the size of the team and the number of PRs actually raised per week. We should then properly investigate alternatives, but only introduce one once we know we won't hit similar limitations further down the line. The other option is to make our caches renew daily rather than weekly, but the issue will recur as soon as the development team grows, so it is not a real solution, and again, the benefits from caching are negligible. @gautampachnandahmo has agreed to discuss if you need further convincing. @jenbutongit, @endamccormack, @gstevenson: your thoughts?
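For reference, the daily-renewal alternative is just a finer-grained stamp in the cache key, e.g.:

```yaml
- id: stamp
  run: echo "day=$(date +%F)" >> "$GITHUB_OUTPUT"   # e.g. 2021-07-14, rotates daily
# then key the cache on the day instead of the week:
#   key: docker-layers-${{ steps.stamp.outputs.day }}-${{ github.run_id }}
```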
-
Just as a quick update: Jen and I have agreed that, for now, we're going to shorten the life of the cache. I will see that this is done in the coming weeks. This discussion will be kept open while we come to an agreement on the end goal, whether that's removing Docker image testing, removing caching altogether, or swapping the caching out for an alternative (see Solution 2 for a suggestion).

My personal view is that we should not test against a Docker image; this would negate the need for Docker layer caching altogether, but we could still make use of other caching (e.g. the yarn cache, sketched below). @endamccormack, @jenbutongit, @TheSpartan1980, @gstevenson, if you could provide your thoughts on this it'd be much appreciated.

EDIT: The bug to change the cache lifespan has been raised: https://collaboration.homeoffice.gov.uk/jira/browse/FORMS-808
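On the yarn cache point, a minimal sketch of what we'd keep if Docker dropped out of the PR builds entirely (node version illustrative):

```yaml
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
  with:
    node-version: 18                 # illustrative
    cache: yarn                      # setup-node keys the cache off yarn.lock
- run: yarn install --frozen-lockfile
- run: yarn test
```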
-
FYI: full uncached builds are now ~4min, and cached builds are at ~30s. Enjoy!