Drivers build not triggered for x86_64 #1158
Note: all the jobs have always been triggered correctly during the normal weekly kernel-crawler CI runs. Likewise, PR CI correctly triggered all presubmit jobs.
I might have found a good candidate; note the difference between the presubmit and postsubmit Prow documentation.
Pre:
Post:
So, postsubmits are based on the specific push event triggered by GitHub. Payloads are capped at 25MB: https://docs.github.com/en/webhooks-and-events/webhooks/webhook-events-and-payloads#push
But this is not our case IMHO, because Prow receives the event with a full payload (otherwise it wouldn't be able to decode it); there might, however, be some other hard limit on the number of file changes shipped with the event. Prow's code confirms this behavior: https://github.com/kubernetes/test-infra/blob/b7c54c4d7991d27ee926d184d3a09c7b7738c65b/prow/plugins/trigger/push.go#L28: as you can see, it uses
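The mechanism described above can be sketched roughly like this (a minimal illustrative sketch with hypothetical type and function names, not the exact Prow code): the changed-file set is built as the union of the Added/Modified/Removed lists of every commit in the push payload, so any file GitHub omits from the payload is simply invisible to downstream `run_if_changed` matching.

```go
package main

import "fmt"

// Commit mirrors the relevant fields of a GitHub push-event commit.
// Field names are illustrative, not the exact Prow types.
type Commit struct {
	Added    []string
	Modified []string
	Removed  []string
}

// changedFiles unions the file lists of every commit in the payload,
// deduplicating along the way. If GitHub caps what is shipped in the
// webhook payload, anything beyond the cap never reaches this function.
func changedFiles(commits []Commit) []string {
	seen := map[string]bool{}
	var out []string
	for _, c := range commits {
		for _, f := range append(append(c.Added, c.Modified...), c.Removed...) {
			if !seen[f] {
				seen[f] = true
				out = append(out, f)
			}
		}
	}
	return out
}

func main() {
	commits := []Commit{
		{Added: []string{"driverkit/config/a.yaml"}},
		{Modified: []string{"driverkit/config/a.yaml", "driverkit/config/b.yaml"}},
	}
	fmt.Println(changedFiles(commits))
}
```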
So, the GitHub HTTP API does limit the list of changed files to 3000: https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#list-pull-requests-files This is the method used to fetch pull request changes; therefore I'd expect them to be cut at 3000 files too. Instead, we are experiencing the issue only on pushes to master. Push events have the same 3000-changes hard limit. I don't get why this works on pull requests but not on pushes to master :/ I'd expect the issue to be present on pull requests too!
I now get why it is working on presubmits!
I see 2 main paths forward:
Adding 2 more possible "fixes":
Note: these would not fix the "new driverversion added" issue.
A short-term solution would be to change all postsubmit triggers from triggering on any change to
to this:
In this case we would trigger ALL jobs every time, even for unmodified configs, and each job would then build its own configs.
PROs:
CONs:
Let's assume that each week we build ~90% of the supported distros; it would not be that impactful (i.e. we would just spawn 10% useless pods). @maxgio92 wdyt?
We could also let the entrypoint of each job do the
Thank you for this detailed investigation, and sorry for my late response. I'd try as hard as possible to avoid triggering all distro jobs.
What about, instead of reducing the granularity of the grid, increasing the job frequency as you suggested, from weekly to once every two days for example, @FedeDP? PS: in any case, IMHO both are short-term solutions.
That would unfortunately not solve the issue when we add a completely new driver version from scratch.
Agree btw :/
Issues go stale after 90d of inactivity. Mark the issue as fresh. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now, please do so. Provide feedback via https://github.com/falcosecurity/community. /lifecycle stale
/remove-lifecycle stale |
Describe the bug
Not all driver-building postsubmit jobs were triggered after new driverkit configurations were introduced, as part of the usual weekly crawling result from kernel-crawler.
How to reproduce it
Please check:
Expected behaviour
All the jobs for x86_64 should have been triggered accordingly.