
Add possibility to "reset" autoclose gates #2

Open
iverberk opened this issue May 12, 2019 · 5 comments

Comments

@iverberk

I'm trying to implement a use case where I put up an autoclose gate and wait for all the dependent items to pass. For the next run of my deployment pipeline, I want to re-enable the same autoclose gate with exactly the same logic. Currently the new autogate is simply ignored, because the .autoclose extension has been removed and the file is already there. Instead of generating a new file every time, I'd like to reuse the same autoclose gate.

@JohannesRudolph
Member

JohannesRudolph commented May 12, 2019 via email

@iverberk
Author

Hi Johannes,

I was just thinking about this a little more. Maybe it makes more sense to create a new autoclose gate for every version we deploy. I was just wondering about the number of files that would eventually end up in the git repo.

I agree with you that it might make more sense to have a proper trail than to keep overwriting the same thing (although it is in the git history).

Our use case is this:

  1. We have a number of target accounts that we deploy a Kubernetes platform to. This is a deployment pipeline that consists of several jobs.
  2. The accounts are split into non-prod and prod accounts, effectively creating stages of roll-out.
  3. We want a mechanism where we start the non-prod pipelines and wait for all of them to finish successfully before continuing with the prod pipelines.
  4. The idea is to create an autoclose gate for each stage and have it wait for all the dependent pipelines to pass.
  5. Every pipeline would write its own gate and eventually, after all the non-prod pipelines have finished, the autoclose gate would trigger the next stage.
  6. We were thinking of just resetting the gate for the next deployment because the logic is the same each time and it seems wasteful to create a new autoclose file for each deploy.
  7. However, resetting the gate (removing the autoclose file) and putting the same dependent items back in it would immediately close the gate again. We actually want the new gate to wait for new versions of all the pipeline gates, but that also means writing new gates for each pipeline. We could incorporate the semver into the gate name to enforce this.

So I guess we are better off just creating new versions of each pipeline gate and writing a new autoclose gate that matches all the specific pipeline gates for that version. Do you agree or is there a better way to make this process work?
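The versioned-gate scheme described above could be sketched like this. Note this is only an illustration: the naming convention `<pipeline>-<semver>` and the helper names are assumptions, not something gate-resource itself mandates.

```python
# Sketch of the per-version gate naming scheme discussed above.
# The "<pipeline>-<semver>" convention is an assumption for illustration;
# gate-resource does not prescribe any particular naming.

def gate_name(pipeline: str, version: str) -> str:
    """Build a per-version gate name so each deploy gets a fresh gate."""
    return f"{pipeline}-{version}"

def autoclose_dependencies(pipelines: list[str], version: str) -> list[str]:
    """Gates the stage's autoclose gate should wait for in this version."""
    return [gate_name(p, version) for p in pipelines]

def stage_can_proceed(passed_gates: set[str],
                      pipelines: list[str],
                      version: str) -> bool:
    """The autoclose gate should only close when every dependent gate for
    this exact version has passed, so gates left over from an older
    version cannot accidentally open the new stage."""
    return all(g in passed_gates
               for g in autoclose_dependencies(pipelines, version))
```

Because the version is baked into every gate name, "resetting" is no longer needed: each deploy simply produces a fresh set of gate files, which also leaves a clean audit trail in the repo.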

Thanks!

@JohannesRudolph
Member

I was just wondering about the amount of files that eventually end up in the git repo.

Yes, that's definitely an issue, as is contention on the repo. At the end of the day we do abuse git as a distributed k/v store, which is not exactly what it was built to be :-) Our gate repo has accumulated a few hundred thousand commits, and things certainly go slower than they should. However, gates are only a tiny portion of our total build time, so it's not such a big deal. One small thing we do is run a script that regularly deletes "orphaned" autogates (e.g. from feature branch builds that failed and won't be needed anymore).
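A cleanup along the lines Johannes describes might look like the sketch below. The directory layout and the idea that a gate file encodes its branch as `<branch>--<build>.autoclose` are assumptions made for illustration; the actual script isn't shown in this thread.

```python
# Sketch of an "orphaned autogate" cleanup, assuming gate files live in
# one directory and are named "<branch>--<build>.autoclose". Both the
# layout and the naming are assumptions for illustration only.
from pathlib import Path

def orphaned_gates(gate_dir: Path, active_branches: set[str]) -> list[Path]:
    """Autoclose gates whose branch no longer exists (e.g. a deleted
    feature branch) and which will therefore never close."""
    orphans = []
    for gate in gate_dir.glob("*.autoclose"):
        branch = gate.stem.split("--", 1)[0]
        if branch not in active_branches:
            orphans.append(gate)
    return sorted(orphans)

def prune(gate_dir: Path, active_branches: set[str]) -> None:
    """Delete orphaned gates; a real script would also commit and push."""
    for gate in orphaned_gates(gate_dir, active_branches):
        gate.unlink()
```

Run periodically (e.g. from a scheduled job), this keeps the gate repo from accumulating files for builds that will never complete.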

We built gate-resource as a workaround to our immediate scaling problems around a mono-repo build & deploy pipeline, so we cut some corners intentionally. Maybe concourse will provide better support for a scenario like ours in the future.

Your use case and intention sound very legit. Unless you deploy hundreds of versions each day you should not have any immediate scaling issues around gate resources. If all you want to have is a distributed lock you can take a look at https://github.com/concourse/pool-resource, which heavily inspired gate-resource but turned out to be too simplistic for what we needed.

@iverberk
Author

I think the pool resource is too simplistic for our use case too, unless you see an option I'm missing? We know up front how many pipelines we need to wait for, but how do we check that all of them have completed? I don't see a way to do that with the pool resource; your resource seems more suited to this. We do use the pool resource to batch our deployments and lock the entire process until all stages have been deployed successfully.

I think we need to prune our git repo from time to time. We deploy at most a couple of times a day, I think, so it won't be too much of a problem, but I'd like to be ahead of the curve.

@JohannesRudolph
Member

Hi, just want to add that I’m working on a 2.0 of gate-resource that addresses these concerns.

The first beta uses shallow clones to speed up gate operations significantly and maintain O(1) performance with respect to the number of commits. Another planned addition is a cleanup script that removes expired gates, see #1
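The shallow-clone idea can be sketched as follows. The thread doesn't show the exact invocation gate-resource 2.0 uses, so the helper below only assembles a standard git command line; the flags are plain git options, nothing specific to gate-resource.

```python
# Sketch of the shallow-clone approach: fetching only the latest commit
# keeps each gate operation O(1) in the repository's history size,
# regardless of how many commits the gate repo has accumulated.
# These are standard git flags, not gate-resource-specific behavior.

def shallow_clone_cmd(repo_url: str, dest: str,
                      branch: str = "master") -> list[str]:
    """Build a git command that clones only the tip of one branch."""
    return [
        "git", "clone",
        "--depth", "1",        # only the most recent commit
        "--single-branch",     # skip all other refs entirely
        "--branch", branch,
        repo_url, dest,
    ]
```

The command list can be handed to `subprocess.run`; with hundreds of thousands of commits in the history, the difference versus a full clone on every gate check is substantial.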
