Problem description (if applicable)
The current kubeconfig management and workflow in airshipctl has outlived its usefulness.
There are several common problems that we have with kubeconfigs right now:
kubeconfigs are generated and dumped to the filesystem for each phase; this only slows down the deployment process and adds unnecessary complexity, since most of the time the kubeconfig remains the same and only needs to be updated or merged on demand;
it's unclear which particular kubeconfig will be generated and used in a phase, since kubeconfigs are retrieved iteratively from all the sources defined in the cluster map.
This approach introduces delays, it's inconvenient for the end user to define a kubeconfig and its sources in a separate entity, and the information stored in the cluster map has no practical benefit since it duplicates information available elsewhere;
since we generate kubeconfigs for every phase, we have to carry the kubeconfig entrypoint within the phase config manifests and therefore decrypt the kubeconfig from the manifests even when we don't need it. As a result, airshipctl subcommands don't work
at all unless SOPS decryption keys are defined in the environment, and this slows everything down. There are already patch sets (PS) downstream with workarounds for that problem (removing the kubeconfig entrypoint from the phase bundle), since it has become really annoying for colleagues who use airshipctl every day;
we lack integration with common Kubernetes CLI tools that support the KUBECONFIG env var and the --kubeconfig flag (the flag was removed from airshipctl in favor of the cluster map);
we still don't have a handy and reliable approach to merging kubeconfigs: the Kubeconfig interface in airshipctl doesn't provide functionality to do that properly, and engineers attempt to work around this use case with the
toolbox container and custom bash scripts that run kubectl under the hood (for instance, the capd phase plan); a minimal merging sketch with client-go follows this list;
we work mostly with cluster names instead of contexts, assuming that if they differ, the corresponding mapping is stored in the cluster map; however, there is no verification or enforcement of that. The Kubernetes client, which consumes the kubeconfig information, works with contexts rather than clusters, so we should either use contexts directly or enforce that the context defined for a cluster actually references that cluster;
the Kubeconfig and builder interfaces in airshipctl are bulky and inefficient, and much of their functionality isn't even used.
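For illustration of the merging gap mentioned above, here is a minimal sketch of how kubeconfig merging could be done with client-go's clientcmd package (the same merge semantics kubectl applies to a colon-separated KUBECONFIG); the file paths are hypothetical:

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical paths: the standard kubeconfig plus a freshly generated
	// per-cluster kubeconfig that should be merged into it.
	rules := &clientcmd.ClientConfigLoadingRules{
		Precedence: []string{"/home/user/.kube/config", "/tmp/target-cluster.kubeconfig"},
	}

	// Load() merges all files listed in Precedence into one config, the same
	// way kubectl treats a colon-separated KUBECONFIG variable.
	merged, err := rules.Load()
	if err != nil {
		log.Fatal(err)
	}

	// Persist the merged result back to the standard location.
	if err := clientcmd.WriteToFile(*merged, "/home/user/.kube/config"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("contexts after merge:", len(merged.Contexts))
}
```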
Proposed change
To solve the described problems we should move forward with a completely different approach, which must be:
simple
easy to understand
fast
reliable
predictable
We have the following use cases to cover:
- be able to work with a std kubeconfig (or other specified location) that already exists
- be able to modify the std kubeconfig (or other specified location) by adding/removing clusters/contexts (a sketch of removing a context follows this list)
- added/removed parts of kubeconfigs are:
- newly created cluster (capd, capz...)
- generated kubeconfig (e.g. our current baremetal use-case)
- decrypted kubeconfig (if we want to keep kubeconfig parts in the repo)
- |
e.g. we may want to create a kind cluster, add its kubeconfig to the std kubeconfig, deploy capi/capz or capm3 there, deploy the target cluster,
get its kubeconfig and add it to the std place, make the move, and kill the kind cluster. In that case we'll end up with the kubeconfig in the std place.
we may want to encrypt part of it and add it to the catalogue (this can be done via a phase);
we will also need a phase that decrypts the kubeconfig and adds it back, so a new user can work with it.
- we can see that kind follows a similar approach: it works with the std location, and when you add a cluster it adds its info to the kubeconfig.
- |
lots of tools work with the std location; nobody invents such complicated logic.
It's always easier to get the kubeconfig from the remote cluster (a special phase) and put it in the std location, so other tools will also work.
This is a more operator-friendly approach.
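As a minimal sketch of the add/remove use case above, the snippet below drops a context (and the cluster/user entries it references) from a kubeconfig at the standard or any other location, again using client-go's clientcmd package; the path and context name are assumptions for illustration:

```go
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

// removeContext deletes a context and the cluster/user entries it references
// from the kubeconfig file at path. Names and paths are illustrative only.
func removeContext(path, contextName string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	ctx, ok := cfg.Contexts[contextName]
	if !ok {
		return nil // nothing to remove
	}
	delete(cfg.Contexts, contextName)
	delete(cfg.Clusters, ctx.Cluster)
	delete(cfg.AuthInfos, ctx.AuthInfo)
	if cfg.CurrentContext == contextName {
		cfg.CurrentContext = ""
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	// "kind-ephemeral" is a hypothetical context name for the kind cluster
	// that gets deleted after the move.
	if err := removeContext("/home/user/.kube/config", "kind-ephemeral"); err != nil {
		log.Fatal(err)
	}
}
```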
The proposal for the new kubeconfig workflow is as follows:
airshipctl should work with one single kubeconfig file, by default located at ~/.kube/config.
there will be two ways to override this location: the standard CLI options (the --kubeconfig flag or the KUBECONFIG env variable) or phase options;
airshipctl and its phases start to work with contexts instead of clusters. The context name should be defined in the phase options, or the user can override it using the --context flag of the phase run subcommand;
to perform operations on the kubeconfig, a new KRM function (kubeconfig-manager) will be introduced; it can perform the following actions: save the kubeconfig from a kustomize entrypoint (decryption); get a kubeconfig from a secret (and merge it into the existing one); show parts of or the entire kubeconfig; remove contexts from the kubeconfig;
the KRM function will be defined as a phase with appropriate options and will perform actions on demand;
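To make the context-oriented workflow concrete, here is a minimal sketch of how a phase executor could resolve the kubeconfig location (honouring KUBECONFIG and falling back to ~/.kube/config) and build a client for a named context with client-go; the context name is a hypothetical value that would come from phase options or the --context flag:

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The default loading rules honour the KUBECONFIG env variable and fall
	// back to ~/.kube/config, exactly like kubectl and other standard tools.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()

	// "target-cluster-admin" is a hypothetical context name that a phase
	// would take from its options or from the --context flag.
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "target-cluster-admin"}

	restConfig, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}

	clientset, err := kubernetes.NewForConfig(restConfig)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("built client %T for context %q\n", clientset, overrides.CurrentContext)
}
```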
Potential impacts
Improved performance and user experience.