
Kubernetes refactoring #450

Open: wants to merge 19 commits into base: main
Conversation


@cbosdo cbosdo commented Sep 10, 2024

What does this PR change?

The installation, migration, upgrade and uninstall flows for Kubernetes have been refactored to:

  • no longer use a helm chart
  • use jobs for the migration phases
  • merge all the installation, migration and upgrade code into a single Reconcile() function.

This PR paves the way for the transition to an operator and improves the reliability of migrations.
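Merging the three flows amounts to a reconcile pattern: compare the desired state with what is deployed and derive the steps to run. A minimal, hypothetical sketch of that idea (the types and step names here are illustrative, not this PR's actual API):

```go
package main

import "fmt"

// State captures the little we need to decide what to do.
// These types are illustrative, not the PR's actual API.
type State struct {
	Deployed    bool   // is a server already deployed?
	Version     string // deployed version, empty if none
	MigrateFrom string // source server for a migration, empty otherwise
}

// Plan returns the high-level steps a single Reconcile() entry point
// would run for install, migrate and upgrade alike.
func Plan(s State, target string) []string {
	var steps []string
	switch {
	case s.MigrateFrom != "":
		steps = append(steps, "run migration jobs from "+s.MigrateFrom)
	case !s.Deployed:
		steps = append(steps, "create volumes and certificates")
	case s.Version != target:
		steps = append(steps, "run DB upgrade job")
	}
	steps = append(steps, "deploy server "+target)
	return steps
}

func main() {
	fmt.Println(Plan(State{}, "2024.12"))
	fmt.Println(Plan(State{Deployed: true, Version: "2024.08"}, "2024.12"))
}
```

A fresh install, a migration and an upgrade then only differ in the starting state, not in the entry point.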

As a side effect, the SSH agent can no longer be used, as that would require
running on a cluster node again. The tool creates a ConfigMap to store the SSH config and known_hosts, and a Secret
for a passwordless SSH key.

Using Kubernetes API modules also helps for code reuse with a future operator.

Test coverage

  • Unit tests were added

  • DONE

Links

Issue(s): #431 #432 #433 #436

  • DONE

Changelogs

Make sure the changelogs entries you are adding are compliant with https://github.com/uyuni-project/uyuni/wiki/Contributing#changelogs and https://github.com/uyuni-project/uyuni/wiki/Contributing#uyuni-projectuyuni-repository

If you don't need a changelog check, please mark this checkbox:

  • No changelog needed

If you uncheck the checkbox after the PR is created, you will need to re-run changelog_test (see below)

Before you merge

Check How to branch and merge properly!

@cbosdo cbosdo marked this pull request as draft September 10, 2024 17:36
@cbosdo cbosdo force-pushed the k8s-refactoring branch 7 times, most recently from 997fe46 to 8f9ea9e Compare September 16, 2024 21:01
@cbosdo cbosdo force-pushed the k8s-refactoring branch 3 times, most recently from e95556e to 837af3c Compare September 23, 2024 13:56
@cbosdo cbosdo force-pushed the k8s-refactoring branch 7 times, most recently from e5c4acd to d0bcbeb Compare October 6, 2024 19:34
@cbosdo cbosdo force-pushed the k8s-refactoring branch 3 times, most recently from dcbc987 to ffc6888 Compare October 25, 2024 07:08
@cbosdo cbosdo force-pushed the k8s-refactoring branch 2 times, most recently from d0ca4d4 to b20b6c7 Compare November 6, 2024 08:42
@cbosdo cbosdo force-pushed the k8s-refactoring branch 4 times, most recently from c406d19 to 2a9f8f0 Compare November 13, 2024 10:09
@cbosdo cbosdo marked this pull request as ready for review November 13, 2024 10:16
Contributor

@aaannz aaannz left a comment


Just a couple of notes, probably for the future.
I think we should merge this ASAP even if there are things left to change; reviewing this is a couple of hours' job.

mgradm/cmd/inspect/podman.go (outdated, resolved)
mgradm/cmd/install/kubernetes/utils.go (outdated, resolved)
mgradm/cmd/migrate/kubernetes/ssh.go (outdated, resolved)
mgradm/shared/kubernetes/certificates.go (outdated, resolved)
mgradm/shared/kubernetes/certificates.go (outdated, resolved)
mgradm/shared/kubernetes/reconcile.go (outdated, resolved)
Comment on lines +116 to +133

var noSSLPaths = []string{
"/pub",
"/rhn/([^/])+/DownloadFile",
"/(rhn/)?rpc/api",
"/rhn/errors",
"/rhn/ty/TinyUrl",
"/rhn/websocket",
"/rhn/metrics",
"/cobbler_api",
"/cblr",
"/httpboot",
"/images",
"/cobbler",
"/os-images",
"/tftp",
"/docs",
}
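Entries like `/rhn/([^/])+/DownloadFile` are regular-expression fragments, so presumably they end up anchored into the HTTP routing rules. A hypothetical sketch of matching a request path against such a list (the anchoring and the helper name are assumptions, not this PR's actual code):

```go
package main

import (
	"fmt"
	"regexp"
)

// A subset of the noSSLPaths list from the diff above.
var noSSLPaths = []string{
	"/pub",
	"/rhn/([^/])+/DownloadFile",
	"/(rhn/)?rpc/api",
}

// isNoSSL reports whether a request path matches one of the
// patterns above. Anchoring each pattern at the start of the path
// is an assumption made for this sketch.
func isNoSSL(path string) bool {
	for _, p := range noSSLPaths {
		re := regexp.MustCompile("^" + p)
		if re.MatchString(path) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isNoSSL("/rhn/manager/DownloadFile"))
	fmt.Println(isNoSSL("/rhn/manager/login"))
}
```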

This will need some cross-linking with the relevant parts of the Uyuni code, so that when we remove or add entries here we know where else to check.


Contributor Author

Yes, but sadly I don't see how we could improve this as long as we have SSL handled inside the container for podman and outside for Kubernetes.

mgradm/shared/templates/reusedCaIssuerTemplate.go (outdated, resolved)
uyuni-tools.spec (resolved)
nadvornik and others added 19 commits November 25, 2024 17:03
During a migration to Kubernetes the server is deployed after the rsync
to prepare the SSL secrets and PVC. This has the nasty effect of
corrupting the synchronized data with a too recent catalog version ID.
This would cause the DB migration to fail when starting the old PostgreSQL
server.

To work around this, move the data to the backup place after the
rsync instead of at the beginning of the DB upgrade.
After the k8s migration the pod has been restarted since the initial
connection was created. We need to reset the connection so that it does
not search for the old pod name.
Some pods take a long time to run. This is the case for the DB
upgrade finalization, which runs a potentially long reindex.
One of the issuer creation functions had two distinct behaviors, and this
only generated confusion when reading the whole code. This function
has been split, and some useless intermediary functions have been merged.

This, together with better function naming, should make the SSL setup code
more understandable.
In the Kubernetes world we need to link the ports to services. For now
we only have a TCP and a UDP service for the server, and the same for the
proxy, but in the short term we will need more services to allow splitting
into multiple pods.

This refactoring is preparing this split.
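Grouping port definitions by protocol into per-service buckets could be sketched like this (the types and helper are illustrative, not this PR's actual code):

```go
package main

import "fmt"

// Port is a minimal, illustrative port definition.
type Port struct {
	Name     string
	Protocol string // "TCP" or "UDP"
	Port     int
}

// GroupByProtocol splits a flat port list into one bucket per
// protocol, i.e. one candidate Service per protocol.
func GroupByProtocol(ports []Port) map[string][]Port {
	services := map[string][]Port{}
	for _, p := range ports {
		services[p.Protocol] = append(services[p.Protocol], p)
	}
	return services
}

func main() {
	ports := []Port{
		{Name: "https", Protocol: "TCP", Port: 443},
		{Name: "tftp", Protocol: "UDP", Port: 69},
	}
	fmt.Println(GroupByProtocol(ports))
}
```

Splitting into multiple pods later would then mostly mean refining the grouping key, not rewriting the callers.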
Running commands in a running container only works if there is a running
container, and it is harder to unit test.

In order to help share code for Kubernetes, the SanityCheck now gets
the existing deployment version by inspecting its image. This also
helps with adding unit tests for those checks.
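Getting the deployed version from the image boils down to parsing the tag out of the image reference. A simplified sketch (the helper is hypothetical; real code would also handle digest references):

```go
package main

import (
	"fmt"
	"strings"
)

// tagOf extracts the tag from an image reference such as
// "registry.example.com/uyuni/server:2024.08". It returns "" when
// no tag is present. Digest references (@sha256:...) are ignored in
// this sketch.
func tagOf(image string) string {
	// Only look at the last path component so a registry port
	// (host:5000/...) is not mistaken for a tag.
	last := image[strings.LastIndex(image, "/")+1:]
	if i := strings.LastIndex(last, ":"); i >= 0 {
		return last[i+1:]
	}
	return ""
}

func main() {
	fmt.Println(tagOf("registry.example.com:5000/uyuni/server:2024.08"))
}
```

Because this is a pure string function, it is trivially unit-testable, unlike running a command inside a live container.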
In order to later share code between those 3 very similar commands, we
need to share the parameters data structure.
Migration to Kubernetes is rather fragile, with:
    1. tasks running in `kubectl exec` or as a `pod`,
    2. the uyuni helm chart being deployed multiple times,
    3. `hostPath` mounts being used everywhere for the scripts to run and
       the data to read, forcing the script to run on the cluster node.

Here are the solutions to those problems:

1. Each step will run as a Job, and those won't be deleted automatically,
   so the user can access their logs afterwards.

2. Stop using the helm chart and deploy the resources when we need them.
   This will allow more control over what runs when and reduce the number
   of useless starts of the giant container.

   The PostgreSQL DB upgrade temporarily disables SSL in
   postgresql.conf so that it does not rely on the SSL certificates
   having been migrated yet.

3. The scripts to run for each step will be passed directly as an `sh -c`
   parameter to the generated Jobs.
   The migration data is stored in a dedicated volume and not on the
   host.
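Passing each step's script as an `sh -c` argument to a generated Job could be sketched like this, rendering the manifest as text (the field names follow the Kubernetes batch/v1 Job API; the helper itself is hypothetical):

```go
package main

import "fmt"

// jobManifest renders a minimal Job that runs a shell script via
// `sh -c`. No ttlSecondsAfterFinished is set, so the Job is not
// deleted automatically and its logs stay available afterwards.
func jobManifest(name, image, script string) string {
	return fmt.Sprintf(`apiVersion: batch/v1
kind: Job
metadata:
  name: %s
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: runner
          image: %s
          command: ["sh", "-c", %q]
`, name, image, script)
}

func main() {
	fmt.Print(jobManifest("db-upgrade", "registry.example.com/uyuni/server:latest", "echo step done"))
}
```

The real code uses the Kubernetes API modules rather than raw YAML, but the shape of the generated Job is the same.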

As a side effect, the SSH agent can no longer be used, as that would
require running on a cluster node again. The tool now creates a ConfigMap
to store the SSH config and known_hosts, and a Secret for a passwordless
SSH key.

The PersistentVolumes are not destroyed after the end of the first job
and are then reused by the next ones and the final deployment.

Using Kubernetes API modules also helps for code reuse with a future
operator.

Note that the old PostgreSQL database cannot be moved to a separate
PersistentVolume. Since we run `db_upgrade --link`, the old database is
linked by the new one and cannot be disposed of.
In order to share the same code for installation, migration and upgrade,
the RunSetup() function needs to move to the mgradm shared utils module.
Remove all server resources without relying on the helm chart.
Refactor the upgrade and install of the server to no longer need the helm
chart, as initiated for the migration, and merge all that logic into a
single Reconcile() function to avoid redundancy.

Merging the code into a single function will also help figure out how
to implement an operator in the future.
There is no need to run a potentially lengthy reindex on minor
upgrades, only on major ones.

Don't call su with the `-` parameter, as it shows the warning message for
terminals... and that looks ugly in logs.
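Deciding whether a reindex is needed reduces to comparing the major version components. A hypothetical sketch, assuming `major.minor`-style version strings:

```go
package main

import (
	"fmt"
	"strings"
)

// needsReindex reports whether the upgrade crosses a major version
// boundary, in which case a potentially long reindex is required.
// Versions are assumed to look like "16.2" (major.minor).
func needsReindex(oldVersion, newVersion string) bool {
	major := func(v string) string {
		return strings.SplitN(v, ".", 2)[0]
	}
	return major(oldVersion) != major(newVersion)
}

func main() {
	fmt.Println(needsReindex("16.1", "16.2")) // minor upgrade
	fmt.Println(needsReindex("15.4", "16.0")) // major upgrade
}
```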
Since helm is no longer used to install Uyuni, but only cert-manager,
rename the flags. Also drop the flags that are no longer used for the
server after the refactoring.
With CGO enabled there are include problems on that architecture and
that would probably require cross-compiling.
The Traefik helm chart changed the structure of the expose property starting
with version 27. Read the chart version from the traefik.yaml file and write
the config accordingly.
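Reading the chart version and branching on it could look like the following sketch. The exact layout of traefik.yaml and the rendered expose snippet are assumptions for illustration; the Traefik chart moved expose to a per-entrypoint map in v27:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// chartMajor extracts the major version from a "version: 27.0.2"
// style line. The exact layout of traefik.yaml is assumed here.
func chartMajor(yaml string) int {
	for _, line := range strings.Split(yaml, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "version: ") {
			v := strings.TrimPrefix(line, "version: ")
			major, _ := strconv.Atoi(strings.SplitN(v, ".", 2)[0])
			return major
		}
	}
	return 0
}

// exposeSnippet picks the config structure matching the chart
// version: a plain boolean before v27, a per-entrypoint map after.
func exposeSnippet(major int) string {
	if major >= 27 {
		return "expose:\n  default: true"
	}
	return "expose: true"
}

func main() {
	doc := "apiVersion: helm.cattle.io/v1\nversion: 27.0.2\n"
	fmt.Println(exposeSnippet(chartMajor(doc)))
}
```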
Running the first user creation from outside the container relies on the
pod being seen as ready by Kubernetes... and sometimes that takes longer
than others. Calling the API from the setup script inside the container
allows using localhost and does not rely on the ingress to route the request.
During the installation, there was a message indicating that the
timezone from the host couldn't be set in the container. This was due to
not stripping the trailing newline from the command output.
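The fix amounts to trimming the trailing newline from the command output before using it. A tiny illustration (the command in the comment is an assumption, not necessarily what the tool runs):

```go
package main

import (
	"fmt"
	"strings"
)

// cleanOutput strips surrounding whitespace, in particular the
// trailing newline that command output usually carries.
func cleanOutput(raw string) string {
	return strings.TrimSpace(raw)
}

func main() {
	// Output of something like `timedatectl show -p Timezone --value`
	// typically ends with a newline; using it verbatim as a timezone
	// name then fails.
	raw := "Europe/Paris\n"
	fmt.Printf("%q\n", cleanOutput(raw))
}
```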
@nadvornik nadvornik left a comment


LGTM

3 participants