
Add infrastructure for testing environment #559

Merged: 13 commits from the test branch into RamenDR:main on Nov 2, 2022

Conversation

nirs
Member

@nirs nirs commented Sep 29, 2022

This PR adds the drenv tool and python package for creating a minikube based
local or CI test environment.

The changes include a regional DR environment with ocm, rook, minio and rbd
mirroring.

The goals of this change:

  • having an easy and reliable way to set up a testing environment, making it
    easy for new contributors
  • having code that is easy to understand and maintain

When this work is finished, it should replace the hack deployment scripts.

Some components installed by hack/ocm-minikube.sh are missing:

I'm not sure if these are needed, which version is required, which is
the right repo to install them from, and what changes are needed for
ramen.

Building and installing ramen is not done yet since it depends on the missing
components.

The next step is adding an environment for metro DR with external ceph storage.
This will probably require some changes in the rook scripts.
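
For illustration only, an environment file in the style of `example.yaml` might look roughly like the sketch below; the key names (`profiles`, `scripts`, `file`, `args`) are assumptions based on the fields used by `drenv/__main__.py` in this PR, not a definitive format:

```
# Hypothetical sketch of a drenv environment file; real key names may differ.
profiles:
  - name: ex1                 # minikube profile for this cluster
    scripts:
      - file: example/start   # run per cluster, passing the profile name
        args: [ex1]
scripts:                      # global scripts, run after all clusters are up
  - file: example/test
    args: [ex1]
```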

@nirs nirs force-pushed the test branch 3 times, most recently from 36d1da6 to 2b4172e on September 29, 2022 18:21
@nirs nirs self-assigned this Sep 29, 2022
@nirs nirs added the enhancement New feature or request label Sep 29, 2022
@nirs nirs force-pushed the test branch 2 times, most recently from cb1fb98 to 438b4ec on September 29, 2022 20:03
Member

@ShyamsundarR ShyamsundarR left a comment

Have not tested it yet... but code looks fine.

To start the environment:

```
drenv start example.yaml
```
Member

Older versions of minikube will complain that the --extra-disk option does not exist. A newer version of minikube is needed.

Member

And after that it failed with this:

2022-10-28 15:02:16,610 ERROR   [ex1] Cluster failed
Traceback (most recent call last):
  File "/home/user1/projects/github/ramen/test/drenv/__main__.py", line 120, in execute
    f.result()
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib64/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/user1/projects/github/ramen/test/drenv/__main__.py", line 147, in start_cluster
    run_script(script, name=profile["name"])
  File "/home/user1/projects/github/ramen/test/drenv/__main__.py", line 178, in run_script
    run(script["file"], *script["args"], name=name)
  File "/home/user1/projects/github/ramen/test/drenv/__main__.py", line 201, in run
    f"[{name}] Command {cmd} failed rc={p.returncode}\n"
RuntimeError: [ex1] Command ('example/start', 'ex1') failed rc=125

Last messages:
  /usr/bin/env: invalid option -- 'S'
  Try '/usr/bin/env --help' for more information.

Member Author

I'll add a note about the minimal minikube version.

Which OS are you running? This was tested on Fedora 36, which has:

$ env --version
env (GNU coreutils) 9.0

But the -S argument is needed only for unbuffered output, and we can solve
this in a better way.

Member

I am using CentOS 7.
The patch you posted fixed the problem.

Member Author

Using CentOS 7 (8 years old now?) is a bit extreme for a development tool; I think it will
be hard to support it.

I think supporting current Fedora (and maybe current Ubuntu/Debian) is good enough.

Member Author

> Older versions of minikube will complain that the --extra-disk option does not exist. A newer version of minikube is needed.

The current version mentions --extra-disk and the minikube version that I tested with.

@nirs
Member Author

nirs commented Oct 28, 2022

@BenamarMk thanks for testing! I hope the last commit works for you.

@nirs nirs requested a review from BenamarMk October 28, 2022 22:17
nirs added 7 commits October 30, 2022 20:39
The `drenv` tool creates a minikube based test environment for local
testing or for running system tests in CI.

The `drenv` tool uses a yaml file describing the clusters and how to
deploy them in a more declarative way. The `example.yaml` file is a
simple example demonstrating how the tool works and how to write scripts
for new environments.

The `drenv` python package provides helpers for scripts written in
python:
- logging progress and output from subprocesses
- running kubectl in the minikube environment
- waiting until a resource or resource status exists (not provided by
  kubectl)
- using yaml templates
- using temporary kustomization directories
- sharing information between scripts

Signed-off-by: Nir Soffer <[email protected]>
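
As a rough illustration of how a start script could use these helpers; `drenv.wait_for()` appears later in this PR, while the `kubectl` helper name and both signatures are assumptions:

```
# Hypothetical sketch of a cluster start script using the drenv helpers.
import sys
import drenv

cluster = sys.argv[1]

# Run kubectl against the minikube profile for this cluster (assumed helper).
drenv.kubectl("apply", "-f", "operator.yaml", profile=cluster)

# Wait until the resource exists and reports status, instead of looping over
# kubectl and ignoring errors until the expected output appears.
drenv.wait_for("deploy/example-operator", profile=cluster)
```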
This environment uses a hub cluster and 2 managed clusters, with rook
providing storage on both managed clusters.

This change adds the olm deployment on all clusters. This is a simple
deployment without any local yaml files.

This is a good example of using `drenv.wait_for()` to wait until a
resource is created and reports status.phase. In the hack scripts this
was handled by ignoring all errors until the expected output was seen.

Example usage:

    $ drenv start regional-dr.yaml
    2022-09-29 18:39:29,517 INFO    [env] Using regional-dr.yaml
    2022-09-29 18:39:29,521 INFO    [dr1] Starting cluster
    2022-09-29 18:39:29,521 INFO    [dr2] Starting cluster
    2022-09-29 18:39:29,522 INFO    [hub] Starting cluster
    2022-09-29 18:40:11,510 INFO    [hub] Cluster started in 41.99 seconds
    2022-09-29 18:40:11,510 INFO    [hub] Starting olm/start
    2022-09-29 18:40:30,658 INFO    [dr2] Cluster started in 61.14 seconds
    2022-09-29 18:40:30,658 INFO    [dr2] Starting olm/start
    2022-09-29 18:40:35,003 INFO    [hub] olm/start completed in 23.49 seconds
    2022-09-29 18:40:53,662 INFO    [dr1] Cluster started in 84.14 seconds
    2022-09-29 18:40:53,663 INFO    [dr1] Starting olm/start
    2022-09-29 18:40:57,056 INFO    [dr2] olm/start completed in 26.40 seconds
    2022-09-29 18:41:23,644 INFO    [dr1] olm/start completed in 29.98 seconds
    2022-09-29 18:41:23,645 INFO    [env] Started in 114.13 seconds

Signed-off-by: Nir Soffer <[email protected]>
This change installs cluster-manager on the hub cluster, and klusterlet
on the managed clusters.

This is a good example of global scripts running after all clusters are
deployed (klusterlet/start), and of a self test (klusterlet/test)
ensuring that the deployment works before we use it to test ramen
itself.

Example usage:

    $ drenv start regional-dr.yaml
    2022-09-29 18:54:44,029 INFO    [env] Using regional-dr.yaml
    2022-09-29 18:54:44,033 INFO    [dr1] Starting cluster
    2022-09-29 18:54:44,034 INFO    [dr2] Starting cluster
    2022-09-29 18:54:44,034 INFO    [hub] Starting cluster
    2022-09-29 18:55:25,787 INFO    [dr1] Cluster started in 41.75 seconds
    2022-09-29 18:55:25,787 INFO    [dr1] Starting olm/start
    2022-09-29 18:55:43,598 INFO    [hub] Cluster started in 59.56 seconds
    2022-09-29 18:55:43,598 INFO    [hub] Starting olm/start
    2022-09-29 18:55:54,471 INFO    [dr1] olm/start completed in 28.68 seconds
    2022-09-29 18:56:02,722 INFO    [dr2] Cluster started in 78.69 seconds
    2022-09-29 18:56:02,722 INFO    [dr2] Starting olm/start
    2022-09-29 18:56:11,994 INFO    [hub] olm/start completed in 28.40 seconds
    2022-09-29 18:56:11,994 INFO    [hub] Starting cluster-manager/start
    2022-09-29 18:56:26,088 INFO    [dr2] olm/start completed in 23.37 seconds
    2022-09-29 18:57:21,465 INFO    [hub] cluster-manager/start completed in 69.47 seconds
    2022-09-29 18:57:21,465 INFO    [env] Starting klusterlet/start
    2022-09-29 18:58:15,991 INFO    [env] klusterlet/start completed in 54.53 seconds
    2022-09-29 18:58:15,991 INFO    [env] Starting klusterlet/test
    2022-09-29 18:58:36,262 INFO    [env] klusterlet/test completed in 20.27 seconds
    2022-09-29 18:58:36,262 INFO    [env] Started in 232.23 seconds

Signed-off-by: Nir Soffer <[email protected]>
This change installs rook on the managed clusters. These clusters will
be configured for mirroring in the next change.

Deploying rook takes about 170 seconds per cluster, but because we
deploy all clusters in parallel, the total time to start the environment
was increased by less than 100 seconds.

Example usage:

    $ drenv start regional-dr.yaml
    2022-09-29 19:19:48,769 INFO    [env] Using regional-dr.yaml
    2022-09-29 19:19:48,773 INFO    [dr1] Starting cluster
    2022-09-29 19:19:48,774 INFO    [dr2] Starting cluster
    2022-09-29 19:19:48,774 INFO    [hub] Starting cluster
    2022-09-29 19:20:31,366 INFO    [dr1] Cluster started in 42.59 seconds
    2022-09-29 19:20:31,366 INFO    [dr1] Starting olm/start
    2022-09-29 19:20:50,842 INFO    [dr2] Cluster started in 62.07 seconds
    2022-09-29 19:20:50,842 INFO    [dr2] Starting olm/start
    2022-09-29 19:20:58,195 INFO    [dr1] olm/start completed in 26.83 seconds
    2022-09-29 19:20:58,195 INFO    [dr1] Starting rook/start
    2022-09-29 19:21:13,383 INFO    [hub] Cluster started in 84.61 seconds
    2022-09-29 19:21:13,383 INFO    [hub] Starting olm/start
    2022-09-29 19:21:14,692 INFO    [dr2] olm/start completed in 23.85 seconds
    2022-09-29 19:21:14,692 INFO    [dr2] Starting rook/start
    2022-09-29 19:21:39,497 INFO    [hub] olm/start completed in 26.11 seconds
    2022-09-29 19:21:39,497 INFO    [hub] Starting cluster-manager/start
    2022-09-29 19:22:53,314 INFO    [hub] cluster-manager/start completed in 73.82 seconds
    2022-09-29 19:23:51,425 INFO    [dr1] rook/start completed in 173.23 seconds
    2022-09-29 19:24:03,886 INFO    [dr2] rook/start completed in 169.19 seconds
    2022-09-29 19:24:03,886 INFO    [env] Starting klusterlet/start
    2022-09-29 19:24:54,121 INFO    [env] klusterlet/start completed in 50.23 seconds
    2022-09-29 19:24:54,121 INFO    [env] Starting klusterlet/test
    2022-09-29 19:25:15,914 INFO    [env] klusterlet/test completed in 21.79 seconds
    2022-09-29 19:25:15,914 INFO    [env] Started in 327.14 seconds

Signed-off-by: Nir Soffer <[email protected]>
This change installs minio on the managed clusters. The yaml file was
copied from the hack directory as is.

It would be nice to add a self test ensuring that minio works, but we
can do this later.

Example usage:

    $ drenv start regional-dr.yaml
    2022-09-29 19:35:11,775 INFO    [env] Using regional-dr.yaml
    2022-09-29 19:35:11,780 INFO    [dr1] Starting cluster
    2022-09-29 19:35:11,780 INFO    [dr2] Starting cluster
    2022-09-29 19:35:11,781 INFO    [hub] Starting cluster
    2022-09-29 19:35:52,351 INFO    [dr2] Cluster started in 40.57 seconds
    2022-09-29 19:35:52,351 INFO    [dr2] Starting olm/start
    2022-09-29 19:36:12,383 INFO    [dr1] Cluster started in 60.60 seconds
    2022-09-29 19:36:12,383 INFO    [dr1] Starting olm/start
    2022-09-29 19:36:22,934 INFO    [dr2] olm/start completed in 30.58 seconds
    2022-09-29 19:36:22,934 INFO    [dr2] Starting rook/start
    2022-09-29 19:36:32,733 INFO    [hub] Cluster started in 80.95 seconds
    2022-09-29 19:36:32,733 INFO    [hub] Starting olm/start
    2022-09-29 19:36:37,277 INFO    [dr1] olm/start completed in 24.89 seconds
    2022-09-29 19:36:37,277 INFO    [dr1] Starting rook/start
    2022-09-29 19:37:02,223 INFO    [hub] olm/start completed in 29.49 seconds
    2022-09-29 19:37:02,223 INFO    [hub] Starting cluster-manager/start
    2022-09-29 19:38:21,571 INFO    [hub] cluster-manager/start completed in 79.35 seconds
    2022-09-29 19:39:06,525 INFO    [dr2] rook/start completed in 163.59 seconds
    2022-09-29 19:39:06,525 INFO    [dr2] Starting minio/start
    2022-09-29 19:39:20,373 INFO    [dr1] rook/start completed in 163.10 seconds
    2022-09-29 19:39:20,373 INFO    [dr1] Starting minio/start
    2022-09-29 19:39:25,599 INFO    [dr2] minio/start completed in 19.07 seconds
    2022-09-29 19:39:34,447 INFO    [dr1] minio/start completed in 14.07 seconds
    2022-09-29 19:39:34,447 INFO    [env] Starting klusterlet/start
    2022-09-29 19:40:28,625 INFO    [env] klusterlet/start completed in 54.18 seconds
    2022-09-29 19:40:28,625 INFO    [env] Starting klusterlet/test
    2022-09-29 19:40:50,919 INFO    [env] klusterlet/test completed in 22.29 seconds
    2022-09-29 19:40:50,919 INFO    [env] Started in 339.14 seconds

Signed-off-by: Nir Soffer <[email protected]>
This change configures rbd mirroring between both clusters, and runs a
self test ensuring that mirroring works in both directions.

Since rbd mirroring depends on rook in both clusters, we must run it at
the end, after both managed clusters are ready. The start script
configures mirroring in both directions in parallel, by splitting the
code into configure and wait steps and waiting for both clusters in
parallel. The test script uses the same approach to test mirroring in
parallel.

Another way to do parallel configuration and testing would be to use an
executor (like drenv uses) in the script. Yet another way would be to
run the global scripts in parallel, so we can configure and test rbd
mirroring while deploying and testing klusterlet. I think we need to
look at this later, after we add the metro DR environment.
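
A minimal sketch of that configure-then-wait split, with hypothetical function names (the actual scripts in this PR may be structured differently):

```
# Hypothetical sketch: kick off both directions first, then wait for both,
# so the two clusters make progress in parallel.
def configure_mirroring(cluster, peer):
    ...  # create the rbd mirror peer resources on `cluster`

def wait_for_mirroring(cluster):
    ...  # block until the rbd mirror daemon on `cluster` is up

for cluster, peer in [("dr1", "dr2"), ("dr2", "dr1")]:
    configure_mirroring(cluster, peer)

for cluster in ("dr1", "dr2"):
    wait_for_mirroring(cluster)
```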

Example usage:

    $ drenv start regional-dr.yaml
    2022-09-29 19:54:21,317 INFO    [env] Using regional-dr.yaml
    2022-09-29 19:54:21,322 INFO    [dr1] Starting cluster
    2022-09-29 19:54:21,323 INFO    [dr2] Starting cluster
    2022-09-29 19:54:21,324 INFO    [hub] Starting cluster
    2022-09-29 19:55:03,834 INFO    [dr2] Cluster started in 42.51 seconds
    2022-09-29 19:55:03,834 INFO    [dr2] Starting olm/start
    2022-09-29 19:55:21,664 INFO    [hub] Cluster started in 60.34 seconds
    2022-09-29 19:55:21,664 INFO    [hub] Starting olm/start
    2022-09-29 19:55:35,304 INFO    [dr2] olm/start completed in 31.47 seconds
    2022-09-29 19:55:35,304 INFO    [dr2] Starting rook/start
    2022-09-29 19:55:46,641 INFO    [dr1] Cluster started in 85.32 seconds
    2022-09-29 19:55:46,641 INFO    [dr1] Starting olm/start
    2022-09-29 19:55:48,224 INFO    [hub] olm/start completed in 26.56 seconds
    2022-09-29 19:55:48,224 INFO    [hub] Starting cluster-manager/start
    2022-09-29 19:56:15,552 INFO    [dr1] olm/start completed in 28.91 seconds
    2022-09-29 19:56:15,552 INFO    [dr1] Starting rook/start
    2022-09-29 19:57:03,221 INFO    [hub] cluster-manager/start completed in 75.00 seconds
    2022-09-29 19:58:26,053 INFO    [dr2] rook/start completed in 170.75 seconds
    2022-09-29 19:58:26,053 INFO    [dr2] Starting minio/start
    2022-09-29 19:58:47,126 INFO    [dr2] minio/start completed in 21.07 seconds
    2022-09-29 19:59:06,685 INFO    [dr1] rook/start completed in 171.13 seconds
    2022-09-29 19:59:06,685 INFO    [dr1] Starting minio/start
    2022-09-29 19:59:20,789 INFO    [dr1] minio/start completed in 14.10 seconds
    2022-09-29 19:59:20,789 INFO    [env] Starting klusterlet/start
    2022-09-29 20:00:12,250 INFO    [env] klusterlet/start completed in 51.46 seconds
    2022-09-29 20:00:12,250 INFO    [env] Starting klusterlet/test
    2022-09-29 20:00:35,980 INFO    [env] klusterlet/test completed in 23.73 seconds
    2022-09-29 20:00:35,980 INFO    [env] Starting rbd-mirror/start
    2022-09-29 20:01:26,111 INFO    [env] rbd-mirror/start completed in 50.13 seconds
    2022-09-29 20:01:26,111 INFO    [env] Starting rbd-mirror/test
    2022-09-29 20:01:41,611 INFO    [env] rbd-mirror/test completed in 15.50 seconds
    2022-09-29 20:01:41,611 INFO    [env] Started in 440.29 seconds

Signed-off-by: Nir Soffer <[email protected]>
After the vrc is completed, log the rbd mirror image status, expanding
the metrics in the description field.

Example log (running rbd-mirror/test dr1 dr2):

    * rbd mirror image status in cluster dr1
      {
        "name": "csi-vol-09d91ca0-709d-422c-ade8-8da87aea1d44",
        "global_id": "03001adf-5001-44f5-ac28-fead2ad9baec",
        "state": "up+stopped",
        "description": "local image is primary",
        "daemon_service": {
          "service_id": "4378",
          "instance_id": "4380",
          "daemon_id": "a",
          "hostname": "dr1"
        },
        "last_update": "2022-10-25 18:16:03",
        "peer_sites": [
          {
            "site_name": "c25c4619-8f17-44ad-a3b4-bde531b97822",
            "mirror_uuids": "8a813851-7731-43a7-b31e-f3b44fd43788",
            "state": "up+replaying",
            "description": {
              "state": "replaying",
              "metrics": {
                "bytes_per_second": 0.0,
                "bytes_per_snapshot": 0.0,
                "remote_snapshot_timestamp": 1666721761,
                "replay_state": "idle"
              }
            },
            "last_update": "2022-10-25 18:16:03"
          }
        ],
        "snapshots": [
          {
            "id": 50,
            "name": ".mirror.primary.03001adf-5001-44f5-ac28-fead2ad9baec.3f6e8450-1a7b-4f75-9ea3-48ed81fc2de8",
            "demoted": false,
            "mirror_peer_uuids": [
              "d53aa0da-b0b9-418c-81dc-95db0efcc4e9"
            ]
          }
        ]
      }

It takes a few seconds until rbd reports the local_snapshot_timestamp,
not sure why. It always has the same timestamp as the remote snapshot,
so it seems pointless to wait a few seconds until it is reported.

Signed-off-by: Nir Soffer <[email protected]>
If the env executable does not support the -S option, the scripts fail
with:

    /usr/bin/env: invalid option -- 'S'

Avoid this issue by setting the PYTHONUNBUFFERED=1 environment variable
once in the drenv tool instead of in every script.

Signed-off-by: Nir Soffer <[email protected]>
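
A sketch of what setting this once in the tool could look like (the actual change in this commit may differ):

```
# Hypothetical sketch: run scripts with PYTHONUNBUFFERED=1 set once,
# so scripts do not need an `env -S python3 -u` shebang.
import os
import subprocess

def run_script(path, *args):
    env = dict(os.environ, PYTHONUNBUFFERED="1")
    subprocess.run([path, *args], env=env, check=True)
```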
import drenv

# Update this when upgrading rook.
ROOK_BASE_URL = "https://raw.githubusercontent.com/rook/rook/release-1.10/deploy/examples"
Member

(future) Is it possible to configure per-module args from the higher level input YAML file (like test/regional-dr.yaml)? The intention being, we could run different versions of Rook or other dependent components across different tests.

With the current scheme each module's args would need to be edited.
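
For example, the top-level file could pass the args down to each script, roughly like this (purely illustrative; no such keys exist in this PR, and the flag name is made up):

```
# Hypothetical sketch of per-script args in test/regional-dr.yaml.
profiles:
  - name: dr1
    scripts:
      - file: rook/start
        args: [dr1, --rook-release=release-1.10]
```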

Member Author

Good idea, I think we need to make this much more declarative as we go.

@nirs nirs marked this pull request as draft October 31, 2022 17:32
@nirs
Member Author

nirs commented Oct 31, 2022

Broken now since cluster-manager (and probably also klusterlet) now installs v0.9.1, but the code
hardcodes v0.8.0. Working on a fix.

nirs added 5 commits October 31, 2022 19:52
In some cases we wait until a value exists (e.g. .status.currentCSV)
and then want to use the value. Returning the non-empty value
avoids another kubectl get command.

Signed-off-by: Nir Soffer <[email protected]>
Since cluster-manager was updated to 0.9.1, the cluster-manager script
times out waiting for `cluster-manager.v0.8.0`, while the actual deployed
version is `cluster-manager.v0.9.1`.

Now we get the name from the subscription .status.currentCSV.

Signed-off-by: Nir Soffer <[email protected]>
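
For reference, this is the kind of query used (a sketch; the subscription name and namespace here are assumptions):

```
$ minikube kubectl -p hub -- get subscription -n operators cluster-manager \
    -o jsonpath='{.status.currentCSV}'
# prints something like cluster-manager.v0.9.1 (per this thread)
```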
klusterlet had the same issue of hard coding the version; apply the same
fix used for cluster-manager.

Signed-off-by: Nir Soffer <[email protected]>
This is not the right way; we can get the version and the entire image
spec from the csv:

    $ minikube kubectl -p hub -- get csv -n operators cluster-manager.v0.9.1 -o jsonpath='{.metadata.annotations.alm-examples}' | jq
    [
      {
        "apiVersion": "operator.open-cluster-management.io/v1",
        "kind": "ClusterManager",
        "metadata": {
          "name": "cluster-manager"
        },
        "spec": {
          "deployOption": {
            "mode": "Default"
          },
          "placementImagePullSpec": "quay.io/open-cluster-management/placement:v0.9.0",
          "registrationConfiguration": {
            "featureGates": [
              {
                "feature": "DefaultClusterSet",
                "mode": "Enable"
              }
            ]
          },
          "registrationImagePullSpec": "quay.io/open-cluster-management/registration:v0.9.0",
          "workImagePullSpec": "quay.io/open-cluster-management/work:v0.9.0"
    ...

Signed-off-by: Nir Soffer <[email protected]>
Like cluster-manager, we can fetch the right value from the csv, but
let's try to make this work first.

Signed-off-by: Nir Soffer <[email protected]>
@nirs
Member Author

nirs commented Oct 31, 2022

With the latest commit we now work with the latest cluster-manager and klusterlet, but we need to remove the hard coded v0.9.0 tags.

```
-registrationImagePullSpec: 'quay.io/open-cluster-management/registration:v0.8.0'
-workImagePullSpec: 'quay.io/open-cluster-management/work:v0.8.0'
+registrationImagePullSpec: 'quay.io/open-cluster-management/registration:v0.9.0'
+workImagePullSpec: 'quay.io/open-cluster-management/work:v0.9.0'
```
Member Author

@nirs nirs Oct 31, 2022

This yaml looks exactly like the alm example - can we drop it and use the example from
the csv?

$ minikube kubectl -p hub -- get csv -n operators cluster-manager.v0.9.1 -o jsonpath='{.metadata.annotations.alm-examples}' | jq
[
  {
    "apiVersion": "operator.open-cluster-management.io/v1",
    "kind": "ClusterManager",
    "metadata": {
      "name": "cluster-manager"
    },
    "spec": {
      "deployOption": {
        "mode": "Default"
      },
      "placementImagePullSpec": "quay.io/open-cluster-management/placement:v0.9.0",
      "registrationConfiguration": {
        "featureGates": [
          {
            "feature": "DefaultClusterSet",
            "mode": "Enable"
          }
        ]
      },
      "registrationImagePullSpec": "quay.io/open-cluster-management/registration:v0.9.0",
      "workImagePullSpec": "quay.io/open-cluster-management/work:v0.9.0"
    }
  },
  {
    "apiVersion": "operator.open-cluster-management.io/v1",
    "kind": "ClusterManager",
    "metadata": {
      "name": "cluster-manager"
    },
    "spec": {
      "deployOption": {
        "hosted": {
          "registrationWebhookConfiguration": {
            "address": "management-control-plane",
            "port": 30443
          },
          "workWebhookConfiguration": {
            "address": "management-control-plane",
            "port": 31443
          }
        },
        "mode": "Hosted"
      },
      "placementImagePullSpec": "quay.io/open-cluster-management/placement:v0.9.0",
      "registrationImagePullSpec": "quay.io/open-cluster-management/registration:v0.9.0",
      "workImagePullSpec": "quay.io/open-cluster-management/work:v0.9.0"
    }
  }
]

```
-registrationImagePullSpec: 'quay.io/open-cluster-management/registration:v0.8.0'
-workImagePullSpec: 'quay.io/open-cluster-management/work:v0.8.0'
+registrationImagePullSpec: 'quay.io/open-cluster-management/registration:v0.9.0'
+workImagePullSpec: 'quay.io/open-cluster-management/work:v0.9.0'
```
Member Author

@nirs nirs Oct 31, 2022

I think this can be dropped and we can use the alm-example from the csv (see the previous commit).

$ minikube kubectl -p dr1 -- get csv -n operators klusterlet.v0.9.1 -o jsonpath='{.metadata.annotations.alm-examples}' | jq
[
  {
    "apiVersion": "operator.open-cluster-management.io/v1",
    "kind": "Klusterlet",
    "metadata": {
      "name": "klusterlet"
    },
    "spec": {
      "clusterName": "cluster1",
      "deployOption": {
        "mode": "Default"
      },
      "externalServerURLs": [
        {
          "url": "https://localhost"
        }
      ],
      "namespace": "open-cluster-management-agent",
      "registrationConfiguration": {
        "featureGates": [
          {
            "feature": "AddonManagement",
            "mode": "Enable"
          }
        ]
      },
      "registrationImagePullSpec": "quay.io/open-cluster-management/registration:v0.9.0",
      "workImagePullSpec": "quay.io/open-cluster-management/work:v0.9.0"
    }
  }
]

Member

In both cases, yes it can be used instead of adding another yaml to our repo.
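
If we go that route, it could be done along these lines (a sketch, not part of this PR):

```
$ minikube kubectl -p hub -- get csv -n operators cluster-manager.v0.9.1 \
    -o jsonpath='{.metadata.annotations.alm-examples}' \
    | jq '.[0]' \
    | minikube kubectl -p hub -- apply -f -
```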

Member

@BenamarMk BenamarMk left a comment

LGTM

@nirs
Member Author

nirs commented Nov 2, 2022

@ShyamsundarR do you want to merge the current ugly fixes as is? I can post a better version,
but I can also post that as the next PR.

@nirs nirs marked this pull request as ready for review November 2, 2022 16:31
@BenamarMk BenamarMk merged commit 500a711 into RamenDR:main Nov 2, 2022
@nirs nirs deleted the test branch November 13, 2022 01:52