
fix(image): merges manifests builder stages to one #773

Closed

Conversation

bartoszmajsak
Contributor

Description

We can combine two build stages into one, as there is no need to always build both images only to then decide which one to copy manifests from into the target image. Instead, the manifests stage will either copy local manifests or fetch them using the script, based on the FETCH_MANIFESTS argument (previously known as USE_LOCAL).

In addition, it allows the IMAGE_BUILD_FLAGS variable to be overridden when executing make targets, instead of having to change its value in the Makefile directly, as was previously needed.
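
For illustration, a rough sketch of what the merged stage could look like (paths, defaults, and the script's behavior here are assumptions, not the literal Dockerfile from this PR):

```dockerfile
# Illustrative sketch only; not the exact Dockerfile from this PR.
ARG GOLANG_VERSION=1.19

FROM registry.access.redhat.com/ubi8/go-toolset:$GOLANG_VERSION AS manifests
# ARGs must be (re)declared inside the stage that uses them
ARG FETCH_MANIFESTS=true
ARG OVERWRITE_MANIFESTS=""
USER root
WORKDIR /opt
COPY get_all_manifests.sh get_all_manifests.sh
# Local manifests are always copied in; the directory is part of the repo
COPY odh-manifests/ /opt/odh-manifests/
# When FETCH_MANIFESTS=true, replace them with freshly fetched ones
# (assumes get_all_manifests.sh populates /opt/odh-manifests)
RUN if [ "${FETCH_MANIFESTS}" = "true" ]; then \
      rm -rf /opt/odh-manifests && ./get_all_manifests.sh ${OVERWRITE_MANIFESTS}; \
    fi

# Later stages would then copy from this single stage, e.g.:
# COPY --from=manifests /opt/odh-manifests /opt/manifests
```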

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

@openshift-ci openshift-ci bot requested review from etirelli and LaVLaS December 1, 2023 09:39
@bartoszmajsak bartoszmajsak requested review from zdtsw and ykaliuta and removed request for etirelli and LaVLaS December 1, 2023 09:39
We can combine two build stages into one, as there is no need to
always build both images only to then decide which one to copy
manifests from into the target image. Instead, the `manifests` stage will
either copy local manifests or fetch them using the script, based on
the `FETCH_MANIFESTS` argument (previously known as `USE_LOCAL`).

In addition, it allows the `IMAGE_BUILD_FLAGS` variable to be overridden
when executing make targets, instead of changing its value in the
`Makefile` directly, as was previously needed.
Makefile Outdated
```diff
 # set to "true" to use local instead
 # see target "image-build"
-IMAGE_BUILD_FLAGS = --build-arg USE_LOCAL=false
+IMAGE_BUILD_FLAGS ?= --build-arg FETCH_MANIFESTS=true
```
Contributor

It does not make sense for usage like in the README.md. Command-line variables take priority in make anyway, even over the environment, regardless of the -e flag.

But maybe having a separate variable to avoid passing --build-arg could be a good idea, what do you think?
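
For illustration, here is how make treats the different ways of setting the variable (the image-build target and the flag values are just examples):

```sh
# A command-line assignment always wins, no -e needed:
make image-build IMAGE_BUILD_FLAGS="--build-arg FETCH_MANIFESTS=false"

# An environment variable overrides a plain `VAR = value` assignment only
# when make runs with -e, while `VAR ?= value` picks it up even without -e:
IMAGE_BUILD_FLAGS="--build-arg FETCH_MANIFESTS=false" make image-build     # honored because of ?=
IMAGE_BUILD_FLAGS="--build-arg FETCH_MANIFESTS=false" make -e image-build  # would also override a plain =
```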

Contributor Author

It does not make sense for usage like in the README.md. Command-line variables take priority in make anyway, even over the environment, regardless of the -e flag.

That's right, thanks for catching it. The source of my confusion was seeing both build stages (git clone and COPY) executed in the original form of the Dockerfile, when I was expecting to see only the one selected by the flag. That's now cleaned up in 0d3a574

But maybe having a separate variable to avoid passing --build-arg could be a good idea, what do you think?

I am on the fence here. I have

alias image='make image -e IMAGE_BUILD_FLAGS="--build-arg FETCH_MANIFESTS=false" -e IMG=quay.io/maistra-dev/opendatahub-operator:$\(VERSION\) -e IMAGE_BUILDER=docker -e VERSION=dev-$(git branch --show-current)'

so I'm good.

Would it make it easier for you to have such a flag?

Contributor

But maybe having a separate variable to avoid passing --build-arg could be a good idea, what do you think?

I am on the fence here. I have

alias image='make image -e IMAGE_BUILD_FLAGS="--build-arg FETCH_MANIFESTS=false" -e IMG=quay.io/maistra-dev/opendatahub-operator:$\(VERSION\) -e IMAGE_BUILDER=docker -e VERSION=dev-$(git branch --show-current)'

so I'm good.

Great! Just keep in mind that -e is redundant here; it means something different for make than it does for podman run.

Would it make it easier for you to have such a flag?

No, actually I haven't ever used that in practice :). So I would personally vote to avoid the change altogether, especially since it's unrelated to the purpose of the patch.

Contributor

@bartoszmajsak I was thinking about how to avoid setting the Go version in 4 places. One of the ideas was to get it from go.mod and, instead of patching the Dockerfile, pass it as a build argument (similar to toolbox), but then your alias would break.
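
A hypothetical sketch of that idea, deriving the Go version from go.mod and passing it as a build argument (the variable names and recipe are illustrative, not the project's actual Makefile):

```make
# Derive the Go version from go.mod instead of hard-coding it.
GOLANG_VERSION ?= $(shell awk '/^go / {print $$2}' go.mod)

image-build:
	$(IMAGE_BUILDER) build \
		--build-arg GOLANG_VERSION=$(GOLANG_VERSION) \
		$(IMAGE_BUILD_FLAGS) \
		-f Dockerfiles/Dockerfile -t $(IMG) .
```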

```
ARG GOLANG_VERSION=1.19
ARG USE_LOCAL=false
ARG FETCH_MANIFESTS=true
```
Contributor

Does it make sense to leave it here (except as documentation), since according to the Dockerfile documentation an ARG is only valid until the next FROM (USE_LOCAL was expanded in the FROM line itself before)?

Contributor Author

That's right, I cleared it up in 51d8c5d
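
For reference, the ARG scoping rule in question (a minimal, generic example):

```dockerfile
# An ARG declared before the first FROM can be used in FROM lines...
ARG GOLANG_VERSION=1.19

FROM registry.access.redhat.com/ubi8/go-toolset:$GOLANG_VERSION AS builder
# ...but it is not visible inside the stage until re-declared; without a
# value, the re-declaration inherits the default from above.
ARG GOLANG_VERSION
RUN echo "building with Go ${GOLANG_VERSION}"
```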

@ykaliuta
Contributor

ykaliuta commented Dec 1, 2023

It actually did not build both images (if I understand my build logs correctly) due to dependency tracking. But the change makes the Dockerfile much cleaner. A question I have: does it make sense to avoid that manifests stage altogether and do it in the builder? It used the go-toolset image to make manifests before.

@bartoszmajsak
Contributor Author

It actually did not build both images (if I understand my build logs correctly) due to dependency tracking.

@ykaliuta Ok I'm triple puzzled now, have a look at the build log below

docker build
```
❯ make image -e IMG=quay.io/maistra-dev/opendatahub-operator:$\(VERSION\) -e IMAGE_BUILDER=docker -e VERSION=$(git branch --show-current)
docker build --no-cache -f Dockerfiles/Dockerfile --build-arg USE_LOCAL=true -t quay.io/maistra-dev/opendatahub-operator:incubation .
Sending build context to Docker daemon  48.51MB                                                              
Step 1/33 : ARG GOLANG_VERSION=1.19                                                                          
Step 2/33 : ARG USE_LOCAL=false                                                                              
Step 3/33 : ARG OVERWRITE_MANIFESTS=""                                                                       
Step 4/33 : FROM registry.access.redhat.com/ubi8/go-toolset:$GOLANG_VERSION as builder_local_false           
 ---> 8d5f48f7fee4                                                                                           
Step 5/33 : ARG OVERWRITE_MANIFESTS                                                                          
 ---> Running in fd2771df5f75                                                                                
Removing intermediate container fd2771df5f75                                                                 
 ---> b149d829c88a                                                                                           
Step 6/33 : USER root                                                                                        
 ---> Running in 05dd34e299fa                                                                                
Removing intermediate container 05dd34e299fa
 ---> c065dbd4d8e2
Step 7/33 : WORKDIR /opt
 ---> Running in fbdfc66830ab
Removing intermediate container fbdfc66830ab
 ---> 5b00defbec40
Step 8/33 : COPY get_all_manifests.sh get_all_manifests.sh
 ---> 356097080b83
Step 9/33 : RUN ./get_all_manifests.sh ${OVERWRITE_MANIFESTS}
 ---> Running in 7f856e101399
Cloning repo kserve: opendatahub-io:kserve:release-v0.11.0:config:kserve
Cloning into './.kserve'... 
Cloning repo odh-dashboard: opendatahub-io:odh-dashboard:incubation:manifests:dashboard
Cloning into './.odh-dashboard'...
Cloning repo odh-notebook-controller: opendatahub-io:kubeflow:v1.7-branch:components/odh-notebook-controller/config:odh-notebook-controller/odh-notebook-controller 
Cloning into './.kubeflow'...
Cloning repo notebooks: opendatahub-io:notebooks:main:manifests:notebooks
Cloning into './.notebooks'...
Cloning repo odh-model-controller: opendatahub-io:odh-model-controller:release-0.11.0:config:odh-model-controller
Cloning into './.odh-model-controller'...
Cloning repo ray: opendatahub-io:kuberay:master:ray-operator/config:ray
Cloning into './.kuberay'...
Cloning repo kf-notebook-controller: opendatahub-io:kubeflow:v1.7-branch:components/notebook-controller/config:odh-notebook-controller/kf-notebook-controller
Cloning into './.kubeflow'...
Cloning repo data-science-pipelines-operator: opendatahub-io:data-science-pipelines-operator:main:config:data-science-pipelines-operator 
Cloning into './.data-science-pipelines-operator'...
Cloning repo trustyai: trustyai-explainability:trustyai-service-operator:release/1.10.2:config:trustyai-service-operator
Cloning into './.trustyai-service-operator'...
Cloning repo model-mesh: opendatahub-io:modelmesh-serving:release-0.11.0:config:model-mesh
Cloning into './.modelmesh-serving'...
Cloning repo codeflare: opendatahub-io:codeflare-operator:main:config:codeflare
Cloning into './.codeflare-operator'...
Removing intermediate container 7f856e101399
 ---> a1671278199f
Step 10/33 : FROM registry.access.redhat.com/ubi8/go-toolset:$GOLANG_VERSION as builder_local_true
 ---> 8d5f48f7fee4
Step 11/33 : USER root
 ---> Running in 4f8948077214
Removing intermediate container 4f8948077214
 ---> d20e290c3c70
Step 12/33 : WORKDIR /opt
 ---> Running in d1a749b1d44d
Removing intermediate container d1a749b1d44d
 ---> 9b304fee78da
Step 13/33 : COPY odh-manifests/ /opt/odh-manifests/
 ---> 9d406db3f325
Step 14/33 : FROM builder_local_${USE_LOCAL} as builder
```
docker version
```
Client:
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.16.15
 Git commit:        aa7e414
 Built:             Fri Jul 8 23:32:01 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.15
  Git commit:       f756502
  Built:            Fri Jul  8 23:32:01 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.14
  GitCommit:        9ba4b250366a5ddde94bb7c9d1def331423aa323
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:
```

@ykaliuta
Contributor

ykaliuta commented Dec 1, 2023

It actually did not build both images (if I understand my build logs correctly) due to dependency tracking.

@ykaliuta Ok I'm triple puzzled now, have a look at the build log below

Hah, interesting. Looks like podman is more clever. But I remember reading about stage dependencies in the docker docs as well :) Makes more sense then. But what about squashing it into one stage then? AFAIU it was split into two stages to make that condition work on the image-builder level. With the patch, when it's in the script, there is no need for an additional stage. Or am I missing something?

podman log
```
podman build --no-cache -f Dockerfiles/Dockerfile --build-arg USE_LOCAL=false -t quay.io/ykaliuta/opendatahub-operator:latest .
[1/4] STEP 1/6: FROM registry.access.redhat.com/ubi8/go-toolset:1.19 AS builder_local_false
[1/4] STEP 2/6: ARG OVERWRITE_MANIFESTS
--> 0658cce527fa
[1/4] STEP 3/6: USER root
--> 6d2a0b9363d0
[1/4] STEP 4/6: WORKDIR /opt
--> 3bd13acfa0fb
[1/4] STEP 5/6: COPY get_all_manifests.sh get_all_manifests.sh
--> fc074ef2b318
[1/4] STEP 6/6: RUN ./get_all_manifests.sh ${OVERWRITE_MANIFESTS}
Cloning repo kserve: opendatahub-io:kserve:release-v0.11.0:config:kserve
Cloning into './.kserve'...
Cloning repo odh-dashboard: opendatahub-io:odh-dashboard:incubation:manifests:dashboard
Cloning into './.odh-dashboard'...
Cloning repo odh-notebook-controller: opendatahub-io:kubeflow:v1.7-branch:components/odh-notebook-controller/config:odh-notebook-controller/odh-notebook-controller
Cloning into './.kubeflow'...
Cloning repo notebooks: opendatahub-io:notebooks:main:manifests:notebooks
Cloning into './.notebooks'...
Cloning repo odh-model-controller: opendatahub-io:odh-model-controller:release-0.11.0:config:odh-model-controller
Cloning into './.odh-model-controller'...
Cloning repo ray: opendatahub-io:kuberay:master:ray-operator/config:ray
Cloning into './.kuberay'...
Cloning repo kf-notebook-controller: opendatahub-io:kubeflow:v1.7-branch:components/notebook-controller/config:odh-notebook-controller/kf-notebook-controller
Cloning into './.kubeflow'...
Cloning repo data-science-pipelines-operator: opendatahub-io:data-science-pipelines-operator:main:config:data-science-pipelines-operator
Cloning into './.data-science-pipelines-operator'...
Cloning repo trustyai: trustyai-explainability:trustyai-service-operator:release/1.10.2:config:trustyai-service-operator
Cloning into './.trustyai-service-operator'...
Cloning repo model-mesh: opendatahub-io:modelmesh-serving:release-0.11.0:config:model-mesh
Cloning into './.modelmesh-serving'...
Cloning repo codeflare: opendatahub-io:codeflare-operator:main:config:codeflare
Cloning into './.codeflare-operator'...
--> a0196c56ddc5
[3/4] STEP 1/13: FROM a0196c56ddc53a2f62783f1a40a0ecc47ec0c05055fb7f7dbe1afa161d9ede1d AS builder
[3/4] STEP 2/13: USER root
--> 92530b5fda6a
[3/4] STEP 3/13: WORKDIR /workspace
--> 262e23812a42
[3/4] STEP 4/13: COPY go.mod go.mod
--> e39a33962a8b
[3/4] STEP 5/13: COPY go.sum go.sum
--> d145cda8323c
[3/4] STEP 6/13: RUN go mod download
--> 4a2bee6f7ac5
[3/4] STEP 7/13: COPY apis/ apis/
--> 432be0644019
[3/4] STEP 8/13: COPY components/ components/
--> 8d409afb27fd
[3/4] STEP 9/13: COPY controllers/ controllers/
--> b046471d1ddf
[3/4] STEP 10/13: COPY main.go main.go
--> 35582a002821
[3/4] STEP 11/13: COPY pkg/ pkg/
--> 2c2841d130be
[3/4] STEP 12/13: COPY infrastructure/ infrastructure/
--> 405d2af8cba9
[3/4] STEP 13/13: RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go
--> b4b01e4fb965
[4/4] STEP 1/7: FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
[4/4] STEP 2/7: WORKDIR /
--> 5a1ddd47a55a
[4/4] STEP 3/7: COPY --from=builder /workspace/manager .
--> 4873e93cdc67
[4/4] STEP 4/7: COPY --chown=1001:0 --from=builder /opt/odh-manifests /opt/manifests
--> 4fc738ad01f0
[4/4] STEP 5/7: RUN chown -R 1001:0 /opt/manifests && chmod -R a+r /opt/manifests
--> 984a23fbba03
[4/4] STEP 6/7: USER 1001
--> ff88d059ae68
[4/4] STEP 7/7: ENTRYPOINT ["/manager"]
[4/4] COMMIT quay.io/ykaliuta/opendatahub-operator:latest
--> e154b0975db4
Successfully tagged quay.io/ykaliuta/opendatahub-operator:latest
e154b0975db48709bbcb7aa5b5f3f8da65fddbf404beb7bde816a6c6b08e2328
```

@bartoszmajsak
Contributor Author

Hah, interesting. Looks like podman is more clever. But I remember reading about stage dependencies in the docker docs as well :) Makes more sense then.

I assumed podman was clever, but I'm on a very outdated Fedora on my main workstation and podman ain't happy about it :)

But what about squashing it into one stage then? AFAIU it was split into two stages to make that condition work on the image-builder level. With the patch, when it's in the script, there is no need for an additional stage. Or am I missing something?

Let's not start this squash discussion again! Just kidding :D Personally, I like such separation, but if we significantly trim execution time and image size, then I'm happy to do it. WDYT @zdtsw?
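
For context, the difference observed here comes, as far as I can tell, from the builder backend: the legacy docker builder runs every stage in the file, while BuildKit and podman/buildah only build the stages the target image actually depends on. A minimal example (the image tag is illustrative):

```sh
# Legacy docker builder: builds every stage in the Dockerfile.
# BuildKit (and podman/buildah): builds only the stages the target needs.
DOCKER_BUILDKIT=1 docker build --no-cache -f Dockerfiles/Dockerfile \
  --build-arg USE_LOCAL=true \
  -t quay.io/example/opendatahub-operator:dev .
```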

@bartoszmajsak
Contributor Author

/test opendatahub-operator-e2e

@bartoszmajsak
Contributor Author

@ykaliuta @zdtsw Is there anything else to be improved in this PR?

Member

@zdtsw zdtsw left a comment

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Dec 4, 2023

openshift-ci bot commented Dec 4, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: zdtsw
Once this PR has been reviewed and has the lgtm label, please assign etirelli for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ykaliuta
Contributor

ykaliuta commented Dec 4, 2023

/lgtm

@zdtsw
Member

zdtsw commented Dec 13, 2023

/test opendatahub-operator-e2e

@zdtsw zdtsw enabled auto-merge (squash) December 13, 2023 06:20
@openshift-ci openshift-ci bot removed the lgtm label Jan 4, 2024

openshift-ci bot commented Jan 4, 2024

New changes are detected. LGTM label has been removed.

@bartoszmajsak
Contributor Author

/retest

@bartoszmajsak
Contributor Author

/test opendatahub-operator-e2e


openshift-ci bot commented Apr 24, 2024

@bartoszmajsak: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/opendatahub-operator-pr-image-mirror | da6dd48 | link | true | /test opendatahub-operator-pr-image-mirror |
| ci/prow/ci-index | da6dd48 | link | true | /test ci-index |
| ci/prow/opendatahub-operator-e2e | da6dd48 | link | true | /test opendatahub-operator-e2e |
| ci/prow/images | da6dd48 | link | true | /test images |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@ykaliuta
Contributor

The PR is still in my GitHub list of assignments. It was approved and looks good to me as well, but it was not merged due to failing tests.
@bartoszmajsak can we rebase and finish it?

@bartoszmajsak
Contributor Author

@ykaliuta Unfortunately, I don't have the bandwidth to dive into a year-old PR at the moment. That said, I'd like to know whether this approach is still worth it. If it is, I can squeeze in some time in the coming days.

ykaliuta added a commit to ykaliuta/opendatahub-operator that referenced this pull request Nov 19, 2024
We can combine two build stages into one, as there is no need to
always build both images (not done by podman) only to then decide
which one to copy manifests from into the target image. Instead,
the manifests stage will either copy local manifests or fetch them
using the script, based on the USE_LOCAL argument.

Move the USE_LOCAL and OVERWRITE_MANIFESTS args under FROM, since args
are scoped to the FROM they are declared in.

It requires the opt/manifests directory to exist, but since it's part
of the git repo, that's fine.

Original patch from: Bartosz Majsak <[email protected]> [1]

[1] opendatahub-io#773

Signed-off-by: Yauheni Kaliuta <[email protected]>
@ykaliuta
Contributor

@ykaliuta Unfortunately, I don't have the bandwidth to dive into a year-old PR at the moment. That said, I'd like to know whether this approach is still worth it. If it is, I can squeeze in some time in the coming days.

Well, you are the customer for that; with podman nobody noticed :)

OK, I pulled the main change into #1381, avoiding the interface changes.

@bartoszmajsak
Contributor Author

Thanks @ykaliuta! Closing in favor of #1381

auto-merge was automatically disabled November 19, 2024 17:09

Pull request was closed

openshift-merge-bot bot pushed a commit that referenced this pull request Nov 20, 2024
zdtsw pushed a commit to zdtsw-forking/opendatahub-operator that referenced this pull request Nov 28, 2024
zdtsw pushed a commit to zdtsw-forking/opendatahub-operator that referenced this pull request Nov 29, 2024
zdtsw pushed a commit to zdtsw-forking/opendatahub-operator that referenced this pull request Dec 2, 2024
openshift-merge-bot bot pushed a commit that referenced this pull request Dec 2, 2024
zdtsw added a commit that referenced this pull request Dec 6, 2024