Not getting images from mirror-registry #3367

Open
eb4x opened this issue Aug 5, 2024 · 5 comments
Labels
kind/bug, lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)

Comments

@eb4x

eb4x commented Aug 5, 2024

What happened:
The cronjobs fetching fedora/centos images time out in an air-gapped cluster.

I0805 13:40:08.423084 1 registry-datasource.go:176] Copying proxy certs
2024/08/05 13:40:08 Ignore common certificate dir: open /proxycerts/: no such file or directory
I0805 13:40:08.423157 1 transport.go:228] Inspecting image from 'docker://quay.io/containerdisks/fedora:latest'
E0805 13:41:08.425189 1 transport.go:78] Could not create image reference: pinging container registry quay.io: Get "https://quay.io/v2/": dial tcp 34.206.201.197:443: i/o timeout
2024/08/05 13:41:08 Failed to get image digest: Could not create image reference: pinging container registry quay.io: Get "https://quay.io/v2/": dial tcp 34.206.201.197:443: i/o timeout

What you expected to happen:
Getting the image from a mirror via the /etc/containers/registries.conf.d/99-mirror-registries.conf configuration on each node.

[[registry]]
prefix="quay.io"
location="mirror-registry.openshift-utv.uio.no:8443/mirrors/quay.io"

Additional context:

I have a baremetal OKD 4.14 SCOS cluster in a semi-air-gapped environment. I push images to a minimal mirror-registry (quay) separate from the cluster, and all nodes have a configuration (see the snippet above) that redirects the most common registry URLs to a location in the mirror-registry.

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml): 1.10.7
  • Kubernetes version (use kubectl version): v1.27.1-3507+28ed2d70d51caa-dirty
  • DV specification: N/A
  • Cloud provider or hardware configuration: OKD/Baremetal
  • OS (e.g. from /etc/os-release): CentOS Stream CoreOS 414.9.202401231453-0
  • Kernel (e.g. uname -a): 5.14.0-410.el9.x86_64
@eb4x eb4x added the kind/bug label Aug 5, 2024
@aglitke
Member

aglitke commented Aug 12, 2024

@arnongilboa could you take a look at this? It seems we should be supporting this scenario. @eb4x Are you using HCO to deploy the kubevirt components, or are you doing a more custom setup? Make sure that you have configured your DataImportCron objects to point to the correct registry. From the error message it looks like you are still trying to fetch from quay.

@arnongilboa
Collaborator

arnongilboa commented Aug 12, 2024

@aglitke we should definitely support this scenario. @eb4x As mentioned, the DataImportCrons should point to URLs in the mirror-registry. See here. If you are using OpenShift (with HCO) you may follow this.
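For reference, a standalone DataImportCron pointing straight at the mirror could look roughly like this (a minimal sketch only: the name, namespace, schedule and storage size are placeholders, and the mirror path is copied from the registries.conf snippet above):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  name: fedora-image-import-cron          # placeholder name
  namespace: golden-images                # placeholder namespace
spec:
  schedule: "0 */12 * * *"                # placeholder schedule
  managedDataSource: fedora
  template:
    spec:
      source:
        registry:
          # point directly at the mirror instead of quay.io
          url: "docker://mirror-registry.openshift-utv.uio.no:8443/mirrors/quay.io/containerdisks/fedora:latest"
      storage:
        resources:
          requests:
            storage: 5Gi                  # placeholder size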

@eb4x
Author

eb4x commented Aug 16, 2024

> @eb4x Are you using HCO to deploy the kubevirt components, or are you doing a more custom setup?

Yep, I'm using the HCO to deploy.

> Make sure that you have configured your DataImportCron objects to point to the correct registry. From the error message it looks like you are still trying to fetch from quay.

I'll look into this further; I might have missed something. The idea behind our registry overrides is that we don't have to change where we're pulling from. The worker nodes should know to pull from the mirror-registry.

It works for resources specified in Kubernetes (deployments, daemonsets, pods, etc.), but maybe not from within a running container, which is kind of what's happening here?
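If pointing the crons at the mirror is the way to go, then with HCO I guess that means overriding the bundled dataImportCronTemplates in the HyperConverged CR, something roughly like the following (just a sketch on my side; the template name and schedule are guesses based on the defaults, and only the registry url is the part that actually matters):

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
    - metadata:
        name: fedora-image-cron            # placeholder; presumably has to match the bundled template to override it
      spec:
        schedule: "0 */12 * * *"           # placeholder schedule
        managedDataSource: fedora
        template:
          spec:
            source:
              registry:
                # pull from the mirror instead of quay.io
                url: "docker://mirror-registry.openshift-utv.uio.no:8443/mirrors/quay.io/containerdisks/fedora:latest"
            storage:
              resources:
                requests:
                  storage: 5Gi             # placeholder size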

@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale label Nov 14, 2024
@kubevirt-bot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot kubevirt-bot added the lifecycle/rotten label and removed the lifecycle/stale label Dec 14, 2024