
Consolidate with instructlab container images #43

Open

ericcurtin opened this issue Aug 13, 2024 · 7 comments

@ericcurtin
Collaborator

We should consolidate our efforts with instructlab and share container base images:

https://github.com/instructlab/instructlab/tree/main/containers

@ericcurtin
Collaborator Author

ericcurtin commented Aug 28, 2024

2 issues with this at present:

  1. Pulls from non-public locations: nvcr.io/nvidia/cuda:12.4.1-devel-ubi9
  2. Can't find where these images are published in general.

@tarilabs
Member

  1. Pulls from non-public locations: nvcr.io/nvidia/cuda:12.4.1-devel-ubi9

This is not necessarily an issue if token-based auth can be given to the OCI registry? 🤔 wdyt
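For example, something along these lines might work (a minimal sketch; `NGC_API_KEY` is an assumed variable name, and nvcr.io accepts the literal username `$oauthtoken` with an NGC API key as the password):

```python
import os
import subprocess

# Minimal sketch of token-based auth to nvcr.io before pulling the CUDA base
# image. nvcr.io accepts the literal username "$oauthtoken" with an NGC API
# key as the password; NGC_API_KEY is an assumed environment variable name.
api_key = os.environ["NGC_API_KEY"]
subprocess.run(
    ["podman", "login", "nvcr.io",
     "--username", "$oauthtoken", "--password-stdin"],
    input=api_key.encode(),
    check=True,
)
subprocess.run(
    ["podman", "pull", "nvcr.io/nvidia/cuda:12.4.1-devel-ubi9"],
    check=True,
)
```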

@ericcurtin
Collaborator Author

ericcurtin commented Aug 28, 2024

  1. Pulls from non-public locations: nvcr.io/nvidia/cuda:12.4.1-devel-ubi9

This is not necessarily an issue if token-based auth can be given to the OCI registry? 🤔 wdyt

If we can make it work I'm happy 😄

@ericcurtin
Collaborator Author

ericcurtin commented Aug 28, 2024

We also have to think of ways of auto-detecting the primary GPU (that's kinda separate from this issue). I have an idea of how to do that for AMD GPUs, but for Nvidia I'm not sure... Then we automatically pull the relevant container image, set up podman with the correct holes, etc.

Presence of "blacklist=nouveau" in /proc/cmdline is one idea; another is the presence of the /proc/driver/nvidia/gpus directory; another is the presence of "nvidia-smi" on the PATH...
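A rough sketch of how those heuristics could be combined (just the signals named above; none is authoritative on its own):

```python
import os
import shutil

def nvidia_gpu_present() -> bool:
    """Heuristic Nvidia detection combining several cheap signals.
    None is authoritative on its own, so OR them together."""
    try:
        with open("/proc/cmdline") as f:
            # nouveau being blacklisted hints the proprietary driver is in use
            if "blacklist=nouveau" in f.read():
                return True
    except OSError:
        pass
    # The proprietary driver exposes one entry per GPU here
    if os.path.isdir("/proc/driver/nvidia/gpus"):
        return True
    # nvidia-smi on the PATH usually means the driver stack is installed
    return shutil.which("nvidia-smi") is not None
```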

AMD has a nice, easy-to-use, fast API (the check has to be quick too) for reading the VRAM size of each AMD GPU present in a system, so the GPU with the most VRAM can be selected.
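As a sketch (assuming the amdgpu driver's sysfs interface is the API in question; it exposes each card's total VRAM in bytes under /sys/class/drm/card*/device/mem_info_vram_total):

```python
import glob

def biggest_amd_gpu():
    """Pick the AMD GPU with the most VRAM via the amdgpu driver's
    sysfs files, which report each card's total VRAM in bytes."""
    best_card, best_vram = None, -1
    for path in glob.glob("/sys/class/drm/card*/device/mem_info_vram_total"):
        try:
            with open(path) as f:
                vram = int(f.read())
        except (OSError, ValueError):
            continue
        if vram > best_vram:
            best_card = path.split("/")[4]  # e.g. "card0"
            best_vram = vram
    return best_card
```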

It may be the case that nothing here is absolutely perfect, so we will also have to introduce a command-line flag to manually select a GPU (sometimes one may not want to use the primary GPU, etc.).

@ericcurtin
Collaborator Author

@tarilabs I'm also unsure if the instructlab team plans on maintaining/publishing those in the future, so maybe we should create our own...

@rhatdan
Member

rhatdan commented Sep 3, 2024

We need to create open versions of those images and store them in the quay.io/ramalama repository, or, if they want to maintain them, I would be fine with using a different repo. An issue might be content that is not allowed to be shipped as a container image and may only be pulled from an upstream vendor.

@ericcurtin
Collaborator Author

ericcurtin commented Sep 3, 2024

Here's another image I was pointed towards that will be useful:

https://github.com/rh-aiservices-bu/llm-on-openshift/blob/main/llm-servers/vllm/gpu/Containerfile

This will be a useful reference for our image with the vLLM runtime. It's UBI9-based, which is exactly what we want.
