
Add container package #3690

Open · wants to merge 1 commit into master from package-container
Conversation

treydock (Contributor)

No description provided.

@treydock treydock force-pushed the package-container branch from 19fd6d0 to 3d43938 Compare July 24, 2024 13:12
@treydock treydock force-pushed the package-container branch from 3d43938 to 562b854 Compare July 24, 2024 13:27
@treydock treydock marked this pull request as ready for review July 24, 2024 13:43
@treydock (Contributor Author)

@johrstrom Should there also be a process to release to Docker Hub when new tags are created? The current GitHub Action only tests that the container can be built.

```dockerfile
apache ALL=(ALL) NOPASSWD: /opt/ood/nginx_stage/sbin/nginx_stage' >/etc/sudoers.d/ood

# run the OOD executables to setup the env
RUN /opt/ood/ood-portal-generator/sbin/update_ood_portal --insecure
```
Contributor:

Should we still install from source and use --insecure for this? This is going to have a lot of caveats to actually be useful, but I feel like a package install and a systemd entrypoint is really the only way anyone will be able to use this in a meaningful way.

Contributor Author:

Installing from source is not a supported way to install OnDemand, so I want to avoid that. This container will be far from useful regardless of the install mechanism. The --insecure flag is there because there are no SSL certs, I believe.

systemd is not really used in traditional containers. That's essentially treating a container like a VM if you launch it with init.

Contributor:

Sorry for the delay - I started a review or something and my comments were never posted.

> Installing from source is not a supported way to install OnDemand, so I want to avoid that.

This Dockerfile installs from source.

> This container will be far from useful regardless of the install mechanism.

This is kind of why I've hesitated to implement this at all. Why not something like this that just installs OnDemand and call it a day? No entrypoint, it just has everything installed and they have to take it further.

```dockerfile
FROM rockylinux/rockylinux:8

RUN dnf install -y https://yum.osc.edu/ondemand/3.1/ondemand-release-web-3.1-1.el8.noarch.rpm && \
    dnf clean all && rm -rf /var/cache/dnf/*

RUN dnf -y update && \
    dnf install -y dnf-utils && \
    dnf config-manager --set-enabled powertools && \
    dnf -y module enable nodejs:18 ruby:3.1 && \
    dnf install -y epel-release && \
    dnf install -y ondemand ondemand-dex && \
    dnf install -y gcc gcc-c++ libyaml-devel nc && \
    dnf clean all && rm -rf /var/cache/dnf/*
```

> systemd is not really used in traditional containers. That's essentially treating a container like a VM if you launch it with init.

Yet another reason I don't really want to do this. If you launch apache from a script can you bounce it? Or maybe they'd have to bounce the whole container.

Contributor Author:

You bounce the whole container, but our entrypoint is an example, not what folks should use. I only use it to validate that the container works, and we also use the same entrypoint for dev containers. It's a nice way to validate that the OnDemand container works, but not something folks should be using. It's trivial to override ENTRYPOINT with a site-specific Dockerfile that pulls from our image.

@johrstrom (Contributor)

> @johrstrom Should there also be a process to release to Docker Hub when new tags are created? The current GitHub Action only tests that the container can be built.

I'm happy to just build right now. We won't make a tag for some time, and we likely need to coalesce on what this container is actually for and our messaging around its use and configuration.

@treydock (Contributor Author)

I feel this container is just a way to get the OnDemand install. To make it useful, folks need to supply their own ENTRYPOINT in a new container that does something like:

```dockerfile
FROM ohiosupercomputer/ondemand:3.1.8

# <Do site specific things>

ENTRYPOINT ["/some/site/script"]
```

I don't think the container can be useful if used directly, but it could be useful as the starting point for site-specific containers.

@johrstrom johrstrom self-requested a review July 25, 2024 17:57
@johrstrom (Contributor)

> @johrstrom Should there also be a process to release to Docker Hub when new tags are created? The current GitHub Action only tests that the container can be built.

Thinking about this more - if we do publish the image and use a package in it, we'd have to sort out what package we use. If we pull from yum.osc.edu, there'll be a delay for the package to get built. If we just build the package in-situ for the container, that could work, but IDK how that'd relate to yum.osc.edu or if that even matters given you'd just want to get a new container instead of updating an existing one.

@treydock (Contributor Author)

treydock commented Jul 25, 2024

> Thinking about this more - if we do publish the image and use a package in it, we'd have to sort out what package we use. If we pull from yum.osc.edu, there'll be a delay for the package to get built. If we just build the package in-situ for the container, that could work, but IDK how that'd relate to yum.osc.edu or if that even matters given you'd just want to get a new container instead of updating an existing one.

To publish, I think we'd have to integrate the publish step into GitLab rather than GitHub so that we do this:

  1. Build packages - this is what we do now.
  2. Push packages to repos (latest, not 3.1) - this is what we do now.
  3. Consume the local packages with a docker build that uses a context that can access the local RPMs.
  4. Push a container using the same version as the newly built RPMs, i.e., build 3.1.12 RPMs, then the container is tagged :3.1.12.
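Step 3 might look roughly like the following sketch. The `rpms/` directory name inside the build context is an assumption here, not something decided in this thread:

```dockerfile
FROM rockylinux/rockylinux:8

# Assumes the CI job placed the freshly built RPMs in rpms/
# within the docker build context (hypothetical layout).
COPY rpms/ /tmp/rpms/

# Install from the local RPMs instead of pulling from yum.osc.edu.
RUN dnf install -y /tmp/rpms/*.rpm && \
    dnf clean all && \
    rm -rf /tmp/rpms /var/cache/dnf/*
```

Note that even though the RUN step deletes /tmp/rpms, the earlier COPY instruction has already produced an image layer containing the RPMs, so they still count toward the final image size.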

@treydock
Copy link
Contributor Author

A big downside with consuming the RPMs is that the COPY step in the Dockerfile would create an image layer containing the RPMs that we wouldn't be able to later delete to keep the image small. I don't think we'd ever be able to achieve a really small image size that way.

The alternative is a hack I used for really old versions of OnDemand packaging: push the RPMs to yum.osc.edu, then download them with a build script in the Dockerfile and delete them in the same script when done. That saves the space, since the entire script run produces a single layer.
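The single-layer hack could be sketched like this. The exact URL path under yum.osc.edu and the RPM filename are illustrative assumptions:

```dockerfile
FROM rockylinux/rockylinux:8

# Download, install, and delete the RPMs inside ONE RUN instruction,
# so the downloaded files never persist in any image layer.
# The URL path below is hypothetical.
RUN mkdir /tmp/rpms && \
    cd /tmp/rpms && \
    curl -fsSL -O https://yum.osc.edu/ondemand/latest/web/el8/x86_64/ondemand-3.1.12-1.el8.x86_64.rpm && \
    dnf install -y ./*.rpm && \
    dnf clean all && \
    rm -rf /tmp/rpms /var/cache/dnf/*
```

Because Docker commits one layer per RUN instruction, anything created and removed within the same RUN never appears in the final image, unlike files brought in by a separate COPY.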
