Fix Dockerfile #404
Conversation
fcami commented on Jun 10, 2022:
- switch to CentOS Stream 8 for better package availability
- install s3cmd so test results can be uploaded to S3 easily
- switch to python3-virtualenv to force Python 3.8 in the venv; otherwise Python 3.6 is used. ods-ci uses Python 3.7+ features, leading pip to try to install backports-datetime-fromisoformat, which fails to build because the image has no gcc.
Able to create the container successfully on a local system: http://pastebin.test.redhat.com/1057656
@fcami Requesting you to send the PR with a Signed-off-by signature.
build/Dockerfile (Outdated)
```diff
@@ -9,7 +9,7 @@ ARG OC_CHANNEL=stable
 RUN dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm &&\
-    dnf install -y python38 jq git unzip chromium chromedriver redhat-lsb-core &&\
+    dnf install -y python38 jq git unzip chromium chromedriver redhat-lsb-core s3cmd python3-virtualenv &&\
```
virtualenv is used for dependency isolation: you want to prevent any dependencies or packages from leaking between applications. Podman/Docker achieves the same thing; it isolates your dependencies within your container and prevents leaks between containers and between applications. There are some cases where virtualenv can be useful in containers, mainly in multi-stage builds. Is it necessary to add virtualenv here?
Yes. Without that change, venv was used, but venv cannot create a virtual environment with a different version of Python. ods-ci uses Python 3.7+ features, but ubi8/Stream 8 ships 3.6 by default.
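For illustration, a minimal sketch of the difference, assuming Python 3.6 is the image default (the exact version numbers shown are illustrative):

```sh
# venv always builds on the interpreter that invokes it:
$ python3 -m venv venv && venv/bin/python --version
Python 3.6.8

# virtualenv can target any interpreter installed on the system:
$ virtualenv -p /usr/bin/python3.8 venv && venv/bin/python --version
Python 3.8.12
```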
I might be missing something here, but I still don't see the need for virtualenv (or venv, before the change): when you use a container you can install the Python version of your choice directly with the package manager, and it will still be isolated. Could you please elaborate on this a bit more?
build/Dockerfile (Outdated)
```diff
@@ -43,8 +43,7 @@ COPY utils/scripts/logger.py utils/scripts/logger.py
 COPY utils/scripts/util.py utils/scripts/util.py
 RUN chmod +x run.sh
 COPY requirements.txt setup.py .
-RUN python3 --version
-RUN python3 -m venv venv && source venv/bin/activate && pip3 install --upgrade pip && venv/bin/pip3 install -r requirements.txt
+RUN virtualenv -p /usr/bin/python3.8 venv && source venv/bin/activate && pip3 install --upgrade pip && venv/bin/pip3 install -r requirements.txt
```
If we don't use virtualenv, we can also delete this line.
We use it. I thought that was obvious.
Yes, I see that you use it; that's not what I meant with my comment, so let me clarify. I was suggesting not using it at all, to avoid unnecessary extra layers and complexity in the Docker image. Is there any chance to avoid it? That's why I asked in my other comment whether there's a reason to use it that I'm missing.
I don't see how.
At this point I'm just trying to make the current Dockerfile work.
My solution does the job, but I'm open to suggestions.
Yes, I'm not blocking the merge because of this, but I would like to look at it and see if the Dockerfile can work without the virtualenv. If it does, the changes can be added in a future PR.
I'd rather not remove the venv altogether. It doesn't hurt at all, and it separates the system Python packages from what's used by ods-ci. It's not exactly isolation, but it's much cleaner.
Hey @fcami, thanks for raising the PRs, we always appreciate that :D I will go through all the proposed changes as soon as possible. In the meantime, some comments:
I think it's better to stick with ubi images when possible, as requested of partners delivering containers.
Where will this package be used? We don't use it anywhere, so are you going to raise a PR where it's needed?
Hm, sounds strange to me. I think once python3.8 is installed, it is used as the default to create the venv. We have dependencies where Python 3.8 is required (e.g., …). However, I will do some investigation.
Except this requires an entitlement to get libcanberra-gtk3 (it comes from CentOS or RHEL), which is required by chromium (EPEL). Do we have any partner building on ods-ci?
We (PSAP) already use it once ods-ci has run to upload the logs. The package has to be in the image.
That's not what I saw. I'd be curious to see how you can get 3.8.12 by default.
Please see this log. It contains:
gcc is needed because backports-datetime-fromisoformat is selected by pip; in turn, backports-datetime-fromisoformat is selected by pip because datetime.fromisoformat is a Python 3.7+ feature, and compiling the backport is necessary since this is running on 3.6. Let me know if that helps.
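For illustration, a minimal sketch of the failure mode on a 3.6 interpreter (the date string is arbitrary):

```sh
$ python3.6 -c 'from datetime import datetime; datetime.fromisoformat("2022-06-10")'
AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'
# pip therefore selects backports-datetime-fromisoformat, a C extension
# that cannot build without gcc in the image
```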
I've added the relevant Signed-off-by trailers to all commits. Please let me know if you require anything else.
Are you hitting any error because of the ubi image?
No, we don't. I was referring to this policy.
Before adding this package I'll need to understand this point better. I'll ping you offline in the next few days.
The directory at that link is empty. Could you share the build logs of the ods-ci container, please? We have tried reproducing the Python version problem without success: when built on our local systems (tried by 3 of us), we got python3.8 installed as expected. More context on how/where the ods-ci image is built might help. I'll ping you offline in the next few days so we can look at it together.
Yes, libcanberra-gtk3.
My question becomes: do we want to /certify/ any ods-ci image?
OK.
OK, I guess it was auto-cleaned up after a while.
Hello @bdattoma, I'm working with @fcami in the Perf&Scale (PSAP) team. The context is that we're working on automated testing of RHODS/JupyterHub at large scale (hundreds of users). In this context, the requirement for an entitled UBI makes it complicated to automate the build of the image in any CI, be it GitHub Actions, OpenShift CI (where we're working), or Quay.io. Now regarding the requirement for the …
@kpouget thank you for the context.
Ok, we can add the package you need. Just out of curiosity, couldn't you create a second container in the same pod and put it in charge of sharing the results? In that container you could install any packages you need to share files, and change the logic independently from the main container. Ok for the image too.
Good point, I think it would make more sense this way. Do you have any example of a Pod using such a sidecar container? I wonder what's the best way to handle the synchronization (i.e., in the sidecar, wait for the termination of the main container before doing anything). As part of another work, I did it with a simple file, something like:
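(A minimal sketch of that flag-file approach; the shared mount path, flag-file name, and bucket are illustrative, not from the original comment:)

```sh
# main container: run the tests, then drop a flag file on the shared volume
./run.sh; touch /mnt/shared/test-finished

# sidecar: block until the flag file appears, then upload the results
while [ ! -f /mnt/shared/test-finished ]; do sleep 5; done
s3cmd put --recursive /mnt/shared/results/ s3://my-bucket/results/  # bucket is hypothetical
```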
but I wonder if there's a cleaner way to do it...?
I'll remove the s3cmd commit. Update: commit removed, branch re-pushed. |
Signed-off-by: François Cami <[email protected]>
I don't think there's any out-of-the-box functionality to wait until the other container has finished. Your solution should work, but I suggest adding a timeout for safety. With similar logic, you could check the ods-ci container's status in the pod, and once the status is "terminated" (and the output files are generated), publish the results. Unfortunately I don't have an example :(
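A sketch of that status-polling idea with a timeout, assuming the sidecar has the oc client and RBAC to read its own pod; the container name "ods-ci", the POD_NAME variable, and the one-hour limit are assumptions:

```sh
# poll the main container's state; give up after one hour for safety
timeout=3600
while [ "$timeout" -gt 0 ]; do
  reason=$(oc get pod "$POD_NAME" \
    -o jsonpath='{.status.containerStatuses[?(@.name=="ods-ci")].state.terminated.reason}')
  [ -n "$reason" ] && break   # main container terminated; publish the results
  sleep 10
  timeout=$((timeout - 10))
done
```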
After more testing:
I'll remove the virtualenv commit and open a separate PR with it, which you may choose to close/wontfix; in my mind I'd use virtualenv because ods-ci requires a different interpreter than the default, which otherwise leaves the build fragile.
Thanks for the explanations, François, makes sense to me.
LGTM