FR: Build Dockerfiles with incremental cache support #35
Thanks for your proposal. @thesayyn has been thinking about this as well. He will comment on this issue with his thoughts. Hopefully we can come up with a solution that satisfies your goals.
There are some problems that'll prevent us from doing this exactly the way you want. I have been thinking about this issue for a while: how to tackle the problem without sacrificing a whole lot. Given a Dockerfile:

```dockerfile
FROM node:12
RUN apt-get install -y curl
WORKDIR /app
COPY index.js .
RUN yarn install --production
CMD ["node", "index.js"]
EXPOSE 3000
```

a gazelle plugin for rules_oci would translate it to:

WORKSPACE:

```starlark
debian_archive(
    name = "debian_amd64_curl",
    urls = ["debian_archive_url"],
)

oci_pull(
    name = "node_12",
    repository = "index.docker.io/library/node",
    ref = "12",
)

translate_pnpm_lock(
    name = "npm",
    yarn_lock = "yarn.lock",
)
```

BUILD:

```starlark
js_binary(
    name = "app",
    entry_point = "index.js",
)

oci_image(
    name = "image",
    base = "@node_12//:image",
    tars = [
        ":app",
    ],
    workdir = "/app",
    cmd = ["index.js"],
    ports = [3000],
)
```
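The Dockerfile-to-BUILD mapping above is mechanical enough to sketch. The toy script below mirrors the rule names from the comment (`oci_image` attributes, a `@node_12//:image` base from `oci_pull`), but everything else is hypothetical illustration, not a real gazelle plugin:

```python
# Toy sketch: map a few Dockerfile directives onto oci_image() attributes.
# The attribute and label names follow the comment above; the translation
# logic itself is invented for illustration.
import json
import shlex

def translate(dockerfile: str) -> dict:
    attrs = {"tars": [], "ports": []}
    for line in dockerfile.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        directive, _, rest = line.partition(" ")
        if directive == "FROM":
            # node:12 -> @node_12//:image, mirroring oci_pull() in WORKSPACE
            attrs["base"] = "@%s//:image" % rest.replace(":", "_").replace(".", "_")
        elif directive == "WORKDIR":
            attrs["workdir"] = rest
        elif directive == "EXPOSE":
            attrs["ports"] = [int(p) for p in rest.split()]
        elif directive == "CMD":
            # CMD has exec form (JSON array) and shell form
            attrs["cmd"] = json.loads(rest) if rest.startswith("[") else shlex.split(rest)
    return attrs

print(translate("""FROM node:12
WORKDIR /app
CMD ["node", "index.js"]
EXPOSE 3000
"""))
```

A real plugin would also have to turn each `RUN` line into a hermetic equivalent (repo rules for package fetches, build rules for dependency installs), which is exactly the hard part the rest of this thread discusses.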
This is from the distroless repo: https://github.com/GoogleContainerTools/distroless/blob/main/private/remote/debian_archive.bzl. However, it's not very easy to use, and in order to keep your packages up to date you will need automated tooling. I spent a few hours on it but noticed that, the way the rules are currently set up, it is fairly difficult to just pull them out and use them. I unfortunately had to fall back to using rules_docker because of the timeline I am working on, but I really want to use rules_oci, and have created my own rule set in the meantime. Additionally, https://github.com/GoogleContainerTools/rules_distroless looks to eventually serve the same purpose. I have asked about helping with that repo in another ticket, but ideally my rule set would be merged with rules_distroless or replaced by it in the future.
I come here with a similar frustration. I understand this is not an easy problem, but it seems like it is impossible to build anything non-trivial without breaking out of Bazel. For example, to build a base image with all the packages that carry years of accumulated wisdom (mysql, postgres, curl, etc.), we need to do it outside of Bazel with a vanilla Dockerfile (or an alternative to the docker CLI such as https://buildah.io/). Upstream maintainers have spent half of their lives building rpm and deb packages. It isn't as simple as just downloading a deb; we'd have to find all the required debs too... We should find a solution together. Slapping on yet another rule seems like npm's approach.
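The "find all the required debs" problem is transitive dependency resolution over each package's `Depends` field. A minimal sketch of the closure computation, with a made-up package index (real tooling would parse a `Packages` file from a Debian mirror):

```python
# Toy transitive closure over deb "Depends" data. The index below is
# invented for illustration; real tooling parses a Debian Packages index.
INDEX = {
    "curl": ["libcurl4", "libc6"],
    "libcurl4": ["libssl3", "libc6"],
    "libssl3": ["libc6"],
    "libc6": [],
}

def deb_closure(pkg, index=INDEX):
    """Return the full set of packages needed to install `pkg`."""
    needed, stack = set(), [pkg]
    while stack:
        p = stack.pop()
        if p not in needed:
            needed.add(p)
            stack.extend(index[p])  # walk Depends recursively
    return needed

print(sorted(deb_closure("curl")))
# -> ['curl', 'libc6', 'libcurl4', 'libssl3']
```

Real resolution also has to handle version constraints, virtual packages, and alternatives (`a | b`), which is much of what makes purpose-built tooling like rules_distroless necessary.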
Now that https://github.com/chainguard-dev/rules_apko exists, I consider dockerfile_image and container_run_and_commit a mistake: they might be comfortable to use, but they barely do the right thing. They're not platform-compatible and not hermetic (and therefore a root cause of cache misses).
Closing as completed #570 |
Feature request:

Allow Bazel to build Dockerfile images in a way that takes advantage of incremental build caching. The `dockerfile_image` rule in rules_docker contrib allows building such images, but it's incredibly slow because of the lack of a build cache.
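The incremental caching this FR asks for boils down to keying each build step by a digest of its inputs, so unchanged steps are reused instead of re-executed. A toy sketch of that idea (none of this is rules_docker or rules_oci API):

```python
# Toy content-addressed layer cache: a "layer" is rebuilt only when the
# digest of (step, inputs) changes. Purely illustrative.
import hashlib

cache = {}
builds = 0  # counts cache misses, i.e. actual layer builds

def build_layer(step: str, inputs: bytes) -> str:
    global builds
    key = hashlib.sha256(step.encode() + inputs).hexdigest()
    if key not in cache:
        builds += 1  # cache miss: actually "build" the layer
        cache[key] = "layer-" + key[:12]
    return cache[key]

build_layer("COPY index.js .", b"console.log('hi')")
build_layer("COPY index.js .", b"console.log('hi')")   # hit: reused
build_layer("COPY index.js .", b"console.log('bye')")  # miss: input changed
print(builds)  # -> 2
```

This is essentially what Bazel's action cache does natively for build rules, and what `dockerfile_image` cannot do, since it shells out to a full `docker build` whose intermediate state is invisible to Bazel.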