221 points by jasonpeacock 37 days ago | 70 comments
After building Depot [0] for the past three years, I can say I have a ton of scar tissue from running BuildKit to power our remote container builders for thousands of organizations.
It looks and sounds incredibly powerful on paper. But the reality is drastically different. It's a big glob of homegrown thoughts and ideas. Some of them are really slick, like build deduplication. Others are clever and hard to reason about, or in the worst case, terrifying to touch.
We had to fork BuildKit very early in our Depot journey. We've fixed a ton of things in it that we hit for our use case. Some of them we tried to upstream early on, only for them to die on the vine for one reason or another.
Today, our container builders are our own version of BuildKit, so we maintain 100% compatibility with the ecosystem. But our implementation is greatly simplified. I hope someday we can open-source that implementation to give back and show what is possible with these ideas applied at scale.
I don't use buildkit for artifacts, but I do like to output images to an OCI Layout so that I can finish some local checks and updates before pushing the image to a registry.
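For anyone who hasn't tried it, that's just an output flag on buildx; roughly like this (the tag and destination path are placeholders):

    # write the build result as an OCI layout tarball instead of loading it
    # into the local image store
    docker buildx build -t myapp:dev --output type=oci,dest=./myapp-oci.tar .

I usually poke at the result with skopeo's oci-archive transport before pushing it anywhere.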
But the real hidden power of buildkit is the ability to swap out the Dockerfile parser. If you want to see that in action, look at this Dockerfile (yes, that's yaml) used for one of their hardened images: https://github.com/docker-hardened-images/catalog/blob/main/...
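The hook for that is the syntax directive on the first line: BuildKit pulls whatever frontend image you name there and hands it the rest of the file to parse. Roughly (the custom frontend ref below is made up):

    # syntax=docker/dockerfile:1
    # ^ the stock Dockerfile frontend; swap in any frontend image instead,
    #   e.g. "# syntax=registry.example.com/my-frontend:latest", and the rest
    #   of the file can be whatever that frontend understands (yaml, HCL, ...)
    FROM alpine:3.20
    RUN echo "parsed by whichever frontend the first line names"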
BuildKit also comes with a lot of pain. Dagger (a set of great interfaces to BuildKit in many languages) is working to remove it. Even their BuildKit maintainers think it's a good idea.
BuildKit is very cool tech, but painful to run at volume
Fun gotcha in BuildKit direct versus Dockerfiles: is iteration over the map you loaded those ENV vars into consistent? No, and that's why your cache keeps getting busted. You can't hit this in a linear Dockerfile.
The --mount=type=cache for package managers is genuinely transformative once you figure it out. Before that, every pip install or apt-get in a Dockerfile was either slow (no caching) or fragile (COPY requirements.txt early and pray the layer cache holds).
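For anyone who hasn't used it yet, it's one extra flag on RUN; a minimal pip sketch (the target path is just pip's default cache dir, adjust for your package manager):

    # syntax=docker/dockerfile:1
    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .

    # pip's download/wheel cache lives in a BuildKit-managed cache volume,
    # so when requirements.txt changes only the new or changed packages
    # get downloaded and built again
    RUN --mount=type=cache,target=/root/.cache/pip \
        pip install -r requirements.txt

    COPY . .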
What nobody tells you is that the cache mount is local to the builder daemon. If you're running builds on ephemeral CI instances, those caches are gone every build and you're back to square one. The registry cache backend exists to solve this but it adds enough complexity that most teams give up and just eat the slow builds.
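For reference, the registry backend is a pair of buildx flags, roughly this (registry and tags are placeholders; mode=max is what also exports layers from intermediate stages):

    docker buildx build \
      --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
      --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
      --push -t registry.example.com/myapp:latest .

Which works, but now you're also managing a cache image, its growth, and registry auth on top of the build itself.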
The other underrated BuildKit feature is the ssh mount. Being able to forward your SSH agent into a build step without baking keys into layers is the kind of thing that should have been in Docker from day one. The number of production images I've seen with SSH keys accidentally left in intermediate layers is genuinely concerning.
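For anyone who hasn't seen it in a Dockerfile, a rough sketch, assuming a private dependency cloned over git (the repo is made up):

    # syntax=docker/dockerfile:1
    FROM debian:bookworm-slim AS fetch
    RUN apt-get update && apt-get install -y --no-install-recommends git openssh-client

    # record github.com's host key so the non-interactive clone doesn't prompt
    RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts

    # the agent socket is forwarded only for this RUN; no key material ever
    # lands in a layer
    RUN --mount=type=ssh git clone git@github.com:example-org/private-repo.git /src

Built with `docker buildx build --ssh default .` so your local agent is the one answering.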
Wait until you realize the cache mount breaks differently across Docker Engine versions and you're debugging why prod builds work but staging doesn't, something you'll only find out three months later via a 4am page.
I hate the nanny-state behavior of docker build and not being allowed to modify files/data outside of the build container and cache, like having an NFS mount for sharing data in the build or copying files out of the build.
Let me have side effects, I'm a consenting adult and understand the consequences!!!
It sounds great in theory, but it JustDoesn'tWork(tm).
Its caching is plain broken, and the overhead of transmitting the entire build state to the remote computer every time is just busywork for most cases. I switched to Podman+buildah as a result, because it uses the previous, dead-simple Docker layered build system.
If you don't believe me, try to make caching work on GitHub Actions with multi-stage images. Just have a base image and a couple of other images produced from it, and try to use the GHA cache to minimize the amount of pulled data.
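For anyone who wants to try: the usual incantation is the gha cache backend with mode=max so layers from intermediate stages get exported too, something like this (stage and tag names are placeholders):

    docker buildx build \
      --cache-from type=gha \
      --cache-to type=gha,mode=max \
      --target app -t myapp:ci .

Note the gha backend also needs the Actions runtime token and cache URL exported into the job, which docker/build-push-action normally handles for you. Even then, cache size limits and eviction mean you still end up re-pulling far more than you'd expect.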
The registry-based cache does help with layer reuse, but from what I've seen it still suffers if you're iterating on early build stages—every dependency change invalidates downstream layers. As far as I know, there's no widely deployed solution that does fine-grained memoization across cache backends, so you end up rebuilding more than feels necessary.
Unfortunately, make is the better-written software. I think Dockerfile was ultimately a failed iteration on Makefile. YAML and Dockerfile are poor interfaces for these kinds of applications.
The code-first options are quite good these days, but you can get pretty far with make and other legacy tooling. Docker feels like a company looking to sell enterprise software first and foremost, not to move the industry standard forward.
Make is timestamp based. That is a thoroughly out-of-date approach only suitable for a single computer. You want distributed hash-based caching in the modern world.
Along similar lines, when I was reading the article I was thinking "this just sounds like a slightly worse version of nix". Nix has the whole content-addressed build DAG with caching, the intermediate language, and the ability to produce arbitrary outputs, but it is functional: 100% of the inputs must be accounted for in the hashes/lockfile. Docker, by contrast, lets you run commands like `apk add firefox` that pull data from outside sources that can change from day to day, so two docker builds can end up with the same hash but different output, making it _not_ reproducible like the article falsely claims.
Edit: The claim about the hash being the same is incorrect, but an identical Dockerfile can produce different outputs on different machines/days whereas nix will always produce the same output for a given input.
SRE here. I feel like both are just instructions for how to get from source code -> executable, with docker/containers providing the "deployable package" even if the language does not compile into a self-contained binary (Python, Ruby, JS, Java, .NET).
Also, there is nothing stopping you from creating a container that has make plus the tools required to compile your source code, then writing a Dockerfile that uses those tools to produce the output and leave it on the file system. Why that approach? Less friction for compiling: I find most make users have more pet build servers than cattle, and making modifications to those can involve a lot of friction due to conflicts.
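A rough sketch of that pattern, with the Makefile targets and paths as placeholders: one stage runs make inside the container, a bare final stage holds just the artifacts, and a local output drops them back on the host:

    # syntax=docker/dockerfile:1
    FROM gcc:13 AS build
    WORKDIR /src
    COPY . .
    # the existing Makefile does the real work inside the container
    RUN make all

    # a bare stage holding only the artifacts we want back out
    FROM scratch AS artifacts
    COPY --from=build /src/build/ /

Then `docker buildx build --target artifacts --output type=local,dest=./out .` leaves the compiled output in ./out on the host, no registry or docker run needed.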
I disagree—Dockerfile's declarative layers are exactly what you want for reproducible builds. Make's timestamp approach breaks down completely in container contexts where you need bit-for-bit identical artifacts across different machines. The industry moved forward because makefiles couldn't solve the "works on my machine" problem at scale.
The "This is the key insight -" or "x is where it gets practical -", are dead give aways too. If I wanted an LLMs explanation of how it works, I can ask an LLM. When I see articles like this I'm expecting an actual human expert
Are you on a phone? I loaded the article on both my phone and my laptop. The ASCII diagram was thoroughly distorted on my phone but it looked fine on my laptop.
Reminds me of `make` in the 80s—same caching problems, same "it'll revolutionize builds" promises. We spent months debugging phantom cache hits at Sun. BuildKit's better, sure, but distributed build state is still fundamentally messy. Good luck.
[0] https://depot.dev/products/container-builds