What has Docker become?

Posted by tuananh 1 day ago


Comments

Comment by mg794613 1 day ago

"The problem is that Docker the technology became so successful that Docker the company struggled to monetize it. When your core product becomes commoditized and open source, you need to find new ways to add value."

No, everything was already open source and others had done it before too; they just made it in a way a lot of "normal" users could start with it, then they waited too long and others created better/their own products.

"Docker Swarm was Docker’s attempt to compete with Kubernetes in the orchestration space."

No, it never was intended like that. That some people build infra/business around it is something completely different, but swarm was never intended to be a kubernetes contender.

"If you’re giving away your security features for free, what are you selling?"

This is what is actually going to cost them their business. I'm extremely grateful for what they have done for us, but they didn't give themselves a chance. Their behaviour has been more akin to a non-profit. Great for us, not so great for them in the long run.

Comment by dralley 1 day ago

It didn't help them that they rejected the traditionally successful way of monetizing open source software: selling support contracts to large corporate users.

Corporate customers didn't like the security implications of the Docker daemon running as root, they wanted better sandboxing and management (cgroups v2), wanted to be able to run their own internal registries, didn't want to have docker trying to fight with systemd, etc.

Docker was not interested (in the early years) in adopting cgroups v2 or daemonless / rootless operation, and they wanted everyone to pay to use Dockerhub on the public internet rather than running their own internal registries, so docker-cli didn't support alternate registries for a long long time. And it seemed like they disliked systemd for "ideological" reasons to an extent that they didn't make much effort to resolve the problems that would crop up between docker and systemd.

Because Docker didn't want to build the product that corporate customers wanted to use, and didn't accept patches when Red Hat tried to get those features implemented themselves, eventually Red Hat just went out and built up Podman, Quay, and the entire ecosystem of tooling that those corporate customers wanted (and sold it to them). That was a bit of an own goal.

Comment by cpuguy83 1 day ago

Absolutely none of this is true. Docker had support contracts (Docker EE... and trying to remember, docker-cs before that naming pivot?).

Corporate customers do not care about any of the things you mentioned. I mean, maybe some, but in general no. That's not what corps think about.

There was never "no interest" at Docker in cgv2 or rootless. Never. cgv2 early on was not usable. It lacked so much of the functionality that v1 had. It also didn't buy much, particularly because most Docker users aren't manually managing cgroups themselves.

Docker literally sold a private registry product. It was the first thing Docker built and sold (and no, it was not late, it was very early on).

Comment by djb_hackernews 1 day ago

For the record, cpuguy83 was in the trenches at Docker circa 2013; it was him and a handful of other people working on Docker when it went viral. He has an extreme insider's perspective; I'd trust what he says.

Comment by FireBeyond 1 day ago

I mean you can say that, but on the topic of rootless, regardless of "interest" at Docker, they did nothing about it. I was at Red Hat at the time, a PM in the BU that created podman, and Docker's intransigence on rootless was probably the core issue that led to podman's creation.

Comment by mikepurvis 1 day ago

I've really appreciated RH's work both on podman/buildah and in the supporting infrastructure like the kernel that enables nesting, like using buildah to build an image inside a containerized CI runner.

That said, I've been really surprised to not see more first class CI support for a repo supplying its own Dockerfile and being like "stage 1 is to rebuild the container", "stage two is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "separate job somewhere that rebuilds a static base container on a timer".
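
To make the lockfile-keyed caching concrete, here is a rough sketch (file names and base image are placeholders; any package manager with a lockfile works the same way):

    cat > Dockerfile <<'EOF'
    FROM node:22-slim
    WORKDIR /app
    # copy only the lockfile first: this layer, and the expensive install
    # below it, stay cached until the lockfile itself changes
    COPY package.json package-lock.json ./
    RUN npm ci
    # source changes only invalidate the layers from here down
    COPY src/ ./src/
    CMD ["node", "src/index.js"]
    EOF
    docker build -t myapp:ci .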

Comment by FireBeyond 1 day ago

Yeah, I've moved on from there, but I agree. There wasn't a lot of focus on the CI side of things beyond the stuff that ArgoCD was doing, and Shipwright (which isn't really CI/CD focused but did some stuff around the actual build process, and really suffered a failure to launch).

Comment by mikepurvis 1 day ago

My sense is that a lot of the container CI space just kind of assumes that every run starts from nothing or a generic upstream-supplied "stack:version" container and installs everything every time. And that's fine if your app is relatively small and the dependency footprint is, say, <1GB.

But if that's not the case (robotics, ML, gamedev, etc) or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take up non-trivial time— particularly galling for a step that container tools are so well equipped to cache away.

I know depot helps a bunch with this by at least optimizing caching during build and ensuring the registry has high locality to the runner that will consume the image.

Comment by cpuguy83 1 day ago

That's true, we didn't do much around it. Small startup with monetization problems and all.

Comment by jeremyjh 1 day ago

So absolutely at least some of that is true.

I’d be surprised if the systemd thing was not also true.

I think it’s quite likely Docker did not have a good handle on the “needs” of the enterprise space. That is Red Hat's bread and butter; are you saying they developed all of that for no reason?

Comment by cpuguy83 21 hours ago

I made no comment about RedHat's offerings.

I don't feel like RedHat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.

What they did do, AIUI based on feedback in the oss docker repos, is those contracts stipulated that you must run RHEL in the container and the host, and use systemd in the container in order to be "in support". So that's kind of a self-feeding thing.

Comment by oso2k 17 hours ago

    I don't feel like RedHat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.

Correct. Maybe starting with RHEL7, Red Hat took the stance that “containers are Linux”. Supporting Docker in RHEL7 was built in as soon as we added it to the ‘rhel-7-server-extras-rpms’ repo. The containers were supported as “customer workloads” while the docker daemon and cli were supported as part of the OS.

    What they did do, AIUI based on feedback in the oss docker repos, is those contracts stipulated that you must run RHEL in the container and the host, and use systemd in the container in order to be "in support". So that's kind of a self-feeding thing.

Not quite right. RHEL containers (and now UBI containers) are only supported when they run on RHEL OS hosts or RHEL CoreOS hosts as part of an OpenShift cluster. systemd did not work (well?) in containers for a while and has never been a requirement. There are several reasons for this RHEL containers on RHEL/RHCOS requirement. For one, RHEL/UBI containers inherit their subscription information from their host. This is much like how RHEL VMs can inherit their subscription if you have virtualization host-based subscriptions. If containers weren’t tied to their host, then by convention, each container would need to subscribe to Red Hat on instantiation and would consume a Red Hat subscription instance.

https://access.redhat.com/articles/2726611

Comment by oblio 23 hours ago

I've worked in build/release engineering/devops for a long time.

I would be utterly shocked if corporate customers wouldn't want corporate Docker proxies/caches/mirrors.

Entire companies have been built on language specific artifact repositories. Generic ones like Docker are even more sought after.

Comment by cpuguy83 22 hours ago

Right, and Docker sold such products from early on.

Comment by PaulHoule 1 day ago

When Docker was new I had a really bad ADSL connection (2Mbps) and couldn't ever stack up a containerized system properly because Dockerhub would time out.

I did large downloads all the time; I used to download 25GB games for my game consoles, for instance. I just had to schedule them and use tools that could resume downloads.

If I'd had a local docker hub I might have used docker but because I didn't it was dead to me.

Comment by nyrikki 1 day ago

Unfortunately even podman etc.. are still limited by OCIs decision to copy the Docker model.

Crun just stamp couples security profiles as an example, so everything in the shared kernel that is namespace incompatible is enabled.

This is why it is trivial to get unauditable communication between pods on a host, etc.

Comment by ragall 1 day ago

> Unfortunately even podman etc.. are still limited by OCIs decision to copy the Docker model.

Which parts of the model are you referring to?

Comment by nyrikki 20 hours ago

OCI container runtimes like OCI's runc are “container runtimes”, i.e. the runtime spec [2].

Basically, Docker started using lxc, but wanted a Go-native option, and wrote runc. If you look at [0] you can see how it actually instantiates the container. Here is a random blog that describes it fairly well [1].

crun is the podman-related project written in C, which is more efficient than the Go-based runc.

You can try this even as the user nobody 65534:65534, but you may need to make some dirs, or set envs.

Here is an example pulling an image with podman to make it easier, but you could just make an OCI spec bundle and run it:

    mkdir hello
    cd hello
    # pull and export the hello-world image with podman just to get a rootfs
    podman pull docker.io/hello-world
    podman export $(podman create hello-world) > hello-world.tar
    mkdir rootfs
    tar -C rootfs -xf hello-world.tar
    # generate a default rootless OCI spec (config.json) and point it at /hello
    runc spec --rootless
    sed -i 's;"sh";"/hello";' config.json
    runc run container1

    Hello from Docker!

runc doesn't support any form of constraints like a bounding set on seccomp, selinux, apparmor, etc., but it will apply profiles you pass it.

Basically it fails open, and with the current state of apparmor and selinux it is trivial to bypass the minimal userns restrictions they place.

Historically, before rootless containers this was less of an issue, because you had to be a privileged user to launch a container. But with the holes in the LSMs, no ability to set administrative bounding sets, and the reality that none of the defaults constrain risky kernel functionality like vsock, openat2 etc... there are a million ways to break netns isolation etc...

Originally the docker project wanted to keep all the complexity of mutating LSM rules etc. in containerd, and they also fought even basic controls like letting an admin disable the `--privileged` flag at the daemon level.

Unfortunately due to momentum, opinions, and friction in general, that means that now those container runtimes have no restrictions on callers, and cannot set reasonable defaults.

Thus now we have to resort to teaching every person who launches a container to be perfect and disable everything, which they never do.
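
To make that concrete, this is roughly the kind of per-invocation hardening that ends up being every operator's homework (the flags exist in both docker and podman; the seccomp profile path here is made up):

    # drop everything and opt back in, rather than trusting the runtime defaults
    podman run --rm \
      --cap-drop=ALL \
      --security-opt no-new-privileges \
      --security-opt seccomp=/etc/containers/strict-seccomp.json \
      --network=none \
      --read-only \
      docker.io/library/alpine:3.20 id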

If you run a k8s cluster with nodes on VMs, try this for example. If it doesn't error out, any pod can talk to any other pod on the node, with a protocol you aren't logging, and which has limited ability to be logged anyway. (This applies if your k8s nodes are running systemd v256+ and you aren't using containerd, which blocked vsock; cri-o, podman, etc. don't, at least as of a couple of weeks ago.)

    socat - VSOCK-LISTEN:3000
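
For the other side of that check, from a second pod on the same node, something like the following should connect (the CID here is a guess on my part: VMADDR_CID_LOCAL = 1 needs a 5.6+ kernel with vsock loopback, and your setup may need the node's own CID instead):

    echo ping | socat - VSOCK-CONNECT:1:3000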

You can also play around with other af_families, as IPX, AppleTalk, etc. are all available by default, or see if you can use openat2 on some file in /proc to break out.

[0] https://manpages.debian.org/testing/runc/runc-spec.8.en.html [1] https://mkdev.me/posts/the-tool-that-really-runs-your-contai... [2] https://github.com/opencontainers/runtime-spec/blob/main/REA...

Comment by oblio 23 hours ago

> Crun just stamp couples security profiles

I don't understand any of this :-)

Comment by anonymars 1 day ago

I can't help but see a parallel with some of the entertainment franchises in recent years (Star Wars, etc.) -- where a company seems to be allergic to taking money by giving people what they want, and instead insists on telling people what they should want and blaming them when they don't

Comment by Normal_gaussian 1 day ago

Yes; it's really notable that corporates and other support companies (e.g. Red Hat) don't want to start down the path of NIH, and will go to significant efforts to avoid it. However, once they have done it, it is very hard to make them come back.

Comment by PaulHoule 1 day ago

I think the Star Wars problem was that instead of making the movies at a steady cadence they stretched it out too long.

Comment by tracker1 1 day ago

I think what Docker should have done is charge for Docker Desktop from the start... even $5/mo/user as a discounted rate for non-open-source usage... and similarly for container storage, have had a commercial offering for private containers from very early on.

The former felt like a rug pull when they did it later, and the latter should have been obvious from the start. But it wasn't there in the beginning, too many alternatives from every cloud provider popped up to fill that gap, and it was too late.

There were a lot of cool ideas, and I think early on they were more focused on the cool ideas and less on how to make it a successful, long-lived business that didn't rely on VC funding and an exit strategy to succeed.

Comment by xp84 19 hours ago

I have to agree. Of all the per-seat subs that my employer has, Docker Desktop provides some of the most easily provable value. I tend to agree that making Docker Desktop a commercial product way back then would probably have been good. The only hurdle would be figuring out enough of a 'free tier' to get developers to get into it and get addicted and demand a license, but not so much that everyone just uses the "free tier" or "personal" edition indefinitely - which I suspect many, many companies' developers do to this day with Docker Desktop, with their employers' tacit consent.

This "free to start using" move is best exemplified by Slack, which ended up taking over many companies guerrilla-style. They did a pretty good job of pivoting companies to paying, too.

Comment by paradox460 22 hours ago

They could have invested more into docker desktop as well. I pay for orbstack, because docker desktop is trash on macos

Comment by mattwiese 1 day ago

> Their behaviour has been more akin to a non-profit. Great for us, not so great for them in the long run.

This is particularly amusing when considering they helped start the Open Container Initiative with others back in 2015.

What if Docker "the company" was just a long con to use VC bux to fund open source? I say mostly in jest.

Comment by pjmlp 1 day ago

Only because, with Google open sourcing Kubernetes, it was a decision between still being able to play the game or being left out completely; helping with OCI was a survival decision.

As proven later when Kubernetes became container runtime agnostic.

Comment by ffsm8 22 hours ago

>> "Docker Swarm was Docker’s attempt to compete with Kubernetes in the orchestration space."

> No, it never was intended like that.

It was certainly marketed as that though...

Comment by JeremyNT 1 day ago

> No, everything was already open source and others had done it before too; they just made it in a way a lot of "normal" users could start with it, then they waited too long and others created better/their own products.

Yes. It was a helpful UI abstraction for people uncomfortable with lower level tinkering. I think the big "innovations" were 1) the file format and 2) the (free!) registry hosting. This drove a lot of community adoption because it was so easy to share stuff and it was based on open source.

And while Docker the company isn't the behemoth the VCs might have wanted, those contributions live on. Even if I'm using a totally different tool to run things, I'm writing a Dockerfile, and the artifacts are likely stored in something that acts basically the same as Docker Hub.

Comment by daveisfera 23 hours ago

Arrogance was what actually killed them. They picked fights with Google and RedHat, and then showing up at conferences with shirts that said "we don't accept pull requests" tipped the scales, so RedHat and Google both went their own way and Docker's technology was pushed out of 2 of its biggest channels.

Comment by diceduckmonk 10 hours ago

> "we don't accept pull requests"

Any posts on the internet archives to understand the history?

Comment by crimps 1 day ago

I joined them after they were clearly in decline and half of the office was empty. Contrary to some of the comments here, there were enterprise products (Docker EE, private registry, orchestration) and a very large sales team.

There were also a lot of talented, well-paid engineers working on open source side projects with no business value. It just wasn't a very well-run company. You can't take on half a billion dollars in VC just to sell small enterprise support contracts.

Comment by abronan 1 day ago

> but swarm was never intended to be a kubernetes contender.

Your comment is accurate for the original Swarm project, but a bit misleading regarding Swarm mode (released later on and integrated into docker).

I have worked on the original Swarm project and Swarmkit (on the distributed store/raft backend), and the latter was intended to compete with Kubernetes.

It was certainly an ambitious and borderline delusional strategy (considering the competition), but the goal was to offer a streamlined and integrated experience so that users wouldn't move away from Docker, and would use Swarm mode instead of Kubernetes (with a simple API, secured by default, just docker to install, no etcd or external key-value metadata store required).

You can only go so far with a team of 10 people versus the hundreds scattered across Google/RedHat/IBM/Amazon, etc. There were so many evangelists and tech influencers/speakers rooting for Kubernetes already, reversing that trend was extremely difficult, even after initiating sort of a revolution in how developers deployed their apps with docker. The narrative that cluster orchestration was Google's territory (since they designed Borg that was used at a massive scale) was too entrenched to be challenged.

Swarm failed for many reasons (it was released too soon with a buggy experience and at an incomplete state, lacking a lot of the features k8s had, but also too late in terms of timing with k8s adoption). However, the goal for "Docker Swarm mode" was to compete with Kubernetes.

Comment by chuckadams 1 day ago

I love Kubernetes, but it's still a big leap from docker-compose to k8s, and swarm filled that niche admirably. I'm still in that niche -- k8s is overkill for every one of my projects -- but k3s is pretty lightweight, easy to install, and there's a lot of great tooling for k8s I can use with it. Still wish there were something as simple as "docker-compose plus a couple bits", the way swarm mode was -- I'm drowning in YAML files!

Comment by tln 23 hours ago

Thanks for chiming in, I was questioning that assertion myself.

I think the problem was giving up on swarm TBH. At some point it was clear k8s would be dominant, but there was still room for that streamlined and integrated experience.

Comment by xeromal 1 day ago

For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.

Comment by karolist 1 day ago

Sir, this is a Docker, not Dropbox

Comment by aruggirello 21 hours ago

I'm replacing Dropbox with Unison [0] over ssh, BTW. It's a great piece of software (multiplatform, and it even has a GUI).

[0] https://github.com/bcpierce00/unison

Comment by ChocolateGod 1 day ago

It's not the 90s anymore.

Comment by metaltyphoon 1 day ago

All that comment says is “I don’t know what docker solves”

Comment by karolist 23 hours ago

it's a parody of the infamous https://news.ycombinator.com/item?id=9224

Comment by ahepp 23 hours ago

I mean, isn't that just about what happened to Docker?

They wrote a really nice wrapper around cgroups/ns/tarball hosting and then struggled to monetize it because a large portion of their users are exactly the kind of people who could set up a curlftpfs document cloud.

Comment by nixosbestos 1 day ago

> No, it never was intended like that. That some people build infra/business around it is something completely different, but swarm was never intended to be a kubernetes contender.

That would be news to the then Docker CTO, who reached out to my boss to try to get me in trouble, because I was tweeting away about [cloud company] and investing heavily in Kubernetes. The cognitive dissonance Docker had about Swarm was emblematic of the missteps they took during that era where Mesos, Kube and Swarm all looked like they could be The Winner.

Comment by pc86 1 day ago

My own mental model of swarm is "k8s but easier" - is that wrong?

Comment by Moto7451 1 day ago

One thing that really hurt them from my PoV was how they acted when they changed their licensing structure with respect to revenue-generating companies. I’m fine with the idea that licensing Docker and Docker Desktop is a good thing to do. However, I think they just made people distrust their motives with their approach to this.

At two places I worked their reps reached out to essentially ensnare the company in a sort of “gotcha” scheme where if we were running the version of Docker Desktop after the commercial licensing requirement change, they sent a 30 day notice to license the product or they’d sue. Due to the usual “mid size software company not micromanaging the developers” standard, we had a few people on a new enough version that it would trigger the new license terms and we were in violation. They didn’t seem to do much outreach other than threatening us.

So in each case we switched to Rancher Desktop.

The licensing cost wasn’t that high, but it was hard to take them in good faith after their approach.

Comment by someone7x 1 day ago

> they sent a 30 day notice to license the product or they’d sue

This tracks with what I saw: one day an email was sent out to make sure you didn't have Docker Desktop installed.

It was wild because we were on the heels of a containerize-all-the-things push and now we’re winding down Docker?? Sure, whatever you say, boss.

Comment by bullonabender 7 hours ago

Same here. The rug pull was not received well by our teams. The messaging was terrible. Some still joke it was like a stick-up. "Pulling a docker" has now become internal slang for firms that let you use/build for years and then ransom you later. We pivoted just after too. They also tagged my personal accounts, which had nothing to do with my day job.

Comment by steve1977 1 day ago

So they have become Oracle...

Comment by Someone 1 day ago

> if we were running the version of Docker Desktop after the commercial licensing requirement change, they sent a 30 day notice to license the product or they’d sue.

What exactly are you objecting to? Since you say “I’m fine with the idea that licensing Docker and Docker Desktop is a good thing to do” it’s not the change, so what is it? The 30 days, them saying they would sue after that, or the tone?

I haven’t seen the messages so I cannot comment on that, but if you accept that the licensing can be changed, what's wrong with writing offenders to remind them to either stop using the product or start paying? And what’s wrong with giving them 30 days, since, in my memory, they announced the licensing change months in advance?

Comment by dec0dedab0de 1 day ago

It's rude behavior, and generally not a good way to start a business relationship.

It reminds me of someone handing me something on the street then asking me to pay for it, whenever they do that I just throw whatever it is as far as I can and keep walking.

Comment by saghm 21 hours ago

Normally people who want to sell something don't start out right off the bat with the threat of a lawsuit

Comment by dangus 1 day ago

They basically made the case for podman existing, and I see podman gaining steam and being easier and easier to drop in as a replacement for Docker.

If they never changed that licensing, nobody would have had an incentive to put big effort into an alternative.

I think the hosted Docker registry should have been their first revenue source and then they should have created more closed source enterprise workflow solutions and hosted services that complement the docker tooling that remained truly open source, including desktop.

Comment by b40d-48b2-979e 1 day ago

    Due to the usual “mid size software company not micromanaging the developers” standard

You didn't have a device management system or similar product managing software installs (SCCM in Windows land)? That's table stakes for any admin.

Comment by Moto7451 1 day ago

I believe you’re using the royal you, but just to be clear, I didn’t run these companies.

At one place there wasn’t and at the other it wasn’t well managed. I agree from a compliance point of view and have advocated for this but I was not on the IT/Ops side of the business so I could only use soft power.

The CTO at the first company had a “zero hindrances for the developers” mindset and the latter was reeling from being a merger of five different companies. The latter did a better job of trying, to say the least, but wasn’t great about it. The outcome was the same nonetheless.

Comment by jabroni_salad 1 day ago

I mainly consult but we have a few managed clients that are dev houses too. We do their employee onboarding, wrangle their licensing, keep them updated, give them a self-service storefront for commercial software that they pay for, add SSO integrations for them, etc. Basically they wanted to do NoOps but also didn't want to have to procure or configure their equipment.

But outside of 'make sure the Oracle lawyers never contact us' they don't want us policing them, and they are admins on their own devices. For a lot of businesses their computer network has separate production and business zones, and the production zone is a YOLO type situation.

Comment by coredog64 1 day ago

Amazon has device management but still allows developers to install software via `brew`. Windows is slightly more locked down in that users don't have admin by default, but there's a very low bar to clear to get it temporarily.

Comment by b40d-48b2-979e 1 day ago

Brew also has Workbrew, which gives the admin control of the repository. There's also Jamf on macOS. None of these systems has to give developers free rein to violate software licenses.

Comment by dangus 1 day ago

Device management != micromanaging developer workflow.

At my midsize company, our engineers could absolutely say something like “we don’t like Terraform Cloud, we want to switch to OpenTofu and env0” and our management would be okay with it and make it happen as long as we justify the change.

We wouldn’t even really have to ask permission if the change was no cost.

Comment by ajcp 1 day ago

-> and make it happen.

I think OP's point is they failed on this part. "Making it happen" should have been ensuring a compliant and approved version of the software was the one made available to the developers. At a large scale that is done via device management, but even at a medium-sized enterprise that should have been done via a source management portal of some sort.

Comment by rmccue 1 day ago

> Docker’s journey reads like a startup trying to find product-market fit, except Docker already had product-market fit

Strongly disagree. The core Docker technology was an excellent product and as the article says, had a massive impact on the industry. But they never found a market for that technology at any price point that wasn't ~free, so they didn't have PMF. That technology also only took off in the way it did because it was free and open source.

Comment by LtWorf 1 day ago

The entire technology is a wrapper on setns().

Comment by airstrike 1 day ago

And Dropbox is just curlftpfs

Comment by llbbdd 1 day ago

The whole computer is a wrapper around electricity

Comment by ronsor 1 day ago

And humans are just a wrapper around oxygen, food, and water.

Comment by hashstring 19 hours ago

That’s just a wrapper around quarks for the most part…

Comment by MikeNotThePope 17 hours ago

I personally identify as a quark-gluon plasma held together by the strong force and a sheer lack of boundaries.

Comment by steve_adams_86 21 hours ago

I could build that in a weekend

Comment by radioradioradio 1 day ago

Seems like (according to the author) whatever Docker is doing is a sign of their immediate demise, and everyone on HN is cheering for the company to go down in flames no matter what.

The tech is open source and free forever - that's somehow a problem? The company monetised enterprise features, while keeping core and hub free - also a problem? It's exploring AI tools, like everyone else is - should they not? Should they just stay stagnant? It has made hardened images free instead of making that a premium feature only for people in banks - and monetising SLAs, how is that a problem?

Docker is still maintaining the runtime on which orbstack, podman etc are all using, and all the cloud providers are using, but apparently at the same time Docker is deeply irrelevant and should not make money - while all of us on HN with well paid tech jobs get to have high thoughts on their every move to pay their employees and investors...

Comment by bmitch3020 1 day ago

I agree with a lot of the above, but then there's:

> Docker is still maintaining the runtime on which orbstack, podman etc are all using, and all the cloud providers are using

I need to fact check that one. runc was donated by Docker to OCI a while back. And containerd was created under the CNCF from a lot of Docker code and ideas. podman is sitting on the RedHat containers stack, which has their own code base. Docker itself uses runc and containerd, and so do most Kubernetes deployments. Many of these tools go to containerd directly without deploying the Docker engine.

Comment by shykes 1 day ago

> containerd was created under the CNCF from a lot of Docker code and ideas

No. containerd was created by Docker, as part of a refactoring of dockerd, then later donated to cncf. Over time it gained a healthy base of maintainers from various companies. It is the most successful of Docker's cncf contributions. But it was not created under the CNCF.

Comment by amluto 1 day ago

> Docker is still maintaining the runtime on which orbstack, podman

Podman? Podman appears to have reimplemented basically everything. What runtime are you talking about?

Comment by JCattheATM 21 hours ago

Hub.

Comment by amluto 19 hours ago

What do you mean? There is a website called Docker Hub. There is a competing product, affiliated with Podman, called Quay, which is also a website and an on-prem solution that I think you can pay for and also an open-source product:

https://github.com/quay/quay

Comment by JCattheATM 18 hours ago

> There is a competing product, affiliated with Podman, called Quay

Most stuff is not published on Quay; most podman users use Docker Hub or Compose files.

Comment by amluto 18 hours ago

> Most stuff is not published on Quay; most podman users use Docker Hub

or GHCR, etc. Docker Hub is hardly a “runtime”.

> or Compose files.

Compose files aren’t a replacement for Docker Hub. And Podman has a reimplementation of compose.

Comment by JCattheATM 18 hours ago

> Docker Hub is hardly a “runtime”.

In the context of your question Hub makes sense, as it is something Docker maintains that most podman users still rely on

> Compose files aren’t a replacement for Docker Hub.

Correct, but most compose files refer to Docker Hub.

You seem to be highlighting that alternatives, which I don't dispute, but most people are overwhelmingly using the services that Docker maintain. That's the answer to your question. Read up a few replies if you've forgotten the context.

Comment by radioradioradio 1 day ago

To the respondents above - you are right - that lacked nuance.

Look at the maintainer lists of containerd and moby, which are used by loads of others; there are several Docker employees on those lists. I didn't check what their amount of involvement is compared to other companies, nor whether they are even sanctioned by Docker to do the work, but AFAIK those projects came out of OCI with Docker as one of the primary backers.

Comment by shykes 1 day ago

OP is wrong. Docker created containerd, then donated to cncf, then other contributors joined.

Comment by pjmlp 1 day ago

Not really; Rancher, containerd, podman don't depend on Docker other than offering a compatibility layer for tools that expect to talk to the real Docker.

Comment by shykes 1 day ago

containerd is the lower half of dockerd, spun out by Docker as a standalone open source project. It remains a core component of Docker.

Comment by pjmlp 1 day ago

I stand corrected on that one, however it was then another piece of the stack they ended up losing as added value.

Comment by shykes 1 day ago

The spinning out of containerd is best understood in combination with the launch of Docker Desktop, which was not open source, and later became the main source of revenue.

Docker in its entirety was at risk of being wrapped as a commodity component. By spinning out lower-level components under a different brand, they (we) made it possible to keep control of the Docker brand, and use it to sell value-added products.

Source: I'm the founder of Docker.

Comment by sneak 1 day ago

> The tech is open source and free forever - that's somehow a problem? The company monetised enterprise features, while keeping core and hub free - also a problem?

Docker Desktop, among other things, is not open source and is not free.

Open Core is not something that people who care about software freedoms engage in. It’s what proprietary software makers engaging in open source cosplay do.

Comment by shykes 1 day ago

Hi, I'm the founder of Docker. I started it in 2008 (under the name Dotcloud) and left in 2018.

AMA.

Comment by biggestlou 1 hour ago

How did it feel the first time you or someone on your team built a container and ran it?

Comment by lenova 1 day ago

Hi! Thanks for offering an AMA here. I don't have a specific question, but I am interested in hearing about the general story of what it was like developing Docker, what the experience was like trying to build a business around it, and what you're up to these days in post-Docker life. Thanks in advance!

Comment by shykes 23 hours ago

It's difficult to tell the whole story in a HN comment, but if you're interested, I did share my experience in a few podcasts over the years. Here are a few that I could find on YouTube: https://www.youtube.com/watch?v=UVED44sb7zg https://www.youtube.com/watch?v=MSlHvz57RKs

I also recently discovered a trove of my old presentations, retracing my early obsession with the same problem, and my repeated failed attempts to get people to care. I shared some of them in a talk a few weeks ago: https://www.youtube.com/watch?v=huRfsLMK5sA

Comment by meonkeys 1 day ago

What's the most important thing for Docker, Inc. to do right now?

Comment by shykes 1 day ago

I would say: listen to your customers. Listen to your engineers. Don't overhire. Pick your battles carefully. Don't tolerate mediocre VPs.

All generic advice since I don't have inside information.

Comment by McP 1 day ago

What are your thoughts on Podman?

Comment by shykes 1 day ago

Imitation is the highest form of flattery! Obviously there was demand for an alternative to Docker that was native to the Red Hat platform. We couldn't offer that (although we tried in the early days) so it made sense that they would.

In the early days we tried very hard to accommodate their needs, for example by implementing support for devicemapper as an alternative to aufs. I remember spending many hours in their Boston office whiteboarding solutions. But we soon realized our priorities were fundamentally at odds: they cared most about platform lock-in, and we cared most about platform independence. There was also a cultural issue: when Red Hat contributes to open source it's always from a position of strength. If a project is important to them, they need merge authority - they simply don't know how to meaningfully contribute to an upstream project when they're not in charge. Because of the diverging design priorities, they never earned true merge rights on the repo: they had to argue for their pull requests like everyone else, and input from maintainers was not optional. Many pull requests were never merged because of fundamental design issues, like breaking compatibility with non-Red Hat platforms. Others because of subjective architecture disagreements. They really didn't like that, which led to all sorts of drama and bad behavior. In the process I lost respect for a company I once admired.

I also think they made a mistake marketing podman as a drop-in replacement to Docker. This promise of compatibility limited their design freedom and I'm sure caused the maintainers a lot of headaches- compatibility is hard!

Ultimately the true priority of podman - native integration with the Red Hat platform - makes it impossible for it to overtake Docker. I'm sure some of the podman authors would like to jettison that constraint, but I don't think that's structurally possible. Red Hat will never invest in a project that doesn't contribute to their platform lock-in. Back when RH was a dominant platform, that was a strength. Nowadays it is a hindrance.

Comment by daveisfera 22 hours ago

There was probably a lot going on behind closed doors, but from the outside, it appeared that RedHat was trying to improve the security and technical details of containers, but Docker was just refusing pull requests and not playing nice. This eventually drove RedHat to make their own implementation (i.e. Podman), so it was a self created enemy and not necessarily one that was built-in/inevitable. I'm definitely not a fan of RedHat's moves since being acquired, but at least from the outside, this looked like Docker being arrogant and problematic and not a "RedHat problem".

Comment by shykes 22 hours ago

I am painfully aware of that narrative. All I can say is that it is a false narrative, deliberately pushed by Red Hat for competitive reasons. There was a deliberate decision to spend marketing dollars making Docker look bad (specifically less secure), at a time where we were competing directly in the datacenter market.

Ask yourself: how many open source projects reject PRs every day because of design disagreements? That's just how open source works. Why did you hear about that specific case of PRs getting rejected, and why do you associate it with vague concepts like "arrogance" and "insecurity"? That's because a marketing team engineered a narrative, then spent money to deploy that narrative - via blog posts, social media posts, talks at conferences, analyst briefings, partner briefings, sales pitches, and so on. This investment was justified by the business imperative of countering what was perceived to be an existential threat to Red Hat's core business.

It opened my eyes to the reality of big business in tech: many of the "vibes" and beliefs held by the software engineering community, are engineered by marketing. If you have enough money to spend, you can get software engineers to believe almost anything. It is a depressing realization that I am still grappling with.

The most damning example I can give you: we once rejected a PR because it broke compatibility with other platforms. Red Hat went ahead and merged it in their downstream RPM package. So, Fedora and RHEL users who thought they were installing Docker, were in fact installing an unauthorized modified version of it. Later, a security vulnerability was discovered in their modified version only, but advertised as a vulnerability in Docker - imagine our confusion, looking for a vulnerability in code that we had not shipped. Then Red Hat used this specific vulnerability, which only existed in their modified version, in their marketing material attacking Docker as "insecure". That was an eye-opening moment for me...

Comment by hyperman1 9 hours ago

If it is pure marketing, I wonder why docker couldn't play the same game and be better at it?

E.g. for your most damning example: if Docker published this story, blogged about it, made noise in places like HN, it is exactly what the press would love: RH breaks Docker security while claiming to be more secure! The Emperor has no clothes! If you take security seriously, accept no fake substitutes!

Comment by ghthor 18 hours ago

Not sure the docker license supports calling distribution patches “unauthorized”

Comment by shykes 18 hours ago

The trademark policy does.

In any case I meant it in an informal software engineering sense: it's bad form for a packager to distribute upstream software under its original name, with substantial modifications beyond what users would expect distro packagers to make - backporting, build rules, etc.

For such a downstream change to introduce security vulnerabilities is a major fuckup. To actively blame upstream for said vulnerability, while competing with them in the market, is unethical.

Comment by JCattheATM 20 hours ago

> They really didn't like that, which led to all sorts of drama and bad behavior.

Which stand out? Any particular mailing list or github issue discussions?

Comment by incognito124 1 day ago

What's next for Dagger? Any upcoming features?

Comment by shykes 1 day ago

Yes :)

We heard the feedback that we should pick a lane between CI and AI agents. We're refocusing on CI.

We're making Dagger faster, simpler to adopt.

We're also building a complete CI stack that is native to Dagger. The end-to-end integration allows us to do very magical things that traditional CI products cannot match.

We're looking for beta testers! Email me at solomon@dagger.io

Comment by linkage 1 day ago

Dagger has been a godsend in helping me cope with the unending misery that is GitHub Actions. A big thanks to you and the whole team at Dagger for making this possible.

Comment by shykes 1 day ago

Thank you for the kind words! I'd love to show you a demo of the new features we're working on, and get your thoughts. Want to DM me on the Dagger discord server? Or email me at solomon@dagger.io

Comment by monkchips 1 hour ago

This is the way

Comment by LikeAnElephant 1 day ago

Really happy to hear this. I was tinkering with Dagger soon before the pivot to AI, and assumed this would not be solving my CI woes anytime soon.

Focusing on CI would still enable the AI stuff too! But my use case is CI, no AI.

Comment by shykes 1 day ago

Exactly. The LLM primitives will remain - we were careful to never compromise the modular, lego-like design of the system. But now we have clarity on the primary use case.

Thanks for giving us another chance! Come say hi on our discord, if you ever want to ask questions or discuss your use case. We have a friendly group of CI nerds who love to help.

Comment by shepherdjerred 1 day ago

Wait Dagger and Docker are related?

Comment by shykes 1 day ago

Yes, I am the co-founder of Docker and also of Dagger. The other two co-founders of Dagger, Sam Alba and Andrea Luzzardi, were early employees of Docker.

The companies themselves are not related beyond that.

Comment by jiehong 1 day ago

What would you have done differently in retrospect?

Comment by shykes 1 day ago

What I would tell my younger self:

Only listen to your users and customers, ignore everyone else.

Don't hire an external CEO unless you're ready to leave. Hiring a CEO will not fix the loneliness of not having a co-founder.

Having haters is part of success. Accept it, and try to not let it get to you.

Don't partner with Red Hat. They are competitors even though they're not honest about it.

Not everyone hates you even though it may seem that way on hacker news and twitter. People actually appreciate your work and it will get better. Keep going.

Comment by logube 1 day ago

Did you know Solaris zones at all before creating Docker?

Comment by shykes 1 day ago

Yes, of course. I was also an avid user of vserver and openvz on Linux, back when they required patching the kernel, and lxc didn't exist yet.

When we open sourced Docker, we had considerable experience running openvz in production, as well as migrating to lxc - a miserable experience in the early days because the paint was still so fresh. To my knowledge we were the very first production deployment of managed databases and multi-tenant application servers on lxc, back in 2010.

It's a common misconception that Docker was a naive reinvention of, or a thin wrapper around, pre-existing technology like solaris zones or lxc. In reality that is not the case. Those technologies were always intended as alternative forms of virtualization: a new way to slice up a machine. Docker was the first to use container and copy-on-write tech for the purpose of packaging and distributing applications, rather than provisioning machines. Before Docker, nobody would ever consider running a linux container or solaris zone on top of a VM: that would be nonsensical because they were considered to be at the same layer of the stack. Sun invented a lot of things, but they did not invent everything :)

Comment by justsomehnguy 19 hours ago

'Bridge' was and still is an established network term for joining two broadcast domains into one. Why the hell did you decide to name your NAT'ed network layer a 'bridge'?

Comment by shykes 19 hours ago

As far as I know, Docker uses the term "bridge" in the standard way, to designate the use of Linux bridge interfaces (basically virtual ethernet switches) to interconnect containers. Containers connect to each other via a layer 2 bridge, not NAT.
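
You can see this on a stock Linux install (interface names and output will vary with your configuration):

    # the default "bridge" network is backed by a plain Linux bridge device (docker0)
    ip -br link show type bridge
    # each running container is attached to it through a veth pair
    bridge link show
    # and Docker's view of the same thing, including the subnet it manages
    docker network inspect bridge --format '{{json .IPAM.Config}}'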

Comment by justsomehnguy 6 hours ago

It makes as much sense as calling all the car roads in the world 'bridges'. They are interconnecting some areas via a physical connection, not some 5th dimension magik, after all.

It's even more egregious with 'ipvlan' and 'macvlan' drivers:

> ipvlan Connect containers to external VLANs.

Duh, that's a 'routed network' and nobody cares if it's on a separate vlan or not.

> macvlan Containers appear as devices on the host's network.

And this is a bridge!

Comment by Avamander 19 hours ago

Which reminds me that BuildKit does not have support for specifying a network, which is crazy given that you can configure the daemon to not attach one by default.

Comment by Baarrdd 1 day ago

[flagged]

Comment by shykes 1 day ago

Not interested, sorry.

Comment by Baarrdd 1 day ago

Sure thing! Thank you for the reply.

Comment by vivzkestrel 1 day ago

- well, time to announce DockerVM, a super fast (under 100ms boot time) competitor to Firecracker and gVisor, and try selling it to some of the cloud providers out there

- take advantage of the current agentic wave and announce a Docker Sandbox runner product that lets you run agents inside cloud sandboxes

Comment by pploug 1 day ago

Comment by vivzkestrel 1 day ago

I was not aware of this one, but I am talking about running it in the cloud, like making a direct competitor to Modal.

Comment by jiehong 1 day ago

Or maybe a CI runner service?

Comment by jaynamburi 8 hours ago

Docker started as a simple, opinionated UX around Linux containers and became a product company wrapping an ecosystem that moved on without it.

The original breakthrough wasn’t containers themselves (LXC already existed), but the combination of: a reproducible image format, layered filesystem semantics, a simple CLI, and a registry model that made distribution trivial. That unlocked a whole workflow shift.

What happened next is that Docker the company tried to own the platform, while the industry standardized around the parts that mattered. The runtime split into containerd/runc, orchestration moved to Kubernetes, image specs went to OCI, and “Docker” became more of a developer UX brand than a core infrastructure primitive.

Today Docker mostly means:

A local dev environment (Docker Desktop)

A build UX (Dockerfile, buildx)

A compatibility layer over containerd

A commercial product with licensing constraints

Meanwhile, production container infrastructure largely bypasses Docker entirely.

That’s not failure; it’s a common arc. Docker succeeded so well that it got standardized out of the critical path. What remains is a polished on-ramp for developers, not the foundation of the container ecosystem.

In other words: Docker won the mindshare, lost the control, and pivoted to selling convenience.

Comment by amelius 1 day ago

What I hate about docker and other such solutions is that I cannot install it as nonroot user, and that it keeps images between users in a database. I want to move things around using mv and cp, and not have another management layer that I need to be aware of and that can end up in an inconsistent state.

Comment by bmitch3020 1 day ago

> What I hate about docker and other such solutions is that I cannot install it as nonroot user

There's a rootless [0] option, but that does require some sysadmin setup on the host to make it possible. That's a Linux kernel limitation on all container tooling, not a limitation of Docker.
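
For reference, that setup is roughly the following (a Debian/Ubuntu-flavoured sketch; package names and paths differ per distro, see the linked docs):

    # one-time, as root: user namespace prerequisites
    sudo apt-get install -y uidmap dbus-user-session
    grep "^$USER:" /etc/subuid /etc/subgid     # needs a subordinate ID range

    # then, as the unprivileged user
    dockerd-rootless-setuptool.sh install
    systemctl --user enable --now docker
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
    docker run --rm hello-world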

> and that it keeps images between users in a database.

Not a traditional database, but content-addressable filesystem layers, commonly mounted as an overlay filesystem. Each of those layers is read-only and reusable between multiple images, allowing faster updates (when only a few layers change) and conserving disk space (when multiple images share a common base image).
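
You can see the sharing directly (image names here are just examples):

    # list the content-addressed layer digests behind an image
    docker image inspect alpine:3.20 --format '{{json .RootFS.Layers}}'
    # per-image disk usage, with shared vs unique size broken out
    docker system df -v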

> I want to move things around using mv and cp, and not have another management layer that I need to be aware of and that can end up in an inconsistent state.

You can mount volumes from the host into a container, though this is often an anti-pattern. What you don't want to do is modify the image layers directly, since they are shared between images. That introduces a lot of security issues.

[0]: https://docs.docker.com/engine/security/rootless/

Comment by Alupis 1 day ago

If I install podman on my Linux machine, it's rootless by default. No fiddling required of me.

Docker could do a much better job with the packaging of their software. Even major updates require manually uninstalling and reinstalling it... Podman just works.

Comment by WhyNotHugo 21 hours ago

I packaged docker-rootless for Arch (AUR) and Alpine (community) downstream long ago. I'm sure it's available for other distros too nowadays, although it wasn't at the time.

Docker could definitely do a much better job of making packaging easier. The docker-rootless just includes an sh script which has several of the files inline and writes them to the target location… assuming you're making a user-only installation (even though other portions of the setup require root intervention).

So packaging this requires reverse engineering how the installation process works, and extracting some of those inline files from the sh script, and figuring out where they'd be installed for a system-wide location.

Comment by scoodah 21 hours ago

While true, what the grandparent comment mentions still applies to podman:

> I cannot install it as nonroot user

You still need root privileges to install podman initially.

Comment by esafak 1 day ago

Comment by iberator 1 day ago

It's hilarious. Your 'solution' to use docker without root is to make some system changes as root and then use/build docker LOL.

Comment by embedding-shape 1 day ago

> is to make some system changes as root

Yeah, I mean, what do you expect, or what is the alternative? If you have a process that needs access to something only root typically can do, and the solution has been to give that process root so it can do its job, you usually need root to be able to give that process permission to do that thing without becoming root. Doesn't that make sense? What alternative are you suggesting?

Comment by IshKebab 1 day ago

Uhm no. Podman is a different product that is pretty much a drop-in replacement for Docker but lets you run as non-root.

You have to be root to set it up, but after that you don't need any special privileges. With Docker the only option is to basically give everyone root access.

It's true that it requires root for some setup though. Unclear if op was complaining about that.

Comment by cpuguy83 1 day ago

Docker can run rootless the same way podman does.

Comment by FireBeyond 1 day ago

Now. I was at Red Hat at the time, in the BU that built podman, and Docker was just largely refusing any of Red Hat's patches around rootless operation; this was one of the top 3, if not the top, motivations for Red Hat spinning up podman.

Comment by cpuguy83 1 day ago

You'd have to point me to those PRs; I don't recall anything specifically around rootless. I recall a lot of things like a `--systemd` flag to `docker run`, and just general things that reduce container security to make systemd fit in.

Comment by IshKebab 1 day ago

Ah the classic "it's a terrible idea until you implement it elsewhere and show us up".

Comment by kccqzy 1 day ago

> I cannot install it as nonroot user

Sure you cannot install docker or podman as a non-root user. But take your argument a bit further: what if the kernel is compiled without cgroups support? Then you will need root to replace the kernel and reboot. The root user can do arbitrarily many things to prevent you from installing any number of software. The root user can prevent you from using arbitrary already installed software. The root user can even prevent you from logging in.

It is astounding to me that someone would complain that a non-root user cannot install software. A much more reasonable complaint is that a non-root user can become root while using docker. This complaint has been resolved by podman.

Comment by oarsinsync 1 day ago

> It is astounding to me that someone would complain that a non-root user cannot install software.

Depends on what you mean by "install software".

If your definition is "put an executable in a directory that is in every other user's standard $PATH", then yes, this is an absurd complaint. Of course only root should be able to do this.

If your definition is "make an executable available to run as my user", then no, this is not absurd. You absolutely should not need root to be able to run software that doesn't require root privileges. If the software requires root, it's either doing something privileged, or it's doing it wrong.

Comment by kccqzy 1 day ago

I don’t think you understood my comment.

> You absolutely should not need root to be able to run software that doesn't require root privileges.

But root can approve or disapprove of you running that software. Have you heard of SELinux or AppArmor? The root user can easily and simply prevent you from running an executable even as your own user.

Malware can run as your own user and exfiltrate files you have access to. The malware does not need root privileges. Should root have the capability to prevent the malware from being installed? Regardless of what your definition of “install” is, the answer is unequivocally yes.

Comment by tucnak 1 day ago

If you're not into rootless Docker, but still want to improve sandboxing capabilities, consider alternative runtimes such as runsc (also known as gVisor)

https://gvisor.dev/docs/
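
If anyone wants to try it with the regular Docker daemon, the wiring is roughly this (assuming runsc is already installed on the host):

    # let runsc register itself as a Docker runtime (edits /etc/docker/daemon.json)
    sudo runsc install
    sudo systemctl restart docker

    # opt a single container into the gVisor sandbox
    docker run --rm --runtime=runsc alpine uname -a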

Comment by outcoldman 1 day ago

If somebody missed it, apple/container is a good replacement for Docker for Mac on macOS. I have been using it for the last 6 months; there are issues, but the team is actively developing it.

https://github.com/apple/container

Comment by cpuguy83 1 day ago

I haven't personally used it, but containerd also has "nerdbox": http://github.com/containerd/nerdbox

Comment by embedding-shape 1 day ago

Does that let you build images on a macOS host that work on Windows and Linux too? It doesn't seem to talk about what platforms the images support, only where you can run containers.

Comment by outcoldman 1 day ago

Not sure about Windows, but yes to Linux. It runs Linux containers (not Darwin), and can use Rosetta. I build multi-arch images (arm64/amd64). It uses BuildKit, the same builder Docker uses, so I am sure you can build Windows containers with it as well.

Just a note: I work for an org that sells enterprise software shipped as container images, published on Docker Hub and Red Hat. No issues migrating to apple/container.

Comment by pawelduda 1 day ago

How is the performance overhead of this compared to docker on MacOS?

Comment by outcoldman 1 day ago

The only big noticeable issue for me was building large enterprise images (like Splunk). This issue was fixed [1]. Other than that I have not seen any issues with IO or performance. Running Splunk/OpenSearch/ElasticSearch, some performance tests, enterprise software written in Go (building for arm64/amd64). No issues at all.

1. https://github.com/apple/container/issues/68

Comment by __MatrixMan__ 1 day ago

I used to be very enthusiastic about docker compose, but I've been playing around with nix + process-compose lately and it's pretty great. I can have k3s and tilt in there only when it's necessary--which it's usually not.

Comment by chuckadams 1 day ago

Nix is wonderful for reproducible and declarative infrastructure, but how do you manage multiple server instances with it? I have a handful of projects active at any time, and am currently running four web servers, three mysql instances, two postgres, and a partridge in a pear tree. Should I run Nix in Docker, Docker from Nix, or is there a nix-only solution for this?

Comment by __MatrixMan__ 22 hours ago

I couldn't speak to separate physical machines, but I run several "servers" as part of my dev environment.

You'd have to ensure that their ports and data directories don't collide, but I don't think you'll have a problem having "process-compose up" start multiple separate mysql, postgres, or webserver instances.

I just dedicate a terminal pane to it so I can arrow around and see the logs and health status of my databases (plus things like Prometheus and Grafana; I like to be able to nuke the cluster and watch everything flatline, rather than have the telemetry itself die when k8s goes away).

Both mysql and postgres are included in https://github.com/juspay/services-flake which you might find interesting.
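For a flavor of what that looks like, a minimal process-compose.yaml sketch (the process names and commands are made up; the depends_on/readiness_probe keys follow the process-compose docs):

    processes:
      db:
        command: "postgres -D ./data/pg -k /tmp"
        readiness_probe:
          exec:
            command: "pg_isready -h localhost"
      web:
        command: "python -m http.server 8080"
        depends_on:
          db:
            condition: process_healthy

Then `process-compose up` starts the set, and the TUI shows per-process logs and health.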

Comment by wkrp 1 day ago

There are tools such as deploy-rs, colmena, and morph that let you deploy NixOS configs using nix. I can't speak to how good they are personally; I use Ansible to push my Nix configs.

Comment by gf000 1 day ago

I may misunderstand your problem, but I just have a configuration repository for various "hosts". There are a couple of settings I share between them, and then just specify the differences.

"Deploying" one is as simple as `nixos-rebuild switch --flake .#hostName`

Comment by chuckadams 1 day ago

These are all dev environments running at the same time. I wasn't sure if Nix had some kind of port mapping or proxy config for this sort of thing. I'm still partial to having containers as self-contained build artifacts, I just like to have options as dev environments go, and "Docker from Nix" looks like the best option so far. But it's a vast ecosystem, and there's plenty I might be missing.

Comment by pxc 22 hours ago

You just plug Nix into a service manager that you have Nix bring along for you. Many years ago, I did this for a proof-of-concept at work with supervisord[1] and flake.nix. Devenv[2] builds in[3] support for process-compose[4], which GP mentioned. A few years ago, one long-time Nixer even created[5] a framework, nix-processmgmt[6] that abstracts over various service managers including supervisord and s6[7], which can both be used in a self-contained way regardless of the init system on the host.

There are a ton of other open-source process supervisors you can use to manage long-lived processes in a portable way, too, notably Foreman[8] and various clones written in languages other than JS, and GNU Shepherd[9]. In the course of writing this post, I discovered one called dinit[10] which looks sort of similar to s6 and the GNU Shepherd in that it supports both pure usermode operation as well as functioning as an OS's init system. Anyway, all of 'em are in Nixpkgs, so you can pick them up and use them without any packaging work, too.

Service orchestration and containers are basically orthogonal concerns. Before Docker was born, there were already plenty of portable tools for standalone "process supervision", "service management", whatever you wanna call it. So it is after Docker, as well.

If I needed this for one of my dev environments I would take a look at process-compose to decide if it's acceptable to me. If it isn't, then after surveying the contemporary landscape of usermode service managers, I'd then write a devenv module that generates configs for it, and use that.

> Should I run Nix in Docker, Docker from Nix, or is there a nix-only solution for this?

I'd do this in a "Nix-only" way if possible, but if it's convenient for you to run a service via Docker (or Podman or any other container runtime), you can still do that.

If you can safely assume that all of your devs have it available, you can ship the client (`docker` or `podman` CLI or whatever) as part of your Nix environment, then have your process manager launch it via that command line interface. I'd avoid running Nix from within Docker for the purposes of development environments.

--

1: https://supervisord.org/

2: https://devenv.sh/

3: https://devenv.sh/supported-process-managers/process-compose...

4: https://f1bonacc1.github.io/process-compose/

5: https://sandervanderburg.blogspot.com/2020/02/a-declarative-...

6: https://github.com/svanderburg/nix-processmgmt

7: https://skarnet.org/software/s6/

8: https://github.com/ddollar/foreman

9: https://shepherding.services/

10: https://davmac.org/projects/dinit/

Comment by pxc 13 hours ago

(Actually, now that I think of it, that experiment with supervisor was pre-flakes, so probably shell.nix.)

Comment by tuananh 1 day ago

cool, i have to check out process-compose.

Comment by __MatrixMan__ 23 hours ago

It's pretty much just docker compose, but you don't have to forward ports or map volumes because the processes are not running in containers. The TUI is pretty nice also. If docker compose has an equivalent I'm not aware of it.

It's especially nice for use with agents because the process-compose commands can be used to understand what's running, what's pending, what's failing, etc. Of course there's always `ps aux | grep`, but that's full of noise from the rest of your system and it doesn't provide any structure for understanding: "foo is not running because the readiness check for bar is failing".

Containers have their place, but I don't think it's everywhere.

Comment by Havoc 1 day ago

Reminds me a bit of stuff like curl - the importance of it and the monetization opportunities are out of sync. Tricky

Comment by justonceokay 1 day ago

I’m currently building a micro transaction version of `ls`

Comment by yomismoaqui 1 day ago

Comment by Havoc 1 day ago

Not a charity - they’re going to want to see a viable eventual monetization path too

Comment by Macha 1 day ago

Pretty sure that was meant to be a jab (mostly at YC) rather than a serious suggestion

Comment by Havoc 1 day ago

Ah right. Very plausible

Comment by Loeffelmann 1 day ago

An AI version of ls and fzf bringing your file system to the AI age

Comment by bmitch3020 1 day ago

Another year, another story written about the demise of Docker. This has been happening since before Kubernetes took off. My own take:

Docker had a choice of markets to go after; the enterprise market was being dominated by the hyperscalers pushing their own Kubernetes offerings, so they pivoted to focus on the developer tooling market. This is a hard market to make work, particularly since developers are famous for not paying for tooling, but they appear to be making a profit.

With Docker Hub, it's always been a challenge to limit how much it costs to run. And with more stuff being thrown into ever larger images, I don't want to see that monthly bill. The limits they added hurt, but they also made a lot of people realize they should have been running their own mirror on-prem, if only to better handle an upstream outage when us-east-1 has a bad day.

Everything else has been pushing into each of the various popular development markets, from AI, to offloading builds to the cloud, to Hardened Images. They release things for free when they need to keep up with the competition, and charge when enterprises will pay for it.

They've shifted their focus a lot over the years. My fear would be if they stayed stagnant, trying to extract rents without pushing into new offerings. So I'm not worried they'll fail this year, just like I wasn't worried any of the previous years when similar posts were made.

Comment by jesse_dot_id 23 hours ago

We use swarm in production and love it. K8s is extreme overkill for a high percentage of the shops using it, in my estimation.

Comment by shermantanktop 1 day ago

New cool tech is almost never a moat.

It will get a company started but if the tech has any success, that success is always replicable (even if the exact tech isn’t). IP protection is worthless and beside the point.

The only moat is the creativity of a company’s core staff when they spend a lot of time on valuable problems. Each thing they produce will grow, live, and die, but if the company has no pipeline it is doomed.

And VCs know this, which is why they want to pump startups up, and then cash out before they flop, even while founders talk about all the great things they can do next.

Naming your company after your one successful product is a pretty good sign of a limited lifespan.

Comment by OptionOfT 1 day ago

I just want to disable "Ask Gordon" in the sidebar. I don't want to see it. My brain works in weird ways. Whenever I see a name for the first time I attach that person to it.

Gordon is the character from Half Life.

Docker is a piece of software. Don't anthropomorphize it.

Comment by gordonhart 1 day ago

Eventually there will be enough anthropomorphized pieces of software for everybody to have their "Alexa" moment. Mine came last year (thanks, Docker).

Comment by Joel_Mckay 1 day ago

Gordon was the office pet tortoise if I recall, and might still be around given they may live a very long time. Thus it became the default user in parts of their software. =3

Comment by bmitch3020 1 day ago

Gordon unfortunately passed away in 2023: https://x.com/solomonstre/status/1637537983988629504

Comment by Joel_Mckay 1 day ago

In a way, it is fun that the memory still affects design choices. =3

Comment by whinvik 1 day ago

Sorry, off-topic question, but has Docker come up with an easy-to-use dev solution? I always end up using Dev Containers: they solve the sandboxed, ready-to-use dev env.

But the actual experience with developing on VSCode with Dev Containers is not great. It's laggy and slow.

Comment by eYrKEC2 1 day ago

My one experience with dev containers put me off of dev containers... but standard `docker compose` is just great for me.

I worked at a company where we were trying to test code with our product and, for a time, everyone on the team was given a mandate to go out and find X number of open source projects to test against, every week.

Independently, every member of the (small) team settled on only trying to test repos where you could do:

        git clone repo && cd repo && docker compose up
For everything else, just booting up the environment in a reasonable amount of time was a nightmare.
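For context, the kind of compose.yaml that makes that one-liner possible is tiny (service names, ports, and images here are made up):

    services:
      app:
        build: .
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example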

Comment by mfro 1 day ago

Devcontainers are great for me on windows and macos. What stack are you using?

Comment by whinvik 22 hours ago

I am on a Mac but I develop remotely on a VM; the LSP is sometimes so slow I want to shut it down.

Comment by Avamander 19 hours ago

I've had no lag issues with IntelliJ and Devcontainers on macOS. Are you using an Intel Mac or virtualizing something?

Comment by wilsonpa 1 day ago

Really? I work across multiple vscode projects (locally), some use dev-containers and others don't. I have never noticed any difference in experience across the two.

I have also used them remotely (ssh and using tailscale) and noticed a little lag, but nothing really distracting.

Comment by amonith 1 day ago

Most likely a Windows or MacOS user, where docker runs in a linux VM. Optimized as much as possible and lightweight, but still a VM.

Comment by okanat 1 day ago

No, on Windows it is very quick too. On WSL2, compiling Rust programs is almost as fast as on Linux on bare metal. However, the files need to live inside the Linux filesystem. Sharing with Windows drives actually compiles more slowly than native Windows.

Comment by pjmlp 1 day ago

You can use dev drives instead, I guess.

Comment by okanat 1 day ago

If you are building natively, yes. However the original comment is about Dev Containers which runs under WSL2.

If you open a native Windows folder in VSCode and activate the Dev Container, it will use the special drvfs mounts that communicate via the Plan 9 protocol with the host Windows OS to access native Windows files from the Docker distro. Since it is a network layer across two kernels, it is slow as hell.

Comment by pjmlp 1 day ago

Windows is a bit of a "yes, but" situation.

First of all, it supports containers natively: Windows' own, and Linux ones via WSL.

Secondly, because Microsoft did not want to invent their own thing, the OS APIs are exposed the same way the Docker daemon would expect them.

Finally, with the goal of improving Kubernetes support and to track the ongoing changes to container runtimes in the industry, it nowadays exposes several touch points.

https://learn.microsoft.com/en-us/virtualization/windowscont...

Comment by 0xbadcafebee 1 day ago

> For developers, this doesn’t change much. Docker containers will continue to work, and the open source nature of Docker means the technology will persist regardless of what happens to the company. But it’s worth watching how Docker Inc’s search for identity plays out - it could affect the ecosystem of tools and services built around containers.

Things will actually change quite a bit. First of all, millions of people depend on Docker Desktop, and Podman Desktop is (as everything from RedHat is) a poor replacement for it. And the Docker CLI and daemon power a huge amount of container technology; Podman is, again, quite a poor replacement. If these solutions go away, a large amount of business and technology is gonna get left in the lurch.

Second, most of the containerized world depends on Docker Hub. If that went away, actually a huge swath of businesses would just go hard-down, with no easy fix. I know a million HNers will be crying out about the evils of "centralization", but actually the issue is it's corporate-run rather than an open body. The architecture should have had mirrors built-in from the start, but even without mirrors, the company and all its investment and support going away is the bigger rug-pull.

The industry and ecosystem have this terribly human habit of rushing at the path-of-least-resistance. If we don't plan an intelligent, robust migration strategy away from Docker, we'll end up relying on something worse.

Comment by forty 1 day ago

Why do you say podman is a poor replacement? It has been consistently a better replacement for me on Linux, with easy rootless, daemonless operation, quadlets, etc. And at work, where I have to use macOS, it works just as well.

Comment by starkparker 1 day ago

The interfaces, CLI and Podman Desktop, are still not at parity. Podman contributors will be the first to tell you this.

That's not to say they aren't effective, or even good, at least for the CLI. They're just still catching up. It's not and shouldn't be a surprise considering the head start.

Comment by Spivak 1 day ago

Yeah, people are sleeping on Podman, which is genuinely leading the space now that docker-engine is all but in maintenance mode.

Quadlets are amazing and greatly simplify the deployment and management of containers.

The systemd integration is so good because you have this battle tested process manager with a gazillion features and you can use them with your containers for free.

Podman can run pods, hence the name, which is an abstraction that k8s has proven is useful but docker completely lacks.

Podman pushing k8s manifests as an (imho better) compose with podman play is refreshing. And it can be dropped in with Quadlets too.

Podman can generate your k8s manifests from your running containers. Get everything running how you like and save.

buildah frees you from the Dockerfile and lets you build containers completely rootlessly.

Comment by godzillabrennus 1 day ago

I switched to Podman on Windows and found it less laggy, and it works fine for local development. I'm sure I'm missing some features, but as Docker continues to struggle to generate revenue, the open-source option will be important to an increasingly large part of the industry.

FYI: if I were Docker, I'd stand up some bare-metal hosting (i.e., a Docker Cloud) designed to make it easier for novice developers to take containers and turn them into web applications, with a product similar to Supabase built around that cloud so novice developers can quickly prototype and launch apps without learning how to do deployments in more sophisticated clouds. Supabase and AI vibe coders pair well, but the hole in the market is vibe coders who want to launch a vibe-coded web app but don't know how to deploy containers to the cloud without a steep learning curve. That keeps many vibe coders trapped in AIO vibe-coding platforms like Lovable and AI Studio.

Comment by embedding-shape 1 day ago

> but the hole in the market is vibe coders who want to launch a web app vibe coded but don't know how to deploy containers to the cloud without a steep learning curve

Is it really a hole? I'm not the target user, but I keep coming across "Build & deploy your own platform/service/application with VibeCodingLikeThereIsNoTomorrow" and similar, maybe new one every week or so.

Comment by godzillabrennus 1 day ago

Seems like it's a hole in the market if new services keep cropping up; if there weren't a hole, the established clouds would already have this. I don't have to think if I want a virtual machine booted with Ubuntu; I can do that in any cloud. How many have vibe-coding support to take containers that work locally and launch them in a cloud so they are accessible as a website? How many of those have a build process that does security checks, helps patch the code, and automates building browser tests to verify the functionality keeps working (or kicks it back to the coding agent to fix)? Basically, the last 10% of vibe coding a web app locally that isn't automated. This is a big opportunity for a semi-established vendor like Docker; a startup would need users and capital (for bare metal) to tackle it, two things Docker already has at its disposal.

Comment by embedding-shape 1 day ago

Those seem like such basic, table-stakes features for such a platform that I assumed they all did something like that already. Is that not the case? Is it vibe coders who aren't programmers who are building these services, or what's going on?

Comment by godzillabrennus 23 hours ago

Yes, vibe coders are prompting IDEs like Windsurf/Antigravity to build web apps, and the IDE sets up the local environment, but getting that from local to the web is still a pain point. It's a hole in the market with potential for a firm like Docker that needs to monetize without upsetting its community. Remember, vibe coders are more enthusiasts than professionals. Check out /r/vibecoding on Reddit for an idea of the general market that would use something like this.

Comment by skwashd 1 day ago

A few times I've wondered, where would Docker Inc be today if Microsoft acquired them back in 2017?

Early 2017 was peak Docker and Docker Inc. Those were the days. Container hype was everywhere. Before moby. Before all the pivots.

Microsoft was embracing open source and the cloud. They were acquiring dev tools.

It was a missed opportunity for both companies.

Comment by hamdingers 1 day ago

They probably would've kept autobuilds free for open source and I wouldn't have switched to GHCR and Github Actions for all my projects. Seems Microsoft got my "business" anyway.

Comment by eigencoder 1 day ago

I don't want Microsoft's fingers all over docker -- if anything that would have accelerated the rise of e.g. podman

Comment by 0hw0t 23 hours ago

It's a config DSL for a config DSL (OS files). Docker isn't much different from an AI wrapper. What was this mighty corporate machine supposed to become shipping config scripts?

The team I was on before Docker got popular just used the OG containers: user accounts, with namespaces and cgroups set up per user.

Docker perfectly represents the issue with the software industry: it is software that duplicates existing software, chasing "line go up" rather than actual utility. No net new utility, just different semantics for doing sysadmin work.

Developers did not want to learn sysadmin work, and instead learned a meta, Docker-driven flavor of sysadmin anyway.

Comment by gregoryl 1 day ago

  For a while, Docker seemed to focus on developer experience.
ahh yes, docker desktop, where the error messages are "something went wrong", and the primary debugging step is to wipe it, uninstall, and reinstall.

Comment by reedf1 1 day ago

It is honestly incredible that such an important part of the Windows dev process is nearly unusable. It is easily the most fickle and opaque bit of software that I am required to depend upon.

Comment by hu3 1 day ago

Yep. I used to have a ton of problems with Docker on Windows.

It has been a year without problems since I enabled the WSL2 engine for Docker.

Honestly, they should make the WSL2 Docker engine mandatory, because otherwise things barely work.

Comment by bonesss 1 day ago

Docker on Windows issues, back before WSL had matured enough, gave a pretty compelling argument for doing Windows development on OS X inside a VM.

Comment by tuananh 1 day ago

at work, i opted for remote development workspace because of this problem. Windows & Docker ain't meant to be together :(

Comment by throw20251220 1 day ago

Windows is the problem, not Docker. Just try wsl2 and you’ll see…

Comment by breakingcups 1 day ago

That's a very naive take. The issue is Docker Desktop, a buggy mess. I have plenty of well-functioning, complex Windows applications with detailed troubleshooting utilities.

Comment by FireBeyond 1 day ago

Yup. How many years did I go where the most frequently pushed button in the Docker Desktop UI was "reset my installation"?

Comment by leetrout 1 day ago

> Docker created a standard so successful that it became infrastructure, and infrastructure is hard to monetize

Open infrastructure is hard to monetize. Old school robotics players have a playbook for this. You may or may not agree DBs are infra but Oracle has done well by capitalistic standards.

The reality is that in our economy exploitation is a basic requirement. Nothing says a company providing porcelain for Linux kernel capabilities has a right to exist. What has turned into OCI is great. Docker Desktop lost on Mac to OrbStack and friends (but I guess they have caught back up?). The article does make it clear they have tried hard to find a place to extract rent, and it's probably making enough for a 10-100 person company to be very comfortable, but 500-1000 people seems very overgrown at this point.

Really, they should not have given up on Swarm just to come back to it. Kubernetes is overkill for so many of the people using it for a convenient deployment story.

Comment by torginus 1 day ago

Imo the problem with SaaS products is that their revenue expectations are priced according to the market they serve, not the money it takes to recreate them.

If I wrote the best word processor in the world, I could probably sell it for a decent sum to quite a few people.

However if I expressed my revenue expectations as a percentage of revenue from the world's bestselling novels, I would be very quickly disappointed.

Comment by physicsguy 1 day ago

This is a great way of framing it that I'd never thought of before.

I worked in engineering software for a long time and because of who we sell to, there's always been a very hard cost-benefit analysis for customers of SaaS in that space. If customers didn't see a saving equal to more than the cost of the software in Y1 they could and would typically cancel.

Comment by ragall 23 hours ago

That's because in the US it's common to see pricing based on "value", rather than based on costs plus a reasonable profit margin. This is one big reason why US products don't have much success in the rest of the world unless they're truly irreplaceable like the hyperscalers. Most of the world considers value-pricing as basically immoral.

Comment by fragmede 1 day ago

> Open infrastructure is hard to monetize.

But not impossible. Terraform seems to have paid its creator quite well.

Comment by tuananh 1 day ago

I think Hashicorp got out just in time. They have been declining in recent years.

Comment by b40d-48b2-979e 1 day ago

They are stagnant and their dev experience is very poor.

Comment by chuckadams 1 day ago

They're IBM now, I think they just consider you and me beneath their notice. I guess some things never change.

Comment by echelon 1 day ago

The "Fair Source" [1] and "Fair Code" [2] licenses are sustainable and user-friendly.

Imagine if Docker the company could charge AWS and Google for their use of their technology.

Imagine if Redis, Elastic, and so many other technologies could.

Modern database companies will typically dual license their work so they don't have their lunch eaten. I've done it for some of my own work [3].

You want your customers to have freedom, but you don't want massive companies coming in and ripping you off. You'd also like to provide an "easy path" for payments that sustains the engineering, but without requiring your users to be bound to you.

"OSI-approved" Open Source is an industry co-opt of labor. Amazon and Google benefit immensely with an ecosystem of things they can offer, but they in turn give you zero of the AWS/GCP code base.

Hyperscalers are miles of crust around an open source interior. They charge and make millions off of the free labor of open source.

I think we need a new type of license that requires that the companies using the license must make their entire operational codebases available.

[1] https://fair.io/licenses/

[2] https://faircode.io/

[3] https://github.com/storytold/artcraft/blob/main/LICENSE.md

Comment by WJW 1 day ago

Charging companies for software is as old as computers itself. We don't have to imagine.

Comment by echelon 1 day ago

The idea of not compensating for software took hold in the 2000s, both with engineers and consumers (remember when users scoffed at 99 cent apps?)

Big tech companies saw this as an opportunity to build proprietary value-add systems around open source, but not make those systems in turn open. As they scaled, it became impossible to compete. You're not paying Redis for Redis. You're paying AWS or Google.

Comment by vladms 1 day ago

> As they scaled, it became impossible to compete.

To compete at offering infrastructure, maybe, but what I would like is more capability to build solutions.

And I think that today there are many more open-source technologies that one can deploy with modest effort, so I see progress, even if some big players take advantage of people who don't want to, or are not able to, make even that modest effort.

Comment by mschuster91 1 day ago

> The idea of not compensating for software took hold in the 2000s, both with engineers and consumers (remember when users scoffed at 99 cent apps?)

Part of that was that the platform churn costs were a new thing for developers that needed to be priced in now. In the "old world" aka Windows, application developers didn't need to do much, if any at all, work to keep their applications working with new OS versions. DOS applications could be run up until and including Windows 7 x32 - that meant in the most ridiculous case about 42 years of life time (first release of DOS was 1981, end of life for Win 7 ESU was 2023). As an application developer, you could get away with selling a piece of software once and then just provide bug fixes if needed, and it's reasonably possible to maintain extremely old software even on modern Windows - AFAIK (but never tried it), Visual Basic 6 (!!!) still runs on Windows 11 and can be used to compile old software.

In contrast to this, with both major mobile platforms (Android and iOS) as an app developer you have to deal with constant churn that the OS developer forces upon you, and application stores make it impossible to even release bugfixes for platforms older than the OS developer deems worthy to support - for Google Play Store, that's Android 12 (released in 2021) [1], for iOS the situation is a bit better but still a PITA [2].

[1] https://developer.android.com/google/play/requirements/targe...

[2] https://news.ycombinator.com/item?id=44222561

Comment by c0balt 1 day ago

> Imagine if Docker the company could charge AWS and Google for their use of their technology.

An "issue" is that Docker these days mostly builds on open standards and has well documented APIs. Open infrastructure like this has only limited vendor lock-in.

Building a Docker-daemon-compatible service is not trivial, but it was already mostly done with podman. It is compatible to the extent that the official docker CLI mostly works with it out of the box (the basic Docker HTTP API endpoints having been implemented too). AWS/GCP could almost certainly afford to build a "podman" of their own, instead of licensing Docker.

This is not meant to defend the hyperscalers themselves, but it should maybe put approaches like this in perspective. Docker got large partly because it was free, and monetizing after that is hard (see also Elasticsearch/Redis and the immediate forks).

Comment by ragall 23 hours ago

> Imagine if Docker the company could charge AWS and Google for their use of their technology.

The technology on which Docker is based, Linux containers, was developed by Google engineers for Borg, and later Docker adopted it when it pivoted away from LXC (an IBM technology).

Comment by dist-epoch 1 day ago

> Imagine if Docker the company could charge AWS and Google for their use of their technology.

I can't imagine. Tell me one software project used in AWS/GCP that Amazon/Google pay for. Not donations (like for Linux), but PAID for.

Docker started as a wrapper over LXC, Amazon has enough developers to implement that in a month.

Comment by ragall 23 hours ago

It's a good thing that the commons are cheap. Imagine where we would be if all electrical devices were still covered by patents related to electricity, all owned by one company.

Comment by ynx 1 day ago

> Docker’s journey reads like a startup trying to find product-market fit, except Docker already had product-market fit - they created the containerization standard that everyone uses. The problem is that Docker the technology became so successful that Docker the company struggled to monetize it. When your core product becomes commoditized and open source, you need to find new ways to add value.

I would argue the reverse: that Docker's value was itself the product-market fit. Docker the technology was commoditized and open-source almost from its genesis, because its technology had been built by Borg engineers at Google. It provided marginally more than ergonomics, but ergonomics was all it needed - the missing link between theory and practice.

Comment by Conan_Kudo 1 day ago

Well, technically the technology was originally built by IBM folks, as that's where LXC came from. But otherwise yes, your point makes sense.

Comment by shykes 22 hours ago

Google developed linux cgroups. IBM developed linux namespaces. Docker developed a completely new application runtime and delivery system, built on cgroups, namespaces, aufs, and tar. This required lots of original design and engineering work. Prior to Docker, there was no runtime contract for distinguishing the portable application bits from the non-portable host-specific bits. You just got a machine, and then had to provision, configure and templatize it - then upload application bits into it yourself.

All three companies contributed significantly to the modern container stack. As the co-founder of Docker, and someone who spent 10 years toiling away at container technology before it finally became cool, I wish people had more appreciation for the amount of engineering and design work that went into that. Google and IBM contributed the primitives that made Docker possible. But Docker made genuine contributions of its own.

Comment by zoobab 1 day ago

Who wants to pay for chroot?

Comment by c0n5pir4cy 1 day ago

Ah - the old magic.

There is a lot more to Docker than a simple chroot though, with FreeBSD jails being a stepping stone along the way. Its real innovation, and why it won over alternatives, was the tooling and infrastructure around the containers - particularly distributing them.

Comment by bmitch3020 1 day ago

You're missing image distribution, namespaces (networking, pids, mount, users), seccomp (to limit root powers), cgroups (to limit cpu and memory usage), and so much more. There's also Docker Hub with the official images they maintain. And the Desktop tooling makes an embedded Linux VM much easier to work with than spinning up your own VM, copying files around, and forwarding networking ports.
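Most of that is visible from a single docker run; a small illustration, assuming a cgroup v2 host with Docker's default private cgroup namespace:

    # cgroups cap memory/CPU; namespaces and the default seccomp profile are applied automatically
    docker run --rm --memory=512m --cpus=1 alpine cat /sys/fs/cgroup/memory.max
    # expected to print 536870912 (512 MiB) rather than the host's total memory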

Comment by lifetimerubyist 1 day ago

My favorite thing about Docker is that it spawned Podman.

Comment by jrm4 1 day ago

I think this deserves a reframing: Docker is perhaps the greatest success story involving a massively invested tech company.

We got an amazing, durable, essential piece of software from someone investing billions of dollars.

Now, the fact that they didn't get their money back, well, who cares? Not me, it wasn't my money.

Sucks for them, maybe -- but that's far better than enshittification for everyone.

Comment by vegabook 20 hours ago

It's become what happens when others learn your simple card trick.

Comment by koe123 1 day ago

Honestly I reach for podman or `nix develop` any chance I get. What is the edge that docker provides these days?

Comment by szszrk 1 day ago

How do you manage your containers in podman declaratively?

I tried to substitute docker-compose with Podman and Quadlets on a test server the other day, but was shocked at how badly described the overall concept is. Most materials I found skimmed over the ability to run it as root or as a user and how different the configuration is in each case, and repeated the same 4-6 command mantra.

I spent a few hours on it and just... failed to run a single container. systemctl never noticed my quadlet definitions, even though podman considered my .container file registered.

A bit... frustrating. I expected smoother sailing.

Comment by Fabricio20 1 day ago

This has also been my experience; I'm used to using compose everywhere and I like the declarative file. I tried podman and found the documentation around the concept so scarce, and all about running things as non-root instead of telling me how my docker-compose becomes podman-compose. Still using Docker everywhere because of that. Docker swarm mode has also worked wonders as an evolution of my compose files.

Comment by szszrk 1 day ago

I know podman-compose and have had some homelab services running on it for a few years, but honestly found multiple ones that failed. It's far from a drop-in replacement.

Comment by jabl 1 day ago

The podman kube support? It provides similar functionality to docker-compose, using a YAML file that is a subset of the Kubernetes pod definition syntax.

Then you can just create a few-line systemd unit definition, and it integrates as a normal systemd unit, with logs visible via journalctl etc.
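A minimal sketch of that flow (pod and image names are arbitrary; `podman kube play` / `podman kube down` are the documented entry points):

    # pod.yaml -- a small subset of the Kubernetes Pod syntax
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: nginx
          image: docker.io/library/nginx:latest
          ports:
            - containerPort: 80
              hostPort: 8080

    podman kube play pod.yaml    # start the pod
    podman kube down pod.yaml    # tear it down again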

Comment by unitexe 1 day ago

This seems to be the way.

Short of weeding through the docs, I found the "Play with Kube using Podman" talk on DevConfs YouTube channel helpful.

Comment by szszrk 1 day ago

I will be honest: that is even more confusing :)

> Note: The kube commands in podman focus on simplifying the process of moving containers from podman to a Kubernetes environment and from a Kubernetes environment back to podman.

I'll give it a try, but I'm starting to understand why there is so little use of podman among amateurs.

Comment by unitexe 1 day ago

Personally, I am not interested in kubernetes, just podman for single-node use case. What the kube YAML does for this use case is provide a way to declare a multi-container application.

The podman documentation pages I have found most helpful for this use case are podman-kube-generate (generate kube YAML from an already running pod), podman-kube-play (run the kube manually) and podman-systemd.unit (run the kube as a service).

Edit: I should also mention that there are pod units (which don't require the use of kube YAML), but I skipped over them because they do not support podman's auto-update feature.

Comment by supernes 1 day ago

Podman supports Compose files, so there's that. I've only glanced at Quadlets, and I agree they seem very esoteric, especially if you're not well versed in systemd service definitions.

Comment by bootsmann 1 day ago

Yeah I think Quadlet just has bad docs. They document the whole API but iirc there is no: ok this is the hello world for running cowsay as a systemd unit
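For what it's worth, a hello-world-sized sketch of a rootless quadlet, with the unit keys taken from the podman-systemd.unit docs (the image and port are arbitrary):

    # ~/.config/containers/systemd/hello.container
    [Unit]
    Description=Hello quadlet

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

followed by `systemctl --user daemon-reload` and `systemctl --user start hello.service`.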

Comment by exceptione 1 day ago

Quadlets fully depend on systemd doing its work. So, assuming you are running rootless, if you change your quadlets you will need

  systemctl --user daemon-reload

to let systemd ingest the changes. And even if you have configured your container to start on boot, you still have to start it by hand during development, as you typically won't reboot. If you have multiple containers, it might be easiest to have them in one pod, so you only need to start the pod.

I agree that the documentation needs a good tutorial showing the complete concept as a starting point. There are multiple ones on the internet, though.

Comment by szszrk 1 day ago

yeah, that's exactly what every tutorial says. And I know systemd more or less, daemon-reload is no stranger to me.

That was not sufficient, for either the global or the user setup.

Comment by stryan 22 hours ago

The biggest problem with the `systemctl daemon-reload (--user)` workflow to register quadlets with systemd is it hides any generation errors in journald instead of giving immediate feedback. It's a real pain in the ass, and I say this from a place of love.

Quadlets are just a systemd generator: all `daemon-reload` is doing is running `podman-system-generator`, which looks at the Quadlet files and turns them into systemd unit files with a big honking `podman run --rm --blah container:tag` as the `ExecStart` property. There's nothing else to it, no daemons or whatnot.

If you ever feel like giving it another shot, check journalctl to see if there are any generator errors. Or run the generator directly: on my OpenSUSE box it's at `/usr/lib/systemd/system-generators/podman-system-generator`. Run it with `--dry-run` to just output to stdout and `--user` to get user quadlets.

Comment by jillesvangurp 1 day ago

> What is the edge that docker provides these days?

hub.docker.com mainly, the centralized docker registry. A bit like GitHub, there are plenty of alternatives, but that's where you find most people pushing their containers.

And then there is Docker Desktop which a lot of users seem to like.

I switched to colima myself recently (on a mac). I think people overthink all this stuff a bit. Colima doesn't have a UI; but that's fine for me. I mainly use it to run stuff from the command line or from scripts. I wasn't using the Docker Desktop UI very much either.

Colima is a simple wrapper around Lima, which is a simple wrapper around QEMU or Apple's virtualization layer. The resulting VM runs a simple Linux distribution with some file mounts and network tunneling to give you a similar experience to Docker Desktop, which does exactly the same thing in the end, of course.
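A minimal sketch of that setup, assuming Homebrew (the cpu/memory flags are optional and illustrative):

    brew install colima docker           # "docker" here is just the CLI client
    colima start --cpu 4 --memory 8      # boots the Lima VM and the container runtime inside it
    docker run --rm alpine uname -a      # the docker CLI now talks to the VM's socket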

Linux runs containers just fine. The main thing you need for containerization is a Linux kernel. People have actually hacked together docker alternatives with just bash and namespaces. I used a plain qemu vm for a while with the docker socket pointing to an ssh tunnel on my mac. Works amazingly well but it has some limitations. Colima is easier to manage.

People have mentioned several of the other alternatives already. They all can work with the same command line tooling. If you need a UI, colima is probably too barebones. But otherwise, things like IDEs and other tools work (e.g. lazydocker, vs code, intellij, etc.) just fine with it. So the added value of extra UI is limited to me at least.

I think the container runtime inside the vm (podman, containerd, whatever) is mostly not that relevant for developers. It's a bit of an implementation detail. As long as docker and docker compose work on the command line, I'm happy.

Comment by b40d-48b2-979e 1 day ago

    What is the edge that docker provides these days?
Enterprise support, plus Docker Desktop, which makes it nearly seamless to get set up with containers. I've tried Rancher/podman/buildah and the experience introduced too much friction for me when not on a Linux system.

Comment by troyvit 1 day ago

> [...] without being on a Linux system.

I'll add that needing to be on the "right" Linux system is another strike against Podman. Last I checked if I wasn't on a RedHat derivative I was in the wilderness.

Comment by travisgriggs 1 day ago

Huh. I tried docker. Didn’t like the odor of enshittification, and so switched to podman (desktop). I use it on macOS, and deploy on Ubuntu. It’s been smooth sailing.

I found the signal-to-noise ratio better in Podland. As a newb to the docker space, I was overwhelmed: should I swarm, should I compose, what's this registry thing? And people were freaking out about root stuff. I'm sure I still only use and understand about 10% of the pod(man) space, but it's way better than how I felt in the docker space.

I miss when software engineering put a high value on simplicity.

Comment by troyvit 1 day ago

Yeah I was pretty hard on podman in that comment but the truth is I use it over docker wherever I can. I have a mixed environment at home but settled on RedHat for the home server and everything seems totally ok. I really like quadlets, and the ability to go rootless is a big load off my mind to be honest. I do wish they'd package it for other distros though. It would save some headaches.

Comment by koe123 1 day ago

Fair! I haven’t done any container related activities on Windows.

Comment by pzmarzly 1 day ago

Docker, or rather containerd, still has a better plugin ecosystem around it: Unregistry https://github.com/psviderski/unregistry, Nydus https://github.com/dragonflyoss/nydus, all the different "snapshotters" (storage formats), the utils for sharing NVIDIA GPUs with containers, etc.

The gap with Podman is closing though, and most users don't need any of these in the first place.
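Of those, the NVIDIA GPU sharing is probably the most common need; it is a single flag once the nvidia-container-toolkit is installed (the image tag is illustrative):

    docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi   # lists the GPUs visible inside the container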

Comment by darkwater 1 day ago

> What is the edge that docker provides these days?

That you are not the average developer

Comment by swores 1 day ago

Not very clear what you mean... well you haven't actually given them an answer to their question.

Are you suggesting that docker provides an (unspecified) edge to developers who are better than average? Or to those who are mediocre? Or...

Comment by darkwater 1 day ago

I mean that the average developer will follow/use what has the most traction already and in the containers space, like it or not, it's still Docker.

Comment by thiagoperes 1 day ago

Switched to OrbStack in one prompt using Claude. It’s a night and day difference

Comment by scoodah 21 hours ago

You needed Claude for a `brew install orbstack`?

Comment by eigencoder 1 day ago

What's better about it?

Comment by linkage 1 day ago

The host actually gets RAM back after bursty workloads in the container thanks to memory ballooning. Containers also start up to 5x faster and `npm install` is also much faster because OrbStack uses macOS-specific APIs as much as possible.

Comment by chuckadams 1 day ago

The Orbstack dashboard is also something you'll actually enjoy using. It's a native Swift app that launches instantly, not Electron. You get resolvable hostnames for all your containers (though I use traefik instead). Opening a container's filesystem in Finder is another nice trick, I use that one now and then.

Comment by calmoo 18 hours ago

For one, it has dynamic memory allocation and is much faster and more resource-efficient. Drop-in replacement. It also has a rather nice UI.

Comment by wcallahan 23 hours ago

I suspect the timing of this and comments is not coincidental.

I pay for Docker licenses, even though I don't meet the business-size criteria that require it, because I wanted reliable image fetching for my self-hosted container CI/CD pipelines, which were failing on Docker Hub image fetches.

But as of now my OAuth logins to Docker expire within hours, and I've been left with no choice but to scatter in search of alternative container image sources for my Dockerfiles to stop this madness.

My one-way, permanent migration away from Docker Hub-sourced images has finally left me with no reason to keep paying for Docker licenses, thanks to whatever this misguided or blundered rate-limit implementation is.

Comment by kordlessagain 15 hours ago

I use it with this every day: https://github.com/DeepBlueDynamics/codex-container

Docker is useful and it’s too bad early and ignorant investors poisoned the well.

Their new AI stuff is bad but maybe if they positioned themselves like Ollama….

Comment by drnick1 1 day ago

Why should a company be making money selling what is essentially a thin layer of convenience over kernel features?

Comment by JakaJancar 1 day ago

They enshittified/Dropboxified their core Docker Desktop app so much that OrbStack — I believe a single person initially — managed to build a better product. I love this outcome.

Comment by VLM 2 hours ago

The key business mistake is trying to have too large of a company or having the wrong organization structure.

Consider how an SCM like git or bitkeeper is more complicated than a wrapper for LXC. For some odd reason Docker has almost 100x as many employees as bitkeeper. They're just too big. It would be like trying to create a startup of "/bin/ls as a service" with at least 50 employees; 49 of them would not be able to generate enough revenue to break even, much less turn into a billion-dollar "LSaaS" tech unicorn. There's not enough meat for the pack. FreeBSD has jails, and all of FreeBSD (not just jails, the whole thing) is about a third the size of Docker... hmm.

An alternative to having an appropriate sized company would be giving up on profit. There probably is no way to make "real" money doing what Docker is doing, not "real" in the context of 1500+ employees. It would be very real if they could get their current revenue with 20 employees, but ... That is not bad, that just means they're better off as an IRS 501(c)(3) approved charity rather than trying to become a startup unicorn. Large organizations like the Red Cross are a valuable and important addition to the community, despite not being a successful tech unicorn. They got a lot of money from In-Q-Tel so they're already kind of taxpayer funded (via CIA) so going outright charity wouldn't be a stretch.

A good business analogy for Docker would be the small day care my kids attended. They were based in a small church building which permanently limited the size of their state license. It doesn't matter if they hire 3 caregivers or 1500, they only have space for an 8 kid license and revenue will never exceed 8 kids. They can hire 1500 caregivers using VC funds but they'll never get more than 8 kids of revenue. They are not working in a field where they can scale to a billion dollars of revenue. There's nothing "wrong" about a daycare that rents a room of a church, employs a couple "early childhood education major" college grads right around minimum wage, and the kids have fun. Thats Docker. There is perhaps a bigger third problem that they probably sold themselves to investors as an unstoppable money printing machine. Whoops. Nobody makes that mistake with the local church daycare. To some extent lack of due diligence is the fault of the investors. We'd never have had docker without their ... selfless financial donation.

Comment by neom 1 day ago

I think what happened to Docker is a bit unfortunate. The timeline: March 2013 — Docker goes public/open source at PyCon; Nov 2013 — Jerry Chen pursues Docker, leading to the Greylock Series B; Jan 2014 — Greylock Series B closes ($15M); June 2014 — Kubernetes announced; July 2015 — Kubernetes 1.0 released.

Jerry is a good friend of mine and I think a great VC; he comes from the VMware world and was part of building the VMware enterprise strategy. When all the container stuff was going down, I was trying to understand how DigitalOcean needed to play in the container space, so I spent a lot of time talking to people and trying to understand it (we decided we basically... shouldn't, although we looked at buying Hashi). But it was clear at the time that the Docker team went with Jerry because they saw themselves either displacing VMware or doing a VMware-style play. Either way, we all watched them start moving from a pure-play devtool to a real enterprise footing in 2014, and it might have worked too (although frankly their GTM motions were very, very strange), but Kubernetes... yeah. You might recall Flo was on the scene too, selling his ideas at Mesosphere, along with the wonderful Alex Polvi with CoreOS. It was certainly an interesting time. I think about that period often, and it is a bit of a shame what happened to Docker. I like Solomon a lot and think he's a genuinely genius dude.

Comment by singularity2001 1 day ago

Superfluous!

Comment by mystraline 1 day ago

Admittedly, on my infrastructure, I've been de-dockerizing. There are too many footguns and little gotchas, and they all add up.

For example, sharing a graphics card, say an Intel A380 with Jellyfin, through Docker is a TERRIBLE experience.

But the same thing with a full VM, with the graphics card passed through to it, is easy peasy.
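For comparison, the usual Docker incantation for Intel GPU passthrough looks something like this (paths are illustrative; the host needs /dev/dri and matching group permissions, which is where it tends to get painful):

    docker run -d --name jellyfin \
      --device /dev/dri:/dev/dri \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media:ro \
      -p 8096:8096 \
      jellyfin/jellyfin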

Now, for testing applications, Docker is great. But when I decide to run a service, I'll de-dockerize, OR run a single VM with Docker inside, with cron jobs to update once a week.

And logging/monitoring is also a hell of a lot easier per machine, rather than 8 services through docker.

I'm sure if I need a full dynamic service fabric, sure go with Docker or K8s. But this is for personal and friend usage.

Comment by blackcatsec 1 day ago

I truly do sometimes detest the open source community's often outright hostility towards monetization of software. People gotta eat.

Comment by shortsunblack 2 hours ago

Open source community detests dilettante attempts at rent seeking by building mediocre wrappers over commodity software.

Docker did not invent Linux containers. They did not invent namespaces or chroots.

You'll be hard-pressed to name the things they did invent, and those things long ago left Docker out to dry (the OCI image spec).

OrbStack is built by a single person and it provides an objectively better experience than Docker Desktop, built presumably by dozens of full time engineers.

People detest incompetence and rent seeking. That they do.

Docker's lack of important contributions is best summarized by all the alternatives that popped up in no time. With Kubernetes now defaulting to CRI-O, the modern container stack has precisely zero Dockerisms.

Comment by onraglanroad 1 day ago

I've been to developer conferences in the US. Lack of food is definitely not a problem.

Comment by jujube3 12 hours ago

He shoots, he scores!

Comment by sneak 1 day ago

Docker is only successful because of free software: the foss docker daemon, the foss docker cli client, and of course linux.

Docker tried to become a proprietary software company, which is rude and user-hostile.

Comment by PlatoIsADisease 1 day ago

I was a contractor code monkey at a place automating $3M/yr in labor. We reported to a senior who did little programming, if any. He was older than me but newer to the company than I was; I was happy to avoid meetings and just code.

He'd always try to get us into various technologies; Docker was one of them. It wasn't really relevant for the job, but I could see its uses.

Now that I think about it, I don't think anything they did on the tech-discovery front was useful. We got stuck on Confluence, which required us to save as a .pdf for our users to view, lmao. Credit for being super smart with coding, though; he was a whiz on code reviews.