I’m moving to a new machine soon and want to re-evaluate some security practices while I’m doing it. My current server runs Debian with all apps containerized in Docker, running as root. I’d like to harden some stuff, especially Vaultwarden, but I’m concerned about transitioning to Podman while using complex Docker setups like Nextcloud AIO. Do you have experience hardening your containers by switching? Is it worth it? How long is a piece of string?
I’m very much biased towards Podman, but from what I understand rootless Docker is a bit of an afterthought, while Podman has been developed from the ground up with rootless in mind. That should be reason enough.
The very few things Docker can do that Podman struggles a bit with usually involve mounting the Docker socket in the container or other stupid things. Since you care about security, you wouldn’t do that anyway. Not to mention there’s also rootful Podman when you need that level of access.
I’d recommend an RPM-based distro with Podman; the few times I’ve tried Podman on a deb-based distro, there’s always been something wonky. It’s been a while, though.
Podman actually runs fine on Debian 12, though the packaged version is a bit old and doesn’t support the podman compose command. podman-compose works, though.
podman-compose is packaged in a separate podman-compose package in Debian 12 (I didn’t try it, though). The only thing missing (for me) in Debian 12 is quadlet support (requires Podman 4.4+; Debian 12 has 4.3).
No, it’s on 4.x I think.
You are right. Quadlets require 4.4, Debian 12 has 4.3
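For anyone wondering what the missing quadlet support actually looks like, here’s a minimal sketch, assuming Podman 4.4+ and a rootless user session. The image, port, and data path are placeholders, not anything from the thread:

```bash
# A .container file under ~/.config/containers/systemd/ is turned into a
# systemd service by the quadlet generator at daemon-reload time.
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/vaultwarden.container <<'EOF'
[Container]
# Placeholder image, port mapping, and data path -- adjust to your setup.
Image=docker.io/vaultwarden/server:latest
PublishPort=8080:80
Volume=/srv/vaultwarden:/data:Z

[Install]
WantedBy=default.target
EOF

# The generated unit is named after the file: vaultwarden.service.
systemctl --user daemon-reload
systemctl --user start vaultwarden.service
```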
Thanks. Last time I tried it was just after Bookworm released, and on ARM, so it has probably gotten better.
Does Podman work well when you have multiple rootless containers that you want to communicate securely in a least-privilege configuration (each container only has access to what it needs)? That is the one thing I couldn’t figure out how to do well with Podman.
Do you mean networking between them? There are two ways of networking between containers. One is to create a custom network for a set of containers that you want to connect to each other. Then you can access other containers in that network using their name and port number, like so: container_name:1234
Note: DNS is disabled on the default network, so you can’t access other containers by name when using it. You need to create a custom network for that to work.
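A minimal sketch of the custom-network approach (the network, container names, and images here are made up for illustration):

```bash
# User-defined network: unlike the default network, containers on it can
# resolve each other by name.
podman network create appnet

# Two example containers joined to that network.
podman run -d --name db --network appnet \
  -e POSTGRES_PASSWORD=changeme docker.io/library/postgres:16
podman run -d --name app --network appnet docker.io/library/nginx:alpine

# From inside "app", the database is now reachable as db:5432
# (container_name:port). Nothing needs to be published to the host for
# container-to-container traffic.
```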
Another way is to group them together with a pod. Then you can access other services in that same pod using localhost, like so: localhost:1234
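And a sketch of the pod approach (names and images are again placeholders). Containers in a pod share a network namespace, which is why localhost works:

```bash
# Ports are published on the pod, not on the individual containers.
podman pod create --name mypod -p 8080:80

podman run -d --pod mypod --name web docker.io/library/nginx:alpine
podman run -d --pod mypod --name cache docker.io/library/redis:7

# From "web", redis is reachable at localhost:6379; from the host,
# nginx is reachable on port 8080 via the pod's published port.
```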
Personally, in my current setup I’m using both pods and separate networks for each of them. The reason is I use traefik and I don’t want all of my containers in a single network along with traefik, so I just made a separate network for each of my pods and gave traefik access to that network. As an example, here’s my komga setup:
I have komga and komf running in a single pod with a network called komga assigned to the pod. So now I can communicate between komga and komf using localhost. I also added traefik to the komga network so that I can reverse proxy my komga instance.
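Roughly, that setup looks like the sketch below. The images, paths, and ports are assumptions on my part, and the traefik configuration and labels it actually needs to route traffic are omitted:

```bash
# One network per pod; traefik joins each app network instead of every
# service sharing one flat network.
podman network create komga

podman pod create --name komga-pod --network komga
podman run -d --pod komga-pod --name komga-app \
  -v /srv/komga:/config:Z docker.io/gotson/komga:latest
podman run -d --pod komga-pod --name komf docker.io/sndxr/komf:latest

# Traefik lives outside the pod but is attached to the komga network,
# so it can reach komga's port over that network (inside the pod, komga
# and komf still talk over localhost).
podman run -d --name traefik -p 8443:443 --network komga \
  docker.io/library/traefik:v3.0

# Other per-app networks can be attached later with:
#   podman network connect <network> traefik
```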
Yes, you can easily do that. Set the container names and put them on the same network. I’ve used Caddy and a whole bunch of self-hostable services with it, and I reverse proxy to container_name:port
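Something like this rough sketch (the domain, host ports, and the Vaultwarden example are placeholders, and TLS/ACME setup is left out):

```bash
podman network create web

podman run -d --name vaultwarden --network web \
  -v /srv/vaultwarden:/data:Z docker.io/vaultwarden/server:latest

# Caddy proxies by container name; Vaultwarden listens on port 80 inside
# its container.
cat > Caddyfile <<'EOF'
vault.example.com {
    reverse_proxy vaultwarden:80
}
EOF

# Rootless Podman can't bind ports below 1024 by default, hence 8080/8443.
podman run -d --name caddy --network web -p 8080:80 -p 8443:443 \
  -v "$PWD/Caddyfile":/etc/caddy/Caddyfile:ro,Z docker.io/library/caddy:latest
```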
I’m thinking about an immutable OS with podman support first and foremost. Would you recommend Fedora CoreOS?
It’s a really solid combo, but if you’re not familiar with CoreOS I wouldn’t change both at once. Meaning: migrate the services to Podman first, then switch the OS. I’ve been meaning to switch from Alma 9 to CoreOS for a long time, but haven’t found the time.
I noticed you run Nextcloud AIO. Just so you know, that’s one of those “mount the Docker socket” monstrosities. I’d look into switching to the community NC image and separate containers managed yourself. AIO is easy, but if someone gets a shell in the NC container, it’s basically root on your host.
Either way, you’re going to have trouble running AIO with Podman.
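To illustrate the socket point above: anything that can talk to the Docker socket can start arbitrary containers, including one with the host filesystem mounted, which is effectively root on the host. A sketch only (the image is just an example):

```bash
# From inside a container that has /var/run/docker.sock mounted and a
# docker CLI available, this drops you into the host's root filesystem:
docker -H unix:///var/run/docker.sock run --rm -it \
  -v /:/host docker.io/library/alpine chroot /host /bin/sh
```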