After months of dealing with problems trying to get the stuff I want to host working on my Raspberry Pi and Synology, I’ve given up and decided I need a real server with an x86_64 processor and a standard Linux distro. Since I don’t want to keep running into problems after spending a bunch more money, I want to think seriously about what I need hardware-wise. What considerations should I be weighing here?
Initially, the main things I want to host are Nextcloud, Immich (or similar), and my own Node bot @[email protected] (which uses Puppeteer to take screenshots—the big issue that prevents it from running on a Pi or Synology). I’ll definitely want to expand to more things eventually, though I don’t know what. Probably all/most in Docker.
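(For context on the Puppeteer part: the bot just needs a full Chromium, which is what makes ARM painful. On x86_64 I’d expect something along these lines to work, using what I believe is the official Puppeteer image; the paths and script name are placeholders, not my actual setup.)

```bash
# Rough sketch only: run a Puppeteer script in the official image on x86_64.
# --cap-add=SYS_ADMIN is what the Puppeteer Docker docs suggest for Chromium's sandbox.
docker run --rm -i --init --cap-add=SYS_ADMIN \
  -v "$PWD/bot:/home/pptruser/bot" \
  ghcr.io/puppeteer/puppeteer:latest \
  node /home/pptruser/bot/screenshot.js
```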
For now I’m likely to keep using Synology’s reverse proxy and built-in Let’s Encrypt certificate support, unless there are good reasons to avoid that. And as much as possible, I’ll want the actual files (used by Nextcloud, Immich, etc.) to be stored on the Synology to take advantage of its large capacity and RAID 5 redundancy.
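Concretely, what I’m picturing is mounting an NFS (or SMB) share from the Synology on the new box and pointing the containers’ data directories at it. A minimal sketch, with made-up host names and paths:

```bash
# Sketch only: mount an NFS share exported by the Synology
# (needs nfs-common installed on Debian/Ubuntu).
sudo mkdir -p /mnt/synology/nextcloud-data
sudo mount -t nfs synology.lan:/volume1/nextcloud-data /mnt/synology/nextcloud-data

# Then bind-mount that path into the container, e.g. for the official Nextcloud image:
docker run -d --name nextcloud \
  -v /mnt/synology/nextcloud-data:/var/www/html/data \
  -p 8080:80 \
  nextcloud
```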
Is a second-hand Intel-based mini PC likely suitable? I read one thing saying that they can have serious thermal throttling issues because they don’t have great airflow. Is that a problem that matters for a home server, or is it more of an issue with desktops where people try to run games? Is there a particular reason to look at Intel vs AMD? Any particular things I should consider when looking at RAM, CPU power, or internal storage, etc. which might not be immediately obvious?
Bonus question: what’s a good distro to use? My experience so far has mostly been with desktop distros, primarily Kubuntu/Ubuntu, or with niche distros like Raspbian. But all Debian-based. Any reason to consider something else?


Sorry for the late reply. I’m just disorganised and have way too many unread notifications.
LXC containers sound really interesting, especially on a machine that’s hosting a lot of services. But how available are they? One advantage of Docker is its ubiquity, with a lot of useful tools already built as Docker images. Does LXC have a similarly broad supply of images? Or else is it easy to create one yourself?
Re VM vs LXC, have I got this right? You generally use VMs only for things that are intermittently spun up, rather than services you keep running all the time, with a couple of exceptions like HomeAssistant? What’s the reason they’re an exception?
Possibly related: in all your examples, VMs get access to the discrete GPU while containers use the integrated GPU. Is there a particular reason for that split?
I’m really curious about the cluster thing too. How simple is that? Is it something where you could start out just using an old spare laptop, then later add a dedicated server and have it transparently expand the power of your server? Or is the advantage just around HA? Or something else?
LXC is more focused on the OS than the application, whereas Docker is more focused on the application. In general, I don’t recommend piping to bash, but take a look here for some LXC build scripts:
https://community-scripts.github.io/ProxmoxVE/
And you can still run Docker with Proxmox: you can make a VM and put Docker in it, or you can run it in an LXC.
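If it helps, the LXC route looks roughly like this on the Proxmox host (the container ID, storage names and template version are just examples; adjust for your setup):

```bash
# Grab a Debian template (the exact version string will differ).
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# Create an unprivileged container with nesting enabled so Docker can run inside it.
pct create 110 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname docker \
  --cores 2 --memory 4096 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 \
  --features nesting=1,keyctl=1

pct start 110
# Install Docker from the Debian repos inside the container (no piping to bash needed).
pct exec 110 -- apt-get update
pct exec 110 -- apt-get install -y docker.io docker-compose
```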
Regarding VMs, that’s purely an example of how I’m doing things, and only for specific things. I start and stop those VMs because I’m passing specific hardware (a discrete GPU) through to the VM; it’s not a shared resource in this case. I’m not making a virtual GPU, the VM gets to use the Quadro that’s in there directly. I have other VMs (HomeAssistantOS, for example) that run all the time.
LXC can be used to share resources with a host. VMs can be used to dedicate resources. LXCs are semi-isolated, and a VM is fully isolated.
My iGPU/dGPU split is just down to my own use cases, nothing more.
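For the container/iGPU case, the usual trick is to bind the host’s /dev/dri into the LXC. Something like this on the Proxmox host (the container ID is an example, and this assumes an Intel iGPU; 226 is the DRM major device number):

```bash
# Append the device-sharing lines to the container's config, then restart it.
cat >> /etc/pve/lxc/110.conf <<'EOF'
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF
pct stop 110 && pct start 110
```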
Clustering is easy and can be done over time. Your new host needs to join the existing server before you add any VMs or LXCs to it; that’s about it. A good overview of how to do it is here:
https://www.wundertech.net/how-to-set-up-a-cluster-in-proxmox/
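If you end up doing it from the CLI instead of the web UI, it’s basically this (the IP and cluster name are placeholders):

```bash
# On the existing Proxmox server:
pvecm create homelab

# On the new node (which must not have any VMs/LXCs yet):
pvecm add 192.168.1.10   # IP of the existing server

# Check from either node:
pvecm status
```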