I recently noticed that htop displays a much lower ‘memory in use’ number than free -h, top, or fastfetch on my Ubuntu 25.04 server.
I am using ZFS on this server, and I’ve read that ZFS will use a lot of RAM. I also read a forum comment saying that htop doesn’t count caching done by the kernel, but I’m not sure how to confirm that ZFS is what’s causing the discrepancy.
I’m also running a bunch of Docker containers and am concerned about stability, since I don’t know which number I should be looking at. Depending on the tool, I have either ~22GB, ~4GB, or ~1GB of usable memory left. When my concern is available memory for new Docker containers, is htop the better metric, or should I trust the other tools?
Server memory usage:
- htop = 8.35G / 30.6G
- free -h =
               total        used        free      shared  buff/cache   available
  Mem:          30Gi        26Gi       1.3Gi       730Mi       4.2Gi       4.0Gi
- top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
- fastfetch = 26.54GiB / 30.6GiB
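To confirm that the gap really is the ZFS ARC, you can read its current size directly from the kernel. A quick check, assuming the ZFS module is loaded (`arc_summary` ships with the zfsutils-linux package):

```shell
# The ARC's current size, in bytes, is the "size" row of arcstats.
# This file only exists when the ZFS kernel module is loaded.
awk '/^size / {printf "ARC size: %.1f GiB\n", $3 / 1024^3}' /proc/spl/kstat/zfs/arcstats

# arc_summary (from zfsutils-linux) prints a friendlier full report.
arc_summary | head -n 20
```

If the ARC size roughly matches the difference between htop’s and free’s “used” figures, ZFS is your answer.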
EDIT:
tl;dr: all the tools are showing correct numbers; htop just excludes the ZFS ARC cache from its “used” figure. Since the ARC shrinks on its own when applications ask for memory, that cached RAM is effectively available, so for the purpose of ensuring there is enough RAM for more Docker containers, htop shows the most useful number with my setup.
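If you’d rather not rely on the ARC shrinking under pressure, you can cap it with the `zfs_arc_max` module parameter. A sketch, using an arbitrary 8 GiB cap as the example value:

```shell
# Cap the ARC at 8 GiB at runtime (8 * 1024^3 = 8589934592 bytes).
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots.
echo 'options zfs zfs_arc_max=8589934592' | sudo tee /etc/modprobe.d/zfs.conf
sudo update-initramfs -u
```

Note that a running ARC above the new cap shrinks gradually rather than instantly.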


Most of those containers are probably grabbing more memory than they actually need. Consider applying some resource constraints to some of them.
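Before setting limits, it helps to see what each container actually uses. A quick look with the Docker CLI (the container name below is a placeholder):

```shell
# Snapshot of per-container memory use versus its limit.
docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}'

# Cap a running container at 512 MiB without recreating it.
# 512m = 512 * 1024 * 1024 = 536870912 bytes.
# "mycontainer" is a placeholder name.
docker update --memory 512m --memory-swap 512m mycontainer
```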
Dozzle is an excellent addition to your docker setup, giving you live performance graphs for all your containers. It can help a lot with fine tuning your setup.
It took a bit of trial and error in Portainer, but under Runtime & Resources you can adjust the resource limits for each container.
You can also set these limits in your compose file, if you use compose (which you should).
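In a Compose file, that might look like the following sketch (service name, image, and values are placeholders):

```yaml
services:
  myservice:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.50"
```

Older Compose files sometimes use the top-level `mem_limit` key instead; `deploy.resources.limits` is the Compose Spec form that current `docker compose` honors outside Swarm as well.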
Sure, it’s just easier for me to tweak in Portainer. But, yeah. There are many ways to skin the cat.