I have tested a lot of atomic and traditional distributions lately, and tons of desktop environments, strictly for fun and to branch out. Having a 3-2-1 backup strategy, and not just having it in place but being able to restore from it in a timely manner to keep continuity, is paramount. You can list endless reasons why.
Why do atomic distros, which are supposed to be more stable and to some degree superior immutable environments, lack good backup options? You can hack things together, and there are tools you can more or less install, like Timeshift and so on. But they seem to place far more emphasis on rolling back bad updates than on total system backups.
By default you should have true backups first, then layer in rollbacks, not the other way around. Am I missing something?
You do not need to back up your OS, only your personal files.
Timeshift is completely unnecessary. Fedora Atomic's rollback support is more powerful and avoids certain issues.
You should only be backing up personal files, not OS files. The OS is replaceable, your personal files are not.
I’ve been backing up my OS and my personal files with borg to my NAS.
Saved me a weekend of setup and config editing once before, when my drive failed.
Or do you just remember all the config changes you made and type them out off the top of your head? And all the apps you have installed? It's over 300 apps and 100 config files for me.
The OS is tiny compared to personal files. It doesn’t make sense not to back it up.
Or do you just remember all the config changes you made and type them out off the top of your head? And all the apps you have installed? It's over 300 apps and 100 config files for me.
Well, kinda. I have scripts to set up most of my system after an installation. It's nice not having to remember everything I've done, and it means I can reinstall my system or install on a new machine with relative ease.
Doesn't need to be anything complex. Just having a list of packages I want installed that I can copy into my terminal makes things so much faster.
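Something along these lines, for example (a rough sketch, assuming Fedora with dnf plus Flathub; the package names and flatpak-apps.txt are just placeholders):

```bash
#!/usr/bin/env bash
# Minimal post-install sketch: reinstall my usual packages and Flatpaks.
set -euo pipefail

# System packages I always want (example names)
sudo dnf install -y git vim htop borgbackup

# Flatpak apps, restored from a list exported earlier with:
#   flatpak list --app --columns=application > flatpak-apps.txt
xargs -a flatpak-apps.txt flatpak install -y flathub
```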
I install or configure something every week.
In addition to doing the config, I'd have to edit a script as well, which seems like more hassle. At that point, why not go for NixOS and have just the latter part of the hassle, without also having to edit config files in / ?
Instead, I run the backup command after I change something. When I want to restore, I can mount any of the last 20 backups from the borg repo and either manually revert a file or use rsync to mass overwrite.
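Roughly like this, to give an idea (repo path, archive name, and mount point are placeholders, not my exact setup):

```bash
# Back up /home and /etc into the borg repo on the NAS
borg create --stats --compression lz4 \
    /mnt/nas/borg-repo::'{hostname}-{now}' /home /etc

# Restore: mount an older archive read-only, then revert single files
# by hand or mass-overwrite with rsync
borg mount /mnt/nas/borg-repo::myhost-2024-01-01 /mnt/restore
rsync -aAX /mnt/restore/home/ /home/
borg umount /mnt/restore
```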
I was thinking of using btrfs send, which would probably be even better for recovering from disk failure, but borg's file-based backups take way less space and work well so far. And I don't have the extra effort of a declarative OS or setup scripts.
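For comparison, the btrfs send route I was considering would look roughly like this (subvolume layout, host name, and paths are made up; run as root):

```bash
# Take a read-only snapshot of the root subvolume, then stream only the
# changes since the previous snapshot to the NAS
btrfs subvolume snapshot -r / /.snapshots/root-new
btrfs send -p /.snapshots/root-prev /.snapshots/root-new | \
    ssh nas "btrfs receive /backups/laptop"
```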
It also works offline as long as I'm with my NAS, unlike a script that installs a list of packages from the repos.
Okay, so let me break down what I THINK is happening here, which is that you might have a misunderstanding of what atomic/immutable means.
First, these are made to separate OSspace from UserSpace. Whatever you keep in UserSpace is your responsibility.
Second, the actual running OS is built in layers, like containers. The hash of your OSspace can readily be used to pull the exact same version of it from the repos that hold the precompiled versions of these things. Just like containers.
Third, you don’t need to backup any of the OS because of the above.
Lastly, the general idea is that since you don't need to back up anything about the OS, and you should be able to check out a hash of some sort that downloads and is eventually bit-consistent with the OS layer, all you have to worry about is the UserSpace content.
How you manage the UserSpace content is up to you. Back your stuff up, start a bare machine and check the OS out to a specific revision where your previous machine was at, then drop your UserSpace stuff in, and it will “just work”.
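On Fedora Atomic, for instance, that "check the OS out to a specific revision" step looks roughly like this (the version string is only an example):

```bash
# See which deployments/base commits the machine currently has
rpm-ostree status

# Deploy the exact base image version the old machine was on, then reboot into it
rpm-ostree deploy 41.20250101.0
systemctl reboot
```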
What I am understanding about atomics is that your view is right, with caveats. Flatpaks only write to /home, but not all apps or software are Flatpaks. There is no standard for where apps write on Linux, so some apps write to the system and some write to /home, which allows creep and data scatter throughout a system.
It seems that with traditional systems you gain good backups that are easy to redeploy should you need them, but config drift can creep up, updates break more easily, and rollbacks require up-to-date snapshots.
Atomics make rollbacks easier, but backups are harder and restores more complex because different types of files end up in fragmented locations. Also, apps don't always play well with, say, SELinux on Fedora, though it's rare. Take Mullvad for instance: it's not a Flatpak and they primarily ship updates as a .deb or similar, requiring Distrobox or Toolbox, which is a whole other level of complexity.
I am basically trying to decide whether I should go with an immutable or a traditional OS install. Things sound great on paper, but daily driving is a different story.
I want security by default, sandboxed/containerized apps, Wayland native, with solid backup support infrastructure. So that when, not if, I test backups or redeploy a machine (and I do it often), I can boot back in as close to where I left off as possible.
So continuity is paramount. I've been eyeing Fedora Kinoite, Fedora Workstation KDE, and Debian, likely with KDE, only because Cinnamon isn't Wayland native yet and likely won't be for a while.
Edit: Currently I've been running NixOS. It's been great, but the config only backs up system apps, not data or app state. And even with /home backups you'll still lose system files unless they're manually tracked and synced as well. It's one giant hassle. I used to use Clonezilla, but my search for other DEs and OSes that scratch the itch left by stock Mint's flaws has still come up empty.
I think you still misunderstand the point of atomic. Your base system should be installed entirely through ublue or similar. Every time you update, ublue will hash it and you can go back to that exact config with a working base system. Flatpaks and Distrobox are user applications and should store all the data they need somewhere under your /home. Back up your /home and /etc with rsync or similar. When all is said and done, you'll be able to recreate your system with ublue and restore your configs and personal files with rsync.
The advantage of ublue is that you can easily share or restore your base system without needing to back up gigabytes of data every update.
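The whole recreate-and-restore flow is roughly this (image name and backup paths are examples, not a recommendation):

```bash
# Rebase a fresh install onto the same ublue image the old machine used
rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/bluefin:stable
systemctl reboot

# Then pull the personal files and configs back from the backup
rsync -aAX /mnt/backup/home/ /home/
rsync -aAX /mnt/backup/etc/  /etc/
```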
Edit nvm you mentioned NixOS.
I'm pretty sure ublue's variants of atomic have easy backup features 🤔 but yes, this is one issue that needs to be addressed by a distro; not sure if it exists entirely without setting each install method's working directory manually.
I've never tried an immutable OS, but I'd love it if the ability to do system backups and redeploy to another computer were just part of any OS.
Especially when Linux encourages you to distro hop.
Clonezilla is great, but it's already happened to me that one backup wasn't deployable on another (really old) computer.
That kinda exists with NixOS, but you’d have to backup your personal files separately.
You’re not really backing up the OS with NixOS, but the nix configuration file describes how the OS is built in a reproducible way.
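So the "backup" is really just the config plus a rebuild; on a fresh install the restore is roughly this (paths are examples):

```bash
# Put the saved declarative config back in place and rebuild the system from it
sudo cp /mnt/backup/nixos/configuration.nix /etc/nixos/configuration.nix
sudo nixos-rebuild switch
```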
Yes, I've heard about it, but apparently NixOS is quite complex and not accessible to someone like me who considers himself an eternal Linux newbie.
Fam, I loathe saying this, but please, if you desire engagement, then at least put some honest effort into proofreading your writing before posting it. I'm just assuming stuff at this point because I can barely grasp your intent/writing. *sigh*
Why do atomic distros, which are supposed to be more stable and to some degree superior immutable environments, lack good backup options? You can hack things together, and there are tools you can more or less install, like Timeshift and so on.
Which distros even come by default (i.e. installed OOTB) with "good backup options"? Which atomic distros is this statement even based on?
But they seem to place far more emphasis on rolling back bad updates than on total system backups.
Because their atomicity barely goes beyond updates. The 'atomic' in "atomic distros" mostly describes how updates are atomic; i.e. the system either updates successfully or doesn't update at all. Thus, by design, we have two possible states after an update: a 'successfully' updated system, or a 'failed' update resulting in the same state as before. Atomic distros aren't smart enough to catch all breakage caused by 'successful' updates. As such, most of these breakages will only show themselves after you try to boot into the updated system. Deleting/erasing the previous known-good state without verifying that the new state works well would be foolish, especially on a distro whose updates are otherwise robust. Hence, rollback of updates comes almost trivially on atomic distros, as it (almost) follows by design.
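Concretely, on an ostree-based atomic distro that amounts to nothing more than the stock commands (a sketch, not a recommendation):

```bash
# Both deployments stay on disk: the freshly updated one and the previous known-good one
rpm-ostree status

# If the 'successful' update turns out to be broken, flip back to the previous deployment
rpm-ostree rollback
systemctl reboot
```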
So, what I’m interested in is the following:
- Are you familiar with the notion of stateless systems? Is this (perhaps) what you’re (actually) seeking?
By default you should have true backups first, then layer in rollbacks, not the other way around. Am I missing something?
I think my previous paragraph should be enlightening in this regard. If you disagree (or otherwise), then please feel free to elaborate on why you think so. Btw, what do you even mean by "true backups"?
Based on their post history, I strongly suspect the OP has English as a non-primary language. They are doing fine; their posts are perfectly understandable. There's no value in harassing them about that.