Hey again! I’ve progressed in my NAS project and I’ve chosen to go for a DIY NAS. I can’t wait for the parts to arrive!

Now I’m struggling a bit to choose an OS. I’m starting with 2×10 TB HDDs + a 1 TB NVMe SSD. I plan to use one HDD for parity and to add more disks later.

I plan to use this server purely as a NAS because I will be getting a second, more powerful server some time next year. But in the meantime, this NAS is a big upgrade over my Raspberry Pi 4, so I will run some containers or VMs on it.

I don’t want to go with TrueNAS as I don’t want to use ZFS (my RAM is limited and I’m not sure I can add drives of different sizes). I’ve read that btrfs is the second-best option for a NAS, so I may use that.

Unraid seemed like the perfect fit. But the more I read about it, the more I wonder if I shouldn’t switch to Proxmox.

What I like about Unraid is the ability to add a disk without worrying about its size. I don’t care much about the applications Unraid provides, and since docker-compose is not fully supported, I’m afraid I won’t be able to do things I could have done easily with a docker-compose.yml. I also like that it’s easy to share a folder. What I don’t like about Unraid is the cache system and the mover. I understand why the system works this way, but I’m not a fan.

I’ve asked myself if I needed instant parity for all my data and if I should put everything in the array.

The thing is that for some of my data I don’t care about parity. For instance, I’m fine with only backing up my application data and having parity on the backup. For my TV shows I want neither parity nor backups, while I want both for my photos.

After some more research, I found mergerfs and snapraid. They feel more flexible and fix the cache/mover issue I have with Unraid, although I’m not sure whether snapraid can run with only 2 disks.
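
From what I’ve read so far, the setup would look roughly like this (disk names and paths are placeholders, not a tested config):

    # /etc/fstab: pool the data disks into one mount point with mergerfs
    /mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,category.create=mfs,moveonenospc=true  0 0

    # /etc/snapraid.conf: one disk holds parity, the others hold data
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid.content
    content /mnt/disk1/.snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2

    # parity is then computed on a schedule rather than on every write
    snapraid sync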

If I go with Proxmox, I think I would use OpenMediaVault to set up the shares.

Is anyone using something like this? What are your recommendations?

Thanks!

  • Kwa@derpzilla.net (OP) · 9 months ago

    Don’t I need mergerfs and snapraid with BTRFS?

    Also, it’s not clear what LXD/Incus replaces. Is it Proxmox or Proxmox + OMV?

    • TCB13@lemmy.world · 9 months ago

      Don’t I need mergerfs and snapraid with BTRFS?

      No. It’s just a FS like any other… actually it’s a proper filesystem unlike Ext4. Why would you need those tools? https://wiki.tnonline.net/w/Btrfs/Profiles
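
      For example (made-up device names, just a sketch): a multi-device btrfs filesystem lets you pick separate profiles for data and metadata and grow it later:

          # data spread across devices as 'single', metadata mirrored
          mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
          mount /dev/sdb /mnt/pool

          # add another disk later and rebalance onto it
          btrfs device add /dev/sdd /mnt/pool
          btrfs balance start /mnt/pool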

      Also, it’s not clear what LXD/Incus replaces. Is it Proxmox or Proxmox + OMV?

      It replaces Proxmox and can run both containers and VMs, but it isn’t an entire OS; it’s just something you can install on a clean Debian system (from the Debian repository) and enjoy.
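
      Roughly something like this, assuming your Debian release ships the incus package (depending on the version it may come from backports):

          # install Incus and run the interactive setup
          apt install incus
          incus admin init

          # containers and VMs use the same CLI
          incus launch images:debian/12 files-ct
          incus launch images:debian/12 test-vm --vm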

      Some people also like Cockpit, which comes with a nice UI, has basic virtual-machine management features, and has a Samba plugin to manage users and shares.
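
      If you want to try it: the base packages are in the Debian repos, while the Samba/shares plugin I have in mind is 45Drives’ cockpit-file-sharing, which as far as I know you install separately from their repository:

          # web UI ends up on port 9090; cockpit-machines adds VM management
          apt install cockpit cockpit-machines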

      • Kwa@derpzilla.net (OP) · 9 months ago

        Thanks, I’ve seen that Incus has an online demo, which is nice; I’ll give it a try.

        For BTRFS, if I understand correctly, I can get a similar result to mergerfs if I use the SINGLE profile. But since RAID 5/6 is unstable, it seems I would still need snapraid, or am I missing something?

        • TCB13@lemmy.world · 9 months ago

          “Need” is a strong word. But yes, btrfs RAID 5/6 is unstable, and unless you’re only after space efficiency, RAID 5/6 shouldn’t be used at all: those schemes put you in the worst possible position if something fails (and the throughput is low too). When you try to rebuild a RAID 5 with large drives, it can easily stretch into days, and in that window you’re risking the failure of a second drive and losing everything right there.
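
          As a rough back-of-envelope (assuming ~150 MB/s sustained): just reading one 10 TB drive end to end is about 10,000,000 MB ÷ 150 MB/s ≈ 18.5 hours, and a RAID 5 rebuild has to read every remaining drive while the array keeps serving normal traffic, which is how rebuilds stretch into days.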

          Btw, give Cockpit a try as well. If you don’t require many of the advanced features that Proxmox / Incus offer and just need a bunch of VMs, then it should be enough.

          • atzanteol@sh.itjust.works · 9 months ago

            “Need” is a strong word. But yes, btrfs RAID 5/6 is unstable, and unless you’re only after space efficiency, RAID 5/6 shouldn’t be used at all: those schemes put you in the worst possible position if something fails (and the throughput is low too). When you try to rebuild a RAID 5 with large drives, it can easily stretch into days, and in that window you’re risking the failure of a second drive and losing everything right there.

            I’ve had a RAID 5 for 10+ years, had drives fail, and replaced them. Rebuilds are fine and rare. It’s very unlikely to have two drives fail within a week of each other, and I don’t want to get only half my disk space. RAID 6 makes that small chance even smaller. If you’re worried about loss, you have backups - RAID is for uptime, not recovery.

            You seem to think the way you’ve done things is the one true right way to do it, and that’s not the case.

            • TCB13@lemmy.world · 9 months ago

              You seem to think the way you’ve done things is the one true right way to do it, and that’s not the case.

              Not at all, and I totally agree with what you’ve said. To clarify: I’m not the only one saying people should stay away from RAID 5/6, even large vendors say that nowadays. The issue is that when a drive fails, if you have to run around for hours to get a new one and then rebuild, a second hard drive is quite likely to fail in that time - especially if the drives have the same runtime, model, etc.

              Obviously, having a real backup solves the issue, as long as you can retrieve the data from the backup quickly and cheaply enough, and that’s not always the case.

              • atzanteol@sh.itjust.works · 9 months ago

                if you have to run around for hours to get a new one and then rebuild, a second hard drive is quite likely to fail in that time - especially if the drives have the same runtime, model, etc.

                If you’re running a 2-disk RAID-1 you have the same problem.

                And I restate - that risk is small. You’re not running a data center where you have thousands of disks to see the effect.

                • TCB13@lemmy.world · 9 months ago

                  And I restate - that risk is small. You’re not running a data center where you have thousands of disks to see the effect.

                  Fair enough, even though I’ve seen that effect in smaller setups than that.