  • NFS handles permissions based on the UID and GID of the user account accessing the share (assuming you haven’t restricted the share to a specific subnet or host IP).
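    In other words, the server never sees usernames, only numeric IDs, so the numbers have to line up on both ends. A quick way to see exactly what gets compared (the mount path here is just a placeholder):

    ```python
    # Show the numeric IDs NFS actually compares (path is a placeholder).
    import os, pwd

    st = os.stat("/mnt/media")                    # a file or dir on the mounted share
    print("share owner:", st.st_uid, st.st_gid)   # numeric UID/GID stored on the server

    print("my UID:", os.getuid(), "->", pwd.getpwuid(os.getuid()).pw_name)
    print("my GIDs:", os.getgroups())             # one of these needs to match st.st_gid
    ```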

    When you create the NFS share, assign permissions using a group with a non-standard GID (doesn’t matter what, but pick something you’ll remember, like 3000).

    How you go about that will depend on the server you’re running the NFS share on. It’ll be different for Ubuntu, TrueNAS, Unraid, etc., so read the documentation.
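    On a plain Linux host the gist of it looks something like the sketch below (the group name and export path are made up; TrueNAS and Unraid do the same thing through their GUIs):

    ```python
    # Server side: create the share group and hand the export over to it.
    # "nfsshare" and /srv/nfs/media are examples; GID 3000 is the one picked above.
    import subprocess

    subprocess.run(["groupadd", "-g", "3000", "nfsshare"], check=True)
    subprocess.run(["chgrp", "-R", "nfsshare", "/srv/nfs/media"], check=True)
    subprocess.run(["chmod", "-R", "2775", "/srv/nfs/media"], check=True)  # group rwx + setgid
    ```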

    Once that’s sorted, on each VM you need to create a group with that same GID and add the relevant users to it.
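    Something like this on each client VM (group and user names are placeholders, use whatever user your service actually runs as):

    ```python
    # Client side: recreate the group with the matching GID and add the service user.
    import subprocess

    subprocess.run(["groupadd", "-g", "3000", "nfsshare"], check=True)
    subprocess.run(["usermod", "-aG", "nfsshare", "plex"], check=True)  # example service user
    ```

    The new group membership only kicks in once the user logs back in (or the service is restarted).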

    If you’re following best practices and running services as non-root, it’s usually also necessary to change the group ownership of the mount point directories on each VM so that the group you’ve just created with GID 3000 (or whatever) is the owner.
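    For example (mount path is a placeholder):

    ```python
    # Hand the mount point over to the shared group so non-root services can write to it.
    import os, shutil

    shutil.chown("/mnt/media", group="nfsshare")  # the GID 3000 group created above
    os.chmod("/mnt/media", 0o2775)                # group rwx, setgid keeps new files in the group
    ```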

    edit: As a side note, because this tripped me up for a while - if you’re running LXCs in Proxmox, they’ll need to be privileged containers, or you need to manually enable the NFS mount feature for the LXC; otherwise it doesn’t matter what you do with permissions, you won’t be able to mount the share.
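    If memory serves, enabling it from the Proxmox host is a one-liner like the sketch below (vmid 101 is made up, and it’s worth double-checking the flag against the pct man page):

    ```python
    # Enable the NFS mount feature on an LXC from the Proxmox host (vmid is an example).
    import subprocess

    subprocess.run(["pct", "set", "101", "--features", "mount=nfs"], check=True)
    ```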



  • You should also have a look at Proxmox.

    This would allow you to run TrueNAS as a VM, as well as spin up other VMs and LXCs as needed (including a VM running Docker).

    That said, regardless of which hypervisor you commit to, I’d recommend planning to upgrade your RAM, and looking closely at whether Plex supports your GPU for hardware acceleration. (Also consider Emby instead of Plex.)

    Setting up a server to run VMs can be quite a memory hog, NAS applications and media servers in particular.

    TrueNAS is going to need 16 GB of RAM minimum right off the bat, and your media server will eat the other 16 GB before you even get started with Docker containers.
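    Rough budget from those numbers alone, with everything else still to come on top:

    ```python
    # Back-of-the-envelope RAM budget in GB, using the figures above.
    budget = {"TrueNAS VM": 16, "media server VM": 16}
    print(sum(budget.values()), "GB spoken for before any containers or extra VMs")
    ```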