Howdy selfhosters

I’ve got a bit of an interesting one that started as a learning experience, but I think I got in a bit over my head. I had been running the *arr stack via docker-compose on my old Ubuntu desktop PC. I got lucky with a recycler and managed to get a decent old workstation, and my company tossed out some 15 SAS HDDs; thankfully those worked. I finally got Proxmox set up and a few drives mounted in a ZFS pool that Plex currently reads from. Unfortunately, I failed to save a last backup copy of my old stack, though I’ll admit it was a bit messy, using gluetun with a VPN tied to a German server for P2P. I did preserve a lot of my old data, though, so the media libraries can be migrated.

I’m open to suggestions for getting the stack running again on Proxmox on the workstation. I’m not sure how best to go about it, since a host mount point can only be bind-mounted into LXC containers, and I can’t figure out how to pass the ZFS shares to a VM. I feel like I’m overcomplicating this, but I need to maintain a secure connection, since burgerland doesn’t make for the best arr stack host in my experience. It feels a bit daunting; I’ve tried to tackle it and asked a few LLMs to write me up some guidelines to make it easier, but I couldn’t make that work as a way to learn.

  • Lka1988@lemmy.dbzer0.com · 21 hours ago

    For the file server conundrum, something to keep in mind is that Proxmox is not NAS software and isn’t really set up for that kind of thing. Plus, the Proxmox devs have been very clear about not installing anything that isn’t absolutely necessary alongside Proxmox on the same machine.

    However, you can set up a file server inside an LXC and share that through an internal VLAN inside Proxmox. Just treat that LXC as a NAS.
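    For example, assuming a Debian-based “NAS” LXC with the pool bind-mounted at /mnt/storage (the paths and subnet here are placeholders, not from the thread), an NFS export from that container could be as small as:

    ```conf
    # /etc/exports inside the NAS LXC
    # Export the pool read-write to the internal VLAN subnet
    /mnt/storage  10.10.10.0/24(rw,sync,no_subtree_check)
    ```

    Then run `exportfs -ra` to apply it. Note that running an NFS server inside an unprivileged LXC can be fiddly; a privileged container (or Samba) is often the path of least resistance.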

    For your *arr stack, fire up an exclusive VM just for them. Install Docker on the VM, too, of course.
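    On the VPN worry from the original post: a common pattern is to run gluetun as its own container and route only the download client through it. A minimal docker-compose sketch, where the provider settings and paths are assumptions you’d fill in yourself:

    ```yaml
    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          - VPN_SERVICE_PROVIDER=custom  # set to your actual provider
        ports:
          - "8080:8080"  # qBittorrent UI, published via gluetun
      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent
        network_mode: "service:gluetun"  # all traffic exits via the VPN
        volumes:
          - /mnt/storage/downloads:/downloads
      sonarr:
        image: lscr.io/linuxserver/sonarr
        ports:
          - "8989:8989"
        volumes:
          - /mnt/storage:/storage
    ```

    gluetun’s built-in firewall blocks traffic if the tunnel drops, so the download client can’t leak onto the bare connection.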

    LLMs

    If you’re gonna use that, please make sure you comb through the output and understand it before implementing it.

    • standarduser@lemmy.dbzer0.com (OP) · 22 minutes ago

      I was able to follow what you said with another comment’s YouTube video. I appreciate it. The LLMs were more of an “explain this to me in simpler terms” or “why doesn’t this work” thing, just because I was tired after work most of the time. It helped, but that was also months ago, with limited time to record much.

    • gaylord_fartmaster@lemmy.world · 18 hours ago

      On the other hand, I’ve been mounting my storage drives on the Proxmox host with mergerfs and exposing what I need to the LXCs with bind mounts for years, and I haven’t had a single issue with it across multiple major version upgrades.
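      For reference, a mergerfs pool on the host is typically just one fstab line (the branch paths and options here are illustrative, not this commenter’s exact setup):

      ```conf
      # /etc/fstab on the Proxmox host: merge /mnt/disk1..N into /mnt/storage
      /mnt/disk*  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
      ```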

        • gaylord_fartmaster@lemmy.world · 27 seconds ago

          Super simple: maybe 30 minutes to set up mergerfs, and then the bind mounts are a few lines added to the LXC config files at most. This isn’t strictly necessary, but I have users set up on the Proxmox host with access to specific directories. They’re kind of a pain in the ass to remap to the LXC users, but they were needed to give my *arr stack access to everything it needs without giving it the entire storage pool. Note that hard links won’t work across multiple bind mounts, because the container sees them as separate file systems; so if your setup is /mnt/storage/TV, /mnt/storage/downloads, etc., you’d have to pass just /mnt/storage as the bind mount.
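          Concretely, a whole-pool bind mount is a single line in the container’s config (the container ID and mount path here are hypothetical):

          ```conf
          # /etc/pve/lxc/101.conf -- bind /mnt/storage from the host into the LXC
          mp0: /mnt/storage,mp=/mnt/storage
          ```

          The equivalent one-liner is `pct set 101 -mp0 /mnt/storage,mp=/mnt/storage`. Because everything lives on one mount, Sonarr/Radarr can hard link from downloads into the library instead of copying.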

    • standarduser@lemmy.dbzer0.com (OP) · 22 hours ago

      I’m mostly worried about any of the network traffic leaking, since I’m not particularly sure how to get a VPN working on just the LXC containers while still connecting to the ZFS shares.

    • standarduser@lemmy.dbzer0.com (OP) · 36 minutes ago

      Oh shit, this is very similar! I forgot about this dude’s GitHub; that was my guide last time. Thank you, thank you for this!

    • standarduser@lemmy.dbzer0.com (OP) · 25 minutes ago

      I was just reading about it in the other comment’s YouTube video. It had a GitHub page that said it was explicitly not recommended, too. I can see why now, after working on it last night; if this were in a professional setting, it would be horrendous.