brownmustardminion

@brownmustardminion@lemmy.ml


brownmustardminion OP ,

I have a workstation I use for video editing/vfx as well as gaming. Because of my work, I'm fortunate to have the latest high end GPUs and a 160" projector screen. I also have a few TVs in various rooms around the house.

Traditionally, if I want to watch something or play a video game, I have to go to the room with the jellyfin/plex/roku box, and I'm limited to the work/gaming rig for games. I can't run renders and game at the same time. Buying an entire new PC so I can do both is a massive waste of money. If I want to do a test screening of a video I'm working on to see how it displays on various devices, I have to transfer the file around to those devices. This is limiting and inefficient for me.

I want to be able to go to any screen in my house: my living room TV, my large projector in my studio room, my tablet, or even my phone and switch between:

  • my workstation display running on a Windows 10 VM
  • my Linux VM with YouTube or a Jellyfin player that I use as a daily driver
  • a Fedora or Windows VM dedicated to gaming, or maybe SteamOS
  • a second gaming VM, so a friend can come over for a LAN party and we can both game without setting up a second rig
  • an LLM or Stable Diffusion server, without having to buy a new GPU with enough VRAM to run SDXL
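
To sketch what I mean: in Proxmox, handing a whole GPU to a VM is one `qm set` command per card. The VM IDs and PCI addresses below are made up (you'd find the real ones with `lspci -nn`), and the commands are echoed rather than executed so the sketch is safe to run anywhere.

```shell
# Hypothetical VM IDs and PCI addresses; substitute your own.
WORKSTATION_VM=101
GAMING_VM=102

# Compose the Proxmox command that passes a whole GPU through to a VM
# via VFIO. Printed, not executed, so this is a dry run.
gpu_passthrough_cmd() {
  local vmid="$1" pci_addr="$2"
  echo "qm set ${vmid} -hostpci0 ${pci_addr},pcie=1,x-vga=1"
}

gpu_passthrough_cmd "$WORKSTATION_VM" 0000:01:00
gpu_passthrough_cmd "$GAMING_VM" 0000:02:00
```

Each VM then owns its card outright, so two high-priority sessions at once just means two GPUs.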
brownmustardminion OP ,

Maybe my situation is just unique, but due to my job I'm able to have a single workstation with multiple high-VRAM GPUs. I wouldn't be able to justify the cost of buying new GPUs and an entire rig just for gaming or AI image/video generation. I don't foresee more than two VMs using the GPUs at high priority at any one time.

When I'm not working, this system sits idle or runs renders. Why not utilize the amazing resources I have to serve my other needs?

brownmustardminion OP , (edited )

I run a few servers myself with proxmox. FYI there is a script that removes that nag screen as well as configures some other useful things for proxmox self-hosters.

https://tteck.github.io/Proxmox/

brownmustardminion OP ,

That’s such a weird leap in logic to jump to. Are you okay?

brownmustardminion OP , (edited )

I’m not the one making wild accusations about somebody wanting to selfhost a gpu server to edit…incest porn or whatever it is you’re on about.

No idea what lie you think I’m telling. 🤷‍♂️

brownmustardminion OP ,

How are you handling displays and keyboard/mouse? Also what VM software?

brownmustardminion OP ,

I’m curious about a more in-depth breakdown of your setup, if you don’t mind. What is the latency like, and how are you handling switching?

brownmustardminion OP ,

Have you tried, or do you have any knowledge about, utilizing the display ports on the GPU while virtualizing, either in lieu of or in tandem with streaming displays?

brownmustardminion OP ,

Hmm. I’m running a 3090 and a 4090. Looks like vGPU isn’t possible yet for those cards.

brownmustardminion ,

Do you mean leantime.io?

brownmustardminion ,

Can I hijack this thread to ask if any of these recommendations have iOS apps? Vikunja looks the most enticing to me, but it seems they don’t have an iOS app, sadly.

Nextcloud appreciation post

After months of waiting, I finally got myself an instance with Libre Cloud. I was expecting basic file storage with a few goodies but boy, this is soooo much more. I am amazed by how complete this is!!! Apps let me configure my instance to fit everything I need, my workflow is now crazy fast, and I can finally say goodbye to...

brownmustardminion ,

I’m a massive Nextcloud fan and have a server up and running for many years now.

But I understand all of the downvoted commenters. It is clunky and buggy as hell at times. Maybe it’s less noticeable when you’re running a single-user instance, but once you have non-tech-literate users on it, you begin to notice how inferior it is in some respects to the big boys like Google Drive.

That said, I personally have a decent tolerance for fiddling and slight frustrations as a trade off for avoiding privacy disrespecting and arguably evil corporations.

I would recommend everybody looking for a Google Drive, Dropbox, or OneDrive alternative to at least give Nextcloud a go.

brownmustardminion OP , (edited )

Underlying system is running Proxmox. From there I have the relevant two VMs: OMV and Proxmox Backup Server. The hard drives are passed into OMV as SCSI drives. I had to add them from shell as the GUI doesn’t give the option. Within OMV I have the drives in a mergerfs pool, with a shared folder via NFS that is then selected as the storage from within the Proxmox Backup Server VM. OMV has another shared folder that is used by a remote duplicati server via SSH(SFTP?), but otherwise OMV has no other shared folders or services. Duplicati/OMV have no errors. PBS/OMV worked for a couple of months before the aforementioned error cropped up.

Also possibly relevant: No other processes or services are setup to access the shared folder used by PBS.
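
For anyone wanting to replicate the disk passthrough step: it's one `qm set` per drive from the Proxmox shell. The VM ID and disk path below are placeholders (use `ls /dev/disk/by-id/` to get stable names), and the command is echoed rather than executed so this is a safe dry run.

```shell
# Hypothetical VM ID and disk; substitute your own from /dev/disk/by-id/.
OMV_VM=100
DISK="/dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL"

# Compose the command that attaches a whole physical disk to a VM as a
# SCSI device — the part the Proxmox GUI doesn't expose.
disk_passthrough_cmd() {
  local vmid="$1" slot="$2" disk="$3"
  echo "qm set ${vmid} -scsi${slot} ${disk}"
}

disk_passthrough_cmd "$OMV_VM" 1 "$DISK"
```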

brownmustardminion OP ,

Looks like my reply got purged in the server update.

Running Proxmox baremetal. Two VMs: Proxmox Backup Server and OMV. Multiple HDDs passed through directly as SCSI to OMV. In OMV they're combined into a mergerfs pool. Two shared folders on the pool: one dedicated to proxmox backups and the other for data backups. The Proxmox backup shared folder is an NFS share and the other shared folder is accessed by a remote duplicati server via SSH (sftp?). Within the proxmox backup server VM, the aforementioned NFS share is set up as a storage location.

I have no problems with the duplicati backups at all. The Proxmox Backup Server was operating fine as well initially, but began throwing the ESTALE error after about a month or two.

Is there a way to fix the ESTALE error and also prevent it from recurring?
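
One mitigation I've seen suggested for mergerfs-backed NFS exports is pinning a fixed `fsid` on the export, since unstable filesystem identity is a known cause of stale file handles; the mergerfs docs also mention `noforget` and `inodecalc` options for NFS use. A sketch, with a hypothetical pool path and subnet:

```shell
# On the OMV VM: give the mergerfs export a fixed fsid so NFS file
# handles stay stable across restarts. Path and subnet are placeholders.
EXPORT_LINE='/srv/mergerfs-pool 192.168.1.0/24(rw,fsid=1,no_subtree_check)'
echo "$EXPORT_LINE"   # review, then append to /etc/exports
# exportfs -ra        # reload the export table
```

After that, unmounting and remounting the storage on the PBS side (or rebooting that VM) should clear any already-stale handles.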

brownmustardminion OP ,

Third time posting this reply due to the lemmy server upgrade.

Proxmox on bare metal. A VM with OMV and a VM of Proxmox Backup Server. Multiple drives passed through to OMV, and then mergerfs pools them together. That pool has two main shared folders. One is for a remote duplicati server that connects via SFTP. The other is an NFS share for PBS. The PBS VM uses the NFS shared folder as storage. Everything worked until recently, when I started getting ESTALE errors. Duplicati still works fine.

brownmustardminion OP ,

Thanks so much for the detailed reply. I have about 20TB of data on the disks otherwise I would take your advice to set up a different scheme. Luckily, as it's a backup server I don't need maximum speed. I set it up with mergerfs and snapraid because I'm essentially recycling old drives into this machine and that setup works pretty well for my situation.

The Proxmox host is the default (ext4/LVM, I believe). The drives are also all ext4. I very recently did a data drive upgrade, and besides some timestamp discrepancies likely due to rsync, the SCSI semi-virtualized thing wasn't an issue. I replaced the old drive with a larger one, hooked the old one up to a USB dongle, passed it through to OMV, and was able to transfer everything and get my new data drive hooked back into the mergerfs pool and snapraid. I'll do a test and see if I can still access the files directly on the Proxmox host, just for educational purposes.

I'll try to re-mount the NFS and see where that gets me. I'm also considering switching to a CIFS/SMB share as another commenter had posted. Unless that is susceptible to the same estale issue. I won't be back at that location for about a week so I might not have an update for a little while.

brownmustardminion ,

An AI movie would likely be an improvement over the dog shit Amazon and Netflix put out. The streaming services make content chasing algorithms. Sometimes they get lucky and find a legitimately good indie film they can slap their “Netflix Original” branding on. Rest assured they actually had nothing to do with its production and just bought it after the fact. The stuff they produce from scratch is usually the worst.

brownmustardminion OP ,

Your question is a good one. I'm not the one who downvoted you fyi. To answer your question, it is absolutely a personal anecdote based on my own experimentation. I'm sure others will add their own experiences. Based on my experiences there's no doubt about twitch shadowbanning based on VPN use. I'll admit I don't have a basis for Linux and adblockers being a part of the equation, but I made it clear in my original post that those were assumptions.

To further speculate, I have an idea that the shadowban may actually be triggered by somebody else on the same VPN server doing something that trips it, affecting everybody else on that server. I can't possibly provide evidence for that theory, but it would explain the seemingly random nature of the shadowbans.

brownmustardminion ,

I would suggest trying wireguard first as it’s much less complex to set up. Once you have a handle on that, you might consider moving to a mesh network. I personally would love to use a mesh network, but have not been able to get it configured correctly the few times I’ve tried.
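
To give a sense of how small the setup is, a bare-bones point-to-point WireGuard tunnel is just two short config files. Everything below (keys, addresses, the endpoint hostname) is a placeholder:

```ini
# Server: /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32

# Client: /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```

Generate keys with `wg genkey | tee privatekey | wg pubkey > publickey`, then `wg-quick up wg0` on each side brings the tunnel up.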

brownmustardminion , (edited )

I've tried Nebula before but couldn't get it running properly on all devices. How is Tailscale in terms of compatibility and can you also use wireguard simultaneously? Mesh networks are great for connecting my own devices and servers, but I still need a wireguard interface for certain servers to provide public access through a public router. I also ran into a major issue setting up Nebula on my laptop in which it couldn't be used without disabling my VPN. Is any of that a problem with Tailscale? Also, is Tailscales coordination server self hostable or do you have to use theirs? That seems like a dealbreaker if you’re forced to use a third party coordinator

brownmustardminion OP ,

Forwarded mail but it may be two way in the future so it would probably be smart to just go that route from the beninging.

brownmustardminion OP ,

I ended up going with migadu. Seems great so far. Already up and running with 3 domains and dozens of aliases.

brownmustardminion OP ,

Problem solved. The firewall was attempting to pass traffic through the default gateway. You have to create a firewall rule to allow whatever traffic you want but in the advanced settings you need to select the wireguard gateway instead.

brownmustardminion OP ,

amazonads has already been blocked, but I just blocked amazon and am waiting to see if that does the trick.

brownmustardminion OP ,

I'm using a pretty good VPN and I still get ads.

brownmustardminion ,

I tried a couple but had no luck running them in VMs so I gave up.

brownmustardminion ,

Yep. Also as extra protection from any phoning home to Topaz. It’s not possible to run the software fully firewalled, since it needs to download the AI models the first time you try to run anything.

brownmustardminion ,

I haven't.

brownmustardminion OP ,

It’s really that much of a hassle to fiddle with the volume sizes?

brownmustardminion OP ,

You suggested just adding the ISOs to local-lvm. Do you think it would be feasible to simply delete the local storage completely and then extend local-lvm afterward, storing the ISOs there? I know extending volumes is much simpler than shrinking them, and I imagine deleting a volume completely is also easier than shrinking it?

brownmustardminion OP , (edited )

That's a beauty for sure. Do you find it limiting that it has a 25" maximum rip? Adding it to my wishlist, but as of now $300 or so is probably my limit.

Locally that fence costs nearly $700! sheesshhhh

EDIT: Hold on....may have found it for $400. :)

brownmustardminion OP ,

Got a recommendation for a good gauge while we're at it? 1/128th would be a dream.

brownmustardminion OP ,

Due to my understanding of it, I was hesitant to use AC recovery in the case that the power goes down more than once in a short period. It could drain the UPS to the point that it might not be able to sustain enough runtime for a proper shutdown. But I'm also a bit confused about the setup here. If the server is sent a signal to shutdown due to a grid outage, who is telling it the grid was restored? The server would always detect power because of the battery backup, so I don't think AC Power Recovery would work in this case, no? I believe I have the UPS comm server (probably apcupsd) installed on the server itself, so there's no way for it to know to wake up unless from an outside source.

Maybe you have some further insight into how to make that setup work properly.

I'm brainstorming here, but would it be possible/feasible to have the Unifi Dream Machine execute a script every time it turns on, telling the server's iDRAC to power up? I'd have to see if the UDM has that ability as well. The UDM turning on would only really happen if power was restored after an outage. Otherwise, I could send the command manually once I have access to the network.
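
Roughly what I'm picturing for that boot script, assuming IPMI over LAN is enabled on the iDRAC. The address and username are placeholders, and the command is echoed rather than executed so the sketch is safe to run:

```shell
# Hypothetical iDRAC address and user; the real password would come
# from a secrets file on the UDM, not the script itself.
IDRAC_HOST=192.168.1.50
IDRAC_USER=root

# Compose the IPMI-over-LAN power-on command the UDM would fire at boot.
# Printed, not executed, so this is a dry run.
power_on_cmd() {
  echo "ipmitool -I lanplus -H $1 -U $2 -P <password> chassis power on"
}

power_on_cmd "$IDRAC_HOST" "$IDRAC_USER"
```

Since the UDM only boots when grid power returns, this sidesteps the "server always sees battery power" problem entirely.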
