
vegetaaaaaaa

@vegetaaaaaaa@lemmy.world


vegetaaaaaaa , to Selfhosted in How do you manage your server files?

sftp://USERNAME@SERVER:PORT in the address bar of most file managers will work. You can omit the port if it's the default (22), and omit the username if it's the same as your local user.

You can also add the server as a favorite/shortcut in your file manager sidebar (it works at least in Thunar and Nautilus). Or you can edit ~/.config/gtk-3.0/bookmarks directly:

file:///some/local/directory
file:///some/other/directory
sftp://my.example.org/home/myuser my.example.org
sftp://otheruser@my.example.net:2222/home/otheruser my.example.net
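
Adding a bookmark from the command line is a one-liner (the hostname/user here are just examples); Thunar/Nautilus pick up the new entry the next time they start:

```shell
# Append an SFTP bookmark to the GTK3 bookmarks file (example hostname/user)
mkdir -p "$HOME/.config/gtk-3.0"
echo 'sftp://otheruser@my.example.net:2222/home/otheruser my.example.net' >> "$HOME/.config/gtk-3.0/bookmarks"
```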
vegetaaaaaaa , to Selfhosted in How responsive is your Nextcloud?

Quite fast.

KVM/libvirt VM with 4GB RAM and 4 vCores, shared with a dozen other services. Storage is not the fastest (qcow2-backed disks on an ext4 partition inside a LUKS volume on a 5400RPM hard drive... I might move it to an SSD sometime soon), so features highly dependent on disk I/O (thumbnailing) are sometimes sluggish. There is an occasional slowdown, I suppose caused by APCu caches periodically being dropped, but once a page is loaded and the cache is warmed up, it becomes fast again.

Standard apache + php-fpm + postgresql setup as described in the official Nextcloud documentation, automated through this ansible role.
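
For reference, the APCu caching mentioned above comes down to a few lines in Nextcloud's config/config.php (the Redis locking part is optional and the host/port are examples):

```
// config/config.php fragment - local cache via APCu, file locking via Redis
'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => ['host' => 'localhost', 'port' => 6379],
```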

vegetaaaaaaa , (edited ) to Selfhosted in What's a simple logging service?

Syslog over TCP with TLS (you don't want those sweet packets containing sensitive data leaving your box unencrypted). Bonus points for mutual authentication between the server and clients (just got it working and it's 👌 - my implementation here).

It solves the aggregation part but not the viewing/analysis part. I usually use lnav on simple setups (with gotty as a poor man's web interface for lnav when needed), and graylog on larger ones (definitely costly in terms of RAM and storage, though).
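
For the client side, a minimal rsyslog forwarding config over TLS looks something like this (a sketch using rsyslog's gtls netstream driver - paths, hostname and cert names are examples, and your CA setup is assumed to already exist):

```
# /etc/rsyslog.d/forward-tls.conf - forward all logs over TCP+TLS (port 6514)
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/ssl/private/ca.pem"
  DefaultNetstreamDriverCertFile="/etc/ssl/private/client-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/ssl/private/client-key.pem"
)
action(type="omfwd" target="logs.example.org" port="6514" protocol="tcp"
       StreamDriverMode="1" StreamDriverAuthMode="x509/name"
       StreamDriverPermittedPeers="logs.example.org")
```

With client certificates issued from the same CA, the server can require them in turn, which gets you the mutual authentication part.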

vegetaaaaaaa , to Selfhosted in Selfhost wiki (personal)

Obfuscation can be helpful in not disclosing some of your services or naming schemes

The "obfuscation" benefits of wildcard certificates are very limited (public DNS records can still easily be found with tools such as sublist3r), and they're definitely a security liability: if the private key of the cert is stolen from a single server, TLS is potentially compromised on all your servers using the wildcard cert.

vegetaaaaaaa , to Selfhosted in Sanity Check. Docker vs Incus (LXD)

VMs have a lot of additional overhead.

The overhead is minimal: KVM VMs have near-native performance (type 1 hypervisor). There is some memory overhead as each VM runs its own kernel, but much of it is offset by KSM [1], a memory de-duplication mechanism.

Each VM runs its own system services (think systemd, logging, etc.), so there is some memory/disk usage overhead there - but it would be the same with Incus/LXC, as they do the same thing (the only thing they share is the kernel).

https://serverfault.com/questions/225719/so-really-what-is-the-overhead-of-virtualization-and-when-should-i-be-concerned
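
You can check whether KSM is active and how much it is de-duplicating straight from sysfs (these knobs only exist on Linux kernels built with CONFIG_KSM, hence the fallback):

```shell
# Print KSM (kernel same-page merging) status and stats from sysfs
for f in run pages_shared pages_sharing; do
  p="/sys/kernel/mm/ksm/$f"
  if [ -r "$p" ]; then
    printf '%s: %s\n' "$f" "$(cat "$p")"   # run=1 means KSM is enabled
  else
    printf '%s: unavailable\n' "$f"        # kernel without CONFIG_KSM
  fi
done
```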

I usually go for bare metal > on top of that, multiple VMs separated by context (think "tenant", production/testing, public/confidential/secret, etc.) > applications running inside the VMs (containerized or not). VMs provide strong isolation which containers do not - at the very minimum it's good to have separate VMs for "serious business" and "lab" contexts. Service/application isolation through namespaces/systemd has come a long way (see man systemd-analyze security) - for me the benefit of containerization is mostly ease of deployment, and... ahem, running inscrutable binary images with out-of-date dependencies made by strangers on the Internet.
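
To see what systemd itself thinks of a unit's sandboxing, something like this works ("sshd.service" is just an example unit name; the fallback covers environments without a running systemd):

```shell
# Print systemd's exposure score for a unit (lower = better sandboxed)
if command -v systemd-analyze >/dev/null 2>&1 \
   && systemd-analyze security sshd.service >/dev/null 2>&1; then
  systemd-analyze security sshd.service | tail -n 1
else
  echo "systemd-analyze security unavailable in this environment"
fi
```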

If you go for a containerization solution on top of your VMs, I suggest looking into podman as a replacement for Docker (fewer bugs, smaller attack surface, no single point of failure in the form of a 1-million-lines-of-code daemon running as root, more unix-y, better integration with systemd [2]). But be aware of the maintenance overhead caused by containerization: if you're serious about it you will probably end up maintaining your own images.
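
The systemd integration can be as simple as a quadlet unit (podman >= 4.4) - a sketch, with an example image and port:

```
# ~/.config/containers/systemd/myapp.container - systemd supervises the container
[Unit]
Description=Example containerized service

[Container]
Image=docker.io/library/nginx:stable
PublishPort=8080:80

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the container starts and stops like any other systemd service.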

vegetaaaaaaa , to Selfhosted in Now that vmware is over, what should I move to?

“buggy as fuck” because there's a bug that makes it so you can't easily run it if your locale is different from English?

It sends a pretty bad signal when it causes a crash on the first lxd init (sure, I could make the case that there are workarounds - switch locales, create the bridge - but that doesn't help make it appear as a better solution than Proxmox). Whatever you call it, it's a bad-looking bug, and the fact that it was not patched in Debian stable or backports makes me think there might be further hacks needed down the road for other stupid bugs like this one. So for now, hard pass on the Debian package (I might file a bug on the BTS later).

About the link, Proxmox kernel is based on Ubuntu, not Debian…

Thanks for the link mate. Proxmox kernels are based on Ubuntu's, which are in turn based on Debian's - I'm not arguing about that - but I was specifically referring to this comment:

having to wait months for fixes already available upstream or so they would fix their own shit

Any example/link to bug reports of such fixes not being applied to Proxmox kernels? Asking so I can raise an orange flag before it gets adopted without due consideration.

vegetaaaaaaa , to Selfhosted in Authelia Docker Image outdated?

i was just worried that the libraries in the container image are outdated

They actually are - trivy scan of authelia/authelia:latest: https://pastebin.com/raw/czCYq9BF
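
You can reproduce this kind of scan yourself - roughly like so (requires trivy installed and network access to pull the image and vulnerability DB, hence the guards):

```shell
# Scan a container image for known CVEs, showing only the serious ones
if command -v trivy >/dev/null 2>&1; then
  trivy image --severity HIGH,CRITICAL authelia/authelia:latest \
    || echo "scan did not complete (offline?)"
else
  echo "trivy not installed; command shown for illustration only"
fi
```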

vegetaaaaaaa , (edited ) to Selfhosted in Now that vmware is over, what should I move to?

DO NOT migrate / upgrade anything to the snap package

It was already in place when I came in (made me roll my eyes), and it's a mess. As you said, there's no proper upgrade path to anything else. So anyway...

you should migrate into LXD LTS from Debian 12 repositories

The LXD version in Debian 12 is buggy as fuck - this patch has not even been backported (https://github.com/canonical/lxd/issues/11902) and 5.0.2-5 is still affected. It was a dealbreaker in my previous tests, and it doesn't inspire confidence in the bug testing and patching process for this particular package. On top of that, it will be hard to convince the other guys that we should ditch Ubuntu and its shenanigans and migrate to good old Debian (especially if the lxd package is in such a state). Some parts of the job are cool, but I'm starting to see there's strong resistance to change, so as I said, path of least resistance.

Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable?

vegetaaaaaaa , (edited ) to Selfhosted in Now that vmware is over, what should I move to?

clustering != HA

The "clustering" in libvirt is limited to remote-controlling multiple nodes and migrating guests between them. To get the high-availability part, you need to set it up through other means, e.g. pacemaker and a bunch of scripts.
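
As a rough sketch of what that looks like with pacemaker's VirtualDomain resource agent (this assumes an already-working pacemaker cluster managed with pcs; the VM name "vm1" and paths are examples, and the guards keep it harmless where the tools are absent):

```shell
# Put a libvirt VM under pacemaker control so it fails over between nodes
if command -v pcs >/dev/null 2>&1 && command -v virsh >/dev/null 2>&1; then
  # dump the VM definition to a file readable by all cluster nodes
  virsh dumpxml vm1 > /tmp/vm1.xml || echo "no VM named vm1 here"
  pcs resource create vm1 ocf:heartbeat:VirtualDomain \
    config=/tmp/vm1.xml migration_transport=ssh \
    meta allow-migrate=true || echo "pcs resource create failed (no cluster?)"
else
  echo "pcs/virsh not installed; commands shown for illustration only"
fi
```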

vegetaaaaaaa , to Selfhosted in Bad 4K Performance on Jellyfin

but more like playing a video game and it drops down to 15fps

Likely not a server-side problem (check CPU usage on the server): if the server were struggling to transcode, I think it would result in the playback pausing and resuming when the encoder catches up, and network/bandwidth problems would result in buffering. This looks like a client-side playback performance problem. What client are you using? Try multiple clients (use the web interface in a browser as a baseline) and see if it makes any difference.

vegetaaaaaaa , to Selfhosted in Password Manager that supports multiple databases/syncing?

Why not self host vaultwarden?

How does that work when your vaultwarden instance goes down for some reason? Lose access to passwords? Or does the browser extension still have access to a cached copy of the db?

vegetaaaaaaa , to Selfhosted in Now that vmware is over, what should I move to?

The migration is bound to happen in the next few months, and I can't recommend moving to incus yet since it's not in stable/LTS repositories for Debian/Ubuntu, and I really don't want to encourage adding third-party repositories to the mix - they are already widespread in the setup I inherited (new gig), and part of the major clusterfuck that is upgrade management (or the lack thereof). I really want to standardize on official distro repositories. On the other hand, the current LXD packages are provided by snap (...), so that would still be an improvement, I guess.

Management is already sold to the idea of Proxmox (not by me), so I think I'll take the path of least resistance. I've had mostly good experiences with it in the past, even if I found their custom kernels a bit strange to start with... do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable? I'd still like to put a word of caution about that.

vegetaaaaaaa , to Selfhosted in Now that vmware is over, what should I move to?

I should RTFM again... https://manpages.debian.org/bookworm/libvirt-clients/virsh.1.en.html has options for virsh migrate such as --copy-storage-all... Not sure how it would work for actual live migrations, but I will definitely check it out. Thanks for the hint!
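
For the record, a live migration that also copies local disk images (i.e. no shared storage) would look roughly like this - the VM name and destination host are examples, and the guards make it a no-op where libvirt isn't installed:

```shell
# Live-migrate "vm1" to another hypervisor, streaming its disks along
if command -v virsh >/dev/null 2>&1; then
  virsh migrate --live --copy-storage-all --persistent --verbose \
    vm1 qemu+ssh://root@dest.example.org/system \
    || echo "migration did not run (no such VM/host here)"
else
  echo "virsh not installed; command shown for illustration only"
fi
```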

vegetaaaaaaa , (edited ) to Selfhosted in Now that vmware is over, what should I move to?

Did you read? I specifically said it didn't, at least not out-of-the-box.

vegetaaaaaaa , to Selfhosted in How many PostgreSQL services?

Would it be better to just have one PostgreSQL service running that serves both Nextcloud and Lemmy

Yes, both performance- and maintenance-wise.

If you're concerned about database maintenance bringing down multiple services (can't remember the last time I had to do this... once every few years, to migrate postgres clusters to the next major version?), set up master-slave replication and be done with it.
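
On Debian-family systems, that occasional major-version migration is mostly handled by the postgresql-common tooling - a sketch (versions and cluster names are examples; the upgrade line is left commented out since it actually rewrites data, and the guard covers machines without postgres):

```shell
# List clusters, then (commented) migrate one to the next major version
if command -v pg_lsclusters >/dev/null 2>&1; then
  pg_lsclusters
  # pg_upgradecluster 15 main   # would create a 16/main cluster and migrate data
else
  echo "postgresql-common not installed"
fi
```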
