vegetaaaaaaa (@vegetaaaaaaa@lemmy.world):

> VMs have a lot of additional overhead.

The overhead is minimal: KVM VMs have near-native performance (KVM is a type 1 hypervisor). There is some memory overhead since each VM runs its own kernel, but much of this is offset by KSM [1], a memory de-duplication mechanism.
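To see what KSM is actually saving you, the kernel exposes counters under /sys/kernel/mm/ksm. A minimal sketch in Python (assuming a Linux hypervisor with KSM enabled and 4 KiB pages - both my assumptions, check your host):

```python
#!/usr/bin/env python3
"""Rough sketch: report KSM de-duplication savings on a KVM host.

Assumptions (mine, not from the post): a Linux hypervisor with KSM
enabled, the standard /sys/kernel/mm/ksm interface, and 4 KiB pages.
"""
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")
PAGE_SIZE = 4096  # assumed; see getconf PAGE_SIZE for the real value

def stat(name: str) -> int:
    return int((KSM / name).read_text())

if __name__ == "__main__":
    if not KSM.exists():
        raise SystemExit("no /sys/kernel/mm/ksm - kernel built without KSM?")
    print("KSM running:", stat("run") == 1)
    # Per the kernel docs: pages_shared = de-duplicated pages kept,
    # pages_sharing = additional mappings onto them, i.e. pages saved.
    saved = stat("pages_sharing") * PAGE_SIZE
    print(f"pages_shared:  {stat('pages_shared')}")
    print(f"approx. saved: {saved / 2**20:.1f} MiB")
```

Note that KSM only merges pages explicitly marked as mergeable (QEMU does this for guest RAM by default), so the savings only show up once a few similar guests are running.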

Each VM also runs its own system services (think systemd, logging, etc.), so there is some memory/disk usage overhead there - but it would be the same with Incus/LXC, as those containers do the same thing and only share the host kernel.
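If you want to put a number on that per-VM baseline, one way is to sum the proportional set size (Pss) of everything running in an idle guest. A rough sketch, under my own assumptions (Linux 4.14+ for smaps_rollup, run as root inside the guest):

```python
#!/usr/bin/env python3
"""Rough sketch: measure the per-VM baseline by summing the proportional
set size (Pss) of every process in an idle guest.

Assumptions (mine): Linux >= 4.14 (for /proc/<pid>/smaps_rollup), run as
root inside the guest so every process is readable.
"""
from pathlib import Path

total_kib = 0
for smaps in Path("/proc").glob("[0-9]*/smaps_rollup"):
    try:
        for line in smaps.read_text().splitlines():
            if line.startswith("Pss:"):          # "Pss:  1234 kB"
                total_kib += int(line.split()[1])
                break
    except (OSError, ValueError):
        continue  # process exited mid-scan or unreadable; skip it

print(f"baseline memory used by system services: {total_kib / 1024:.1f} MiB")
```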

https://serverfault.com/questions/225719/so-really-what-is-the-overhead-of-virtualization-and-when-should-i-be-concerned

My usual stack, from the bottom up:

- Bare metal.
- On top of that, multiple VMs separated by context (think "tenant", production/testing, public/confidential/secret, etc.). VMs provide strong isolation which containers do not; at the very minimum it's good to have separate VMs for "serious business" and "lab" contexts.
- Applications running inside the VMs, containerized or not - service/application isolation through namespaces/systemd has come a long way (see `systemd-analyze security` in `man systemd-analyze`, and the sketch after this list). For me the benefit of containerization is mostly ease of deployment and... ahem, running inscrutable binary images with out-of-date dependencies made by strangers on the Internet.
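Since `systemd-analyze security` came up: you can script a quick audit around its exposure scores. A best-effort sketch (the output is human-oriented and not a stable API, so the parsing and the 8.0 threshold are my own assumptions):

```python
#!/usr/bin/env python3
"""Rough sketch: flag weakly-sandboxed services using the exposure score
from `systemd-analyze security`.

Caveats (mine): the output format is human-oriented, so this parsing is
best-effort, and the 8.0 threshold is an arbitrary choice.
"""
import subprocess

THRESHOLD = 8.0  # roughly: 0.0 = tightly sandboxed, ~10.0 = wide open

out = subprocess.run(
    ["systemd-analyze", "security", "--no-pager"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    parts = line.split()
    # rows look roughly like: "nginx.service  9.2  UNSAFE  ..."
    if len(parts) >= 2 and parts[0].endswith(".service"):
        try:
            exposure = float(parts[1])
        except ValueError:
            continue  # header or malformed row
        if exposure >= THRESHOLD:
            print(f"{parts[0]}: exposure {exposure:.1f} - worth hardening")
```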

If you go for a containerization solution on top of your VMs, I suggest looking into podman as a replacement for Docker: fewer bugs, less attack surface, no single point of failure in the form of a 1-million-lines-of-code daemon running as root, more unix-y, better integration with systemd [2]. But be aware of the maintenance overhead caused by containerization: if you're serious about it, you will probably end up maintaining your own images (see the sketch below).
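For a feel of what "maintaining your own images" looks like day to day, the loop is basically: re-pull the base, rebuild, redeploy. A minimal sketch with podman (image names are placeholders, podman assumed installed):

```python
#!/usr/bin/env python3
"""Rough sketch of the "maintain your own images" chore: re-pull the base
image, then rebuild from your own Containerfile, so security fixes land
in your image instead of waiting on a stranger's.

Assumptions (mine): podman is installed, a Containerfile sits in the
current directory, and both names below are placeholders.
"""
import subprocess

BASE_IMAGE = "docker.io/library/debian:bookworm"  # hypothetical base
LOCAL_TAG = "localhost/myapp:latest"              # hypothetical tag

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("podman", "pull", BASE_IMAGE)             # refresh the base image
run("podman", "build", "-t", LOCAL_TAG, ".")  # rebuild on top, daemonless
```

podman also ships an auto-update mechanism for suitably labeled containers, which can take over the re-pull part, but the rebuild of your own images stays on you.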
