
The easiest way to run local LLMs on older hardware is Llamafile
https://github.com/Mozilla-Ocho/llamafile
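
For anyone who hasn't tried it: a llamafile is a single self-contained executable that bundles the model weights together with llama.cpp, so there's nothing to install. A minimal run looks roughly like this (the filename below is just a placeholder; grab an actual llamafile from the download list in the repo's README):

```sh
# Make the downloaded llamafile executable (Linux/macOS; on Windows you rename it to .exe instead)
chmod +x Llama-3.2-1B-Instruct.llamafile

# Run it; by default it starts a local server with a built-in chat UI
./Llama-3.2-1B-Instruct.llamafile

# Then open http://localhost:8080 in a browser
```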

For non-NVIDIA GPUs, WebGPU is the way to go
https://github.com/abi/secret-llama
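
secret-llama runs inference entirely in the browser via WebGPU, so the only hard requirement is a browser that exposes `navigator.gpu`. Here's a rough TypeScript sketch of the same idea using the @mlc-ai/web-llm package, which I believe secret-llama builds on; the model ID and API names are from my reading of web-llm's docs, so double-check them against the current release:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Bail out early if the browser doesn't expose WebGPU at all
  if (!("gpu" in navigator)) {
    throw new Error("WebGPU not available in this browser");
  }

  // Downloads and compiles the model in-browser; the model ID is an example,
  // check web-llm's prebuilt model list for what's actually available
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC");

  // OpenAI-style chat completion, except everything runs locally on your GPU
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Why is the sky blue?" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```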
