evranch,

Do it, it's easy and fun, and you'll learn about the actual capabilities of the tech. I started a week ago and I'm a convert on the utility of local AI. I had to go back to Reddit for it, but r/localllama has tons of good info. You can actually run useful models at a conversational pace.

This whole thread is silly because VRAM is what you need. I'm running some pretty good coding and general-knowledge models on a 12GB Radeon, and almost none of my 32GB of system RAM is used, lol. Either Microsoft is out of touch or they're hiding an amazing new algorithm.
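To see why a 12GB card is enough, here's a rough back-of-the-envelope estimate (my own illustrative numbers, not measurements from the comment above): a quantized 13B model at roughly 4.5 bits per weight fits comfortably, with room left for the context cache.

```python
# Back-of-the-envelope VRAM estimate for a quantized local model.
# All numbers are illustrative assumptions, not benchmarks.
params_b = 13          # model size in billions of parameters
bits_per_weight = 4.5  # typical for a Q4_K_M-style quantization
kv_cache_gb = 1.5      # rough allowance for the context/KV cache

weights_gb = params_b * bits_per_weight / 8
total_gb = weights_gb + kv_cache_gb
print(f"~{weights_gb:.1f} GB weights, ~{total_gb:.1f} GB total")
# ~7.3 GB weights, ~8.8 GB total -> fits on a 12 GB card
```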

Running in system RAM works, but inference on a regular CPU is painfully slow, over 10x slower than on the GPU.
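The GPU/CPU split is usually just one knob. Here's a minimal sketch using llama-cpp-python (my choice of runtime, not necessarily what anyone in this thread uses; the model path is a placeholder): `n_gpu_layers` controls how many layers are offloaded to VRAM, and that's the whole speed difference.

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# Model path and prompt are placeholders.
from llama_cpp import Llama

# n_gpu_layers=-1 offloads every layer to VRAM (fast, GPU-bound);
# n_gpu_layers=0 keeps everything in system RAM (works, but much slower).
llm = Llama(
    model_path="models/model-q4_k_m.gguf",  # any quantized GGUF that fits in 12GB
    n_gpu_layers=-1,
    n_ctx=4096,
)

out = llm("Q: Why does local inference need VRAM?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```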
