Fisch (@Fisch@discuss.tchncs.de)

Yeah, I usually take the 6-bit quants; I didn't know the difference was that big. That's probably why, though. Unfortunately, almost all Llama3 models are either 8B or 70B, so there isn't really anything in between. But I find Llama3 models noticeably better than Llama2 models, otherwise I would have tried bigger models at lower quants.
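For anyone weighing the 8B-at-high-quant vs. 70B-at-low-quant trade-off, a rough rule of thumb for weight-only memory is params × bits ÷ 8. This is just a back-of-the-envelope sketch (the helper name is mine, and it ignores KV cache and runtime overhead):

```python
def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only footprint in GB: params * bits / 8.
    Hypothetical helper; ignores KV cache and runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 8B model at a 6-bit quant vs. 70B at a 2-bit quant
print(approx_size_gb(8, 6))   # → 6.0
print(approx_size_gb(70, 2))  # → 17.5
```

So a 2-bit 70B still needs roughly three times the memory of a 6-bit 8B, which is why there's no easy middle ground when the family skips intermediate sizes.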
