hobyte, 2 months ago @jerry @GossiTheDog I'm currently running #llama3 locally and it's pretty fast. I have to wait ~20 sec for the response to start, but after that it gets generated at reading speed.